Testing is one of those practices that, although people are becoming more conscious of it, is still rare to see in the majority of projects.
This is especially true at startups, where the pace is incredibly fast. The team needs to ship feature after feature while dealing with complicated, time-consuming bugs that who knows when anyone will find the time to fix.
However, it wouldn’t be fair to generalize. VTS is one of those dream places where we can sit back and watch all of our production code being tested automatically, without a single click. Testing is taken very seriously and appreciated by everyone here; from the CEO to the product team, tests are golden to all of us.
Our code has coverage throughout the entire stack, which is what really amazes me. Back-end, front-end, mobile apps, you name it: we have it covered to make sure our products are stable, robust and tested before every deployment.
Among the many testing frameworks, services and techniques we use, acceptance testing is the one that still blows my mind.
There’s much more to acceptance testing than I could possibly cover here. But in a nutshell, acceptance testing in agile software development is often defined as a black-box test that asserts that a piece of functionality works. In other words, it’s literally clicking all over the application just as a regular user or a QA engineer would, to make sure the user stories are met.
The big difference is that it’s automated. Once the tests are written, there’s no need to worry about some functionality mysteriously breaking due to a recent change: the tests will fail, which decreases the likelihood of breaking existing features that matter to the user.
I’ve been writing a lot of these lately and the outcome is absolutely rewarding and joyful. Here’s a tiny example of it in action using the Selenium WebDriver:
Fascinating, isn’t it?! It’s actually more fun to write them than it is to watch!
Creating acceptance tests in small projects is usually pretty straightforward, but the challenge comes once the application starts growing. In that case it’s usually a good idea to keep some basic guidelines in mind to prevent future headaches. Here are a few:
- Reliability and repeatability: Tests should give consistent, repeatable results
- Isolation: Tests should work in isolation, neither depending on nor affected by other tests
- Ease of development: Tests should be as easy as possible to write; keeping them small and specific, with a single logical assertion per test, is often a good idea
- Ease of maintenance: Tests should be maintainable. When we change code that breaks tests, we want to spot the failures easily and fix them quickly
Each of these guidelines could be expanded into a much broader topic, but for brevity we’ll stop here for now.
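As a tiny illustration of the isolation guideline, here is a plain-unittest sketch (no browser involved) where each test builds its own fixture instead of sharing mutable state, so the tests pass regardless of the order they run in. The `Cart` class is a stand-in invented for this example.

```python
# Each test gets a fresh Cart from setUp, so no test can leak state
# into another -- the essence of the isolation guideline.
import unittest


class Cart:
    """A toy domain object standing in for real application code."""

    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)


class CartTest(unittest.TestCase):
    def setUp(self):
        # Fresh fixture per test: isolated, repeatable results.
        self.cart = Cart()

    def test_starts_empty(self):
        self.assertEqual(self.cart.items, [])

    def test_add_single_item(self):
        self.cart.add("lease-abstract")
        self.assertEqual(len(self.cart.items), 1)
```

Note that each test also makes a single logical assertion, which keeps failures easy to spot and fix.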
This is a good way to think when writing acceptance tests, as it can save projects, companies and teams some of the extra work required for maintenance and development. Plus, it’s fun!