While working on a PHP-based application with an ever-expanding team of developers, and with an emphasis on not breaking existing functionality while developing many new features, it became clear that our mostly manual approach to testing needed some improvement.
The development team had recently gone through a bit of a transformation, having switched from SVN to Git and having just started using Docker for development instead of individual installations of Apache/MySQL/PHP. Before Docker, a lot of time was lost to installation issues, and over time installations would become ‘customised’ with different PHP libraries and versions. The switch to Docker containers in itself solved many of the problems we saw when deploying to production, as the application was now being developed on a configuration very similar, in terms of software versions, to what exists in production. I can’t remember the last time I heard the phrase “it works on my machine” around the office – which is nice.
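For illustration, a development stack like this can be described in a single docker-compose file. This is a minimal sketch only – the service names, image tags and credentials below are hypothetical, not our actual configuration:

```yaml
version: "2"

services:
  web:
    image: php:7.0-apache        # pick the PHP version production runs
    volumes:
      - ./src:/var/www/html      # mount the application code into the container
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: mysql:5.7             # match the production MySQL version
    environment:
      MYSQL_ROOT_PASSWORD: devpassword   # hypothetical, dev-only credential
      MYSQL_DATABASE: app
```

Because every developer starts from the same images, the ‘customised’ local installs disappear and the versions in development track what production runs.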
Testing had previously involved a lot of manual checks, either by junior testers or by developers who tended to make assumptions, which meant key things were occasionally missed. A nice solution was to start automating tests covering key functions of the system; that way we could define repeatable tests, at very little cost, that could all be run on demand.
What to test
To avoid having a large number of tests that were hard to maintain and delivered questionable value, we identified the most critical parts of the application, as well as areas we knew had a higher risk of breaking. We then spent a decent amount of time making sure we had representative test data, in both volume and content, matching what production has to handle. Finally, we started building our functional tests, which went through our normal spec/dev procedure before code review and merge.
When to test
Developers have access, via a Docker image, to the same test data and can run the same tests locally if they want. In addition, the development branch gets built every night; if the tests pass, the development branch is pushed to the master branch (the nightly build is the only thing that can push to master). If any tests fail overnight, a Slack message goes to the team so that the failure can be investigated. All releases are made from the master branch, and for code to reach master all of the tests have to have passed.
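The nightly gate described above can be sketched as the kind of shell step a Jenkins job would be configured to run. This is a sketch under assumptions – the test command, branch names and Slack webhook URL are placeholders, not our real setup:

```shell
#!/bin/sh
# Nightly build: run the test suite, then either promote or alert.
set -e

git checkout development
git pull origin development

# Run the functional tests inside the app container (placeholder command).
if docker-compose run --rm web vendor/bin/codecept run; then
    # All tests passed: the nightly build is the only path to master.
    git push origin development:master
else
    # Something failed: tell the team on Slack and leave master untouched.
    curl -X POST -H 'Content-Type: application/json' \
         -d '{"text": "Nightly tests failed on development"}' \
         https://hooks.slack.com/services/PLACEHOLDER
    exit 1
fi
```

The important property is that master is never pushed to by hand – it only ever receives a development build that has passed every test.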
Cloud-based vs in-house server
As a proof of concept we did everything in the cloud, as it was very quick to spin up servers, which made it cheap in terms of both developer time and the cost of the cloud server itself. This was great, as it proved how valuable automated testing was going to be; however, there were some things that made us reconsider it as a longer-term solution. They were:
- Scalability – this was the main one: as our tests grew, the CPU power and memory would need to be increased, which could considerably rack up the monthly fee.
- Security – there would be no production data anywhere near testing, but the server would hold our source code, integrations with the repository, etc. We could mitigate many of the security concerns raised, but at a higher cost in time compared with a server inside our own network.
- Control – having control of everything, including physical security, availability and backups, made us feel good.
The new testing server
Having decided to buy a server exclusively for testing, this is what we went with:
HP ProLiant DL380 G7
2x Intel Xeon X5670 six-core 2.93GHz processors (12 cores total)
8x 146GB 10K SAS drives
144GB DDR3 memory
2 x PSU
4 Port Gigabit adapters
All of that cost under £700 with delivery, which is an absolute bargain for the hardware, especially given what it would have cost a couple of years ago. Servers of this type are available on eBay for a similar price.
Once we received the hardware, we installed Ubuntu 16 as the OS and then set up the other software needed. Thankfully that list wasn’t too long (basically Jenkins + Docker), as most things were already set up in containers – quite handy if we ever need to switch servers or decide to move everything back to the cloud.
Jenkins is a very popular automation server; it’s easy to set up and straightforward to use. Jenkins is responsible for kicking off the nightly test builds, running each of the tests, and then either sending failure results to Slack or pushing the development branch to master on success.
Docker is used to run our containerised application, and to use other software such as Selenium without having to go through the pain of installing and maintaining it!
Codeception is the testing framework we chose; it’s pretty easy to learn and works well with everything else, so it generally makes life easy.
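To give a flavour of what these functional tests look like, here is a minimal Codeception-style acceptance test. The actor methods (amOnPage, fillField, click, see) are part of Codeception's standard acceptance-testing API, but the login scenario, page paths and field names below are hypothetical, not taken from our application:

```php
<?php
// LoginCest.php – a hypothetical acceptance test for a login form.
class LoginCest
{
    public function loginWithValidCredentials(AcceptanceTester $I)
    {
        $I->wantTo('log in with valid credentials');
        $I->amOnPage('/login');                       // open the login page
        $I->fillField('email', 'tester@example.com'); // hypothetical field names
        $I->fillField('password', 'secret');
        $I->click('Log in');
        $I->see('Dashboard');                         // assert we landed on the dashboard
    }
}
```

Tests written at this level read almost like the manual test scripts they replaced, which made it easy for the testers to review what was being covered.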
A few months later the testing server is going strong. It has provided huge value: reducing the number of bugs reaching production, letting testers concentrate on higher-value tasks, and making life more enjoyable knowing that key functionality has more protection around it than ever before.