Performance Testing for Agile Projects


Performance testing is an integral part of every software development project. When I think of agile projects, I think about collaboration, time to market, flexibility, and so on. But to me the most important aspect of the agile process is its promise of delivering a “potentially shippable product/application increment”. What this promise means for application owners and stakeholders is that, if desired, the work done in an iteration (sprint) has gone through enough checks and balances (including meeting performance objectives) that the application can be deployed or shipped. Of course, the decision to deploy or ship the application is also driven by many other factors, such as the incremental value added to the application in one sprint, the effect of an update on the company’s operations, and the effect of frequent updates on the application’s customers or end users.

Often application owners fail to get an objective assessment of application performance in the first few sprints, or even until the hardening sprint, just before the application is ready to be deployed or shipped. That is an “Agile Waterfall” approach, where performance and load testing are set aside until the end. What if the architecture or design of the application needs to change to meet the performance objectives? There is also a notion that performance instrumentation, analysis, and improvement are highly specialized tasks, with the result that the people who do them are not available at the start of a project. This happens when the business and stakeholders are not driving the service level measurements (SLMs) for the application.

Application owners and stakeholders should be interested in the performance of the application right from the start. Performance should not be an afterthought. In agile, the application’s backlog contains not only the functional requirements but also the performance expectations for the application. For example, “As a user, I want the application site to be available 99.999% of the time I try to access it, so that I don’t get frustrated and find another application site to use”. Performance is an inherent expectation behind every user story. Another example might be, “As an application owner, I want the application to support as many as 100,000 users at a time without degrading the performance of the application, so that I can make the application available globally to all employees of my company”. These stories set the SLMs, or business-driven requirements, for the application, which in turn define the acceptance criteria and drive the test scripts.
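To make that concrete, here is a minimal sketch, in Python, of how an SLM like the ones above can become an executable check. The endpoint URL, the scaled-down user count, and the latency and error-rate budgets are all hypothetical stand-ins; a real project would more likely use a dedicated load-testing tool such as JMeter or Gatling.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/app"  # hypothetical endpoint; point at the app under test
CONCURRENT_USERS = 100           # scaled-down stand-in for the 100,000-user SLM
P95_BUDGET_SECONDS = 0.5         # assumed latency budget derived from the story
MAX_ERROR_RATE = 0.001           # assumed error budget for this scaled-down sketch

def timed_request(_):
    # Issue one request; return (elapsed seconds, success flag).
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return time.perf_counter() - start, ok

def main():
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(timed_request, range(CONCURRENT_USERS)))
    latencies = sorted(t for t, _ in results)
    p95 = latencies[int(len(latencies) * 0.95) - 1]
    error_rate = sum(1 for _, ok in results if not ok) / len(results)
    print(f"p95={p95:.3f}s error_rate={error_rate:.2%}")
    # Fail with a non-zero exit code so a CI job can gate the sprint's "done" on it.
    assert p95 <= P95_BUDGET_SECONDS, f"p95 {p95:.3f}s exceeds the SLM budget"
    assert error_rate <= MAX_ERROR_RATE, f"error rate {error_rate:.2%} exceeds the SLM budget"

if __name__ == "__main__":
    main()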

It is important that, if a sprint backlog has performance-related user stories (and I’ll bet nearly all of them do), its team includes IT infrastructure and performance testers as committed contributors (“pigs” in Scrum terminology). During release planning (preferably) or sprint planning sessions, these contributors must spend time analyzing what testing must be performed to ensure that these user stories are considered “done” by the end of the sprint. Whether they are procuring additional hardware, modifying the IT infrastructure for load testing, or automating the performance tests, these contributors are active members of the sprint team and participate in the daily scrums. They must keep constant pressure on developers and functional testers to deliver the functionality for performance testing. After all, the success of the sprint is measured by whether every member delivered work that fully met the acceptance criteria, on time.
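As one possible form of that automation, the sketch below (again with a hypothetical URL and an assumed latency budget) shows a performance smoke test written so it can run in the same pytest suite as the team’s functional tests, keeping functionality and performance under test within the same sprint.

import time
import urllib.request

BASE_URL = "https://example.com"  # hypothetical; point at the sprint's test environment

def test_health_endpoint_meets_latency_budget():
    # One per-sprint check that combines a functional and a performance assertion.
    start = time.perf_counter()
    with urllib.request.urlopen(f"{BASE_URL}/health", timeout=5) as resp:
        assert resp.status == 200  # functional acceptance criterion
    elapsed = time.perf_counter() - start
    assert elapsed < 0.3           # assumed single-request latency budget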

To me, performance testing is an integral part of the agile process, and it can save an organization money. The longer you wait to conduct performance tests, the more expensive it becomes to incorporate changes. So don’t just test early and often: test functionality and performance in the same sprint!
