Many software engineers, myself included, are driven to build the best software possible, pouring our creative energy into developing “the perfect” solution. This intention is admirable, and it is this quality that can lead to extremely innovative applications. However, I believe this same perfectionist streak can sometimes cause us to lose sight of the bigger picture: our highest priorities. System optimization is one of those “quagmire” areas where we can easily lose sight of reality, as well as our priorities. We can fall into the habit of believing that every line of code we write must perform at blazing speed or use the absolute minimum of system resources. On occasion, after researching optimization patterns or coding and recoding a particular algorithm for hours or days, I’ve suddenly thought, “What is the problem with my original implementation? Why do I think this code won’t perform well enough? What performance tests did I run, against what performance requirement, that pointed to this code being an issue?”

My job, my responsibility, my goal is to deliver value, and value can only be derived from a real need. When I allow myself to go off on “optimization tangents” (or any effort that could be challenged as “over-engineering”, for that matter), I lose valuable development cycles, cycles that could have been used to deliver something that actually provides a tangible benefit. The tendency toward over-engineering is natural; I’ll guarantee that we all do it. So we must accept it, become aware of it, and not beat ourselves up when we catch ourselves doing it. One way to maintain this awareness is to form a habit of frequently asking ourselves: “What is the real value in the task I’m doing at this moment? Is it real value, or is it imagined?”
Let’s focus specifically on software optimization. What are some of the consequences of taking the “optimize it, optimize it all!” approach?
- In the time I spent optimizing code (that may already be quite fast enough), I might have implemented and deployed a new feature that could really help the end user.
- Optimization very often results in code that is more difficult to understand and maintain.
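To make that maintainability cost concrete, here is a small hypothetical Java illustration (not from the original article): a bit-twiddling absolute value versus the plain version. Both are correct, but only one explains itself, and modern JITs generally make the “clever” one a needless trade.

```java
public class ReadabilityCost {

    // "Optimized" version: branchless absolute value via bit twiddling.
    // Correct, but the intent is invisible without a comment.
    static int absFast(int x) {
        int mask = x >> 31;        // all 1-bits if x is negative, else all 0-bits
        return (x + mask) ^ mask;  // negates x when mask is all 1-bits
    }

    // Plain version: says exactly what it means.
    static int absPlain(int x) {
        return x < 0 ? -x : x;
    }

    public static void main(String[] args) {
        System.out.println(absFast(-7) + " " + absPlain(-7)); // prints: 7 7
    }
}
```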
How can we approach optimization in a way that minimizes the risk of wasting time and energy on parts of our system that may never be an issue? Here are a few practices I’d recommend:
- Make clean design your first priority – after correct system functionality, of course.
- Take a step back periodically during development to consider possible performance “hot spots”.
- Implement performance and scalability tests for those hot spots, and automate them. Make the tests as realistic as possible: implement a mix of use cases and run parallel threads that mimic anticipated runtime behavior (yes, you cannot be 100% certain you’ll get the mix right, but 90% of the time you’ll be close enough to head off most problems). Sorry, but deciding what to test and how to design performance tests is more a judgment call than a science. Tools such as JMeter are well suited for this, and most of them even include “record and playback” features to ease test development and regression-test execution.
- Evaluate performance objectively, based on test results. If the use case or non-functional requirements don’t indicate a need for a higher level of performance, spend your time elsewhere.
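The recommendations above can be sketched in code. What follows is a minimal, hypothetical Java load driver, not a substitute for a tool like JMeter: it runs a stand-in “use case” on parallel threads and reports a latency percentile that can be compared objectively against an explicit requirement. The thread count, iteration count, and `doWork()` body are all placeholders.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LoadTestSketch {

    // Hypothetical stand-in for one user action; in a real test this would be
    // an HTTP request, a service call, etc.
    static void doWork() throws InterruptedException {
        Thread.sleep(5); // simulate ~5 ms of work
    }

    public static void main(String[] args) throws Exception {
        int threads = 8;             // anticipated concurrent users (assumption)
        int requestsPerThread = 20;  // samples per simulated user (assumption)
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Long> latenciesMs = Collections.synchronizedList(new ArrayList<>());

        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerThread; i++) {
                    long start = System.nanoTime();
                    try {
                        doWork();
                    } catch (InterruptedException e) {
                        return;
                    }
                    latenciesMs.add((System.nanoTime() - start) / 1_000_000);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        // Report the 95th-percentile latency; compare it against the success
        // criterion from your non-functional requirements.
        Collections.sort(latenciesMs);
        long p95 = latenciesMs.get((int) (latenciesMs.size() * 0.95) - 1);
        System.out.println("samples=" + latenciesMs.size() + " p95ms=" + p95);
    }
}
```

The point of the sketch is the shape of the test (mixed actions, parallel threads, an objective pass/fail number), not the specific numbers.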
For any hot spot or set of functionality you decide requires performance testing, here are the steps I typically follow:
1. Determine the explicit mix of use cases or user/system actions to be performance tested, along with the success criteria for each. It’s all about 1) identifying use cases, or mixes of use case scenarios under load, whose performance would be unacceptable, 2) narrowing my focus as quickly as possible, and 3) prioritizing what I’ll spend time and effort on. “If it ain’t broke, don’t fix it” has serious applicability here.
2. Design the performance tests, preferably leveraging any “record/playback” features provided by the test tool to speed up this process.
3. Execute the performance tests and evaluate the results. Now I’ve identified the use case(s) that represent real performance issues.
4. Outside of the performance tests, execute any use cases that did not meet the success criteria defined in step 1, monitoring execution with a profiling tool. The idea is to identify threads or resources that may be causing bottlenecks. Note that I am not running the system under load at this stage; I’m looking for the methods or sections of code that consume the most resources. If you’re developing Java applications, VisualVM is a great tool for this and has been bundled with the JDK since Java 1.6, update 7. There are profiling tools on the market for most programming languages, and in my experience they are highly under-utilized.
5. Based on what the profiler uncovers, I usually find it helpful to step through the code with a debugger to get a more detailed view of exactly what’s going on. My own experience tells me that the debugger is another tool in the developer’s toolbox that isn’t leveraged nearly enough; I still encounter otherwise very solid developers who don’t even know how to configure and use theirs. The debugger really is your friend!
6. The combination of steps 4 and 5 will usually point me to a section of code or a data structure that’s causing a bottleneck or consuming significantly more resources than the rest; these are my targets. Review this code, refactor it, and repeat step 3. If the tests still fail to meet my performance criteria, move through steps 4-6 again.
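As a hypothetical example of that final refactoring step, here is the kind of target profiling often surfaces in Java: repeated `String` concatenation in a loop (quadratic copying) replaced with a `StringBuilder`, with a quick measurement so the improvement is judged objectively rather than assumed. The iteration count is arbitrary.

```java
public class HotSpotRefactor {

    // Before: each += copies the whole string so far, so this is O(n^2).
    static String concatSlow(int n) {
        String s = "";
        for (int i = 0; i < n; i++) s += i + ",";
        return s;
    }

    // After: StringBuilder appends are amortized O(1), so this is O(n).
    static String concatFast(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.append(i).append(',');
        return sb.toString();
    }

    public static void main(String[] args) {
        int n = 20_000; // arbitrary size, large enough to show the difference
        long t0 = System.nanoTime();
        String a = concatSlow(n);
        long slowMs = (System.nanoTime() - t0) / 1_000_000;

        t0 = System.nanoTime();
        String b = concatFast(n);
        long fastMs = (System.nanoTime() - t0) / 1_000_000;

        // Verify the refactor preserved behavior, then compare timings.
        System.out.println("equal=" + a.equals(b)
                + " slowMs=" + slowMs + " fastMs=" + fastMs);
    }
}
```

Note the equality check: a refactor only counts if the tests still pass, which is why the process loops back to re-running the performance tests rather than trusting the change on sight.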
There are many approaches to performance testing. The above is simply intended to get you thinking about the process if you haven’t seriously considered this or don’t have a framework for approaching performance testing on your projects. Whatever you do, if performance or scalability is critical to the success of your project, address this early in the project, while also keeping in mind that not every aspect of any system requires optimization. Prove to yourself and your team where the system is not meeting performance needs and then systematically move through the process of identifying and remedying these issues.