This Simple Test Could Save Your Company

I didn’t really have much to talk about this week until I was reminded of a recent engagement. Thanks, editor!

The engagement wasn’t a complicated task. I was simply asked to look over the existing infrastructure and development process, then recommend what needed to be adopted or changed to implement Continuous Integration / Deployment. That can be a little daunting when you’re not familiar with the product capabilities, or don’t have the experience of examining and working with different development processes. Personally, I’m a Git Feature Branch and/or Git Flow, Jenkins, Red Hat/JBoss guy, to which I’d add Pivotal, because they do make some lovely best-in-class products for what they do. However, that’s not what I want to get into today, because the client already had the latest architecture: cloud-based IaaS from one of the big three (Azure, GCP, or AWS), and a Kubernetes-based PaaS. They only needed two things: an orchestration engine (OE) to automate everything with some modest intelligence, and enough testing so the OE can know what is going on.

Which is what I want to rant about this week: testing. We all need to do more testing. We need to write more and better tests. We need to generate reports from those tests, and, perhaps most radical of all, we need to actually evaluate those reports. This needs to be done now. It’s the one commonly recurring error I find throughout the industry. Sometimes a shop just needs to update its administration technology: install an IaaS, install a PaaS, and then insert an OE to automate everything. That part is all relatively easy. There are plenty of OSS products out there, and plenty of community and vendor support to help you plug it all together. You’ve added a layer of automation intelligence on top of everything else, pretty much adopting Testing as a Service, if I may try to coin another term. Then you realize that you have no data on which this new brain can make decisions. After you’ve migrated your build and deploy process to your new OE, you’ll likely realize that you have no reports to signal whether any individual step was successful, which is what tells your OE to continue to the next step.
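That signal doesn’t have to be elaborate. Here’s a minimal sketch of a pipeline gate, assuming your test runner writes JUnit-style XML reports (the `<testsuite failures=".." errors="..">` format that Maven Surefire produces); the report path and class name are hypothetical. Run between stages, a step like this gives the OE a simple pass/fail it can act on:

```java
// Minimal sketch: read a JUnit-style XML report and exit non-zero on failure,
// so the orchestration engine knows whether to proceed. The report path is
// hypothetical; point it at wherever your build actually writes results.
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

public class GateOnTestReport {
    public static void main(String[] args) throws Exception {
        File report = new File("target/surefire-reports/TEST-results.xml"); // hypothetical path
        Element suite = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(report)
                .getDocumentElement();

        int failures = Integer.parseInt(suite.getAttribute("failures"));
        int errors   = Integer.parseInt(suite.getAttribute("errors"));

        if (failures > 0 || errors > 0) {
            System.err.println("Tests failed: " + failures + " failures, " + errors + " errors");
            System.exit(1); // non-zero exit code tells the OE to stop the pipeline here
        }
        System.out.println("All tests passed; safe to continue to the next step.");
    }
}
```

Most OEs already understand exit codes out of the box, so this is often the cheapest possible integration point between your tests and your automation.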

This gap comes from generally questionable coding practices across the industry; we seem to struggle with developing custom software. Look at what you get from Red Hat and Pivotal: nicely instrumented products, a full array of remote command-and-control interfaces, and meaningful log output making full use of all available log levels, backed by full logging implementations so we can easily define our preferred output method. You can validate the code by reviewing the publicly available tests and confirming they pass, often with meaningful test output. Now, we don’t necessarily have to go that far with our custom software, but we at least need to implement proper testing with output that can be programmatically interpreted.
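To make “full use of all available log levels” concrete, here’s a minimal sketch using SLF4J, where the actual output method stays pluggable behind the logging facade. The DeploymentStep class and its deploy logic are hypothetical stand-ins:

```java
// Minimal sketch of level-appropriate logging behind the SLF4J facade.
// The concrete backend (console, file, syslog, ...) is chosen at deploy
// time, not in the code. DeploymentStep and deploy() are hypothetical.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DeploymentStep {
    private static final Logger log = LoggerFactory.getLogger(DeploymentStep.class);

    public void run(String artifact, String environment) {
        log.trace("Entering run() for artifact={}", artifact);     // fine-grained control flow
        log.debug("Resolved target environment: {}", environment); // diagnostic detail
        log.info("Deploying {} to {}", artifact, environment);     // normal milestones
        try {
            deploy(artifact, environment);
        } catch (RuntimeException e) {
            log.error("Deployment of {} to {} failed", artifact, environment, e); // failure + stack trace
            throw e;
        }
        log.warn("Deployment of {} used a fallback configuration", artifact); // recoverable oddities
    }

    private void deploy(String artifact, String environment) {
        // hypothetical deployment logic
    }
}
```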

Really, at a minimum, we need enough instrumentation to identify conditions at whatever granularity is needed. That can be as little as simple unit and end-to-end functional testing, which is what I consider table stakes for the industry. Your code needs to ship with at least enough test code to produce a non-zero exit code on failure, so the automation can know whether your code works. Bare minimum. I don’t believe I can overemphasize this point. Take the time to do it now, when you first commit code. It’ll be a simple addition now; later, you’ll have to devote whole sprints just to writing tests to catch up.
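Here’s roughly what that bare minimum looks like as a JUnit 5 sketch; PriceCalculator and applyDiscount are hypothetical stand-ins for your own code. The point is that when the assertion fails, `mvn test` (or `gradle test`) exits non-zero, and that exit code is all the OE needs:

```java
// Minimal sketch: one unit test that fails the build on a wrong answer.
// PriceCalculator is a hypothetical class under test, inlined here so the
// example is self-contained.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class PriceCalculator {
    double applyDiscount(double price, double rate) {
        return price * (1 - rate);
    }
}

class PriceCalculatorTest {
    @Test
    void discountIsAppliedToListPrice() {
        PriceCalculator calc = new PriceCalculator();
        // If this fails, the test runner reports it and the build exits
        // non-zero, so the automation knows not to continue the pipeline.
        assertEquals(90.0, calc.applyDiscount(100.0, 0.10), 0.001);
    }
}
```

Written alongside the first commit, a test like this is a five-minute addition; retrofitted a year later, it’s a sprint.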

About the Author
