Tech Series: Values of Software
The primary goal of software is to deliver business value. However, delivering consistent business value isn’t an easy task. For software companies in particular, IT services are often their most valuable asset.
Welcome to the first Life at LearnUpon Tech Series blog post. My name is Uros Certic, Principal Developer at LearnUpon, and I’m excited to explore the process of developing software that addresses business and customer objectives.
To begin, let’s talk about how software adds value from a business standpoint. While there are many benefits of software, I’d like to highlight three of the most important ones:
Features

This is perhaps the most obvious one. Features are the way we provide value to our customers, as they are the primary source of business value and profit. It’s important to note that “the business” and the developers want the same thing: to deliver a more predictable stream of important features that will make the end user happy.
Good design

As developers, we want to deliver a more predictable stream of important features, but poor design can make achieving this harder for us. So, why is software design important from a business perspective? What do we get when the design is good? This value is easier to explain if we answer the opposite question: “What do we get when the design is not good?”
Here’s an example of the feature development lifecycle when design is overlooked.
We get our requirements, focus on them, write some tests to cover them, release the feature, fix two or three bugs in production, and everyone is happy, right? Well, sort of. Some time passes, and you get new requests from customers: “We just need to slightly adapt that feature to cover this requirement.” OK, it’s not that bad. Yes, we have a couple of new people who didn’t work on that codebase, but I’m sure they will understand what we did there. After all, it’s just a few more lines to add. The new people start to work on the code. They spend a considerable amount of time understanding what’s happening there, add a few if/else conditions, and release the new code.
Usually, what follows is a few more bugs in the new requirements, and more bugs in the old ones too. We had tests for both new and old requirements, but unfortunately, they didn’t cover how the old code interacted with the new. So we add a couple of if/else conditions, fix those bugs, everyone is happy, and all is well again… sort of.
After some time, the client comes back with new requirements. What’s different now is that pretty much all of the people who worked on the original codebase are not working on it anymore (either they have left the company or changed roles). New developers release new code, but what happens now is that not only do we have bugs from new requirements, but we have bugs from previous upgrades and the original code. And when we finally fix them, we get new requirements again. I think you know where this is going.
Following this form of development, the cost of adding new features can be displayed as an exponential curve through this graph:
From the example, you can see that every addition to our codebase produces more costs in the future, such as bugs and time spent understanding and maintaining the codebase. Although coding our solution did not take too much time, can we say that it was worth it when we take into account the other issues that followed? This is where the business can suffer. Time wasted on solving bugs and understanding code is value wasted in creating new features.
If development continues in the same way, it is very likely to hit a brick wall. The cost of maintenance will surpass the cost of starting over, and if lessons are not learned, it’s going to end up progressing through this never-ending cycle. So how is good design going to fix this? First of all, we need to define what good design is. If you look at design as a form of organizing your code, good design will make it easier to know where to change something, and to identify whether you did it correctly.
The problem with good design is that it doesn’t come without a cost. To practice good design, you need to get used to refactoring. If we look at the example previously mentioned, once the new requirements came in for the first time, if we had refactored the code using context from both the old and the new code, we would probably have been saved from 80% of the resulting bugs. Refactoring helps us have more certainty about the cost of the next feature.
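To make that refactoring step concrete, here is a small hypothetical sketch (the pricing rules are invented for illustration, not LearnUpon code): the same per-plan logic, first as accumulated if/else branches, then refactored so the next requirement lands in one obvious place.

```python
# Hypothetical pricing logic, as it looks after a few rounds of
# "just add one more condition".
def price_before(plan, users):
    if plan == "basic":
        if users > 100:
            return users * 4
        return users * 5
    else:
        if plan == "pro":
            if users > 100:
                return users * 8
            return users * 10
        return users * 20  # everything else is treated as "enterprise"

# After refactoring: the per-plan rules live in one table, so adding a
# plan (or changing a rate) is a one-line change in an obvious place.
RATES = {
    "basic": (5, 4),        # (standard rate, volume rate)
    "pro": (10, 8),
    "enterprise": (20, 20),
}

def price_after(plan, users, volume_threshold=100):
    standard, volume = RATES[plan]
    rate = volume if users > volume_threshold else standard
    return users * rate
```

The behavior is identical; what changed is how easy it is to see where the next change goes, and to verify it with the existing tests.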
One thing to note is that there is no silver bullet when talking about design. Your design can, in one case, make your development easier, and in the next one, make it a nightmare. You need to evolve your design through constant refactoring.
A graph that represents the cost of adding new features following development with good design (as opposed to the one without) looks like this:
It’s worth noting that this is also an exponential curve. There is no way to consistently add new features while keeping the cost the same. The difference here is that the base of this exponential curve is way lower. Now let’s look at another graph:
T is some moment in time where the cost of “doing things well” and the cost of “doing them quickly” become the same. Past the point T, good design becomes profit. The problem is that nobody knows where that point in time is. Knowing that, there is actually one case where it’s bad to force good design: when you know that your project is going to be “dead” before the T point. If you think that your project will be dropped after a few weeks, why bother making it maintainable? The only problem is, how many projects do you know that really were dropped that quickly? And even if you do know some, there is still the risk of gambling against an unknown T point.
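The two curves and the point T can be sketched with a toy cost model (all numbers here are invented for illustration; only the shapes of the curves matter):

```python
# Toy model: the cost of the n-th feature grows exponentially in both
# styles of development, but "doing things well" starts more expensive
# (base_cost) and grows more slowly (growth). The parameters are invented.
def cumulative_cost(base_cost, growth, features):
    return sum(base_cost * growth ** n for n in range(features))

def break_even(quick=(1.0, 1.30), good=(2.0, 1.05), horizon=100):
    # The first feature count where good design is cheaper overall: this is T.
    for n in range(1, horizon):
        if cumulative_cost(*good, n) < cumulative_cost(*quick, n):
            return n
    return None
```

With these made-up parameters the curves cross after a handful of features. In a real project, nobody knows the parameters, which is exactly the gambling problem described above.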
To summarize, by practicing good design, we are protecting our capacity to deliver features over time.
Fast feedback

To realize the importance of feedback, we need to look at how we do the things we do. Firstly, let’s talk about the Waterfall methodology for software development.
This model was established in the 1970s, and after a long period of usage, people discovered that it’s not very effective. The Waterfall method only works for the most straightforward projects: those where you fully understand almost everything (technology, domain, requirements, problems, and so on), and where nothing is likely to change during development. So, in today’s world, how realistic is it to work on these types of projects? The main reason why the Waterfall method is not effective is that it has a long feedback loop. But what does that mean exactly?
To answer that, we first need to understand why people started using the Agile method in the first place. If you look at the sub-processes of Agile, you can see that it’s essentially the same as the Waterfall method. What’s the difference then? It’s the shorter length of the iteration. But if Waterfall only works in “perfect” projects and Agile (which is the same as Waterfall, only with a different length of iteration) is used in projects where uncertainty is high, how is Agile a more successful method?
It turns out that the shorter length of the iteration forces a behavior change that, if done correctly, creates positive results in even the most unpredictable projects. So what are the changes that need to happen? This is where we need to understand the importance of feedback. Let’s look at an example:
Let’s say we have a large system where two sub-processes, A and B, are interacting with each other. A is a process that does some things, and B is a process that returns feedback (good or bad) to A about the work that is done.
While doing process B, we realize that some amount of process A will need to be done again, but better. The problem here is that it’s hard to predict what that amount is going to be. Also, we don’t know how many times we’re going to have to do it (go through a loop).
There is a solution that can be implemented to improve or potentially solve the problem. By switching the places of A and B and shortening the cycle time of the loop, you can, on average, significantly improve the output of that part of the system.
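A rough way to see why shortening the loop helps is a toy rework model (the defect rate and the cost rule are assumptions for illustration, not measurements): each unit of work introduces defects, and a defect costs more to fix the longer it sits before feedback arrives.

```python
def total_rework(total_work=100, batch_size=100, defect_rate=0.1):
    """Cost of fixing defects when feedback (process B) arrives once per batch."""
    cost = 0.0
    for batch_start in range(0, total_work, batch_size):
        batch_end = min(batch_start + batch_size, total_work)
        for unit in range(batch_start, batch_end):
            # A defect introduced at `unit` is only discovered at `batch_end`,
            # and the fix costs more the longer the defect has been sitting there.
            age = batch_end - unit
            cost += defect_rate * age
    return cost

# Same total work, same defect rate: only the feedback cycle length changes.
one_big_batch = total_rework(batch_size=100)  # one long loop
short_cycles = total_rework(batch_size=10)    # ten short loops
```

In this model, one big batch costs roughly ten times more rework than short cycles, purely because mistakes are caught while they are still cheap to fix.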
So how can we use this? Let’s get back to the behaviors that we need to improve and take a subset process of Waterfall:
How many times has it happened to you that your code gets returned from testing because you made some mistakes in the code? The problem here is that you can’t predict how much of your code is broken, and because of that, you don’t know how many times you’ll have to return to your codebase (cycle through the loop).
If we say that the result of this part of the system is working code (code that passes all the tests that we can think of), we can improve it by using this trick: write little tests first and little code afterward in short cycles. In other words, we can improve it by using test-first development.
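In code, the “little tests first, little code afterward” rhythm looks something like this (the discount rule here is invented for the example):

```python
# Step 1: the test is written first, and states what "working" means
# before any production code exists. (Hypothetical discount rule.)
def test_discount_caps_at_50_percent():
    assert discount(years_subscribed=1) == 5
    assert discount(years_subscribed=20) == 50  # capped, never 100

# Step 2: write just enough code to make the test pass.
def discount(years_subscribed):
    return min(years_subscribed * 5, 50)

# Step 3: run the test immediately. That is the short cycle.
test_discount_caps_at_50_percent()
```

The point is not the size of the example but the order of the steps: the feedback (the test) is in place before the work it judges.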
And now we have something like this:
Let’s continue down this path and look at the next subset process:
How many times has it happened to you that at the start of the project, you get a full design of how your code should look and be organized, but then somewhere in the middle of development, you see that all the parts don’t fit together and you need to change it? The problem here is similar to the previous one. We don’t know how much of the design we’ll need to redo and how many times we’ll have to do it during the project.
We’ve already covered how “good design will make it easier to know where to change something and identify if you did it correctly”. Based on this statement, we can conclude that the output or objective of this part of the system is working code that you feel confident changing. So how can we improve the output? We can first write some tests, then write some code, and then do some refactoring – we can use test-driven development.
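One red-green-refactor cycle, sketched on an invented example: the test stays fixed while the implementation is reshaped, which is what makes the code safe to change.

```python
def test_initials():
    assert initials("Ada Lovelace") == "AL"
    assert initials("grace brewster murray hopper") == "GBMH"

# Green: the first version that passes. Correct, but clumsy.
def initials(name):
    result = ""
    for word in name.split(" "):
        result = result + word[0].upper()
    return result

test_initials()

# Refactor: same behavior, clearer shape. Rerunning the unchanged
# test proves the refactoring broke nothing.
def initials(name):
    return "".join(word[0].upper() for word in name.split())

test_initials()
```

The second `test_initials()` call is the confidence mentioned above: you changed the design, and you immediately know you did it correctly.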
Now we’re left with this:
What happens when you spend time creating the perfect code and, after you have done it and shared it, you get the response, “that’s not what I expected”? You’d probably feel like you’ve wasted time on producing code that isn’t profitable. So, if we say that the output of this part of the system is well-designed working code that is potentially profitable, we can improve it by writing a few tests, doing a little coding, a little refactoring, and then a little analyzing and interacting with customers, all executed in short cycles of behavior-driven development.
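The difference from plain test-driven development is mostly in the conversation: the test is phrased in the customer’s language. A small sketch (the signup behavior here is invented):

```python
# Hypothetical behavior agreed with the customer before writing the code.
def signup(email, existing_emails):
    if email in existing_emails:
        return "already registered"
    return "welcome email sent"

def test_new_visitor_gets_a_welcome_email():
    # Given a visitor whose email is not registered yet
    existing = {"alice@example.com"}
    # When they sign up
    result = signup("bob@example.com", existing)
    # Then they receive a welcome email
    assert result == "welcome email sent"

test_new_visitor_gets_a_welcome_email()
```

Because the test reads as a sentence, showing it to a customer early is cheap, and “that’s not what I expected” arrives before the code is built, not after.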
And what’s left is:
Have you ever been in a situation where you have built the perfect code, but you can’t deploy it immediately due to other dependencies? We can now say that the result is well-designed, functional, and profitable code. And we can improve it by applying the same tricks, switching places and shortening the cycles, and what we’re left with is continuous deployment (CD).
So, now we get the answer to the question, “Why is Agile different from Waterfall, and why do shorter cycles work?” Faster feedback can help you catch your mistakes sooner, reducing the amount of time needed to generate final value.
But, as previously mentioned, to fully utilize the benefits of Agile, you need to be ready to adapt your behavior. After all, what’s the purpose of writing code in short cycles if it has a considerable number of bugs and is a nightmare to change?
It’s worth mentioning that, similar to design, there is an initial cost to learning and adjusting behavior. But over time, it’s going to become routine, and just like with design, there is a point where the cost of learning and adapting is gone and only profit remains.
- With features, we generate profit.
- With good design, we lower the cost of producing new features in the long run.
- With fast feedback, we identify our mistakes sooner and spend less time creating features.
Parts of this blog post are based on a conference talk by J.B. Rainsberger on The Economics of Software Design. When you have time, I suggest you watch it with a cup of coffee or tea, and make sure to check out the other talks from the DevTernity channel.