Sunday, August 14, 2022

The Ideal Software Law

In science, we make abstractions that are simplified models of reality, then we try to describe them with equations that let us make accurate predictions given the conditions assumed by the model. In this post I attempt to do that for software projects.

The Ideal Gas Law

In physics, the behavior of an idealized gas is described by the ideal gas law: PV=nRT, where P is pressure, V is volume, n is the quantity of gas, R is a constant, and T is the absolute temperature. While real gases don't follow this law exactly, it can be used to make pretty good predictions. It can help you understand how steam engines, refrigerators, and hot air balloons work.

A key insight that follows from this equation is that you can't hold three of the four parameters fixed and change just one parameter. If you have a fixed amount of gas at a given pressure, volume, and temperature, and you increase the temperature, then either the pressure goes up, the volume goes up, or both. If, with the same starting conditions, you decrease the volume, then either the pressure must go up, or the temperature must go down, or both. You can keep any two parameters fixed and change the other two in fixed relationships, but you simply can't hold three of the parameters fixed and change just one. If you try to do that, you will invariably fail: one or more of the other parameters will, perforce, also change.

The Ideal Software Law

We can use a similar equation to convey the relationships among the parameters of software development. Instead of PV=nRT, we have:
FQ=nST
where F is functionality, Q is quality, n is development resources, S is a constant, and T is the amount of time to complete development. As with the ideal gas law, this equation does not precisely apply to real software projects, but it can be used to make predictions and gain insights. In particular, we can see in this formulation the same basic insight as with the ideal gas law: it is not possible to hold all but one of the parameters fixed and change only one parameter. If you try to do so, one or more of the other parameters will, perforce, also change.

The Parameters

Let's take a look at what the parameters in our equation mean and how we might measure them.

Functionality (F)

Functionality represents what our software can do. There are defined ways to measure the functional size of software, such as COSMIC function points, but we would like something simpler that still allows us to understand the relationships between the parameters of the equation. For our purposes, a reasonable proxy for functionality is lines of code (LoC).

We are not claiming that lines of code is a good general metric for measuring productivity. Some people write denser code than others, so they can implement more functionality in the same number of lines. Some research has concluded that people write about the same number of lines of code per day regardless of language, but a higher-level language expresses more per line, so it can be used to implement more functionality in the same number of lines as a lower-level language. Some projects have a more difficult environment than others, so developers produce fewer lines of code per day in those environments.

However, we are using LoC slightly differently in this case. We are not using it to compare productivity or functionality between projects and teams, but only within the team and project for which we are measuring functionality. We assume that all of the factors mentioned above that affect the LoC metric are constant within the project and time span of interest, so that twice as many lines of code will provide twice as much functionality.

Quality (Q)

For quality, we could use a sophisticated quality model such as ISO/IEC 25010, but for this exercise we will use the simpler Defect Management approach.

Intuitively, it makes sense that higher quality software will have fewer bugs (also called defects). We also expect a larger project to have more total bugs than a smaller project. Roughly speaking, then, we can think of the number of bugs per line of code as being a proxy for the level of quality of a software project. We can call this the bug density (or defect density). We want our parameter to be larger for higher quality software, so we use the reciprocal of the bug density. The reciprocal of density for materials is called specific volume, so we will call this measure bug specific volume (or defect specific volume), and use that as our measure of quality. Our units for quality are thus LoC/bug.

We recognize that there are some practical problems with this measure. Firstly, bugs come in different sizes. For our purpose we will assume some kind of "normalized" bug units, and assign more serious bugs more than one bug unit. Secondly, we don't know how many bugs are in a piece of software until well after it is delivered. We assume those bugs exist and will be revealed over time, at a rate which depends on factors such as how much use the software gets, so although we don't know the number in advance, we can still use this concept in our abstraction to understand the relation of quality to the other parameters.
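As a quick Python sketch of this measure (the function name and numbers are illustrative, not from any study):

```python
def quality(loc: int, bugs: int) -> float:
    """Bug specific volume: lines of code per (normalized) bug.
    Higher means better quality."""
    return loc / bugs

# A hypothetical 10,000-line project with 50 normalized bugs:
print(quality(10_000, 50))  # 200.0 LoC/bug
```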

Resources (n)

Resources, as in Human Resources, refers to the people we have available to work on the project. To a first approximation, n is the number of people developing the project. Many studies have shown that different people have different levels of productivity. For this idealization we assume that there is a baseline developer and that we know the productivity multiplier for each of our developers relative to that baseline, even though in practice this might be difficult to determine and the factor could vary with circumstances. We then define n as the number of baseline developers on the project. If we have a developer who we believe is three times as productive as our baseline, that developer adds three to n. Our units for n are thus baseline developers, but for simplicity, we will sometimes just refer to the units for n as people.

Our idealized equation assumes that we could do our project in half the time if we had twice the resources. We recognize that we are blatantly ignoring the problems of the mythical man-month.
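A small sketch of how n might be computed from assumed per-developer multipliers (the team composition below is made up for illustration):

```python
def effective_resources(multipliers: list[float]) -> float:
    """n: sum of productivity multipliers relative to a baseline developer."""
    return sum(multipliers)

# Four baseline developers plus one developer we judge to be 3x baseline:
team = [1.0, 1.0, 1.0, 1.0, 3.0]
print(effective_resources(team))  # 7.0 baseline developers
```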

Time (T)

Time refers to how much time it will take to complete the project. This is the most straightforward dimension to measure, and because of that it is often the dimension that gets the most attention during project planning. We choose to use days as our units, as that is a commonly used unit for other aspects of software development.

The Software Constant

The units we have selected for the four parameters define the units of the constant S.

F (LoC) × Q (LoC/bug) = n (person) × S (??) × T (days)

Therefore the units for S must be (LoC^2)/(bug*person*days). We can also write this as (LoC/bug)*(LoC/person/day). LoC/bug is a bug specific volume (our quality measure), and LoC/person/day is a development velocity for our baseline developer, so S is the product of a bug specific volume and a per-person development velocity. We can think of S as the "quality velocity" for one baseline developer. A higher value of S means higher productivity: more functionality or quality from a given amount of time, per developer.

So what value should we use for S? Some people (such as Brooks in The Mythical Man-Month) say a programmer can write about 10 lines of production code per day. Other sources use different numbers, but as a baseline we will go with Brooks's value of 10 LoC/person/day.

For bug density, various studies have come up with numbers ranging from 3 to 50 defects per 1000 LoC. As a starting point, I will select 10 bugs per 1000 LoC, or a bug specific volume of 100 LoC/bug. Combining these two values gives 10 * 100 = 1000 as the value of S. This means our baseline developer could, for example, write 10 lines of code with 10 bugs per 1000 LoC in one day, or 20 lines of code with 20 bugs per 1000 LoC.
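Here is that arithmetic written out, using Brooks's velocity and my chosen bug density as stated above:

```python
velocity = 10                      # LoC per person per day (Brooks's figure)
bug_density = 10 / 1000            # assumed: 10 bugs per 1000 LoC
specific_volume = 1 / bug_density  # 100 LoC per bug
S = velocity * specific_volume     # quality velocity of one baseline developer
print(S)  # 1000.0 LoC^2/(bug*person*day)
```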

In reality, different collections of people, different development environments, and different project attributes will all lead to different values of S. Organizations should always be looking for ways to increase the value of S for their projects, but for this analysis I assume they have already done so in all the easy ways, and that the remaining opportunities to increase S require larger investments and more time to take effect. Thus when analyzing our equation to see what predictions it makes for a particular project, we will assume S is constant.

The form of the equation

The Ideal Gas Law was created by assembling a number of simpler laws that were derived from empirical observations. Each of these simpler laws demonstrated the relationship between two parameters when the other two were held constant.
Our Ideal Software Law is similarly assembled from simpler guidelines. We don't have previously stated laws, so we rely on our intuition to guide us.
  • All other things being equal, functionality is proportional to resources: F ∝ n
  • All other things being equal, functionality is proportional to time: F ∝ T
  • All other things being equal, quality will be higher with more resources
  • All other things being equal, quality will be higher with more time
Because quality is hard to define and measure, we don't actually know how close to being proportional to the other variables it is. For simplicity, we assume that it is proportional to both resources and time, the same as functionality: Q ∝ n and Q ∝ T.

These four rules, when assembled, give us the form of the equation for the Ideal Software Law shown above.

Example

Let's make a concrete example. Let's assume we have a project with the following parameters:
  • The functionality we desire requires 10,000 lines of code
  • Our quality bar is 5 bugs per 1000 lines of code (better than baseline), so 200 LoC/bug
  • We have 10 people on our team, all operating at baseline
  • Our team software constant S is 1000, as calculated above.
How many days should we expect this project to take to complete? From the Ideal Software Law, we have:

10,000 (LoC) * 200 (LoC/bug) = 10 (people) * 1000 (LoC^2/(bug*people*days)) * d (days)

Solving for d, we get d = (10,000 * 200) / (10 * 1000) = 200 days. A project team, working from assumptions like these (although perhaps not stated so explicitly), might deliver this estimate to management when asked how long the project will take.
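For readers who want to experiment, the same calculation as a small Python sketch (the helper name is mine, not standard):

```python
def days_to_complete(F: float, Q: float, n: float, S: float) -> float:
    """Solve F*Q = n*S*T for T (days)."""
    return (F * Q) / (n * S)

# The example project: 10,000 LoC, 200 LoC/bug, 10 people, S = 1000:
print(days_to_complete(F=10_000, Q=200, n=10, S=1000))  # 200.0 days
```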

Analysis

Now let's play with the parameters and see what happens.

The typical scenario is that management comes back to the team and says "That estimate is too long. We need to deliver sooner. Make it happen faster." What options does the team have?

Looking at the Ideal Software Law equation, if we want to make T smaller, we have four options:
  • Make F smaller (less functionality)
  • Make Q smaller (less quality)
  • Make n larger (more developers)
  • Make S larger (higher velocity)
Clearly making S larger would be good, but, as mentioned above, when considering the schedule for a single project, this is unlikely to be a short-term option. That leaves us with three other parameters that can be changed.

We could make n larger by adding more developers to the team. This can be effective if there are people available, but practically speaking is difficult because of limited budgets, the difficulty of finding appropriate developers, and the time-cost of bringing a new team member up to speed. All of those factors make this choice possible but unlikely.

Now we are down to two parameters: functionality and quality. The developer team will typically propose to make F smaller, also called a reduction in scope, by removing features from the project. If this is acceptable to management, then the reduced value of T can be balanced by the reduced value of F.

In many cases, however, management insists on not cutting any features. Now we are left with only one parameter: quality. Because this is the hardest parameter to measure, it is also the one most often ignored. In this situation, when T is made smaller and F, n, and S are unchanged, Q must, perforce, be made smaller by the same fraction as T was reduced.
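We can make the same point numerically. Solving the law for Q and plugging in the example project's values, with a hypothetical schedule cut from 200 to 150 days:

```python
def quality_given_time(F: float, n: float, S: float, T: float) -> float:
    """Solve F*Q = n*S*T for Q (LoC/bug)."""
    return (n * S * T) / F

q_planned = quality_given_time(F=10_000, n=10, S=1000, T=200)
q_rushed = quality_given_time(F=10_000, n=10, S=1000, T=150)
print(q_planned, q_rushed)  # 200.0 150.0: a 25% schedule cut costs 25% of quality
```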

The choice to reduce quality is sometimes made consciously, and could come with a commitment to go back later and improve quality. This is often referred to as taking on technical debt, which is expected to be paid back by improving the code later. The word "debt" is used here in intentional analogy to financial debt: there is a carrying cost to debt in the form of interest, making the total cost continue to go up the longer it remains unpaid. In software, this manifests as more time spent fixing bugs after product release, until such time as the debt is repaid by cleaning up the code to bring its quality back up.

If, however, a decision is made to reduce project time without changing functionality or resources, without consciously recognizing that there will be a reduction in quality, this is effectively like borrowing money without realizing it or having a plan to pay it back. The interest payments will still be there, in the form of more time spent fixing bugs and more time required to add new features, and that will negatively impact the team's schedule on future projects.

Limitations of the abstraction

All abstractions will eventually break down when the parameters go outside the valid range of the abstraction.
  • Newton's law of gravity elegantly describes the paths of the planets, but starts to break down in strong gravitational fields
  • The constant period of a pendulum of a given length starts to change when the pendulum swings too far from its center position
  • The Ideal Gas Law becomes less accurate at lower temperatures, higher pressures, and with larger gas molecules
Understanding the limitations of an abstraction allows us to improve our predictions. In the Parameters section above, I discuss some of the assumptions about each parameter. When we recognize that an assumption does not hold, we can bend the results of our formula to try to compensate.

For example, our formula tells us we can get the same functionality in half the time by doubling our resources. But we know that it takes time to bring a new developer up to speed on a project, so we won't actually be able to cut our time in half. By estimating how much reality deviates from our assumption, we can improve the accuracy of the predictions made by the formula despite the fact that the assumptions behind the formula are not entirely accurate.
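One way to bend the formula, as a hedged sketch: rather than counting new hires at full strength, discount them by an assumed ramp-up factor (the 0.5 below is a guess for illustration, not a measured value):

```python
def adjusted_days(F: float, Q: float, veterans: float, new_hires: float,
                  S: float, ramp_factor: float = 0.5) -> float:
    """Days to complete, counting each new hire as ramp_factor of a
    baseline developer averaged over the project (an assumption)."""
    n = veterans + new_hires * ramp_factor
    return (F * Q) / (n * S)

# Doubling a 10-person team with new hires at 50% effectiveness:
print(adjusted_days(10_000, 200, veterans=10, new_hires=10, S=1000))
# about 133 days, rather than the naive 100
```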

Conclusion

By abstracting the parameters of software development and creating an equation, we can make practical predictions about those parameters. We can make such predictions even when the assumptions behind our formula are not completely true.

One of the most important predictions is this:
If you insist on reducing the time available to complete a software project, and you don't increase the number of people on the project or cut some features, the quality of the delivered software will decrease proportionally to the reduction in time.