The magic triangle of software development

Ante Peric

This is a brain teaser. An idea I find useful when discussing the pros and cons of real-life use cases and features.

Why?

During one particularly difficult period in the development of the product my team was working on, some super-urgent, should-have-been-done-yesterday topics were discussed. A never-ending list of super-important items was thrown on the table. Uneasy estimates were given. Nobody wanted to hear the word difficult. And nobody said the word pressure.

At that time, my brain worked faster, and my mouth had fewer restraints than today.

I stood up against the requirements and said, “All of this can’t be done in a way that meets the future with open arms. We can do it, but we must make sacrifices.”

My manager at the time asked me: “Well, this is quite a generic statement, don’t you think? What’s the real issue?”

The idea of the magic triangle popped up in my mind at that moment.

The Triangle

Definition

The concept behind the triangle is very simple.

When working on any task (unit of work), the workload can be divided into three categories (the triangle’s vertices):

  • Scope (increase/decrease) – the amount of work that must be carried out. It can be anything from brainstorming, spiking, and investigation all the way to development and testing.
  • Speed (speed-up/slow-down) – the pace at which you perform the given task. The faster you go, the more errors you are likely to produce; the slower you go, the fewer.
  • Quality (increase/decrease) – the overall quality of the task outcome. I personally get lost in the definition of quality, so I’ll be highly subjective: a good, proper analysis followed by an even better, clean execution. Or, simply put: “done by the book”.

By definition, each category requires an investment.

The following two statements have proven empirically true (this is a blog post, not a math book):

  • You can’t fully satisfy all vertices. There is no fast solution on a huge scope that can be done with the utmost quality.
  • Moreover, at any given time, you can fully satisfy at most two vertices.

Use-cases

Let’s imagine that you want to have something done really, really fast.

  • If you’re going to invest extra effort in quality, then you need to decrease the scope. There is simply not enough time to do a huge scope by the book. Unless you are a superman. Or a cowboy.
  • If you’re going to invest extra effort in scope, then you need to decrease the quality. A small time frame and a lot of work to do – let the hacking begin!

Now, let’s switch the formula and say that you want to increase the scope:

  • If you want to deliver a quality solution, it will definitely require more time – you’ll be slower.
  • If you want to deliver a fast solution, the quality will suffer.

Technical Epics

I wanted to bring up one special case: we want to do something really slowly, with the utmost quality in mind. What happens to the scope? Well, it decreases. We are focusing on a small unit of work, and we’re doing it really, really well. We’re definitely not gonna wander around the entire system to make a cross-cutting change.

This case is, usually, embodied in technical epics (tech debt).

Why?

Let’s start with one key difference between product and technical epics. Technical epics must be carried out on existing, running systems and must satisfy an already-defined set of system behaviors and rules.
Product epics are like clay: they can be modeled and spiked until we figure out the proper solution that fits all use cases and satisfies all requirements. It’s much harder to work within an existing set of rules and boundaries, especially when breaking them could be a big problem.

As such, technical epics require time and precision, with a variable but usually narrow scope.

Adding Weights

The weights were added later on to give the idea more flexibility. In a nutshell:

  • A weight on a scale of 1-50 is associated with each vertex.
  • The total weight to distribute among vertices is 100.

With this in mind, an ideal distribution is 33.3 per vertex.
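
To make the weighting concrete, here is a minimal sketch in Python. The names and the sample distributions are my own illustration – the triangle doesn’t prescribe any implementation – but the two constraints come straight from the rules above:

    # A minimal sketch of the weighted triangle. Names and sample numbers
    # are illustrative; the triangle doesn't prescribe any implementation.

    MAX_PER_VERTEX = 50   # no single vertex can take more than half the budget
    TOTAL_WEIGHT = 100    # the budget shared by the three vertices

    def validate(scope: int, speed: int, quality: int) -> None:
        """Reject any distribution that breaks the triangle's rules."""
        weights = {"scope": scope, "speed": speed, "quality": quality}
        total = sum(weights.values())
        if total != TOTAL_WEIGHT:
            raise ValueError(f"weights must sum to {TOTAL_WEIGHT}, got {total}")
        for name, weight in weights.items():
            if not 1 <= weight <= MAX_PER_VERTEX:
                raise ValueError(f"{name} must be in 1-{MAX_PER_VERTEX}, got {weight}")

    validate(scope=34, speed=33, quality=33)  # the balanced ideal
    validate(scope=49, speed=50, quality=1)   # fast and broad: quality pays

The 50-point cap is what encodes the first empirical rule: no vertex can ever take more than half the budget, and pushing two vertices toward their cap leaves almost nothing for the third.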

Let the distribution begin…
