Improve Your Agile Practices

Enroll in the FREE 7-lesson course that will help you and your team to become more efficient and agile.

In this course, you will create a strategy for improvement. You can start improving after the first email, and you will use the strategy to stay on track months or years later.

Employee Training: Whose Responsibility Is It?

After my last article, Intrinsic Motivation and Technical Excellence, a reader of my newsletter told me that something I wrote is apparently diametrically opposed to something Uncle Bob wrote:

[...] when a company employs someone, the company is responsible for training the person to become the employee they want!

Intrinsic Motivation and Technical Excellence

Whereas Uncle Bob wrote in The Clean Coder:

Your career is your responsibility. It is not your employer’s responsibility to make sure you are marketable. It is not your employer’s responsibility to train you, or to send you to conferences, or to buy you books. These things are your responsibility. Woe to the software developer who entrusts his career to his employer.

Robert C. Martin, The Clean Coder

Are we disagreeing here? I don't think so. And if we are, then only slightly.

Your Career is Your Responsibility

Period. You cannot expect your employer to do anything that only you will profit from, just because "it is the right thing". So, if you want to learn technology X because it will look good on your résumé, don't expect your employer to give you the time to do so. Your employment is basically an exchange: Money for work done. Anything more (from both sides) is icing on the cake.

Your career is your responsibility. If your employer does not give you the time and resources to learn and become better, you should be prepared to work on these things on your own.

And you should think hard about whether this is really the right employer for you. As an employee, your employer is probably your single source of money. So it's also the single point of failure for your career. When they stop giving you money for whatever reason, you need to be prepared to find someone else who will. So if you get the feeling that your employer is steering you towards an uncertain future, think about your options early. Don't wait until it is too late.

You cannot expect your employer to do anything that only you will profit from. But your employer should do things they will profit from:

Motivated People are Your Employer's Responsibility

Employment is basically an exchange: Money for work done. Actually, most employment contracts are really an exchange of money for time at the desk (but an employee who does not do any work will not stay with the company for very long). It is not your employees' responsibility to love their jobs. Companies cannot just complain that their employees are not intrinsically motivated and then do nothing - they have to create an environment where intrinsic motivation can grow.

You, as a company, are not responsible for making your employees more marketable. But, on the other hand, your employees are not responsible for becoming exactly the employees you need. They are not obliged to learn the skills you need in their spare time. When you hired them, this became your responsibility.

If you, as a company, do not allow your people to refactor, if you do not allow them to learn or go to conferences, if you do not allow them to grow professionally, some of them will start thinking about whether you are really the right employer for them. This is absolutely reasonable because you are their career's single point of failure.

If you want to compete in this global market, if you want to stay relevant in the future, you need innovative, creative, motivated employees. You need people who love their job. This means that you have to give them jobs they can love. You need to give them freedom and time to grow professionally. Not because it's the right thing to do, but because your profit depends on it.

To Recap...

It is not the employer's responsibility to take care of your career. It is not the employee's responsibility to become exactly the person the employer needs. But when an employee does not show any interest, they risk being fired. And when an employer does not allow their employees to grow professionally, people will start thinking about leaving.

When people start to think about leaving because they see the company as an impediment to their career, the best will leave first. Some years ago a friend of mine was leaving his company. He told me "This is a company where only people with mortgages stay. Everyone else leaves after a year". Believe me, you don't want to be a company like that.

When employment is only an exchange of money for work, both sides suffer. When you allow people to do good work - and to learn to do good work - they will be more motivated. And because they are more motivated, you, as a company, will profit.

Intrinsic Motivation and Technical Excellence

In my last article, I wrote about how managers are responsible for creating an environment where intrinsic motivation can happen. Here is one thing that I'm sure would help a lot of companies / teams:

Allow your people to achieve technical excellence!

"Wait, we allow that. We even want our people to be excellent!", I hear you say. All companies and managers want excellent people. But many are not willing to invest anything - money, time, process changes, ... - in enabling their people to achieve technical excellence. And some companies are even hindering their people from doing excellent work.

You know, most (if not all) software developers / testers / managers I know actually want to do good work. They want to create excellent software. They want to learn how to do that. They were intrinsically motivated to create better software - until their company killed this intrinsic motivation.

For many of them, developing software is a day job. They do not want to spend tens of hours of their free time per week on side projects or reading books or doing code katas. They do not want to learn and become better in their spare time.

Which is totally OK! Because, when a company employs someone, the company is responsible for training the person to become the employee they want!

So, if you want technical excellence, train your people and create an environment where people can learn. On the job. Create an environment where technical excellence is rewarded.

Are your team members allowed to read blog posts, magazines and books during their work time? What is your training budget? What is your conference budget? Do you encourage your people to speak at conferences or user group meetings? Do you host or sponsor user group meetings?

Encourage experimenting and learning, even if you know that this means people will make mistakes. Allow team members to try new things and to work on stuff that is not on the Scrum board. Allow your developers to refactor (it saddens me how often I have already heard "we don't have time for refactoring").

And talk to your people. Tell them that you want technical excellence and that you need their help to change the company / department / team in that direction. Ask them what they need, and then listen to them.

Do those things one step at a time. See what works and do more of it. Stop what doesn't. Over time, the intrinsic motivation in your people to be good at what they do will come back.

Do you have any questions? Do you have any stories about companies that do this exceptionally well (or are quite bad at it)? Please tell me!

Intrinsic Motivation

I sometimes hear managers complain that their people are not intrinsically motivated. "We need people who love what they do, not the people we have, who are only here for the money!" Saying something like that as a manager is a bit ironic because...

You are in charge here. If your people are not intrinsically motivated, it's your fault.

Most corporate environments are not very good for allowing intrinsic motivation to happen. Some even kill intrinsic motivation very quickly. And it's often not one big annoyance that kills intrinsic motivation, it's lots of little things that add up.

It's the many little annoyances that give your people the feeling that "I cannot decide anything here", or "Nobody wants to hear my opinion", or "No matter what I do, things don't change here". Then it's only a matter of time until your people adopt a "work-to-rule" attitude...

As a manager, if you want intrinsic motivation, it is your job to create an environment and a culture where intrinsic motivation can happen. One where your people can decide for themselves and experiment and fail and learn and be creative. One where your people feel respected.

What happens at your workplace that kills the intrinsic motivation of the team members? What do you or your managers do to create an environment where people can be intrinsically motivated? Please tell me!

KPIs & Bonuses: It's the System

Earlier in this series, I wrote about what happens when management tries to change people's behavior with KPIs, targets and bonuses. Some people will try to game the system, or the KPIs will cause unwanted behavior. If management finds the right set of KPIs, so that cheating and unwanted behavior become impossible, they'll probably destroy motivation, morale and teamwork. But what if they find that right set of KPIs and manage to create an environment where everyone stays motivated and nobody quits?

It's the System...

The results will still be negligible. 95% of the variation in an organization's performance is caused by the system, only 5% by the people. Please read through the exercise in the linked article - Joe really has only very little influence on whether he can finish his work on time, even if his bonus depends on it.

Also, there are always people who cannot improve their KPIs. Maybe they are juniors who cannot yet influence their outcomes in a positive way. Providing training for them would make much more sense than giving them a bonus based on their KPIs. Or maybe they are already working as hard as they can. They would need some coaching to show them better ways of working, not KPIs to show them that they are not good enough.

Sure, in a system with bonuses based on KPIs, when a company does not get the results it wants, it does not have to pay the bonus. So it saves a little money. But it also didn't get the results it wanted, which probably cost much more than the bonus saved.

Companies must stop punishing their people when the results don't happen. They have to start improving their systems so that results become more likely. They need to create an environment that maximizes learning. And one that is failure-tolerant: Failure always has to be an option, and the company has to be able to survive any failure that could happen.

Trust is Key

Recently I read a great article about a similar topic: Why we cannot learn a damn thing from Semco, or Toyota.

Their [Toyota, Semco, ...] wonderful stories and practices will remain impossible to emulate, however – as long as we keep carrying around fundamentally screwed-up notions about other people's human nature [e.g. that they cannot be trusted].

Niels Pfläging - @NielsPflaeging

Nobody can force people into being more agile. No company can coerce or bribe their people into being more efficient or working together in a better way. But they can create an environment where those things can happen. And they can stop doing things that prevent them from happening.

But as a first step, companies and managers need to completely change their thinking - They'd really have to start trusting their people. And I fear that, at least for some companies I worked with in the past, this would be too big of a step...

KPIs and Bonuses: Motivation and Morale

In the last post from this series, "Bonuses and Wasted Resources: The Right Set of KPIs", I wrote about how it's really hard (or even impossible) to find a set of KPIs where cheating or wasting resources becomes impossible for the people being measured. But what if management can really find that right set of KPIs, one that makes it impossible for people to cheat or waste resources? Then the performance of all employees will improve, right?

It turns out: no, you still have problems. Because most (if not all) people actually want to do good work.

But when management forces them into that right set of KPIs (one that tries to make cheating impossible while rewarding desirable outcomes), it makes it harder for them to actually do their work. People will have to spend more and more time checking whether they are within their KPIs, which reduces the time they can spend being productive.

Also, in our industry, it is extremely important to find creative solutions for hard problems. To be creative, people need an environment where they can fail safely, and where they have lots of options (different ways) for solving the problem. The right set of KPIs takes away both: People can only fail so often before their KPIs go down. And because the KPIs were designed to inhibit certain behaviors (cheating, wasting), they also take away options for solving problems [*].

So, with the tightly woven right set of KPIs, managers make it harder for their people to be productive and to do good work. But that can be offset by the desired outcomes that the KPIs encourage, right?

Well, no. If management makes it hard for people to do good work and to be productive, job satisfaction will suffer. Some people will just tune out and only do what they are told [**]. Some will leave. And those who leave will probably be the company's best talents - those who have lots of options for finding new jobs.

So, if managers constrain their people with KPIs, their best people will leave. And the others will resort to just doing what they're told, killing creativity in the company / teams. I don't think this is really what those managers wanted.

But what if you can find the right set of KPIs, and create an environment where everybody is motivated and wants to stay with the company? Then everything will be fine, right? Well, I don't think so. In the last post from this series, I will tell you why. Stay tuned, and subscribe to my Newsletter so you don't miss it!

[*] Don't get me wrong: I do not want your people to cheat. But the KPIs that were designed to inhibit cheating and wasting resources will almost certainly also inhibit good behavior.

[**] I have talked to several managers who complained that some of their people are lazy and that they cannot find good people. All of them had systems in place that prevented their employees from doing good work - at least to some degree.

Bonuses and Wasted Resources: The Right Set of KPIs

In an earlier article, I wrote about how competition and bonuses encourage cheating and waste. I shared a personal example, where I experienced how I cheated and wasted resources during a competition that didn't even matter to me. I got a lot of positive feedback, including:

Not new, but a nice example of why metrics are bad: http://devteams.at/competition_and_cheating by @dtanzer

Jens Schauder (@jensschauder)

And Jens is absolutely right: This is not new. It is actually fairly old. But time and again, I work with customers who have implemented or want to implement a system of bonuses to motivate their employees. And when I talk to them, I often get answers like:

"Well, you have a point. But you just have to find the right combination of metrics so that cheating or wasting resources is not possible anymore."

OK... But that's incredibly hard. Let's assume Louise Elliott had given us the following challenge: "The side where everyone has a card wins, but there must not be any wasted cards".

My cheat would still have been possible: I could still just declare that we were finished, and hope that the counting would only occur when everyone really had a card. I could even cause some chaos to delay the counting. Wasting cards would also have been possible - it would be next to impossible to determine that not a single card had been wasted.

But that's an artificial example. Let's look at something real companies are doing. Louise mentioned a company that had a KPI for testers like "fewer than X defects found in user acceptance testing". The idea is that testers should find the defects before going to UAT, so if there are no defects in UAT, they did a good job and deserve a bonus.

What you've done now is incentivize testers to waste time: They need to be absolutely, positively sure that there are no defects. So they'll probably spend much more time than necessary testing features, just to be sure. Wasting time has another positive side effect: Fewer features make it into UAT, which makes defects less likely.

Some time ago, I talked to a manager about a similar situation. And he told me: "Of course you are right. That's why we also need a KPI that says average testing time per feature must go down every year".

Your testers could now just reject all the hard-to-test features very quickly. Find some minor bug in the feature or the specification, and reject it. This increases the lead time for the hard-to-test stuff, and so more easy-to-test stuff goes into every release. Fewer features in the UAT, and they are all easy to test: Fewer defects! Of course, your product still suffers. So we need another KPI...

I guess there might actually be the perfect set of KPIs - Maybe it is possible to find a combination of metrics where cheating and wasting resources becomes so hard that people won't even try anymore. But if you find that set, you are still screwed. In one of my next blog posts, I'll tell you why: Stay tuned! Subscribe to my Newsletter so you don't miss the third part of this mini-series...

P.S.: Did you know that you can ask me anything?

Competition, Bonuses, Wasted Resources and Cheating

Yesterday I was at TopConf Linz, and one of our speakers, Louise Elliott, wanted everyone in the room to get a card with a red and a green side. She needed us to have those cards because she did some experiments where we could vote by showing either the red or the green side. But to me, what happened before the actual experiments was even more interesting...

Louise gave me and another person each a deck of cards and asked us to distribute them to the people in the room. I was to distribute them to the right side of the room, the other person to the left side. And the side where everyone had their cards first would win.

We started to distribute the cards. I started quite slowly, and I don't know if my colleague / competitor started faster. But then Louise said: "You do know that you are in a competition here, right?"

I realized that I had many more cards than I needed, so I started to hand out large decks of cards to every row. Like, 15-20 cards for a row with 8-10 people. After the last row, I shouted "done!", we counted, and everybody had a card, so we were declared the winners. The other side finished at just about the same time...

We were the winners, but I had wasted just about half of my "resources" (cards) by giving each row more than they needed. Also, we maybe cheated a little: I'm not sure everyone already had a card when I shouted "done" - I guess not. But I was confident that everyone could show a card when asked to, which would, of course, be a little later than my shouting "done". So yes, we won. But only by a very small margin, and we wasted a lot of resources and cheated a little...

This reminded me of the bonus schemes I saw implemented at some of my customers. They created a system of metrics, rewards and competition in which cheating and/or wasting company resources was a way to win. And if something is possible, somebody will do it sooner or later.

Now, you could say, we obviously had the wrong metric. Or not enough different metrics. If Louise had nailed down the winning conditions better, my cheat would not have been possible... In one of my next blog posts, I want to write about why this won't work either: Stay tuned! Subscribe to my Newsletter so you'll never miss one of my articles...

Update: Part two is here: Bonuses and Wasted Resources: The Right Set of KPIs. But you can, of course, still subscribe to my newsletter ;)

Well Crafted Code, Quality, Speed and Budget

A couple of days ago, I wrote the article Well Crafted Code Will Ship Faster. It contains some reasoning about why well crafted code is also cheaper and faster, and some ideas about what you can try next to get there (like "first do things right, then do the right things"). This article sparked an interesting discussion on Twitter.

We should stop using misleading generalizations for concepts that exist on a scale. That's my point.

Benjamin Reitzammer

I agree the science we have is crap but does that mean we should stop trying?

Christoph Neuroth

I really liked the discussion, but I think some people did not understand what I wanted to say with my original article. I guess this is mainly because I did not express myself clearly enough. And in some cases, I just disagree.

Chad Fowler also sent me the link to his very interesting presentation McDonalds, Six Sigma, and Offshore Outsourcing: Unexpected Sources of Insight during this discussion. You should watch it, it's great... Just, I would do some things differently (I would never quote the chaos report ;) ), and I disagree with some of his points.

Here is a quick summary of all the criticism / feedback / ideas I got in the Twitter discussion:

  • Most of the concepts in the article ("Quality", "Well crafted code", ...) are not universal, but highly subjective and opinionated and context dependent.
  • Some things I wrote ("defect", "efficiency") are so generic that they are almost meaningless.
  • The studies I quoted only apply to "industrial style" software development, not small teams / startups / ...
  • The science we have is crap.
  • We should stop using misleading generalizations for concepts that exist on a scale.
  • A more nuanced discussion would better reach the target.
  • Internal quality is nearly unimportant for software to function well / Your users don't care about internal quality.

Here I want to clarify some of the things I wrote, define some of the concepts and provide more arguments where I disagree. This is going to be a loooong article, but please bear with me. I will try to show you my perspective, and I hope you can learn something. Or start another interesting discussion when you disagree with me ;)

All The Answers...

Here comes, basically, the disclaimer of this article :) During our Twitter discussion, Chad Fowler wrote:

I've done a lot of thinking about this. I don't have answers but I really enjoy it :)

Chad Fowler

I have done a lot of thinking about this too. And I have some answers. Here, I tried to write them down. But I don't want to claim that those are universal truths.

Those answers work well for me, right now, most of the time. And from discussions with other developers, I know they work well for others too. I hope they can help you to:

  • Start thinking about those topics / Show you one perspective on the topic.
  • Start discussions with your coworkers / managers.
  • Start investigating where you are losing time (and money).
  • Start improving.

Cycle Time and Lead Time

I am using the definitions for those two terms that are most commonly used in software development:

Lead Time is, roughly speaking, the time from when we first have an idea about a feature until our users get the updated software with the implementation of the feature present.

Cycle Time is, roughly speaking, the time from when we start development of a feature until we finish its implementation in a tested, documented, potentially shippable product.

Note that this definition differs a bit from the definition in manufacturing, where "cycle time" is the average time between products created in a process, i.e. "Assembly line 3 produces a new razor every 5 seconds". I actually like this definition more, but nobody in software development uses it, so let's stick with the definition above.
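
To make those two definitions concrete, here is a minimal sketch in Python that computes both averages from per-feature timestamps. All names and dates are invented for illustration:

    from datetime import date

    # Hypothetical timestamps for two features: idea raised, development
    # started, "done" (tested, documented, potentially shippable), shipped.
    features = [
        {"idea": date(2016, 1, 4), "started": date(2016, 2, 1),
         "done": date(2016, 2, 17), "shipped": date(2016, 2, 19)},
        {"idea": date(2016, 1, 11), "started": date(2016, 2, 22),
         "done": date(2016, 3, 2), "shipped": date(2016, 3, 4)},
    ]

    def average_days(deltas):
        deltas = list(deltas)
        return sum(d.days for d in deltas) / len(deltas)

    # Lead time: from the idea until the users get the software.
    lead_time = average_days(f["shipped"] - f["idea"] for f in features)

    # Cycle time: from the start of development until "done".
    cycle_time = average_days(f["done"] - f["started"] for f in features)

    print(f"average lead time: {lead_time:.1f} days")    # 49.5
    print(f"average cycle time: {cycle_time:.1f} days")  # 12.5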

...Will Ship Faster

By "will ship faster" I mostly mean "will have shorter average cycle times". Well crafted code does not influence things that happen before we even start a feature. But it definitely has an influence on how long we need to develop, test and document the feature. So it will influence our cycle time.

The lead time will be shorter too, since cycle time is a part of the lead time: It will be shorter by the amount of time we saved in cycle time. With well crafted code, it might also be easier to go from "potentially shippable" to "shipped", so in some cases the lead time might be even shorter than that.

Speed and Cost

With a stable team, cost is a function of speed: The cost is dominated by the salaries of the team members. So, if "will ship faster" holds, we will also be cheaper.
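
As a back-of-the-envelope sketch (all numbers are invented): with a stable team, the cost of a feature is roughly its cycle time multiplied by the team's running cost, so a shorter cycle time directly means a cheaper feature:

    # Invented numbers: a stable team of 5, each costing the company
    # about 8,000 Euros per month (salary plus overhead).
    team_run_rate = 5 * 8_000  # Euros per month

    def feature_cost(cycle_time_months):
        # With a stable team, cost is dominated by salaries, so it is
        # simply proportional to how long the feature takes.
        return cycle_time_months * team_run_rate

    print(feature_cost(0.5))  # 20000.0 Euros
    print(feature_cost(0.4))  # 16000.0 Euros - 20% faster, 20% cheaper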

Everything is more complicated when the team is not stable: When the team size or composition changes, the "per person output" becomes lower. This effect is probably worse when you grow a team than when you shrink it (see also Original Scope On Time On Budget - Whatever, section "The Problem With On Budget").

Speed and Scope

If you are faster, you will deliver more. Obvious, right? But there is an important aspect here that we often forget:

Requirements change (~27% within a year). If you have long lead times, some of the features you ship are already outdated by the time the users get the software. So if you can deliver faster (reduce the lead times of your features), you have a better chance of delivering something the user actually wanted. You have a better chance of delivering quality software (see below).

So, if you are faster, you do not only deliver more features. You also deliver fewer features that are already outdated by the time the user gets them. In total, you deliver even more useful features.
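
Here is a rough way to quantify that effect, using the ~27% yearly change rate from above. The linear approximation is my own simplification, just to make the effect visible:

    YEARLY_CHANGE_RATE = 0.27  # ~27% of requirements change within a year

    def outdated_share(lead_time_months):
        # Linear approximation: the longer a feature waits in the pipeline,
        # the more likely its original requirement has already changed.
        return YEARLY_CHANGE_RATE * lead_time_months / 12

    print(f"{outdated_share(12):.0%}")  # 27% outdated with a one-year lead time
    print(f"{outdated_share(1):.0%}")   # 2% outdated with a one-month lead time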

The Cost Of A Feature

Say you have a long running software development effort. The team is stable and experienced. You need to add a feature F_n, where n is the index of the feature (F_1: You implement it as the first feature, right after starting; F_100: You have already implemented 99 other features before starting this one). Does it make a difference whether you implement the feature in the first month or after 6 years of development?

Yes, it does. In the beginning, you lose some time because you don't have all the infrastructure in place: You need to set up version control, the CI server, the test environments. You will change your architecture and design a lot, because you are trying to figure out what is right. You will lose some time because you have to create a first sketch of all the layers and multiple components of your software.

Later, you lose some time because you don't immediately understand the effects of your changes. You have to search for all the places where you have to make changes. Make sure you don't cause any side effects. Read the documentation and find out where it is still accurate. Work around that one weird design that was added because of BUG1745. Find out why there is some inconsistency in the architecture and whether it affects you. You are slower (in cycle time, see above) because of Accidental Complication.

Developing a feature in one year will cost more than developing the same feature now. We want to minimize the difference, though.

The big question here is: How much slower is implementing feature F ("Log all logins of privileged users") as F_100 than it would have been as F_1? Two times? Ten times? Not slower at all? How much slower would F_1000 be? I have seen some very large values for this slowdown factor in past projects. If you like to use the term "Technical Debt", you could say the slowdown was caused by not paying back the debt.
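
A toy model shows how fast even a tiny slowdown factor compounds. Assume (and this growth law is entirely my assumption, just for illustration) that every already-implemented feature adds a small, fixed percentage of accidental complication to all later ones:

    BASE_COST = 1.0  # cost of F_1, in arbitrary units

    def cost_of_feature(n, slowdown_per_feature):
        # Each existing feature makes the next one a bit more expensive.
        return BASE_COST * (1 + slowdown_per_feature) ** (n - 1)

    for slowdown in (0.0, 0.005, 0.02):
        print(slowdown, round(cost_of_feature(100, slowdown), 2))
    # 0.0   -> 1.0  (no accidental complication: F_100 costs the same as F_1)
    # 0.005 -> 1.64 (half a percent per feature: F_100 costs ~1.6x F_1)
    # 0.02  -> 7.1  (two percent per feature: F_100 costs ~7x F_1)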

Minimizing F_n (For Large n)

To minimize the cost of adding features later, you have to:

  • Make sure everybody understands the architecture of the system.
  • Make sure everybody understands the current functionality that is implemented and that the description of this functionality is accurate.
  • Minimize rework caused by not understanding requirements correctly.
  • Make sure developers can easily find all the places where they have to make changes when they are working on a feature / defect.
  • Make sure changes can be made locally (i.e. there are no side effects that ripple through the system when you make a simple change).
  • Make sure developers can find out quickly when there are side effects or regressions.
  • Make sure that no defects escape to production, and if they escape, that you find and fix them quickly.

I am pretty sure all those things are necessary, but I guess they might not be sufficient (i.e. you have to do even more in your specific situation).

Quality

Quality is really hard to define. I know, and I agree. But it is not entirely subjective.

There are two important aspects of quality: External quality is the quality as experienced by the user, and internal quality is the quality as experienced by the developers (I wrote more about this in my free email course Improve Your Agile Practice).

I want to call "internal quality" "well crafted code" for now (see below) and focus on external quality from now on when I say "quality". I think there are two important conditions for quality:

Absence of defects: Software with fewer defects has higher quality.
Does what the user wants: It fulfills the requirements the users currently have, which may or may not be the requirements they wrote down 6 years ago (see above).

Both conditions are necessary, but not sufficient. Quality has lots of other aspects too, some of which are, in fact, subjective.

Defects

Defects are hard to define too. I often hear the very simple definition "A defect is when the software does not conform to its specification".

Which leads to behavior like: "Yes, the software does not work correctly, but it was specified exactly like that in the requirements, so it is not a defect". If you still have discussions like that, you probably value processes and tools more than individuals and interactions. Comprehensive documentation more than working software. Contract negotiation more than customer collaboration. Following a plan more than responding to change. You are probably not very agile. Maybe you are not ready for this "well crafted code" discussion yet. But the good news is: You can start to improve now.

Honestly, "does not conform to specification" does not make sense as a definition of a defect when our goal is providing a steady stream of value to the customer. On the other hand, "was reported as a defect" is also not enough.

Do you have a product vision, and a list of things your product definitely does not do? (Hint: you should have.) Then I would classify something as a defect like this: "It was reported by someone as a defect; given our current product vision, a reasonable person would expect the behavior described in the report; and it does not fall into the list of things our product definitely does not do".
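
As a sketch, the classification above boils down to three checks. The data structures here are invented; a real bug tracker and triage process will look different:

    # Invented example: things our product definitely does not do.
    PRODUCT_DOES_NOT_DO = {"offline mode", "bulk import"}

    def is_defect(report):
        reported_as_defect = report["reported_as"] == "defect"
        # "Given our current product vision, a reasonable person would
        # expect the described behavior" - this check needs human judgment,
        # so we just record the triage decision here.
        reasonable_expectation = report["reasonable_expectation"]
        out_of_scope = report["missing_capability"] in PRODUCT_DOES_NOT_DO
        return reported_as_defect and reasonable_expectation and not out_of_scope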

Low Defect Potential, High Defect Removal Efficiency

Those two terms are from a study that was quoted in a book I quoted in the original article. I don't know how the original study defined them, and maybe "the science we have is crap" anyway, so I want to come up with my own definitions that make sense to me.

Low defect potential: The chances of introducing a defect with a change are lower than in comparable teams / systems.

High defect removal efficiency: The average lead time and cycle time for fixing defects are lower than in comparable teams / systems.

Getting good at avoiding and fixing defects is essential for delivering high-quality software.

Both are necessary to deliver a high quality product, since high quality means (among other things) absence of defects.
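
With my definitions above, both boil down to simple numbers you can compute from your tracker data and compare against other teams or systems. A minimal sketch, with invented sample data:

    from statistics import mean

    # Invented sample data: how many changes the team shipped, how many
    # defects those changes introduced, and how long each defect took
    # to fix (cycle time in days).
    changes_shipped = 400
    defects_introduced = 12
    defect_fix_cycle_times = [1, 2, 1, 5, 3, 1, 2, 8, 1, 2, 4, 1]

    defect_potential = defects_introduced / changes_shipped  # lower is better
    avg_fix_cycle_time = mean(defect_fix_cycle_times)        # lower is better

    print(f"defect potential: {defect_potential:.1%} of changes")    # 3.0%
    print(f"average fix cycle time: {avg_fix_cycle_time:.2f} days")  # 2.58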

The Cost Of Defects

Defects are expensive. Your team has to find them, fix them, and deliver the fixes. If a defect escapes to production, you often have to put out fires in operations while you are developing the fix. Your users cannot do their work - they are losing time and money too. And you lose reputation.

The later you find a defect, the more expensive it becomes. A defect found in the specification... - you know the numbers. But if a defect escapes to production, it does not stop there. It does matter whether you find a defect within a week or after two years!

When you find a defect in production a week after deploying the code, it is probably still rather cheap to fix: All the original developers are still there, they still remember what they did, the documentation is still accurate. And there is not much new code that depends on the code with the defect. When you find the defect after two years, all of that will have changed. It is way more expensive to fix it.

Defects are ridiculously expensive.

High-Quality Software and Speed

When your software has low (external) quality, i.e. has defects or does not do what the users want, you have to do a lot of rework. Rework is expensive, and while you do rework you cannot work on new features, so it slows you down.

High external quality in our software allows us to reduce the lead time of new features.

But: Some rework is necessary. Often we don't know exactly what our users want. And they don't know either. Most of the time, we don't know beforehand how the perfect software would look and behave. So we have to deliver non-finished software to gather feedback, and then get better based on that feedback.

Well Crafted Code

Maybe "well crafted" is somewhat subjective again, but we all know crappy code when we see it. So there must be some objective aspects that we can find about well crafted code.

Well crafted code is tested, and the tests are easy to understand and easy to read. The code has tests on different levels, and tests that tell me about different aspects of the system (what is the functionality from a user's point of view, how does the system interact with the outside world, how does it behave under load, how do all the single units in the system work, ...).

Well crafted code is self documenting. When I read the code together with its tests, I want to be able to understand what is going on - without reading any external documentation (wikis, ...).

Well crafted code does only what it absolutely has to do. Its design does not anticipate any possible future. It has a minimum number of design elements.

Well crafted code follows good software design and has a consistent, documented architecture.

Well crafted code now looks different than it did one, two or 10 years ago, because it was refactored continuously.
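
To make this a bit more tangible, here is a tiny invented example, reusing the "log all logins of privileged users" feature from above. The test reads like a specification of the behavior, with no external documentation needed:

    class User:
        def __init__(self, name, privileged):
            self.name = name
            self.privileged = privileged

    class InMemoryAuditLog:
        def __init__(self):
            self.entries = []

    class Authentication:
        def __init__(self, audit_log):
            self.audit_log = audit_log

        def log_in(self, user):
            # ... authenticate the user, then leave an audit trail ...
            if user.privileged:
                self.audit_log.entries.append(f"privileged login: {user.name}")

    def test_privileged_user_logins_are_logged():
        log = InMemoryAuditLog()
        Authentication(audit_log=log).log_in(User("alice", privileged=True))
        assert log.entries == ["privileged login: alice"]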

Good Software Design

Good software design is, in my opinion, less subjective than some other terms I described above.

You can follow a set of rules to come to good design, like the "Four Rules Of Simple Design" by Kent Beck. They don't tell you exactly what you have to do in which situation, but they tell you which things you should think about when designing software.

You can learn about design patterns (if you use object oriented languages) or equivalent patterns and techniques when you use other types of languages.

You can apply some principles, like reducing coupling, improving cohesion, or the SOLID principles (if you use object oriented languages).

You can try to find better names and write more self documenting code by applying domain driven design.

Most of the things above are not subjective, some are even measurable. Maybe there are other ways to arrive at good design, but the ones above seem to work for many developers and teams.

It's Not Much Slower Anyway

The Microsoft study on TDD found that teams doing TDD were between 15% and 35% slower than teams not doing it (and all had higher quality). This is not very much. But you might say "all the science we have is crap" again, so I'll try to argue that well crafted code is not much slower anyway (provided you don't do over-engineering or gold plating - but I wouldn't count those as "well crafted code" anyway).

Here is a situation I have experienced several times, both myself and watching fellow developers: You bang in a feature, without writing automated tests or really caring about quality, because you are in a hurry. And you are really quick! You start the application, and almost everything works - great, you just have to fix this one little error, and then... You fire up the debugger. And you spend the next few hours debugging.

If you had done it right in the first place, you could have saved a big part of that debugging time. You would fire up the debugger less often, and when you had to, you would only debug a small, focused test, not the whole application.

Well Crafted Code and Speed

Let's look back to "Minimizing F_n (For Large n)":

  • Self documenting code and consistent names from the domain make it easier for everybody to understand the architecture of the system.
  • Executable specifications (a part of the test suite) make sure everybody understands the current functionality that is implemented. And that "documentation" cannot be outdated, because otherwise the tests would fail.
  • A consistent design and architecture makes sure developers can easily find all the places where they have to make changes when they are working on a feature / defect.
  • The SOLID principles and high cohesion / low coupling make sure changes can be made locally (i.e. there are no side effects that ripple through the system when you make a simple change).
  • The comprehensive test suite makes sure developers can find out quickly when there are side effects or regressions.
  • Focusing on quality from the beginning makes sure that not many defects escape to production. Those that do escape can be fixed quickly, because making changes is easy (see above).

Your Users Do Care About Internal Quality - Indirectly

"Internal quality is nearly unimportant for software to function well, and it's only function that counts. Your users don't care about internal quality." Really?

At some point, your users will recognize that it takes longer and longer until they get software that contains the new features they requested. And they have to pay more and more for it. And at that point, you will recognize that they actually do care about things like cycle time, lead time, and the marginal cost of features. Even if they don't use exactly those words.

But by then it's too late for you - At that point, you have a "Rescuing legacy code" project. And it will take time and money to get back on track - a lot of time and money.

Well, you could rewrite the whole thing from scratch, but that's probably not a good idea either. I mean, some companies have pulled it off (I have consulted on some rewrite projects that actually delivered in the end), but it will be more expensive and take longer than you think.

Putting it All Together

Yes, some of the terms I used in my original article (like "quality", "well crafted", "defect", ...) are a bit fuzzy and generic. But I still mean it:

Software with a high external quality will ship faster because we spend less time on rework and have more time to work on relevant stuff. Through reduced lead times, we can deliver more features before their original requirements become obsolete.

And about well crafted code:

Well crafted code (i.e. software with a high internal quality) will ship faster because writing new code and fixing defects is less risky (no side effects, immediate feedback from automated tests) and also faster, because we can quickly find the places we have to change (and there are fewer of those places than in crappy code).

Now you can go back to my original article, which hopefully makes more sense now. There you will find some practical considerations and also things you could try right now.

What If My Software Is Really Simple?

A former colleague once said TDD would not make sense for them "because our iOS app only consists of a user interface and some server calls, and you cannot really test those. There is no logic in between. And it's really easy to test manually." Well, maybe you can get away with it when you have an app like that.

But if you want to deliver such an app really often (multiple times per day, which is not possible with iOS apps anyway), you would need automated tests again, so the situation is not that clear cut.

Anyway, you can unit test those. I know at least one developer - Rene Pirringer - who creates iOS apps in a test driven way. And he also tests his user interfaces with fast, automated tests. He is really enthusiastic about what he is doing, and he told me he would not want to work in a different way again.

What If The Time Horizon Is Really Short?

What if the time left until the deadline is really short and our short term goals are so much more important than our long term strategy (i.e. "We are running out of money next month so we need the credit card form now")?

Well, then you might get away with it, but how much time can you actually save? Doing it right is not that much slower anyway (see above).

Also, your actions now will come back and bite you later. And you don't know when. You cannot possibly estimate when that really hard defect will come or when you'll get really stuck on a feature because of the things you're doing now. It might be in three years, but it might also be as soon as next week! And your actions now have potentially unlimited downside! (Well, almost. Unlimited is really a lot...)

So, even in this situation, think hard if being a few percent faster is actually worth it...

Throw Away Code (a.k.a. Implement Everything Twice)

You could implement a feature in a quick and dirty way (as a Spike, maybe time-boxed, to learn something), then throw everything away and do it "right" the second time. The idea behind this is that the second time will be much faster than the first time, even though you're doing it "right" now: You can incorporate all the learnings from the first time.

The only problem here (which is basically the same problem as with "Technical Debt") is: You really have to follow through. You really have to go back, throw away the bad code, and do it again. And in many organizations, you will have a hard time arguing for throwing away "perfectly working code" when the time pressure gets bad...

Original Scope On Time On Budget - Whatever...

At a conference, I was listening to a talk by a "famous" speaker. I was rather unimpressed - he was switching topics in a confusing way, and the talk was also a bit boring. But it was OK(ish).

But then he based one of his arguments on the Chaos Report. Now, you know, I am really skeptical when someone uses the Chaos Report - especially data from before 2015 - to back up their arguments. Especially when they talk about agile software development.

Let me tell you why. And I also want to tell you why the data from 2015 (and hopefully onward) might represent modern software development better - but still may have some problems.

Chaos Report

The Chaos Report measures whether projects delivered the original scope on time and within the original budget. Every year, they report really large failure rates: In most years, around 60-70% of all projects are either "challenged" or "failed".

The (pre-2015) Chaos Report defined "success" as "delivered the original scope on time and within budget".

There are a lot of things one could criticize about this report. For example, that it excludes a large number of organizations and project types, by design. Or that the statistics are flawed: Organizations self-select to participate in the survey, and no sample size is defined.

But my biggest problem with this report is how they measured "challenged" projects - at least up until 2014. Challenged projects are projects that did deliver, but either late, or over budget, or without the original scope [1].

A measure of success where Flickr is "challenged" (it did not deliver the original scope, which was an online game) and Microsoft Bob might be a success (I don't know whether they delivered on time, but they might have) is just not useful. [2]

The Problem With "On Time"

When we talk about "on time" in this context, we probably mean an estimated delivery date that was based on the original scope. So, no "Fixed deadline, deliver whatever is ready" scenario and also no "We deliver when it is ready" scenario. Both do not make sense when we talk about what the chaos report measures: "Original scope delivered on time and within budget".

So "on time" depends on a delivery date based on an estimate (i.e. a guess!) based on the original scope. And that estimate has to be made very early in the project, when we have little actual data and experience. This means that either the estimate is heavily padded, or the project will not be on time.

And "on time" also depends on how well we know the project scope at the beginning. To give meaningful estimates at the very beginning of a (longer-running) project, we need to define the requirements in detail. Some high-level goals will probably not be enough to estimate an end date. But defining detailed requirements early has some real disadvantages compared to working with high-level goals (for example, we lose a lot of flexibility).

The Problem With "In Budget"

Here we have almost the same problem as in "The Problem With On Time" above: With a stable team, the budget of a software development project is just the time multiplied by some factor. So, with a stable team, "in budget" and "on time" are basically the same.

But, you might think, I could add some people to the project team, so that we can deliver in time, but slightly over budget... Well, this often backfires.

"Adding manpower to a late software project makes it later" - Fred Brooks wrote this in 1975. Yet, even now, in 2015, many managers and teams grossly underestimate the impact of new team members. In fact, they will not only be over budget (as expected), but also over time in the end.

The Problem With "Original Scope"

Around 27% of the original requirements will change within a year. So, if you try to deliver the original scope in a long-running project, you will deliver outdated software in the end.

But what if your project is quite short and your requirements are really stable? Then this measure would not be so bad, right?

Probably... But you still lose a lot of flexibility when you define the requirements in minute detail at the beginning of the project. So you reduce your chance to deliver on time and in budget. And you lose a lot of time in the beginning defining the requirements - time that you could use to deliver working software and get real feedback from real users!

To get that flexibility and feedback back, you only want to have some high-level goals at the beginning. But can you really measure whether you delivered the original scope then?

A New Measure of Success

Starting with the 2015 report, the Standish Group added three more factors to their success criteria: strategic corporate goal, value delivered, and satisfaction [3] [4]. The reason why they did this:

However, we have seen many projects that have met the Triple Constraints and did not return value to the organization or the users and executive sponsor were unsatisfied.

Jim Johnson

Now, this is a big step in the right direction. But I think the reasoning is backward. My problem with the chaos report is not about projects that were counted as success but did not return value. Sure, those projects are a big problem. But they don't explain the skewed numbers in the chaos report.

I see a bigger problem with projects that failed in the first three categories, but still provided value and satisfaction for their users and achieved their strategic goal. Those must be counted as successes!

I hope the Standish Group weighs the factors in a way that, when the three new ones are met, the first three become unimportant. Then, maybe, we'll get interesting numbers from the Chaos Report in the future. But for now, I remain skeptical.

Conclusion

All the above problems boil down to: It does not matter whether you deliver the original scope on time and in budget. When people are using your software and get real value from it, it is a success, even if it was late or did not deliver the original scope. On the other hand, "But we delivered in time and on budget, the requirements were wrong" is no excuse when nobody uses your software. It is a failure.

The only thing that matters in the end is: Does the software you created deliver more value than it cost? Did your team and your company maximize the return on investment?

The new measure of success better captures this reality, but it still contains "on time", "on target" and "within budget", and most of the problems I wrote about still apply. It is a step in the right direction, but I will continue to be skeptical when someone uses the chaos report to back up their arguments about agile software development.

[1] Actually, the Standish Group acknowledges this and warns people not to lump "challenged" and "failed" together (Interview: Jim Johnson of the Standish Group). Still, "challenged" sounds like there is a problem to be solved. And this is simply not true: If you work in a truly agile way, your project is automatically in the "challenged" area!

[2] See The Non-Existent Software Crisis: Debunking the Chaos Report

[3] Success Redefined

[4] Standish Group 2015 Chaos Report - Q&A with Jennifer Lynch

Thanks to Glen Alleman for his feedback and input after reading a draft of this article on my newsletter.

We Need Estimates for Our Customers

This is the third article in the mini-series "Why Do We Need Those Estimates".
Never miss one of my articles: Readers of my newsletter get my articles before anybody else. Subscribe here!

The reasoning goes like this: "Our customer only hires us on fixed price. We need detailed estimates to calculate that price, so we can even enter the bidding process". Of course, there are variations to this reasoning. But the bottom line always is: We need estimates for our customer.

In this article, I first want to discuss fixed price projects, and then move on to other ways of running a project. Then we'll look at how estimates fit into this picture, and which kinds of estimates are useful and which are not.

The Problem With Fixed Price

There are some customers who only want to hire you for a fixed price. This even happens within companies: The software development department has to quote a price to the specialist department (the "internal customer"). Based on that, they agree on a budget for the requested features and sign a contract. The software development department basically has to deliver a fixed price project within the same company.

Customers often argue that they need the predictability of a fixed price project. Sure, they might end up paying a bit more than necessary, but they have to know when they get what exact feature set for which price.

Here's the problem: A fixed price project harms everyone involved.

  • Since the developers have to deliver within the allocated budget, they have to pad their estimates heavily. The customer potentially pays much more than they'd need to.
  • There is a huge amount of risk involved for you, the supplier. If the customer does not accept the software, and it was your fault, you don't get any money until you have fixed all the problems. You might be bankrupt by that time.
  • If the customer does not accept the software, and it is their fault, you still don't get any money. You have to get a lawyer and drag them to the courts. Suppose you will win (because it was clearly the customer's fault) - You still might be bankrupt by then.
  • There is no predictability for the customer. Well, the price is fixed, but the project might still be delayed. The supplier might not even be able to finish it at all. Or they might ship a very low quality product. Sure, the customer won't have to pay for it when they don't get it within acceptable quality, but when they need the software, that does not help them.
  • To make fixed price work, you have to freeze the requirements at the start or have a very rigorous change management process. Remember, 27% of your requirements will change within a year! So, if your fixed price project runs for longer than approximately 6 months, you, as a customer, will get outdated software at the end.
  • Because of the fixed requirements, the developers are constrained to a specific solution. If they discover a better or cheaper way of doing things as they go, they cannot implement it.

There are customers who will only hire you for fixed price projects. But do you really want to work with them?

Is Your Customer More Flexible Than You Think?

If you want to find better ways to work together with your customers - whether they are "internal customers" or other companies / individuals - you'll have to ask them. And you'll have to work out the better way together with them.

Only then can you convince them that this way is actually better - that they will save money and/or time. And that they will still have predictability - just maybe in a somewhat different form.

From Fixed Price To Fixed Budget

As a first step, go from a fixed price to a fixed budget. The agreed budget is the maximum amount of money that the "project" will cost. But the feature set is not yet defined. In fact, the feature set of the project will only be known after the fact - When you deliver the software.

Then, work together with your customer to maximize the value they get for their money. Show them intermediate results (working software!) early and often. Have an ongoing conversation about which features are most important and how those features should work for the users. You'll need a strong Product Owner in your team to guide the customer and make sure that they work towards a greater vision - and do not lose themselves in details.

Allow the customer to stop the project at any time, whenever they are satisfied - then they will get the software that is ready at this exact moment. Your customer can potentially get the software cheaper than originally planned when you do it like that. You have to be able to deliver the software at a moment's notice, so you really have to care about "potentially shippable software".

Reducing Risks (For Both Parties)

The above suggestions pose a couple of risks to both parties - you and the customer. You might run out of work when the customer stops the project early. The customer gains a lot of flexibility, but loses most of the predictability - and many customers care more about predictability.

What can we do to reduce those risks?

  • Incremental Funding: Split the "project" into a series of very short cycles, where you only fund the next cycle if you are satisfied with the current one. For example, start with a four-week project to create a prototype, and after two weeks decide whether you'll fund the next stage. This next stage could be an eight-week project to implement the most important features.
  • The first sprint is free: Allow your customers to see some results without paying anything. For the first one, two or four weeks of the project, agree with the customer that they only have to pay when they want to continue the project. Also agree that they can only use the results of your work if they continue the project.
  • SLAs / Retainers: Guarantee to your customers that you'll have capacity to provide a minimum number of developer days per month for them. They will have to pay for those days, even if they don't need them. For example, your customer pays for 20 developer days per month for the next 6 months (maybe at a reduced rate), and you guarantee that your developers will be available for at least those 20 days, even if you have other projects.
  • "Money For Nothing": Say you are on a 6 months project with 5 developers. The total budget is 500000 Euros. The customer is happy with your solution after three months and cancels the project. This is a huge risk for you, since you need a new project right now. How can you mitigate that risk? Well, you can agree with your customer to split the remaining budget (e.g. 40% / 60%) when they cancel early. Your customer pays 350000 Euros (the 250000 already spent plus 40% of the remaining 250000). You have won some time to find new projects, and they still get the software cheaper than planned (see the sketch after this list).
  • Continuous Delivery: As I already said, you always have to be able to deliver to pull this off. The easiest way to achieve this is to always deliver. Deliver working software to your customer as frequently as possible - Strive for multiple times per day.
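
Here is the "Money For Nothing" arithmetic from the list above as a small sketch, so you can play with the numbers:

    def early_cancellation_payment(total_budget, spent, supplier_share=0.4):
        # The customer pays what was already spent, plus the supplier's
        # agreed share of the remaining budget.
        remaining = total_budget - spent
        return spent + supplier_share * remaining

    # The example from above: 500000 Euro budget, cancelled after
    # 3 of 6 months (250000 already spent), 40/60 split of the remainder.
    print(early_cancellation_payment(500_000, 250_000))  # 350000.0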

Estimates and Experiments

So, how do estimates fit into this picture? Maybe you want to stop estimating altogether and always deliver in very short, incrementally funded cycles. That might work, but you probably don't want to do it. I guess most companies and teams want at least some predictability when it comes to their income or the price they'll pay for some software.

You should give indirect, abstract, coarse-grained estimates a try. I have worked with several teams where story points or average cycle times worked very well. These estimates need some "calibration time" before you get predictability, but this is OK - you can start to work with the numbers very early, knowing that your predictions will get more accurate as you collect more data.
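
A minimal sketch of such a coarse-grained forecast, based on average cycle times (all numbers are invented; real teams often use throughput per week or Monte Carlo simulation instead):

    from statistics import mean

    # Calibration data: cycle times (in days) of recently finished stories.
    recent_cycle_times = [3, 5, 2, 8, 4, 3, 6, 4]

    def forecast_days(remaining_stories, stories_in_parallel=1):
        # Remaining work times the average cycle time, divided by how
        # many stories the team typically works on in parallel.
        return remaining_stories * mean(recent_cycle_times) / stories_in_parallel

    print(forecast_days(20, stories_in_parallel=2))  # 43.75 days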

But then you'll recognize that your predictions will never be really 100% accurate. Don't give in to the temptation to switch to direct, concrete, fine-grained estimates, like "hours to complete a task". They don't increase your predictability but only provide an illusion of accuracy. And they can lead to several kinds of bad behavior.

In any case, you should design your work as small, safe-to-fail experiments. Deliver something small, then get feedback. Then deliver something small, get feedback. Then deliver something small... This is your best chance to increase the predictability of the desired outcome - the satisfaction of your users. Even if the delivered feature set is completely different from the original requirements.

And Now?

What if you are in a situation that is the exact opposite of where you want to be right now? Like, you negotiate development budgets for multiple-month projects long before you write the first line of code. Requirements are defined in detail even before the budget negotiations. You have to get the estimates right, because your customers are always ready to switch to a cheaper supplier, so your margins are low. Where do you start?

The cop-out answer (a.k.a. "consultant answer") is "it depends". Of course it depends on your exact situation, but you have to break the cycle at some point. And you have to do so in a rather safe way, so you probably want to take a small step.

Ask yourself: What is the smallest step in the right direction you could take? Can you move budget negotiations closer to the project start (only days or weeks, not months, before you write the first line of code)? Can you replace one of the detailed requirements with a high-level goal? Can you agree with your customer to split a one-year project into two 6-month projects? Can you make a small part of some of the requirements "optional" (so you can move a little bit in the direction of "fixed budget")? What can you do to become a "high-value supplier" or "strategic partner" for your customer, so you don't have to compete on price?

Those are small steps that don't really solve your problems, but they are steps in the right direction. And if you use them to build up trust with your customers and users, they might allow you to take a bigger step next time you ask...

Now I'm interested in your situation... Where does your team or project stand right now with regards to estimates and project budgets? What are your problems? What are some first steps you could take? Where can I help you? Please tell me!

My name is David Tanzer and I have been working as an independent software consultant since 2006. I help my clients develop software right and develop the right software by providing training, coaching and consulting for teams and individuals.
