How I Got Interested In #NoEstimates

Never miss one of my articles: Readers of my newsletter get my articles before anybody else. Subscribe here!

Lately I have been writing a lot about Estimates and #NoEstimates. I am not particularly for or against estimates and estimating - I think they are a tool that can be used well or badly. I just got interested in #NoEstimates because of some things I have experienced in the last few years while consulting for clients.

Here are some things I have seen regarding estimates and estimation:

Estimates Were Very Inaccurate

Ok, that's the whole point of estimates. They are based on incomplete or inaccurate data, so they are inaccurate by design. But in some cases our estimates were very inaccurate.

For one team I compared the time a user story was in development ("Cycle Time" [*]) with the original estimate in story points. There was some correlation between the estimate and the cycle time, but the numbers were all over the place. I also found that a story point was cheapest for size "3" stories - smaller and bigger stories had a longer cycle time per story point. This means that the effective size of a single story point depended on the estimated size of the story - which defeats the purpose of story points.
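The analysis itself is simple arithmetic. Here is a minimal sketch of it in Python - the story data is invented purely for illustration, not the team's real numbers:

```python
from collections import defaultdict

# (estimate in story points, cycle time in working days) - invented data
stories = [
    (1, 3), (1, 2), (2, 5), (3, 4), (3, 5),
    (5, 12), (5, 14), (8, 25), (8, 28),
]

# Sum up days and points per estimated size
totals = defaultdict(lambda: [0, 0])  # size -> [total days, total points]
for points, days in stories:
    totals[points][0] += days
    totals[points][1] += points

# Cycle time per story point, by estimated size
days_per_point = {size: days / pts for size, (days, pts) in totals.items()}
for size in sorted(days_per_point):
    print(f"size {size}: {days_per_point[size]:.2f} days per point")
```

With the invented data above, size "3" stories come out cheapest per point - the same pattern I saw in the real data.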

I was on two different teams where the tasks were even estimated in real development time (days). The tasks were estimated before we knew who would work on them. This was even more inaccurate, because now the accuracy of an estimate depended on who would do the task. Some developers finished their tasks in 1/5th of the estimated time, others barely finished on time. And again, the numbers were all over the place.

There Was Still Wishful Thinking

We had data about our performance - our velocity. We had an idea of how much work was remaining. We had progress reports. And still, we had to postpone the release at the last moment.

I have experienced this not just once, but multiple times at different clients. Even though the data showed us that we probably would not make the release, we kept trying until it was almost too late.

Sometimes people just ignored the data ("Yes, our past performance shows we can't make it. But we will get faster because [excuse].").

Other times the data suggested we might make it - it was just very improbable. This is the more dangerous case. Let's say the release date is February 1st, and you tell your line manager that the projected date (based on your past performance) when you'll finish all the features for the release is between February 1st and February 23rd. In a low-trust environment, the line manager will almost always report "green" to his manager. Because we could make it.
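A projected date range like that usually comes from a probabilistic forecast. Here is a minimal sketch of one common approach - resampling historical throughput in a Monte Carlo simulation. All the numbers (weekly throughput, remaining stories, deadline) are invented for illustration:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Stories finished per week, from (invented) historical data
weekly_throughput = [4, 6, 5, 3, 7, 5, 4, 6]
remaining_stories = 30
weeks_until_deadline = 5  # e.g. until February 1st

def weeks_to_finish(remaining, samples):
    """Simulate one possible future by resampling past weekly throughput."""
    weeks = 0
    while remaining > 0:
        remaining -= random.choice(samples)
        weeks += 1
    return weeks

# Run many simulated futures and count how many finish in time
runs = [weeks_to_finish(remaining_stories, weekly_throughput)
        for _ in range(10_000)]
on_time = sum(1 for w in runs if w <= weeks_until_deadline) / len(runs)
print(f"Chance of finishing by the deadline: {on_time:.0%}")
```

With these numbers the simulation says finishing by the deadline is possible, but happens in only a small fraction of the simulated futures - exactly the situation where a low-trust environment still reports "green".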

Estimates Were Not Used Well For Decisions

Here are some questions (the list is not complete) where estimates could help you come up with an answer:

  • Are we on track for our next release?
  • When will we be done with those [n] features?
  • How much can we get done until [date]?
  • Do we have to postpone features or postpone a release?
  • Which of those features will provide the best return on investment (ROI)?
  • Could we come up with an alternative for this feature that has a lower cost or higher value and still allows users to reach their goals?
  • Based on ROI, how many features can we remove from the backlog to release sooner?

The first four questions are the boring questions. They assume that we know exactly what has to be done. They just allow us to keep track of what we are doing. Somebody who only asks those questions does not truly own their product, they only manage the backlog.

The last three questions are more interesting. They help us explore what to build. They allow us to find new and better ways to reach the users' goals. They allow us to maximize the return on investment - for us and for our customers.
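To make the last three questions concrete, here is a toy sketch of ranking backlog items by a rough value/cost ratio. All names and numbers are invented, and in practice both value and cost are much harder to pin down than this:

```python
# Invented backlog: estimated value and cost on some relative scale
features = [
    {"name": "export", "value": 8, "cost": 5},
    {"name": "search", "value": 9, "cost": 3},
    {"name": "themes", "value": 2, "cost": 4},
    {"name": "import", "value": 6, "cost": 6},
]

# Rank by a crude ROI proxy: value divided by cost, highest first
ranked = sorted(features, key=lambda f: f["value"] / f["cost"], reverse=True)
for f in ranked:
    print(f"{f['name']}: ROI {f['value'] / f['cost']:.2f}")
```

Even a crude ranking like this turns the conversation from "when will everything be done?" into "what can we drop from the bottom of this list to release sooner?"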

Now, please take a guess which question I've heard the most (by far). Ok, I'll tell you. It's the first. Now guess which three questions I almost never hear at my clients...

Estimating Took Time

In some teams, we provided estimates quickly and without much process. In other teams it took a lot of time. And it took even more time to track the estimates, compare them against actual completion times, and so on.

This might not be much, but do you know exactly how much time you spend producing and managing estimates? If you don't, how can you decide whether estimates provide a good return on the time invested?

Data Was Used Against The Team

We grew our team and our cost per story point went up. For a long time. Management told us to fix this.

We were late on tasks where we did not provide the estimate (the project manager did). We were blamed for being late in the next jour fixe.

Data suggested we would not make the next release date. Management suggested we work overtime.

...

You have probably experienced some of those yourself. (Have you? Please tell me!)

Conclusion

Some of the things I described above are just examples of bad management. We should fix those, I know. But let's assume for a moment that we can't easily fix them - or that fixing them will take longer than we are willing to wait.

In many situations, we didn't really get good results from our estimates. We did not get accurate predictions. And we did not use the estimates to answer interesting questions.

So I asked myself:

  • What can we do to get better results from our estimates?
  • Can we get the same results with less effort (i.e. with #NoEstimates)? [**]

And this is how I got interested in #NoEstimates. Don't get me wrong - I am not particularly for or against #NoEstimates either. But I think it is another tool - one that you can use well or badly.

What could you do right now to get more value out of your estimates? Do you have any questions? Please tell me!

[*] The time a task spends in development and testing is often called "Cycle Time". From a manufacturing point of view, this is probably slightly inaccurate. I still use "Cycle Time" here because everybody else does.
[**] I have collected some data that suggests this might be true - at least for one team. Predictions based on the number of user stories were just as accurate as predictions based on the team's estimates.

My name is David Tanzer and I have been working as an independent software consultant since 2006. I help my clients to develop software right and to develop the right software by providing training, coaching and consulting for teams and individuals.

Learn more...