We Need Estimates for Project Management

This is the second article in the mini-series "Why Do We Need Those Estimates".

One of the reasons teams and managers need estimates is to know how far they are in the project and when it will be finished. The reasoning is simple:

  • We know how far we are by dividing the total elapsed time by the total estimated time.
  • We know when we will be finished by adding up all the remaining estimates.
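
As a minimal sketch, the arithmetic behind those two answers looks like this (all names and numbers are invented, not from a real project):

```python
# A minimal sketch of the two calculations above. All numbers are
# invented for illustration, not from a real project.

remaining_estimates = [16, 8, 24, 4]  # hours still estimated for open tasks
total_estimated = 120                 # hours estimated for the whole project
total_elapsed = 72                    # hours already spent

progress = total_elapsed / total_estimated
hours_left = sum(remaining_estimates)

print(f"Progress: {progress:.0%}")        # Progress: 60%
print(f"Remaining work: {hours_left} h")  # Remaining work: 52 h
```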

Indirect, Abstract, Coarse-grained Estimates

Answering those questions can work pretty well with indirect, abstract, coarse-grained estimates. Like story points on rather large-grained stories. Or what Kanban prefers: Different classes of work with SLAs attached to them.

Those estimation techniques are often designed so that your estimation errors can cancel each other out. Story points, for instance, deliberately give you a very vague value. Every value is in fact a range of estimates: 5 story points really means "bigger than 3 but smaller than 8". So you might have one 5 that is larger than expected and one that is smaller, but both are still in the range.

With those kinds of estimates, you don't get concrete answers. You get an interval, like "we will likely finish all the high-priority tasks between June and July" or "we will finish 70-85% of the high-priority tasks before August" - given that the backlog does not change (which we cannot safely assume). It is harder to work with those kinds of answers, but they are more honest than a concrete date or percentage. And they still might be wrong.

Also, this only works well when you only count fully completed work items and when you do not calculate remaining estimates.

Direct, Concrete, Fine-grained Estimates

Answering those questions might also work when you use direct, concrete, fine-grained estimates, like hours required to complete a task. But it becomes harder to get meaningful data, and there are more pitfalls. Your data looks more precise, but that is often just an illusion. Also, I have not seen a team where I was confident that they really got this right.

It should work like this: You divide a larger work item (like a user story) into a series of tasks that all have to be done. Then you estimate how long it will take you to perform each task. As a result you also know how long it will take you to finish the larger work item: It is just the sum of all task estimates.

When people are working on the tasks, they have to update the remaining estimates, so we have a chance to correct our initial estimation errors. Say we estimated a task at 4 hours, but after two hours we find out that it will take much longer. We simply re-estimate all the remaining work and set a new remaining estimate of, say, 1 day and 6 hours.
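
A small sketch of this bookkeeping, with a hypothetical story and invented numbers (assuming an 8-hour working day):

```python
# Hypothetical bookkeeping for one user story, as described above.
# Task names and numbers are invented; a "day" is assumed to be 8 hours.

tasks = {
    "write migration": 4,  # initial estimates, in hours
    "adapt UI": 6,
    "update docs": 2,
}

story_estimate = sum(tasks.values())  # 12 hours for the whole story

# After two hours of work we learn that "write migration" is much bigger.
# We re-estimate the remaining work: 1 day and 6 hours = 14 hours.
tasks["write migration"] = 14

remaining = sum(tasks.values())  # 22 hours remaining
print(f"Initial estimate: {story_estimate} h, remaining now: {remaining} h")
```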

It turns out that we (humans - all humans) are very bad at those things. We are bad at correctly identifying all the tasks that need to be done. We are bad at defining the tasks independently from all the other tasks. We are bad at estimating them. And our cognitive biases (Hindsight Bias, Planning Fallacy, Optimism Bias and others) make it very hard for us to learn from our mistakes and to get better.

So you are essentially pulling numbers out of thin air, no matter how much effort you put into the estimation. But since those numbers are so precise (the project will require 4721 man-hours), people will trust them more than the indirect, abstract estimates from above. Which is not really justified. The direct, concrete estimates will often be less reliable than the indirect, abstract estimates, which are designed so that your estimation errors can cancel themselves out (at least in theory).

Also, mixing original estimates, remaining estimates and elapsed time does not make sense, as in "Done Percentage = Elapsed Time / (Elapsed Time + Remaining Estimate)". You cannot compare them: The elapsed time is a measured fact, the remaining estimate is an educated guess made with current data and knowledge, and the original estimate is an educated guess made with outdated data and knowledge.

Is it even worth the trouble?

When you have a truly emergent backlog, one that obeys the iceberg rule, you cannot really answer those two questions from above anyway, because you simply do not know for certain how much work lies ahead.

You might say: "Well, we always gather all the requirements at the beginning of our projects". But given the changing nature of requirements (27% of your requirements will have changed within the first year), you do not really gain any advantage. Since you simply cannot know which of your requirements will change, and by how much, you are essentially in the same situation as above. You have just hidden the fact that you don't know how much time lies ahead. But you have added some distracting details to your documents. And you have added some possible sources of errors and mistakes: You might forget to update a requirement in all documents. Somebody might find an old requirements document. You will have to re-work finished software when the requirements change.

Defects make everything worse (like always). When the quality of your software is low, your estimates are less meaningful. You just don't know how often your work will be interrupted by defects and how much time you'll spend fixing them.

And then there's the agile idea that you'll always work on the most important things, and simply stop when the software is good enough. How meaningful is "time remaining" when the customer can stop after any given sprint or release, telling you: "Now the software is good enough for us. Investing more would only give us diminishing returns."? There is no "time remaining" in this case. And so there is also no "80% complete".

Working Software

Detailed estimates are more important when you are not able to produce anything you can show to your customers and users early on. When you cannot show working software, you need other methods to tell your users where you are in your project. Like, "We are 80% done, according to our estimates". When you always have working software, deployed to a production-like environment, everybody knows how far you are in the project. Everybody knows what is still missing. They can try it out.

So, the better you become at actually producing software (and actually producing quality), the less you need to rely on direct, concrete, fine-grained estimates.

Conclusion

Time based estimates on (small) development tasks come with a lot of baggage. They take a lot of time and effort to produce. They become outdated very quickly. They create an illusion of precision. They create an illusion of accuracy.

When you are a developer, a manager or a customer of a development team, ask yourself: Is there a better way to get the data we need?

What this better way could look like depends on what you do and how your organization works. Estimating larger chunks (like user stories or epics) might work for you. Or using indirect, abstract estimates, like story points. Or creating different types of work and defining SLAs for them (like in Kanban). Maybe you need to re-organize your backlog, so that there are different detail levels and features can emerge. Or you could benefit from some #NoEstimates ideas.

There is no easy solution that will work for everybody. Maybe for your team, the solution is even to keep using detailed, task-level, time-based estimates. But I have seen several teams stop using them. And they were still able to answer the two questions from the beginning - When they needed to.

Why Do We Need Those Estimates?


I have worked with several clients who do detailed, task-level estimates (in hours). Or at least detailed story-level estimates (in Story Points or Person Days). And for most of them, their process seemed to work fine. From their point of view. But I always had the feeling that they didn't realize their full potential. That there was some waste in their process. Of course, I also told them. They hired me as a consultant, after all.

Their answer (most of the time): "But we need those estimates!"

This, of course, raises the question...

Why Do We Need Those Estimates?

There are probably more possible reasons. But those are all reasons I heard from real customers during real projects...

Each of those reasons seems reasonable. And I can somehow understand the teams and managers who come up with them. But every one of those reasons comes with some problems.

As I have already written elsewhere, I am not "for" or "against" estimates. They are a tool that you can use well or badly. But I have experienced that estimates always come with a cost. Especially detailed, concrete, fine-grained estimates (like estimating the time a task will take in hours). And you have to be very careful to make sure that the benefit you get from estimates is really worth the cost.

To Be Continued...

I will write about the problems that come with the reasons above in the next few blog posts, and link them here.

And I am interested in your experience with detailed task-level or story-level estimates.

  • How detailed do you do them? Task-level, story-level, hours, story points, ...
  • Why do you need them?
  • How well do they work for you? How do you feel when giving a detailed estimate?
  • Do you have any questions about detailed estimates?

Please tell me!

The Pillars of Scrum


I have seen some implementations of agile software development, and heard about several others, that faced major difficulties. Difficulties that shouldn't exist in an agile project - or so I thought. Then I re-read the Scrum Guide, and the "three pillars" particularly caught my attention.

Scrum employs an iterative, incremental approach to optimize predictability and control risk. Three pillars uphold every implementation of empirical process control: transparency, inspection, and adaptation.

-- http://www.scrumguides.org/scrum-guide.html

By looking at agile implementations - and stories about agile implementations - through the lens of those three pillars, I could understand the reasons for some of the difficulties. Today I want to show you how you can use the three pillars to understand your challenges. But first I want to quickly explain them.

Transparency

Everybody who participates in creating an outcome must be able to obtain all the information she needs. There must be a shared understanding about what goals we want to reach and how we know that we have reached them.

All stakeholders (users, customers, testers, managers, operations, ...) should be able to find out what is happening in development. We invite them to our meetings (planning, daily stand-ups, review) so they know what we are doing. We create some standards (definition of ready, definition of done, policies) so we know how to work together. We share statistics about our process (lead time, velocity, throughput, ...) so everybody can plan what to do next.

On the other hand, the development team has to be able to find out things about the business too. We need direct access to customers and users to get fast feedback. We need to be able to find out the business reasons for features so we can suggest better or cheaper alternatives. We often need to know details about schedules, contracts and the funding of the project. We need to be able to see the bigger picture - How our software fits into the grand scheme of things and where we are going.

Transparency is only possible when there is trust. Without trust, nobody would share all the data they have. And, without trust, nobody would let everybody else do their work - We all would want to solve all the problems that are exposed by the available data ourselves. For example, when velocity goes down, all the other stakeholders must trust the team that they can solve this themselves. Without trust, every other stakeholder would probably try to intervene.

Inspection

You need to frequently inspect what you are building, so you can react when something starts to go wrong. The Scrum Guide tells us we should "...inspect Scrum artifacts and progress toward a Sprint Goal...". You also need to constantly inspect whether your "best practices" are still working. Times change, and yesterday's "best practice" might be today's roadblock.

This means, basically, that we have to track the features we have promised (or forecast) to build during a sprint. We do this, for example, by tracking "done" user stories and tasks, where "done" means "implemented, tested and deployed to a production-like environment".

We also have to inspect the quality of the artifacts we produce. We know that we cannot trade quality for speed [1], so we have to ensure high quality to sustain our pace. We should use a combination of techniques to ensure high quality: Test-automation on different levels, test driven development, pair programming, code inspections, and so on. [2]

We also have to ensure that we are building what our users really need. This means that we have to constantly inspect if our understanding of their need is still accurate. And we have to inspect whether our software or product still allows them to reach their goals in an efficient and effective manner.

Then we have to inspect our process itself: Are we still working efficiently and effectively? Are we still delivering the most value for the smallest amount of money? What should we improve next?

Inspection requires trust, because when there is no trust, people would not allow us to see the whole truth. They would try to hide some facts and try to blame other stakeholders or departments for shortcomings.

Adaptation

Adaptation means that we change what does not work or what could work better. It means that we constantly run small experiments, keep what is working and change course when we fail. We use the results from our inspections to decide which experiment to run next.

Adaptation requires trust. We will only be allowed to run experiments when there is mutual trust. We will only be allowed to fail when there is mutual trust. And we will only admit our failures when there is mutual trust.

The Foundation

Trust is the foundation on which the three pillars are built. You need mutual trust, or you'll have a hard time when you try to establish transparency, inspection and adaptation.

How To Use The Pillars

You can use the pillars to assess your current situation. Maybe you see some difficulties you shouldn't have in an agile environment. Maybe you don't see any difficulties, but that does not mean that you don't have them: You might just have developed some blind spots [3].

When you assess your situation, first ask yourself: Is the foundation of the three pillars intact? Do we have mutual trust among all stakeholders? If you don't have the trust you need, your pillars stand on shaky ground and you should work on that first. But you can still use the pillars to further assess your situation.

Look at each pillar and ask yourself: How strong is this pillar in our company? How strong is it in our team? What problems might arise if this pillar is not strong enough?

You can also do this in a retrospective with your team - or even with all stakeholders of your current project. Then you ensure that there is a shared understanding of what you need to successfully remove your difficulties.

Conclusion

Often problems and difficulties with agile software development are caused by a lack of trust, transparency, inspection or adaptation. When you understand which of your problems are caused by those factors, you can start to change.

Take some time to analyze your current situation with regard to the three pillars (inspect) and then start a small experiment to change for the better (adapt). Tell everybody what you are about to change (be transparent) and trust them to trust you.

What problems are caused by those factors in your team? How can I help you to mitigate them? Please tell me!

[1] See, for example, "Code Complete, second edition" by Steve McConnell, page 474: Software with fewer defects is developed faster and cheaper.

[2] I am currently writing a series of posts about simple design, which also touches those topics. Read more here: Simple Design

[3] Find out more about blind spots and what you can do about them in my free online course Improve your Agile Practices.

How I Got Interested In #NoEstimates


Lately I have been writing a lot about estimates and #NoEstimates. I am not particularly for or against estimates and estimating - I think they are a tool that can be used well or badly. I just got interested in #NoEstimates because of some things I have experienced in the last few years while consulting clients.

Here are some things I have seen regarding estimates and estimation:

Estimates Were Very Inaccurate

Ok, that's the whole point of estimates. They are based on incomplete or inaccurate data, so they are inaccurate by design. But in some cases our estimates were very inaccurate.

For one team I compared the time a user story was in development ("Cycle Time" [*]) with the original estimate in story points. There was some correlation between the estimate and the cycle time, but the numbers were all over the place. Also, I found out that a story point was cheapest for size "3" stories - smaller and bigger stories had a longer cycle time per story point. This means that the size of a single story point depended on the estimated size of the story - which defeats the purpose of story points.
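
A sketch of that kind of analysis (the data here is invented; the real analysis used the team's tracking data):

```python
from collections import defaultdict

# Invented example data: (story point estimate, cycle time in days) per story.
stories = [
    (1, 2), (1, 5), (3, 3), (3, 4), (3, 5),
    (5, 9), (5, 16), (8, 20), (8, 30),
]

days_per_point = defaultdict(list)
for points, cycle_days in stories:
    days_per_point[points].append(cycle_days / points)

# If story points worked as advertised, "days per story point" would be
# roughly constant across all sizes. In this invented data (as in the
# real data) it is cheapest for the 3s.
for points in sorted(days_per_point):
    ratios = days_per_point[points]
    avg = sum(ratios) / len(ratios)
    print(f"{points}-point stories: {avg:.1f} days per story point")
```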

I was on two different teams where the tasks were even estimated in real development time (days). The tasks were estimated before we knew who would work on them. This was even more inaccurate, because now the accuracy of the estimate depended on who would do the tasks. Some developers finished their tasks in 1/5th of the estimated time, others would barely finish within the estimated time. And again, the numbers were all over the place.

There Was Still Wishful Thinking

We had data about our performance - Our velocity. We had an idea about how much work was remaining. We had progress reports. And still, we had to postpone the release at the last moment.

I have experienced this not just once, but multiple times at different clients. Even though the data showed us that we probably would not make the release, we kept trying until it was almost too late.

Sometimes people just ignored the data ("Yes, our past performance shows we can't make it. But we will get faster because [excuse].").

Other times the data suggested we might make it, it was just very improbable. This is the more dangerous case. Let's say the release date is February 1st. Let's say you tell your line manager that the projected date (based on your past performance) when you'll finish all the features for the release is between February 1st and February 23rd. In a low-trust environment, the line manager will almost always report "green" to his manager. Because we could make it.

Estimates Were Not Used Well For Decisions

Here are some questions (the list is not complete), where estimates could help you to come up with an answer:

  • Are we on track for our next release?
  • When will we be done with those [n] features?
  • How much can we get done until [date]?
  • Do we have to postpone features or postpone a release?
  • Which of those features will provide the best return on investment (ROI)?
  • Could we come up with an alternative for this feature that has a lower cost or higher value and still allows the user to reach his goals?
  • Based on ROI, how many features can we delete from the backlog to release sooner?

The first four questions are the boring questions. They assume that we know exactly what has to be done. They just allow us to keep track of what we are doing. Somebody who only asks those questions does not truly own their product, they only manage the backlog.

The last three questions are more interesting. They help us to explore what to build. They allow us to find new and better ways to reach the user's goals. They allow us to maximize the return on investment - For us and for our customers.

Now, please take a guess which question I've heard the most (by far). Ok, I'll tell you. It's the first. Now guess which three questions I almost never hear at my clients...

Estimating Took Time

In some teams, we provided estimates quickly and without much process. In other teams it took a lot of time. And it takes more time to track the estimates, compare them against real completion time, and so on.

This might not be much, but do you know exactly how much time you spend producing and managing estimates? If you don't know, how can you decide if estimates provide a good return on time invested for you?

Data Was Used Against The Team

We grew our team and our cost per story point went up. For a long time. Management told us to fix this.

We were late on tasks where we did not provide the estimate (the project manager did). We were blamed for being late in the next jour fixe.

Data suggested we would not make the next release date. Management suggested we work overtime.

...

You have probably experienced some of those yourself. (Have you? Please tell me!)

Conclusion

Some of the things I described above are just examples of bad management. We should fix those, I know. But let's assume for a moment that we can't easily fix them - Or that fixing them will take longer than we are willing to wait.

In many situations, we didn't really get good results from our estimates. We did not get accurate predictions. And we did not use the estimates to answer interesting questions.

So I asked myself:

  • What can we do to get better results from our estimates?
  • Can we get the same results with less effort (i.e. with #NoEstimates)? [**]

And this is how I got interested in #NoEstimates. Don't get me wrong - I am not particularly for or against #NoEstimates either. But I think it is another tool - one that you can use well or badly.

What could you do right now to get more value out of your estimates? Do you have any questions? Please tell me!

[*] The time a task spends in development and testing is often called "Cycle Time". From a manufacturing point of view, this is probably slightly inaccurate. I still use "Cycle Time" here because everybody else does.
[**] I have collected some data that suggests this might be true - At least for one team. Predictions based on the number of user stories were just as accurate as predictions based on the team's estimates.

What is An Estimate, Anyway?


I am still thinking about estimates and #NoEstimates, and I am still writing my findings down here. In the last article, "A Spectrum Of Effort Estimates", I wrote about different ways of estimating software development effort. Today, I want to explore what an estimate really is.

Definitions

In discussions about estimation, planning and #NoEstimates, I have experienced that people use quite different definitions for what an estimate is. So the discussions often lead nowhere because the participants are talking about completely different things and misunderstanding each other.

Here are three different definitions I have found. You will probably encounter more.

Estimates, in the context of #NoEstimates, are all estimates that can be (on purpose, or by accident) turned into a commitment regarding project work that is not being worked on at the moment when the estimate is made.

-- Vasco Duarte, "What is an Estimate?"

I think this definition is not very helpful as a definition of an estimate. First, it references itself ("Estimates are all estimates that..."). Second, everything you say can be turned into a commitment by a sufficiently malicious or careless person. On the other hand, this definition is helpful in understanding why estimates are often problematic: We don't want our estimates to be turned into commitments. Especially not by somebody else. This definition also hints that there are some estimates we want to avoid in #NoEstimates, and others that we are less concerned about.

An estimate is an approximate calculation or judgement of the value, number, quantity, or extent of something.

-- Giovanni Asproni, "Learn to Estimate"

This definition is helpful in a way, because it highlights that an estimate is an approximate calculation. On the other hand, it does not tell us what exactly distinguishes an estimate from other approximations (rounding, ...).

An Estimate [is an] approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable. The value is nonetheless usable because it is derived from the best information available.

-- Wikipedia: "Estimation"

While quite similar to the definition above, this definition adds some interesting aspects: The approximate value was derived from incomplete or uncertain data. Still it is usable, because we used the best information available to create it.

Effort Estimates

When we talk about "estimates" in software development, we often mean estimates of the development effort. That is, estimates that try to predict how much work a task, user story, use case, feature or product might be. Or how long it might take.

We use those estimates to predict when something will be done, how much functionality we can implement in a given time frame, how many developers we need for a project, and so on. We also use them to decide what to work on next, whether we should even start a project, and other things.

Estimates everywhere

In software development, you can find estimates everywhere. Some of them are explicit, like the story point value on a user story. Others are implicit, like "Tasks take just about one day, because when we split a story into tasks, we try to make them short". Some of the estimates will estimate development effort, some will estimate other things (like the performance impact of a feature).

What do you estimate explicitly in your team? What implicit estimates do you have, and which ones would you rather make explicit? Did you ever experience any problems with your estimates? Please tell me!

Oh, and if you have any questions, just ask ;)

A Spectrum Of Effort Estimates


In the last post, I wrote about how you can get started with #NoEstimates when you use story points right now. Today I want to write about different kinds of estimates. [You might be also interested in: What is An Estimate, Anyway?]

There are different techniques to come up with an effort estimate, and the estimates themselves are different. We can classify these techniques and estimates along a spectrum with different axes:

  • Direct vs. Indirect: Does the estimate express the effort directly (time, cost, ...) or does it express something different from which you can derive the effort (size, ...)?
  • Concrete vs. Abstract: Somewhat related to the point above, is the effort some concrete measure or something more abstract?
  • Fine-grained vs. Coarse-grained: What is the size of the things you estimate (hours, days, months, ...)?
  • Simple vs. Elaborate: How hard is it to come up with an estimate? How hard is it to understand?

Today I want to write about some techniques to come up with effort estimates. I will also try to classify them within the above spectrum.

Estimating Time Directly

Here you try to estimate the time it takes to implement a feature or to fix a bug. The unit of the estimate is "person days" (or something similar). Sometimes the project manager or lead developer creates the estimate based on her experience, sometimes the team creates the estimate together.

Within the Spectrum: Direct - Concrete - Coarse-grained - Elaborate

This approach has many problems. The biggest two are: It is very hard (or maybe even impossible) to create accurate time estimates - we humans are just very bad at that. And all work tends to fill the time available. This means that if you over-estimate the time a task will take, the task will probably take that longer time.

I was in two projects that estimated time directly in my career as a consultant. In one it did not work at all. A week before the release, everything was 99% complete. On the day of the release, we found out that we had to postpone the release for three months.

In the other one it seemed to work very well: We completed everything in time and within budget for almost two years in a row, even though we did not create the estimates - The chief architect did. It was totally predictable for the customer. So how did we do that? Of course, the estimates were heavily padded. We would often sit around for days and... compile... because new work only arrived after the old work was supposed to be finished. Worse than that, it was next to impossible for our customers to change anything that was already planned. Which is not good.

Estimating Tasks in Hours

A variation of the above, but finer grained. You don't estimate big features, but small engineering tasks. The unit of the estimate is "hours". The people doing the work create the estimates. You can then e.g. create a sprint burn down chart based on the remaining hours.

Within the spectrum: Direct - Concrete - Fine-grained - Simple

When I first learned about Scrum, I learned that you should estimate tasks in the sprint planning in this way. Somebody else once told me that you should estimate tasks in quarter days. Because two hours is totally more reasonable as a unit than one hour... Really?

The problems are basically the same as above, but there is another one: The time spent estimating is probably wasted. Why would you want an estimate for something that takes only a few hours anyway? I think - or I hope - nobody does that anymore.

Just Count The Tasks

Instead of estimating the time it takes to complete a task, you just count the number of tasks. A task should take less than a day anyway, so you can safely assume that most of the tasks are roughly the same size. You can then e.g. create a sprint burn down chart based on the remaining tasks.

Within the spectrum: Indirect - Abstract - Fine-grained - Simple

This solves some of the problems mentioned above, but you still have to identify all the tasks early to get meaningful statistics. Within a time-boxed planning meeting, you might not be able to do this. And it's maybe even more agile to not even try to identify all the tasks upfront - This leaves options open.

As far as I know, most teams have even stopped counting tasks and create their burn down charts based on the story points remaining or the number of stories remaining.
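
A sketch of such a burn-down, based only on counting open items (all numbers invented):

```python
# Invented burn-down data: number of not-yet-"done" tasks per sprint day.
remaining_per_day = [24, 24, 21, 19, 16, 16, 12, 9, 5, 2]
sprint_days = len(remaining_per_day)
initial = remaining_per_day[0]

for day, remaining in enumerate(remaining_per_day, start=1):
    ideal = initial * (sprint_days - day) / sprint_days  # ideal straight line
    bar = "#" * remaining
    print(f"Day {day:2}: {bar:<24} ({remaining} open, ideal {ideal:.0f})")
```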

Estimating Ideal Time + Load Factor

Some people (in the early days of Extreme Programming) tried to estimate features in "ideal engineering time". The unit of the estimate is "person days" or "pair days". The team or a part of the team creates the estimates.

Within the spectrum: Direct - Concrete - Coarse-grained - Elaborate

The total time required would then be much longer, because of interruptions, mistakes, and so on. You can calculate a load factor that tells you how much longer it will take (on average) to complete something. A load factor of "3" means that you need 3 real days for each ideal engineering day.
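
The arithmetic behind the load factor is simple; here is a sketch with invented numbers:

```python
# Invented numbers: deriving a load factor from finished work and
# applying it to a new feature.
ideal_days_completed = 30  # "ideal engineering time" of all finished work
real_days_elapsed = 90     # working days it actually took

load_factor = real_days_elapsed / ideal_days_completed  # 3.0

new_feature_ideal_days = 5
forecast = new_feature_ideal_days * load_factor
print(f"Load factor {load_factor:.1f}: expect about {forecast:.0f} real days")
```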

Fog Creek Software used a technique like this, Evidence Based Scheduling, some time ago. I don't know if they still use it, but as far as I know this technique has mostly gone out of fashion.

Estimating "Size" With Story Points

You try to estimate the size of the work and then derive the expected effort from the estimated size. The unit of the estimate is "story points". The team comes up with the estimate, often with "Planning Poker".

Within the spectrum: Indirect - Abstract - Coarse-grained - Simple

With all the problems of the direct, concrete estimation techniques, some people decided that we needed something more abstract and indirect. So they came up with story points.

The basic idea is that it might be hard to agree on how long something will take, but you can still agree on the size of the something. Suppose you want me to build a brick wall that is 10 meters long and 2 meters high. I have no idea how long it will take me to complete it, but I can agree that the wall is 10 meters long and 2 meters high. Once I have built some brick walls, I might tell you an average duration per square meter. Or I can tell you how many square meters of brick wall I can build in two weeks - my "velocity" in terms of story points.
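
In code, deriving the effort from the estimated size might look like this (velocity and backlog numbers are invented):

```python
import math

# Invented numbers: a forecast derived from story points and velocity.
completed_points_per_sprint = [11, 9, 14, 10, 12]  # past sprints
velocity = sum(completed_points_per_sprint) / len(completed_points_per_sprint)

remaining_backlog_points = 115
sprints_needed = math.ceil(remaining_backlog_points / velocity)
print(f"Velocity: {velocity:.1f} points/sprint, "
      f"about {sprints_needed} sprints remaining")
```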

The problem with this technique is that your estimates will probably still be very inaccurate. One five point story might take a day to complete, another one two weeks. Some people will tell you that the inaccuracies will cancel out, but when you don't know why they do, your estimates are just numbers out of thin air.

Most teams I consulted so far used this technique.

Function Points also seem to estimate the size of the requirements, but they are more complicated and out of the scope of this article.

Just Count The Stories

So, even story point estimates are still quite inaccurate. And the estimation takes some time, which is possible waste (i.e. not used to produce value for the customer). Maybe we can still come up with meaningful data with something even more abstract and simpler? Well, just count the user stories.

Within the spectrum: Indirect - Abstract - Coarse-grained - Very simple

As with "Just Count The Tasks", you want most of the stories to be roughly the same size. The idea is that if the stories are simple enough and well understood enough so that development could start, the will be roughly the same size.

Your planning horizon will be shorter with this technique, because you only want to come up with detailed stories shortly before you need them. But this is often no problem, because it creates options for the longer planning horizon. Or you could only use this technique for the longer planning horizon, where you only count bigger features, not individual user stories.

Conclusion

All of the techniques can provide value under certain circumstances. You should not discard one of them because it is "too abstract" or "too simple" or "too complicated".

But you can ask yourself: "Could I get all the information I need with a simpler technique?" And from there, you might find ways to improve.

Do you have any questions? What would you need to get started with something simpler than you have now? Please tell me!

From Story Points to #NoEstimates

How can you get started with #NoEstimates when you use Story Points to estimate right now? Easy:

Just replace all story point values with "1". Then use all the analytics you used before to make predictions and do planning.

Ok, that's a grossly oversimplified view on #NoEstimates, but it's the gist of it. You can start now.

Instead of using "1" as your new, only story point value, you could als use the average story point value of your last [n] user stories. Then the statistics will be somewhat comparable. But this is not necessarily a good thing!

But we need estimates!

But what do you need them for?

  • To calculate your throughput
  • To know how much can be done by a certain date
  • To get a feeling about how big a project is
  • To convince somebody else you are not lying about expected effort
  • ...

Could you get all this with a simpler method?

Under certain circumstances you can achieve all of this by just counting stories (instead of estimating them).
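
A sketch of what "just counting" looks like (throughput and backlog numbers are invented):

```python
# A minimal sketch of a forecast where every story counts as "1".
# All numbers are invented.
completed_stories_per_sprint = [4, 6, 5, 5, 4]  # past throughput
remaining_stories = 23

throughput = sum(completed_stories_per_sprint) / len(completed_stories_per_sprint)
sprints_needed = remaining_stories / throughput
print(f"Throughput: {throughput:.1f} stories/sprint, "
      f"about {sprints_needed:.1f} sprints to go")
```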

Often you will not even lose any precision in your predictions - Or even gain precision! I have experienced this when I analyzed a lot of past data at a customer I was consulting some time ago. Their average story estimates were very stable over time. By just counting stories, they would not have lost any precision, so their average estimation error must have been very stable too.

But there's estimates everywhere!

I may be missing something, but everything I’ve seen about #NoEstimates uses estimates somewhere. Sometimes just in a different way.

-- Giovanni Asproni

The fact that you have replaced all estimates with "1" does not mean that you don't have estimates anymore. The "ones" are still estimates. The predictions you make based on your throughput are estimates: Estimates of what can be done by which date. And there are others.

So #NoEstimates does not remove estimates. It just makes them much simpler. And it makes it harder for outsiders to turn them into a commitment.

What about different planning horizons?

Some teams use themes and/or epics to describe larger pieces of functionality. They will split them on demand into smaller stories. How do you count those huge chunks when everything is a "one"?

You could:

  • Not count them. Then your planning horizon is probably quite short, because you don't want to slice stories too far ahead in time.
  • Count them as "one" in a different statistic. That makes statistics a little bit more complicated.
  • Just count them as "one" in the same statistic as the small stories. That was what Vasco Duarte suggested to me.
  • Don't count the small stories at all. After all, the big features are what matters!
  • ...

You can have both!

The cool thing with #NoEstimates is that you can just try it without causing any damage. It is completely safe to fail. Just do both: Story point estimates and #NoEstimates and compare their advantages and disadvantages.

Here is how you can do this:

Step 1: Analyze your past data again. Count the user stories again as if they all were ones, create your statistics again. (You do have past data and statistics, right? No? Let's talk.) Then compare how well #NoEstimates would have predicted your throughput and what you delivered when.

Step 2: Run in parallel. Now, for some time, you can run story point estimates and #NoEstimates in parallel. The #NoEstimates statistics are really easy to generate, so that will not take much time. Experiment with saying things like "We need to do 12 Stories in the next 3 Sprints, on average we complete 5 per Sprint so we should be fine." instead of "We need to complete 38 Story Points in the next 3 Sprints, on average we complete 11 per Sprint so we should be fine."

Step 3 (absolutely optional): Stop doing story point estimates. Only do so if it makes sense for you.
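
Step 1 could look something like this sketch: replay your history and forecast each sprint from the sprints before it, once with story points and once just counting stories (all data invented):

```python
# Sketch of Step 1: forecast each sprint's output from the sprints before
# it - once in story points, once counting every story as "1".
# All data is invented.
sprints = [
    {"stories": 5, "points": 12},
    {"stories": 4, "points": 11},
    {"stories": 6, "points": 12},
    {"stories": 5, "points": 9},
    {"stories": 5, "points": 13},
]

for i in range(2, len(sprints)):
    history = sprints[:i]
    predicted_stories = sum(s["stories"] for s in history) / i
    predicted_points = sum(s["points"] for s in history) / i
    actual = sprints[i]
    print(f"Sprint {i + 1}: counting predicted {predicted_stories:.1f} "
          f"(actual {actual['stories']}), points predicted "
          f"{predicted_points:.1f} (actual {actual['points']})")
```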

Just try it

So, just try it for your team. It costs almost nothing. As a ScrumMaster or team member, you can probably do Step 1 in a single lazy afternoon. Then talk to your team.

Do you need help with getting started? Or do you have any questions? Just ping me!

Agile Manifesto and Agile Doctrine, Part 1

In this series of posts I will describe how the agile doctrine [1] relates to the agile manifesto [2]... For example, "Individuals and interactions over processes and tools": The frequent interaction of developers, testers, product owners and customers will reduce the distance between problems and problem solvers.

"Agile Doctrine", as defined by Jason Yip, is:

  • Reduce the distance between problems and problem-solvers
  • Validate every step
  • Take smaller steps
  • Improve as you go

In this post, I will cover the first point:

Reduce the distance between problems and problem-solvers

Reducing the distance does not necessarily mean physical distance, even though it might. But what we really need to reduce here is intermediaries. Can the development team communicate directly with users? If not, how many steps are in between? Is there somebody who translates the users' requirements, so developers can understand them? Are there any business rules that are hard to grasp for developers? And so on.

Relation to the agile values

"Individuals and interactions over processes and tools" Reducing the distance between problems and problem solvers means that individuals (developers, users who have a problem) have to interact with each other. On the other hand, processes and tools (written documents, intermediaries, ...) will increase the distance between problems and problem solvers.

"Working software over comprehensive documentation" Reducing the distance between problems and problem solvers will be easier if the measure of progress is working software: Working software makes it easier to discuss whether the problem was really solved. It also makes it easier to discuss the next problem that needs to be solved.

"Customer collaboration over contract negotiation" Here we have the most direct relationship: Contracts would increase the distance, collaboration reduces it.

"Responding to change over following a plan" I don't see a real relationship here. We can respond to change even if there is some distance. OTOH, reducing the distance probably means we can not strictly follow a plan.

In closing

The first part of the "Agile Doctrine", "Reduce the distance between problems and problem-solvers", relates well to the four values from the agile manifesto. In the next post in this series, we will look at the second part, "Validate every step", and how it relates to those four values.

[1] What is agile doctrine?
[2] Agile Manifesto
