Tag Archives: Agile
Late last year I set up a kanban system for a Yahoo! India team. They had lots of little features and bug fixes to work on, but they didn’t know how to organize it all to get it done. Scrum wasn’t working well because the nature of the work was too dynamic.
Armed with blue painter’s tape (imported) and Post-its (also imported), I worked with the lead developer & producer to set it up. I briefly discussed the principles that make the system work: reduce WIP to increase throughput, and use the Post-its as a signal to begin work.
Time for team indoctrination. The lead engineer explained the system to the folks on the team.
- The producer/product manager will queue up ‘things to do’, limited to 5
- The developers will take the top items from the top of the queue
- When the developers are done with an item, it is placed in the ‘dev done’ slot
- Then the testers will pick up items from the ‘dev done’ slot and test them
- When the ‘thing to do’ is done to the tester’s and producer’s/product manager’s satisfaction, it’s ready to release.
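The flow above can be sketched as a tiny simulation. This is a hedged illustration: the queue limit of 5 and the column flow come from the post; the class, column names and work items are all invented.

```python
# Minimal sketch of the kanban flow described above (illustrative only).
# The 'to do' queue is capped at 5 items, per the WIP limit in the post.

class KanbanBoard:
    def __init__(self, todo_limit=5):
        self.todo_limit = todo_limit
        self.columns = {
            "to do": [], "in dev": [], "dev done": [], "ready to release": [],
        }

    def queue(self, item):
        """Producer/product manager queues work, respecting the WIP limit."""
        if len(self.columns["to do"]) >= self.todo_limit:
            raise RuntimeError("WIP limit reached -- finish something first")
        self.columns["to do"].append(item)

    def pull(self, src, dst):
        """Pull the top item from one column into the next
        (the Post-it acting as the 'signal to begin work')."""
        item = self.columns[src].pop(0)
        self.columns[dst].append(item)
        return item

board = KanbanBoard()
board.queue("fix login bug")
board.queue("tune page load")
board.pull("to do", "in dev")                # a developer takes the top item
board.pull("in dev", "dev done")             # development finished
board.pull("dev done", "ready to release")   # tester and producer sign off
print(board.columns["ready to release"])     # -> ['fix login bug']
```

The point of the WIP limit is in the `queue` method: once the ‘to do’ column is full, nothing new can enter until something downstream finishes.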
I sat back and just watched it unfold. Every few weeks I would go by and I’d stop in to see how things were going. The system was just like a machine; it was systematically pushing features & bug fixes through the team in a very transparent way. The tech lead moved to another project and the kanban system kept working. A new product manager came in and the kanban system kept working.
Ron Popeil sells a Rotisserie & BBQ Oven with the tag line, ‘Just set it and forget it!’ My Aunt ordered the machine. The first thing that you see when you open the box is (I’m paraphrasing from memory) “WARNING! While the slogan may be ‘Just set it and forget it!’ it doesn’t mean you can leave the machine unattended at any time. As with any kitchen appliance involving high temperatures, you must take caution.”
This team did not literally ‘set it and forget it’. But it was a system that worked very well for them with few modifications. They were largely in maintenance mode, tasked with fixing bugs, making performance improvements, fixing production issues and making incremental improvements.
In software, one thing is certain — estimates never match reality. Teams build predictable schedules by creating buffers. There are two strategies for doing this: 1) forecasting how much buffer the team needs, or 2) computing the buffer based on past performance.
Velocity is a way to compute how much real time it takes to complete an amount of estimated work.
So if I say something will take me 4 hours to build and it takes me two days to complete, my velocity would be 2h/day. That’s useful for future planning because the team knows my capacity (2hr x 5days = 10 estimated hours / week).
A velocity measurement doesn’t say how hard I worked or how much time I spent on the task. It’s merely a calibration tool for effective planning.
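The arithmetic above can be written out as a quick sketch, using the numbers from the example (the 4-hour estimate that took two days):

```python
# Velocity = estimated effort completed per unit of real time.
estimated_hours = 4    # "it will take me 4 hours to build"
actual_days = 2        # it actually took two days to complete

velocity = estimated_hours / actual_days   # 2.0 estimated hours per day

# Weekly capacity for planning: velocity x working days per week.
weekly_capacity = velocity * 5             # 10.0 estimated hours per week
print(velocity, weekly_capacity)           # -> 2.0 10.0
```

Note the units: velocity is *estimated* hours per *actual* day, which is exactly why it calibrates future plans without saying anything about effort.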
My 4 hour estimate (that took two days) from the above example might have seemed awfully optimistic. But that’s not how to see it. I could have run into an unforeseen complexity, faced an unusually large meeting load, or been bogged down with operational issues. Or it might be true, I might just be an optimistic estimator, but with velocity that’s okay. It all averages out.
Teams will also come together to estimate entire features this way. They might estimate how long the feature will take in days. But again, estimated time never equals actual calendar time. So if the team estimates a feature will take 1 day, and it ends up taking 2 days to complete, their velocity would be 2.5 estimated days / week.
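The team-level figure works the same way, again using the numbers from the paragraph above; the backlog forecast at the end is my own extrapolation, not from the post:

```python
# An estimated 1-day feature took 2 actual days, so each estimated day
# costs 2 actual days; in a 5-day week the team completes 2.5 estimated days.
estimated_days = 1
actual_days = 2
team_velocity = (estimated_days / actual_days) * 5   # 2.5 estimated days / week

# Forecast (my extrapolation): a hypothetical backlog of 10 estimated days
# would take 10 / 2.5 = 4 calendar weeks at this velocity.
backlog_estimated_days = 10
weeks_to_complete = backlog_estimated_days / team_velocity
print(team_velocity, weeks_to_complete)
```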
Here’s where it gets tricky and controversial…
Getting better at estimating with velocity means getting more precise rather than more accurate.
Teams who are good at estimating with velocity will normalize on an inaccurate, but precise value, rather than try to get more accurate. The consequence is that each team (or person) will have their own, unique velocity. Some teams will estimate conservatively and others will estimate optimistically. It is meaningless to compare from team to team or location to location. It just doesn’t make sense. In fact, the moment you start judging teams on ‘improving’ their velocity, their estimates just become more conservative. (Thereby increasing their velocity.)
Some teams have a difficult time using velocity. This is because when a team settles down on a velocity, they question themselves (or get questioned) if it’s not 8 hours of estimated work a day. “How come you’re only planning for 5 hours of work a day! What’s wrong?” (One of the most productive teams I’ve worked with averaged 2hr estimated / person / day!)
Use velocity, but keep in mind that a team’s velocity can’t be compared with other teams. So keep the velocity numbers within the team. If you must report your estimates externally, either take the time to explain velocity or normalize your estimates into real time. Better yet, translate your estimates into dollar (or rupee) values (talk with your finance person to work out some numbers).
A Minimal Marketable Feature (MMF) is a feature that is minimal, because if it was any smaller, it would not be marketable. A MMF is marketable, because when it is released as part of a product, people would use (or buy) the feature.
As a counter-example to the MMF approach: While working on an XP team, our team decomposed features into super-small stories. That way the customer (product manager) could pick-and choose from the sub-features to create the big feature. The team would present a list of each sub-feature like a grocery bill — each item has a cost. For example, the customer might decide that pagination (presenting a list of information on multiple pages) just isn’t worth it, because “hey, we only have 25 rows of data right now!”
An MMF is different from a typical User Story in Scrum or Extreme Programming. Where multiple User Stories might be coalesced to form a single marketable feature, an MMF is a little bit bigger. Often, there is a release after each MMF is complete.
An MMF doesn’t decompose into smaller sub-features, yet it is big enough to launch on its own.
An MMF can be represented as a User Story — a short, one-sentence description.
The format of a user story is:
As a [some user],
I want [to do something],
so that [I can achieve some goal]
But in contrast to how a User Story is typically used, the team would not break down the User Story into smaller User Stories when using MMFs. Think of it this way: *Gather up all the stories that share the same so that clause — that’s your MMF*.
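The "gather by *so that* clause" rule can be sketched mechanically. All the story text and clauses below are invented for illustration:

```python
from collections import defaultdict

# Each story is (as_a, i_want, so_that) -- the 'so that' clause is the goal.
# Stories and goals here are hypothetical examples.
stories = [
    ("shopper", "search by keyword",       "I can find products quickly"),
    ("shopper", "filter by price",         "I can find products quickly"),
    ("shopper", "save items to a wishlist", "I can buy them later"),
]

# Gather up all the stories that share the same 'so that' clause --
# each group is one MMF.
mmfs = defaultdict(list)
for as_a, i_want, so_that in stories:
    mmfs[so_that].append(i_want)

for goal, features in mmfs.items():
    print(f"MMF '{goal}': {features}")
```

Here the two search-related stories roll up into one MMF, while the wishlist story stands alone as its own releasable feature.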
A team I’m working with has switched from Scrum to Kanban to manage their development efforts. As a result, the team doesn’t have regularly scheduled planning meetings to create a task-driven plan for the upcoming iteration time box.
So does Kanban development have no planning meetings? No! The team self-organizes meetings around a single feature rather than a specific period of time.
Arlo Belshee giving an overview of Naked Planning at Agile 2007
This video was taken at Agile Conference 2007 in Washington DC. I believe that Arlo was one of the first to lay out the inspiration for Kanban systems for software development. Later, Aaron Sanders, Karl Scotland and I (Joe Arnold) paired these concepts with ideas from David Anderson to create Kanban systems for teams at Yahoo!. Jeff Patton later wrote an article distilling down the practices: Kanban Over Simplified
I had the opportunity to be a guest lecturer at the National Institute of Design in Bangalore, India today. I taught a session titled ‘Idea to Implementation’.
My goal was to have the students conceive, build a prototype and test a product – in essence go through the entire product ideation lifecycle.
Here was the challenge: Invent and Build an Alarm Clock
- Create a Scenario
- Build Prototype
- Test Prototype
This was an experiment for me as well. I wanted to see if we could achieve all these things in such a short period of time. How fast can you get to a nonfunctional prototype?
I introduced the rules of brainstorming and the class split into teams to get to work.
I haven’t been satisfied with the post-brainstorming activities like multi-voting, so I asked them to do more explicit sorting with a technique called ‘funneling’.
When you pour a bucket of water into a funnel, what happens? You get a single stream of water. Funneling as applied to brainstorming is used to clarify ideas at various levels of abstraction. The levels we used were ‘users’, ‘needs’ and ‘features’. At the end of the brainstorming session each team had a prioritized list of users, their needs and the features that would serve those needs.
Who are our users:
One group came up with a lot of non-auditory methods of waking someone up. They were going back and forth between building an alarm clock for a deaf person or an elderly person who was hard of hearing. They made a decision to go for the elderly. When they started thinking about their needs in this context, it led to ‘remembering’ over ‘waking up’. As a result, the features were focused around creating a mechanism for reminders throughout the day. (If you’re retired, you don’t need an alarm to wake up and go to work!)
Create a Scenario:
I asked each team to tell a story about their alarm clock in action, because a list of features isn’t sufficient to understand how this product will be used. I didn’t really care how they did this (storyboard, written narrative, acted-out skit, video, etc). Each team decided to ‘act-out’ their alarm clock in the form of a skit and the results were hilarious. Afterwards the product ideas were opened up for critique. For example, one group used water as a startling mechanism and classmates challenged the practicality of that idea.
Prototyping & Testing:
Each team built a low-fidelity prototype of their product using ‘found’ materials in the design lab. Using objects like water bottles, tape, string and paper, each team constructed their prototype. To test their newly-invented product, members of another team poked and prodded the prototype to see if they could make sense of it.
The process for testing was simple. First the team came up with a list of tasks they would ask of the user, then each member of the team picked a role to play during the usability testing. One person would be the ‘guide’, setting the stage, asking the user to perform tasks with the device, and asking questions to understand what the user was thinking. Another person played the role of note taker. A third person manipulated the non-functioning prototype to make it come alive.
From an Agile / UX / UCD perspective, I was impressed that each team was able to go from idea to prototype in such a short period of time. It makes me wonder if we shouldn’t be creating more prototypes. Teams could utilize lower-cost methods of documentation: skits vs storyboards, paper prototypes vs ‘clickable’ prototypes. Rather than doing a high-quality job with fewer ideas, what if a cross-functional team could churn out many low-quality prototype concepts? Would that shallow effort yield more knowledge than a deep-focus in one area?
Special thanks to Mamata Rao, a faculty member at the National Institute of Design for the opportunity to work with her students. What a fantastic group!
I designed a class borrowing ideas from other folks at Yahoo like Todd Hausmann, Gale Curtis, Matt Fukuda and Dan Wascovich, Kevin Cheng, Anand Nair and Anupama Kamath. I also incorporated techniques like prototype testing from Marty Cagan, and paper prototyping from Jeff Patton.
Got to your blog and an interesting first article to read.
Is it by coincidence or design that most of the leading product development companies like Yahoo, Google, MSN and others like ThoughtWorks in recent years have been Agile driven, while almost none are CMMI? Is there some significant difference in the way management perceives their role, environment and market-organization dynamics?
Another factor is having a champion for change, with a belief so strong that the change comes around.
If you had a memo to write to management that is moving toward deterministic processes like company-wide formal estimation methods and evaluations – what would you write?
Product organizations (such as Yahoo, Google, MSN and others) do not concern themselves with certifying themselves at a CMMI level. Here’s why:
The certainty and traceability associated with CMMI (being able to predict a project’s scope, cost and time) come at a price. A high CMMI-level rating, while it decreases risk, carries overhead that adds to costs. The higher the level of certainty, the higher the cost.
Product organizations manage risk differently than an internal IT shop or an IT outsourcing firm. These companies plan for failure by executing multiple options – build internally or acquire. Competition from startups and other internet companies is fierce. Time to market is critical. Pursuing a CMMI level that would add any costs to product development isn’t acceptable. For example, it wouldn’t be possible for messenger.yahoo.com to catch up to meebo.com if we had CMMI controls in place. Because, guess what, meebo doesn’t either. We have to go as fast as, or faster than, them.
For an IT outsourcing firm it makes perfect sense to invest in a high CMMI-level rating. After all, they’re selling more than just software. They’re selling a service. And part of that service is certainty. Their customers do not want to take risks, and are willing to pay for certainty. An IT outsourcing firm passes on the costs of the overhead associated with the risk reduction. Unlike in a product organization, failure carries the punitive damages associated with a failed contract.
It’s also important to note that Agile & CMMI aren’t incompatible. There are many firms that use Agile development methods who certify themselves. Their customers are willing to carry the costs of certainty and traceability and still get benefit from Agile methods. Check out Jeff Sutherland’s work on the topic.
The concept of company-wide process definition is quite different from the pursuit of CMMI certification. CMMI doesn’t tell you how to reach a level of certification; it just tells you what you need to achieve the desired level. Management can do the same: articulate what the organization values and let the individual teams figure out how to achieve it. There is a lot management can do to articulate organizational needs — like minimum performance criteria, security standards or legal compliance.
In the memo to the company asking for standards on formal estimation methods and evaluations, I’d ask them to articulate the needs of the organization and challenge teams to find ways to meet those needs. Specifying how each team will operate may be comforting for the management team, but it will just be an illusion of control. Management should set goals, not dictate how to reach them.
Organizational transitions are hard. They require a lot of things to be put in place before any real change can happen, all of them people-related: 1) risk-taking culture, 2) leadership, 3) coaching, 4) team structure.
1) Risk-taking Culture
The company has to have an environment where people are free to take risks. If people who take risks are punished, the risk takers leave. You’re left with people who do not want to take risks. People who don’t take risks won’t stick their necks out. They’ll be on the far end of the adoption curve.
2) Leadership
This is tightly coupled with the ‘Risk-taking Culture’. The leadership must at least be willing to sponsor time and training for new methods to be used with the product development teams. Ideally they adopt the change themselves, but they can even play a wait-and-see role by experimenting with a few teams and seeing how it goes.
3) Coaching
Coaching and training matter because learning new product development techniques is a really big mind shift. In an Agile context everyone’s world changes. Product managers need to communicate features a lot differently than they had before, programmers need to get accustomed to requirements changing more frequently, QA needs to create more flexible testing infrastructure, and user experience experts need to adapt their methods to teams working in iterations.
4) Team structure
There are two environments where the introduction of Agile methods is difficult. a) A very large team (split up into multiple, smaller teams) working on a single large product that’s under the gun to deliver on time. Change is hard to manage when a deadline is looming. b) An organization which views its people as resources to be shuffled around every other month to work on new projects. When teams are not stable, it’s hard for any change to take root.