Introduction
My friend Lee Copeland [i] asked me the following question:
Bob,
I'm putting together a keynote talk and need some examples --
projects that were successful in the sense that they implemented the requirements, within budget and time BUT didn't return any actual value to the business.
Got anything you can share?
Thanks,
Lee
And it made me think about my past projects. My initial reaction was no, I don’t think I’ve ever worked on a project that met the project’s time and scope commitments yet delivered NO business value. The key words here are “no business value”. But I have worked on some squirrely projects that did disappoint on business value. Let me describe a few of them as a means of sharing some things to avoid in your own projects.
In addition, I hope the stories home in on some of the aspects of customer/business value, why achieving it might be a challenge, and how it can be a variable but worthy target.
Delayed Value
Quite a few years ago, before all of the Agile hoopla, I was the project manager of a software project that was to deliver a new network management platform. The technology was a moving target, and the company was trying to create a new platform on which to base subsequent updates for new networking protocols and increased performance. The team was composed of UI developers and some very low-level, embedded-device folks. At the time, it was a leading-edge project targeted at an opportunistic hole in the market. It was an exciting time.
However, I joined the project at a troubled point. It was spinning in a Find-Fix-Test cycle that it couldn’t break out of. The software had been in this iterative bug-fixing phase for 4 months with no sign of breaking free for a customer release. The directive driving this was to produce “a nearly perfect, bug-free initial release”, and the testing team took this edict very seriously.
In order to break from the cycle, I met with key stakeholders and created a Minimal Viable Product [ii] definition that “allowed for” a more practical defect density, one aligned with the value the release brought to customers. The product released in 3½ weeks. Subsequent polls of the customer base found that the release had much greater value than what they’d previously been using, and they were extremely happy with the new direction.
Moral of the story: Make sure you’re not trying to provide too much quality without validating your assumptions against real customer expectations for quality versus the value provided.
Assumed Value
Fast forward in time quite a bit, and I was working on a SaaS (Software as a Service) product for email marketing. There was quite a bit of competition in the space, and we paid an inordinate amount of attention to it. One of our competitors had come out with a “Freemium” option, and it had served to push them over the top with new subscribers. Their growth rate in new customer acquisition was phenomenal. And once folks were customers, the competitor enticed them to convert to paid options.
This particular competitor was private, so we couldn’t get too many specifics. But the general view was the freemium option had driven a 10x growth improvement and that they were going to pass us in size in a little over 9 months.
Our CEO decided that we too would provide a freemium offering. Believe it or not, that option was not “free” in its implementation. We had to defer our existing roadmap and divert most of our engineering team to implement core system changes to support this option. Not only was it a challenge from a software functionality perspective, but we had to anticipate the impact it would have on system performance (scalability) and on our customer support practices and team. All in all, it took us 6 months to ready our “Freemium Release” response.
When we exposed the new options, things did not follow our competitor’s curve. Yes, we saw a bump in free account sign-ups, but it amounted to only a 10-20% increase over our normal rates. Long story short, we kept the free option open for six months and never came close to the gains our competitor had realized. In fact, the cost of maintaining the free service far outweighed any conversion increases we had assumed. All in all, the freemium service was deemed a failure. We felt we couldn’t withdraw it because of our promises to customers, but we deemphasized it on our web pages and tried to hide it wherever possible.
Moral of the story: Often we follow competitors or our feelings when determining what will have value to our customers without TESTING those assumptions. Assumed value doesn’t always equal actual value. And following the competition isn’t always the best way to create innovative products.
Degrading Value
I’m going to tell a sort of meta-story here, because this has happened so often in my history. The goal and the vision were always the same: an intense need to get a product increment into the field to fend off some sort of competitive pressure. The triple constraint was always fixed across all three dimensions: cost, time, and scope. That inevitably caused us to ditch quality as the project due date approached.
Now don’t get me wrong, nobody ever said: “We are running out of time, ladies and gentlemen, so in an effort to hit our date we will compromise quality and deliver a crappy product”. No, it was never as clear as that. It was much more subtle. The teams slowly started making compromises as the date approached: dropping quality practices like pair programming, code reviews, and unit tests. From a testing perspective, you could see defect rates increasing, but triage conversely lowered the bar on which bugs needed to be fixed.
Almost always the code shipped “on time”, or if there was a date slip, it was minor and we shipped with the quality compromises intact.
But when the software hit the street, the proverbial “s..t” always hit the fan. Customers found the numerous bugs that we’d introduced in our frenzy to release and thus began a painful Fix-Fix-Re-release cycle that undermined our attention to our next release. It was a spiral that usually continued release over release.
Moral of the story: At some point we needed to realize that doing things right in the first place and holding quality dear is in our own best interest and that of our customers. Value without inherent quality is valueless!
Misunderstood Value
Before the notion of measuring usage became popular with SaaS and similar products via Eric Ries’ recommendations in The Lean Startup, I worked on a project that did just that. This was a storage ecosystem product, and the primary users were systems administrators. It was hard for us to fully understand the customer environment and their usage patterns, so one of our engineers came up with the brilliant idea of collecting usage data and having the device “phone home” periodically to share it. This practice is quite common today (check your browser of choice, for example), but back then it was incredibly novel.
We then put in place data collection and reporting infrastructure so that we could aggregate the information into meaningful and actionable reports. Keep in mind that we were collecting rather fine-grained information, so the data volumes were quite impressive.
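To make the idea concrete, here is a minimal sketch of what such a usage tracker might look like. I’m writing it in Python purely for illustration; the endpoint URL, feature names, payload shape, and reporting interval are all hypothetical assumptions, not our actual design:

    import json
    import time
    import urllib.request
    from collections import Counter

    class UsageTracker:
        """Counts feature invocations locally and periodically reports them."""

        def __init__(self, endpoint, report_interval_secs=86400):
            self.endpoint = endpoint              # hypothetical collection URL
            self.interval = report_interval_secs  # e.g. once a day
            self.counts = Counter()
            self.last_report = time.time()

        def record(self, feature_name):
            # Call this wherever a feature is exercised.
            self.counts[feature_name] += 1
            if time.time() - self.last_report >= self.interval:
                self.phone_home()

        def phone_home(self):
            # Ship the aggregated counts, then reset the reporting window.
            payload = json.dumps({"counts": dict(self.counts)}).encode("utf-8")
            request = urllib.request.Request(
                self.endpoint, data=payload,
                headers={"Content-Type": "application/json"})
            try:
                urllib.request.urlopen(request, timeout=5)
                self.counts.clear()
                self.last_report = time.time()
            except OSError:
                pass  # a telemetry failure must never affect the product

    # Hypothetical usage:
    #   tracker = UsageTracker("https://example.com/usage")
    #   tracker.record("snapshot_restore")

The important design point is that counts are aggregated locally and reporting failures are swallowed, so the telemetry never degrades the product it is measuring.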
One of the primary lessons learned was that we were quite poor at guessing how our customers used our products. Only about 45% of the functions we delivered were actually used [iii] by any of our customers. Initially, there was a lot of internal denial when these numbers started to come in, with folks thinking the reporting software was in error. But we confirmed the truth of it: we had been selecting value very poorly when envisioning and delivering features for our clients.
This ultimately led to our being more data-driven and experimental in developing our software. We would often reduce the scope of a feature, deploy it, and then measure actual interest and usage before we would invest in it further.
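And here is an equally hypothetical sketch of the server-side question we kept asking of the aggregated reports: which shipped features does any customer actually use? (The report format, feature names, and threshold are assumptions for illustration, not our real system.)

    from collections import Counter

    def adoption_report(customer_reports, all_features, min_customers=1):
        # customer_reports: one dict per customer, feature name -> call count.
        customers_using = Counter()
        for counts in customer_reports:
            for feature, calls in counts.items():
                if calls > 0:
                    customers_using[feature] += 1
        used = {f for f in all_features if customers_using[f] >= min_customers}
        return used, set(all_features) - used

    # Three hypothetical customers, four shipped features:
    reports = [{"backup": 12, "restore": 3},
               {"backup": 40},
               {"backup": 7, "mirroring": 1}]
    used, unused = adoption_report(
        reports, ["backup", "restore", "mirroring", "dedup"])
    print(sorted(used))    # ['backup', 'mirroring', 'restore']
    print(sorted(unused))  # ['dedup']

A report like this is what turned our feature debates from opinion into data: an unused feature showed up as an empty column, not a hunch.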
Moral of the story: Value is not in the eye of the creator. It has to be in the eye of the customer and user. It also helps to measure actual usage and make decisions based on as much real data as possible.
Wrapping Up
I really want to thank Lee for inspiring me to think about these projects. Now that I’ve put even more thought into my history, I truly can’t think of a single project that delivered no value. And I’m guessing that there are only a few that meet that criterion in your own histories.
However, as I shared, I suspect all of you have experienced projects that compromised on the business and/or customer value proposition. And in doing so, you probably impacted your bottom line in a significant way.
I hope Lee makes a strong point in his keynote about “the value of focusing on value” in all of our projects. In a small way, I hope that I have as well.
Stay agile my friends,
Bob.
[i] Lee Copeland is a deeply experienced consultant, author, and teacher working for Software Quality Engineering (www.sqe.com). He’s also the Program Chair of their popular conferences.
[ii] In this case the Minimal Viable Product was defined as a working goal for releasing the product. If it had been defined at the beginning of project (release) planning, it would probably have helped the team avoid the rework cycle they found themselves in.
[iii] From a Lean perspective, this implies that 55% of the functionality we delivered was waste. And this waste didn’t simply incur a one-time cost, but a recurring cost as we maintained, tested, and supported it.