I remember over 25 years ago when I learned about developer to tester ratios. It was my first experience at management and we were planning our hiring for a 2-year forecast. We were expecting to grow by roughly 25 staff in our software engineering group and the question was posed to me regarding developers vs. testers in our hiring plan.

To be honest, I didn’t have a clue. My boss stepped in to help me out and he spoke about a 5:1 ratio as being industry standard at the time, so I took him at his word and used it for my forecast projections. Heck, I didn’t know any better. It seemed to work, but that was a long time ago.

Fast forward…mid 1990’s

Later in my career, I became quite experienced with ratios in software teams. I realized that there were no clear standards, so I did some research and I wrote some articles around the mid 90’s that talked about ratios in industry. Microsoft at the time was considered “enlightened” and they maintained a 2:1 or thereabouts ratio of developers to testers. I ran a poll of software companies in the Triangle area of North Carolina, and they ranged anywhere from 2:1 to 12:1. So it was quite a wide range.

I interviewed team members surrounding the health of the team at the various ratios and it seemed like 2:1 to 5:1 were healthy in most contexts. Anything above 7-8:1 seemed to put too little emphasis on quality and testing and ultimately stressed out the balance of the team. But it was incredibly nuanced for each organization and team, so hard to pin down.

Fast forward…mid 2000’s

Agile methods are taking over the software scene and have had some influence on ratios.

In some cases, teams are doing away with testers and therefore ratios. Google is famous (or infamous) for depending on customer testing and for converting “testers” to “software developers in test”. So in these cases, the ratios might reach 10:1, 20:1, 30:1, or even higher.

But many traditional companies, for example financial firms in heavily regulated domains, have actually narrowed their ratios – so they are more heavily invested in testing than 10-15 years ago. Many seem to be settling around 4-7:1 as a ratio range that works best for them.

Developer to tester ratios seem to have less consistency now than they’ve ever had. Oh and the other factor that influences things is outsourcing of testing. It seems like it began in earnest in the mid-90’s and continues strongly into the next decade. With “cheaper” testers, businesses feel that they can invest a bit more in testing.

Sidebar #1 – What’s Included?

There has always been a question about whether to include testers working solely on test automation within the ratios. Similarly, non-functional testing (for example, security and performance testing) is often an outlier. In my experience, the most common practice is to include them. And in fact, usually the ratios are driven from the org chart—add up the developers across all teams/skills/roles, then add up all of the testers across all teams/skills/roles, and derive the ratio for the organization.

Given this, it’s not uncommon for individual teams to be staffed at larger ratios – that is, with fewer testers than the organizational “plan” would imply.
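The org-chart arithmetic above can be sketched in a few lines. The team names and head counts below are invented purely for illustration; the point is how a healthy-looking org-level ratio can mask a per-team imbalance:

```python
# Hypothetical head counts per team (illustrative numbers only).
teams = {
    "Team A": {"devs": 6, "testers": 2},
    "Team B": {"devs": 7, "testers": 1},
    "Team C": {"devs": 5, "testers": 0},
}

# Org-level ratio: sum everyone up across the org chart, then divide.
total_devs = sum(t["devs"] for t in teams.values())
total_testers = sum(t["testers"] for t in teams.values())
print(f"Org ratio: {total_devs / total_testers:.0f}:1")  # 18 devs, 3 testers

# Per-team ratios tell a different story than the org-level number.
for name, t in teams.items():
    if t["testers"]:
        print(f"{name}: {t['devs'] / t['testers']:.1f}:1")
    else:
        print(f"{name}: no testers at all")
```

Here the organization reports 6:1 overall, yet one team sits at 7:1 and another has no testers whatsoever – exactly the “larger than plan” situation described above.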

Moving on from “History”

Moving on from my walk down memory lane, it might be prudent to share an agile story about a team and their “ratios”.

While I was at iContact a few years ago, we initially had Scrum teams with only developers on them. The testers were outside of the core Scrum teams. One of the first actions I took as the new leader was to integrate our testers within the teams. Our initial “ratio” target was 5:2, but many of our teams had only a 5:1 mix until we hired more testers.

This was on a per Scrum team basis. As the company was growing quickly, we needed to hire with this ratio in mind. For example, we couldn’t spin off a new Scrum team without having minimally a tester or two to fill out the skillset balance of the team.

I remember that right after I integrated the testers into one of our Scrum teams, the developers treated the testers awfully. Statements like – “What value are you providing?” and “Just stay out of our way and don’t slow us down” ran rampant. The developers also came to me complaining that the testers weren’t adding value. To say that I was disappointed with the immature behavior is an understatement.

I think they were expecting me to swap in some test folks that they could get along with or that met their definition of added value. Instead I did something unexpected. I removed the testers from the team and placed them on other teams: teams that needed the help and were thankful for them.

Ratios were then impacted in both directions. Some teams were over their ratio allocation of testers and in this one team, they had a 7:0 ratio.

However, the team without testers did not get a pass on the quality of their sprint deliverables. Indeed, the Definition of Done remained the same for them as it did for every other Scrum team in our organization. So what happened without testers on their team?

First, their velocity took a nosedive. Imagine that. They also realized that testing was hard to do thoroughly and they complained about how hard it was and how much time it required. Indeed, they started talking to testers outside of their team to get help in how to approach it. Imagine that. I also heard a renewed interest in the developers around refactoring, building in testability and building more automation to lessen the “testing burden”.

Fairly quickly the team was looking for “their testers” to come back. They had discovered a newfound respect and understanding for the craft of software testing, its challenges, and for their teammates. I couldn’t have been happier with their maturation and improved outlook.

I tell the story because I think it highlights my current views towards ratios within agile teams:


  • Quality is a Whole Team responsibility – everyone is responsible for it
  • Testing is a Whole Team responsibility – everyone does it
  • Ratios don’t really matter, except to maintain a healthy balance of team velocity and throughput
  • Let the team raise ratios as an Impediment vs. monitoring or managing them
  • If your team reports an imbalance in working their Backlogs, then fix it
  • Even if the eventual ratio is 3:7 (sort of kidding)


Sidebar #2 – Mary Thorn’s view

My good friend and colleague Mary Thorn finds more value in ratios than I do. From Mary’s perspective, they are healthy indicators of whether an organization is sufficiently investing in testing. They help to create more meaningful discussion around team composition and effectiveness.

She also likes to examine the types of testers within the ratios, adding skill-set as a dimension that’s nearly as important as the ratios themselves.

In a perfect world, Mary is looking for a balance between:

  • Automation & development skills
  • Manual testing
  • Domain experience & customer usage
  • Exploratory skills
  • Collaborative skills

as part of the test team that is “assigned” within each agile team. Skill balance is the operative goal in Mary’s model, as well as sufficient staffing.

Another important point is that Mary ties the ratios to strategy. Let’s say that for manual testing you’ve defined a ratio of 3:1 as “normal” for your teams. If you were then trying to build out automation infrastructure, Mary would recommend a 2:1 ratio. Once the framework effort is complete and you begin to extend automation across your application, via automating your test suites and cases, then she would say the ratio could move back to 3:1.

Mary also likes to point out that sustaining your automation needs to be accounted for in your ratios. So you might never move back to that 3:1 ratio. It really depends.

We both agree on all of the above, though I put a bit less focus on ratios as something we measure and consistently communicate.

Wrapping Up

What I’m really ranting about regarding ratios is the lack of thinking that they sometimes engender. I want thinking leaders and thinking teams. Magic ratios that guarantee a team’s success are not, in fact, magic, and chasing them sometimes inhibits situational thinking.

To Mary’s point, use them as guidelines in your planning – yes. Leverage them as risk indicators if you’re staffed too lightly with professional testers – absolutely. Fit them into conversations with your leaders to discuss imbalances – please.

But does the fact that you’ve hit these targets in your teams imply that your leadership and thinking is done? And that your teams are magically staffed to handle all projects they may encounter? I would implore you to say – no.

Finally, to my original question – silver bullet or bunk? While they can be useful, I’m somewhat leaning towards bunk!

Stay agile, my friends,