Many years ago, I volunteered to help a local conference team reenergize their conference program. They had been putting on the conference for a number of years and were looking for some new, fresh, and outside opinions on how to change the format and reenergize it.
You see, attendance was declining; not by much, but it was a troubling trend. And attendee feedback, while positive, suggested folks were getting a bit bored with the repeating recipe.
We went out to dinner to brainstorm. It was attended by the long-time program chair and an invited set of 4-5 outsiders who were asked for their ideas.
We’ve tried that before…
As the dinner commenced, the introductory conversations turned into a brainstorming session. All seemed to be going swimmingly, and I was excited about the possibilities.
I was reading a blog post from a coach who was working with a continuous deployment team. In essence, every story (work item, PBI, etc.) from their sprint made it into the customers' hands immediately. They received feedback on each in real-time and took follow-up actions as appropriate.
Since they were using Scrum, they were still conducting a Sprint Review every few weeks. The coach's question related to the value of the review, as it seemed that everyone was questioning it in this particular context. That is, since they saw (and accepted) things in real-time, what was the need to see them again in a review? Or were they just doing it because the Scrum Guide said to do it?
And the backstory was that the coach was struggling with dogmatic Scrum in the organization. I.e., doing things just because the book said to do them, rather than thinking and adapting.
This question made me think a bit about Sprint Reviews. And it led to the following online response to that coach:
During the years 2009 – 2012, I worked at a small company called iContact here in the Raleigh/Durham area. iContact had developed an email marketing SaaS application that competed (still does) with the likes of Constant Contact and MailChimp.
Ryan Allis was our CEO at the time, and he was very innovative when it came to organizational change, evolution, and leadership development. He happened to read Patrick Lencioni's The Five Dysfunctions of a Team around that time and became enamored with the ideas contained within.
At the same time, we were adopting agile (Scrum, Kanban, and Scrum of Scrums for scaling) across the organization. Quite successfully, I might add. So, the two efforts naturally converged. And I was pleasantly surprised how well our Agile efforts and the 5 Dysfunctions blended together. That’s really what this article is all about.
5 Dysfunctions & Agile, like Peanut Butter and…
I was having an email conversation with an agile coaching colleague the other day. In one of her replies, she said the following:
BTW I really like the way you articulate your concerns about the agile community at large. It’s helpful to share with my leadership and customers as we try to navigate a very messy space of certifications, frameworks, and competing agile voices
The final point she made really struck a chord with me. The notion of competing agile voices.
It made me realize that, YES, there are many, many agile voices today. And one of the real challenges is figuring out who to listen to. Where's the value and the experience? And how do you avoid the "noise" and separate the wheat from the chaff?
I want to share some ideas around this challenge. No, I'm not sharing any secret filter or the one person to listen to. Neither exists.
But I do want to share some advice for handling the high voice count and how to become a more discerning listener when it comes to the noise.
And it’s getting worse…
It's funny, really. One of the key points of the agile methodologies and the manifesto is heavy collaboration, with the best being face-to-face collaboration. But one of the things I see happening in teams all of the time is, how can I say this delicately, over-collaboration.
In other words, the teams, ahem, talk too much. There, I said it. And I'm referring to open-ended discussion that takes too long, if ever, to narrow down toward a decision. Folks seem to be talking to hear themselves talk. Often it's not everyone; a few heavy talkers dominate discussions while the rest seem along for the ride. So it can be quite unbalanced.
In facilitation terms, there are two types of discussions going on when a team is trying to make a decision. There are divergent conversations, where options and ideas are getting put on the table. This is the brainstorming side of the discussion. And then there are convergent discussions, where the team is narrowing down options in order to make a decision.
I wrote a coaching article a while ago that illustrated an agile coaching anti-pattern. It was quite well-read and I received quite a bit of feedback on it.
One of the folks who responded was Mick Maguire.
A great article by Bob Galen, he shines a light on an all too common pattern, especially among the late adopter market that we are in these days... My advice... If you are about to engage agile coaching, and you don't want to waste (a very big pile of) your money, make sure the first conversations are "what does success look like?" and "How will we know if we are getting there?...”
I'm not focusing on the coaching part of his reply, but more so reacting to this particular statement:
“Especially among the late adopter market that we are in these days…”
Mick's comment has stuck with me since I read his reply. It has made me think about Geoffrey Moore's Crossing the Chasm and where agile (the movement, methods, frameworks, etc.) might be on that scale.
My friend and colleague Shaun Bradshaw and I were coaching at a client recently. We started to have a conversation about velocity, not directly driven by the client's context, but simply in general.
Shaun was focused on velocity as a relevant metric within agile teams to drive conversations between teams and upper management. And I was struggling to “get there”.
Part of his focus was to create visibility around the difference between average velocity and current sprint velocity. Furthermore, the teams and management would be able to see:
- Velocity gaining stability over time (predictability, low variance)
- Velocity increasing over time (a short-term burst)

Both are typical of newly formed and/or newly coached agile teams.
Now I really get what he was saying. And I agree that teams in these contexts should be displaying activity and behavior towards those two results.
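To make that visibility concrete, here's a minimal sketch of the kind of numbers Shaun was talking about. It assumes story points completed per sprint as the raw input; the data and function names are invented for illustration, not from any client:

```python
# Hypothetical completed story points per sprint for a newly formed team.
sprints = [18, 25, 21, 30, 28, 29, 31, 30]

def average_velocity(points):
    """Average velocity across the sprints given."""
    return sum(points) / len(points)

def variance(points):
    """Population variance; lower values suggest more predictability."""
    mean = average_velocity(points)
    return sum((p - mean) ** 2 for p in points) / len(points)

current = sprints[-1]
print(f"Current sprint velocity: {current}")
print(f"Average velocity:        {average_velocity(sprints):.1f}")
print(f"Variance (all sprints):  {variance(sprints):.1f}")
print(f"Variance (last 4):       {variance(sprints[-4:]):.1f}")
```

Comparing the variance of all sprints against just the recent ones is one simple way to show management that a team is stabilizing, which is the conversation-starter Shaun was after.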
One of the things that I’ve come to value in my agile journey is our local Raleigh / Durham agile community. It’s one that I’ve had a hand in creating and guiding over the years. But one that’s taken on a life of its own.
I can’t tell you how many wonderful agilists are here in my local area. Some are:
Mary Thorn, Josh Anderson, Ken Pugh, Jason Tanner, Laurie Williams, Agile Bill Krebs, Andy Hunt, Ken Auer, Catherine Louis, Cory Bryan, Jeff Barschaw, Tom Wessel, my colleagues at Zenergy Technologies, and the leaders of our local AgileRTP and ALN groups. Literally, we have a community of thousands in our Meetup groups and our local TriAgile conference draws 500+ folks annually.
A couple of other local folks that I want to call out are Laura Burke Olsen, Arjay Hinek, and Matt Phillips. They are collaborators in a group/website entitled Collaboration Explored. It is a website focused on Collaboration inspired by the late Jean Tabaka. I think it’s wonderful that these folks (and others) are continuing the work that Jean inspired.
I delivered a keynote at the Agile Development + Better Software + DevOps conference put on by TechWell on Wednesday, November 8, 2017 in Orlando.
The feedback I received was wonderful and it seems the talk resonated with quite a few of the attendees.
At some point, I'll get a link to the video of the keynote and I'll share it here. Until then though, here's a link to the slides from the talk.