
Embracing Uncertainty and a Learning Mindset

Interview with Ted Young

This is the transcript of an interview that Gil Broza conducted in 2014 for a virtual training on Agile journeys. Though several years have passed, the powerful insights shared here are as applicable today as they were then.

Gil: My guest in this interview is Ted Young. He will unpack the abstractions of “feedback,” “feedback loops,” and “iterating” for us. He will show us how to replace our traditional response to uncertainty, which is to put rules and controls in place, with solutions that drive better results and organizational health.

Ted is a coder, speaker, author, coach, and systems thinker. He’s been doing “the Agile thing” in software development since 2000. You know how some people grew up with Scrum, and some grew up with XP? Well, like me, Ted’s an XPer. Ted’s worked for eBay and Google, learning and discovering what happens when Agile meets Large Companies, and finding that the road to Agile isn’t always obvious. For over seven years, he’s been working at Guidewire Software, as a developer, a manager, and unofficial Agile Coach, helping them develop software better using ideas from the Agile and Lean software communities. Hi Ted, and welcome. Am I missing anything, Ted, by the way?

Ted: I think the only thing I would say is I’m sometimes seen as a maverick, because I ask questions that cause people to say, “But we’ve always done it that way…”

Gil: [laughs] Okay… And also, to our listeners, let me also share with you that even though Ted and I have never met face to face, we connect electronically quite a lot. He’s been my secret weapon on a few occasions – when I write articles, as well as when I wrote “The Human Side of Agile” – because he always has insightful and thought-out feedback for me. This guy is a sharp thinker.

Gil: Let’s start with the word “iterating.” I think most of our listeners know its definition, but in the context of Agile development it has a very deep meaning. Would you please share with us how you see it?

Ted: Sure. Like I tend to do when I’m working on a presentation or talking to people, I like to start with dictionary definitions as the basis of what I talk about. So I looked up in the dictionary both “iteration” and “iterate.” So “iterate” is pretty simple. It means just to do again, and that’s it for the word “iterate.” The word “iteration” is interesting because it’s the repeated application of a transformation or a process, and I love that there’s the word transformation in there, because I think that’s an important aspect of the Agile idea of iterating.

So iterating is, from one point of view, just doing something over and over again. The question is, what are you doing over and over again? For that I always go back to Jeff Patton, who did a presentation and wrote about iterative and incremental development back in 2007. I love his example, and I’m going to steal it, because I’m not creative enough to come up with another one. He used the example of painting, for example, the Mona Lisa. You don’t just say, “I’m going to paint the upper right portion of it, get 25% of that complete, and I’m done with that part and will never look at it or touch it again. I can now move on to the next part.” What iterative means is you sort of draw an outline. You sort of get some of the essentials, and this is where I also always think about the Lean Software Development movement, which has this idea of: what is the minimum I can do so that I can see a result, and then take action based on that result? So I will do an outline, look at it, and then fill in some of the details. I might erase.

But it’s the idea of doing this process over and over again, and specifically in software development, it’s, you still have the same process. You still figure out what the feature is, or what the portion of the feature is. You figure out what it means to be done or acceptable about this feature, what we sometimes refer to as user stories. And then it’s coding time. Depending on how your team works together, you might have coders pairing or coders working with testers, or the more traditional just coders sitting at their computer working on the feature. And then when it’s done then it’s looked at, and there might be some testing, and then somebody who defined the requirements says, “Yep. I agree that that’s done,” and you do this process again and again.

To some people it’s, “Isn’t that just Waterfall really small?” and I think there’s some truth to that. When you start out it certainly seems like you just have this really tiny waterfall, but as you bring people closer together, and as the time between the start and the end really starts to get smaller, it’s almost like you’ve entered this other world. Like, you go from molecules to quantum mechanics, which is a totally different way of thinking even though it’s just smaller. But there’s a significant difference there.

Gil: And that’s really what we call “flow,” isn’t it?

Ted: Yeah. That’s one of the things that I’m really much more in favor of, this idea of: let’s move small pieces through, and get them done to completion, knowing (and I think this is an important aspect) that we may have gotten it wrong.

So one of my current hobbies, and this shows you how strange I am, is reading up on behavioral economics and decision theory: some of the work by Daniel Kahneman and Amos Tversky, and some of these other folks. One of the things that I’ve really come to appreciate is how we all suffer from what’s called the “sunk cost fallacy.” The sunk cost fallacy is that we feel like we put all this work into something, so no, we can’t throw it out. We spent all this time, all this effort, even if it’s completely wrong. We are going to take this thing and we’re going to push it, and we’re going to tweak it, and we’re going to try to make it work, even though it’s wrong.

So what iterating allows us to do is it sort of sets up this playing field where we can do things, and because they are small, we can feel a little bit better about saying, “You know what? This doesn’t work,” and basically put it aside and say, “Okay, what else can we try?”

Gil: As I was saying in the introduction, I had no idea what you were going to say, and I’m actually very happy you brought this up. I suffer from the sunk cost fallacy like anyone else. I’m actually thinking specifically of one of the two cars that we have in the family [chuckles]. But when you speak about products this way, in many products there is a minimum that we have to produce. Some people might call it the MVP or something like that.

Ted: Sure. Right.

Gil: Even producing that minimum is a fairly substantial investment. I know that a lot of people get really worried about iterating towards that, and also about what happens once they actually make that MVP – the minimum viable product – visible to other people, and oh dear, they say something is really wrong with it. What do we do now? That gets scary, doesn’t it?

Ted: It’s very scary, because again, we’ve done all this work, and it turns out that nobody likes it. I mean, nobody wants that to happen. It happens, but it’s probably rare that basically people reject the entire thing. What often happens is there is some aspect of it that people don’t really like, or perhaps the thing you thought – and we probably all hear stories about this — this little tiny feature over in the corner actually becomes the major business driver.

Gil: Yes.

Ted: Companies like Twitter and so on have sort of grown up around these accidental features, or these one-off kinds of things. But in the process of iterating, in terms of the MVP, what other choice do we have? I mean, that’s what I always look at: what choice do we have? We can ignore reality, and we humans are extremely good at that; we have all sorts of cognitive biases that help us work around and avoid reality. Or we can have the strength, and the understanding of these biases, to say, “All right, let’s look at the feedback. Let’s look at what information we’re getting, and what then can we do with that?” Sometimes it’s, no, we really think we’re right and we think everybody else is wrong, and you push towards that. But it’s probably better to listen to what people are saying, and then you can use that as information that feeds into the next iteration. I think that is something that gets at the Agile aspect of iterating: you are not ignoring information from what you’ve previously done. That would be ridiculous.

Gil: Of course.

Ted: I’m a runner, and so I don’t ignore my past runs when I do a next run. Even though I’m running the same path I’m, like, “Oh, I remember over here I kind of got out of breath, so maybe I should slow down upfront, and the last race I went out too fast,” which is the common complaint of runners. We all do it. Next time I’ll learn, and next time I’ll run into a different problem, or a different situation that I then have to learn from.

Gil: I remember reading this in Jerry Weinberg, I believe, “problem number two now gets a promotion.”

Ted: Exactly. Exactly.

Gil: Okay. So I really want to challenge this a little bit just based on all the stuff I see when I visit companies. We’re all intelligent, educated, competent professionals. People hire us for that, right? They pay us accordingly. For years we’ve used these attributes to develop products with some pretty demanding expectations. The expectations are still demanding. Okay. In trot the Agilists and they say, “You know, you don’t really know what customers will use, and don’t predict more than a few weeks out, and never be overly confident about what you think,” and I know that this rubs many people the wrong way. Would you shed some light on that?

Ted: Yeah. I think exactly what you said ends up causing the problem, to a certain extent. This idea that we’re all intelligent, highly paid professionals – how can we possibly say we don’t know? How can we admit that we don’t know? And I think what has changed is that Agile allows us sort of a safe way to say, “We really don’t know. We have some good ideas – based on our long experience, or based on conversations or research, or any other methods that we use to develop a product, we have some really good ideas. But we really don’t know.” I mean, if companies like Procter and Gamble, big consumer product companies, ran their companies the way we run software, they would be out of business. The reason why is that they experiment – I was just reading an article about this. For example, for diapers, 150,000 different models are made.

Gil: Seriously? That many?

Ted: Seriously. It’s insane the amount of experimentation they do. And it’s not like it’s trivial. It’s not like it’s a line of code. A line of code is easy; there’s no weight to it. Producing these prototypes takes work, but they do it because they know that they can’t 100% predict what people are going to want. They can say, “Here are the features that we think people will like,” but ultimately we don’t know what people are going to want and use until they actually use it.

So, going back to our being professionals: to me – and I use this almost as a test during interviews – I will sometimes ask a very difficult question that doesn’t necessarily have a right answer. I want to see if they’re willing to say, “I don’t know.” Because that is the start of learning: saying, “I don’t know. But I know how to find out,” or, “I know some things I can try so I can get that information.”

I think the other piece is that we have a lot better sort of structure and foundation for building software. I talk about it with colleagues and friends; it’s so much easier to deploy things to large numbers of people than it was ten years ago.

Gil: Oh, yes.

Ted: And certainly before that, you had to have a data center and you had to have all sorts of people. Now you take your credit card and you’ve got Amazon Web Services, and you’re done. And you’re seeing this, I think, with some of the startups that are starting from one guy, literally in his garage, sort of going back to the startups in the garage. We’re able to try these things out and get them in front of a lot of people and do testing, and do experimentation. So now it’s just much cheaper, and therefore I think those who know are willing to say, “It’s okay if we fail. That was a small thing. We knew that it might fail. We tried it, and that’s okay.” Whereas, when you have these very huge, very long, multi-year projects costing tens and hundreds of millions, if not billions, of dollars, it goes back to iterating. You can’t iterate that way, because by the time you’re done, you’re done; you’re not going to do it again. Whereas if you say, “Let’s do piece by piece, and let’s iterate, let’s try it again” – and I always go back to making things small – by making things small you lower the risk and it’s okay to fail.

Gil: If I can paraphrase, or sort of extrapolate from what you said, we are competent and intelligent and educated professionals. That means that we are able to come up with good ideas, we’re able to learn, we’re able to try, and we’re able to draw inferences from our experiments so we can run the next cycle better. So is that what we should be looking for when we get people in for an Agile environment?

Ted: Yeah. In some cases you are already going to have your software and your structures set up so that you can do that. More likely, it’s you’ve got a lot of these bigger pieces that, “Oh, we can’t test this,” or “We can’t try this out,” or “We can’t put this in front of a customer until we finish this piece.” And it’s important to challenge that and to say, “Really? Is there something we can pull out? Is there some piece that we can pull out so that we can get some feedback on it? So that we can put it in front of our customers, or in front of sales people, or just in front of other people who can give us an opinion and get feedback on that?”

Sometimes there isn’t such a piece. Then it means that, okay, you’re not going to be agile in this sense, and that’s okay. That’s still a direction you want to go. But then you need to figure out how to make your structure and organization and code more flexible so that you can do that. Because I think if you can’t run small experiments, you can’t really be agile. It’s very much like – as a runner, I’m going to use this metaphor all the time – it’s as if you’re running in concrete shoes. You’re not going to be very agile. You’ve got all this weight on you, and until you lose that weight, you are not going to be able to be agile.

Gil: I’m thinking back to one of the major advancements that we had when XP came along, which was to make code testable. Not just because it’s a nice thing, but now we actually had tools and techniques to do that. Everything we wrote we wrote with testing in mind, often with tests, and so it became very easy to approach any piece of the product. What you’re saying now is we actually build another layer on top of it, which is to test the usefulness of it.

Ted: Exactly, because it’s all well and good to build something correctly. That’s what we as professionals do. We want to build things correctly. The question is, are we building the correct thing? Are we building something that will sell, or fill a need, or solve a problem?

I always use this story. Getting into testing there’s always this talk about TDD and ATDD and BDD and all these kinds of things, and it can be very confusing, because, well, aren’t they all tests? What goes where? They all look similar. And they can; I think what’s important to look at is, what is the purpose of each level?

So TDD (Test-Driven Development), which is sort of the lowest level, is really there to ensure that your design is good and that it’s flexible, that you can easily refactor and easily move code around, and to provide you, at the very least, the core capability to run small experiments. Whether you do or not, it makes sure that you can. So that’s building the thing right.

The next level is, are we building the right thing? So this is where some of the BDD and ATDD – behavior-driven development or acceptance-test-driven development – starts providing direction for the product. So you’ve got this core layer of, we’ve got a system that’s flexible, that we can run with, that we can be very agile with. Okay, now we want to point it in a direction. So BDD makes sure that when we said it should go in this direction, it actually goes there.

Then we need this outer validation and feedback from the customers saying, “Yes, that’s the direction I want you to go in.”

So you may have great code with great BDD tests and all this kind of stuff, but again, if it doesn’t fill a need that people are willing to pay for or provide, then it’s like taking a trip in a car and you’ve taken a wrong turn. You need to figure that out quickly and try something else.
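The layering Ted describes, where TDD checks that the thing is built right while BDD/ATDD acceptance criteria check that it is the right thing, can be sketched in plain Python. This is only an illustration: the Cart class and its 10%-off-over-100 business rule are invented here and come from no real product Ted mentions.

```python
# A hypothetical Given/When/Then acceptance test for an invented
# business rule: orders over 100.00 get a 10% discount.
# Prices are kept in integer cents to avoid floating-point surprises.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, name: str, price_cents: int):
        self.items.append((name, price_cents))

    def total_cents(self) -> int:
        return sum(price for _, price in self.items)

    def payable_cents(self) -> int:
        # The business rule under test: 10% off orders over 100.00.
        total = self.total_cents()
        return total - total // 10 if total > 10_000 else total


def test_discount_applies_over_100():
    # Given a cart worth more than 100.00
    cart = Cart()
    cart.add("keyboard", 8_000)
    cart.add("mouse", 4_000)
    # When we ask what the customer pays
    payable = cart.payable_cents()
    # Then the 10% discount has been applied
    assert payable == 10_800


test_discount_applies_over_100()
```

The point of a test at this level is that it is phrased in the product owner's terms (carts, discounts) rather than in terms of internal design, which is what the lower TDD layer covers.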

Gil: Okay. So let’s talk about feedback, then. In Agile specifically we always talk about feedback loops. So would you tell our listeners how you view this?

Ted: It’s pretty simple in the sense that there are two parts to the phrase “feedback loop.”

There’s the feedback. The feedback is the information that comes back in, right? So you put something out there, something you can measure. Whether it’s code running a test, that’s feedback; whether it’s a user interface mockup that’s put in front of potential customers and you look at how they see it, that’s feedback.

And again, just like iteration, it’s a loop. You get information in. You look at it and compare it against what your expectations were. We thought, when we had this user interface, that people would click here and they would do this and they would be able to figure it out. You put it in front of them and you find that, “Oh my God, they didn’t even see that button. It was huge! How could they miss it?”

“Well, they were distracted by this other information.”

“Oh, wow, we didn’t realize that. We didn’t even think about that. Okay. Let’s go back and change that,” and again, because it’s lightweight, and we’re agile, we can iterate through these and go through these feedback loops very often.

So the two important things are: you have some way of getting information, to measure what you’re doing. Tests are a great way of measuring whether your code works. For feedback from users, there are all sorts of ways to do this at the product level. You can randomly show one UI, or one screen, to 50% of your audience, and show a different one to the other 50%, and you measure which one did better on whatever measurement you choose – it might simply be sales, or it might be click-throughs to something else, or signups for a mailing list. You’re basically measuring which one was better. You might say, “Well, that’s just a popularity contest.” And it’s, like, yes, exactly. In that case it is. For tests of the code it’s not a popularity contest; you’ve written tests and it’s logical, and hopefully those tests are doing what you think they’re doing.

Gil: Well, and deterministic.

Ted: Hopefully they’re deterministic. Non-deterministic tests, I’ve certainly lived with those for a while, and that’s no fun.

Gil: No.

Ted: And what you would like when you get to the higher levels — this, again, gets sort of into Lean product development – is you want lots of feedback. If you get two opinions, that’s not going to be sufficient feedback. So you might say, “Well, I can only get two people.” Okay, then don’t do it that way. So you want to use the right mechanism to get the feedback in the first place. You wouldn’t just give a screen to two users and see what they say. You would bring two users in and watch what they do, right, and then that gives you the feedback. And often the feedback at those higher levels is unexpected, and this, in one sense, makes things so uncertain. But to me, also, this is the fun part. This is the challenging part. It’s, like, “Wow, why did they do that?” Or, “How did that happen? We didn’t expect that.”
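The 50/50 split Ted described a moment earlier can be sketched as a minimal measurement loop. Everything here is hypothetical: the user IDs, sample sizes, and conversion counts are invented purely to show the shape of the comparison.

```python
def assign_variant(user_id: int) -> str:
    """Deterministically split the audience 50/50 between two screens."""
    return "A" if user_id % 2 == 0 else "B"


def conversion_rate(conversions: int, visitors: int) -> float:
    """The measured action could be a sale, a click-through, or a signup."""
    return conversions / visitors if visitors else 0.0


# Hypothetical results after showing each screen to half the audience.
results = {
    "A": {"visitors": 5000, "conversions": 450},
    "B": {"visitors": 5000, "conversions": 520},
}

rates = {variant: conversion_rate(d["conversions"], d["visitors"])
         for variant, d in results.items()}

# The "popularity contest": whichever variant converted better wins,
# and that result feeds into the next iteration.
winner = max(rates, key=rates.get)
```

As Ted notes, the interesting part is often not the winner but the unexpected behavior the measurement surfaces, which is why the raw counts are worth keeping alongside the rates.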

Gil: Okay. Sorry. So you approach this from a very inquisitive point of view. Does your company, do your executives also like to solve interesting problems, or do they really want successful products yesterday?

Ted: Well, so I think my company is somewhat unique in that. All companies are unique, right? That’s what everybody says. We’re not putting out products every week. Our release cycle is on the order of 18 months to two years. So we have a very different context, and I love talking about that because I say, “Look, you can only be as agile as your context and environment allow you.” That’s not to say you should be fatalistic and say, “Well, that’s just the way it is. I can’t do anything about it.” But you have to start from where you are, and you also have to understand that the constraints of that environment are not necessarily of your own making. So our customers are very large insurance companies. They don’t replace systems every year. They don’t replace them every five years. They might do it every 20 years.

So the entire environment is very different, but that doesn’t mean that we can’t look at ways of being more Agile — of trying to get to where we are as Agile as we can be, given the environment. So I will go back to my running metaphor. I can run a lot faster on a flat, nicely paved path, than in sand. I’ve run on sand. It’s not fun. You run really slowly. It works a different part of your body than you might ever have thought of, but you’re running on sand. So you can’t say, “Oh, well, if only we were running on pavement.” Well, you’re not. “Oh, can I turn the sand into pavement?” Well, you know what? You can’t. You’re on sand. So what can I do given the Agile principles and iterating? I can’t show it to my customer every week or every month, but can I get enough people who know about what customers need to see and provide feedback?

Gil: Right. So these sorts of people we’ll usually consider POs and stakeholders.

Ted: Or, in some cases, what we call subject matter experts. People who’ve worked in the industry and know it backwards and forwards, and possibly know it, in some cases, more than some of our customers.

Gil: Okay. Okay. In the courses I teach I like to say that pretty much all of Agile is based on shortening your feedback loops. Can you give our listeners examples of typical loops in software development? Like, which ones you really like to pay attention to?

Ted: So, yeah. I mean, I tweeted this the other day. If you don’t have short feedback loops you may either not learn, or you might learn the wrong thing.

So there’s nothing worse than you do something and you wait two weeks for some kind of feedback, and you get feedback, and you then incorporate that in, realizing that that feedback was based on something else that happened and not on your original thing that you did.

So this is why having the short feedback loops is part of learning. To me, and I’m certainly not the only one to believe this, and certainly not the first one to say it, that software development is learning. It’s all about learning. So we learn best when we have short feedback loops.

Depending on which hat I’m wearing, that’s the loop I’m going to pay attention to. So if I have my coding hat on, I’m going to pay attention to my TDD, my test-driven development. I’m going to make sure that my tests are small, and that I go through red-green-refactor: I write my test, it fails; I write my code, it passes; I might look at refactoring; and I repeat that process over and over again. And that’s on the order of minutes, or tens of minutes.
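The red-green-refactor loop Ted describes can be shown in miniature. The `slugify` function here is a made-up example, not from any codebase he mentions; the comments mark the three phases of the cycle.

```python
# Red: write a small test first. At this point slugify doesn't exist,
# so running the test would fail -- that failure is the "red" step.
def test_slugify_replaces_spaces():
    assert slugify("Hello World") == "hello-world"


def test_slugify_trims_whitespace():
    assert slugify("  Agile  ") == "agile"


# Green: write just enough code to make the tests pass.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")


# Refactor: with the tests green, the implementation can be reshaped
# safely, rerunning the tests after each change -- a loop of minutes.
test_slugify_replaces_spaces()
test_slugify_trims_whitespace()
```

Each pass through the cycle is one of the tiny iterations Ted means: the tests are the feedback, and they arrive in minutes rather than weeks.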

Moving up to the next level is where we start looking at, okay, I need to develop the story, and it has to be small enough that I can do it in a few days. Because if it takes too long, then I’m going to lose sight of it, of what am I trying to do? What am I trying to achieve here? So at that level I want the feedback of both my behavior-driven development tests and looking at the acceptance criteria, and then possibly even before I’m done showing it to the product owner or a tester. So I’m looking at those loops.

So you get these concentric circles of loops, and they are all important. It just depends on sort of who you are, what role you have, as to which ones you really pay more close attention to. So testers will – depending again on how your teams are set up – they may or may not be involved in the coding aspect, but they are involved at that next layer out, making sure that oh, we were supposed to do this in combination with this other thing. And the product owner is making sure that that then fits with the 30 other features that are in play. And then, at the outer level from there, getting feedback from sales people and potentially customers. Then it all feeds back down. So it’s going in and out of this set of concentric circles.

Gil: And the longer it gets – the longer the time until you actually get useful feedback from people – the more in the dark you are in terms of your iterations?

Ted: Yes, and unfortunately, here at Guidewire, we suffer from that. As I mentioned, our feedback cycles from customers are really, really long. I mean, it took probably three years from the time I wrote a feature to the time I actually got concrete feedback from customers saying, “It turns out that the way this feature was implemented makes it hard for us to configure.” Looking at that is, like, “Ah, you’re absolutely right! I didn’t realize that. I didn’t know.” And you could look back and you could say, “Is that something I could have figured out?” Maybe… But you don’t know how they’re going to use it. So the downside of that is, three years of me developing software with no idea, or very little idea, of how customers are really going to use it, and how that might affect what I’m doing. And that’s painful.

Gil: I can imagine!

Ted: But to a certain extent, as I said, it’s the reality. So we make decisions, therefore, that may not be optimal, and are actually not optimal, because we have to have some room to move and maneuver, that we wouldn’t have if we had shorter feedback loops. So in a sense, and this is bringing in some Lean terms, there’s waste involved. But in this case it’s sort of necessary waste.

Gil: Like a tax.

Ted: Yeah. It very much is like a tax. It’s necessary, we need it. If we could get feedback from customers more quickly, then we wouldn’t do it. But we can’t, and so we do, and it’s the price we pay.

Gil: Yeah. Just before the call started we talked about the weather where you live, where it’s almost summer, whereas I, in Toronto, still have piles of snow to reckon with. So it’s the tax I pay on living here, where it’s March and I still have to walk on ice in some places, and everything just takes a little while longer.

Ted: Yeah.

Gil: But still, I’m keeping it [chuckles].

Regarding these long feedback loops: okay, so Agile advocates for strong, cohesive teams. Shouldn’t that group power sort of alleviate some of our individual failings and really allow for greater certainty?

Ted: Well, I think, in terms of individual failings, this is where it matters how cohesive the team is, and sort of how the team has been built: have they been allowed to really become a team? When I talk to other people, I often hear about what are basically groups of people. They’re not really teams. To me it’s, are you looking out for one another? Are you noticing things? Like, “Hey, Joe, I notice this card you’ve been working on has been sitting in the ‘Doing’ column for three days. We thought it would only take half a day. What’s going on?” There are two possible reactions. There could be a defensive reaction, “No, no, I’m fine! Everything is great.” Or the team reaction, which is, “Yeah, you know, I thought I could handle it but I’m really having trouble with this. I could use some help.”

Now, as I was mentioning before, we’re all professionals and in some cases, nicely compensated. Before, I was mentioning how hard it is to say, “I don’t know.” Possibly what’s harder to say is, “I need help,” and part of that is saying, “I don’t know.” Not only “I don’t know,” but, “I don’t know how to do it.” And that’s difficult. So to me that’s a good measure of whether you have a good team whose members complement each other. And one way to tell is how often people ask for help – and get it, of course. But the asking, I think, is much harder.

Gil: Yes. You’re actually reinforcing what we heard in the previous two sessions, both from an executive’s standpoint and from the director and coach standpoint, and it was all about strong teams. They don’t just have each other’s back. People there make each other look good.

Ted: Right, exactly. This is where it gets more into flow: people are paying attention to the work that needs to get done. And they will do whatever is necessary. Testers need some help testing? I’ll go help testing. I almost always see — you know, you look at various storyboards, Kanban boards, those kinds of things. There’s often — you look in the columns for ready to be tested, and that column is always down towards the floor, and that’s your bottleneck. And you can sometimes say, “Well, that’s just the way it is,” but it’s also, “Well, what can other people do to help out? Are there tools that developers can create to help make the testing better, or easier, or faster, or at a higher level?”

Gil: Yeah. This is probably a point worth mentioning to our listeners. If you have boards and the boards have headings like “Dev” and “QA,” that’s part of the problem. That type of board talks about the functional delineation, as opposed to the activity that we care about, such as “this needs testing.”

Ted: Right.

Gil: Doesn’t matter by whom, as long as they know how to do it. But the item is ready to be developed, it is ready to be tested, ready to be reviewed, what have you. So even the words you choose will make a big difference.

Ted: You mentioned the word certainty. We love certainty, and we have all sorts of cognitive biases to ensure that we think we have certainty, and we really don’t. I’ve already talked a lot about this; we don’t know how things will work out. We can guess, and those guesses can be couched in a lot of experience and education. But we don’t know. And I think this idea of certainty is dangerous, because it forces us to do things that end up making it even worse.

I use the example of customer support representatives. You talk to them and they’re running a script; they’re reading from a screen. Well, why do we do that? Well, we want to be certain that they say exactly what we want them to say. And that might work, but really, if you talk to the people on the other end of the phone who are very upset, they don’t want to hear that.

Gil: No!

Ted: They don’t want to hear a script. They want to hear a human. If I wanted to talk to a robot I would go onto a computer, but I want to talk to a person. So we put on these rules and these guidelines and these scripts, rather than saying, “Look, here are our values. Here is what we want you to represent, and how we feel about our customers.” I think that’s much more important.

The reason why it doesn’t happen often, at least from the way I see it – and I’m not saying I know the answers – is because it’s really hard to talk about values. It’s really hard to talk about what our mission is, and to spread that, and train people, and help them understand, “Look, this is what our company is about, and you represent our company. Here is the power that you have.” If you give them that power, then… you read all these business books, and when that happens, people are surprised, because they thought, “Oh, they’d give away the store.” But no. You know what? It actually comes with better customer service. Zappos isn’t Zappos because it just has a great name. They said, “Look, we really care about our customers, and even though we may do things that may lose us money in the short term, we know that customers are what’s important.”

I think it’s the same in software development. Especially since we have very educated people and, usually, really nice working conditions, and yet we put rules on: you must finish this and commit to work for the next two weeks, and we want you to meet this deadline. All these rules and structures are there because we want to constrain people, because we’re afraid that if we don’t, things will be less certain. So it’s this illusion of certainty. This idea that if we can write enough rules, then we will magically have predictable outcomes.

Gil: Yes. So really what you’re saying is not “no rules”; what you’re suggesting is a different approach, which is the principled approach. Right?

Ted: Exactly. Exactly. We see this in Agile implementations. I’m sure you’ve seen this. I always tell the story of when I first brought Agile into Guidewire and started introducing it to other teams, because it had been successful on my team. I remember hearing someone say, “Well, we can’t do that. We have to do it this way because Ted said so.” I was, like, “No! That wasn’t what I meant.” That’s a very prescriptive way. There are all sorts of reasons about why we may start out with prescriptive ways and rules and so on. But I think, like certain laws, they should have a sunset period. They should be, like, “This is good for the next month, and then it no longer applies. If you want to continue it, that’s fine. In a retrospective you can decide that.” But things change and we learn, and we get better at things. The rules should not be there… if you’re going to have them to start out, to sort of get going, you shouldn’t have them there all the time.

Gil: I want to pursue this point a little bit. We’re saying that if we believe the Agile paradigm, then we will accept a high level of uncertainty. Okay. If we’re on an Agile journey we deliberately organize ourselves for uncertainty. We don’t just cope with it. We actually consciously, deliberately take responsibility for it. Can you give us a couple more examples of how we take responsibility for an uncertainty?

Ted: Well, I think taking responsibility for uncertainty has to start out with the fact that, look, what we’re doing has a lot of uncertainty. So you have to start there, and that’s difficult for people to accept. I think it also goes back to the experiments and the sunk cost fallacy. For example, acceptance criteria. Acceptance criteria are something we might create for a user story to sort of know when it’s done, but they have an expiration date. When the story is done you put them aside, because you may have found, after trying it out, that, well, it does what we thought it should do, but that’s not what we need it to do. So you need to accept that feedback.

Dealing with uncertainty means being open to getting information that changes things, right? It’s being comfortable with seeing, “Oh no, this doesn’t work. Let’s see what’s wrong.” Or, “This works, not in the way I thought, but that’s okay.” I think that’s a very difficult hurdle to overcome. But I think it’s really important that, as long as we are always getting good feedback, and for whatever definition of good and for whatever level of feedback we’re talking about, as long as we’re confident that that feedback is working, that that process is working, then we have to trust in that. We have to say, we trust our process. If you look at us today we may seem really chaotic, and you look at us next week and we’re all really happy and calm. There are ups and downs, and you have to trust that your habits, your tools, the feedback cycles that you have, and your principles, are embedded so that you know how to make decisions and you don’t get caught up in sort of decision paralysis.

It goes back to sort of this idea of why we don’t like uncertainty, because it provokes feelings of anxiety: “All right. What’s going to happen next?” “I don’t know. So why is that a problem?”

Gil: So let’s change that. Let’s change that to, we do have the certainty of getting feedback.

Ted: That’s a great way of looking at it. We know that we will get good feedback, or we’ll know if we’re not getting good feedback. One of the best ways to introduce anxiety – and I’m going to use an example which some people may not like – is performance reviews. What provokes more anxiety than an annual performance review?

Gil: Oh yes.

Ted: And why it provokes anxiety is you don’t know. You’re going for a whole year without knowing whether what you’re doing is good or not, whether it’s appreciated or not, whether it’s in line with expectations or not. How much more anxiety could you ask for than from such a process?

Better companies say, “Okay, we have to do that,” and then ameliorate it with more frequent feedback cycles, more than once a year, and so on. But the idea of these feedback cycles, it’s like driving. It’s, like, “Look, I know I have enough gas, so I will be able to get there. And if I have to take a different route because of construction, I know I have my wonderful feedback loop of my compass telling me which direction to go in and how to get back to the direction I wanted to go.” So I have confidence in that. Earth’s magnetic poles are staying put for as long as I need them to, at least on a human timescale. So I have confidence that I will be able to get to where I need to go, even though I may have to take detours, or I may find that where I want to go is no longer as interesting as this new place that people are telling me I should be going.

So if you have those sets of concentric feedback loops in place, and they are giving you mostly good information, then yeah. Now you have, as you said it, certainty in that, even though the overall outcome may be uncertain. I think that really does help with the anxiety: “Look, I don’t necessarily know where we’re going to go, but I know that we’ll get to a better place.”

Gil: You and I have spoken about this before; probably one thing that helps is actually to change one of our assumptions. I’m sure that a lot of our listeners like to work in environments where there are projects, right? Something has a start and an end, it’s of a temporary nature, and so on. But you talk to Agilists and they say, “There’s no end. You’re never really done. You just decide when to stop.”

Ted: Right.

Gil: I think this plays nicely with what you just said, in that we will get to a better place, and better and better, until at some point we decide it’s not worth continuing, that there are other, better things to do.

Ted: Right. Yeah, and I remember in the early days of Agile and XP, a lot of it was created around consulting. It was the idea of, we’ll keep writing more software to give you more features until you say, “You know what? What we have is enough, and the money that we would put towards more software for this thing, we would actually like you to put towards this other thing.” So the idea of a project sort of just completely falls apart, because, as you said, there is no end. It’s all about, we have an almost infinite amount of things we can do, and we’re always deciding what we should do next. But instead of deciding it every two years you decide at a much shorter timescale. That allows us to always get, fundamentally, a better bang for our money.

Gil: Yes, and this only emphasizes why iterating is so much a top-ten item to pack for the journey, because I see this every time. People take on a project. They’re going to use Scrum, and they just fill up a backlog with snippets of the spec and off they run, really chunking a project down and not quite iterating on it.

Ted: Yeah. You may get better at it, in the sense you may have success with that because you’re doing other things better, but what you lose out on is things that weren’t in the plan, opportunities that arise. It’s, like, “Oh my God, if we can capitalize on this, this is going to bring in an additional 20% income stream for us. It’s worth putting aside this other stuff that’s actually not really important. Let’s go do that.” So the ability — and to me, the word is right there, “agile” – Agile doesn’t just mean going fast. It means being able to also switch directions.

Gil: Or bending over backwards [laughs].

Ted: Or bending over backwards, but being flexible, and being able to say, “I’m going to take advantage of this opportunity that just came up, because I can, because I don’t have this commitment and this plan for a year, and I’m only six months into it. I have done some work, and at six months we can stop.” That’s the value of always-shippable software: you can stop at any point and reassess, do I continue with this or do I take this new opportunity? But if you’re always sticking to this plan you might not even see that opportunity.

Gil: Which really circles back to what you were saying about software development being all about learning, right? You’re learning of this new opportunity and what it can do for you, and now you have the opportunity to do something about it.

Ted: Exactly.

Gil: So in the couple of minutes we have left, let’s talk about the learning mind-set. So not just saying, “Yeah, we value learning. We’re a learning organization.” Learning mind-set: what is that? What does it look like?

Ted: So I think it goes back to some of the stuff I’ve already said, which is a willingness to fail. Learning, as I’ve already mentioned in terms of feedback, depends on quick feedback. I mean, this is why we love games, right? We get immediate feedback. We know if we push this candy over here and it doesn’t work, that’s immediate feedback. We know we didn’t do it. We have to move it over here in order to make them disappear. That’s learning, and that’s a really short timescale. So in order to promote learning it means we have to let people do things.

I used to have a training company. I used to give these wonderful lectures – at least I thought so. These would be week-long trainings; I would spend 80% of the time talking, and I thought, “All my slides are great.” I realized, after getting feedback, that people weren’t learning. So I completely flipped it to, “Here’s a little information. Let’s go and write some code.” So it’s in the doing that we learn, and if we don’t let people do something… So writing code for this feature is not necessarily learning if I’m doing something I already know how to do. It’s necessary; in order to implement this feature I have to do this, but maybe there’s a better way.

So this goes into this idea of 100% utilization. Let’s push everybody and make sure they’re always coding on features. But if you do that you provide no opportunity for learning, or you force people to learn in a way that’s not good, because they’re thinking, “I have to get this feature done. I’ve got to figure this out, so I’m going to take whatever hack I found on StackOverflow, implement that, and move on.” But that’s not learning, certainly not learning of the type that we want to promote.

So I think the hardest part is, ours is a very knowledge-based business. In order to acquire that knowledge we have to try things out and see, “Oh, that didn’t work.” We need that space in order to promote learning. It’s not just sending people to training courses, although that’s certainly valuable. I think another aspect is, send people to conferences, not specifically for the sessions, though they might pick up something from the presentations; to me it’s the opportunity to think and be exposed to new ideas.

Gil: Oh yes. The conferences have done so much for me it’s not even funny. Yes.

Ted: Yeah, and I think about conferences I’ve been to in the past year. It wasn’t necessarily any particular technique from a presentation. It was, like, “Oh, I didn’t think you could do that.” Or, “I didn’t think about this problem that way. Oh…” and that substantially changed the way I think about developing large systems, in that you develop them out of really small systems. It may be obvious, but sometimes the obvious isn’t available to us.

Gil: Right.

Ted: And that goes into one last aspect of learning that I think is really important. We talked about feedback, and you can’t get feedback if you’re so busy, so tied up, and so focused on what you’re doing that you never look around a little bit. What people often call mindfulness – being mindful about what you’re doing, being observant about what’s going on – the more observant you are, the more mindful you are, the better your learning will be. Now you’re open to more and different kinds of feedback. It’s, like, “Oh, when I said this in the standup that was weird. What happened there?” Or, “Why did this person say that? What’s going on there? Oh, maybe there’s a problem here that nobody is talking about. Maybe I should ask about that.” It’s this noticing of things that I think is a critical part of learning. You can’t learn, you can’t get the feedback, if your eyes are closed.

Gil: Yes. Well, Ted, you’ve given our listeners so much. Let’s just wrap this up with a quick question. What can our listeners implement in the next 30 days to get more towards the iterating and learning mind-set?

Ted: So I think if you’re not set up with some feedback cycle… retrospectives are certainly the one I always recommend. For some reason people struggle with them – I think it’s maybe the way they’re run – so I would beg of you to have them and ask for help in running retrospectives. But have some feedback cycle that helps the team become more of a team, that helps you be comfortable and confident in your processes, as we mentioned, because then that will allay some of your anxiety. So if you’re not holding retrospectives at least every 30 days, then at least do that.

And I think if you’re doing things that have become routine, question them. Do they still fit, and do they still meet your goals? For example, are you doing a standup every day? Is it working? How do you know? What measurement would you use to say, “Yeah, standups are working well”? So take a step back out of your daily routine and look at how you are iterating: what are the things you’re doing over and over again, and are those things still helping you?

Gil: And if you need a cheaper solution to that, bring somebody in who’s not a participant and get their feedback, because they are probably going to notice more, right?

Ted: Absolutely.

Gil: Okay. Ted, this has been wonderful. Thank you so much.

Ted: Oh, my pleasure. Thank you for having me.

To see what Ted’s up to these days and learn more from him, check out his LinkedIn profile.

Gil Broza’s Books

New book: Deliver Better Results


Pragmatic, people-first, holistic guidance for delivering better results