Lead, Align, and Build what matters with Radhika Dutt, Author of Escaping the Performance Trap

On this week's episode of Inside Outside Innovation, we sit down with Radhika Dutt, author of the upcoming book Escaping the Performance Trap. Radhika and I talk about the challenges with traditional OKR systems and how companies can break free from performance theater to create a better way to lead, align, and build what matters. Let's get started.

Inside Outside Innovation is a podcast to help new innovators navigate what's next. Each week we'll give you a front row seat into what it takes to learn, grow, and thrive in today's world of accelerating change and uncertainty. Join us as we explore, engage, and experiment with the best and the brightest innovators, entrepreneurs, and pioneering businesses. It's time to get started.

Brian Ardinger: Welcome to another episode of Inside Outside Innovation. I'm your host, Brian Ardinger, and as always, we have another amazing guest. Today we have Radhika Dutt. She's the author of a new book called Escaping the Performance Trap. Welcome.

Radhika Dutt: Thank you, Brian. It's great to be here again. We talked a few years ago.

[00:00:58] Brian Ardinger: I should say welcome back. Yes, the last time you were on, I think it was episode 273, and you had your first book that came out, which was Radical Product Thinking. And when you said you were writing a new book focused on things like OKRs and goals and how people are misusing them, I said, hey, we need to get her back on to talk about some of the things that she's seeing. So welcome back to the show. Let's get started by refreshing the audience a little bit on your background and how you got here.

[00:01:24] Radhika Dutt: Yeah, my background is I started as an engineer. I did my undergrad and grad work at MIT. I started companies and later went to work at bigger companies, across so many different industries: broadcast media and entertainment, advertising, robotics, even wine. Oh, and telecom was in there too. Government agencies. It was all over.

And so, the one common theme in working across all of these industries, at different sizes of companies, was that I kept seeing the same set of product diseases over and over, and I was learning hard lessons about how you build good products and avoid those diseases.

And so that's what led me to write the first book, Radical Product Thinking: The New Mindset for Innovating Smarter. And it's been fantastic; so many people have read it, and it's become a staple for people building products. What's most satisfying for me is that a lot of people describe it as having changed their mindset, so that now they apply product thinking to all sorts of things, even to parenting, to personal life, et cetera.

So that's been wonderful to see. It's a philosophy of how you think systematically about products, being vision-driven as opposed to iteration-led, where you just throw things at the wall, see what sticks, and keep iterating. It's a vision-driven approach: What is the problem you're trying to solve? What's the end state you want to create? And how do you systematically drive that change?

So, what brings me to the second book? Among the many people who wrote to me saying how helpful Radical Product Thinking was, there was also a set of people who wrote saying, you know, I love what you're saying in Radical Product Thinking, but what do I do in my organization that sets all these goals and OKRs, where I have all these short-term deliverables and that's what I have to focus on?

I can't do all of this long-term vision-driven stuff. And this question came up so many times that I realized we really do need to tackle it. For a long time, by the way, I was seeing the downsides of goals and OKRs, but for so many years I just didn't know how to articulate for people why you should stop using them.

What's the problem with them? But most importantly, even if I could say, look, don't use OKRs, the question was always, well, it's the devil I know. What are you proposing instead? I didn't have an answer until I started trying this new approach, and it's worked so well that that's what's driving me to write this new book, Escaping the Performance Trap.

[00:04:03] Brian Ardinger: What is a performance trap? What are you seeing and how does that show up in organizations and teams?

[00:04:09] Radhika Dutt: You know, whenever I talk about the problem with goals and OKRs, people instantly identify with one thing, which is we've all been in these monthly cycles where, for the monthly business review, the question becomes: what are some numbers we can show so we can say, look, we are achieving these results, things are going well.

What actually happens is, as a leader, you think, oh, I'm seeing these numbers, my team is being rigorous with metrics. You want to see the numbers, you want to see rigor, you want to see progress. But what you're seeing is an illusion, because teams are showing you numbers to say, ta-da, look, I achieved whatever you wanted.

What's actually happening, in ways that you can't see, is that whenever there are bad metrics, the incentive is to sweep them under the rug, to not show you the bad metrics. And the reality is you learn more from these "bad" metrics, because those are the numbers telling you what's not working and what you actually need to do to course correct.

So as a leader, you don't always get a clear view. One example I'll give is when I was working at Avid, where every movie in Hollywood that won an Oscar was made using Avid's video editors. Such a fantastic number, right? A hundred percent of all Oscar winners used Avid. It turned out that our market was getting commoditized, that competitors like Apple and Adobe were entering the low end and even encroaching on the middle.

And so, for us, we kept going further up into the high-end niche, and that's where we were really focusing to be able to make our targets. If you looked at the targets and numbers, everything was looking great, and it just seemed like all we had to do was keep focusing on whatever we were doing and we were going to hit the numbers, right?

And this is what I see often. This approach works until it doesn't work. What you want in the team is not the incentive to show you what's working, but rather to have those open discussions to learn, experiment, et cetera.

[00:06:20] Brian Ardinger: That's a great point, and I see this a lot in corporate innovation teams. They set OKRs at the beginning of the year, and especially in innovation, where you don't necessarily know what you're building or why you're building it, things pivot and change based on what goes on during the year. Oftentimes they don't go back and reevaluate what metrics they're looking at and whether those are the right things to measure.

And so I see a conflict a lot of times, especially in innovation, where it's not necessarily a case of: here's the business model, we know exactly how to execute it, and it's systematic and certain. When you're dealing with new product development or innovation areas where there's naturally more uncertainty, picking at the beginning of the year the one thing that we're going to look at almost puts you in a bind from day one.

[00:07:03] Radhika Dutt: Exactly. I love your point. It has two issues, right? One, you assume that you know at the beginning of the year, and really just that starting assumption, that I know what the right answer is and what I need to hit, is itself the wrong assumption to start with, right?

Our assumption has to be: I don't fully know. What you need is puzzle setting and puzzle solving, as opposed to knowing at the beginning. So that's one. The second thing that you point out: a lot of OKR experts will say, oh, well, there's an easy solution to that. Instead of just setting OKRs once at the beginning of the year, you should revise them every quarter.

And I've actually heard someone say this to an executive, and they just scoffed at it. They were like, yeah, well, right now it's tough enough that I have to align all of these cross-functional groups at the beginning of the year to set OKRs. If I had to do this multiple times a year, we would just die.

[00:08:02] Brian Ardinger: I think you're right on that. And so in your book, rather than OKRs, you talk about something called OHLs. Can you talk a little bit about that and what this new methodology is?

[00:08:11] Radhika Dutt: I touched on this idea that instead of knowing, we have to focus on puzzle setting and puzzle solving. Let me step back for one second and just talk about Intel and innovation there in the nineties.

You know, Andy Grove was credited by John Doerr, who wrote Measure What Matters, as the father of OKRs because he instilled OKRs at Intel, and Doerr said that's what made Intel so successful. But if you look at what Walter Isaacson said when Andy Grove was named Man of the Year by Time magazine, it was actually Andy Grove's paranoid obsession with never getting complacent that made Intel successful.

So, interestingly, what we adopted from that success was OKRs, but we didn't adopt the constant experimenting and adapting that made Andy Grove different from so many of his successors. OKRs thrived all along at Intel, but the later CEOs were missing that experimentation and adaptation.

So, what I'm working on in this book is articulating a framework for how you do this experimentation and adaptation, this puzzle setting and puzzle solving. Instead of OKRs, OHL stands for objectives, hypotheses, and learnings. Let's start with puzzle setting first, and let's pick the hardest problem, which is sales.

We've always assumed that sales requires targets. A sales scenario with puzzle setting would be: I would describe the problem statement. Sales grew over the last three years but has stalled in the last year. What's going on? And I'd write some guiding questions, things I honestly don't know the answers to.

Are there trends in the market that are shaping our industry that we've not accounted for? Are there things our competitors are doing that we are falling behind on? And lastly, have things changed within our sales team? Or like, are we less equipped to sell in this current market? Like there may be other questions, right?

But I would invite feedback and genuinely list questions that we don't know answers to. So this is the puzzle setting. The objective is a summary of this puzzle: how can we start growing our business again? I would, by the way, also say in the objective: my expectation is that we get back to that growth trajectory, and here are the numbers that I want us to get to. How might we get to these numbers? So that's the puzzle.

And then the puzzle solving part involves asking three questions. The first question is: how well did it work? Whatever we're trying, how well did it work? And notice it's not a binary question like OKRs ask, did you or didn't you hit the goal. We're asking how well.

And this is where the hypothesis comes in. You can't even know how well it worked if you didn't have a hypothesis, if you don't know what you're measuring. So this OHL approach requires a lot of rigor and measurement, and with the hypothesis, by defining leading and lagging indicators, I can really figure out how well it's working. This, by the way, is the analytical left-brain mode you're in when answering this question.

The second question is: what did we learn? This is where, when I ask teams how well the product is working, often I get numbers: we have this number of weekly active users, this number of pages visited, this bounce rate. I have to tell teams to stop. Tell me the story. What's the narrative? What have you actually figured out, what have you learned from those numbers? So that's the second question, right?

And the third question is: what will we try next? Meaning, based on how well it's working and what you learned, what will you ask for if I give you a magic wand? These last two questions, the storytelling and what you'll ask for if I gave you a magic wand, are what trigger the creative part of the brain.

And so, this is where using OHLs is super helpful for complex puzzle solving, which is really the kind of problem that today's workforce is solving, as opposed to what OKRs and their predecessors were really intended for.
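To make the structure Radhika describes a bit more concrete, here is a minimal sketch in Python of how a team might capture one OHL record: an objective framed as a puzzle, hypotheses with leading and lagging indicators, and the three puzzle-solving questions. This is purely illustrative; the class names, fields, and the example hypothesis are assumptions for the sake of the sketch, not an official template from the forthcoming book.

```python
# Illustrative sketch only: a hypothetical way to record an OHL
# (Objectives, Hypotheses, Learnings) as described in the conversation.
from dataclasses import dataclass, field


@dataclass
class Hypothesis:
    statement: str                 # what we believe will happen if we try this
    leading_indicators: list[str]  # early signals we expect to move
    lagging_indicators: list[str]  # outcomes that confirm it worked


@dataclass
class OHL:
    # The puzzle: a problem statement plus guiding questions, not a target.
    objective: str
    hypotheses: list[Hypothesis] = field(default_factory=list)
    # The three puzzle-solving questions, answered each review cycle.
    how_well_did_it_work: str = ""   # measured against the indicators above
    what_did_we_learn: str = ""      # the narrative behind the numbers
    what_will_we_try_next: str = ""  # the "magic wand" ask for the next cycle


# Example usage, loosely following the sales puzzle described above.
sales_ohl = OHL(
    objective=(
        "Sales grew for three years and has stalled in the last year. "
        "How can we start growing the business again?"
    ),
    hypotheses=[
        Hypothesis(
            statement="A refreshed mid-market offering will restart pipeline growth.",
            leading_indicators=["qualified mid-market leads per month"],
            lagging_indicators=["closed mid-market revenue per quarter"],
        )
    ],
)
```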

[00:12:38] Brian Ardinger: Yeah. Do you see differences in how this works across different types of industries? For example, leading-edge, more innovative industries versus large enterprises, regulated industries, things along those lines. How do you see the industry playing a role in how companies would implement this?

[00:12:55] Radhika Dutt: Such a great question. Let's start with where goals and targets actually work.

If you look at history and how OKRs even came about, they were an evolution of Peter Drucker's management by objectives. And you know, this is something that most people don't know, but how did Peter Drucker come up with management by objectives?

And I dug into history a little bit, and he came up with MBOs in the 1940s because he was working with GM at the time. Think about that workforce and the problem that he was solving for GM, right? It was mostly an unskilled workforce working on assembly lines where there was very little automation.

And so at the time, Peter Drucker's ideas were revolutionary. He said, well, instead of doing command and control, set goals or targets together with your teams, together with employees, and then measure them against those. A revolutionary idea for the time. And it was easy to tell that Andy here is a better performer than Bob: he installed 45 tires, Bob did 40.

So, in settings where there's a clear single right answer for how to solve something, like there's only one way you can install a tire, goals and targets work really well. Whenever you have a workforce where it's a matter of churning out X number of lattes or something like that, that works.

Where doesn't it work? When you're solving a complex problem. Let's even take the manufacturing example. When you set goals and targets for production, and you're talking not about a repetitive task on an assembly line but about constructing a Boeing Dreamliner, we've seen what sort of problems that leads to and the quality issues that Boeing has consistently run into because of such a goal-based mindset, right?

So, whenever you have a complex problem, a strategy, something you have to figure out where there's no single right answer, that is where goals and targets don't work. And if I look at enterprises and what our listeners might be facing, it's primarily knowledge work. What happens is we sometimes think of it as super tactical, as assembly line work just done with laptops, but it's not really that.

And so, some of this is a change in mindset, recognizing that all of our people are constantly puzzle setting and puzzle solving, and what we've not done is challenge how we address the needs of this new workforce. We're still using methodologies that worked in the 1940s to manage performance in today's workforce.

[00:15:38] Brian Ardinger: It seems to be very much a psychological difference that we're looking at today as well. There is so much change, whether it's AI or other things, that is rewriting the existing playbook for almost every organization.

And I think a lot of people I talk to and work with are frustrated that the old ways of managing and tackling problems are shifting so quickly, and it's very hard to maintain sanity in that environment. So it seems like this approach allows those old ways to be questioned and lets you make progress in a different way.

[00:16:14] Radhika Dutt: Yes. And I love that you brought up AI. I really have some strong opinions on that. One of the things AI is good at is optimization, right? And AI can really help you optimize for metrics, but what happens is, there's this term coined by Cory Doctorow: enshittification, where platforms go to shit.

And it happens in a three-step process. Step one is you give value to users, and it's true value, so that you lock them in. Step two is you take value from users and give it to business customers. And step three is you screw over both consumers and business users to give value to shareholders, right?

And so, we can see so many examples of enshittification. It has happened with Facebook, with Unity for gaming, even a lot of smart TVs; we constantly see enshittification. Oh, airlines are another fantastic example, right? Things tend to go to shit. And if you look at what AI is really good at, right?

It's optimizing numbers. And so, if you look at this enshittification curve, where we offer value to users, consumers and business users, and then take value away and give it to shareholders, all of this works until it doesn't, right? Facebook was doing really well, but then there was this network effect where people started leaving Facebook, and now Facebook is trying to figure out its next pivot.

The same thing's happening with Google. But my point is this enshittification curve is just going to get accelerated by AI: optimize for numbers and screw over users much faster. That's not necessarily good for longer-term business, though. And I guess I'll bring it to a more personal question for our listeners.

You know, you could use AI for a lot of this optimization, but when you don't think about the learnings and what you want to bring into the world, the risk is that you leave behind a legacy of enshittification, right? And as a listener hearing this: when you use AI, you could be creating all of these changes in the world in ways that could be really shitty for society, and at a much faster pace. So how do you think about your legacy, and what can you do differently in terms of experimenting and being deliberate about it, so that you leave behind the legacy you want?

[00:18:51] Brian Ardinger: So maybe some practical takeaways for, say, a team leader listening right now who's frustrated with their current OKR performance review system. Typically, a lot of those systems are driven from the top down, and the team leader doesn't have an opportunity to say, hey, we're just going to throw the OKRs out and do something else.

What's some advice or thoughts on how a person in an organization might practically implement some of this and not run afoul of what they're ultimately being measured on?

[00:19:17] Radhika Dutt: Such an important point. One of the things that I like to tell leaders is that with this whole OHL approach, you don't even have to challenge the OKRs in the system you're working in, right?

Because what happens is, and I've tried both ways, I initially used to go top down with leaders and say, let's not use OKRs. When you empathize with them, what you realize is they have this fear: if I stop using OKRs, maybe my team is not going to be rigorous with numbers, maybe I lose the one form of control I do have today. Right?

And so, the answer is, instead of challenging OKRs, you just quietly introduce OHLs as an addition within your team, because OHLs don't add process burden. Honestly, it's a way of thinking. The way I instituted this was at a company called Signal Ocean, in the maritime space.

They used OKRs. Teams were constantly saying, here's our number of weekly active users, here's how we're doing against the OKRs, et cetera. What we started doing instead was: let's just have an open conversation within our own small team. Let's use this approach and just talk to me about how well it's working, what you've learned, and what you're going to try next.

And just in the safety of our space, within just two such presentations, the complete mindset shifted, Brian. It was amazing; people started becoming detectives, constantly puzzle setting and puzzle solving. So within our team, we started using this approach, and the whole team was taking more ownership of features, had more conviction, and was making better decisions.

And the CEO noticed it, because when I had enough confidence that the team was doing really well with this approach, we used it for our monthly business reviews and presented in this format to the CEO, and there was such a huge difference. He commented later, he said to me: OKRs, I now see them as the view in the rear-view mirror, whereas OHLs are the ears on the track. They help you anticipate what's coming so that you can make course corrections that are more effective. That, to me, was fascinating. So, as a leader, you can just quietly introduce OHLs within the safety of your team and just start using them. It's a mindset, not a process burden.

[00:21:50] Brian Ardinger: Well, this has been a fantastic conversation. I really do appreciate you coming on and giving us a little sneak peek into what you're building and how this can be used in the real world. For people who want to find out more about you or the book, what's the best way to do that?

[00:22:01] Radhika Dutt: The book is still in the writing process, so in fact, this is a sneak preview for listeners. If you're listening and you want to use OHLs, and you start using them and want to share your story with me, this is an exciting opportunity: you can tell me your story, and it might actually make it into the book as a case study.

I have a few case studies already, and I'd love to hear our listeners' stories. They can reach out to me on LinkedIn, or you can find my website, rdutt.com. You can also look up Radical Product Thinking; that website is radicalproduct.com. I'd love to hear from people.

[00:22:38] Brian Ardinger: Thanks for being on Inside Outside Innovation. Looking forward to Escaping the Performance Trap and looking forward to having you on the show again. Thanks very much.

[00:22:45] Radhika Dutt: Thank you. This was great.

[00:22:50] Brian Ardinger: Thank you. That's it for another episode of Inside Outside Innovation. If you want to learn more about our team, our content, our services, check out InsideOutside.io or follow us on Twitter @theIOpodcast or @Ardinger. Until next time, go out and innovate.

FREE INNOVATION NEWSLETTER & TOOLS

Get the latest episodes of the Inside Outside Innovation podcast, in addition to thought leadership in the form of blogs, innovation resources, videos, and invitations to exclusive events. SUBSCRIBE HERE. You can also search every Inside Outside Innovation Podcast by Topic and Company.

For more innovation resources, check out IO's Innovation Article Database, Innovation Tools Database, Innovation Book Database, and Innovation Video Database. Amazon Affiliate links for books. Transcripts done through Descript Affiliate.
