Edd Baldry

Director of Innovation

How to Solve the Right Problem (with AI)

5 min read

Humans hate problems. They feel dangerous, awkward and threatening. As an innovator, I’m supposed to lean happily into problems and embrace them. I have tools, strategies and frameworks to talk about the “problem space”. But if I’m not careful it’s not long before Tim Urban’s panic monster steps in and I find I’m creating solutions for a problem I don’t properly understand.

Now AI has arrived, and for lots of folks it feels like the panic monster has stepped in. We can see it will transform our ways of working. We can see it will transform how we interact with people using services. We can see it will transform how we interact with donors. These all promise awesome futures, but for the moment, they look a lot like problems.

To state the obvious: you can’t solve a problem if you don’t know what the problem is.

Most of the time the wrong problem gets solved because trying to define the problem feels uncomfortable. Having a problem, and sitting with it, creates vulnerability. It requires folks to say the dreaded words, “I don’t know”. This is made even worse when it’s a gnarly, wicked problem. AI unfortunately sits in that category. Lots of people are jumping straight to technical solutions when we should all be sitting with our problems for longer.

The danger of the first plausible problem

Six people are on a video call. They’re trying to solve a problem. Each person is giving their different perspective when someone jumps in. They give a plausible description of what the problem is. It’s like a switch has been flipped. All of a sudden no-one is talking about the problem; everyone’s moved on to the solution.

Change of scene. Two friends kayak down a river. They stop for lunch. While eating, they see a kitten float past. Then another. When a third one appears they both get back in their kayaks. They paddle out and start rescuing kittens. The kittens keep arriving.

Paddling out to rescue the kittens was the equivalent of identifying the first plausible problem.

After a few minutes one of the friends paddles back to the shore and disappears. The kittens keep floating down. Frustrated, the person left behind redoubles their effort to save every kitten. Ten minutes later, no more kittens arrive. After a while the person who left returns. The one who stayed to save all the kittens is furious. They shout, "Why did you leave me all alone? I had to do all the work myself!" The other replies, "I had to go upstream to stop the idiot who was throwing kittens in the river."

Back to our video call. No-one has taken the time to check whether the problem has been adequately defined. Everyone just wants to get on and solve it. “Can’t you see we HAVE to save the kittens!?!” our fictional friend on the kayak might say. This is normal. Through school, university and our experience at work we’ve always been pushed to solve things. Humans love to solve things; we seem to be quite literally hardwired to do so.

If we try to solve the first plausible problem we’re likely to be treating symptoms. This is true of all problems, not just those that involve AI.

Getting to the root

I’ve had lots of incredibly exciting conversations with people trying to make the world better who now want to use AI to help them. There’s often a laundry list of things that it can solve, processes it can improve, people it can reach. Often they sound like they’d be valuable but equally often they’re looking at a symptom rather than getting into the weeds of why things aren’t working as they should.

Applying AI to symptoms won’t fix your problems. It may well create new, unexpected, problems.

Getting to the root requires two things at once: asking questions while also thinking about potential consequences. At Torchbox, we use a workshop we call a ‘Problem Definition’ workshop. There are lots of other strategies that can be applied here: the Five Whys, the fishbone (or Ishikawa) diagram, the iceberg model, and so on.

The root cause is often the key to the solution. As Einstein probably didn’t say, “If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and five minutes thinking about solutions.” The insight holds regardless of attribution: once the problem is properly defined, the solution becomes much easier to see.

Avoiding solutions

Solutions feel tangible. They have a mass and gravity to them. This is even more true of the new AI tools, because they’re so shiny and new. Implementing a solution offers the glowing end-of-day satisfaction that you’ve “done something”. You can go to bed knowing you’ve created a design, written some code or produced a report. Something exists thanks to your labour. It’s not so easy if you’re stuck in the dark world of figuring out what the problem is.

To avoid the lure of the solution there’s a simple trick. Don’t talk about them when you’re defining the problem. If you’re trying to make decisions about what problems AI might solve: don’t talk about AI.

The UK’s Design Council came up with the Double Diamond in 2005. The simplicity of the Double Diamond is its biggest strength. On the left side you define the problem, using divergent and then convergent thinking. Only once the problem has been fully defined do you get to move into the solution stage.

Over the years the Design Council has evolved the Double Diamond into its systemic design framework. It loses some of the simplicity, but gains by recognising that defining a problem is often non-linear and takes time.

Putting theory to action

Defining problems is hard. It’s hardest when the problem has lots of layers to it. Remember: if you jump straight to solutions, you are not necessarily solving the problem. It’s entirely possible you’re ignoring the actual problem and solving something else entirely. Moving too quickly can make things worse.

What to do? To understand a problem you need lots of perspectives. At a minimum that involves getting different people within your organisation into the same workshop at the same time. Ideally it should also involve information from people beyond your organisation who you’re working with.

At Torchbox, we’re as excited as everyone else about AI. I believe it will completely transform how nonprofits work and the value we can deliver. Used correctly, to solve deep, systemic problems, it can change everything. The caveat is that we need to take a problem-first approach and understand what needs to be solved before we try to solve it.

We’d love to support you on this journey. If you’re interested in organising problem definition work for your charity or nonprofit, please drop me a line. I’d love to hear more about the problem you’re trying to solve! Book into my calendar to chat.