
Edd Baldry

Director of Innovation

Adopting AI at Torchbox


In March, OpenAI released GPT-4. The rest is history.

Overnight everybody became an AI expert, and everyone was clamouring to adopt the technology as quickly as possible. We were no different. Our perspective was that, used well, AI could increase our creativity.

Increasing our creativity meant increasing the social impact we could create with our partners. But we knew the journey to adoption wouldn’t be straightforward.

Where were we?

Agencies are supposed to appear confident and infallible. But the truth is, with a technology like GPT-4, we were scared for our future. If it could do everything it was claimed to, many of us worried Torchbox might become redundant. That might explain why, when we first asked folks how they felt about generative AI, the response was muted.

Many of us worried about the ethical side of AI too. We worried about the harmful content that large language models could generate, and about the processes used to create their training datasets. Those conversations continue at Torchbox; six months ago, almost all of our conversations about AI were about ethics and governance.

It wasn’t obvious how we should adopt generative AI tools, and it wasn’t obvious how we would get them adopted across the team even if we wanted to. Our solution was an emergent strategy.

Emerging adoption

I’ve been working with AI for the past four years; in the three years before Torchbox I was running an AI-based startup, so when ChatGPT was released in November last year I was on board immediately. Most others were caught completely off guard. For some, AI looked like the same hype that had made blockchain and “Web3” a big deal. For others, it looked like a threat to their craft. Others didn’t have the headspace in their busy days to think about it.

That was also the case here at Torchbox. In our April survey, only 30% of the team said they were confident adopting generative AI. This is normal: new technologies, culture changes or strategy shifts always take time to spread through an organisation.

An effective way to help adoption along is an ‘emergent’ framework. Emergence is a change-management process: it works by creating advocates for the change, connecting them, and allowing them to adopt the change in the way that works for them.

[Illustration: a downward curve crossed by an upward curve, showing the process of a new way of working emerging]

There’s a great Harvard Business Review article giving an overview of different change-management processes.

As an Innovation team we worked with almost half of our 120-person company. The work was cross-functional, involving every team across Torchbox, not only developers and designers.

Each emergent group contained between four and six people. We worked with each group over four weeks, with two sessions in each of those weeks, running in cycles over the summer with three or four groups at any given time.

Emergent sessions are a deliberately collaborative process in which folks ask and answer their own questions. The Innovation team acted as facilitators, creating space and a direction, but it was important that we didn’t force our views or opinions on participants. With an emergent strategy, the change has to come from the group.

Where are we now?

We ran a company-wide survey last month. You can see the Typeform report here.

  • 93% of the team are using AI in some form
  • 28% of the team are using AI multiple times a day
  • 72% of the team are using AI at least weekly
  • 0% of people are using Bing. Poor Bing!

The fact that 28% of the team are using it multiple times a day is interesting: it implies they’ve embedded it into their workflow. It was a surprise to see how often GPT-3.5 and Claude are being used. As the Innovation team we’ve been advocating for GPT-4 and GitHub Copilot, believing they offered the most value because of their stronger ability to reason. Clearly, other tools are creating value for the team too.

There’s likely to be bias in these results. The survey we sent risks suffering from recall bias, response bias and reporting bias.

Despite those biases, the results indicate that our team is starting to work with these new tools. That will bring benefits to us internally, but most importantly it brings benefits externally. Our belief is that we can create more impactful and creative work using generative AI, and that means better work for our partners and their end users.

Where do we go next?

We are still early on this journey. Last week OpenAI released a new feature that lets you use an image as part of a prompt, which creates new possibilities we’ve already started to explore. In the coming months we expect capabilities to continue to accelerate.
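If you’re curious what that looks like under the hood, here’s a rough sketch of sending an image alongside text through OpenAI’s Python client. It’s illustrative only, not our production setup: the model name, prompt and image URL are placeholders.

    # A minimal sketch of an image-as-prompt request (placeholders throughout).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model you have access to
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this wireframe and suggest improvements."},
                    {"type": "image_url", "image_url": {"url": "https://example.com/wireframe.png"}},
                ],
            }
        ],
    )
    print(response.choices[0].message.content)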

But there is also fragility. In May, researchers at MIT published an optimistic paper suggesting that generative AI could increase efficiency by 50%. We haven’t observed this for everyone: in our emergent sessions we heard from some people getting outsized gains from using AI, while others felt it held them back.

For the Innovation team specifically, we’ve started working with our partners on exactly this. Since July we’ve been working with organisations like Oxfam, Art Fund and CRUK to set direction, experiment and accelerate their innovation work. Unsurprisingly, given the uncertainty AI has created, most of that work has been focussed on generative AI.

Get in touch if you’re interested in hearing more about how we can support your organisation with this transition.
