Navigating AI: burst bubbles, challenges, and the path to social innovation
"The bubble has burst. The emperor clearly has no clothes. We can go back to what we were doing before." I've been reading a lot of comments like this across social networks.
OpenAI has started to lose users, and Microsoft is struggling to make the economics work. So we, as social innovators, don't need to pay attention to AI anymore. Or do we?
Gartner first published its Hype Cycle in the 1990s. It's not without its critics, not least because it's rare for items to move from start to finish on the cycle, but it's a useful thinking tool. It runs from the Innovation Trigger, up to the Peak of Inflated Expectations, down into the Trough of Disillusionment, then along the Slope of Enlightenment before arriving at the Plateau of Productivity.
Back in April and May we were at the Peak of Inflated Expectations. We have been here before with AI: when Kasparov was defeated by Deep Blue in 1997, there was a sense that it would only be a short time before machines overtook humans.
The peak for me was an Nvidia-sponsored piece of research released by MIT, in which researchers suggested that generative AI could create productivity gains of 50%. Eye-watering when you consider that electricity created gains of around 30%, and the internet just under 10%.
For others, the peak was when researchers connected to OpenAI published their paper on arXiv, boldly stating that 80% of the US workforce would see their work impacted by large language models. Alongside all of the chatter on socials, it felt like the world was changing overnight.
The trough of disillusionment
Since the peak, cracks have started to appear. It's been revealed that Microsoft loses $20 for every user of GitHub Copilot. OpenAI saw a dramatic drop in usage over the summer; the implication was that it really was just college students who were using the tool. Life-cycle analyses of the tools paint a sad picture too: it appears that a request to a large language model is five times more polluting than a Google search.
The National Eating Disorders Association (NEDA) fired its helpline team and replaced them with an AI chatbot. Unfortunately, the chatbot started generating deeply harmful content. Meanwhile, team members at Samsung accidentally revealed trade secrets whilst using large language models. In the US, a lawyer got into trouble for citing imaginary court cases. Amnesty International published fake, AI-generated photos. Suboptimal all round!
And organisations have been finding it hard to integrate the new tools into existing systems. AI poses such a fundamental challenge to existing ways of working that it has proven difficult to adopt within already complex systems. We experienced this ourselves at Torchbox, where we took an emergent approach to integrating the tools. And we've seen it working with our partners, who are starting on their journey to set direction and accelerate the impact they can create with AI.
Towards enlightenment?
Roy Amara, an American futurist, coined a useful rule of thumb: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." We're seeing this play out with AI at the moment.
In the past months, Andy and I have run webinars with Mind, GOSH and the academic publisher Sage. We've talked about Barbie, Maybelline and Coca-Cola: not your usual cohort when thinking about social innovation, but all have been doing interesting things with the possibilities created by AI.
A range of interesting new tools has been released. You can now prompt OpenAI's models with images. JustGiving is empowering fundraisers to write better stories. Salesforce's Einstein GPT is helping anyone build propensity scores for their customers. Fabric promises to organise every disparate piece of information you have into a personal knowledge store. Unstructured promises to take unstructured enterprise data and make it usable.
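To make the image-prompting point concrete, here's a minimal sketch using the official OpenAI Python SDK. The model name, prompt and image URL are illustrative placeholders rather than a recommendation:

```python
# A minimal sketch of prompting an OpenAI model with an image.
# Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the
# environment; the model name and image URL are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # any vision-capable model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this photo in one sentence."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.org/fundraiser.jpg"}},
            ],
        }
    ],
    max_tokens=100,
)

print(response.choices[0].message.content)
```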
Inspiring stories are also starting to emerge of how social innovators are using the tools. It's been exciting to see the enhanced creativity from WWF, Dogs Trust and Friends of the Earth, and the greater impact being created by translating content, making it simpler, or understanding donors better.
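As a sketch of that simplification use case, here's one way a charity might rewrite supporter-facing copy in plain English. Again this assumes the OpenAI Python SDK; the model choice, prompt and example text are our own illustrations, not drawn from any of the organisations above:

```python
# A minimal sketch of using a large language model to simplify content.
# Assumes the OpenAI Python SDK (v1+); model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def simplify(text: str) -> str:
    """Ask the model to rewrite text in plain English."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Rewrite the user's text in plain English, "
                        "suitable for a reading age of about nine."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(simplify(
    "Your munificent contribution facilitates the amelioration "
    "of canine welfare."
))
```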
At the same time, we're learning more about the dangers. Yuval Noah Harari, author of Sapiens, has written about how AI has hacked the operating system of civilisation (paywall). Mo Gawdat, formerly of Google X, has talked about the danger of AI's compounding capabilities. And the Distributed AI Research Institute (DAIR) continues its community-rooted AI research.
We'd love to hear from you. Get in touch if you've got a challenge, question or want to chat about innovation.
Book a call