AI and Innovation at Torchbox
Our innovation team has been working on responsible experiments using Large Language Models (LLMs) since OpenAI released GPT-3.5 in November 2022. The release of GPT-4 was the kick we needed to start exploring them more seriously and to understand their impact on charities, educational institutions and open-source projects.
Ethics
Let’s be up-front about this. There are a number of strong arguments for why a purpose-led organisation like Torchbox shouldn’t be using LLMs. These ethical dilemmas aren’t far-off thought experiments, like an Artificial General Intelligence deciding humans are no longer relevant; they are grounded in the real potential for harm from the biases and incorrect information contained within these models.
We have a small team researching and analysing the risks and responsible use cases, allowing us to develop an ethical framework for how we engage with LLMs, mitigate or eliminate unintended consequences, and stop these systems harming our communities. It’s top of our list: if we can’t resolve the ethics of using LLMs, we will stop experimenting with them.
Getting ready for the future of AI
Irrespective of whether Torchbox or our clients adopt LLMs, disruption is happening. So we have been running a strategic Foresight and Futures thinking workstream across the corporate, government and charity sectors, and we are synthesising that into a package of work and new services.
Kaleidoscope is our blueprint for creating organisations that are ready for an AI future. Our approach blends the best of Strategic Foresight, Design Thinking, Systems Theory, Lean Startup and creativity, harnessing the power of AI to help purpose-led organisations maximise their positive impact. There will be a fuller reveal soon, but if it piques your interest, drop us a message and we'll set up a virtual call to chat further.
Weekly experiments
We’re taking as our starting point that we know nothing. Within the innovation team, we have previous experience working with AI - I even had a startup based around it - but Large Language Models fundamentally change how human-machine interaction works.
We believe very strongly in Eric Ries’ concept of validated learning. We’re deliberately working at a fast cadence to learn as much as possible, as quickly as possible. That isn’t to say we’re running experiments for the sake of it; we’re taking a problem-first approach, with responsibility at its heart, to create meaningful value for our customers and for Torchbox.
A snapshot of the past four weeks:
- Conversations with your Content (CoCo) is an experiment in synthesising complex data from multiple sources into conversational answers (see the sketch after this list).
- Conversations with your Data (CoDa) is an experiment to see if natural language can improve the quality of data reporting from a server (niche, but very valuable if true).
- Wagtail AI is an experiment exploring how useful content generation can live within a content management system.
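
To make the CoCo idea a little more concrete, here is a minimal sketch of the retrieval-augmented pattern an experiment like it might use: embed content from multiple sources, retrieve the passages most relevant to a question, and ask the model to answer from those passages alone. Everything here is illustrative and assumes the openai Python library (v0.x, as it existed at the time); it is not the actual CoCo implementation.

```python
# Hypothetical retrieval-augmented sketch of the CoCo idea, not the
# real implementation. Assumes the openai v0.x Python library and an
# OPENAI_API_KEY in the environment.
import openai
import numpy as np

EMBED_MODEL = "text-embedding-ada-002"
CHAT_MODEL = "gpt-3.5-turbo"

def embed(texts):
    """Embed a list of strings into vectors."""
    resp = openai.Embedding.create(model=EMBED_MODEL, input=texts)
    return np.array([item["embedding"] for item in resp["data"]])

def answer(question, documents):
    """Answer a question using only the most relevant source passages."""
    doc_vectors = embed(documents)
    q_vector = embed([question])[0]
    # Cosine similarity between the question and each source passage.
    scores = doc_vectors @ q_vector / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vector)
    )
    top = [documents[i] for i in np.argsort(scores)[-3:]]  # 3 best matches
    context = "\n\n".join(top)
    resp = openai.ChatCompletion.create(
        model=CHAT_MODEL,
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context. "
                        "If the context is insufficient, say so."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp["choices"][0]["message"]["content"]
```

Grounding answers in retrieved passages, and instructing the model to admit when the context is insufficient, is one practical way to mitigate the incorrect-information risk discussed in the ethics section above.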
We’re aiming to keep up this cadence until at least June. If you’re interested in making a suggestion, we'd love to hear it.
Client experiments
We’re already working with two clients on different applications of LLMs. The first, a UK charity, is exploring how to synthesise its internal data so the helpline team can give faster and more comprehensive information to their users.
We’ve built a feedback system to record whether each response was helpful, which is already enabling us to improve the technology and to flag where the internal data is missing pieces of information. A rough sketch of that feedback loop follows.
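
Here is a purely hypothetical sketch of per-response feedback capture; the names and fields are assumptions for illustration, not the system we built for the client. The idea is simply to log each question, answer and a helpful/not-helpful flag so that gaps in the source data can be reviewed later.

```python
# Hypothetical sketch of per-response feedback capture, not the
# client system described above: append each question, answer and a
# helpful / not-helpful flag to a JSON Lines log for later review.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Feedback:
    question: str
    answer: str
    helpful: bool
    comment: str = ""  # optional note, e.g. "answer missed opening hours"
    timestamp: str = ""

def record_feedback(entry: Feedback, path: str = "feedback.jsonl") -> None:
    """Append one feedback entry to the log."""
    entry.timestamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Unhelpful answers tend to cluster around topics the internal data
# doesn't cover, which points at the missing pieces of information.
record_feedback(Feedback(
    question="What support is available at weekends?",
    answer="I don't have information about weekend services.",
    helpful=False,
))
```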
The second, a US-based educational institute, is taking an even more experimental approach, using the time to explore how GPT-4 and GPT-3.5 perform at creating new experiences for students and internal team members. It’s fascinating watching these experiments develop, and we’ll be writing more about them in the near future.
Catching up with our customers
We are also starting to talk to our clients about their perception of LLMs. These conversations are essential for us to empathise with their needs and to identify our blind spots around the pace of change and the anxiety it might create. We want to ensure that our work is always grounded in problems that customers are looking to solve, either for themselves or for their users. If you’re interested in getting involved, we’d love to get your views in this survey.
An emerging skill-set
Internally, we’re using systems thinking to nurture the new skills and behaviours required to engage with Large Language Models. This cuts across all our teams and departments; the needs of developers, delivery managers and designers, for example, are all different.
We’re approaching this through an emergent strategy, working with small groups of ‘pioneers’ to map out what they’re working on and where LLMs might create an advantage. As an innovation team, we hypothesise that LLMs can improve the quality of work whilst reducing its difficulty and complexity.
Taking an emergent approach allows us to test that theory and provides options to all of us as co-owners of Torchbox.
A space to share our learnings
With everything moving so fast, we’re conscious of how easy it is to feel behind the curve. We believe Large Language Models, assuming we can surmount the ethical challenges, can be transformational for charities and social innovation. We’re determined to ensure that a lack of knowledge isn’t a barrier to adopting the technology.
If, after reading all of that, you’ve got any thoughts or just want to get in touch for a chat, drop us a line and we can arrange a virtual call.