Eleanor Salter

SEO Executive (Academy)

6 biggest AI announcements from Google I/O 2024

5 mins read

At its annual I/O developer conference on 14 May 2024, Google announced an array of AI software updates.

AI was the order of the day, with no Google product left untouched by the possibilities of generative AI powered by Gemini, Google’s “largest and most capable AI model”.

Google CEO Sundar Pichai has said that these advancements bring us ‘closer to our ultimate goal: making AI helpful for everyone.’

GPT-4o from OpenAI

This update came just one day after OpenAI’s announcement of its new GPT-4o model, also designed to create more natural human-computer interaction. OpenAI shared content showing GPT-4o interpreting video and responding to what it could see in real time. It is clear that both Google and OpenAI are heading in a similar direction, creating AI products which aim to do more of the work for users whilst mimicking human interactions.

Many new technologies and concepts were demoed, all with new names and features. Project Astra, Veo and Gemini Nano, Pro and Flash – we’ve summarised the biggest announcements from Google I/O 2024:

1. Google Search: AI Overviews

Sundar Pichai introduced the ‘Search’ section of the keynote speech by saying that, as Google’s founding product, Search is “one of our greatest areas of innovation and investment”.

After a year of being able to test Search Generative Experience (SGE) in Search Labs, Google is officially launching AI Overviews, an evolution of their AI-powered search results. The update is already rolling out in the US and is expected to be available to “over a billion” users by the end of 2024.

During the keynote, Google shared that AI Overviews would be an entire AI-generated search results page. In our SEO team’s early testing, we have seen that these overviews resemble featured snippets, but combine information from a variety of sources to present a single, comprehensive answer to the query you’re searching for.

At present, they appear at the top of results pages, above the organic results.

An example of an AI Overview for the search “What are the signs of MS?” using Google’s Search Labs.

With expanded AI Overviews, more planning and research capabilities, and AI-organised search results, the new Gemini model aims to make searching easier. Liz Reid, Head of Search at Google, said that with this update, ‘Google will do the Googling for you.’

The key features of AI Overviews include:

Multi-step reasoning - Google will be able to answer more complex questions and help with tasks such as planning trips or creating meal plans.

AI-organised search results page - Searchers will see results categorised under relevant, AI-generated headlines.

Adjustable AI Overviews - Users will be able to adjust their AI Overview with options to simplify the language or break it down in more detail.

Ask with video - This feature will enable users to search Google using video input.

Google presented their latest updates to Search as ‘generative AI at the scale of human curiosity’, signalling their intention to use AI to answer people’s complex questions better than ever before.

Search is one of the most important areas for Google to innovate with AI, and crucial to get right: advertising revenue from Search accounted for 77.8% of Google’s parent company Alphabet’s total revenue in 2023.

2. Gemini Enhancements: Gemini 1.5 Flash

Gemini 1.5 Flash is a smaller and faster version of the next generation of Gemini large language models (LLMs).

Flash was announced alongside Gemini 1.5 Pro and shares many of the same advanced capabilities, but this version has been optimised for speed, making it handy for tasks where quick processing is essential.

It is also capable of multimodal reasoning, meaning users are not limited to one input and one output type and can prompt a model with virtually any input to generate virtually any content type.

This update will be available today, in preview, in more than 200 countries and territories and will be generally available in June.

3. Project Astra

Project Astra is Google DeepMind’s vision for the future of AI assistants. The aim is to develop AI that can understand and respond to situations similarly to humans. This AI should be able to perceive, reason, and converse quickly and naturally. Google has been working on improving AI systems to process information faster and respond in real time, making interactions with AI feel more natural and conversational.

4. Gemini Nano: Google’s on-device AI for Chrome and Android

Google is integrating Gemini AI into Chrome on desktop, starting with Chrome 126. This integration will use Gemini Nano, a lightweight large language model, to power on-device AI features like text generation.

Google has updated Gemini Nano and optimised the browser to load the model quickly. With this integration, users can generate product reviews, social media posts, and other content directly within Chrome.

Gemini Nano has also gained multimodal capabilities, meaning it can process text, images, audio, and speech. It can also return any of these modes as outputs.

5. Veo: video creation

Veo is Google’s new video generation model that can create high-quality video output from text, image and video prompts. It is a competitor to OpenAI’s Sora model, which can create video from text.

Google highlighted key selling points of Veo’s AI-generated videos, including its ability to understand a prompt’s tone, creative vision, and cinematic styles and terms like “aerial shots”, and its capacity to output realistic videos over a minute long. They pitched the idea of using Veo to create concept videos as part of the creative process.

Donald Glover (filmmaker, Childish Gambino) partnered with Google to create a video using Veo, which was demoed at Google I/O – lending credibility to Google’s new tools.

6. Google Workspace & Gemini

Gemini in the Workspace side panel now uses Gemini 1.5 Pro, offering more advanced reasoning and longer context windows. This means you can ask Gemini questions and get insightful responses right from your Gmail, Docs, Drive, Slides, and Sheets. Plus, new features are coming to the Gmail mobile app, like email summarisation, contextual smart replies, and Gmail Q&A powered by Gemini.

Our take

Google's annual I/O developer conference showcased a wide range of AI software updates, highlighting the company's commitment to enhancing user experiences across various platforms and applications.

With these updates, this latest conference has once again shown us Google’s direction of travel and – in the case of Google Search – it is a continuation of what they have been hinting at, and working towards, over the past year.

In a race to outperform OpenAI, some of the features announced are still in development and not yet launched, with some release dates extending to the end of 2025. Promo videos for features that are not yet available made the keynote speech feel aspirational and futuristic, although Google hopes it won’t be long before we are using their full suite of Gemini-powered tools.

Google continues to lead the way in advancing AI technology, bringing users closer to a future where human-computer interactions are seamless, intuitive, and natural.

Contributors to this blog also include Jenny Hearn, Edd Baldry, and Olivia-Mae Foong.

Want to chat about these updates in more detail?

Get in touch