
Google Gemini Live Adds Video & Screen Sharing | Image Source: mashable.com
BARCELONA, Spain, March 09, 2025 - At Mobile World Congress (MWC) in Barcelona, Google revealed a significant upgrade to its conversational AI, Gemini Live. Starting this month, Gemini Advanced subscribers on Android will be able to use live video and screen-sharing capabilities, an important step toward real-time AI interaction. The improvement, first previewed under Project Astra, aims to make AI conversations more interactive and visual.
What’s new in Google Gemini Live?
Google’s Gemini Live, a multimodal AI designed for dynamic conversations, is receiving a game-changing update. Users will now be able to share live video or their screens directly with the AI, enabling a new level of interactivity. According to Google’s official blog, the feature will let users point their smartphone cameras at objects, menus, or workspaces and receive instant contextual information from Gemini Live.
For example, a user could show Gemini Live a dish at a restaurant abroad, and the AI could translate the menu in real time while offering insights into ingredients or cultural significance. This aligns with Google’s broader vision of integrating AI more naturally into everyday tasks, moving past the limits of text-only interaction.
How does this compare to other AI assistants?
Gemini Live’s new capabilities put it in direct competition with ChatGPT’s voice mode and other AI-driven voice assistants. Unlike standard voice interactions, Google’s AI will now be able to process and respond to visual input from the real world. According to Ars Technica, the move brings Google one step closer to a true “universal AI agent” capable of understanding and responding to complex real-world inputs.
One of the strengths of this development is real-time video processing. While other AI models can summarize YouTube videos or analyze static images, live video interaction remains largely unexplored territory. Google’s approach lets users engage with their surroundings while receiving instant AI assistance, closing the gap between digital intelligence and physical reality.
Integrating Project Astra into Gemini
The introduction of live video capabilities comes alongside a strategic internal shift at Google. The team behind Project Astra, originally part of the DeepMind research division, is officially joining the Gemini app team. As 9to5Google reported, the transition marks the step from research prototype to full consumer product.
Project Astra, introduced at Google I/O 2024, was conceived as an advanced AI capable of contextual awareness. A notable demonstration showed the AI recognizing handwritten notes, analyzing code on a screen, and even remembering where a user had left their glasses. By folding Astra’s research into Gemini, Google aims to accelerate development of an AI with long-term memory and environmental awareness.
Why is live video important to AI?
The addition of live video processing opens up a wealth of possibilities for AI applications. From walking users through step-by-step cooking instructions to providing real-time troubleshooting help, the potential is vast. Google offered an example in which a ceramic artist asked Gemini Live for glaze recommendations while showing freshly made pottery. The AI analyzed the pieces and provided style suggestions tailored to the artist’s preferences.
The feature also has implications for accessibility. Users with visual impairments could use Gemini Live’s video capabilities to receive spoken descriptions of their surroundings. Similarly, students could have the AI annotate diagrams or explain complex concepts simply by showing them to the camera.
Google’s New Leadership in AI
Alongside these product updates, Google announced a leadership change within the Gemini division. Chris Struhar, a former Meta executive, has been appointed vice president of product for the Gemini app. Struhar, who previously led creator-focused products at Facebook, brings extensive experience in building AI-driven user experiences. As Mashable reported, Struhar will report to Sissie Hsiao, the vice president and general manager of Google Assistant and Gemini, signaling a strong focus on integrating AI into everyday digital interactions.
The leadership change underscores Google’s commitment to making Gemini a central part of its AI ecosystem. With competition from OpenAI, Microsoft, and other technology giants intensifying, Google is positioning itself as a frontrunner in the AI arms race.
When will users have access?
According to Google’s official blog, the live video and screen-sharing features will begin rolling out to Gemini Advanced subscribers later this month as part of the Google One AI Premium plan. The plan, which offers enhanced AI features, is expected to serve as a testing ground for Google’s most ambitious AI capabilities ahead of broader public releases.
Although no official timeline has been given for free users, industry observers believe these features could become standard in the Gemini app by mid-2025. The approach mirrors Google’s gradual rollout of previous AI advances, ensuring stability and incorporating user feedback before mass adoption.
With these updates, Google is redefining the AI assistant landscape, making digital interactions more natural, intuitive, and visually interactive. As competition heats up, the success of Gemini Live’s new features could determine how users engage with AI in the future.