Google Lens transforms your search with video and voice


Google Lens is transforming the user experience by adding video and voice search capabilities. Thanks to these innovations, it becomes easier to explore and understand the world around us using nothing more than a smartphone. In this article, we take a close look at these new features and their impact on visual search.
A new dimension of search with video

If you’ve ever had trouble identifying an object using just a picture, Google Lens now has a solution for you. Using video, you can ask questions about what you see in real time. Imagine yourself in an aquarium, observing the fish with curiosity. Just open the Google Lens app and hold the shutter button, then ask your question out loud, such as: “Why are they swimming together?”
This functionality is powered by Google’s Gemini AI model, which analyzes the content of the video while taking the spoken question into account.
Advanced technology at the service of the user
Rajan Patel, vice president of engineering at Google, clarified that this advancement is based on a fundamental change in the way Google Lens processes videos. Instead of just capturing the video, Google extracts a series of images from it, allowing the Gemini model to:
- Understand multiple images in sequence
- Provide an accurate response based on the content viewed
This approach significantly improves Google Lens’ ability to give relevant responses, delivering a richer user experience. Although there is no support yet for identifying sounds in a video, such as birdsong, Google plans to add this capability in the future.
Voice questions: a simplified search
In addition to videos, Google Lens also includes the voice question function for image search. To use this feature, simply point your camera at the subject of interest, hold down the shutter button, and ask your question out loud.
This update simplifies the search process: previously, users had to type their questions after taking a photo. Now, this spoken interaction makes using the app more fluid and natural. The feature is currently rolling out on Android and iOS, but it is only available in English for the moment.

How to take advantage of these new features?
To maximize the use of Google Lens, here are some tips:
- Use video to ask questions about dynamic contexts: whether it’s a sporting event or an animal in motion, video allows for richer interaction.
- Use voice questions: this simplifies interaction, especially when you’re on the go.
- Stay up to date with new features: Google continues to improve and integrate new options, so don’t hesitate to explore the application regularly.
Conclusion: the future of visual search
With these innovations, Google Lens opens a new horizon in the field of visual search. By integrating video and voice, Google allows users to interact more intuitively with their environment. This promises to redefine the way we access information and makes learning more engaging and interactive. It remains to be seen how these features will evolve, but one thing is certain: the future of search is getting more and more exciting.