...

Google Gemini used to hack a smart home: Researchers just showed how AI chatbots can be tricked


Published on: Aug 07, 2025 12:33 pm IST

Imagine AI being used to trigger actions in your smart home. That’s exactly what researchers have demonstrated using the Google Gemini AI bot.

Imagine if hackers could use a popular AI chatbot, such as Google Gemini, to manipulate your physical surroundings, like turning off lights and appliances, and potentially trigger bigger, more dangerous events down the line? Seems straight out of a Sci-Fi movie, but this is precisely what researchers have demonstrated by infecting a Google Calendar invitation, which in turn allowed them to hijack Gemini and manipulate the real-world environment.

The infected commands were sent as Google Calendar invites.(Google)
The infected commands were sent as Google Calendar invites.(Google)

As spotted by WIRED, three security researchers demonstrated this by hijacking Gemini, Google’s primary AI assistant, which is found on various Android phones. The researchers achieved this by first infecting a Google Calendar invitation with instructions to change the state of electronic devices in a home. Later, when they asked Gemini to summarise the calendar invitations for the upcoming week, these infected instructions were activated, ultimately turning off the lights.

Researchers are calling this a first-of-its-kind hack

This is reportedly the first time a generative AI system was infected to manipulate the real-world environment. It demonstrates the kinds of risks that large language models (LLMs) can pose as they become increasingly connected to physical objects, like smart home devices and integrated with AI agents to complete tasks.

This is part of broader research titled ‘Invitation Is All You Need: TARA for Targeted Promptware Attack Against Gemini-Powered Assistants’.

“LLMs are about to be integrated into physical humanoids, into semi- and fully autonomous cars, and we need to truly understand how to secure LLMs before we integrate them with these kinds of machines, where in some cases the outcomes will be safety and not privacy,” says Ben Nassi, one of the researchers at Tel Aviv University, was quoted as saying by the report.

Google is taking action

The report also details how Google is aware of this. Google’s Andy Wen claims that the vulnerabilities have not been exploited by hackers, but the company is taking them seriously. The report adds that the researchers behind ‘Invitation is All You Need’ reached out to Google in February, and the teams have since been working on the flaws and developing defensive mechanisms against AI prompt injection attacks.

MOBILE FINDER: iPhone 16 LATEST Specs And More



Source link

Previous Article

Dreame Technology Unleashes Eleven Smart Home Breakthroughs, Ushering in a New Era of Intelligent Living in the U.S.

Next Article

Vornado 133 Small Room Air Circulator Fan, 2 Speeds, Adjustable Head, Table Fan for Desk, Nightstand, Black

Write a Comment

Leave a Comment

Your email address will not be published. Required fields are marked *

Submit Comment

Subscribe to our Newsletter

Subscribe to our email newsletter to get the latest posts delivered right to your email.
Pure inspiration, zero spam ✨
Seraphinite AcceleratorOptimized by Seraphinite Accelerator
Turns on site high speed to be attractive for people and search engines.