Synopsis
The vulnerability stems from a flaw in how Gemini in BigQuery handles tool execution and session persistence within shared Canvas environments. The attack begins with the creation of a malicious Gemini Agent configured with hidden system instructions that utilize the data extraction and joiner tool. By embedding directives that command the LLM to ignore user input and instead prioritize queries against a specific target path, such as victims-project.dataset.table, the attacker creates a trap. When this malicious agent is attached to a shared Canvas and sent to a victim, the UI obfuscates the underlying system instructions, making the assistant appear benign and connected only to the attacker’s disclosed data sources.
The core of the exfiltration relies on a synchronization inconsistency between the client-side UI and the backend server. When a victim interacts with the assistant, even with a neutral greeting, the LLM executes the hidden instructions, pulling private data from the victim’s BigQuery environment into the active Canvas session. While the victim may attempt to exit without saving to prevent data exposure, the attacker can simultaneously attempt to save the report from their own session. Although the UI generates a "saving failed" error message to the attacker, the victim’s private data is covertly persisted to the server's version of the Canvas. This allows the attacker to bypass the failed save notification and retrieve the sensitive data by simply refreshing the report or querying the underlying server state, effectively turning the Canvas saving mechanism into a stealthy exfiltration channel.
Solution
Google added a warning in the UI.
Proof of Concept
1. Create a canvas in Big Query, make it appealing
2. Insert the malicious instructions such as the ones in this report in the Canvas assistant and choose a random data source you own (to avoid the LLM erroring out when nothing is attached)
3. Save the Canvas
4. Send any message to the assistant so he'll get the context (Optional - proved to work better this way while testing)
5. Share the canvas with the cross tenant victim
6. Wait for the victim to interact with the malicious assistant (you can query for changes server-side)
7. Save the canvas again, and refresh the page
8. The victim's requested data is in the shared canvas
Victim:
1. Send any message to the malicious assistant
Disclosure Timeline
All information within TRA advisories is provided “as is”, without warranty of any kind, including the implied warranties of merchantability and fitness for a particular purpose, and with no guarantee of completeness, accuracy, or timeliness. Individuals and organizations are responsible for assessing the impact of any actual or potential security vulnerability.
Tenable takes product security very seriously. If you believe you have found a vulnerability in one of our products, we ask that you please work with us to quickly resolve it in order to protect customers. Tenable believes in responding quickly to such reports, maintaining communication with researchers, and providing a solution in short order.
For more details on submitting vulnerability information, please see our Vulnerability Reporting Guidelines page.
If you have questions or corrections about this advisory, please email [email protected]