In February 2026, Truffle Security published research showing that nearly 3,000 Google API keys deployed in client-side code can also authenticate to the Gemini API. Around the same time, a string of public billing incidents made the rounds: ~$82,000 in Gemini charges in 48 hours, ~€54,000 in 13 hours, a $15,000 bill that ended a solo developer’s startup, a student briefly facing a ~$400,000 quota turnaround bill. Different victims, different paths to the same outcome: an AIza... key — leaked, scraped, or originally shipped per Google’s own guidance into a public artifact — ended up authenticating to Gemini and racking up bills before anyone noticed.
For over a decade, Google’s documentation said AIza... keys weren’t secrets. Maps keys were meant to ship in the Android manifest. Firebase keys were meant to ship in client JavaScript. Developers built accordingly. Then Gemini changed the rules.
How the rules changed
Truffle calls this retroactive privilege expansion. When the Generative Language API launched, every existing unrestricted AIza... key on a project where it was enabled silently became a Gemini credential: “No warning. No confirmation dialog. No email notification.” That’s the mechanism Truffle documents, and it explains the public billing incidents. Google has acknowledged the issue and is shipping fixes, though I haven’t seen it publicly enumerate which keys were affected, so the broader claim is still inference.
The lesson the wave of incidents drives home:
An API key is only as safe as the restrictions on it. A “Maps key” with no API restriction isn’t a Maps key. It’s a Gemini key, a Vision key, a Translate key, whatever the attacker wants.
What “restrictions” actually look like
A typical Android Maps integration declares the key right there in the manifest:
```xml
<!-- AndroidManifest.xml -->
<application>
    <meta-data
        android:name="com.google.android.geo.API_KEY"
        android:value="${MAPS_API_KEY}" />
</application>
```
On iOS the equivalent lives in the Info.plist or in the SDK init call:
```swift
// AppDelegate.swift
import GoogleMaps

GMSServices.provideAPIKey("YOUR_MAPS_API_KEY")
```
Neither of these is hiding the key. APKs can be unzipped. iOS bundles can be cracked open on a jailbroken device. Build-time obfuscation slows somebody down for an afternoon. The actual control lives outside your app code, in the Google Cloud Console, on the key itself:
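To see how little hiding the key buys, here is a minimal sketch of pulling key-shaped strings out of an APK. The helper names are mine, and the pattern (the `AIza` prefix followed by 35 URL-safe characters) is the commonly cited shape of these keys, treated here as a heuristic rather than a spec:

```python
import re
import sys
import zipfile

# Commonly cited shape of Google API keys: "AIza" followed by
# 35 URL-safe characters. A heuristic, not an official format spec.
AIZA_RE = re.compile(rb"AIza[0-9A-Za-z_\-]{35}")

def find_keys(data: bytes) -> set[bytes]:
    """Return every key-shaped string found in a blob of bytes."""
    return set(AIZA_RE.findall(data))

def scan_archive(path: str) -> set[bytes]:
    """An APK is just a zip; scan every member for key-shaped strings."""
    found: set[bytes] = set()
    with zipfile.ZipFile(path) as apk:
        for name in apk.namelist():
            found |= find_keys(apk.read(name))
    return found

if __name__ == "__main__":
    for key in scan_archive(sys.argv[1]):
        print(key.decode())
```

Anyone with the published artifact can run the equivalent of this in seconds, which is why the key string itself can't be the security boundary.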
- API restrictions. Limit the key to specific APIs only — for a Maps key, that’s Maps SDK for Android, Maps SDK for iOS, Places, etc. With this set, a request to generativelanguage.googleapis.com from that key returns a hard 403.
- Application restrictions. For Android, lock to a package name + SHA-1 of the signing certificate. For iOS, lock to a bundle ID. The Google SDKs send those as headers, and the API rejects requests that don’t match.
On iOS, application restrictions are weaker than they look. They’re a check the SDK adds to outbound requests, not a cryptographic proof. Somebody calling the REST API directly with a spoofed bundle ID header will get through. In practice it’s still a meaningful barrier on iOS because sideloading isn’t common, App Store distribution enforces signing, and a non-jailbroken device won’t run a tampered binary anyway. Android is the noisier surface. APKs are easier to pull, repackage, and run on a developer device. Either way, the API restriction is the part that’s enforced server-side, and the part you should rely on.
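A toy model of the enforcement logic makes the asymmetry concrete. This is entirely my own sketch, not Google's implementation; the header names are the ones Google's REST docs use for app-restricted keys, but the `KeyConfig` and `authorize` shapes are invented for illustration:

```python
from dataclasses import dataclass, field

# Header names Google's REST docs use for app-restricted keys.
# The enforcement logic below is a toy model, not Google's code.
IOS_BUNDLE_HEADER = "X-Ios-Bundle-Identifier"

@dataclass
class KeyConfig:
    allowed_apis: set[str] = field(default_factory=set)     # empty = unrestricted
    allowed_bundles: set[str] = field(default_factory=set)  # empty = unrestricted

def authorize(key: KeyConfig, target_api: str, headers: dict[str, str]) -> bool:
    # API restriction: the server itself knows which API the request hit,
    # so no client-supplied data can change `target_api`.
    if key.allowed_apis and target_api not in key.allowed_apis:
        return False
    # Application restriction: checked against a client-supplied header,
    # so a direct REST caller can claim any bundle ID it likes.
    if key.allowed_bundles:
        if headers.get(IOS_BUNDLE_HEADER, "") not in key.allowed_bundles:
            return False
    return True
```

In this model, a key with `allowed_apis={"maps"}` fails a Gemini call no matter what bundle ID the caller forges, while a key guarded only by a bundle restriction passes as soon as the attacker copies the right header. That is the sense in which the API restriction is the server-side control.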
Test your own keys today
If you want to know whether any of your own keys are exposed to this right now, Truffle’s PoC is one line:
curl "https://generativelanguage.googleapis.com/v1beta/files?key=$API_KEY"
A properly restricted key returns 403 Forbidden. An unrestricted key on a project where Generative Language is enabled returns 200 OK, which means you have a problem.
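For auditing more than one key at a time, the same probe can be sketched in Python. The endpoint is Truffle's; the helper names, the batch loop, and the status interpretation are my own assumptions, and only `verdict` runs offline (the network call is guarded behind `__main__`):

```python
import sys
import urllib.error
import urllib.request

ENDPOINT = "https://generativelanguage.googleapis.com/v1beta/files?key={key}"

def verdict(status: int) -> str:
    """Interpret the probe's HTTP status for one key."""
    if status == 200:
        return "EXPOSED: key can call Gemini"
    if status in (401, 403):
        return "restricted (or invalid): Gemini call refused"
    return f"inconclusive (HTTP {status})"

def probe(key: str) -> str:
    try:
        with urllib.request.urlopen(ENDPOINT.format(key=key), timeout=10) as resp:
            return verdict(resp.status)
    except urllib.error.HTTPError as err:
        return verdict(err.code)

if __name__ == "__main__":
    for key in sys.argv[1:]:
        print(f"{key[:8]}...  {probe(key)}")
```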
The rule of thumb I use: every key I create, I assume will leak eventually. Then I ask what an attacker can do with it the day it does. If the answer is “call Gemini” or anything else expensive, the restrictions need to land before the key ships.
What we do on our own projects
- Every API key gets restrictions at creation. API restrictions (which APIs it can call) and application restrictions (which apps, IPs, or domains can use it). No more unrestricted keys, ever. No “we’ll lock this down later.”
- Disable APIs the project doesn’t use. The blast radius of a leaked key is bounded by what’s enabled at the project level. AI Studio enabled by default but never used is an open door.
- Application Default Credentials wherever possible. ADC removes the key from the equation entirely — there’s no string to leak. Vertex AI, Cloud SDKs, Firebase Admin from Cloud Functions: all of these support it.
- Secret scanning at two points. A pre-commit hook with gitleaks to catch things on the way out, and GitHub push protection on the way in. Defense in depth, not a substitute for restrictions.
- A calendar reminder for rotation. Quarterly. Anything older than the last rotation gets treated as burned and replaced. It costs me an hour every three months.
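The rotation check itself is trivial to automate. A sketch, with my own helper names, a hypothetical key inventory, and a 90-day window standing in for "quarterly":

```python
from datetime import date, timedelta

ROTATION_WINDOW = timedelta(days=90)  # "quarterly", per the habit above

def burned(created: date, today: date,
           window: timedelta = ROTATION_WINDOW) -> bool:
    """A key older than one rotation window is treated as leaked."""
    return today - created > window

# Hypothetical inventory: key name -> creation date.
inventory = {
    "maps-android": date(2025, 11, 1),
    "maps-ios": date(2026, 1, 20),
}

def audit(today: date) -> list[str]:
    """Names of keys due for replacement as of `today`."""
    return [name for name, created in inventory.items()
            if burned(created, today)]
```

Wire something like this into CI and the calendar reminder becomes a failing check instead of a good intention.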
None of these are exotic. They’re the small habits that turn a leaked key into a non-event instead of a five-figure incident.
The platform is catching up
The worst version of this problem is being patched, even if it took public bills in the high five figures to get there. Per Truffle’s disclosure, Google has committed to:
- Gemini-only defaults for keys created through AI Studio. No more silent inheritance from cross-service keys.
- Blocking keys discovered as leaked when they’re used against the Gemini API.
- Proactive notification to project owners when a leaked key is detected.
Some of this is already in flight. None of it retroactively fixes older keys created under the original “API keys aren’t secrets” guidance, so the audit on your own project is still yours to do.
The takeaway
Keys leak. Through APKs, screenshots, old branches, lost laptops, documentation somebody helpfully made public. Assume the key will leak, and design the restrictions so the leak doesn’t matter.
—Joshua