Your AI sees everything

Git history. File contents. Terminal output. That .env it just read "for context." AI coding assistants are the new attack surface. They're helpful. They're also watching.

what claude sees
[Claude reads your codebase for context]
Reading .env...
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxx
STRIPE_KEY=sk_live_xxxxxxxxxxxx
[Now it's in the conversation context]
[Logged. Analyzed. Maybe trained on?]

The New Threat Model

AI assistants create exposure vectors that didn't exist before.

Context ingestion

AI assistants read your files to understand your codebase. Every file they read becomes part of the conversation. Including .env. Including git history.
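One mitigation is to scan a file for credential-shaped strings before anything (assistant or human) ingests it. A minimal sketch using Python's standard library; the patterns below are illustrative, not exhaustive, and real scanners like gitleaks or trufflehog ship far larger rule sets:

```python
import re

# Illustrative patterns for common credential shapes.
SECRET_PATTERNS = [
    re.compile(r"sk-ant-[A-Za-z0-9_-]{10,}"),   # Anthropic-style key
    re.compile(r"sk-proj-[A-Za-z0-9_-]{10,}"),  # OpenAI-style key
    re.compile(r"sk_live_[A-Za-z0-9]{10,}"),    # Stripe-style key
    re.compile(r"postgres://\S+:\S+@\S+"),      # URL with embedded password
]

def find_secrets(text: str) -> list[str]:
    """Return every credential-shaped substring found in text."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]

if __name__ == "__main__":
    sample = "STRIPE_KEY=sk_live_abcdef1234567890"
    print(find_secrets(sample))  # ['sk_live_abcdef1234567890']
```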

Screenshot sharing

You share a screenshot of an error. Your terminal is visible. Your environment variables are printed in the stack trace. Now they're in Discord, Slack, Twitter.
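Before sharing any terminal capture, mask values so only variable names are visible. A one-line sketch, assuming a POSIX shell with sed:

```shell
# Print variable names with values masked; safe to screenshot.
printenv | sed 's/=.*/=<redacted>/'
```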

Code suggestions

The AI saw your API key in context. Now it suggests code with the key hardcoded. You accept the suggestion. You commit. You push. Your secret is out.

If the AI can't read it, it can't leak it.

Encrypted secrets are gibberish to AI. That's the point.

What AI Sees

With .env (plaintext)

# .env - AI reads this for context
ANTHROPIC_API_KEY=sk-ant-api03-xxxx
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxx
STRIPE_SECRET=sk_live_xxxxxxxx
DATABASE_URL=postgres://prod:s3cr3t@host/db

# AI now knows your secrets
# AI might suggest code using them
# AI conversation is logged somewhere
# AI might be trained on this later

With .env.encrypted (ciphertext)

# .env.encrypted - AI reads this
ANTHROPIC_API_KEY=ENC[AES256_GCM,data:xxx...]
OPENAI_API_KEY=ENC[AES256_GCM,data:xxx...]
STRIPE_SECRET=ENC[AES256_GCM,data:xxx...]
DATABASE_URL=ENC[AES256_GCM,data:xxx...]

# AI sees encrypted blobs
# AI can't suggest hardcoded secrets
# Logs contain nothing useful
# Training data is worthless

Claude Code

Your AI pair programmer needs API keys. Keep them out of its context.

agent.py

#!/usr/bin/env python3
"""Claude agent that can't leak its own API key."""
from anthropic import Anthropic

# API key comes from environment, injected by envv
# Claude Code sees this file, but not the actual key
client = Anthropic()

def ask_claude(prompt: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}]
    )
    return response.content[0].text

if __name__ == "__main__":
    print(ask_claude("Explain quantum computing simply."))
run
# Secrets decrypted at runtime, never in files
envv run -- python agent.py
Quantum computing uses...

OpenAI / GPT

chat.py

from openai import OpenAI

# OpenAI SDK reads OPENAI_API_KEY from environment
# Cursor/Copilot might read your .env for context
# But .env.encrypted is just ciphertext
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Python?"}
    ]
)

print(response.choices[0].message.content)
run
envv run -- python chat.py
Python is a high-level programming language...

AI-Specific Traps

Accepting suggestions blindly

# AI saw your API key in .env
# AI suggests this code:
client = OpenAI(api_key="sk-proj-xxxxx")

# You hit Tab to accept
# You commit
# You push your secret to GitHub
# Bots find it in seconds
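A defense-in-depth option is a git pre-commit hook that rejects staged changes containing key-shaped strings. A minimal sketch; the patterns are illustrative, and dedicated tools like gitleaks do this far more thoroughly:

```shell
#!/bin/sh
# .git/hooks/pre-commit -- block commits containing key-shaped strings.
# Patterns are illustrative, not exhaustive.
if git diff --cached | grep -E 'sk-(ant|proj)-[A-Za-z0-9_-]{10,}|sk_live_[A-Za-z0-9]{10,}'; then
  echo "Refusing to commit: possible API key in staged changes." >&2
  exit 1
fi
```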

Environment variables only

# AI sees .env.encrypted (ciphertext)
# AI suggests this code:
client = OpenAI()  # Uses env var

# No hardcoded secrets
# Nothing to accidentally commit
# AI couldn't leak what it can't read
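Reading from the environment also lets your program fail fast with a clear hint, instead of a deep SDK stack trace, when a key is missing. A small helper sketch:

```python
import os
import sys

def require_env(name: str) -> str:
    """Fetch a required variable, exiting with a clear hint if absent."""
    value = os.environ.get(name)
    if not value:
        sys.exit(f"{name} is not set; run under `envv run -- ...`")
    return value

# The SDK client would then be built from the checked value, e.g.:
# client = OpenAI(api_key=require_env("OPENAI_API_KEY"))
```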

Debug screenshots

# Your terminal shows:
$ printenv | grep KEY
OPENAI_API_KEY=sk-proj-xxxxxxx
STRIPE_KEY=sk_live_xxxxxxx

# You screenshot the error
# You post to Stack Overflow
# Your keys are now public

Keys in memory only

# Your terminal shows:
$ cat .env.encrypted
OPENAI_API_KEY=ENC[AES256_GCM,...]
STRIPE_KEY=ENC[AES256_GCM,...]

# Screenshot is safe
# Only ciphertext visible
# Actual keys never on screen

LangChain

Chains, agents, tools—all need API keys. None should see them in plaintext.

langchain_agent.py

from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate

# Both keys come from environment
# Your IDE's AI can't see them
gpt = ChatOpenAI(model="gpt-4")
claude = ChatAnthropic(model="claude-sonnet-4-20250514")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful coding assistant."),
    ("user", "{input}")
])

# Choose your model
chain = prompt | claude

response = chain.invoke({"input": "Explain async/await"})
print(response.content)
run
pip install langchain-openai langchain-anthropic
envv run -- python langchain_agent.py

The Safe Workflow

What AI tools see

# Your project structure
src/
  agent.py       # Code (AI reads this)
  config.py      # Code (AI reads this)
.env.encrypted   # Ciphertext (useless)
.gitignore       # Config (AI reads this)
README.md        # Docs (AI reads this)

# AI has full context of your code
# AI has zero knowledge of your secrets
# AI can't suggest hardcoded keys
# AI can't leak what it can't read
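The split maps cleanly onto version control, too: ignore any plaintext env file, commit only the ciphertext. A sketch of the corresponding .gitignore entries, assuming the layout above:

```
# Never commit plaintext secrets
.env
.env.*
# But the encrypted file is safe to commit
!.env.encrypted
```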

What your app sees at runtime

# When you run with envv:
envv run -- python agent.py

# Environment contains:
ANTHROPIC_API_KEY=sk-ant-actual-key
OPENAI_API_KEY=sk-proj-actual-key
DATABASE_URL=postgres://prod:pass@host/db

# Decrypted in memory
# Available to your process
# Never written to disk
# Never in AI context

Multiple AI Services

One encrypted file. All your AI keys.

.env.encrypted

# All your AI service keys in one place
ANTHROPIC_API_KEY=ENC[AES256_GCM,data:xxxx...]
OPENAI_API_KEY=ENC[AES256_GCM,data:xxxx...]
GOOGLE_AI_KEY=ENC[AES256_GCM,data:xxxx...]
COHERE_API_KEY=ENC[AES256_GCM,data:xxxx...]
REPLICATE_API_TOKEN=ENC[AES256_GCM,data:xxxx...]

# Safe to commit
# Safe for AI to read
# Useless without your age key
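At runtime, code can probe which providers are configured without ever printing a value. An illustrative sketch, mirroring the variable names in the file above:

```python
import os

# Provider name -> environment variable holding its key.
PROVIDERS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "google": "GOOGLE_AI_KEY",
    "cohere": "COHERE_API_KEY",
    "replicate": "REPLICATE_API_TOKEN",
}

def configured_providers() -> list[str]:
    """Names of providers whose key is present; values are never read out."""
    return [name for name, var in PROVIDERS.items() if os.environ.get(var)]
```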

Setup

secure your ai keys
# Install envv
curl -fsSL https://getenvv.com/envv | sh
# Create your keys file
cat > .env << 'EOF'
ANTHROPIC_API_KEY=sk-ant-your-key
OPENAI_API_KEY=sk-proj-your-key
EOF
# Encrypt and delete plaintext
envv push .env --env dev
rm .env
# Now AI assistants see only ciphertext
cat .env.encrypted
ANTHROPIC_API_KEY=ENC[AES256_GCM,data:...]
# Run your AI scripts
envv run -- python agent.py

Shell Alias

Make it second nature.

~/.zshrc or ~/.bashrc

alias ai='envv run --'

# Now you can run:
# ai python agent.py
# ai npm start
# ai pytest
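A stricter variant refuses to run while a plaintext .env is still on disk. A sketch for ~/.zshrc or ~/.bashrc, using the same envv commands shown above:

```shell
# Like the alias, but bail out if an unencrypted .env exists.
ai() {
  if [ -f .env ]; then
    echo "plaintext .env found; encrypt it with 'envv push .env' first" >&2
    return 1
  fi
  envv run -- "$@"
}
```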

AI is your assistant. Not your auditor.

Give it context. Keep it blind to secrets.

curl -fsSL https://getenvv.com/envv | sh