Git history. File contents. Terminal output. That .env it just read "for context." AI coding assistants are the new attack surface. They're helpful. They're also watching.
AI assistants create exposure vectors that didn't exist before.
AI assistants read your files to understand your codebase. Every file they read becomes part of the conversation. Including .env. Including git history.
You share a screenshot of an error. Your terminal is visible. Your environment variables are right there in the stack trace. Now they're in Discord, Slack, Twitter.
The AI saw your API key in context. Now it suggests code with the key hardcoded. You accept the suggestion. You commit. You push your secret.
Encrypted secrets are gibberish to AI. That's the point.
# .env - AI reads this for context
ANTHROPIC_API_KEY=sk-ant-api03-xxxx
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxx
STRIPE_SECRET=sk_live_xxxxxxxx
DATABASE_URL=postgres://prod:s3cr3t@host/db
# AI now knows your secrets
# AI might suggest code using them
# AI conversation is logged somewhere
# AI might be trained on this later
# .env.encrypted - AI reads this
ANTHROPIC_API_KEY=ENC[AES256_GCM,data:xxx...]
OPENAI_API_KEY=ENC[AES256_GCM,data:xxx...]
STRIPE_SECRET=ENC[AES256_GCM,data:xxx...]
DATABASE_URL=ENC[AES256_GCM,data:xxx...]
# AI sees encrypted blobs
# AI can't suggest hardcoded secrets
# Logs contain nothing useful
# Training data is worthless
Your AI pair programmer needs API keys. Keep them out of its context.
#!/usr/bin/env python3
"""Claude agent that can't leak its own API key."""
from anthropic import Anthropic

# API key comes from environment, injected by envv
# Claude Code sees this file, but not the actual key
client = Anthropic()

def ask_claude(prompt: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}]
    )
    return response.content[0].text

if __name__ == "__main__":
    print(ask_claude("Explain quantum computing simply."))
from openai import OpenAI

# OpenAI SDK reads OPENAI_API_KEY from environment
# Cursor/Copilot might read your .env for context
# But .env.encrypted is just ciphertext
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Python?"}
    ]
)
print(response.choices[0].message.content)
# AI saw your API key in .env
# AI suggests this code:
client = OpenAI(api_key="sk-proj-xxxxx")
# You hit Tab to accept
# You commit
# You push your secret to GitHub
# Bots find it in seconds
# AI sees .env.encrypted (ciphertext)
# AI suggests this code:
client = OpenAI() # Uses env var
# No hardcoded secrets
# Nothing to accidentally commit
# AI couldn't leak what it can't read
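To make "nothing to accidentally commit" stick, ignore every plaintext env file and commit only the ciphertext. A minimal .gitignore along those lines (illustrative, not something envv generates for you):

# .gitignore
.env
.env.*
!.env.encrypted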
# Your terminal shows:
$ printenv | grep KEY
OPENAI_API_KEY=sk-proj-xxxxxxx
STRIPE_KEY=sk_live_xxxxxxx
# You screenshot the error
# You post to Stack Overflow
# Your keys are now public
# Your terminal shows:
$ cat .env.encrypted
OPENAI_API_KEY=ENC[AES256_GCM,...]
STRIPE_KEY=ENC[AES256_GCM,...]
# Screenshot is safe
# Only ciphertext visible
# Actual keys never on screen
Chains, agents, tools—all need API keys. None should see them in plaintext.
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate

# Both keys come from environment
# Your IDE's AI can't see them
gpt = ChatOpenAI(model="gpt-4")
claude = ChatAnthropic(model="claude-sonnet-4-20250514")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful coding assistant."),
    ("user", "{input}")
])

# Choose your model
chain = prompt | claude
response = chain.invoke({"input": "Explain async/await"})
print(response.content)
# Your project structure
src/
  agent.py           # Code (AI reads this)
  config.py          # Code (AI reads this)
  .env.encrypted     # Ciphertext (useless)
  .gitignore         # Config (AI reads this)
  README.md          # Docs (AI reads this)
# AI has full context of your code
# AI has zero knowledge of your secrets
# AI can't suggest hardcoded keys
# AI can't leak what it can't read
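If you want one choke point for secrets, config.py can pull them from the process environment and nowhere else. A minimal sketch, assuming the variable names shown above; the require helper is illustrative, not part of envv:

"""config.py - secrets come from the environment, never from disk."""
import os

def require(name: str) -> str:
    # Fail loudly if the variable wasn't injected (e.g. you ran without `envv run --`)
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; run this via `envv run -- ...`")
    return value

ANTHROPIC_API_KEY = require("ANTHROPIC_API_KEY")
OPENAI_API_KEY = require("OPENAI_API_KEY")
DATABASE_URL = require("DATABASE_URL")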
# When you run with envv:
envv run -- python agent.py
# Environment contains:
ANTHROPIC_API_KEY=sk-ant-actual-key
OPENAI_API_KEY=sk-proj-actual-key
DATABASE_URL=postgres://prod:pass@host/db
# Decrypted in memory
# Available to your process
# Never written to disk
# Never in AI context
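A quick sanity check that the variables really were injected, without ever echoing their values (assumes envv is on your PATH):

$ envv run -- python -c "import os; print('ANTHROPIC_API_KEY' in os.environ)"
True
# The key is there for the process, but its value never hits your screen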
One encrypted file. All your AI keys.
# All your AI service keys in one place
ANTHROPIC_API_KEY=ENC[AES256_GCM,data:xxxx...]
OPENAI_API_KEY=ENC[AES256_GCM,data:xxxx...]
GOOGLE_AI_KEY=ENC[AES256_GCM,data:xxxx...]
COHERE_API_KEY=ENC[AES256_GCM,data:xxxx...]
REPLICATE_API_TOKEN=ENC[AES256_GCM,data:xxxx...]
# Safe to commit
# Safe for AI to read
# Useless without your age key
Make it second nature.
alias ai='envv run --'
# Now you can run:
# ai python agent.py
# ai npm start
# ai pytest
Give it context. Keep it blind to secrets.
curl -fsSL https://getenvv.com/envv | sh