Why Credential Security Matters for AI Agents
AI agents are powerful because they can interact with your digital world on your behalf. They send emails through your Gmail account, commit code to your GitHub repositories, manage your Google Drive files, and access your financial data. To do all of this, they need your credentials: API keys, OAuth tokens, passwords, and access tokens for every service they interact with.
This creates a security challenge that is fundamentally different from traditional software. A word processor needs no credentials. A web browser manages its own password store. But an AI agent that orchestrates actions across dozens of services becomes a single point of failure for your entire digital identity. If an agent's credential store is compromised, an attacker gains access to everything the agent could access — your email, your code repositories, your cloud storage, your financial accounts.
The problem is made worse by how most automation platforms handle credentials. Cloud-based tools like Zapier, Make, and IFTTT store your credentials on their servers. You are trusting a third party with the keys to your digital life, and their security is only as strong as their weakest point. Data breaches at automation platforms would expose not just user accounts but the credentials for every connected service.
Nemo takes a fundamentally different approach: your credentials never leave your machine. They are encrypted at rest with AES-256-GCM, decrypted only when a skill needs them, and never transmitted to any external server. The AI language model itself never sees your credentials — they are injected at the runtime level, completely outside the LLM's context window. This architecture eliminates entire categories of credential theft that affect cloud-based platforms.
What Is the Nemo Vault
The Nemo Vault is an encrypted local credential store built into the Nemo application. It serves as the single source of truth for all secrets that Nemo's skills need to function: LLM provider API keys, OAuth access and refresh tokens, cloud service JWT tokens, and any other sensitive values that skills depend on.
The vault is not a standalone password manager — it is purpose-built for AI agent credential management. While tools like 1Password and Bitwarden are designed for humans to store and retrieve passwords, the Nemo Vault is designed for programmatic access by automated skills with strict access controls and full audit logging.
Key characteristics of the vault:
- Local-only storage: The encrypted vault file lives in your ~/.nemo/ directory on your local file system. There is no cloud sync, no remote backup, and no server-side copy
- AES-256-GCM encryption: All credentials are encrypted at rest using the AES-256 cipher in Galois/Counter Mode, which provides both confidentiality and integrity
- Programmatic access: Skills access credentials through the CREDENTIAL_MAP pattern, which maps credential keys to vault entries. The vault decrypts and provides the credential at runtime
- LLM isolation: Credentials are never included in LLM prompts, tool schemas, or conversation history. The language model never has access to your secrets
- Audit logged: Every credential read and write is recorded in an encrypted audit log with timestamps and the requesting skill's identity
The vault is accessible through the Nemo desktop application's Vault tab, where you can view stored credential names (not values), add new credentials, update existing ones, and delete credentials you no longer need. The UI never displays decrypted credential values in full — sensitive values are masked by default.
How Encryption Works
Understanding the encryption behind the vault helps you trust it with your most sensitive data. Here is how the cryptography works:
AES-256-GCM
AES (Advanced Encryption Standard) with a 256-bit key is the gold standard of symmetric encryption. It is approved by the U.S. National Security Agency (NSA) for protecting TOP SECRET classified information. The "256" refers to the key length in bits — a 256-bit key has 2^256 possible combinations, a number so large that brute-force attacks are computationally infeasible even with all the computing power on Earth running for billions of years.
GCM (Galois/Counter Mode) is an authenticated encryption mode that provides two guarantees simultaneously:
- Confidentiality: The encrypted data cannot be read without the correct key. An attacker with access to the vault file sees only random-looking bytes
- Integrity: Any modification to the encrypted data is detected. If someone tampers with the vault file (even changing a single bit), the decryption operation fails with an authentication error rather than producing corrupted data
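Both guarantees are easy to see in practice. The sketch below is illustrative only, not Nemo's actual vault code; it uses the third-party `cryptography` package to encrypt a sample value, decrypt it, and show that flipping a single bit of the ciphertext is rejected rather than silently producing garbage.

```python
# Illustrative AES-256-GCM round trip -- not Nemo's actual vault code.
# Requires the third-party `cryptography` package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=256)   # 256-bit symmetric key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit IV, unique per entry

ciphertext = aesgcm.encrypt(nonce, b"sk-my-api-key", None)

# Confidentiality: only the correct key recovers the plaintext.
assert aesgcm.decrypt(nonce, ciphertext, None) == b"sk-my-api-key"

# Integrity: flipping even one bit makes decryption fail outright.
tampered = bytes([ciphertext[0] ^ 0x01]) + ciphertext[1:]
try:
    aesgcm.decrypt(nonce, tampered, None)
except InvalidTag:
    print("tamper detected")
```

Note that the failure mode is an authentication error, not corrupted output: GCM verifies the authentication tag before releasing any plaintext.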
Key Derivation
The 256-bit encryption key used by the vault is derived using a key derivation function (KDF). Key derivation transforms a source of entropy into a cryptographically strong key of the required length. The derived key is stored securely using the operating system's credential management facilities, which means it benefits from the OS-level protections that your computer already provides (such as Windows DPAPI or macOS Keychain).
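As a concrete illustration of key derivation (the exact KDF and parameters Nemo uses are not specified here, so treat this as a sketch), Python's standard library can stretch an entropy source into a 256-bit key with PBKDF2-HMAC-SHA256:

```python
# Sketch of key derivation; the KDF choice and iteration count are
# illustrative assumptions, not Nemo's documented parameters.
import hashlib
import os

salt = os.urandom(16)            # random salt, stored alongside the vault
entropy = b"machine-held-secret" # hypothetical entropy source

# PBKDF2-HMAC-SHA256 stretches the input into a 32-byte (256-bit) key.
key = hashlib.pbkdf2_hmac("sha256", entropy, salt, 600_000, dklen=32)

assert len(key) == 32            # exactly 256 bits
```

The derived key itself would then be handed to the OS credential facility (DPAPI, Keychain) rather than written to disk.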
Initialization Vectors
Each credential entry in the vault uses a unique, randomly generated initialization vector (IV). The IV ensures that encrypting the same plaintext value twice produces different ciphertext. This prevents an attacker from determining whether two vault entries contain the same value by comparing their encrypted forms. The IV is stored alongside the ciphertext — it does not need to be secret, only unique.
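The effect of per-entry IVs can be demonstrated directly (again an illustrative sketch using the `cryptography` package, not Nemo's code): encrypting the same plaintext under the same key with two fresh IVs yields unrelated ciphertexts.

```python
# Demonstrates why each vault entry gets its own random IV.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
plaintext = b"same-token-value"

# Two entries with identical plaintext but fresh random IVs...
iv1, iv2 = os.urandom(12), os.urandom(12)
ct1 = aesgcm.encrypt(iv1, plaintext, None)
ct2 = aesgcm.encrypt(iv2, plaintext, None)

# ...produce different ciphertexts, so comparing stored entries
# reveals nothing about whether their contents match.
assert ct1 != ct2
```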
At-Rest Encryption
Credentials are encrypted immediately when they are written to the vault and decrypted only when a skill needs them. There is no window where plaintext credentials sit on disk unencrypted. The decrypted value exists only in memory for the duration of the API call that needs it, and is not persisted in any cache, log file, or temporary storage.
What Gets Stored
The vault stores several categories of credentials that Nemo's skills and infrastructure need:
LLM Provider API Keys
Nemo supports 5 LLM providers: Anthropic (Claude), OpenAI (GPT), Ollama (local models), OpenRouter (aggregated access to hundreds of models), and custom API endpoints. Each provider requires an API key (except Ollama, which runs locally). These keys are stored in the vault and injected into HTTP headers when Nemo calls the LLM provider's API. The keys are never logged, never included in error messages, and never visible in the UI after initial entry.
OAuth Tokens
Skills that access Google services (Gmail for email_triage and email_composer) and GitHub (for git_assistant and code_reviewer) use OAuth 2.0 for authentication. The OAuth flow produces an access token (short-lived, typically 1 hour) and a refresh token (long-lived, used to get new access tokens). Both tokens are stored in the vault. When an access token expires, the skill uses the refresh token to obtain a new one transparently.
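The refresh step follows the standard OAuth 2.0 refresh-token grant. The sketch below builds the request a skill would send to Google's documented token endpoint; the function name and the surrounding vault plumbing are hypothetical, and the actual HTTP call is omitted.

```python
# Hedged sketch of an OAuth 2.0 refresh-token grant. The helper name is
# hypothetical; the endpoint and form fields follow Google's documented flow.
from urllib.parse import urlencode

GOOGLE_TOKEN_URL = "https://oauth2.googleapis.com/token"

def build_refresh_request(refresh_token, client_id, client_secret):
    """Return the (url, form-encoded body) for a refresh-token grant."""
    body = urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    })
    return GOOGLE_TOKEN_URL, body

url, body = build_refresh_request("rt-example", "client-id", "client-secret")
# POSTing this body returns JSON containing a fresh access_token
# (typically valid for 3600 seconds), which is written back to the vault.
```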
Cloud JWT Tokens
If you use Nemo Cloud features (marketplace, cloud relay, collective intelligence sync), your authentication is managed via JWT (JSON Web Tokens). The access token and refresh token for the Nemo Cloud API are stored in the vault under cloud.access_token and cloud.refresh_token. These are never stored in plaintext configuration files.
Agent Identity
Your Nemo agent has a unique identifier generated on first boot and stored in the vault as agent.id. This ID is used for collective intelligence participation, marketplace interactions, and trust scoring. It is a random UUID that contains no personally identifiable information.
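A random (version 4) UUID of this kind is generated from random bits rather than from any hardware or user attribute, which is why it carries no personally identifiable information. A minimal sketch:

```python
# Sketch: a version-4 UUID is pure randomness, with no machine-derived bits.
import uuid

agent_id = str(uuid.uuid4())   # e.g. the kind of value stored as agent.id

# Canonical form: 36 characters in five hyphen-separated groups.
assert len(agent_id) == 36 and agent_id.count("-") == 4
```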
What Is NOT Stored
The vault does not store your operating system passwords, browser passwords, or credentials for services that Nemo does not directly integrate with. It also does not store LLM conversation history, task results, or any user data beyond the credentials themselves. Those are managed separately by the audit and history modules.
Credential Injection: How Skills Access Secrets
The most innovative aspect of Nemo's credential architecture is credential injection — the mechanism by which skills get access to the secrets they need without those secrets ever being exposed to the AI language model.
Here is the flow, step by step:
- LLM receives the task: When you ask Nemo to "check my email," the agent's LLM receives the task description and the TOOL_SCHEMAS for the active skill. The tool schemas describe what each tool does and what parameters it accepts. Crucially, credentials are never listed in the tool schemas. The LLM has no knowledge of API keys, tokens, or passwords
- LLM makes a tool call: The LLM decides to call email.fetch_inbox and provides the parameters it knows about (e.g., max_results: 20, label: "INBOX")
- Bridge intercepts the call: The OpenClaw bridge receives the tool call from the LLM. Before executing it, the bridge consults the skill's CREDENTIAL_MAP
- Bridge injects credentials: The CREDENTIAL_MAP tells the bridge which vault entries to inject. For example, {"access_token": "google.access_token"} means "fetch the value stored at google.access_token in the vault and pass it as the access_token parameter." The bridge reads the encrypted value from the vault, decrypts it, and adds it to the tool call parameters
- Tool executes with credentials: The tool handler receives the full parameter set including the injected credentials and makes the authenticated API call
- Result returned without credentials: The tool's response is returned to the LLM for processing. The response never contains credential values — it contains only the result of the operation (e.g., the list of emails)
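The flow above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in (the vault contents, the tool handler, and the bridge function are simplified for illustration); the point is that the secret enters the call at the bridge layer, not from the LLM's parameters.

```python
# Minimal sketch of credential injection; names and values are illustrative.
VAULT = {"google.access_token": "ya29.decrypted-at-runtime"}  # decrypted view

CREDENTIAL_MAP = {"access_token": "google.access_token"}

def tool_email_fetch_inbox(max_results, label, access_token):
    # The handler receives the injected token; the LLM never saw it.
    assert access_token.startswith("ya29.")
    return {"emails": [], "count": 0}   # result only, no credentials

def bridge_execute(tool, llm_params, credential_map):
    # Merge the LLM-supplied params with vault-backed credentials.
    injected = {param: VAULT[key] for param, key in credential_map.items()}
    return tool(**llm_params, **injected)

# The LLM only ever supplies the non-secret parameters:
result = bridge_execute(tool_email_fetch_inbox,
                        {"max_results": 20, "label": "INBOX"},
                        CREDENTIAL_MAP)
assert "access_token" not in result     # the response carries no secrets
```

Because the merge happens inside `bridge_execute`, nothing the LLM emits can cause a credential to appear in its own context window.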
This architecture means the LLM is structurally prevented from seeing your credentials. Even if a prompt injection attack tried to trick the LLM into revealing credentials, there are no credentials in its context to reveal. The injection happens at a layer below the LLM, in Python runtime code that the language model cannot inspect or influence.
Comparison: Nemo Vault vs Alternatives
How does the Nemo Vault compare to other approaches for storing credentials used by automation tools?