Kubernetes has become the de facto compute platform for running and managing microservices at scale. However, as with any powerful system, secure deployment to Kubernetes clusters—especially in enterprise environments—presents a number of non-trivial challenges.
In this article, we’ll walk through the architecture and implementation of a secure deployment solution that avoids the complexity of traditional agent-based approaches and ensures that secrets and cluster access are properly protected.
At its core, deploying to Kubernetes involves interacting with the Kubernetes API server. In cloud environments, that API server typically resides inside a private network—exactly where it should be from a security perspective. Public access to the Kubernetes API is a security risk and must be avoided in enterprise setups.
This introduces a primary challenge: How do we deploy services to Kubernetes clusters when we cannot access the Kubernetes API from outside the private network?
Furthermore, deploying applications to Kubernetes often involves Helm charts, which require several configuration parameters. Many of these parameters are secrets—API keys, credentials, tokens—that should never be committed to source control or exposed in plain text.
That’s our second challenge: How do we securely populate secrets into Helm chart values?
There are several tools available today that attempt to enable secure Kubernetes deployments:
These approaches work but often introduce operational burdens, brittle configurations, and unnecessary complexity, particularly for smaller teams or simpler use cases.
To overcome these challenges, my team developed a lightweight, secure deployment mechanism built around a containerized script we call the deployment agent.
Here’s how it works:
Below is an architecture diagram of the secure Kubernetes deployment solution:
Let’s go through a secure Kubernetes deployment step by step.
The deployment agent runs as a containerized job inside the same private network as the Kubernetes cluster. This ensures that access to the Kubernetes API server is secure and local—no need to expose it to the internet.
Managing secrets securely is critical for production-grade Kubernetes deployments. In our architecture, secrets are never hardcoded or stored in source control. Instead, we leverage native secret management services provided by the cloud provider:
These secrets are created and maintained using Terraform, which ensures that access policies and secret lifecycles are fully defined as code. The deployment agent uses its associated IAM role or Azure service principal to authenticate and retrieve the secrets securely at runtime.
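As a minimal sketch of what "secrets defined as code" can look like (the resource and variable names below are hypothetical, not the actual Terraform configuration), an Azure Key Vault secret might be declared as:

```hcl
# Hypothetical example: the secret value arrives via a sensitive variable,
# never from source control.
variable "db_password" {
  type      = string
  sensitive = true
}

resource "azurerm_key_vault_secret" "db_password" {
  name         = "db-password"
  value        = var.db_password
  key_vault_id = azurerm_key_vault.main.id # assumes an existing Key Vault resource
}
```

Terraform then owns the secret's lifecycle and access policy alongside the rest of the infrastructure definition.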
To simplify secret integration with Helm, we use a placeholder system in our values.yaml files. Rather than embedding raw secret values, we define them as templated references. For example:
```yaml
database:
  password: {{ az:kv:db-password }}
  username: my-app-user
```
Here’s how this system works:

- az indicates the cloud provider (Azure in this case)
- kv refers to the backing secret service (Key Vault)
- db-password is the key within that secret store

The deployment agent parses the values.yaml file before deployment. When it encounters a placeholder like {{ az:kv:db-password }}, it queries the designated secret store, fetches the secret value using the configured credentials, and replaces the placeholder in-memory. The final rendered values.yaml—with real values substituted—is passed to Helm for deployment.
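The substitution step described above can be sketched in a few lines. This is a minimal illustration, not the agent's actual code: fetch_secret stands in for the real Key Vault or Secrets Manager client, and the placeholder grammar is assumed to be {{ provider:service:key }}.

```python
import re

# Matches placeholders of the form {{ provider:service:key }}
PLACEHOLDER = re.compile(r"\{\{\s*(\w+):(\w+):([\w-]+)\s*\}\}")


def fetch_secret(provider: str, service: str, key: str) -> str:
    """Stand-in for the real secret-store client (Azure Key Vault, AWS
    Secrets Manager, ...). The agent would authenticate here with its
    IAM role or service principal and fetch the value at runtime."""
    fake_store = {("az", "kv", "db-password"): "s3cr3t-value"}
    return fake_store[(provider, service, key)]


def render_values(raw_yaml: str) -> str:
    """Replace every placeholder in the raw values.yaml text in-memory,
    so resolved secrets never touch disk or source control."""
    return PLACEHOLDER.sub(
        lambda m: fetch_secret(m.group(1), m.group(2), m.group(3)),
        raw_yaml,
    )


raw = """\
database:
  password: {{ az:kv:db-password }}
  username: my-app-user
"""
print(render_values(raw))
```

The rendered text, with real values substituted, is what gets handed to Helm; the original file on disk keeps only the placeholders.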
This process ensures that:
This flexible and secure templating mechanism lets us use standard Helm workflows without customizing upstream charts to explicitly reference Kubernetes Secret objects. It keeps secrets external, dynamic, and decoupled from chart logic.
Deployment logic is abstracted into simple YAML-based deployment scenarios. Each scenario defines:
This makes deployments repeatable, declarative, and version-controlled.
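For illustration, a deployment scenario file might look roughly like this — the field names below are hypothetical, not the agent's actual schema:

```yaml
# Hypothetical scenario file — field names are illustrative only.
name: my-app-prod
chart: charts/my-app
release: my-app
namespace: production
valuesFiles:
  - values/my-app-prod.yaml # placeholders resolved by the agent at deploy time
```

Because each scenario lives in version control, a deployment is reproducible from the file alone: the agent reads it, resolves the placeholders, and runs Helm.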
The agent is triggered by an external CI/CD system, which is authenticated via a limited-permission identity. Depending on the environment, the setup looks like this:
AWS Deployment:
Azure Deployment:
This separation of concerns ensures that the CI/CD pipeline doesn’t have direct access to the Kubernetes API, secrets are never exposed outside the private network, and deployment actions are scoped and auditable.
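By way of illustration only (the pipeline syntax, role ARN, cluster, and task names below are all hypothetical), a CI job on the AWS side might assume its limited-permission role and start the agent without ever touching the Kubernetes API:

```yaml
# Hypothetical CI job (GitHub Actions syntax) — illustrative only.
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write # OIDC federation to a limited-permission AWS role
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/deploy-agent-trigger
          aws-region: us-east-1
      - name: Start the deployment agent inside the private network
        run: |
          aws ecs run-task \
            --cluster private-cluster \
            --task-definition deployment-agent \
            --launch-type FARGATE
```

The CI identity can start the agent and nothing more; the agent's own role, scoped inside the private network, is what holds the Kubernetes and secret-store permissions.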
There are multiple benefits to this secure Kubernetes deployment architecture:
Kubernetes provides powerful orchestration capabilities, but deploying to it securely requires thoughtful design. By placing a minimal deployment agent inside the private network, integrating with native secret stores, and tightly controlling CI/CD roles, we’ve built a solution that balances security, simplicity, and scalability.
This architecture has proven effective in real-world deployments and can be adapted to fit a variety of organizational setups. If you're looking for a secure and manageable way to deploy to Kubernetes without exposing your cluster or secrets, this approach may be the right fit.
We'd love to answer any questions you have. If you'd like to learn more, be sure to reach out.