I have heard something like the following a few times: "You should store all your secrets in a secrets management system. This is the only secure approach for managing your secrets".
This is not universally true. Furthermore, what does "secure" in this context even mean? What are the alternatives to using a secrets management system and why should they be less secure?
This post will explore secrets management from the perspective of cloud native applications. We will take a look at the different solution options for managing secrets.
To follow this blog post you should have a basic understanding of Kubernetes (K8S) and Kubernetes custom resources, as we will use Kubernetes as an example runtime environment for cloud-native applications.
Why do we need secrets management?
Backend applications often require credentials to access other systems: an API key or token for calling another service that your application depends on, a certificate and its private key for TLS authentication, and so on.
So how do you provide a credential/secret to your application? The naive approach is to embed the secret into your application’s source code. That is however a terrible idea. Everybody who has access to your source code now also has access to your secret.
Adding the secret during the build process is also a bad idea. Whoever has access to your application binary, or to the container image that contains it, now also has access to your secret.
The only remaining option is to provide the credential at runtime to your application. There are only two ways to achieve this that work universally with every programming language/framework: [1] [2]
- Provide the secret as an environment variable that can be read by your application
- Write the secret into a file and have your application read the secret from the file
Before we discuss how these two approaches can be implemented, we first look at how secrets are typically used.
Use Cases
There are basically two use cases where secrets have to be integrated.
Applications
The standard approach for deploying applications into public/private cloud environments is:
- Create a container image that contains your application binaries
- Deploy this image into a container runtime environment of your choice. Some examples:
  - Kubernetes (K8S): define a Helm Chart with a Deployment for your application
  - AWS ECS: define and deploy an ECS task, for example using Terraform and the aws_ecs_task_definition resource
  - GCP Cloud Run: create a service, e.g. using Terraform and the google_cloud_run_v2_service resource
As discussed before, there are two options to provide a secret to an application. This is also called injecting a secret into the container. Most runtime environments support both options.
In Kubernetes, you create a Secret resource and then either expose it to the container as an environment variable or mount it as a file inside the container.
The same two options also exist for AWS ECS and GCP Cloud Run.
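In Kubernetes terms, the two injection options can be sketched in a Pod spec like the following (names such as the image and mount path are illustrative; the Secret my-secret is assumed to exist):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-app:latest        # illustrative image name
      env:
        # Option 1: inject a single secret key as an environment variable
        - name: USERNAME
          valueFrom:
            secretKeyRef:
              name: my-secret
              key: username
      volumeMounts:
        # Option 2: mount the secret keys as files under /etc/secrets
        - name: secret-volume
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: my-secret
```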
In Kubernetes, resources are defined as code, stored in YAML files. This also applies to the Kubernetes secrets object that contains the actual secrets. An example:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  username: am9obmRvZQo=
  password: dGhpcyBpcyB0b3Agc2VjcmV0Cg==
The kind indicates that this is a Secret object. It contains two actual secrets with the keys username and password. The secret values themselves are Base64 encoded.
This code should be stored in a version control system (VCS). But then we face the same problem as before: whoever has access to your source code also has access to your secrets, since Base64 is an encoding, not encryption.
Infrastructure
Secrets are not only required on the application level but also on the infrastructure level: API keys/tokens for agents you are deploying to your virtual machines, TLS certificates and private keys for your load balancers, etc.
Infrastructure should also be defined and deployed with Infrastructure as Code (IaC). As IaC is also stored in a VCS, we have the same problem as with applications: how to manage secrets for IaC?
Apart from the source-code-level problem, there is an additional issue: IaC frameworks like Terraform use a state file to track metadata on the resources they manage.
So if your IaC generates secrets, or you provide secrets as input variables to Terraform, then those secrets will be written (in cleartext) to the state file.
State files are written to a persistent storage system. For Terraform, the supported backends include AWS S3, GCP GCS, PostgreSQL, etc. Terraform itself does not encrypt those state files. While the storage systems usually provide data-at-rest encryption, for cloud-based systems this is primarily a compliance feature and not useful from a security perspective.
So for IaC, secrets could end up in either source code or state files.
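To illustrate the state file issue: marking a Terraform variable as sensitive only redacts it from CLI output; the value still lands in cleartext in the state file. A minimal sketch with hypothetical names:

```hcl
variable "api_key" {
  type      = string
  sensitive = true   # redacted in `terraform plan`/`apply` output...
}

resource "aws_ssm_parameter" "api_key" {
  name  = "/myapp/api-key"
  type  = "SecureString"
  value = var.api_key   # ...but written in cleartext to the state file
}
```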
Solution Options
We need something that can create secrets without exposing their cleartext values at the source code, application binary, or container image level.
There are many solutions, but they can all be grouped into one of the following categories:
- Generate secrets in infrastructure code. This only supports a limited number of use cases where you generate the secrets yourself. It will not work for situations where a third-party credential has to be provided to your application/infrastructure.
- Store the encrypted secret in your source code repository / VCS. During deployment, the secret is decrypted and its cleartext value can then be used to, e.g., create a Kubernetes secrets object.
- Retrieve the cleartext secret from another storage system at runtime and then use it, e.g. to create a Kubernetes secrets object.
We are omitting the solution option of storing secrets in cleartext in the VCS for obvious reasons.
Whatever solution is used, once the secret is available in cleartext, it can be injected into a container either as an environment variable or as a mounted file, as mentioned before.
Generating Secrets in Infrastructure Code
For IaC, some types of secrets can actually be generated during deployment.
We will use an example where an application requires an access key for calling the AWS API [3]. This kind of secret can be generated by infrastructure code and passed on to the application, using container runtime environment specific configuration options.
A Terraform example where an AWS IAM user access key is generated and provided to an application deployed in AWS ECS:
resource "aws_iam_user" "test" {
  name = "test"
}

resource "aws_iam_access_key" "test" {
  user = aws_iam_user.test.name
}

resource "aws_ecs_task_definition" "test_task" {
  family                = "test"
  container_definitions = <<TDEF
[
  {
    "cpu": 10,
    "command": ["myapp"],
    "environment": [
      {"name": "mysecret", "value": "${aws_iam_access_key.test.secret}"}
    ],
    ...
  }
]
TDEF
}
First, an IAM user is generated, and an access key is created for that user. The access key secret is then provided as an environment variable to the application that is deployed as an ECS task [4].
The secret is both generated and provided to the application from within Terraform.
Encrypted Secrets in VCS
Probably the best-known solution in this category is Sealed Secrets. Another option is to use the cloud provider's Key Management System (KMS) to encrypt and decrypt data.
We will use Sealed Secrets in the following example. As a prerequisite, you have to do the following:
- Deploy the SealedSecret controller into your Kubernetes cluster. This Kubernetes controller adds support for a SealedSecret custom resource to your cluster.
- The controller will generate a public-private key pair. Retrieve the public key from the controller. [5]
Managing secrets is then relatively simple:
- Use the kubeseal client application to encrypt an existing Kubernetes secret file. You have to use the controller's public key for the encryption process. The output of this encryption process is a custom SealedSecret Kubernetes object, encoded in YAML, that contains the encrypted secret.
- This YAML file can be stored in your source code repository (as it is encrypted) as part of your application's infrastructure code (e.g. Helm chart).
- When you deploy your application to Kubernetes (e.g. using Helm), a SealedSecret object will be created in the cluster.
- The Sealed Secrets controller in the cluster will see this object, decrypt it, and create the corresponding regular Kubernetes secret object that can be used by your application container.
An example of what a SealedSecret object looks like is provided below:
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-secret
spec:
  encryptedData:
    username: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq...
    password: AgCZy85yQmqvS5nZc8ASs5/ix5aKgAeq...
The basic structure is similar to the Kubernetes secret object. The secret values themselves are encrypted, with the ciphertext being Base64 encoded.
Secrets Management Systems
The final option is to store your secret in a central database. A component within your cluster can then retrieve the secret from this database and create the corresponding Kubernetes secrets object. In the cloud native world, these databases are called Secrets Management systems.
There are plenty of implementations available:
- Cloud provider specific services such as AWS Secrets Manager, Azure Key Vault, and GCP Secret Manager
- Self-hosted applications such as HashiCorp Vault
Worth mentioning is the Kubernetes External Secrets Operator, which can integrate different secrets management systems. It allows you to create Kubernetes secrets objects from secrets stored in various secrets management systems.
While the details might differ, the overall structure of how these secrets management systems are used is usually the same:
- You first have to deploy a component inside your Kubernetes cluster. This is a Kubernetes controller or, more generically, some agent. This component has the permissions to retrieve secrets from your secrets management system [6] and create corresponding Kubernetes secrets objects.
- The infrastructure code for deploying your application (e.g. Helm chart) must contain a custom Kubernetes object that references a secret in the secrets management system, e.g. an ExternalSecret object when using the External Secrets Operator.
- When deploying your application to Kubernetes, an ExternalSecret object will be created from the infrastructure code.
- The secrets management component deployed in the first step (e.g. the External Secrets Operator) will see this object, retrieve the actual secret from the secrets management system, and create the corresponding regular Kubernetes secret object that can be used by your application.
The involved components are similar to the Encrypted Secrets approach we discussed before. The only difference is the source of the secrets, which are now retrieved at runtime from an external system (the secrets management system).
An example of an ExternalSecret Kubernetes object is provided below.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: "my-secret"
spec:
  secretStoreRef:
    name: aws-store
    kind: SecretStore
  refreshInterval: "1h"
  data:
    - secretKey: username
      remoteRef:
        key: user-credentials
        version: v1
        property: username
      sourceRef:
        storeRef:
          name: aws-secretstore
          kind: ClusterSecretStore
    - secretKey: password
      remoteRef:
        key: user-credentials
        version: v1
        property: password
      sourceRef:
        storeRef:
          name: aws-secretstore
          kind: ClusterSecretStore
The secretStoreRef defines from which secrets management system the secret can be fetched. In this example, we assume AWS Secrets Manager. For brevity, we omit the definition of the SecretStore object that defines how the External Secrets Operator can access AWS Secrets Manager.

The refreshInterval defines how often the secret value will be read from the secrets management system and updated again, if necessary.

The data section defines the secrets to be retrieved from AWS Secrets Manager and how to build the Kubernetes secrets object from them.
The regular Kubernetes secret object created by the External Secrets Operator will be identical to the one shown before.
It should be noted that you can also use a secrets management system directly from IaC. E.g. when running Terraform with the Terraform Vault Provider, you can retrieve secrets from Hashicorp Vault and use them as variables within your Terraform code. While the secret will not be within your source code, it will show up (in cleartext) in the Terraform state file!
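As a sketch, retrieving a secret with the Vault provider could look like the following (the Vault address, mount, and secret path are illustrative). Any place this value is used in your configuration ends up in the state file:

```hcl
provider "vault" {
  address = "https://vault.example.com:8200"   # illustrative Vault address
}

# Read a secret from a KV v2 engine (mount "secret", path "myapp/db")
data "vault_kv_secret_v2" "db" {
  mount = "secret"
  name  = "myapp/db"
}

# Using the value anywhere in the configuration writes it,
# in cleartext, to the Terraform state file.
output "db_password" {
  value     = data.vault_kv_secret_v2.db.data["password"]
  sensitive = true
}
```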
Threats
Now that we have discussed how secrets are used and what solution options exist, we can finally start looking at the threats.
Overview
When looking at the CIA triad, the main problem is confidentiality. If a secret becomes accessible to a non-authorized principal, every system that can be accessed with this secret could be compromised.
So the first question is: in what locations are secrets accessible in cleartext? In the cloud-native world, many different components are involved in building, deploying and running an application. This is illustrated in the figure below.
Note: not all of these components might be used at the same time. E.g. when you store all your secrets encrypted in the VCS you might not need a Secrets Management System anymore.
Let’s look at how these components have access to a secret in cleartext:
- Version control system (VCS) repository: this is where application source code is stored
- CI/CD system #1: has access to the source code to build the application and the application container image
- CI/CD system #2: when running Terraform, uses a Terraform state file that contains information on all resources created during the deployment, including secrets
- Kubernetes cluster: has access to the container registry and its images to run the application, and to the secrets objects, which are Kubernetes objects
- SealedSecret controller: decrypts a sealed secret and creates the corresponding Kubernetes secret object
- Agent: retrieves a secret from a secrets management system and creates the corresponding Kubernetes secret object
- Secrets management system: central repository where many secrets can be stored
Now let’s look at what component has access to secrets in cleartext, depending on what solution option is being used.
When you generate secrets in IaC, the CI/CD system is executing the IaC code that generates the secret, which is then stored in the Terraform state file. Due to the deployment process, the secret will also exist in cleartext in the runtime environment, e.g. the K8S cluster.
When using Encrypted Secrets, a human pushes an encrypted secret to the source code repository. The only components that will then see the secret in cleartext are the SealedSecrets controller and other components in the Kubernetes cluster, including the application.
When using a Secrets Management System, the secret will never exist in cleartext until the agent fetches it from the secrets management system. The only components that will see the secret in cleartext, apart from the secrets management system itself, are the Kubernetes cluster with its components and the application.
For completeness: using a secrets management system via an IaC plugin should also be discussed. This is equivalent to generating secrets in IaC, because the secrets end up in the state file.
If you don't use any of these options and store your secrets in cleartext in your VCS, then basically every component has cleartext access to the secrets: VCS, CI/CD system, Terraform state, container registry (when the cleartext secret is included in the application or image), and the K8S cluster.
This is summarized in the table below that indicates where a secret is accessible in cleartext, depending on the solution approach.
|  | Encrypted Secrets | Secrets Mgmt System | Terraform generated | Cleartext in VCS |
|---|---|---|---|---|
| VCS |  |  |  | x |
| CI/CD system |  |  | x | x |
| Container Registry |  |  |  | x |
| Terraform State |  |  | x | x |
| Secrets Mgmt System |  | x |  |  |
| K8S, incl. components within the cluster | x | x | x | x |
To summarize: secrets will always exist in cleartext inside the K8S cluster. When generating secrets in Terraform, they will also be accessible in the CI/CD system and via the Terraform state file. When storing secrets in cleartext directly in your source code, they will obviously be all over the place.
Discussion
So what can we learn from this?
Irrespective of what solution you are using to manage secrets, if your K8S cluster is compromised (or at least the K8S namespace where your application and secrets reside), then all your secrets are compromised.
Ultimately, the goal will be to minimize the number of components/locations where secrets are accessible in cleartext.
Additionally, and this is probably the most important aspect: it’s all about access control. When you use a central secrets management system, but 100 people have access to it, then this could be considered less secure compared to generating secrets in IaC, where only a few people have access to the CI/CD and storage system where the Terraform state file is stored.
You also have to take into account operational aspects: whoever is administrating your Kubernetes clusters will also have access to all those secrets that are loaded into the cluster. So introducing separation of duties between K8S cluster operations and secrets management system operations does not provide any security benefits (at least from the secrets perspective).
Encrypted Secrets are, in theory, a tool that allows you to implement separation of duties: an application team, or a completely different team, can own the application's secrets. They can generate the encrypted secrets and push them to the VCS so nobody else has access to them in cleartext. However, once deployed to the K8S cluster, the cluster operations team will have access to the secrets. So separation of duties does not actually work well with this approach.
Summary
Security in the context of secrets management means:
- Don't store secrets in cleartext in your source code
- Limit the number of systems/locations where secrets are accessible in cleartext
- Apply strict access control to these systems
From a technical perspective, it doesn’t matter too much whether you store your secrets in a secrets management system or encrypted in your source code. In both situations, your secrets are compromised as soon as your Kubernetes cluster is compromised.
When deploying secrets management agents/controllers, or controllers that can decrypt secrets, do not use one deployment for the entire cluster. Instead, use one deployment per application/team, with different permissions for accessing secrets. This way you can properly segregate applications/teams from each other.
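With the External Secrets Operator, for example, this segregation can be expressed with a namespaced SecretStore per team instead of a single ClusterSecretStore. A sketch, assuming AWS Secrets Manager and a hypothetical team-a namespace and service account:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore          # namespaced: only usable by ExternalSecrets in team-a
metadata:
  name: team-a-store
  namespace: team-a
spec:
  provider:
    aws:
      service: SecretsManager
      region: eu-central-1
      auth:
        jwt:
          # Service account bound to an IAM role that can only read team-a secrets
          serviceAccountRef:
            name: team-a-external-secrets
```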
Even generating secrets in Terraform and storing them in your Terraform state file can be totally fine, as long as you can ensure that a) your CI/CD system is not compromised and b) you have strict access control for CI/CD and the storage system that contains the state file.
Secrets management systems really start to shine when you use their more advanced features that go beyond acting as a key-value store for secrets: automatic secret rotation, support for different authentication plugins that enable very fine-grained access control (e.g. only a particular AWS IAM role can access a certain secret), or acting as a Certificate Authority that issues certificates.
Last but not least: you have to think about who should have access to secrets and how this aligns with your operational model. When planning your separation of duties, keep in mind that whoever has full access to your Kubernetes clusters also has full access to those secrets [7].