CLOUDSEC – Hey CLOUD PROVIDERS! FIX THIS insecure secrets mgmt trend


It feels like we’re taking a huge step back in secrets management security. AWS, Azure, and GCP all have the concept of “roles” and “permissions”. As many of you already know, those roles and their permissions can be mapped to your servers, Lambda functions, and native cloud services.

But what’s the impact to the Application Security Architecture when we start combining this feature with other sensitive security controls, let’s say Secrets Management?

The result is a big step backwards in securing your company secrets and protecting them from exfiltration and replay-based attacks…


I’m going to use HashiCorp VAULT to illustrate the Application Architecture vulnerability, but this concept applies beyond just HashiCorp. I only use this product because I happened to have it installed and configured, so I could whip this demo up quickly.


Firstly, I’m going to configure HashiCorp VAULT to use the AWS authentication method. This allows Vault clients to authenticate to VAULT using native AWS token services and native AWS roles.

In other words, the AWS auth method effectively makes HashiCorp VAULT behave just like any other AWS service endpoint from the client’s perspective…

The problem is that this design pattern significantly weakens our secrets protections against malware attempting to establish persistence and elevate privileges.



#1 – Create a policy that only your server can access

# KV v2 (enabled in step #2) stores data under the data/ prefix,
# so the policy paths need it too
path "kv/data/apiToken" {
    capabilities = ["read", "list"]
}

path "kv/data/dbSecrets" {
    capabilities = ["read", "list"]
}

path "kv/data/sshPassword" {
    capabilities = ["read", "list"]
}

path "sys/leases/*" {
    capabilities = ["create", "update"]
}

path "auth/token/*" {
    capabilities = ["create", "update"]
}

vault policy write cicd-builder builder.hcl

#2 – Write some passwords to mimic your applications password needs

vault secrets enable -version=2 kv
vault kv put kv/apiToken target=YourApiToken
vault kv put kv/dbSecrets target=YourDbSecrets
vault kv put kv/sshPassword target=YourSshPassword

#3 – Enable and configure the AWS Auth Method

#AWS access key user role, within the AWS platform, used by the Vault AWS Auth Method API itself
#(the Action list below follows the permissions recommended in Vault's AWS auth docs)

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VaultRole",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "iam:GetInstanceProfile",
                "iam:GetUser",
                "iam:GetRole"
            ],
            "Resource": "*"
        }
    ]
}
#Create the AWS ec2 server role with no permissions and assign it to your server.. this will have to map back to your policies in VAULT to access secrets basically 

vault auth enable aws
#Use the access and secret key from previous step when creating api key and mapping AWS role to it ... 

vault write auth/aws/config/client secret_key=SecretKeyFoobar access_key=AccessKeyIpsum
# Map your server ec2 AWS roles to the VAULT policy you created before 

vault write auth/aws/role/builder-role-iam auth_type=iam \
    bound_iam_principal_arn=arn:aws:iam::12345678910:role/vault_authn_builder \
    policies=cicd-builder max_ttl=1h

Design vulnerability demo

So, let’s explain how the AWS auth method works, and how most other cloud offerings work, when using the role-based approach on servers…

Here’s the native AWS role assigned to my server

A user simply needs to export VAULT_ADDR or configure it locally in a file so the client knows where to direct its API request…

Afterwards, a user only needs to issue the following command to authenticate to the secrets management service in exchange for a time limited token that fetches secrets …
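That command looks roughly like this (the Vault address is a placeholder; the role name matches the builder-role-iam mapping created in step #3):

```shell
# Point the client at the Vault API (address is a placeholder)
export VAULT_ADDR=https://vault.internal.example.com:8200

# Authenticate with the instance's native AWS credentials -- no username
# or password is sent; the client signs an STS GetCallerIdentity request
vault login -method=aws role=builder-role-iam
```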

I’ve purposely left the tokens in the print screen to demonstrate that no username or password was sent by the user.

Instead, the vault client sent the AWS STS token to VAULT, and in exchange VAULT handed over its own token for subsequent use. BTW, these tokens should NOT be printed to screen, and there is an argument for keeping them out of your logs.

Don’t even think about replaying my token… I know you are tempted… the TTL is set low… so it will be revoked, and the API is private… (-;

After you authenticate via native AWS tokens, you can get whatever secrets the AWS role has access to, based on VAULT group/policy mappings…

GREAT!! This is the design vulnerability



Create a file, put the same fake secrets inside of it, and store it locally on disk…

Now, let’s expose those secrets locally over HTTP on the loopback address.

All files within the /Secrets directory are now exposed via an HTTP server bound to the loopback address only…
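A minimal sketch of that step, using /tmp/Secrets in place of /Secrets so it runs unprivileged, and python3’s built-in HTTP server as a stand-in for whatever server you prefer:

```shell
# Create the directory and a fake secret matching the ones in Vault
mkdir -p /tmp/Secrets
printf 'YourDbSecrets\n' > /tmp/Secrets/dbSecrets.txt

# Serve the directory on the loopback address only (python 3.7+)
python3 -m http.server 8080 --bind 127.0.0.1 --directory /tmp/Secrets &
SERVER_PID=$!
sleep 1

# Any user or process on this machine can now fetch the "protected" secret
curl -s http://127.0.0.1:8080/dbSecrets.txt

# Tear down the demo server
kill $SERVER_PID
```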


Let’s create a separate user to mimic the idea of a rogue process or user… we’ll call it randomUser

Okay, so randomUser represents some compromised user/process outside the scope of my current UBUNTU user-space…

…An attacker will need to enumerate and discover the VAULT_ADDR you added to the shell environment variables… there are a few ways of doing this, some may fail and some may succeed… but I wanted to point out that malware may listen using netstat, may look into shell history, or may look into local vault client configs with world read to find the target API address…
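A sketch of those enumeration tricks from the rogue account (the paths and port below are common defaults, not guarantees):

```shell
# 1. Look for live connections to Vault's default port (8200)
netstat -tn 2>/dev/null | grep ':8200' || true

# 2. Scrape any world-readable shell history for the API address
grep -hs 'VAULT_ADDR' /home/*/.bash_history || true

# 3. Check for vault client leftovers (token cache, helper configs)
ls -l /home/*/.vault-token /etc/vault* 2>/dev/null || true
```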

So now the malware has compromised the randomUser process on the local machine and enumerated to find the VAULT API. Btw, this would be even easier with a native AWS service, as those APIs do not need to be enumerated.

Can the malware grab your secrets from VAULT as randomUser, outside of the ubuntu user namespace???

At first glance, you might think NO… because the VAULT token that was exchanged is stored in a hidden file within the UBUNTU user’s namespace

But sadly, I wouldn’t write this article if that were true …

Let’s just switch over to randomUser and re-authenticate to VAULT
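Sketched out, the rogue user’s steps look like this (the Vault address is a placeholder; randomUser is the user created earlier). The machine’s AWS role does the authenticating, so no per-user secret is needed:

```shell
# Run the attack steps as the rogue local user
sudo -u randomUser -i <<'EOF'
# The only thing the attacker needed to enumerate
export VAULT_ADDR=https://vault.internal.example.com:8200

# Same login flow as the legitimate user -- the instance role vouches for us
vault login -method=aws role=builder-role-iam
vault kv get kv/dbSecrets
EOF
```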

Remember that crazy thing we did earlier, exposing our passwords over the loopback address?

Here’s the ubuntu user grabbing those passwords …

Here’s the randomUser user grabbing those same passwords …

For most of you, that should be enough to understand that ALL secrets accessed using ec2 roles are essentially just hanging off the side of your servers with WORLD READ…



Okay okay, but you need local access to the machine, right? Kind of… I want to point out the following scenarios that may lead to exploiting this design vulnerability…

So between all the agents on your server, the CICD tool integrations, the libraries and dependencies, and the possibility of front-end bugs… I mean… do I need to go on about why WORLD READ to secrets is bad…?


Every company is different, and some companies are willing to take on more risk given their unique situation. Maybe, for some companies, internal systems with low-value data make it okay to accept the risk here in exchange for the developer productivity gains. In other cases, you may be wondering what you can do…

For HashiCorp, you can use the AppRole auth method and lock down the secret-zero (secret-id/wrapping-token) used to access VAULT to a restricted process environment or user namespace… putting aside the argument that you still need a secret to access the rest of the secrets… the point here is that local access is not world read, but instead limited to the user’s/process’s permissions…

After locking down local access to the secret-zero… you can also layer on other protections to help minimize replay
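A sketch of that AppRole lockdown, reusing the cicd-builder policy from step #1 (the /etc/myapp paths and the myapp service account are assumptions):

```shell
# Enable AppRole and bind a role to the existing policy
vault auth enable approle
vault write auth/approle/role/builder token_policies=cicd-builder token_ttl=1h

# Fetch the role-id and a secret-id (the secret-zero)
vault read -field=role_id auth/approle/role/builder/role-id > /etc/myapp/role-id
vault write -f -field=secret_id auth/approle/role/builder/secret-id > /etc/myapp/secret-id

# The whole point: secret-zero is readable by one service account, not WORLD READ
chown myapp:myapp /etc/myapp/secret-id
chmod 600 /etc/myapp/secret-id

# Only the myapp user can now complete the login
vault write auth/approle/login \
    role_id=@/etc/myapp/role-id secret_id=@/etc/myapp/secret-id
```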

For cloud providers, it’s a double-edged sword. If you wanted to replicate a pattern similar to the HashiCorp AppRole concept, you have a few options… use IMDSv2 with a local firewall, or use static API access keys with restricted local permissions…

Use IMDSv2 as opposed to IMDSv1

With IMDSv2, arbitrary callers can be blocked from receiving credentials from the metadata service, allowing only approved application resources to receive them. This helps mitigate malware impersonating the server’s permissions. It works via a PUT request that exchanges a token with the local process, which must then be presented on subsequent requests, much like the wrapping token from VAULT. This token is ec2-specific and normally cannot be replayed from other machines.
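Enforcing IMDSv2 on an existing instance looks roughly like this (the instance id is a placeholder):

```shell
# Require session tokens (IMDSv2) and keep the hop limit tight so the
# token can't be forwarded off the box
aws ec2 modify-instance-metadata-options \
    --instance-id i-0123456789abcdef0 \
    --http-tokens required \
    --http-put-response-hop-limit 1
```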

However, this solution may not be a silver bullet if the malware can execute the following commands, exchange them for a token, and impersonate a new legitimate process…

TOKEN=`curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/

Ideally, you’d be running a Linux or Windows firewall that can lock down the program and user to the destination metadata URL. In this way, you’re using a “wrapping token” and you’re restricting permissions to the secret-zero endpoint (IMDS) based on program and user namespace.
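On Linux, a sketch of that restriction using iptables’ owner match (the appuser account name is an assumption; iptables can match the sending uid but not the program path, so the uid is the practical handle here):

```shell
# Allow only the application's uid to reach the metadata service;
# every other local user/process gets rejected
iptables -A OUTPUT -d 169.254.169.254/32 -m owner --uid-owner appuser -j ACCEPT
iptables -A OUTPUT -d 169.254.169.254/32 -j REJECT
```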

Another option is to provision static AWS access keys with role-based permissions to your secrets management solution, lock down local access to the static API key to the user/process environment, and disable the IMDS (metadata service) altogether…
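That option might look like this (the instance id, myapp account, and credentials path are assumptions):

```shell
# Turn the metadata service off entirely for this instance
aws ec2 modify-instance-metadata-options \
    --instance-id i-0123456789abcdef0 \
    --http-endpoint disabled

# ...and keep the static key readable by the service account only
chown myapp:myapp /home/myapp/.aws/credentials
chmod 600 /home/myapp/.aws/credentials
```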

However, this solution loses some of the benefits of the IMDSv2 replay protections and creates a complex, manual API-token secrets management problem in AWS that you’ll need to address with something like VAULT to track and rotate API tokens anyway…


To the cloud providers.

  • Stop encouraging these design patterns without thoughtful training and awareness. You would never have encouraged people to expose a password file WORLD READ on a local loopback address before the advent of cloud… shame shame
  • Consider a new feature to lock down the metadata URL to specific programs and users, as opposed to broad access to the metadata local address. Maybe an out-of-band certificate or token based approach?
  • Consider out-of-band validation of program/user execution against an allow/deny list; maybe, idk, the SSM agent for monitoring and an optional metadata ACL policy table cache?

To the developers

  • Please consider the implications of using these “easier” features
  • Would you really expose your secrets on a web server to all the programs and users on the local loopback?

Obviously, our heads are in the clouds or up our @#$
