I was triaging a database migration issue in production. To investigate, I needed the prod DB credentials. I opened a terminal on my corporate laptop, typed:
```shell
az keyvault secret show \
  --vault-name kv-prod \
  --name db-password \
  --query value -o tsv
```
And the production password printed to my screen.
Not a hash. Not a masked placeholder. The actual plaintext production database password, authenticated with my normal corporate Azure account. No bastion host. No SRE approval. No ticket. Nothing between me and the credential except the Azure CLI and a role assignment that had been granted to my AAD group months earlier for “debugging convenience.”
I had spent years before this working on an Oracle ATG monolith on JBoss. In that world, getting prod DB credentials wasn’t just hard — it was structurally impossible for me. The password lived in a file on a server I had no shell access to, and the only people who could read it were a handful of SREs with keys I would never be given. It wasn’t a policy. It was a topology fact.
The migration from ATG to microservices on Azure Kubernetes didn’t just change our architecture. It quietly flipped our insider-threat model — and nobody noticed, because the new model was “cloud best practice.”
Part 1: How the ATG Monolith Protected Secrets (By Accident)
JNDI Datasources on JBoss
In the old world, the JBoss application server owned the database connection configuration. It was defined once, in `standalone.xml`, by the infra team:
```xml
<!-- standalone.xml on the JBoss box -->
<datasource jndi-name="java:jboss/datasources/AppDS" pool-name="AppDS">
  <connection-url>jdbc:oracle:thin:@prod-db:1521:APPDB</connection-url>
  <security>
    <user-name>app_user</user-name>
    <password>${VAULT::app::db::1}</password>
  </security>
</datasource>
```
The ATG application code never saw the password. It asked JBoss for a datasource by JNDI name and received a connection pool:
```properties
$class=atg.service.jdbc.MonitoredDataSource
dataSource=java:jboss/datasources/AppDS
```
I worked on that codebase for years and could not have told you what the production database password looked like, let alone fetched it.
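The pattern is worth making concrete. Below is a minimal sketch of the container-owned-credential model, in Python rather than Java (all class and variable names are invented for illustration; this is not the ATG or JNDI API): the container builds the connection pool from configuration only the infra team can read, and application code retrieves it by name, so the password never appears in application code.

```python
# Sketch of the container-owned credential pattern (names invented; not real
# JBoss/JNDI APIs). The "container" builds the pool from credentials only it
# can read; application code looks the pool up by name and never sees them.

class ConnectionPool:
    def __init__(self, url: str, user: str, password: str):
        self._url = url
        self._user = user
        self._password = password  # held by the pool; never returned to callers

    def get_connection(self) -> str:
        # A real pool would hand back a live DB connection; placeholder here.
        return f"connection to {self._url} as {self._user}"

class Container:
    """Stands in for JBoss: owns the registry of named datasources."""
    def __init__(self):
        self._registry: dict[str, ConnectionPool] = {}

    def bind(self, jndi_name: str, pool: ConnectionPool) -> None:
        self._registry[jndi_name] = pool

    def lookup(self, jndi_name: str) -> ConnectionPool:
        return self._registry[jndi_name]

# Infra team wires the pool at server startup, from config the app can't read.
container = Container()
container.bind(
    "java:jboss/datasources/AppDS",
    ConnectionPool("jdbc:oracle:thin:@prod-db:1521:APPDB", "app_user", "s3cret"),
)

# Application code: all it knows is the name.
pool = container.lookup("java:jboss/datasources/AppDS")
print(pool.get_connection())  # the credential stays inside the pool object
```

The boundary that matters is in the source code: nothing a developer writes or reads in the application layer ever references the password.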
The Access Model Was Filesystem + SSH
To get the prod credentials in the ATG world, here is the path a curious developer would have had to walk:
- SSH into the production JBoss host — an infra-team-only gate, approval-bound and audited.
- Read `standalone.xml` or the JBoss vault keystore from the local filesystem.
- Know the vault keystore password, which lived in a separate privileged-access vault.
Three gates, two of them organizational. A developer on VPN with full repo access and a corporate laptop could not clear any of them. The secret lived on a machine I had no login to.
This was not good security — it was security by topology. The vault file was effectively plaintext to anyone with root on the box, audit logging on vault reads was thin, and rotation meant editing XML and bouncing servers. But it had one emergent property nobody designed for: the blast radius of an insider reading production credentials was bounded by who had shell on the production JBoss host. That was a handful of SREs. A compromised developer laptop could not exfiltrate a database password, because the laptop had no path to the file where the password lived.
Nobody articulated this property. Which is why, when it disappeared, nobody missed it.
Part 2: What Microservices on Azure Actually Looked Like
The New Topology
The modernization broke the ATG monolith into a set of Spring Boot microservices on Azure Kubernetes Service (AKS). Some services owned their own databases. The rest consumed shared infrastructure: Azure Cache for Redis, an Azure Service Bus namespace, storage accounts.
Every one of those credentials lived in Azure Key Vault, injected into pods via the Secrets Store CSI Driver — the standard AKS pattern:
```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets
spec:
  provider: azure
  parameters:
    keyvaultName: "kv-prod"
    objects: |
      array:
        - |
          objectName: db-password
          objectType: secret
        - |
          objectName: redis-connection-string
          objectType: secret
        - |
          objectName: servicebus-sas-token
          objectType: secret
```
On paper, this is exactly what the cloud-native playbook tells you to do. Secrets out of source control. Rotation is an API call. Workload identity authenticates the pod. Every secret read is logged to Azure Monitor. This is the “twelve-factor” answer — the green checkmark on the architecture diagram next to “secrets management.”
For the runtime system, it was a genuine upgrade over the JBoss vault file. The pods got what they needed, rotation got cheaper, secrets left git history.
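On the pod side, the CSI driver materializes each `objectName` as a plain file under the volume's mount path, and the application just reads files. A sketch of that read path, assuming the commonly used `/mnt/secrets-store` mount point (the mount path is set in the pod spec, not in the `SecretProviderClass` above), with a temp directory standing in for the real mount:

```python
from pathlib import Path
import tempfile

# Sketch: how a pod reads a CSI-mounted secret. The Secrets Store CSI Driver
# mounts each Key Vault object as a file named after objectName under the
# volume's mount path. "/mnt/secrets-store" is a conventional example path,
# an assumption here rather than something the manifest above pins down.
def read_mounted_secret(name: str, mount: str = "/mnt/secrets-store") -> str:
    return Path(mount, name).read_text().strip()

# Simulated mount for illustration: in a real pod the kubelet and the CSI
# driver populate this directory; the application only ever reads from it.
mount = tempfile.mkdtemp()
Path(mount, "db-password").write_text("hunter2\n")

print(read_mounted_secret("db-password", mount))  # -> hunter2
```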
The Role Assignment Nobody Audited
Here is the turn. The Key Vault RBAC assignments, when I finally read them, looked like this:
- The AKS workload identity for each service — correct, it needs to read its own secrets to run.
- The CI/CD service principal — correct, it deploys.
- The developer AAD group, with the `Key Vault Secrets User` role on every environment's vault, including `kv-prod`.
That last one had been granted during the early weeks of the migration, when developers needed to verify that Key Vault was working and reproduce prod connection issues locally. Somebody wrote “for debugging” on a change request, got it approved, and moved on. It was never revoked.
There were network controls on top of this. Reaching Key Vault required being on the corporate VPN. My laptop had to be on an approved device list. Conditional Access enforced MFA. These are real controls, and I am not dismissing them. But every developer on the team cleared all of them by default — that was the normal working setup. Once you were on VPN with a whitelisted machine, every member of the developer group could run:
```shell
az keyvault secret list --vault-name kv-prod
az keyvault secret show --vault-name kv-prod --name db-password --query value -o tsv
# -> plaintext production password, in your terminal, right now
```
And it was not just the database. The same vault held:
- Redis connection strings (with full access keys — Azure Cache for Redis access keys are admin)
- Azure Service Bus SAS tokens (queue and topic access in one token)
- Storage account keys (full container and file share access)
A single `az keyvault secret list` enumerated the blast radius. A single `az keyvault secret show` exfiltrated any line item on it. The network controls gated who could reach Key Vault — they did not gate which humans could read secrets once they were there. And the answer to “which humans” was: almost every developer on the project.
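To be precise about why one role assignment equals the whole vault: Key Vault RBAC read roles are typically granted at vault scope, so the access check does not distinguish between secrets. A toy model in Python, with all identities and secret names invented, makes the blast-radius arithmetic explicit:

```python
# Toy model of vault-scoped Key Vault RBAC. Principals, roles, and secret
# values are invented for illustration; the point is the scoping, not the API.
vault = {
    "db-password": "prod-db-pass",
    "redis-connection-string": "redis-admin-key",
    "servicebus-sas-token": "sb-sas",
    "storage-account-key": "storage-key",
}

# role assignments on kv-prod: principal -> set of roles
assignments = {
    "workload-identity-orders": {"Key Vault Secrets User"},
    "cicd-service-principal": {"Key Vault Secrets User"},
    "developer-aad-group": {"Key Vault Secrets User"},  # the "debugging" grant
}

READ_ROLES = {"Key Vault Secrets User", "Key Vault Administrator"}

def readable_secrets(principal: str) -> set[str]:
    """Vault-scoped RBAC: any read role on the vault exposes every secret."""
    if assignments.get(principal, set()) & READ_ROLES:
        return set(vault)  # the whole vault, not one secret
    return set()

print(readable_secrets("developer-aad-group"))  # every key in the vault
print(readable_secrets("random-contractor"))   # -> set()
```

The model is why “split by blast radius” matters later: with one vault and one role, the only possible answers are everything or nothing.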
Part 3: The Comparison Nobody Put In The Migration Deck
| Dimension | ATG Monolith on JBoss | Microservices on Azure + Key Vault |
|---|---|---|
| Where secrets live | Vault file on the JBoss host | Centralized Key Vault per environment |
| Insider read path | SSH to prod host + vault keystore password + SRE role | `az login` + one CLI command |
| Developer reachability of prod secrets | Effectively zero — no shell on prod | One command away with default “debugging” roles |
| Blast radius of a compromised developer laptop | Whatever source code that developer could edit | Every secret in the vault — DB, Redis, Service Bus, storage keys |
| Audit trail of secret reads | Thin — JBoss doesn’t log vault file reads | Strong — Key Vault logs every access, but only if Diagnostic Settings are enabled and somebody actually reviews them |
| Rotation cost | High — edit XML, bounce servers | Low — Key Vault API call |
| Who pays the cost of access control | Infra team (bounded, operationally painful) | Every team: IAM review, JIT access, secret brokering, synthetic data, rotation automation, audit review |
The verdict the table forces: the cloud model is better at everything except the one thing that matters when something goes wrong — keeping humans away from production secrets by default.
The ATG world had weak auditing and painful rotation. The Azure world fixed both. The ATG world also had a thick wall between developer laptops and production secrets — a wall nobody designed and nobody understood until it was gone. The Azure world replaced that wall with an RBAC role assignment that, if misconfigured even slightly, evaporates it entirely.
Part 4: What “Security as a First-Class Citizen” Actually Costs
Fixing this is not a weekend project. Every item on the list below is weeks of platform team work that ships no customer features. Skip it, and you run a security model that is strictly worse than the monolith you replaced.
- **Strip developer access to prod Key Vaults entirely.** No `Key Vault Secrets User` on developer groups in prod. Not for debugging. Not for the migration. Dev, test, and pre-prod are fine. Prod is not. The moment you grant `get secret` to a human, the topology wall is gone.
- **Just-in-time access via Privileged Identity Management.** If a developer genuinely needs prod secret access for an incident, they elevate via PIM with approval, time-bound expiry, and a logged reason. Standing access to prod secrets for humans should not exist.
- **A secrets broker for debugging.** A small internal service that exposes sanitized credentials: a read-only replica with a different password, a Redis instance with scrubbed data. Developers get credentials that look like prod but cannot touch prod. The real secrets never leave the vault.
- **Synthetic data in lower environments.** The root cause of “I need prod credentials to debug” is usually “my lower environment doesn’t have realistic data.” Solve the data problem, not the credentials problem.
- **Separate vaults per secret class, with least-privilege roles.** Do not put database passwords, Service Bus SAS tokens, and storage account keys in the same vault with the same role. Split by blast radius. The most sensitive vaults should have a handful of identities on them, and none of those should be human.
- **Automated rotation with short TTLs.** If every secret rotates every 24 hours, a grepped credential has a 24-hour shelf life. Rotation doesn’t prevent exfiltration, but it shrinks its value.
- **Key Vault logs shipped to a SIEM, with alerting.** “A developer identity read 47 secrets in two minutes” should page someone. Audit logs sitting unread in Azure Monitor are useless.
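That last alert is cheap to encode. A sketch of the sliding-window rule a SIEM query would express over Key Vault secret-read events (the threshold and window below are illustrative, not recommendations):

```python
from collections import deque

# Sketch: the kind of rule a SIEM alert would encode over Key Vault audit
# logs -- "a single identity reading many secrets in a short window".
class SecretReadAlert:
    def __init__(self, max_reads: int = 20, window_seconds: int = 120):
        self.max_reads = max_reads
        self.window = window_seconds
        self.reads: dict[str, deque] = {}

    def record(self, identity: str, timestamp: float) -> bool:
        """Record one secret-read event; return True if the identity trips the alert."""
        q = self.reads.setdefault(identity, deque())
        q.append(timestamp)
        # Drop events that have aged out of the sliding window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.max_reads

alert = SecretReadAlert(max_reads=20, window_seconds=120)
# 47 reads by one identity, one second apart -- well inside the 2-minute window.
tripped = [alert.record("dev@corp.example", float(t)) for t in range(47)]
print(any(tripped))  # -> True: this should page someone
```

The rule itself is trivial; the expensive part is the plumbing around it: Diagnostic Settings enabled on every vault, logs shipped somewhere queryable, and a human on the other end of the page.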
In the ATG world, the topology gave you most of this for free — badly, accidentally, and with terrible rotation hygiene, but you got it. In the microservices world, you either pay for it in engineering time, or you don’t pay for it at all and hope nobody notices. Most teams I have seen pick option two by default — not because anyone decides to, but because nobody has the budget line to pick option one.
The Takeaway: One Nuance Among Many
The part I keep coming back to is not the specific credential exposure. It is the fact that nobody on the migration ever sat down and asked: what accidental security properties of the monolith are we losing, and what is the plan to recreate them? That question was not on anyone’s checklist. The exposure sat there, silent, for months, until one debugging session made it obvious.
Four things I would put on the list now:
- **Modernization migrations inherit the security properties of the target platform, not the source.** The accidental protections of the legacy system do not transfer. They have to be re-created intentionally, and usually nobody is funded to do it — because nobody articulated them as protections in the first place.
- **“Cloud-native” is not “secure-by-default.”** Azure Key Vault, AWS Secrets Manager, and GCP Secret Manager all solve storage and rotation. They do not solve who should be allowed to read this secret, when, from where, and how will we know if they did. That is an organizational problem, not a product problem.
- **Security has to be a first-class citizen in microservices specifically.** Microservices multiply the surface area where secrets must be stored, accessed, rotated, audited, and revoked. What a single `standalone.xml` used to manage is now a matrix of N services × M secret types × K environments × J identities. That matrix either has a budget line behind it, or it has a breach waiting to happen.
- **This is just one nuance. There are many.** Credential exposure is one entry on a long list of things that silently get worse when you decompose a monolith. Others on that same list include transactional consistency across service boundaries, schema evolution without a shared database, cross-service authorization, cascading failure modes, deploy orchestration complexity, and the onboarding cliff for new engineers I wrote about in Overengineering Microservices. Before breaking a monolith apart, teams owe themselves an honest accounting of all of these — not just the ones the architecture diagram makes visible. Microservices solve real problems. They also create new ones that the monolith was quietly absorbing for you, and the security model is the easiest to miss — because nothing breaks until someone notices. Or worse, until someone uses it.
The monolith was not more secure because it was better designed. It was more secure because it was harder to reach. When you replace “harder to reach” with “one CLI command away,” you owe yourself a real access control model — and that model is work that does not show up on feature roadmaps and does not get celebrated when it ships.
This is one of the quiet taxes of microservices: easy to miss going in, expensive to fix retroactively, and absent from every migration pitch deck I have ever read. Before you break the monolith, make sure you know what you are paying for on the other side. The ATG world gave us security by topology. The microservices world demands security by policy — and policy is work that somebody has to actually do.