<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Security Architecture on Pawan Khandavilli</title><link>https://pawankhandavilli.com/tags/security-architecture/</link><description>Recent content in Security Architecture on Pawan Khandavilli</description><generator>Hugo -- 0.160.1</generator><language>en-us</language><lastBuildDate>Wed, 22 Apr 2026 21:30:00 +0000</lastBuildDate><atom:link href="https://pawankhandavilli.com/tags/security-architecture/index.xml" rel="self" type="application/rss+xml"/><item><title>Follow the Data: Five Questions That Make Security Architecture Clearer</title><link>https://pawankhandavilli.com/posts/follow-the-data/</link><pubDate>Wed, 22 Apr 2026 21:30:00 +0000</pubDate><guid>https://pawankhandavilli.com/posts/follow-the-data/</guid><description>&lt;p&gt;Every security architecture problem I have ever worked on, from payments to confidential computing to AI agents, has come down to the same question:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Where is the data, and what happens to it?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Not &amp;ldquo;what framework are we using.&amp;rdquo; Not &amp;ldquo;are we zero trust.&amp;rdquo; Not &amp;ldquo;which compliance checkbox do we need.&amp;rdquo; Those matter eventually. But they are not where you start.&lt;/p&gt;
&lt;p&gt;You start by following the data.&lt;/p&gt;
&lt;h2 id="the-questions"&gt;The questions&lt;/h2&gt;
&lt;p&gt;When I was at RBC working on mobile payments, I learned this the hard way. Every time I was confused about how to approach a security problem (and there were many times), the answer was always the same: stop thinking about the system. Start thinking about the data.&lt;/p&gt;</description><content:encoded><![CDATA[<p>Every security architecture problem I have ever worked on, from payments to confidential computing to AI agents, has come down to the same question:</p>
<p><strong>Where is the data, and what happens to it?</strong></p>
<p>Not &ldquo;what framework are we using.&rdquo; Not &ldquo;are we zero trust.&rdquo; Not &ldquo;which compliance checkbox do we need.&rdquo; Those matter eventually. But they are not where you start.</p>
<p>You start by following the data.</p>
<h2 id="the-questions">The questions</h2>
<p>When I was at RBC working on mobile payments, I learned this the hard way. Every time I was confused about how to approach a security problem (and there were many times), the answer was always the same: stop thinking about the system. Start thinking about the data.</p>
<p>Five questions, in order:</p>
<ol>
<li>
<p><strong>What is the data?</strong> A primary account number. A user credential. A model weight. A delegation token. Name the thing you are protecting.</p>
</li>
<li>
<p><strong>How sensitive is it?</strong> Not everything is crown jewels. A payment card number in the clear is catastrophic. A hashed vendor ID is not. Sensitivity determines how much protection is justified.</p>
</li>
<li>
<p><strong>Who can access it?</strong> Not who should. Who actually can. The engineer with SSH access to the database host. The cloud operator with hypervisor privileges. The AI agent with an API key in its environment variables.</p>
</li>
<li>
<p><strong>What makes that access trusted?</strong> Authentication is part of this, but it is not the whole picture. An API key proves identity. A signed delegation token proves bounded authority. A hardware attestation report proves the execution environment. Freshness guarantees prove the session is live. The strength of the legitimacy chain determines the strength of the trust boundary.</p>
</li>
<li>
<p><strong>Is it protected when it moves?</strong> Data at rest, data in transit, data in use. Most teams cover the first two. The third, data in use, is where confidential computing enters the picture. Most architectures still have gaps here.</p>
</li>
</ol>
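<p>The five questions can be written down as a per-hop checklist. A minimal sketch in Python, where every field name and sensitivity label is illustrative rather than any standard taxonomy:</p>

```python
from dataclasses import dataclass

@dataclass
class DataFlowHop:
    """One hop in a data flow, audited with the five questions.

    Illustrative structure only; field names and labels are assumptions.
    """
    asset: str                 # 1. What is the data?
    sensitivity: str           # 2. How sensitive is it? e.g. "low" | "high" | "critical"
    actual_access: list[str]   # 3. Who actually CAN access it (not who should)?
    trust_basis: list[str]     # 4. What makes that access trusted?
    protected_in_use: bool     # 5. Rest and transit are table stakes; is it protected in use?

    def gaps(self) -> list[str]:
        """Flag the places where the answers are weak or missing."""
        issues = []
        if self.sensitivity == "critical" and not self.trust_basis:
            issues.append("critical data with no legitimacy chain")
        if not self.protected_in_use:
            issues.append("data exposed while in use")
        return issues

hop = DataFlowHop(
    asset="payment token",
    sensitivity="critical",
    actual_access=["app process memory"],
    trust_basis=[],
    protected_in_use=False,
)
print(hop.gaps())
```

<p>The point of the structure is not the code itself but that every hop forces an explicit answer to all five questions; an empty <code>trust_basis</code> on a critical asset is a finding, not a formatting gap.</p>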
<p>That is it. Five questions. Follow them through your system and the architecture reveals itself.</p>
<h2 id="payments-follow-the-card-number">Payments: follow the card number</h2>
<p>At RBC, I worked on bringing Android Pay to market using host card emulation. HCE meant the phone did not have a hardware secure element to store the card credential. The easy security answer was: do not ship it.</p>
<p>Instead, we followed the data.</p>
<p>The data was the card credential. In a traditional tap-to-pay flow, the credential lives in a hardware chip on the phone and never leaves it. Without that chip, we needed a different architecture.</p>
<p>We asked the five questions:</p>
<ul>
<li><strong>What is the data?</strong> A payment token derived from the real card number.</li>
<li><strong>How sensitive is it?</strong> Extremely. But we could limit its blast radius by making tokens session-bound and transaction-limited.</li>
<li><strong>Who can access it?</strong> Anything that can read the phone&rsquo;s application memory. No secure element meant no hardware isolation.</li>
<li><strong>What makes that access trusted?</strong> Device attestation at the time of provisioning, plus user verification. The token service provider verified the device before issuing credentials. But once the token was in application memory, no further legitimacy check existed until the next refresh.</li>
<li><strong>Is it protected when it moves?</strong> TLS to the token service provider. But the real question was: is it protected while it sits in application memory on the phone? The answer was no, so we had to limit the token&rsquo;s lifetime and transaction count to shrink the window of exposure.</li>
</ul>
<p>Following the data told us exactly what we needed to build: session-based tokens with limited transaction counts, periodic refresh, tighter constraints on offline use, and a shorter credential lifetime.</p>
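<p>The shape of that mitigation is easy to sketch. Assuming hypothetical limits (the real numbers were set by the token service provider, not shown here), a session token that dies by count or by clock looks like this:</p>

```python
import time

class SessionToken:
    """Sketch of a session-bound payment token: short lifetime, capped
    transaction count. Names and limits are illustrative, not RBC's design."""

    def __init__(self, value: str, max_transactions: int = 5, ttl_seconds: int = 3600):
        self.value = value
        self.remaining = max_transactions
        self.expires_at = time.time() + ttl_seconds

    def usable(self) -> bool:
        # Both limits shrink the window of exposure if application
        # memory is ever read: the token dies by count or by clock.
        return self.remaining > 0 and time.time() < self.expires_at

    def spend(self) -> bool:
        if not self.usable():
            return False  # caller must refresh from the token service provider
        self.remaining -= 1
        return True

token = SessionToken("tok_demo", max_transactions=2, ttl_seconds=60)
assert token.spend() and token.spend()
assert not token.spend()  # count exhausted; refresh required
```

<p>Neither limit makes the token unreadable in memory; both make a stolen copy worth less.</p>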
<p>We did not start with a framework. We started with the card number and asked what happens to it at every step.</p>
<h2 id="confidential-computing-follow-the-data-in-use">Confidential computing: follow the data in use</h2>
<p>Years later, I moved to confidential computing — first at Fortanix, then at Microsoft on Azure.</p>
<p>The entire field exists because of a gap in the five questions. Most enterprises had solid answers for data at rest (encryption) and data in transit (TLS). But data in use, data being actively processed in memory, was unprotected. A privileged operator, a compromised hypervisor, or a malicious insider could read it.</p>
<p>Trusted execution environments close that gap. TEEs create hardware-isolated memory regions where data can be processed without being visible to the host operating system, the hypervisor, or the cloud operator. Remote attestation lets a relying party verify what code is running inside that boundary before sending sensitive data.</p>
<p>But as the Trail of Bits audit of WhatsApp&rsquo;s Private Processing showed earlier this month, TEEs do not automatically produce trust. The audit matters because each finding broke a different link in the legitimacy chain:</p>
<ul>
<li><strong>What is the data?</strong> User messages being processed for AI summarization inside a confidential VM.</li>
<li><strong>How sensitive is it?</strong> End-to-end encrypted messages for billions of users. About as sensitive as it gets.</li>
<li><strong>Who can access it?</strong> In theory, only the code inside the enclave. In practice, the audit found that environment variables loaded after measurement could inject arbitrary code, and unmeasured ACPI tables could expose memory to fake virtual devices — all while attestation still appeared valid.</li>
<li><strong>What makes that access trusted?</strong> Remote attestation: a cryptographic report proving what code is running. But the audit found the firmware&rsquo;s patch level was trusted via self-reporting rather than AMD&rsquo;s signed certificate, and attestation reports lacked session freshness, making them replayable.</li>
<li><strong>Is it protected when it moves?</strong> TLS between client and enclave, but without a session-bound nonce in the attestation, a replayed report could redirect data to an impersonating server.</li>
</ul>
<p>Follow the data. Even inside a TEE, the questions do not change. The answers just get more technical, and the gaps get more dangerous when you skip a question.</p>
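<p>Session freshness is worth making concrete. A real verifier checks the hardware vendor&rsquo;s certificate chain and code measurements; the sketch below, with invented function names, shows only the freshness binding, where the relying party&rsquo;s nonce must appear in the signed report:</p>

```python
import hmac, os

def fresh_nonce() -> bytes:
    # The relying party generates a per-session nonce before
    # requesting attestation from the enclave.
    return os.urandom(32)

def verify_freshness(report_data: bytes, expected_nonce: bytes) -> bool:
    """Sketch only: assumes certificate chain and measurements were already
    verified. If the enclave embeds the caller's nonce in the signed report,
    a captured report cannot be replayed to a later session."""
    return hmac.compare_digest(report_data, expected_nonce)

nonce = fresh_nonce()
honest_report = nonce           # honest enclave echoes the session nonce
replayed_report = os.urandom(32)  # a replayed report carries a stale nonce
assert verify_freshness(honest_report, nonce)
assert not verify_freshness(replayed_report, nonce)
```

<p>Without that binding, a report can be cryptographically valid and still answer the wrong question: it proves the code ran somewhere, sometime, not that it is the live peer in this session.</p>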
<h2 id="ai-agents-follow-the-authority">AI agents: follow the authority</h2>
<p>This is where the pattern gets most interesting.</p>
<p>In agent systems, the highest-value data is often not information but permission. When an AI agent acts on behalf of a human — submitting a purchase order, renewing a subscription, modifying an IAM policy — the thing flowing through the system is delegated authority.</p>
<p>The five questions apply directly:</p>
<ul>
<li><strong>What is the data?</strong> A delegation: &ldquo;this agent is authorized to act on behalf of this person, within these bounds.&rdquo;</li>
<li><strong>How sensitive is it?</strong> Very. A compromised or overly broad delegation token is equivalent to a stolen credential, except the agent can act faster and at higher volume than a human.</li>
<li><strong>Who can access it?</strong> The agent runtime, the governance substrate, every MCP tool the agent invokes, and every external API it calls. If the agent holds standing API keys in environment variables, then anything with access to the agent&rsquo;s process memory has access to the authority.</li>
<li><strong>What makes that access trusted?</strong> Today, most agent systems use API keys or OAuth tokens. That tells you which agent is making the request. It does not tell you who delegated the authority, what scope was granted, whether the delegation is still valid, or whether the execution environment is trustworthy. Authentication without delegation semantics, attestation, and freshness leaves the trust chain incomplete.</li>
<li><strong>Is it protected when it moves?</strong> The delegation flows from human to agent to tool to external system. At each hop, the question is: can the receiving system verify that the authority is legitimate, bounded, and fresh?</li>
</ul>
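<p>What a delegation with bounds, expiry, and a verifiable issuer looks like can be sketched in a few lines. Everything here is hypothetical (a shared demo key, invented claim names); a production design would use asymmetric signatures and an attested issuer:</p>

```python
import hashlib, hmac, json, time

SECRET = b"demo-key"  # stand-in for the delegator's signing key

def issue_delegation(principal: str, agent: str, scope: list[str], ttl: int) -> dict:
    """The human delegates bounded authority to a named agent."""
    claims = {"sub": principal, "agent": agent, "scope": scope,
              "exp": time.time() + ttl}
    payload = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims,
            "sig": hmac.new(SECRET, payload, hashlib.sha256).hexdigest()}

def verify_delegation(token: dict, agent: str, action: str) -> bool:
    """Each receiving system re-checks: who delegated, to whom,
    within what bounds, and whether it is still valid."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(token["sig"], expected)     # who delegated
            and token["claims"]["agent"] == agent            # to which agent
            and action in token["claims"]["scope"]           # within what bounds
            and time.time() < token["claims"]["exp"])        # still fresh

tok = issue_delegation("alice", "procurement-agent",
                       ["purchase_order.create"], ttl=300)
assert verify_delegation(tok, "procurement-agent", "purchase_order.create")
assert not verify_delegation(tok, "procurement-agent", "iam.policy.modify")
```

<p>Contrast this with a standing API key: the key answers only the first check, and answers it forever.</p>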
<p>This is why attested agent identity matters. This is why delegation needs to be a first-class primitive, not an afterthought. And this is why compliance-grade evidence, structured proof of what happened, who authorized it, and under what policy, is the missing layer.</p>
<p>Follow the authority through the system and the architecture requirements become obvious.</p>
<h2 id="why-this-matters-more-than-frameworks">Why this matters more than frameworks</h2>
<p>I am not against frameworks. NIST, OWASP, CIS, ISO 27001. They all serve a purpose. They give you coverage checklists and a common language.</p>
<p>But frameworks do not teach you how to think about a new problem. They teach you how to verify that you have covered known categories.</p>
<p>When you encounter something genuinely new, like a payment system without a secure element, an AI model processing encrypted messages inside a TEE, or an autonomous agent authorized to spend money, the framework has not caught up yet. The questions have.</p>
<p><strong>Follow the data</strong> is not a framework. It is a discipline. It works because it forces you to trace the asset that carries risk — whether that is information or authority — through every layer of the system, asking at each point: who can see it, what makes that access trusted, and what happens if that trust is misplaced.</p>
<p>Every security architecture I have built, from card tokenization to confidential VMs to agent trust layers, started with that question.</p>
<p>It has held up so far.</p>
<hr>
<h2 id="where-this-gets-hard">Where this gets hard</h2>
<p>This model assumes you can name the data and trace it. In sufficiently messy systems (legacy architectures, multi-party AI ecosystems, indirect flows through shared services) that is harder than it sounds. Sensitivity classification is partly art; two reasonable people can disagree on blast radius.</p>
<p>But the value is in forcing the conversation, not in guaranteeing perfect agreement. The messier the system, the harder this exercise is, and the more it matters.</p>
<h2 id="how-to-apply-this">How to apply this</h2>
<ol>
<li><strong>Pick a critical data asset</strong> — a credential, a token, a delegation, PII.</li>
<li><strong>Trace its journey</strong> through your system, hop by hop.</li>
<li><strong>Ask the five questions at each step.</strong> Write down the answers. Highlight where the answers are uncertain — that is where the gaps are.</li>
<li><strong>Identify where boundaries are weak</strong> — where access is broader than it should be, where trust is weaker than the sensitivity demands, where data is exposed in use.</li>
<li><strong>Repeat for every major flow.</strong> Make it muscle memory.</li>
</ol>
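<p>The steps above, applied to a toy three-hop flow. Hop names, trust mechanisms, and access lists are all hypothetical; the only point is the mechanical hop-by-hop trace:</p>

```python
flow = [
    {"hop": "browser -> gateway", "access": ["gateway"],
     "trust": ["mTLS"], "in_use_protected": True},
    {"hop": "gateway -> service", "access": ["service", "sre-ssh"],
     "trust": ["service JWT"], "in_use_protected": True},
    {"hop": "service -> database", "access": ["service", "dba"],
     "trust": [], "in_use_protected": False},
]

def audit(hop: dict) -> list[str]:
    """Step 3: write down where the answers are weak."""
    gaps = []
    if len(hop["access"]) > 1:
        gaps.append("access broader than the receiving system needs")
    if not hop["trust"]:
        gaps.append("no legitimacy chain for that access")
    if not hop["in_use_protected"]:
        gaps.append("data exposed while in use")
    return gaps

findings = {h["hop"]: audit(h) for h in flow}
for hop, gaps in findings.items():
    print(hop, "->", gaps or "no gaps found")
```

<p>The output is less important than the habit: every hop gets a written answer, and the uncertain answers are the work queue.</p>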
<p>If you are getting started with security architecture — or if you are a product manager or engineer who works with security teams — this is the one mental model I would recommend internalizing first. Frameworks come second. The data comes first.</p>
<p><em>Disclaimer: The views expressed here are my own and do not represent those of my employer.</em></p>
]]></content:encoded></item></channel></rss>