From 'Trust Us' to 'Verify Us': Anthropic, Confidential Inference, and the Next Trust Problem
Jason Clinton’s OC3 2026 talk on confidential computing and scaling laws pushed me to finally write about Anthropic’s Confidential Inference Systems paper, a joint publication with Irregular (formerly Pattern Labs) released in June 2025. It’s one of the cleaner public treatments of what it actually means to make AI trust verifiable rather than merely asserted. It also points directly at a gap the industry hasn’t begun to close: trust in agentic systems. This post unpacks the paper’s core contributions, qualifies the claims that need it, and argues for where the conversation has to go next: from confidential inference to attested agent identity. ...