Weekend Notebook #2619 – When the Chip Race Widens and AI Starts Executing

Published on LinkedIn and amitabhapte.com on 10th May 2026

Three themes this week. The semiconductor wealth that spent three years concentrating in one company is spreading. Anthropic’s compute hunger is turning unlikely infrastructure players into AI winners. And AI has quietly crossed from advising to executing, in cybersecurity, commerce, and code. The pace of change is not slowing. It is compounding.

1. The Chip Race Widens

Wall Street called it a “changing of the guard in AI.” Intel, AMD, and Micron each gained around 25% or more this week, while Nvidia rose a more modest 8%. Intel has more than doubled year-to-date. Micron has surged over 750% in the past year and crossed an $800 billion market cap. The rotation reflects a maturing thesis: AI infrastructure now requires CPUs, memory, and optical networking at scale, not just GPUs. Memory is in shortage, with Micron’s CEO noting customers are receiving only 50 to 65% of their requirements. That squeeze is driving prices up and turning a 47-year-old company in Idaho into one of the hottest trades in the market.

Nvidia is not standing still. It has committed more than $40 billion in equity investments in 2026 alone, anchored by its $30 billion stake in OpenAI and multi-billion-dollar deals with Corning and data centre operator IREN. CEO Jensen Huang’s stated logic: “We don’t pick winners. We need to support everyone.” Critics call it circular: Nvidia invests in customers who buy its chips, and those purchases generate the demand that justifies the investment. But there is a harder long-term pressure building. Hyperscalers are building their own AI chips: Google’s TPUs, Amazon’s Trainium, Meta’s MTIA silicon. Amazon’s custom chip business has already crossed a $20 billion revenue run rate. For inference workloads, purpose-built silicon is faster and cheaper than general-purpose GPUs. Nvidia’s dominance in training is secure for now. Its dominance across the full stack is not.

My PoV: The AI chip story is no longer a single-stock thesis. For technology leaders building infrastructure strategy, the question is no longer just “how much Nvidia capacity can we secure?” It is “where does custom silicon from our cloud provider give us a cost and performance advantage for inference at scale?” That is an architecture decision, and it belongs in your AI infrastructure roadmap now.
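The cost-and-performance question above is ultimately arithmetic. A minimal sketch of how a team might frame it, where every hourly rate and throughput figure is an illustrative assumption rather than a published vendor price:

```python
# Compare hypothetical inference options on cost per million tokens.
# All prices and throughput numbers are illustrative assumptions,
# not real GPU or custom-silicon benchmarks.

def cost_per_million_tokens(hourly_rate_usd: float, tokens_per_second: float) -> float:
    """Dollars to generate one million output tokens on a given instance."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_rate_usd / tokens_per_hour * 1_000_000

# Illustrative instance profiles (assumed values).
profiles = {
    "general_purpose_gpu": {"hourly_rate_usd": 8.00, "tokens_per_second": 2400},
    "custom_inference_chip": {"hourly_rate_usd": 4.50, "tokens_per_second": 2000},
}

for name, p in profiles.items():
    c = cost_per_million_tokens(p["hourly_rate_usd"], p["tokens_per_second"])
    print(f"{name}: ${c:.2f} per million tokens")
```

Under these assumed numbers the custom chip wins on cost per token despite lower raw throughput, which is exactly why the decision belongs in an architecture review rather than a procurement line item: the answer flips as the rate and throughput assumptions change.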

2. Compute Scarcity Is Creating Unlikely Winners

Anthropic has experienced 80x growth in annualised revenue and usage in Q1 2026. The constraint is not demand. It is compute. The company is buying capacity from every available source, and this week that search reached an unexpected destination. Akamai, a 28-year-old content delivery network founded at MIT, signed a $1.8 billion, seven-year cloud infrastructure deal with Anthropic. The largest contract in Akamai’s history. Its stock rose 27% on the news. Akamai’s 4,000-plus global network locations, built to deliver web content at low latency, are being repurposed to run AI inference at the edge. The company’s cloud infrastructure revenue was already up 40% year-on-year before this deal. The Anthropic commitment gives it revenue visibility its legacy CDN business never offered.

Japan joined the story through SoftBank. The Nikkei reached a record high this week, driven in significant part by SoftBank’s AI positioning. SoftBank has committed $100 billion to AI infrastructure in the US through the Stargate joint venture with Oracle, and its domestic portfolio of AI investments has been re-rated sharply upward. Japan is no longer a spectator in the AI infrastructure cycle. Through SoftBank’s capital and government-backed semiconductor incentives, it is an active participant.

My PoV: Akamai’s transformation is the week’s most instructive story for enterprise technology leaders. A company that spent 25 years building global network infrastructure is now one of Anthropic’s most important compute partners. The lesson: AI’s compute hunger is so acute that infrastructure built for an entirely different purpose (latency-optimised content delivery) is being repurposed and re-valued. Your own organisation’s infrastructure assets may have strategic value in the AI economy that your current roadmap does not account for.

3. AI Moves from Advising to Executing

OpenAI launched GPT-5.5-Cyber, a variant of its latest model specifically trained to be more permissive for security workflows, available in limited preview to vetted cybersecurity teams. It assists with vulnerability identification, malware analysis, binary reverse engineering, and patch validation. It does not write malware or steal credentials, but it removes the friction that the standard model imposes on legitimate security work. The approach contrasts with Anthropic’s Mythos: OpenAI is betting on broader, verified access through its Trusted Access for Cyber programme, scaled to thousands of individual defenders and hundreds of teams. Anthropic restricted Mythos to around 40 organisations. Two philosophies on the same underlying problem: how do you democratise AI-assisted defence without arming attackers?

In commerce, Alibaba announced the integration of its Qwen AI platform directly into Taobao, giving the agent access to over four billion products across Taobao and Tmall, plus logistics, after-sales, and Alipay checkout. The shopper asks; the agent browses, compares, applies 30-day price tracking, and completes the transaction. This is the largest agentic commerce deployment yet from any platform globally. Western equivalents, including Amazon’s Rufus and Shopify’s AI integrations, remain in the advisory lane: they help you decide, but you still transact. Alibaba’s design puts the agent in the execution seat end-to-end. That is a different operating model, and it will be studied.
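The advisory-versus-execution distinction is an operating-model choice, and it can be sketched in a few lines. The names below (ShoppingAgent, spend_limit, the sample offers) are illustrative assumptions, not Alibaba’s or Amazon’s actual APIs:

```python
# A minimal sketch of advisory-seat vs execution-seat agent design.
# All class and field names are hypothetical; this models the operating
# pattern described in the text, not any vendor's real implementation.

from dataclasses import dataclass

@dataclass
class Offer:
    product: str
    price: float

class ShoppingAgent:
    def __init__(self, can_execute: bool, spend_limit: float):
        self.can_execute = can_execute  # advisory agents stop at a recommendation
        self.spend_limit = spend_limit  # hard cap the human sets up front

    def recommend(self, offers: list[Offer]) -> Offer:
        # Pick the cheapest candidate (stand-in for real ranking and tracking).
        return min(offers, key=lambda o: o.price)

    def act(self, offers: list[Offer]) -> str:
        best = self.recommend(offers)
        if not self.can_execute:
            # Advisory lane: the human still transacts.
            return f"recommend: {best.product} at {best.price}"
        if best.price > self.spend_limit:
            return f"blocked: {best.price} exceeds limit {self.spend_limit}"
        # Execution lane: checkout happens here, without a human click.
        return f"purchased: {best.product} at {best.price}"

offers = [Offer("kettle-A", 34.0), Offer("kettle-B", 29.5)]
print(ShoppingAgent(can_execute=False, spend_limit=50).act(offers))
print(ShoppingAgent(can_execute=True, spend_limit=50).act(offers))
```

The design point is the `can_execute` flag and the pre-set spend limit: the moment the agent moves into the execution seat, the constraint has to be designed into the system rather than enforced by a human at checkout.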

In software, Google’s internal struggle to coordinate its AI coding tools is handing competitive ground to Anthropic’s Claude Code and OpenAI’s Codex. Google has world-class models and one of the largest developer communities in the world. But internal alignment on how to deploy those models for coding has lagged, and the market has noticed. Claude Code, which powers agentic multi-file coding tasks, has become the default tool for serious AI-assisted software development at thousands of enterprises. Execution, not capability, is the differentiator.

My PoV: The thread connecting cybersecurity, agentic commerce, and AI coding is the same: AI is no longer in the advisory seat. It is in the execution seat. For technology and business leaders, this changes the governance question. The relevant question is no longer “is this AI recommendation accurate?” It is “who is accountable when this AI action has consequences?” That accountability framework needs to be designed before the deployment, not after the incident.
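One way to make that accountability framework concrete is to require a named human owner and a policy check on every AI-executed action, recorded at the moment of execution. The field names and the ledger below are an illustrative sketch, not any specific governance standard:

```python
# A minimal sketch of an accountability record for AI-executed actions.
# Assumption: every action is logged with a named human owner and the
# policy decision that authorised it. All names here are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ActionRecord:
    agent_id: str           # which AI system acted
    action: str             # what it did
    accountable_owner: str  # the named human answerable for the outcome
    policy_check: str       # which rule authorised the action
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ActionLedger:
    def __init__(self):
        self._records: list[ActionRecord] = []

    def record(self, rec: ActionRecord) -> None:
        # Reject actions with no named owner: accountability is designed
        # in before deployment, not reconstructed after an incident.
        if not rec.accountable_owner:
            raise ValueError("every AI action needs a named accountable owner")
        self._records.append(rec)

    def by_owner(self, owner: str) -> list[ActionRecord]:
        return [r for r in self._records if r.accountable_owner == owner]

ledger = ActionLedger()
ledger.record(ActionRecord("claude-code-ci", "merged auto-fix PR",
                           "jane.doe", "low-risk-change-policy"))
print(len(ledger.by_owner("jane.doe")))  # 1
```

The useful property is the rejection path: an action with no accountable owner cannot be recorded, which forces the governance question to be answered at design time.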

My Takeaway This Weekend

The AI infrastructure story is maturing and broadening simultaneously. The semiconductor cycle is distributing wealth beyond the initial GPU monopoly. Compute scarcity is elevating infrastructure companies that were not in the AI conversation two years ago. And AI capability is crossing the line from insight to action across security, commerce, and software development.

Three years into the generative AI era, the competitive advantage is no longer who has access to the best model. It is who has the infrastructure to run it, the governance to deploy it responsibly, and the organisational clarity to act faster than the competition. The architecture decisions being made this quarter will set the boundaries of what is possible in 2027. That window is shorter than most planning cycles assume.