Weekend Notebook #2615 – When AI Becomes Infrastructure, Risk, and Rivalry

Published on LinkedIn and amitabhapte.com on 12th Apr 2026

This was a week where AI showed up as an infrastructure bet, systemic risk, competitive battleground, and talent story, all at once. Many stories. One consistent thread: the foundational layers of the AI economy are being built and contested simultaneously, and the institutions designed for a slower world are catching up in real time.

The Anthropic Week: Revenue, Risk, and Rivalry

Three distinct signals from one company. First, the commercial: Anthropic’s annualised revenue crossed $30 billion, up from $9 billion just four months ago. CoreWeave sealed a multi-year infrastructure deal to power Claude workloads, days after a $21 billion commitment from Meta. Nine of the ten leading AI model providers now run on CoreWeave’s platform. Infrastructure is consolidating fast.

Second, the risk signal. Anthropic introduced Mythos Preview, a model so capable at finding and exploiting software vulnerabilities that the company chose not to release it publicly. Under Project Glasswing, access is limited to Amazon, Apple, Google, Microsoft, JPMorgan, and around 40 critical infrastructure organisations. The model has already identified vulnerabilities across every major operating system and browser, including a 27-year-old flaw in OpenBSD. Treasury officials and the Federal Reserve convened an emergency meeting with Wall Street’s senior bank CEOs. The Bank of England placed Mythos on the agenda of its Cross-Market Operational Resilience Group, alongside the FCA and the National Cyber Security Centre. Canada convened its own session the same week.

Third, the investor story. OpenAI’s secondary market shares have become difficult to sell. Around $600 million of stock found very few buyers on the secondary market. Meanwhile, demand for Anthropic shares is described as almost insatiable, with $2 billion in declared buy interest and almost no sellers. OpenAI responded with an investor memo characterising Anthropic as compute-constrained. The defensiveness itself is the signal.

My PoV: Mythos is the clearest signal yet that AI safety is an operational risk category, not a philosophical one. For technology leaders, the question is not whether your organisation uses Anthropic products. It is whether your security posture has been updated for an era where AI can identify and weaponise software vulnerabilities at machine speed. On the investor story: the AI platform you build on today is not easily changed. Governance clarity and consistent product performance now matter as much as benchmark scores.

New Entrants, New Approvals

Meta’s $14.3 billion bet on Alexandr Wang delivered its first output this week. Muse Spark, the first model from Meta Superintelligence Labs, is a natively multimodal reasoning model rebuilt from the ground up over nine months. It is competitive with frontier models on several benchmarks, though not a leader across the board. More significant than the model is the strategy: Muse Spark launched as a closed, proprietary product. Meta, which built its AI identity on open-source Llama, has quietly changed its approach. With capital expenditure planned at $115 to $135 billion in 2026, nearly double last year, and three billion daily users as a distribution surface, Meta is no longer treating AI as an experiment.

Separately, the Netherlands became the first EU country to formally approve Tesla’s Full Self-Driving Supervised system, after 18 months of testing covering 1.6 million kilometres on European roads. The system is not autonomous: the driver remains legally responsible and must be ready to intervene. But the approval, under EU mutual recognition rules, opens a pathway to continent-wide rollout by mid-2026. It is the first time a physical AI system of this complexity has passed rigorous European regulatory scrutiny, and the precedent will matter well beyond vehicles.

My PoV: Meta’s shift from open to closed signals that distribution advantage, not model openness, is where the competitive moat is now being built. For enterprise leaders, the Tesla approval matters less as a driving story and more as a governance template. Physical AI systems require documented safety evidence, long evaluation windows, and ongoing reporting obligations. Build that infrastructure now, before the regulator requests it.

India’s Technical Capital Comes of Age

Two data points deserve to be read together. GitHub reported that India now has 27 million developers on its platform, 15 percent of the global total, with more than two million new joiners in 2026 alone, more than any other country. India is the world’s second largest contributor to open-source AI projects, with over 7.5 million contributions on GitHub. At the same time, TCS posted Q4 results showing $12 billion in contract value for the quarter, $40.7 billion for the year, and annualised AI revenue of $2.3 billion. Its HyperVault data centre business, targeting 1 gigawatt of capacity, has moved into commercial structuring with hyperscalers and frontier AI companies. The positioning is explicit: infrastructure to intelligence, end to end.

My PoV: India is simultaneously facing the erosion of traditional IT outsourcing as AI automates entry-level tasks, and building the technical and infrastructure base to compete in the next generation of AI deployment. A country producing 27 million GitHub developers and the world’s second largest open-source AI contributor base is not a back office. It is a source of technical capital at a scale few geographies can match. Enterprise talent strategies that are not designed to work with that pipeline are working around it at significant cost.

My Takeaway This Weekend

The model layer of AI is commoditising quickly. The infrastructure layer (physical, computational, regulatory, and human) is not. The companies and countries securing advantaged positions in those foundational layers will shape the AI decade. The ones still treating AI as a product decision will find themselves working within a landscape that others have already built.

The Mythos story, the CoreWeave deals, the Tesla approval, India’s developer numbers, Meta’s infrastructure bet: none are separate stories. They are all evidence of the same transition. Intelligence is no longer arriving as a feature. It is arriving as a structural condition. The leadership question is no longer whether to engage. It is whether your organisation is building on the right foundations before the terrain gets harder to move on.

Weekend Notebook #2613 – When AI Meets Friction

Published on LinkedIn and amitabhapte.com on 29th Mar 2026

This week, ambition collided with reality across nearly every front of the AI story. Agentic commerce promised to remove the human from the checkout. A new model leaked before it was ready to launch. The internet crossed a threshold most of us hadn’t noticed. And one logistics giant did something quietly radical: it decided to teach half a million people to work with AI rather than step aside for it. Different signals. Same underlying tension. The gap between what AI can do and what organisations are actually ready for is growing. That gap is where leadership happens.

Agentic Commerce: The First Honest Report Card

Late last year, OpenAI launched Instant Checkout, a feature that lets shoppers complete purchases directly inside ChatGPT without ever visiting a retailer’s website. Walmart signed up as the launch partner. Etsy and Shopify quickly followed. The narrative was compelling: conversational commerce had arrived. The results were not. Walmart has now disclosed that conversion rates inside ChatGPT were three times lower than for click-out experiences that redirected users to Walmart’s own site. That is not a rounding error. It is a structural finding. OpenAI has since moved on, phasing out Instant Checkout in favour of an app-based model that gives retailers more control of the transaction. Walmart is now embedding its own chatbot, Sparky, directly into ChatGPT and Google Gemini, rather than handing the checkout process to a third party.

Meanwhile, Gap has become the first major fashion retailer to launch direct checkout within Google’s Gemini platform, part of an emerging Universal Commerce Protocol that Google has been rolling out since January. The approach is different in intent: Gap is pairing the checkout integration with an AI-powered sizing tool, specifically targeting the return rate problem that plagues online apparel. Net sales were up 2% in Q4 2025, with online sales growing 5%. The CTO was clear that this is about solving real customer problems, not chasing innovation for its own sake.

My PoV: The Walmart data is the most useful signal the agentic commerce story has produced. Consumers will use AI for discovery. They are not yet ready to surrender the checkout to it. The friction of a familiar interface (seeing the full cart, entering payment details on a trusted site) provides reassurance that an embedded AI flow does not yet replicate. For enterprise and retail leaders, the lesson is architectural: own the transaction layer. Let the AI own the discovery. The retailers now embedding their own branded experiences into AI platforms, rather than ceding the whole journey, are making the smarter structural bet.

Anthropic’s Week: Values as a Growth Strategy

It has been a remarkable few months for Anthropic. A public standoff with the US Department of Defense over the use of Claude in lethal autonomous systems, followed by Super Bowl ads that went after OpenAI’s decision to serve ads to its users, has produced something unexpected: a subscriber surge. TechCrunch analysis of 28 million US consumer transactions shows paid subscriptions more than doubling since the start of 2026, with record new sign-ups in January and February. Web traffic was up 43% month-on-month in February and nearly tripled year-on-year. Most new subscribers are at the entry-level Pro tier at $20 per month. Claude Code and Claude Cowork, the developer and productivity tools released in January, have accelerated that growth further.

The same week brought a different kind of Anthropic headline. The company inadvertently exposed an internal draft blog post in a publicly searchable data store, revealing a new model under development called Claude Mythos. The draft described it as the company’s most powerful model to date, part of a new capability tier called Capybara, significantly beyond the current Opus tier. The document also described the model as posing unprecedented cybersecurity risks, specifically for its ability to identify and exploit software vulnerabilities at speed. Cybersecurity stocks fell immediately: CrowdStrike, Palo Alto Networks and Zscaler each dropped around 6%. Anthropic confirmed the model exists and is being tested with early access customers.

And underpinning all of this, a new report from Human Security found that AI and automated traffic have, for the first time, overtaken human traffic on the internet. Automated traffic grew eight times faster than human traffic in 2025. AI-driven traffic alone grew 187% across the calendar year. The internet was built on the assumption that a human being was on the other side of the screen. That assumption is no longer safe.

My PoV: Three separate but connected signals from Anthropic this week. First, values can be a growth driver. Taking a principled public position on how AI should and should not be used attracted consumers in a way that a model benchmark never could. That is worth paying attention to for any enterprise working out how to position itself in the AI market. Second, the Mythos leak reminds us that the cybersecurity stakes are rising with every capability jump. Anthropic’s plan to give cyber defenders early access before general release is the right instinct, but the gap between what AI can do and what defenders are prepared for is widening fast. Third, if the majority of internet traffic is now non-human, the infrastructure assumptions of most enterprise digital strategies need revisiting, from fraud detection to API design to web analytics.

AI and the Financial System: A Stress Test No One Planned For

While AI companies attract record investment, a less discussed story is developing in private credit. Shadow banking, the network of private credit funds, business development companies and non-bank lenders that has grown significantly since 2008, has been heavily exposed to software-sector loans. The concern, now surfacing in the mainstream, is that AI may be systematically undermining the value of the software companies these funds have lent against. Apollo Global Management was among the first to flag it publicly last year: “Is software dead?” is now the question private credit managers are trying to price. A closely watched index of 44 business development companies shed around $5 billion in February. The Bank of England has announced it will conduct the world’s first stress test of the shadow banking sector. Lloyd Blankfein has drawn parallels to 2005 and 2006, when hidden leverage was building quietly beneath a rising tide.

My PoV: This story rarely appears in AI newsletters. It should. The thesis is straightforward: if AI can replace significant portions of software development work, the revenue and margins of many mid-market software companies, which form the collateral base for billions in private credit, come under structural pressure. This is not a prediction of an imminent crisis. It is an observation that the financial system has not yet priced AI disruption into the sectors most exposed to it. For enterprise technology leaders, this has a practical implication: the cost, availability and terms of technology financing are likely to become more volatile. AI is not just disrupting products. It is beginning to reprice the capital structures behind them.

The Workforce Question: FedEx Bets on Learning

Against a backdrop of sector layoffs and automation anxiety, FedEx has launched what may be the largest corporate AI upskilling programme in logistics. The initiative covers more than 400,000 employees globally, with personalised, role-based modules that will update monthly. The programme is explicitly tied to internal promotion pathways. The company calls it “promotion-ready” AI training. Frontline workers are already applying for corporate roles at higher rates since the programme launched. Every C-suite executive at FedEx spent two days in Silicon Valley selecting the right technology partners before a single module was deployed.

This matters in context. UPS announced 30,000 layoffs on top of 48,000 the previous year. FedEx has also made cuts. But its strategic posture is distinctly different: use AI to make the workforce more capable, not smaller. The company measures something it calls AIQ, an AI quotient, tracking progress rather than just completion rates. Chief Data and Information Officer Vishal Talwar was direct: “We are measuring progress around AI, not necessarily just success, because it’s going to be very difficult to say this success is only attributed to AI.”

My PoV: Only 28% of organisations have embedded continuous AI learning, according to Accenture’s 2026 Pulse of Change report. FedEx is in that minority, and it is moving at a scale that few others have attempted. The principle behind the programme is the right one: AI literacy cannot be a specialist skill. It needs to be a baseline capability across every level of the organisation, from warehouse floor to boardroom. The harder question is how to measure the business value generated rather than just the learning hours logged. That measurement challenge is the next frontier of enterprise AI investment discipline.

My Takeaway This Weekend

Four stories, one shared theme: AI is not meeting reality gently. Agentic commerce stumbled at the transaction layer, where trust has always mattered most. A powerful new AI model leaked before its makers were ready, and the market reacted to the risk before the product even shipped. The internet crossed a threshold that few enterprise strategies were built for. And in logistics, one company decided to bet on its people rather than against them.

The leaders navigating this well are not those with the most ambitious AI roadmaps. They are the ones who are honest about where friction is real, where trust has not yet been earned, and where their own organisations need to build capability before deploying it. Friction is not a failure of technology. It is the system telling you where the work still needs to be done.

Weekend Notebook #2612 – When Agents Become the Architecture

GTC Live 2026 Keynote Pregame – photo credit NVIDIA GTC

Published on LinkedIn and amitabhapte.com on 22nd Mar 2026

GTC 2026 drew 30,000 people to San Jose. Jensen Huang announced $1 trillion in confirmed orders for Blackwell and Vera Rubin systems through 2027, double last year’s projection. But the number was not the headline. The architecture behind it was.

From Data Centres to AI Factories

Huang reframed the data centre entirely. The new construct is the AI factory, a facility whose primary output is not storage or compute, but tokens. Every query answered, every decision supported, every automated workflow consumes them. The new efficiency metric is not uptime. It is token throughput per watt.

This changes the business case for infrastructure investment. Data centres were cost centres. AI factories are production lines. When the output has a unit price, the conversation with the business shifts fundamentally.
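The token-factory framing can be made concrete with a little arithmetic. The sketch below is illustrative only: the throughput, power draw, and electricity price are invented assumptions, not published NVIDIA or operator figures.

```python
# Illustrative only: every number below is a hypothetical assumption,
# not a published NVIDIA or data-centre operator figure.

def tokens_per_watt(tokens_per_second: float, power_draw_watts: float) -> float:
    """Token throughput per watt: the 'AI factory' efficiency metric."""
    return tokens_per_second / power_draw_watts

def cost_per_million_tokens(power_draw_watts: float,
                            tokens_per_second: float,
                            usd_per_kwh: float) -> float:
    """Electricity cost alone of producing one million tokens."""
    seconds_needed = 1_000_000 / tokens_per_second
    kwh = power_draw_watts * seconds_needed / 3_600_000  # watt-seconds -> kWh
    return kwh * usd_per_kwh

# A hypothetical rack serving 50,000 tokens/s at a 10 kW draw, $0.08/kWh:
eff = tokens_per_watt(50_000, 10_000)                    # 5.0 tokens per watt
energy = cost_per_million_tokens(10_000, 50_000, 0.08)   # roughly $0.0044
print(f"{eff:.1f} tokens/W, ${energy:.4f} electricity per 1M tokens")
```

Once the output has a unit price like this, infrastructure stops being a pure cost line and starts looking like a production margin question.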

Nvidia’s keynote slides showed 40% of its order pipeline now coming from enterprise, sovereign AI, and industrial customers, not just hyperscalers. The enterprise wave is no longer coming. It has arrived.

My PoV: CIOs who still frame infrastructure purely as a cost management conversation are using the wrong model. Token economics and inference costs belong in your architecture discussions now. Your business leaders will ask about them within 18 months.

The Agent is a Platform

The most important slide of the keynote carried a simple title: ‘Agents: A New Computing Platform.’ Huang’s argument was precise. The PC was a platform. The smartphone was a platform. The agent is next, with its own architecture: a reasoning core connected to memory, sub-agents, tools, files, and a multi-modal prompt layer.
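The agent architecture described in that slide (a reasoning core with memory, tools, and sub-agents) can be sketched in a few lines. This is a toy illustration of the shape only, with invented names; it is not NVIDIA's design or any real framework.

```python
# Toy sketch of the agent-as-platform shape: a reasoning core that keeps
# memory, delegates to sub-agents, and calls tools. All names are invented.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str
    memory: list[str] = field(default_factory=list)
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)
    sub_agents: dict[str, "Agent"] = field(default_factory=dict)

    def handle(self, task: str) -> str:
        self.memory.append(task)                      # persistent context
        for key, sub in self.sub_agents.items():      # delegate to a specialist
            if key in task:
                return sub.handle(task)
        for key, tool in self.tools.items():          # or invoke a tool
            if key in task:
                return tool(task)
        return f"{self.name} reasoned about: {task}"  # fall back to the core

# Usage: a root agent with one tool and one specialist sub-agent.
root = Agent("root",
             tools={"sum": lambda t: str(2 + 2)},
             sub_agents={"research": Agent("researcher")})
print(root.handle("research the roadmap"))
# -> researcher reasoned about: research the roadmap
```

The point of the platform argument is that this loop, not a chat window, becomes the unit your data, security, and integration layers have to serve.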

Nvidia made this concrete with NemoClaw, an enterprise-ready implementation of the OpenClaw agentic framework, bringing autonomous agents inside the enterprise firewall with privacy controls and policy guardrails. Huang also noted that Nvidia’s own engineers will receive annual token budgets as a productivity metric. Token consumption is becoming a measure of knowledge work output.

My PoV: If the agent is a platform, enterprise architecture must be designed around it, not retrofitted for it. The question is not which agent tool to pilot. It is what your data, security, and integration architecture looks like when agents become the primary consumers of enterprise systems.

Enterprise IT: From SaaS to Agent-as-a-Service

One slide showed two pictures. On the left: today. Data centre, SaaS software, GSIs (global systems integrators), humans on top. On the right: tomorrow. An AI factory generating tokens, software and AI providers connected by agents, and humans repositioned as enterprise information workers directing and overseeing rather than executing.


Huang called this the Enterprise IT Renaissance. Not disruption. Not replacement. Renaissance. And the implications for software were explicit: every SaaS company must become an Agentic-as-a-Service company. The subscription model built on human users logging in is giving way to a consumption model built on agents accessing capabilities programmatically.
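The subscription-to-consumption shift is ultimately a pricing-model change, and the break-even is simple to sketch. Every price and volume below is invented for illustration, not real vendor pricing.

```python
# Hypothetical seat-vs-consumption revenue comparison; all figures invented.

def seat_revenue(seats: int, usd_per_seat_month: float) -> float:
    """Classic SaaS: revenue scales with humans logging in."""
    return seats * usd_per_seat_month

def consumption_revenue(agent_calls: int, usd_per_call: float) -> float:
    """Agent-as-a-Service: revenue scales with programmatic calls."""
    return agent_calls * usd_per_call

def breakeven_calls(seats: int, usd_per_seat_month: float,
                    usd_per_call: float) -> float:
    """Monthly agent calls needed to replace the old seat revenue."""
    return seats * usd_per_seat_month / usd_per_call

# 500 human seats at $60/month vs 12M agent calls at $0.004 each:
human = seat_revenue(500, 60.0)
agents = consumption_revenue(12_000_000, 0.004)
print(f"seats ${human:,.0f}/mo vs agents ${agents:,.0f}/mo, "
      f"break-even at {breakeven_calls(500, 60.0, 0.004):,.0f} calls")
```

The uncomfortable part for vendors is the middle column: a product can lose seats and still grow, but only if agent call volume clears that break-even line.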

My PoV: This is the most significant vendor landscape shift since the move to cloud. The roadmap conversations you have with your major software vendors over the next twelve months should be explicitly about their agentic strategy. If they do not have a credible one, that is a signal worth taking seriously.

My Takeaway This Weekend

GTC 2026 was not a product launch. It was an architectural declaration. The era of AI as a layer on top of existing systems is ending. What follows is AI as the foundation, with tokens as the unit of value, agents as the primary computing paradigm, and enterprise IT reborn around AI factories.

The organisations that rearchitect early will compound advantage. Those that treat this as another cycle to manage carefully will find the gap harder to close each quarter.

Weekend Notebook #2611 – When AI Decides Who Works

Published on LinkedIn and amitabhapte.com on 15th Mar 2026

This week, AI moved from strategy decks to org charts. Companies restructured around it, economists warned about it, robots were built to replace entry-level roles on both sides of the desk, and a startup reframed the computer itself as a delegate. AI is no longer just a tool you choose to use. It is increasingly the logic by which decisions about people, capital, and infrastructure get made.

1. Meta’s Bet: Fewer People, More Agents, New Markets

Three Meta stories this week that belong together. Reuters reported that Meta is planning layoffs affecting up to 20% of its roughly 79,000-person workforce, its largest restructuring since 2022. The rationale is twofold: offset $600 billion in planned data centre investment by 2028 and capture the productivity gains of AI-assisted workers. In the same week, Meta acquired Moltbook, the Reddit-style social network for AI agents, folding it into Meta Superintelligence Labs as an always-on directory for agent-to-agent coordination. And Reliance Industries restructured its AI subsidiary REIL, with Meta’s Facebook Overseas picking up a 30% stake, formalising a strategic partnership targeting enterprise AI at India’s scale.

My PoV: These are not separate decisions. Meta is deliberately rebalancing: cutting human costs, acquiring agent infrastructure, and planting equity stakes in high-growth markets, all at the same time. If you haven’t built a clear internal narrative linking your AI investment to workforce implications, you are already behind the conversation your board is having.

2. The End of Entry-Level

ServiceNow CEO Bill McDermott told CNBC that graduate unemployment could reach the mid-30s within a few years as agents absorb white-collar entry-level work. To put that in context, the Federal Reserve Bank of New York currently puts graduate unemployment at 5.7%, with underemployment at 42.5%, the highest since 2020. The same week, Travis Kalanick launched Atoms, a specialised industrial robotics company targeting food, mining, and transport. His framing was deliberately economic: “gainfully employed robots, machines best suited for the job at hand.” Where McDermott sees AI compressing white-collar entry points, Kalanick is building the physical equivalent for blue-collar work. The entry point to work is narrowing on both sides at once.

My PoV: The mid-level talent of 2028 is being shaped right now. Entry-level pipelines into software, operations, and logistics are compressing simultaneously. Review your early-career hiring and graduate development programmes, not as a cost decision, but as a strategic investment in the people who will govern and manage your AI systems over the next decade.

3. When AI Becomes the Interface

Google launched Ask Maps, a Gemini-powered conversational layer inside the world’s most-used navigation app. With 2 billion monthly users, the shift is not subtle. You no longer type a destination. You describe a situation and let the system figure it out. Perplexity went further, launching Computer, a general-purpose digital worker that operates your full software stack, breaks goals into sub-tasks, and deploys specialised agents to get them done. In enterprise testing benchmarked against McKinsey, Harvard, MIT, and BCG standards, it completed an estimated 3.25 years of work in four weeks. Both products make the same argument: the interface layer is collapsing from menus and search boxes into intent and delegation.

My PoV: Users are learning to describe outcomes rather than navigate software. That expectation does not stay in consumer apps. It is already arriving in how people interact with enterprise systems, ERP, CRM, procurement, and the rest. Conversational, intent-driven interfaces need to be on your near-term roadmap. Not your 2028 one.

4. India Builds the Stack

The Adani Group committed $100 billion to build renewable-powered hyperscale AI data centres across India by 2035, expanding from 2GW to 5GW through AdaniConneX, anchored by partnerships with Google and Microsoft. Gautam Adani put it plainly: “India will not be a mere consumer in the AI age. We will be the creators, the builders, and the exporters of intelligence.” Read alongside the Reliance-Meta JV above, India’s approach is becoming structurally distinct. It is not just attracting global capital. It is negotiating equity and infrastructure ownership in return.

My PoV: India is building a domestic AI stack with real strategic ambition behind it. For enterprises with India operations, outsourcing relationships, or supply chain exposure, the talent and compute environment there is changing faster than most roadmaps account for. Worth factoring in sooner rather than later.

My Takeaway This Weekend

The word that connects this week’s stories is delegation. Companies are delegating headcount decisions to AI economics. Robots are taking on entry-level physical tasks. Google and Perplexity are delegating the interface itself to agents. India is delegating compute sovereignty to its own industrial groups.

AI leadership in 2026 is no longer about adoption. It is about knowing what to delegate, to whom, and under what governance. The organisations that navigate this well will not be the fastest movers. They will be the ones that redesigned their decision rights before the machine made the decision for them.

Weekend Notebook #2609 – When AI Becomes the Risk

Published on LinkedIn and amitabhapte.com on 1st Mar 2026

This week, the AI story fractured, not in capability, but in confidence. Capital is still flooding in. The technology is still advancing. But disruption and doubt arrived in the same week as the deal announcements.

The OpenAI Capital Architecture

OpenAI is raising $110 billion in a landmark funding round that values the company at $840 billion, highlighting the intensity of global investment in artificial intelligence. The round is led by SoftBank, Nvidia, and Amazon, with Amazon also securing a major strategic partnership covering cloud infrastructure and custom AI chips. The deal leaves Microsoft’s position intact, with Azure remaining the exclusive cloud for OpenAI’s core APIs and products, as OpenAI moves closer to a potential IPO later this year.

My PoV: OpenAI is no longer just raising capital; it is building infrastructure leverage across competing hyperscalers. The AI platform landscape is consolidating fast, and the enterprise partnerships you form today will be difficult to unwind. Choose with eyes open.

AI’s Social Contract is Cracking

Two signals this week pointed to the same underlying tension. Artificial intelligence is beginning to erode the economic model behind India’s IT and outsourcing boom, as tasks once offshored to millions of graduates can increasingly be done by machines. Hiring slowdowns at major firms signal that automation is arriving before mass layoffs, putting pressure on young, entry-level workers. Simultaneously, Block cut nearly half its workforce, explicitly naming AI as the cause, making it the first major company to do so at this scale.

My PoV: These are not isolated incidents. They are early signals of a structural reckoning. India is racing to become a compute power while its labour model erodes; the window to bridge that gap is narrow. And Block’s candour, intentional or not, has opened a door that will be hard to close. Regulators, boards, and workforces will now expect transparency on AI-driven headcount decisions. If you haven’t developed a clear internal narrative on this, you are already behind.

From Training to Running AI Everywhere

Nvidia is preparing a new chip platform focused on AI inference, the real-time processing that turns trained models into fast, usable answers, signalling a shift beyond pure training dominance. The move reflects growing pressure from customers and rivals to deliver lower-latency, more efficient AI systems at scale, especially for consumer and enterprise applications. In the same week, Dell shares surged 22% after the company beat Q4 earnings expectations and raised guidance, driven by strong momentum in AI servers. Management expects AI server revenue to more than double to ~$50bn by 2027, even as memory shortages push up component costs across the industry.

My PoV: The first wave of AI investment was about who could train the biggest models. The next is about who can run AI economically at the point of need. Inference efficiency will define the unit economics of every enterprise AI product within 24 months. It deserves a place in your architecture conversations now, not later.
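What "inference efficiency defines unit economics" means in practice can be sketched with a per-query cost and margin calculation. All token counts and prices below are assumptions for illustration, not vendor rates.

```python
# Hypothetical inference unit economics; token counts and per-million-token
# prices are invented for illustration, not real vendor pricing.

def cost_per_query(input_tokens: int, output_tokens: int,
                   usd_per_m_input: float, usd_per_m_output: float) -> float:
    """Inference cost of a single query, priced per million tokens."""
    return (input_tokens * usd_per_m_input +
            output_tokens * usd_per_m_output) / 1_000_000

def gross_margin(price_per_query: float, inference_cost: float) -> float:
    """Fraction of the query price left after paying for inference."""
    return (price_per_query - inference_cost) / price_per_query

# A query with 2,000 input and 500 output tokens, at $3 / $15 per 1M tokens,
# sold into a product that monetises at 5 cents per query:
c = cost_per_query(2_000, 500, 3.0, 15.0)   # $0.0135 per query
m = gross_margin(0.05, c)                   # 73% gross margin
print(f"${c:.4f}/query, {m:.0%} gross margin")
```

Halve the inference cost and the margin moves by double digits, which is why the chip-level inference story above lands directly on product P&Ls.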

Highlight: When a Report Moved Markets

The Citrini Research 2028 Global Intelligence Crisis report became one of the most discussed AI moments of the week. Framed as an “AI doomsday” scenario, it sparked sharp market swings by sketching a future of rapid AI‑driven job losses and cascading economic disruption, briefly wiping billions off technology and financial stocks.

My PoV: Even as many investors and economists challenged the assumptions behind the report, the reaction itself was telling. The deeper signal was not about prediction accuracy, but about sentiment: AI has shifted from a straightforward innovation story to a source of systemic uncertainty with real market consequences.

My Takeaway This Weekend

Two stories are running in parallel, and the gap is widening. One is of extraordinary investment: OpenAI near a trillion-dollar valuation, Amazon deploying large capital, Nvidia moving to own both ends of the AI stack. The other is of disruption arriving faster than the systems built to absorb it, jobs cut and named, a country’s growth model quietly hollowing out, markets rattled by a what-if scenario.

The leadership challenge is no longer proving AI’s value. It is managing the asymmetry, between deployment speed and adaptation pace, between capital market confidence and labour market anxiety. The winners won’t be those who move fastest. They’ll be those who move with enough clarity to bring their organisations with them.

Weekend Notebook #2605 – Industrialization of Intelligence

Published on LinkedIn and amitabhapte.com on 1st Feb 2026

We spent the last two years treating AI like a sophisticated search bar. You ask, it answers. But the signals this week suggest we are moving past the “chatbot” phase and into something much more structural. We are moving from tools that wait for us, to systems that move without us.

The Rise of the Machine Network

Moltbook, a Reddit-style network populated entirely by AI agents, recently surfaced. Whether the user numbers are real is secondary. The insight is the architecture: agents talking to agents, forming factions, and building shared memory.

  • The Shift: We are moving from “AI as a helper” to “AI as a participant.”
  • If the 2010s were about connecting people (Social), the 2020s are about connecting autonomous workflows. When software starts talking to software, the human “prompt” becomes the bottleneck.

China and the Physical S-Curve

While the West chases the “God-model” (AGI), China is winning on diffusion. They aren’t just building LLMs; they are embedding “good enough” intelligence into the physical world: ports, eVTOLs, and factories.

  • The US has the best “brains” (frontier models), but China is building the best “bodies” (embodied AI).
  • By the time we perfect the logic, they may have already locked in the logistics. It’s a classic play: don’t build the most expensive engine; build the most cars.

India’s Compute Sovereignty

India’s 20-year tax holiday for data centers is a fascinating piece of industrial policy. It’s a realization that in an AI economy, compute is the new oil, and the “refineries” (data centers) need to be local.

  • India isn’t just selling talent anymore; it is selling territory for silicon.
  • This moves India from being a “back office” to being a “power plant” for the global AI stack.

The Capital Paradox

Nvidia remains the sun around which everything orbits, but the market is starting to feel the gravity. Microsoft’s recent valuation dip and Meta’s pivot to “superintelligence” spending highlight the tension:

  • We are spending hundreds of billions on “intelligence” before we have a clear map of the “revenue.”
  • Elon Musk’s potential merger of xAI, SpaceX (and possibly Tesla?) is the ultimate vertical integration play. It’s a bet that to win at AI, you need to own the satellites, the chips, and the robots. It’s the Carnegie Steel of the 21st century.

Software is Becoming “Vibes”

The surge in “vibe coding” (Anthropic’s Claude Code) is the ultimate unbundling of the developer. When a non-coder can build an app for $50 over a weekend, the “cost of creation” drops to zero.

  • The Catch: If everyone can build an app, the value of “having an app” disappears.
  • We are flooding the zone with software. The challenge for 2026 isn’t how to build; it’s what is worth building.

The Bottom Line

We are transitioning from AI as a Tool to AI as Infrastructure. In the tool phase, you worry about “prompts.” In the infrastructure phase, you worry about energy, tax policy, and agent coordination. The machine is no longer waiting for us to tell it what to do; it’s busy building the world it plans to run in.

Weekend Notebook #50 – When Guardrails Drop and Hardware Stalls

Published on LinkedIn and amitabhapte.com on 14th Dec 2025


This week in AI – The Great Contradiction

This week, the AI story fractured. Not because progress slowed, but because it accelerated unevenly.

Policy surged ahead. Models leapt forward. Infrastructure hit resistance.

What emerged was a stark contradiction at the heart of the AI economy: intelligence is scaling at digital speed, but deployment is still bound by physical reality.

Three signals made that tension unmistakable.

First, the velocity.

The US administration signalled a decisive shift. Speed now trumps caution. President Trump’s Executive Order blocked states from regulating AI, creating a federal fast lane for Silicon Valley.

The intent was clear: remove friction, accelerate advantage.

Markets responded immediately. OpenAI released GPT-5.2, not just a smarter model, but a professional-grade, agentic system designed for autonomy rather than conversation. This is AI built to act, not assist. The guardrails are thinning, and the models are accelerating.

This wasn’t coincidence. It was causality.

Second, the stall.

While software sprinted, infrastructure stumbled. Oracle shares dropped 11 percent on deployment delays, pulling Nvidia, CoreWeave, and Micron down with them. The reaction wasn’t about earnings. It was about execution.

The reminder was blunt: the Capacity Race is harder than the Capability Race. You can ship code overnight. You cannot pour concrete, secure power, or stabilise grids at the same pace. Physics still sets the tempo.

For leaders, this matters. AI advantage is no longer constrained by algorithms. It is constrained by land, energy, and logistics.

Third, the shift.

Disney invested $1 billion in OpenAI to license its characters for Sora. While others litigate, Disney is operationalising. By moving its IP into generative video workflows, it validated Sora as a production-grade creative engine.

This isn’t just a media story. It’s a strategic pattern. IP owners are moving from defence to deployment, from protecting archives to activating them. The future of content is not about preservation. It’s about animation at scale.


My takeaway this weekend

We are watching infinite digital ambition collide with finite physical reality.

Policy is pushing. Governments are clearing the regulatory path.
Models are pushing. GPT-5.2 is ready for autonomous work.
Physics is pushing back. Infrastructure is now the bottleneck.

The constraint has shifted.

“The bottleneck is no longer policy or software. It is concrete and power. The winner in 2026 will not simply be the company with the smartest model, but the one that can physically deploy intelligence faster than everyone else. AI leadership is becoming an execution discipline.”


Beyond AI: my mindshare – The Faces Behind the Machine

I paused this week on the cover of TIME magazine. The “Person of the Year” wasn’t a single individual, nor was it AI itself, as the “Computer” once was in 1982. It was the Architects of AI: Altman, Huang, Zuckerberg.

That choice matters.

For years, we’ve spoken about AI as if it were weather. Something inevitable. Something happening to us. By putting human faces on the cover, TIME reminded us of a grounding truth:

AI is not weather. It is architecture.

It is the result of choices. Trade-offs. Incentives. Ego. Ambition.

Seeing these builders grouped together, competitors and collaborators at once, reinforced something easy to forget amid the abstractions of silicon and scale. The most powerful operating system shaping AI’s future is still the oldest one we have.

Human nature.

“As we head into the holidays, that’s both comforting and unsettling. The machines are learning fast. But the direction they take still depends on the people building them. And that responsibility hasn’t been automated away.”

Weekend Notebook #32 – GPT-5, Early AI Winners & Losers

Published on LinkedIn and AmitabhApte.com on August 10, 2025


In spotlight this week: GPT-5 lands but not everyone’s cheering

The AI world has been holding its breath for GPT-5, the long-promised leap forward. Now it’s here. But instead of unanimous applause, the launch has landed like a blockbuster film breaking box office records while dividing critics.

OpenAI calls GPT-5 its most capable, reliable, and safe model yet, a multimodal workhorse for coding, writing, health, and complex reasoning. It’s faster, hallucinates less, remembers more, and can now work seamlessly across text, images, and code. Microsoft Copilot is already running on it, meaning millions will soon be using GPT-5 without even knowing it.

On paper, this is the AI assistant we’ve been promised:

  • Longer memory & context so it can finally act like a long-term colleague, not a one-off chatbot.
  • Multimodal fluency for integrated text, image, and code workflows.
  • Enterprise-grade reliability & safety for regulated industries and mission-critical work.

My early take? This is a strategic reset, simplifying model choices for users while pushing benchmark-beating features that play well in health, enterprise, and developer spaces. But some of the most enticing tools, like Google Calendar integration, sit behind the pricier Pro tier, risking a fragmented user experience.

And the user feedback? A mixed bag. Some love the speed and precision. Others miss GPT-4o’s personality, describing GPT-5 as shorter, blunter, and less emotionally intelligent. My bet: early quirks will be ironed out. Whether GPT-5 is better for day-to-day use than GPT-4 or GPT-4o will be decided not by benchmarks, but by how it feels in the hands of real users.


Noteworthy this week: the AI fault lines widen

1. AI revenue champions

2. Strategic shifts

3. Human cost & disruption


In summary: my key takeaway this weekend

GPT-5’s debut shows the next chapter in AI: sophistication, integration, and enterprise deployment. OpenAI’s bet is to make AI the default productivity layer. But capability alone isn’t enough; user experience still wins hearts and adoption.

This week’s wider news makes the contrast sharper. AI is accelerating the rise of companies like Harvey, Palantir, and Duolingo, turning algorithms into revenue and market advantage. But it’s also rewriting the scoreboard in real time, pushing some players off the field entirely.

The lesson? In the AI era, the same force that fuels the winners can just as quickly leave others behind. The future of productivity isn’t just being built; it’s being fought for.