Weekend Notebook #2615 – When AI Becomes Infrastructure, Risk, and Rivalry

Published on LinkedIn and amitabhapte.com on 12th Apr 2026

This was a week where AI showed up as an infrastructure bet, systemic risk, competitive battleground, and talent story, all at once. Many stories. One consistent thread: the foundational layers of the AI economy are being built and contested simultaneously, and the institutions designed for a slower world are catching up in real time.

The Anthropic Week: Revenue, Risk, and Rivalry

Three distinct signals from one company. First, the commercial: Anthropic’s annualised revenue crossed $30 billion, up from $9 billion just four months ago. CoreWeave sealed a multi-year infrastructure deal to power Claude workloads, days after a $21 billion commitment from Meta. Nine of the ten leading AI model providers now run on CoreWeave’s platform. Infrastructure is consolidating fast.

Second, the risk signal. Anthropic introduced Mythos Preview, a model so capable at finding and exploiting software vulnerabilities that the company chose not to release it publicly. Under Project Glasswing, access is limited to Amazon, Apple, Google, Microsoft, JPMorgan, and around 40 critical infrastructure organisations. The model has already identified vulnerabilities across every major operating system and browser, including a 27-year-old flaw in OpenBSD. Treasury officials and the Federal Reserve convened an emergency meeting with Wall Street’s senior bank CEOs. The Bank of England placed Mythos on the agenda of its Cross-Market Operational Resilience Group, alongside the FCA and the National Cyber Security Centre. Canada convened its own session the same week.

Third, the investor story. OpenAI’s shares have become difficult to sell on the secondary market, where around $600 million of stock found very few buyers. Meanwhile, demand for Anthropic shares is described as almost insatiable, with $2 billion in declared buy interest and almost no sellers. OpenAI responded with an investor memo characterising Anthropic as compute-constrained. The defensiveness itself is the signal.

My PoV: Mythos is the clearest signal yet that AI safety is an operational risk category, not a philosophical one. For technology leaders, the question is not whether your organisation uses Anthropic products. It is whether your security posture has been updated for an era where AI can identify and weaponise software vulnerabilities at machine speed. On the investor story: the AI platform you build on today is not easily changed. Governance clarity and consistent product performance now matter as much as benchmark scores.

New Entrants, New Approvals

Meta’s $14.3 billion bet on Alexandr Wang delivered its first output this week. Muse Spark, the first model from Meta Superintelligence Labs, is a natively multimodal reasoning model rebuilt from the ground up over nine months. It is competitive with frontier models on several benchmarks, though not a leader across the board. More significant than the model is the strategy: Muse Spark launched as a closed, proprietary product. Meta, which built its AI identity on open-source Llama, has quietly changed its approach. With capital expenditure planned at $115 to $135 billion in 2026, nearly double last year, and three billion daily users as a distribution surface, Meta is no longer treating AI as an experiment.

Separately, the Netherlands became the first EU country to formally approve Tesla’s Full Self-Driving Supervised system, after 18 months of testing covering 1.6 million kilometres on European roads. The system is not autonomous: the driver remains legally responsible and must be ready to intervene. But the approval, under EU mutual recognition rules, opens a pathway to continent-wide rollout by mid-2026. It is the first time a physical AI system of this complexity has passed rigorous European regulatory scrutiny, and the precedent will matter well beyond vehicles.

My PoV: Meta’s shift from open to closed signals that distribution advantage, not model openness, is where the competitive moat is now being built. For enterprise leaders, the Tesla approval matters less as a driving story and more as a governance template. Physical AI systems require documented safety evidence, long evaluation windows, and ongoing reporting obligations. Build that infrastructure now, before the regulator requests it.

India’s Technical Capital Comes of Age

Two data points deserve to be read together. GitHub reported that India now has 27 million developers on its platform, 15 percent of the global total, with more than two million new joiners in 2026 alone, more than any other country. India is the world’s second largest contributor to open-source AI projects, with over 7.5 million contributions on GitHub. At the same time, TCS posted Q4 results showing $12 billion in contract value for the quarter, $40.7 billion for the year, and annualised AI revenue of $2.3 billion. Its HyperVault data centre business, targeting 1 gigawatt of capacity, has moved into commercial structuring with hyperscalers and frontier AI companies. The positioning is explicit: infrastructure to intelligence, end to end.

My PoV: India is simultaneously facing the erosion of traditional IT outsourcing as AI automates entry-level tasks, and building the technical and infrastructure base to compete in the next generation of AI deployment. A country producing 27 million GitHub developers and the world’s second largest open-source AI contributor base is not a back office. It is a source of technical capital at a scale few geographies can match. Enterprise talent strategies that are not designed to work with that pipeline are working around it at significant cost.

My Takeaway This Weekend

The model layer of AI is commoditising quickly. The infrastructure layer, physical, computational, regulatory, and human, is not. The companies and countries securing advantaged positions in those foundational layers will shape the AI decade. The ones still treating AI as a product decision will find themselves working within a landscape that others have already built.

The Mythos story, the CoreWeave deals, the Tesla approval, India’s developer numbers, Meta’s infrastructure bet: none are separate stories. They are all evidence of the same transition. Intelligence is no longer arriving as a feature. It is arriving as a structural condition. The leadership question is no longer whether to engage. It is whether your organisation is building on the right foundations before the terrain gets harder to move on.

Weekend Notebook #2614 – When AI Sets the Terms

Published on LinkedIn and amitabhapte.com on 5th April 2026

Three forces are running simultaneously through the AI landscape right now: a scramble for physical control of the underlying infrastructure, a repricing of how AI-driven value gets negotiated, and a fracture in how the world governs it. This week’s signals belong to all three.

1. The Race to Own the Substrate

New IDC data reviewed by Reuters shows Chinese GPU and AI chip makers captured 41% of China’s AI accelerator market in 2025. Huawei alone shipped around 812,000 chips, with Alibaba’s T-Head and Baidu’s Kunlunxin growing behind it. Nvidia still leads at 55%, but the retreat is real. 

The same instinct is driving infrastructure investment across Asia. Airtel raised $1 billion for its data centre arm Nxtra from Carlyle, Alpha Wave, and Anchorage Capital, targeting a scale-up to 1GW of capacity in India, a country that already has Google’s $15 billion data centre commitment and a 20-year tax holiday for hyperscalers. Microsoft’s $10 billion commitment to Japan through 2029 is structured similarly: AI infrastructure, national cybersecurity cooperation, and data processed inside Japan’s borders. These are sovereignty arrangements as much as commercial deals. 

2. AI Is Changing the Terms of Access

In enterprise software, ServiceNow’s CEO Bill McDermott has repositioned the company around a single argument: AI models identify problems but struggle to execute reliably across governed, auditable enterprise workflows. ServiceNow owns the last-mile execution layer, and its Now Assist product is tracking toward a $1 billion annual run rate. The model shift underneath it is structural: from per-seat licensing to outcome-based pricing, what McDermott calls digital labour. 
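The structural difference between the two models is worth making concrete. A toy comparison in Python, where every figure is an illustrative assumption rather than anything ServiceNow has disclosed:

```python
def per_seat_revenue(seats: int, price_per_seat: float) -> float:
    """Classic SaaS: revenue scales with licensed human users."""
    return seats * price_per_seat

def outcome_revenue(outcomes: int, price_per_outcome: float) -> float:
    """'Digital labour': revenue scales with work completed, human or agent."""
    return outcomes * price_per_outcome

# Illustrative starting point: 1,000 seats at $100/seat, or 20,000
# resolved workflows at $5 each. Same top line today.
seat_based = per_seat_revenue(1_000, 100.0)    # $100,000
outcome_based = outcome_revenue(20_000, 5.0)   # $100,000

# If agents triple completed workflows while human seats fall 30%,
# the per-seat line shrinks while the outcome line grows.
seat_based_next = per_seat_revenue(700, 100.0)    # $70,000
outcome_based_next = outcome_revenue(60_000, 5.0)  # $300,000
```

The point of the sketch is the divergence in the second scenario: under per-seat pricing, AI productivity erodes vendor revenue; under outcome pricing, it compounds it.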

In capital markets, the dynamic is rawer. SpaceX is targeting a valuation above $2 trillion in what could be the largest IPO in history. According to the New York Times, Musk has required the lead banks, Morgan Stanley, Goldman Sachs, JPMorgan, Bank of America, and Citigroup, to purchase Grok subscriptions as a condition of the mandate. Some have agreed to spend tens of millions per year. AI adoption bundled into the price of deal access is a new distribution model. 

3. AI Governance Is Becoming a Procurement Reality

Several jurisdictions are now introducing procurement standards that require AI vendors to demonstrate safety and privacy safeguards before accessing public sector contracts. California’s Executive Order N-5-26 is a recent example: companies seeking state contracts must disclose safeguards against harmful content, bias, and civil rights violations, with agencies conducting their own independent assessments. The practical result is a growing patchwork of standards that enterprise technology teams will have to navigate across geographies. California’s procurement rules have a history of becoming de facto global benchmarks, much as GDPR did from a single jurisdiction. Any organisation selling AI products into regulated markets or the public sector should treat this as an active compliance question, not a future one.

4. Japan: Automation as Survival, Not Disruption

TechCrunch reported this week on Japan’s push into physical AI, driven by demographic emergency rather than efficiency ambition. With over 28% of the population above 65 and a working-age population contracting annually, Japan is deploying robots to fill positions that cannot be staffed. The government has committed $6.3 billion and is targeting 30% of the global physical AI market by 2040. The industry signal is unambiguous: customer-paid deployments, full-shift operation, measurable productivity. Japan is the clearest preview of what automation looks like when labour scarcity, not cost reduction, is the driver. 

My Takeaway This Weekend

The infrastructure layer of AI is being claimed physically and politically. The business models above it are repricing around outcomes. The governance frameworks meant to contain it are fracturing. And the labour markets it will reshape are already showing, in Japan, what comes next. AI strategy, infrastructure strategy, talent strategy, and governance strategy are now the same conversation. Running them separately is how organisations fall behind without noticing.

Weekend Notebook #2613 – When AI Meets Friction

Published on LinkedIn and amitabhapte.com on 29th Mar 2026

This week, ambition collided with reality across nearly every front of the AI story. Agentic commerce promised to remove the human from the checkout. A new model leaked before it was ready to launch. The internet crossed a threshold most of us hadn’t noticed. And one logistics giant did something quietly radical: it decided to teach half a million people to work with AI rather than step aside for it. Different signals. Same underlying tension. The gap between what AI can do and what organisations are actually ready for is growing. That gap is where leadership happens.

Agentic Commerce: The First Honest Report Card

Late last year, OpenAI launched Instant Checkout, a feature that lets shoppers complete purchases directly inside ChatGPT without ever visiting a retailer’s website. Walmart signed up as the launch partner. Etsy and Shopify quickly followed. The narrative was compelling: conversational commerce had arrived. The results were not. Walmart has now disclosed that conversion rates inside ChatGPT were three times lower than for click-out experiences that redirected users to Walmart’s own site. That is not a rounding error. It is a structural finding. OpenAI has since moved on, phasing out Instant Checkout in favour of an app-based model that gives retailers more control of the transaction. Walmart is now embedding its own chatbot, Sparky, directly into ChatGPT and Google Gemini, rather than handing the checkout process to a third party.

Meanwhile, Gap has become the first major fashion retailer to launch direct checkout within Google’s Gemini platform, part of an emerging Universal Commerce Protocol that Google has been rolling out since January. The approach is different in intent: Gap is pairing the checkout integration with an AI-powered sizing tool, specifically targeting the return rate problem that plagues online apparel. Net sales were up 2% in Q4 2025, with online sales growing 5%. The CTO was clear that this is about solving real customer problems, not chasing innovation for its own sake.

My PoV: The Walmart data is the most useful signal the agentic commerce story has produced. Consumers will use AI for discovery. They are not yet ready to surrender the checkout to it. The friction of a familiar interface, seeing the full cart, entering payment details on a trusted site, provides reassurance that an embedded AI flow does not yet replicate. For enterprise and retail leaders, the lesson is architectural: own the transaction layer. Let the AI own the discovery. The retailers now embedding their own branded experiences into AI platforms, rather than ceding the whole journey, are making the smarter structural bet.

Anthropic’s Week: Values as a Growth Strategy

It has been a remarkable few months for Anthropic. A public standoff with the US Department of Defense over the use of Claude in lethal autonomous systems, followed by Super Bowl ads that went after OpenAI’s decision to serve ads to its users, has produced something unexpected: a subscriber surge. TechCrunch analysis of 28 million US consumer transactions shows paid subscriptions more than doubling since the start of 2026, with record new sign-ups in January and February. Web traffic was up 43% month-on-month in February and nearly tripled year-on-year. Most new subscribers are at the entry-level Pro tier at $20 per month. Claude Code and Claude Cowork, the developer and productivity tools released in January, have accelerated that growth further.

The same week brought a different kind of Anthropic headline. The company inadvertently exposed an internal draft blog post in a publicly searchable data store, revealing a new model under development called Claude Mythos. The draft described it as the company’s most powerful model to date, part of a new capability tier called Capybara, significantly beyond the current Opus tier. The document also described the model as posing unprecedented cybersecurity risks, specifically for its ability to identify and exploit software vulnerabilities at speed. Cybersecurity stocks fell immediately: CrowdStrike, Palo Alto Networks and Zscaler each dropped around 6%. Anthropic confirmed the model exists and is being tested with early access customers.

And underpinning all of this, a new report from Human Security found that AI and automated traffic have, for the first time, overtaken human traffic on the internet. Automated traffic grew eight times faster than human traffic in 2025. AI-driven traffic alone grew 187% across the calendar year. The internet was built on the assumption that a human being was on the other side of the screen. That assumption is no longer safe.

My PoV: Three separate but connected signals from Anthropic this week. First, values can be a growth driver. Taking a principled public position on how AI should and should not be used attracted consumers in a way that a model benchmark never could. That is worth paying attention to for any enterprise working out how to position itself in the AI market. Second, the Mythos leak reminds us that the cybersecurity stakes are rising with every capability jump. Anthropic’s plan to give cyber defenders early access before general release is the right instinct, but the gap between what AI can do and what defenders are prepared for is narrowing fast. Third, if the majority of internet traffic is now non-human, the infrastructure assumptions of most enterprise digital strategies need revisiting, from fraud detection to API design to web analytics.

AI and the Financial System: A Stress Test No One Planned For

While AI companies attract record investment, a less discussed story is developing in private credit. Shadow banking, the network of private credit funds, business development companies and non-bank lenders that has grown significantly since 2008, has been heavily exposed to software-sector loans. The concern, now surfacing in the mainstream, is that AI may be systematically undermining the value of the software companies these funds have lent against. Apollo Global Management was among the first to flag it publicly last year: “Is software dead?” is now the question private credit managers are trying to price. A closely watched index of 44 business development companies shed around $5 billion in February. The Bank of England has announced it will conduct the world’s first stress test of the shadow banking sector. Lloyd Blankfein has drawn parallels to 2005 and 2006, when hidden leverage was building quietly beneath a rising tide.

My PoV: This story rarely appears in AI newsletters. It should. The thesis is straightforward: if AI can replace significant portions of software development work, the revenue and margins of many mid-market software companies, which form the collateral base for billions in private credit, come under structural pressure. This is not a prediction of an imminent crisis. It is an observation that the financial system has not yet priced AI disruption into the sectors most exposed to it. For enterprise technology leaders, this has a practical implication: the cost, availability and terms of technology financing are likely to become more volatile. AI is not just disrupting products. It is beginning to reprice the capital structures behind them.

The Workforce Question: FedEx Bets on Learning

Against a backdrop of sector layoffs and automation anxiety, FedEx has launched what may be the largest corporate AI upskilling programme in logistics. The initiative covers more than 400,000 employees globally, with personalised, role-based modules that will update monthly. The programme is explicitly tied to internal promotion pathways. The company calls it “promotion-ready” AI training. Frontline workers are already applying for corporate roles at higher rates since the programme launched. Every C-suite executive at FedEx spent two days in Silicon Valley selecting the right technology partners before a single module was deployed.

This matters in context. UPS announced 30,000 layoffs on top of 48,000 the previous year. FedEx has also made cuts. But its strategic posture is distinctly different: use AI to make the workforce more capable, not smaller. The company measures something it calls AIQ, an AI quotient, tracking progress rather than just completion rates. Chief Data and Information Officer Vishal Talwar was direct: “We are measuring progress around AI, not necessarily just success, because it’s going to be very difficult to say this success is only attributed to AI.”

My PoV: Only 28% of organisations have embedded continuous AI learning, according to Accenture’s 2026 Pulse of Change report. FedEx is in that minority, and it is moving at a scale that few others have attempted. The principle behind the programme is the right one: AI literacy cannot be a specialist skill. It needs to be a baseline capability across every level of the organisation, from warehouse floor to boardroom. The harder question is how to measure the business value generated rather than just the learning hours logged. That measurement challenge is the next frontier of enterprise AI investment discipline.

My Takeaway This Weekend

Four stories, one shared theme: AI is not meeting reality gently. Agentic commerce stumbled at the transaction layer, where trust has always mattered most. A powerful new AI model leaked before its makers were ready, and the market reacted to the risk before the product even shipped. The internet crossed a threshold that few enterprise strategies were built for. And in logistics, one company decided to bet on its people rather than against them.

The leaders navigating this well are not those with the most ambitious AI roadmaps. They are the ones who are honest about where friction is real, where trust has not yet been earned, and where their own organisations need to build capability before deploying it. Friction is not a failure of technology. It is the system telling you where the work still needs to be done.

Weekend Notebook #2612 – When Agents Become the Architecture

GTC Live 2026 Keynote Pregame – photo credit NVIDIA GTC

Published on LinkedIn and amitabhapte.com on 22nd Mar 2026

GTC 2026 drew 30,000 people to San Jose. Jensen Huang announced $1 trillion in confirmed orders for Blackwell and Vera Rubin systems through 2027, double last year’s projection. But the number was not the headline. The architecture behind it was.

From Data Centres to AI Factories

Huang reframed the data centre entirely. The new construct is the AI factory, a facility whose primary output is not storage or compute, but tokens. Every query answered, every decision supported, every automated workflow consumes them. The new efficiency metric is not uptime. It is token throughput per watt.

This changes the business case for infrastructure investment. Data centres were cost centres. AI factories are production lines. When the output has a unit price, the conversation with the business shifts fundamentally.
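The arithmetic behind "token throughput per watt" is simple enough to sketch. A back-of-the-envelope model in Python, where every figure (throughput, power draw, electricity price) is an illustrative assumption, not any vendor's number:

```python
def tokens_per_watt(tokens_per_second: float, power_draw_watts: float) -> float:
    """Efficiency metric for an 'AI factory': output tokens per watt-second."""
    return tokens_per_second / power_draw_watts

def cost_per_million_tokens(power_draw_watts: float,
                            tokens_per_second: float,
                            electricity_usd_per_kwh: float) -> float:
    """Electricity cost alone, in USD, to produce one million tokens."""
    seconds = 1_000_000 / tokens_per_second
    kwh = power_draw_watts * seconds / 3_600_000  # watt-seconds -> kWh
    return kwh * electricity_usd_per_kwh

# Hypothetical rack: 10,000 tokens/s at 120 kW, power at $0.08/kWh
eff = tokens_per_watt(10_000, 120_000)                 # ~0.083 tokens per watt-second
cost = cost_per_million_tokens(120_000, 10_000, 0.08)  # ~$0.27 per million tokens
```

Once a facility's output has a unit cost like this, infrastructure decisions can be argued in the same language as any other production line.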

Nvidia’s keynote slides showed 40% of its order pipeline now coming from enterprise, sovereign AI, and industrial customers, not just hyperscalers. The enterprise wave is no longer coming. It has arrived.

My PoV: CIOs who still frame infrastructure purely as a cost management conversation are using the wrong model. Token economics and inference costs belong in your architecture discussions now. Your business leaders will ask about them within 18 months.

The Agent is a Platform

The most important slide of the keynote carried a simple title: ‘Agents: A New Computing Platform.’ Huang’s argument was precise. The PC was a platform. The smartphone was a platform. The agent is next, with its own architecture: a reasoning core connected to memory, sub-agents, tools, files, and a multi-modal prompt layer.
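Huang's anatomy of the agent platform, a reasoning core wired to memory, tools, and sub-agents behind a prompt layer, can be sketched as a data structure. A minimal illustration in Python; the class and method names are mine, not Nvidia's:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """Sketch of the agent-as-platform anatomy: a reasoning core
    connected to memory, tools, and sub-agents."""
    reason: Callable[[str, list], str]               # the reasoning core
    memory: list = field(default_factory=list)       # task/conversation memory
    tools: dict = field(default_factory=dict)        # name -> callable
    sub_agents: dict = field(default_factory=dict)   # name -> Agent

    def run(self, prompt: str) -> str:
        self.memory.append(prompt)
        plan = self.reason(prompt, self.memory)
        # Delegate to a tool or sub-agent if the plan names one, else answer.
        if plan in self.tools:
            result = self.tools[plan](prompt)
        elif plan in self.sub_agents:
            result = self.sub_agents[plan].run(prompt)
        else:
            result = plan
        self.memory.append(result)
        return result

# Toy usage: a core that routes search-like prompts to a search tool
core = lambda p, m: "search" if "find" in p else f"answer: {p}"
agent = Agent(reason=core, tools={"search": lambda p: f"results for '{p}'"})
```

The structural claim is visible even at this toy scale: the agent is not a feature inside an application, it is the thing that owns the loop and calls everything else.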

Nvidia made this concrete with NemoClaw, an enterprise-ready implementation of the OpenClaw agentic framework, bringing autonomous agents inside the enterprise firewall with privacy controls and policy guardrails. Huang also noted that Nvidia’s own engineers will receive annual token budgets as a productivity metric. Token consumption is becoming a measure of knowledge work output.

My PoV: If the agent is a platform, enterprise architecture must be designed around it, not retrofitted for it. The question is not which agent tool to pilot. It is what your data, security, and integration architecture looks like when agents become the primary consumers of enterprise systems.

Enterprise IT: From SaaS to Agent-as-a-Service

One slide showed two pictures. On the left: today. Data centre, SaaS software, global systems integrators, humans on top. On the right: tomorrow. An AI factory generating tokens, software and AI providers connected by agents, and humans repositioned as enterprise information workers directing and overseeing rather than executing.

Huang called this the Enterprise IT Renaissance. Not disruption. Not replacement. Renaissance. And the implications for software were explicit: every SaaS company must become an Agentic-as-a-Service company. The subscription model built on human users logging in is giving way to a consumption model built on agents accessing capabilities programmatically.

My PoV: This is the most significant vendor landscape shift since the move to cloud. The roadmap conversations you have with your major software vendors over the next twelve months should be explicitly about their agentic strategy. If they do not have a credible one, that is a signal worth taking seriously.

My Takeaway This Weekend

GTC 2026 was not a product launch. It was an architectural declaration. The era of AI as a layer on top of existing systems is ending. What follows is AI as the foundation, with tokens as the unit of value, agents as the primary computing paradigm, and enterprise IT reborn around AI factories.

The organisations that rearchitect early will compound advantage. Those that treat this as another cycle to manage carefully will find the gap harder to close each quarter.

Weekend Notebook #2611 – When AI Decides Who Works

Published on LinkedIn and amitabhapte.com on 15th March 2026

This week, AI moved from strategy decks to org charts. Companies restructured around it, economists warned about it, robots were built to replace entry-level roles on both sides of the desk, and a startup reframed the computer itself as a delegate. AI is no longer just a tool you choose to use. It is increasingly the logic by which decisions about people, capital, and infrastructure get made.

1. Meta’s Bet: Fewer People, More Agents, New Markets

Three Meta stories this week that belong together. Reuters reported that Meta is planning layoffs affecting up to 20% of its workforce of roughly 79,000 people, its largest restructuring since 2022. The rationale is twofold: offset $600 billion in planned data centre investment by 2028 and capture the productivity gains of AI-assisted workers. In the same week, Meta acquired Moltbook, the Reddit-style social network for AI agents, folding it into Meta Superintelligence Labs as an always-on directory for agent-to-agent coordination. And Reliance Industries restructured its AI subsidiary REIL, with Meta’s Facebook Overseas picking up a 30% stake, formalising a strategic partnership targeting enterprise AI at India’s scale.

My PoV: These are not separate decisions. Meta is deliberately rebalancing: cutting human costs, acquiring agent infrastructure, and planting equity stakes in high-growth markets, all at the same time. If you haven’t built a clear internal narrative linking your AI investment to workforce implications, you are already behind the conversation your board is having.

2. The End of Entry-Level

ServiceNow CEO Bill McDermott told CNBC that graduate unemployment could reach the mid-30s in percentage terms within a few years as agents absorb white-collar entry-level work. To put that in context, the Federal Reserve Bank of New York currently puts graduate unemployment at 5.7%, with underemployment at 42.5%, the highest since 2020. The same week, Travis Kalanick launched Atoms, a specialised industrial robotics company targeting food, mining, and transport. His framing was deliberately economic: “gainfully employed robots, machines best suited for the job at hand.” Where McDermott sees AI compressing white-collar entry points, Kalanick is building the physical equivalent for blue-collar work. The entry point to work is narrowing on both sides at once.

My PoV: The mid-level talent of 2028 is being shaped right now. Entry-level pipelines into software, operations, and logistics are compressing simultaneously. Review your early-career hiring and graduate development programmes, not as a cost decision, but as a strategic investment in the people who will govern and manage your AI systems over the next decade.

3. When AI Becomes the Interface

Google launched Ask Maps, a Gemini-powered conversational layer inside the world’s most-used navigation app. With 2 billion monthly users, the shift is not subtle. You no longer type a destination. You describe a situation and let the system figure it out. Perplexity went further, launching Computer, a general-purpose digital worker that operates your full software stack, breaks goals into sub-tasks, and deploys specialised agents to get them done. In enterprise testing benchmarked against McKinsey, Harvard, MIT, and BCG standards, it completed an estimated 3.25 years of work in four weeks. Both products make the same argument: the interface layer is collapsing from menus and search boxes into intent and delegation.

My PoV: Users are learning to describe outcomes rather than navigate software. That expectation does not stay in consumer apps. It is already arriving in how people interact with enterprise systems, ERP, CRM, procurement, and the rest. Conversational, intent-driven interfaces need to be on your near-term roadmap. Not your 2028 one.

4. India Builds the Stack

The Adani Group committed $100 billion to build renewable-powered hyperscale AI data centres across India by 2035, expanding from 2GW to 5GW through AdaniConneX, anchored by partnerships with Google and Microsoft. Gautam Adani put it plainly: “India will not be a mere consumer in the AI age. We will be the creators, the builders, and the exporters of intelligence.” Read alongside the Reliance-Meta JV above, India’s approach is becoming structurally distinct. It is not just attracting global capital. It is negotiating equity and infrastructure ownership in return.

My PoV: India is building a domestic AI stack with real strategic ambition behind it. For enterprises with India operations, outsourcing relationships, or supply chain exposure, the talent and compute environment there is changing faster than most roadmaps account for. Worth factoring in sooner rather than later.

My Takeaway This Weekend

The word that connects this week’s stories is delegation. Companies are delegating headcount decisions to AI economics. Robots are taking on entry-level physical tasks. Google and Perplexity are delegating the interface itself to agents. India is delegating compute sovereignty to its own industrial groups.

AI leadership in 2026 is no longer about adoption. It is about knowing what to delegate, to whom, and under what governance. The organisations that navigate this well will not be the fastest movers. They will be the ones that redesigned their decision rights before the machine made the decision for them.

Weekend Notebook #2610 – When Intelligence goes Mainstream

MWC 2026, Barcelona. Photo credit GSMA

Published on LinkedIn and amitabhapte.com on 8th Mar 2026

Three signals this week. Barcelona’s biggest mobile show. Apple’s biggest product week in years. And the most honest labour economics report the AI industry has produced. Different stages, same underlying story: intelligence is arriving everywhere at once, and the gap between capability and consequence is widening.

MWC Barcelona 2026: AI Moves into the Pipe

The GSMA’s theme this year was “The IQ Era.” For once, the branding matched the floor. MWC 2026 wasn’t about device launches. It was about AI embedding into network infrastructure itself.

The most consequential announcements came from operators, not handset makers. The GSMA launched Open Telco AI, a collective industry effort to weave AI into carrier operations. Qualcomm’s new X105 modem embeds an AI processor directly in the chip, a 6G stepping stone that will shape OEM roadmaps for 2027 devices. Deutsche Telekom debuted an AI call assistant that lives in the network, not in an app. And AWS committed €33 billion to Spain, explicitly framing the country as its European AI epicentre.

My PoV: Telecom providers are quietly becoming AI infrastructure providers. When intelligence is embedded at the carrier layer, every device on that network gains capability it didn’t ship with. Your connectivity strategy and your AI strategy are now the same strategy. Most enterprise roadmaps haven’t caught up to that yet.

Apple’s Big Week: Intelligence at $599

Apple launched seven products in three days. Two stand out.

The iPhone 17e brings the A19 chip, 256GB base storage, and full Apple Intelligence to the $599 price point. The story isn’t the device, it’s the distribution. Apple’s AI stack just reached a much larger addressable base.

The MacBook Neo is more significant. A $599 laptop running an A18 Pro chip, the same silicon as the iPhone 16 Pro, with Apple claiming 3x faster on-device AI performance than comparable Intel machines. It is the first Mac powered by an iPhone chip. The architectural wall between Apple’s phone and laptop lines has come down.

My PoV: This week wasn’t about hardware. It was about what happens when AI-capable silicon reaches commodity pricing across every form factor. Combined with agentic coding tools like Claude Code, the barrier to building functional software has effectively hit zero. The question for technology leaders is no longer which devices to provision, it’s how to govern what a workforce of accidental developers builds with them.

The Anthropic Labour Report: The Gap Between Fear and Fact

Anthropic published “Labour Market Impacts of AI: A New Measure and Early Evidence” this week. It is worth reading carefully.

The paper introduces “observed exposure”, measuring what AI is actually being used for at work, not what it theoretically could do. The gap is stark: Computer and Math roles have 94% theoretical AI exposure but only 33% actual usage coverage today. Legal sits at 80% theoretical, 15% actual. The wave is real. The timeline is slower than the headlines suggest.

Computer programmers top the “actually happening now” list at 75% task coverage, followed by customer service at 70% and data entry at 67%. Yet unemployment in exposed occupations has not meaningfully risen since ChatGPT’s 2022 launch. The one signal worth watching: hiring of workers aged 22–25 into exposed roles has quietly slowed.
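The gap the report highlights is simple arithmetic. A minimal sketch, using only the headline figures quoted above (not the paper's full dataset), of the theoretical-minus-observed exposure per occupation:

```python
# Illustrative only: headline figures cited in this post, mapping an
# occupation group to (theoretical exposure %, observed usage coverage %).
exposure = {
    "Computer and Math": (94, 33),
    "Legal": (80, 15),
}

# The unrealised gap: the share of theoretically automatable work
# that AI is not yet actually being used for.
gaps = {role: theo - obs for role, (theo, obs) in exposure.items()}
print(gaps)  # {'Computer and Math': 61, 'Legal': 65}
```

Roughly two-thirds of the theoretically exposed work in both categories has not yet been touched, which is the paper's "slower than the headlines" point in one number.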

My PoV: That entry-level hiring signal matters more than aggregate unemployment data. The mid-level talent of 2028 is being shaped right now. Workers who are not hired into exposed roles today don’t disappear, they redirect. But the pipeline compresses. For enterprise leaders, the implication is concrete: talent acquisition strategies in software development, customer operations, and financial analysis need to account for a structurally thinner entry cohort arriving in the next two to three years.

My Takeaway This Weekend

MWC confirmed AI is now infrastructure, in the network, not on top of it. Apple confirmed AI silicon is now a commodity, at $599 both in your pocket and on your desk. Anthropic confirmed the labour disruption is real but the clock is slower than feared, for now.

The word “yet” is doing heavy lifting across all three stories. The period between “not yet” and “already happened” is consistently shorter than organisations plan for. The question is not whether these shifts are coming. It is whether your architecture, your talent pipeline, and your operating model are being built for the right horizon.

Weekend Notebook #2609 – When AI becomes the risk

Published on LinkedIn and amitabhapte.com on 1st Mar 2026

This week, the AI story fractured, not in capability, but in confidence. Capital is still flooding in. The technology is still advancing. But disruption and doubt arrived in the same week as the deal announcements.

The OpenAI Capital Architecture

OpenAI is raising $110 billion in a landmark funding round that values the company at $840 billion, highlighting the intensity of global investment in artificial intelligence. The round is led by SoftBank, Nvidia, and Amazon, with Amazon also securing a major strategic partnership covering cloud infrastructure and custom AI chips. The deal leaves Microsoft’s position intact, with Azure remaining the exclusive cloud for OpenAI’s core APIs and products, as OpenAI moves closer to a potential IPO later this year.

My PoV: OpenAI is no longer just raising capital, it is building infrastructure leverage across competing hyper-scalers. The AI platform landscape is consolidating fast, and the enterprise partnerships you form today will be difficult to unwind. Choose with eyes open.

AI’s Social Contract is Cracking

Two signals this week pointed to the same underlying tension. Artificial intelligence is beginning to erode the economic model behind India’s IT and outsourcing boom, as tasks once offshored to millions of graduates can increasingly be done by machines. Hiring slowdowns at major firms signal that automation is arriving before mass layoffs, putting pressure on young, entry-level workers. Simultaneously, Block cut nearly half its workforce, explicitly naming AI as the cause, the first major corporate leader to do so at this scale.

My PoV: These are not isolated incidents. They are early signals of a structural reckoning. India is racing to become a compute power while its labour model erodes, the window to bridge that gap is narrow. And Block’s candour, intentional or not, has opened a door that will be hard to close. Regulators, boards, and workforces will now expect transparency on AI-driven headcount decisions. If you haven’t developed a clear internal narrative on this, you are already behind.

From Training to Running AI Everywhere

Nvidia is preparing a new chip platform focused on AI inference, the real-time processing that turns trained models into fast, usable answers, signalling a shift beyond pure training dominance. The move reflects growing pressure from customers and rivals to deliver lower-latency, more efficient AI systems at scale, especially for consumer and enterprise applications. In the same week, Dell shares surged 22% after the company beat Q4 earnings expectations and raised guidance, driven by strong momentum in AI servers. Management expects AI server revenue to more than double to ~$50bn by 2027, even as memory shortages push up component costs across the industry.

My PoV: The first wave of AI investment was about who could train the biggest models. The next is about who can run AI economically at the point of need. Inference efficiency will define the unit economics of every enterprise AI product within 24 months. It deserves a place in your architecture conversations now, not later.

Highlight: When a Report Moved Markets

The Citrini Research 2028 Global Intelligence Crisis report became one of the most discussed AI moments of the week. Framed as an “AI doomsday” scenario, it sparked sharp market swings by sketching a future of rapid AI‑driven job losses and cascading economic disruption, briefly wiping billions off technology and financial stocks.

My PoV: Even as many investors and economists challenged the assumptions behind the report, the reaction itself was telling. The deeper signal was not about prediction accuracy, but about sentiment: AI has shifted from a straightforward innovation story to a source of systemic uncertainty with real market consequences.

My Takeaway This Weekend

Two stories are running in parallel, and the gap is widening. One is of extraordinary investment: OpenAI near a trillion-dollar valuation, Amazon deploying large capital, Nvidia moving to own both ends of the AI stack. The other is of disruption arriving faster than the systems built to absorb it, jobs cut and named, a country’s growth model quietly hollowing out, markets rattled by a what-if scenario.

The leadership challenge is no longer proving AI’s value. It is managing the asymmetry, between deployment speed and adaptation pace, between capital market confidence and labour market anxiety. The winners won’t be those who move fastest. They’ll be those who move with enough clarity to bring their organisations with them.

Weekend Notebook #2608 – India’s AI Moment: Capital, Compute, Confidence

PM in a group photograph along with global tech leaders at the Opening Ceremony of India AI Impact Summit – 2026 at Bharat Mandapam, in New Delhi on February 19, 2026.

Published on LinkedIn and amitabhapte.com on 22nd Feb 2026

This Week in AI — India Moves from Talk to Build

Most global AI events feel like the same conversation, recycled. The India AI Impact Summit, going by the coverage and announcements this week, read differently.

Fewer vision decks. More committed capital. Less safety debate. More infrastructure.

Five days at Bharat Mandapam in New Delhi. Over half a million visitors. Twenty-plus heads of state. Nearly every major AI CEO in the world, Altman, Pichai, Amodei, Hassabis, in the same room. And a wave of announcements specific enough to take seriously.

The scale is worth stating upfront. Hyperscalers globally are on track to deploy $700 billion in AI capex this year. India pulled a significant share of that attention. Reliance announced $110 billion for data centres and infrastructure over seven years. Adani committed $100 billion for renewable-energy AI data centres by 2035. US tech added its own layer on top.

This was the fourth in the global AI summit series, following Bletchley, Seoul, and Paris. The previous three were dominated by safety debates. India changed the register deliberately. The theme: impact. Access. The Global South. That shift matters, I’ll come back to it.

What They Announced

Google committed $15 billion to build a full-stack AI hub in Visakhapatnam, gigawatt-scale compute plus a new subsea cable gateway to the US. Pichai framed it as becoming a “full-stack partner”, not a cloud vendor. Partnerships with Reliance Jio on a dedicated cloud region and with Indian research institutions on agriculture and climate were also confirmed.

Microsoft arrived with $50 billion earmarked for the Global South, India central to the plan. Its President Brad Smith told CNBC that India could develop its own frontier AI, in specific domains, and that there will be “a variety of different DeepSeek moments” to come, some of them from India. Its India President offered the sharpest line of the week: “AI will not kill jobs. AI will unbundle jobs.” Microsoft research shows 92% of Indian knowledge workers already use AI, with 77% using it daily.

OpenAI opened two new offices in Bengaluru and Mumbai, also partnered with Tata Group to deploy 100MW of AI compute under the HyperVault brand, scaling to 1GW. OpenAI is the first anchor tenant of TCS’s new data centre business. Altman confirmed 100 million weekly active ChatGPT users in India, second only to the US, and called India a potential “full-stack AI leader.”

Anthropic opened its first India office in Bengaluru and partnered with Infosys to deploy Claude into Indian enterprises, starting with a telecom Centre of Excellence. Cognizant is rolling Claude Code to 350,000 employees globally. Air India is using it to build custom software. Dario Amodei confirmed India is Claude’s second-largest market and noted that the “technical intensity of usage here is even more extreme” than elsewhere.

Nvidia expanded partnerships with Indian venture capital firms to deepen exposure to the startup ecosystem. Larsen & Toubro separately unveiled a gigawatt-scale AI factory built on Nvidia GPU infrastructure across Chennai and Mumbai. AMD and TCS are building rack-scale AI infrastructure on AMD’s Helios platform.

One geopolitical detail that deserves more attention: the US and India signed the Pax Silica agreement at the summit, a Trump administration initiative to secure the global supply chain for silicon-based technologies. India has also approved $18 billion in chip manufacturing projects. Compute sovereignty is being treated as a national security matter, not just an infrastructure one.

None of this is coincidental timing. India now sits in the top two markets for both OpenAI and Anthropic. Without being home turf for either.

What the Government Is Building

The corporate announcements got the headlines. The IndiaAI Mission story is the more durable one.

India’s national compute base of 38,000 GPUs is being expanded by a further 20,000 in the near term. The tech minister set a target of $200 billion in AI infrastructure investment over two years. The government-backed BharatGen consortium released Param 2, a 17-billion-parameter model covering 22 Indian languages, built for governance and citizen-service use cases.

One of the most significant knowledge outputs from the week was the release of the AI Impact Casebooks. Developed in collaboration with global partners like the WHO, IEA, and UN Women, these six thematic compendiums document over 170 real-world, scalable AI deployments across Healthcare, Energy, Agriculture, Education, Gender Empowerment, and Accessibility. Rather than focusing on theoretical pilots, the casebooks serve as a “Global South Playbook”, a first-of-its-kind consolidated repository that lets policymakers replicate proven models, such as AI-driven crop planning and early disease diagnosis, in their own regions.

India is not just building for itself. That is new.

Alongside these, the AI Impact Startup Book was launched to map India’s deep-tech ecosystem, highlighting that nearly 70% of India’s growth-stage AI ventures are already operating internationally.

The Domestic Model Stack

One thread that got less coverage than it deserved: India is building its own model layer, not just deploying someone else’s.

Sarvam AI released Sarvam 30B and Sarvam 105B, open-source, mixture-of-experts models built for Indian languages, alongside a full speech stack and Sarvam Kaze, smart glasses with on-device speech and vision. The underlying architecture is the point: intelligence that doesn’t require cloud connectivity, designed for the 800 million people at the edge of India’s network.

Cohere Labs launched multilingual open-weight models supporting 70+ languages, runnable on local devices. Gnani released Vachana, a zero-shot voice-cloning model across 12 languages. Cartesia partnered with Blue Machines on enterprise voice with local data residency. A distinct stack is forming: open-weight models tuned for Indian languages, speech infrastructure for multilingual contact centres, and edge-first deployment for a population where the smartphone is the primary compute device.

This is not a replica of what OpenAI or Anthropic are building. It is a complement. And potentially an export product for Asia and Africa.

The Structural Advantage

India is not trying to outspend the US. Nor replicate China’s state-led model. Its advantage runs differently.

Aadhaar. UPI. ONDC. These are not pilots. They are population-scale systems, proven across linguistic, economic, and connectivity diversity. AI layered on top changes the arithmetic. For instance, ONDC (Open Network for Digital Commerce) is the “final frontier” of India’s Digital Public Infrastructure (DPI). If Aadhaar solved for Identity and UPI solved for Payments, ONDC is solving for Market Access.

Fifty million pending court cases. Adalat AI launched a WhatsApp helpline this week, instant case updates and legal translation in native languages, built on Claude. AI-powered weather forecasts reached millions of Indian farmers last year through a Google DeepMind collaboration with the government. These are structural problems meeting capable tools at the right moment.

My Point of View

I grew up in India. I now lead global technology transformation programmes. This week’s summit signals land differently when you hold both perspectives.

India built its IT leadership on services excellence, reliable delivery, cost advantage, process discipline. That model is under direct pressure from agentic AI, and the people in this sector know it. CEOs of large Indian IT firms may now focus on profitability rather than job creation, reflecting what is already happening to the $280 billion IT services industry.

The counter-signal is the startup layer. Emergent, an Indian vibe-coding platform, announced $100 million in ARR and a new mobile app this week. That pace of scale, from a country where Anthropic had a single employee eighteen months ago, is the real signal about what the next generation of Indian technology companies looks like.

If India limits itself to fine-tuning global models cheaply, it remains a participant. If it builds sector-specific AI systems, invests in public datasets, and scales AI-native enterprises, it becomes an architect.

The intent is visible. The hard part starts now.

The Governance Shift Worth Watching

Bletchley was about safety. Seoul built on it. Paris tilted toward action. India reframed the whole conversation around impact: accessible, multilingual, public-good AI rather than frontier-lab debates.

A Leaders’ Declaration with 70+ signatories is being finalised. The UK-India bilateral AI showcase ran alongside, reinforcing cooperation on standards and commercialisation. The Pax Silica agreement with the US on silicon supply chains signals that AI governance and trade policy are now the same conversation.

For countries across Asia and Africa that have been observers in the Bletchley-to-Paris sequence, India is offering a different frame and a different set of partners. Whether that translates into durable architecture, or remains a positioning story, is the test over the next few years.

My Takeaway This Weekend

The India AI Impact Summit was not about demos.

The commitments are large and layered. $700 billion in global hyperscaler capex this year. $210 billion from Reliance and Adani alone. $200 billion in infrastructure investment targeted over two years by the government. Sovereign GPU capacity being expanded. Domestic foundation models in 22 languages. Global AI companies choosing India as their second home. A startup ecosystem generating nine-figure ARR.

For global technology leaders, one reframe is overdue. India does not belong in the AI strategy slide under “cost optimisation.” It belongs under innovation, deployment, and market creation. The question is no longer whether India is serious. It is whether your strategy is.

Weekend Notebook #2607 – The SaaSpocalypse

Published on LinkedIn and amitabhapte.com on 15th Feb 2026

One word defined markets this week (and to be fair, last week too): SaaSpocalypse.

Coined by Jefferies traders as software stocks entered freefall, the term captures Wall Street’s sudden realization that an entire industry’s business model might have become obsolete. Then, just as quickly, the narrative reversed. By week’s end, the same analysts were calling it overdone.

The Week Markets Cracked

On January 30, Anthropic released 11 open-source plugins for Claude Cowork, an AI assistant that can read files, organize folders, and draft documents. The plugins targeted legal, finance, sales, marketing, and data analytics. The release was framed as a minor product update.

By February 4, nearly $300 billion in market value had evaporated from software stocks.

Thomson Reuters plunged 16% in a single day, its worst drop on record. LegalZoom sank 20%. London’s RELX fell 14%. The software industry ETF had its worst day since April, falling 5.7%. The S&P North American Software Index hit valuation levels not seen since its creation.

“We call it the SaaSpocalypse,” said Jeffrey Favuzza at Jefferies. “Trading is very much ‘get me out’ style selling.”

Days later, Anthropic released Claude Opus 4.6, capable of coordinating teams of AI agents and excelling at financial analysis and market intelligence. Markets trembled again. This wasn’t a one-time event. This was systematic replacement.

Then Wall Street Blinked

By February 11, the narrative had shifted entirely.

JPMorgan released a note calling the selloff excessive, citing “overly bearish outlook on AI disruption and solid fundamentals.” The firm identified 10-14 software stocks as resilient, including Microsoft, ServiceNow, CrowdStrike, and Snowflake.

Goldman Sachs CEO David Solomon said the selloff was “too broad.” Bank of America called it “illogical.”

Jefferies analysis found that 42% of software stocks were trading at or near historical low valuations. The S&P North American Software Index had fallen below 20x forward earnings for the first time ever. The sector’s Relative Strength Index hit 18—the most oversold reading since 1990.
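For readers unfamiliar with the oversold gauge cited above, here is a hedged sketch of Wilder's standard 14-period Relative Strength Index. The price series below is synthetic for illustration, not the actual S&P North American Software Index data:

```python
# Wilder's 14-period RSI: ratio of smoothed average gains to smoothed
# average losses, scaled to 0-100. Readings below 30 are conventionally
# "oversold"; the text above cites a reading of 18.
def rsi(prices, period=14):
    gains, losses = [], []
    for prev, curr in zip(prices, prices[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    # Seed with simple averages over the first window,
    # then apply Wilder's exponential smoothing.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

# A monotonically falling series pins RSI at the floor (deeply oversold)
falling = [100 - 0.8 * i for i in range(30)]
print(round(rsi(falling), 1))  # 0.0
```

An RSI of 18 means losses have overwhelmed gains over the lookback window to a degree the index had not recorded since 1990, which is the force behind the "buy the dip" reversal.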

Suddenly, the narrative wasn’t “software is dead.” It was “buy the dip.”

Yet even as analysts reversed course, the fundamental question remained unanswered: what changed?

What Actually Changed

Claude Cowork isn’t a chatbot. It’s an AI agent with permissions to act. It can review contracts, draft legal summaries, compile compliance workflows, screen financial data, conduct due diligence, and synthesize market intelligence—tasks that currently generate billions in software subscription revenue.

Thomas Shipp at LPL Financial captured the investor anxiety: “Why do I need to pay for software if internal development now takes developers less time with AI? With Claude Cowork, fewer technical users are now empowered to replace existing workflows.”

The business model shift is clear. AI companies are no longer just selling models. They’re owning workflows directly. That’s what spooked markets.

Then on February 12, Anthropic raised $30 billion at a $380 billion valuation—the largest venture deal of 2026. Revenue now exceeds $14 billion run-rate. Microsoft and Nvidia participated. The signal was unmistakable: AI infrastructure spending isn’t slowing. It’s accelerating.

The Paradox

Markets are pricing two contradictory scenarios simultaneously:

Software is dying because AI will replace it. Yet hyperscalers and AI companies are raising and deploying record capital—Meta broke ground on a $10 billion data center this week, Samsung shipped HBM4 memory samples, and Applied Materials reported continued strength in AI semiconductor spending.

If AI is powerful enough to destroy software, the infrastructure supporting it cannot simultaneously be failing. Both cannot be true.

The software sector is expected to deliver 14.1% earnings growth in 2026. Not collapse. Growth. Slower than semiconductors, yes. But growth nonetheless.

What Leaders Should Do Now

The SaaSpocalypse revealed something more important than market volatility. It exposed how unprepared most organizations are for the shift from software-as-tool to AI-as-workflow.

Three questions every CIO and technology leader should answer this quarter:

First: Which software subscriptions are at immediate risk?

Legal research, financial screening, data synthesis, document drafting, basic analytics—these workflows are directly exposed. Don’t wait for renewal cycles to make decisions. Budget now for the transition, whether that means renegotiating contracts, piloting AI alternatives, or accepting that per-seat pricing will shift to outcome-based models.

Second: Where is your defensible moat?

Generic workflows are vulnerable. Mission-critical systems integrated with proprietary enterprise data are defensible. The companies surviving this transition won’t be those with the best interfaces. They’ll be those whose value lies in irreplaceable data and deeply embedded processes that cannot be easily replicated by AI agents.

If your current software vendor’s primary value is the interface rather than the data beneath it, that vendor is at risk. Plan accordingly.

Third: Are you building AI capability or waiting for it to arrive?

The organizations moving now—deploying agents, experimenting with workflow automation, piloting AI-native tools—will have a 12-24 month advantage over those waiting for their existing vendors to integrate AI features.

This isn’t about abandoning enterprise software overnight. It’s about understanding that the next purchasing cycle will look fundamentally different from the last one. Seat-based pricing is ending. Outcome-based pricing is beginning. The transition period is now.
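To make the seat-to-outcome shift concrete, a toy comparison with entirely invented numbers (seat counts, prices, and task volumes are hypothetical, not from any vendor):

```python
# Hypothetical numbers only: annual cost of the same workload under
# seat-based vs outcome-based pricing.
seats = 200                  # licensed users today (invented)
seat_price_month = 75        # $ per seat per month (invented)
tasks_per_month = 30_000     # agent-completed tasks per month (invented)
price_per_task = 0.40        # $ per successful outcome (invented)

seat_cost_year = seats * seat_price_month * 12            # pay for access
outcome_cost_year = tasks_per_month * price_per_task * 12  # pay for results
print(seat_cost_year, outcome_cost_year)  # 180000 144000
```

The point is not the specific figures. Under seat pricing, cost scales with headcount; under outcome pricing, it scales with work done, so if agents reduce the humans in the loop, the two models diverge sharply, and the vendor's revenue base moves with it.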

My Takeaway This Weekend

The SaaSpocalypse wasn’t about one week of market panic. It was about the moment Wall Street recognized that a twenty-year business model is entering its final phase.

The analysts calling the selloff overdone aren’t wrong. Software isn’t dying. Many companies will adapt. Cybersecurity firms, infrastructure platforms, and businesses with genuine data moats will survive and thrive.

But they’re also not addressing the deeper truth: the economics are shifting. From seats to outcomes. From tools to autonomous execution. From helping humans work to replacing work entirely.

The volatility will continue. Markets will swing between “AI destroys everything” and “nothing has changed.” Both narratives miss the point.

What matters isn’t the market’s mood. What matters is whether your organization is prepared for the transition. Because while Wall Street debates valuations, the technology is already here. Anthropic just raised $30 billion. Meta is building gigawatt-scale data centers. AI agents are executing workflows that used to require software subscriptions.

The question isn’t whether this shift is real. The question is whether you’re moving before the market forces you to move.

Weekend Notebook #2606 – When Infrastructure Inflates and Software Deflates

Published on LinkedIn and amitabhapte.com on 8th Feb 2026

This week, the AI economy revealed its deepest contradiction. Not through a single event, but through the violent collision of two opposing forces: infrastructure inflation and software deflation. What emerged was a market in the midst of repricing who wins, who loses, and what value actually means in an agent-first world.

The Capital Paradox: $600 Billion in Bets, $1 Trillion in Doubts

Big Tech will spend $600-650 billion on AI infrastructure in 2026. Alphabet, Amazon, Meta, and Microsoft collectively commit more capital than most nations’ GDP. That’s $50 billion above analyst expectations. The scale is industrial, not digital.

At the same time, those same companies lost over $1 trillion in market value in a single week as investors questioned whether AI revenue will arrive fast enough to justify the spending. The fear isn’t that AI won’t work. It’s that returns may take years, not quarters.

Then came the counterpoint. Anthropic is closing a $20+ billion funding round at a $350 billion valuation, double its initial target, just five months after raising $13 billion. Excess demand. Compressed timelines. This is capital moving at venture velocity into infrastructure-scale deployments.

And Elon Musk merged xAI with SpaceX, creating a $1.25 trillion entity focused on orbital data centers. His pitch: solve Earth’s energy constraints by moving AI compute into space. Whether Wall Street buys it remains to be seen, but the ambition is unmistakable.

We are witnessing a structural inversion. Software, historically the high-margin layer, is commoditizing. Infrastructure, historically the low-margin layer, is becoming the strategic moat. AI advantage no longer comes from just model selection. It comes from infrastructure access: power contracts, compute capacity, and geographic diversification. The organizations that secure long-term energy and compute will have operational leverage others won’t.

This isn’t a technology decision anymore. It’s a supply chain decision. And it belongs in boardroom conversations about resilience, not IT roadmaps about features.

The Software Displacement Moment: From SaaS to Agents

While infrastructure inflates, software deflates, violently.

Anthropic’s Claude Cowork plugins triggered what markets are calling the “SaaSpocalypse”, as $1 trillion was wiped from enterprise software and data analytics stocks. Thomson Reuters fell 15%. LegalZoom dropped 20%. Intuit, Salesforce, and ServiceNow all took double-digit hits.

The catalyst wasn’t a better chatbot. It was the realization that AI agents can perform tasks previously sold as per-seat software subscriptions. Legal research. Financial analysis. Document review. Compliance checks. These aren’t enhancements. They’re substitutes.

Goldman Sachs made that shift explicit this week. After six months of embedded collaboration, Goldman is deploying Anthropic’s Claude agents to automate accounting, compliance, and client onboarding. Not as decision-support tools. As digital co-workers.

Nvidia CEO Jensen Huang called the panic “illogical”, arguing AI will enhance enterprise software rather than replace it. But analysts noted the real risk: even if software survives, pricing power and margins won’t. If AI reduces the need for human seats, seat-based licensing collapses.

The displacement isn’t hypothetical. It’s financial. And it’s forcing an uncomfortable audit across every enterprise software stack. The question technology leaders need to answer now: which tools in our portfolio are defensible, and which are vulnerable to agentic substitution?

Defensible software has deep workflow integration, regulatory moats, or proprietary data that agents can’t easily replicate. Vulnerable software is anything that automates retrieval, summarization, or basic decision logic, tasks agents now do natively.

This isn’t about cutting costs. It’s about redefining what “software” even means in an agent-first architecture. The winners will be platforms that orchestrate agents, not replace them. The losers will be tools that agents simply bypass.

India’s Strategic Positioning: From Back Office to AI Power Plant

While markets panic and capital concentrates, India made two quiet but decisive moves this week.

India doubled the startup recognition period for deep tech companies to 20 years, and raised revenue thresholds to ₹300 crore. The acknowledgement: space, semiconductors, biotech, and AI infrastructure require longer R&D cycles than software ever did. Policy is finally catching up to physics.

Then came the bigger signal. The India-US interim trade agreement will significantly increase access to advanced GPUs and data center equipment, addressing longstanding import duty barriers (20-28%), while positioning India as a trusted AI infrastructure hub. Combined with tax breaks extending to 2047, India is no longer just selling talent. It’s selling sovereignty.

India is playing the long game while others chase quarterly results. By extending deep tech timelines, India acknowledges: foundational innovation takes time. By securing GPU access and offering tax certainty, India is positioning itself as the geographically diversified alternative precisely when Western supply chains need resilience.

For global enterprises, this creates optionality. As AI workloads scale and energy constraints tighten in traditional markets, India will offer compute capacity, regulatory stability, and talent density in a single package. The strategic lesson: watch where infrastructure policy aligns with industrial ambition. That’s where the next decade of AI deployment will compound.

My Takeaway This Weekend

This was the week the AI economy stopped being theoretical and became structural. Infrastructure is inflating. Software is deflating. And the line between them is now a repricing event playing out in real time across global markets.

The winners won’t be those with the smartest models or the shiniest demos. They’ll be those who secure resilient infrastructure, redesign software procurement for an agent-first world, and build operational leverage where others see only cost.

AI leadership is no longer about adoption velocity. It’s about infrastructure resilience, software defensibility, and the judgment to know which bets compound and which simply burn capital.