India’s AI Durbar
Notes on India's AI Impact Summit
Are you Diwan-i-aam or Diwan-i-khas? That was Anmol’s shorthand for the two summits happening simultaneously. The public court at Bharat Mandapam, buzzing with hundreds of thousands of curious Indians. The select court, scattered across closed-door sessions and side events around the city. I spent the week shuttling between the two and stuck in traffic for what felt like the remainder. The energy was unmistakable. So were the concerns that people were more open to voicing in private than public.
I. The Energy
The first thing that hit me entering Bharat Mandapam was the sheer number of people. On the opening day, something like 250,000 showed up. Over the course of the week, attendance reportedly crossed 5 lakh. Bharat Mandapam felt, at times, like a family picnic. Young couples walking hand in hand past expo stalls, students crowding around demos, a palpable, genuine curiosity about this technology that is going to reshape their lives. Multiple global AI leaders — Dario Amodei, Demis Hassabis, Sam Altman — commented on the positivity and enthusiasm they encountered, which stands in notable contrast to the anxious, even hostile reception AI often gets in parts of the West.1
There has been criticism of this open-for-all approach. Too many people, too much of a mela, too exhibitionist. I think the critics are wrong, and broadly agree with Subbarao Kambhampati's point. We are a very young country with a massive working-age population, and AI is going to have a profound impact on our jobs, our services, our governance. Opening the summit up to this many people matters. It is, in fact, a meaningful departure from how India has historically engaged with technology. Whether it was Nehru's philosophy of the state as gatekeeper of which technologies citizens should access ("appropriate technologies") or George Fernandes leading protests against IBM in the 1970s, India has not always treated technological curiosity as something to be encouraged at scale.
Think back to the Startup India summit in 2016. That too was dismissed as a PR exercise. Be that as it may, it had a genuinely catalytic effect on how Indians perceive entrepreneurship, helping make it a more acceptable choice in a deeply risk-averse society. If this summit does even a fraction of that for how people engage with AI, that is a net positive.
The most thrilling thing at the summit, for me, was Sarvam AI. They launched their Sarvam-3 series of open-source models covering 22 Indic languages, at 30B and 105B parameter sizes, using a mixture-of-experts architecture, alongside a full stack of products spanning speech-to-text, text-to-speech, translation, and reasoning.2 The demos were genuinely impressive, and the energy at the launch and their booth was electric. There were also new sovereign models from Gnani.ai and BharatGen.3 For the first time, it felt like India had something concrete to point to: actual models, actual products, actual capability.
Then there were the investment numbers: over $200 billion in pledges across the week.4 These are big numbers responding to a genuine and massive need. India currently hosts only about 3% of global data center capacity while generating nearly 20% of the world's data, a staggering mismatch between where data is produced and where it is processed. Current installed capacity sits at roughly 1.5 GW. The Economic Survey 2025-26 projects a need to reach 8 GW by 2030, more than a fivefold increase. Industry projections suggest a supply shortfall exceeding 1,500 MW by 2033 even with the current pipeline. India's petabyte-per-MW ratio stands at 13.2, compared to China's 4.5, meaning each unit of Indian data center capacity services nearly three times the data load. And this is before the explosion in AI workloads, which are far more compute-intensive than traditional cloud. As Cisco's Jeetu Patel put it at the summit: infrastructure is oxygen for AI, and India is gasping.5
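The arithmetic behind these figures is worth making explicit. A minimal back-of-envelope sketch, using only the numbers cited above:

```python
# Back-of-envelope check on the data center figures cited above,
# using only the numbers quoted in the text.

current_gw = 1.5   # installed capacity (GW)
target_gw = 8.0    # Economic Survey 2025-26 projection for 2030

growth_multiple = target_gw / current_gw
print(f"Required buildout by 2030: {growth_multiple:.1f}x")  # 5.3x

india_pb_per_mw = 13.2  # petabytes served per MW of capacity
china_pb_per_mw = 4.5

load_ratio = india_pb_per_mw / china_pb_per_mw
print(f"Data load per unit capacity vs China: {load_ratio:.1f}x")  # 2.9x
```

Note that both ratios understate the problem: they are computed against today's data volumes, before AI workloads (which are far more compute-intensive per byte) become a meaningful share of the load.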
And perhaps most significantly, India used the summit to make a genuinely important geopolitical argument. This was the first major global AI summit hosted by a Global South nation, and India positioned it as a continuation of the agenda it set at the G20: the idea that AI cannot remain a conversation that positions the Global South as passive consumers. The New Delhi Frontier AI Commitments that emerged from the summit are voluntary and non-binding, but the framing itself is a contribution.
India also signed the US-led Pax Silica declaration, joining a coalition to secure semiconductor and critical mineral supply chains.6 But the signing was not really the main story. The US push to build on the American tech stack drew more attention. White House AI adviser Sriram Krishnan told summit attendees that 'we want the American AI stack to be the bedrock that everyone builds on'.7 It was a line that drew enough backlash online and ruffled enough feathers behind closed doors to make clear that the US-India relationship, while functional, remains bruised. India will do business, but guardedly.
So far, so good. The energy was real, the need is real, the ambition is real. Now for the harder questions.
II. The Gap
Start with those $200 billion in pledges. As the substack author Kakashii pointed out in a sharp note about Yotta specifically,8 this is a company that made very similar announcements in 2024 and even attempted to go public via SPAC on the back of those claims. The broader pattern of conglomerates making eye-popping investment pledges, only for the actual capital deployed to look quite different years later, is well-established enough to warrant a default posture of healthy skepticism. This is not just an Indian phenomenon.9 Many of the India summit pledges were actually reiterations of previous announcements. Microsoft's $17.5 billion was first announced in early 2025; Google's $15 billion is cumulative across prior commitments. How much of the $200 billion is genuinely new versus repackaged capex already in existing plans? How much is contingent on government subsidies, tax holidays, and land allocations that may or may not come through? How much will actually be deployed on the timelines announced? India desperately needs this infrastructure, that much is clear. The question is whether we'll get it, and the track record suggests caution.
Then there's Sarvam. The demos were impressive, but impressive demos and commercial adoption are different things. The 105B model is a notable research artifact, but the 30B is likely to see more production use, and the real test is cost-per-token at inference, not headline parameter count. How many enterprises will actually adopt Sarvam when the frontier labs offer better performance on most benchmarks? Even in voice specifically, where Sarvam's pitch is strongest (India is undeniably a voice-first market), Sarvam is playing catch-up with ElevenLabs. ElevenLabs already does several multiples of Sarvam's revenue in India alone, and is doubling down on price competitiveness. Anecdotally, portfolio companies we work with speak very highly of ElevenLabs vis-à-vis Sarvam on voice quality.
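To make the cost-per-token point concrete: what an enterprise actually pays is a function of GPU rental cost and achievable serving throughput, not the parameter count on the launch slide. A rough sketch below; the GPU rate and throughput figures are purely illustrative assumptions, not actual economics for Sarvam or any other provider.

```python
# Hypothetical cost-per-token arithmetic. Every number here is an
# illustrative assumption chosen to show the shape of the calculation,
# not a real figure for any model or vendor.

def cost_per_million_tokens(gpu_hourly_usd: float, tokens_per_second: float) -> float:
    """Serving cost in USD per 1M output tokens on a single GPU."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd / tokens_per_hour * 1_000_000

# A dense model and a mixture-of-experts model of similar total size can
# have very different serving costs, because an MoE activates only a
# fraction of its parameters per token and so sustains higher throughput.
dense_cost = cost_per_million_tokens(gpu_hourly_usd=2.5, tokens_per_second=40)
moe_cost = cost_per_million_tokens(gpu_hourly_usd=2.5, tokens_per_second=120)

print(f"dense: ${dense_cost:.2f} per 1M tokens")
print(f"moe:   ${moe_cost:.2f} per 1M tokens")
```

This is why a 105B MoE can be cheaper to serve than its total parameter count suggests, and why the benchmark that matters commercially is the delivered price per token against competitors, not model size.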
Sarvam and others have leaned into the geopolitics: sovereign AI, the imperative of not being dependent on foreign models, the IndiaAI Mission funding they’ve received. I understand the strategic argument and take it seriously. But for most commercial organizations, the imperative is still to use the best model, not the most sovereign model. If Sarvam is to succeed, it has to compete at global benchmark levels, not just on a sovereignty pitch.
There was also a lot of talk during the summit about how frontier labs are “using Indian data” with the implication that this is somehow extractive. I think this framing deserves pushback. These labs aren’t accessing proprietary data; they’re training on what’s publicly available on the internet. Unless India plans to go down the Chinese route of a closed data ecosystem, the fact that we produce a lot of data doesn’t confer a particular competitive advantage. That data is accessible to everyone. Yes, high-quality Indic language data is genuinely scarce on the open web, which is exactly why Sarvam’s work has real value. But scarcity of training data is a moat only if the models trained on it are genuinely better for Indian use cases. That’s a commercial question.
The most viral moment of the summit, the Galgotias University robot dog, was funny, but it was also the single most revealing thing that happened all week. A professor of communications told state broadcaster DD News that a robotic dog named “Orion” had been “developed by the Centre of Excellence at Galgotias University.” Social media identified it within hours as a Unitree Go2, a commercially available Chinese product retailing for about $1,600. The university was ordered to vacate its stall.
It would be easy to dismiss this as one embarrassing incident. I think it’s symptomatic of a deep problem in India’s innovation ecosystem: the chasm between our stated ambitions and our actual research capacity. India’s universities — notable exceptions aside — are not producing cutting-edge R&D in AI or robotics. The private university system, which has expanded enormously, too often prioritizes enrollment over research output. We have a culture that rewards the appearance of innovation more than the painstaking, unglamorous work of actually building things from scratch. If India’s sovereign AI ambitions are to be real, they need institutions that can produce genuine research, not institutions that rebrand commercially available Chinese hardware as indigenous innovation.
Which brings me to the quality of the Diwan-i-Aam itself, because the main stage was a missed opportunity to do something about exactly this problem. The closed-door sessions I attended were solid; you need some privacy for that kind of honesty. But the main stage at Bharat Mandapam had largely become a gabfest. Too many panels, too many speakers, too many sessions that were variations on the same theme: governance frameworks, responsible AI, consultant-speak (I say this as a consultant who ran a panel myself!). A punishing signal-to-noise ratio. The summit website offered no filtering by theme, no way to distinguish technical sessions from panel discussions, and coupled with side events across the city and the traffic, it was nearly impossible to navigate.
Given that the Mandapam was opened up to hundreds of thousands of people, including many trying to understand what AI actually is, I’d have loved to see an agenda built around that ambition. How does AI work? How does voice AI work? What is a large language model? Technical primers that demystify the technology for a curious public, alongside substantive debates about where the frontier is moving and what that means for India. Instead, we got an agenda that felt more designed for LinkedIn posts than for learning.
There were bright spots. Sara Hooker's keynote, covering topics like the death of scaling, data space optimization, and model steering, was excellent. Yann LeCun lambasting LLMs was a reminder that the AI we work with today is just one manifestation of the technology, and possibly not the most important one. Some of the research frontier sessions were terrific. And these are precisely the kinds of sessions that should have been front and center. When someone like Hooker walks an audience through the problems frontier AI labs are grappling with, it does something no governance panel can: it opens up the imagination. For the school and university students who were at the Mandapam in huge numbers, those are the sessions that plant the seed, the ones that help a twenty-year-old in Lucknow or Coimbatore see not just what AI is, but where the white spaces are, what's worth building, where India might genuinely leapfrog rather than imitate.
III. The Execution Test
All of the gaps above point to the same underlying problem. And that problem was, ironically, most fully on display in the most mundane aspect of the summit: getting around the Mandapam and Delhi.
The initial chaos on Day 1 was acknowledged by Minister Ashwini Vaishnaw himself, and to be fair, entry improved significantly on Days 2 and 3. But that is the bare minimum of what should be expected. Basic signage to exits was absent or contradictory. One evening I left through a gate and walked to Supreme Court metro station; the next day, that gate was closed with no notice. Volunteers couldn't tell you which exits were operational.
And then there was Delhi itself. I have never experienced traffic chaos at this level, and this is a capital city that has hosted heads of state and major international conferences before, including the recent G20. The irony was not lost on anyone that the city was hosting an AI summit while failing to use even basic technology for traffic management. We talk endlessly about AI improving governance and solving age-old problems. Dynamic traffic routing, real-time information dissemination, VIP movement optimization that doesn't paralyze a city of 20 million: none of this is technologically difficult. It is a lack of state capacity, a governance failure, and a disregard for the citizenry, all factors that have repeatedly thwarted our ambition.
There is one more execution risk worth flagging, because it is classically Indian. There is a strand of commentary gaining traction in our public discourse that imports environmental concerns about AI wholesale from Western debates (data centers guzzling power, the carbon cost of training models) and applies them to India without any adjustment for context. This fits a pattern that Shruti Rajagopalan has written about compellingly: "premature imitation," the tendency of Indian elites to import policy positions from developed countries before those positions are appropriate to India's stage of development.10 The environmental arguments about AI's resource consumption have real weight in a country debating whether to build its 500th data center in Virginia. They have no business being used as an argument against building foundational digital infrastructure in a country where 1.4 billion people are served by 3% of global data center capacity. India needs more clean energy and more data centers, simultaneously.11 As Devesh Kapur has argued, we have a long history of being precocious in our political battles,12 and the AI-environment discourse risks becoming another instance of this, one that could, if it gains policy traction, actively impede the infrastructure buildout India desperately needs.
Closing the Distance
The India AI Impact Summit was, in the end, a remarkably faithful microcosm of India itself. Extraordinary energy and genuine ambition, coexisting with deep execution gaps, both on full display at the same venue in the same week. A young, curious population eager to engage with the future, navigating an institutional apparatus that too often lets them down. Huge investment commitments serving a genuine need. A geopolitical positioning that the world actually needs. All shadowed by a track record that counsels patience.
Whether the ambition that filled Bharat Mandapam can be matched by the institutional capacity, the research depth, the governance quality, and the honest self-assessment needed to turn that ambition into something durable is the question India faces.
Pew Research Center, “How People Around the World View AI” (Oct 2025) found roughly half of Americans more concerned than excited about AI in daily life, up from 37% in 2021. The US, Italy, Australia, and Greece led global concern rankings. A YouGov survey (June 2025) found Americans expecting negative societal impact from AI rose from 34% to 47% in six months. Pew survey; YouGov survey.
Sarvam launched two models: Vikram 30B (32K context window) and a 105B mixture-of-experts model (128K context). Trained from scratch on 16 trillion tokens across Indic languages, funded by the IndiaAI Mission with Yotta infrastructure and Nvidia support. The company also announced Sarvam Kaze AI glasses, targeted for May 2026 launch. TechCrunch; Bloomberg.
Reliance committed roughly $110 billion over seven years to expand AI infrastructure. Adani pledged $100 billion for AI data centers powered by renewable energy by 2035. Google announced $15 billion including a full-stack AI hub in Visakhapatnam. Microsoft reiterated $17.5 billion. Tata Group announced a partnership with OpenAI to build 100 MW of AI compute infrastructure, with plans to scale to a gigawatt. Yotta unveiled a $2 billion deployment of over 20,000 Nvidia Blackwell Ultra GPUs. Anthropic partnered with Infosys and opened a Bangalore office. The government announced the addition of 20,000 GPUs to the IndiaAI Mission’s existing 38,000, and earmarked $1.1 billion for a new AI venture capital fund.
Jeetu Patel, Cisco President, at the India AI Impact Summit, Feb 20, 2026: “The first is infrastructure. There’s just not enough power, compute, and network bandwidth in the world. Infrastructure is oxygen for AI.” Tribune India.
Pax Silica was launched by the US Department of State in December 2025 as a coalition to secure semiconductor, critical mineral, and AI supply chains. Members include Australia, Greece, Israel, Japan, Qatar, South Korea, Singapore, UAE, UK, and now India. PIB press release; Business Today.
Sriram Krishnan, Senior White House Policy Advisor on AI, at a side session: “Indian companies will need to bring in local language support... But at the end of the day, we want the American AI stack to be the bedrock that everyone builds on.” Business Standard; NBC News.
Note the Stargate delays: Oracle pushed several data center deliveries from 2027 to 2028 due to labor and material shortages. SoftBank paused a $50B acquisition of Switch in January 2026. Internal disagreements between SoftBank and OpenAI over site scale and energy supply caused delays in mid-2025. Tom’s Hardware on Oracle delays; Bloomberg on Stargate friction.
Shruti Rajagopalan’s work on “premature imitation” — the adoption of regulatory and policy frameworks from developed countries before they are appropriate to India’s institutional capacity and stage of development. See also Rajagopalan’s broader work on Indian state capacity constraints.