AI · COMPUTE · POLICY

IndiaAI Compute: Who Gets the Next 3.8k GPUs? Onboarding, Quotas & Hurdles

Onboarding flow for startups & academia, pricing/subsidy rules, and the practical hurdles (power, cooling, datasets) — with an “apply” checklist.
By bataSutra Editorial · September 7, 2025
In this piece:
  • The short: what’s new in the ~3.8k GPU tranche
  • Onboarding flow & thresholds (who gets auto-OK vs PMEC review)
  • Pricing, subsidy & the new voucher model
  • Practical hurdles: power, cooling, network, datasets
  • “Apply” checklist you can copy-paste

The short

  • Fresh capacity: IndiaAI’s third tender adds ~3,850 GPUs (first tranche to include Google Trillium TPUs) on top of a pool that crossed 34k GPUs by end-May.
  • Pay-as-you-go: Access via the Compute Portal from ~₹67/GPU-hour with up to 40% subsidy for eligible users.
  • What’s next: Government has indicated a coupon/voucher-style mechanism alongside a larger secured pool reported at ~38k units.

What’s new in this tranche

Round 3 adds ~3,850 GPUs, with the first inclusion of ~1,050 Trillium TPUs, alongside earlier H100/H200/L-class additions. Treat the mix as evolving until commercial close and delivery schedules firm up.

Signal → What it means

  • Trillium TPUs join pool → Training/inference options beyond Nvidia/AMD; check framework support & quotas per project (see the sanity-check sketch below).
  • Pool scale (public) → ~34k+ GPUs by late May; more recent reporting points to ~38k secured capacity.
  • Portal pricing → Baseline ~₹67/GPU-hr; subsidy up to 40% for priority users.
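
If the TPU option is on your shortlist, it helps to confirm your framework actually sees the silicon once a slot is allocated. A minimal sanity-check sketch, assuming PyTorch and/or JAX may be installed on the allocated machine; this is illustrative and not portal tooling:

```python
# Sanity check: which accelerators does this environment expose?
# Assumes PyTorch and/or JAX may be installed; both imports are optional.

def report_accelerators() -> None:
    try:
        import torch
        if torch.cuda.is_available():
            # Covers NVIDIA CUDA and (via the ROCm build) AMD GPUs.
            print("PyTorch sees", torch.cuda.device_count(), "GPU(s)")
        else:
            print("PyTorch installed, no GPU visible")
    except ImportError:
        print("PyTorch not installed")

    try:
        import jax
        # On a Trillium (TPU) host this lists TPU devices; on GPU hosts, CUDA devices.
        print("JAX devices:", jax.devices())
    except ImportError:
        print("JAX not installed")

if __name__ == "__main__":
    report_accelerators()
```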

Onboarding flow (startups, academia, gov) — how approvals work

  1. Register on the IndiaAI Compute Portal with org/institute email + mobile → OTP verification.
  2. Submit IDs (auto-verification where possible):
    • Students: APAAR ID
    • Startups: DPIIT recognition; MSMEs: Udyam ID
    • Researchers/Faculty: Google Scholar / ORCID / Scopus author IDs
  3. Project proposal (problem, novelty, national impact, team track record). Institute verifiers (for students/researchers) and MeitY Startup Hub CoEs (for startups) review proposals.
  4. Monthly window: Requests are collated from the 1st to the 25th; the PMEC meets at the turn of the month; results are published by the 10th.
  5. Thresholds that trigger PMEC review: requests > 5,000 GPU-hours or > 50 GPUs; below those thresholds, category verifiers can approve directly (see the sketch below).
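
A minimal sketch of that routing logic, assuming a request is described only by its estimated GPU-hours and GPU count; the function and constant names are illustrative, and only the 5,000 GPU-hour / 50 GPU thresholds come from the published process:

```python
# Illustrative routing check based on the thresholds described above.
# Names and structure are assumptions; only the 5,000 GPU-hour and
# 50-GPU thresholds come from the published process.

PMEC_GPU_HOURS_THRESHOLD = 5_000
PMEC_GPU_COUNT_THRESHOLD = 50

def approval_path(gpu_hours: float, gpu_count: int) -> str:
    """Return which approval route a compute request would take."""
    if gpu_hours > PMEC_GPU_HOURS_THRESHOLD or gpu_count > PMEC_GPU_COUNT_THRESHOLD:
        return "PMEC review (monthly window)"
    return "Category verifier approval"

# Example: a 40-GPU fine-tuning run estimated at 6,200 GPU-hours still
# goes to the PMEC because it crosses the GPU-hour threshold.
print(approval_path(gpu_hours=6_200, gpu_count=40))  # PMEC review (monthly window)
print(approval_path(gpu_hours=3_000, gpu_count=16))  # Category verifier approval
```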

Auto-OK paths: Researchers meeting simple h-index/citation bars, and IndiaAI Fellowship awardees, typically see faster allocation post-verification.

Pricing, subsidy & vouchers

  • Rack rate: ~₹67/GPU-hour via the portal, roughly one-third of typical global rates (a worked cost sketch follows this list).
  • Subsidy: The PMEC may award up to a 40% subsidy on approved project BOMs; the subsidy is paid to providers quarterly.
  • Voucher/coupon model: Announced to simplify access and cap user costs; integration underway.
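
To make the arithmetic concrete, a rough budgeting sketch at the published rack rate and subsidy ceiling; the example project numbers are invented, and the actual subsidy percentage is decided by the PMEC:

```python
# Rough cost estimate at the published rack rate, before and after subsidy.
# RATE_INR_PER_GPU_HOUR and MAX_SUBSIDY reflect the figures above; the
# hours and GPU count below are a made-up example project.

RATE_INR_PER_GPU_HOUR = 67      # portal baseline, approximate
MAX_SUBSIDY = 0.40              # PMEC may award up to 40%

def estimate_cost(gpu_hours: float, subsidy: float = 0.0) -> float:
    """Return estimated spend in INR for a given number of GPU-hours."""
    if not 0.0 <= subsidy <= MAX_SUBSIDY:
        raise ValueError("subsidy must be between 0 and 40%")
    return gpu_hours * RATE_INR_PER_GPU_HOUR * (1.0 - subsidy)

# Example: 8 GPUs for 30 days (~5,760 GPU-hours) -- note this also crosses
# the 5,000 GPU-hour PMEC threshold from the previous section.
hours = 8 * 24 * 30
print(f"Rack rate:        ₹{estimate_cost(hours):,.0f}")        # ₹385,920
print(f"With 40% subsidy: ₹{estimate_cost(hours, 0.40):,.0f}")  # ₹231,552
```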

Practical hurdles you’ll still hit

Infra frictions

  • Power/cooling: DC slots depend on local MW & thermal capacity; expect staged delivery.
  • Network: Egress can dwarf compute costs; budget for object-store and inter-DC traffic (a rough estimator follows this list).
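
To see why egress matters, a back-of-the-envelope comparison; the per-GB rate below is a placeholder assumption (provider terms vary), and only the ~₹67/GPU-hour figure comes from the pricing section:

```python
# Back-of-the-envelope: data-transfer spend vs compute spend for one job.
# ASSUMED_EGRESS_INR_PER_GB is a placeholder -- substitute your provider's rate.

RATE_INR_PER_GPU_HOUR = 67        # portal baseline, from the pricing section
ASSUMED_EGRESS_INR_PER_GB = 8.0   # hypothetical; check the commercial terms

def job_cost(gpu_hours: float, data_moved_gb: float) -> dict:
    """Split an estimated job budget into compute and egress components."""
    compute = gpu_hours * RATE_INR_PER_GPU_HOUR
    egress = data_moved_gb * ASSUMED_EGRESS_INR_PER_GB
    return {"compute_inr": compute, "egress_inr": egress,
            "egress_share": egress / (compute + egress)}

# Example: a 500 GPU-hour job that ships a 10 TB checkpoint/dataset out of the DC.
print(job_cost(gpu_hours=500, data_moved_gb=10_000))
# compute ≈ ₹33,500; egress ≈ ₹80,000 at the assumed rate -- egress dominates.
```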

Data & platform

  • Datasets: AIKosha (IndiaAI datasets) is live; check licensing & privacy flags.
  • Heterogeneous silicon: Trillium/Nvidia/AMD/Gaudi mixes → validate framework support early.

Apply checklist (copy–paste)

  • Account: Org/institute email + mobile ready.
  • ID docs: APAAR (students), DPIIT (startups), Udyam (MSMEs), Scholar/ORCID/Scopus (researchers).
  • One-pager: Problem, novelty, target users, milestones, expected GPU-hours, models/tooling stack.
  • Team bios: brief CVs + links (code/pubs/patents).
  • Data plan: sources, licensing, storage/egress budget; any AIKosha sets.
  • Budget: rack rate at ~₹67/GPU-hr; request subsidy with justification; note if you exceed 5,000 GPU-hours / 50 GPUs (triggers PMEC review).