C2i Peak XV: Innovations in AI Data Center Power Limits

C2i aims to reduce data center energy waste

  • Peak XV Partners led a $15 million Series A in India’s C2i Semiconductors, bringing total funding to $19 million.
  • The bottleneck for AI data centers is increasingly power, not compute—especially the inefficiency of converting electricity inside facilities.
  • C2i says it can cut end-to-end power losses by about 10% of input power, roughly 100 kW saved per 1 MW consumed.
  • External forecasts point to steep growth in data-center electricity demand through 2030–2035, raising the stakes for efficiency gains.

Reducing Power Losses for AI
Power is becoming the “hard ceiling” for AI expansion: the grid connection and the facility’s power-delivery chain often limit how many GPUs you can run, even when you can buy the servers.
C2i’s pitch is to improve the grid-to-GPU path where losses accumulate. If its ~10% end-to-end loss reduction holds in operator validation, the benefit is not just a lower electricity bill—it can also ease thermal constraints (less heat to remove) and effectively free up capacity inside a fixed power envelope.

Investment in C2i Semiconductors by Peak XV Partners

Peak XV Partners is betting that the next constraint on AI infrastructure won’t be the availability of GPUs, but the ability to feed them power efficiently and economically. The venture firm led a $15 million Series A round in C2i Semiconductors, an Indian startup building what it describes as plug-and-play, system-level power solutions for AI data centers. Yali Deeptech and TDK Ventures also participated.

The premise is straightforward: once a data center has made the upfront capital investment in servers and facilities, electricity becomes the dominant ongoing expense. Peak XV managing director Rajan Anandan framed the opportunity in terms of operating leverage—small percentage improvements in energy efficiency can translate into outsized savings at hyperscale.

“If you can reduce energy costs by, call it, 10 to 30%, that’s like a huge number. You’re talking about tens of billions of dollars.”
Rajan Anandan, managing director, Peak XV Partners

Hyperscale Energy Savings Impact
What Peak XV is underwriting here is the idea that percentage-level efficiency gains can be economically massive at hyperscale (a back-of-envelope sketch follows the list):
– Investor logic (in Anandan’s words): after servers/facilities are bought, energy becomes the dominant ongoing cost.
– Claimed magnitude: “10 to 30%” energy-cost reduction.
– Implied outcome: at fleet scale, that translates into “tens of billions of dollars,” which is why power-delivery innovation can justify deep-tech timelines.
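
To make that operating-leverage logic concrete, here is a back-of-envelope sketch in Python. Every input (fleet size, electricity price, savings fraction) is a hypothetical assumption for illustration, not a figure reported by Peak XV or C2i.

```python
# Back-of-envelope: annual energy-cost savings at fleet scale.
# All inputs below are illustrative assumptions, not reported figures.

HOURS_PER_YEAR = 8760

def annual_savings_usd(fleet_mw: float, usd_per_kwh: float,
                       savings_fraction: float) -> float:
    """Yearly savings for a fleet drawing fleet_mw continuously."""
    annual_kwh = fleet_mw * 1_000 * HOURS_PER_YEAR  # MW -> kW, then kWh/year
    return annual_kwh * usd_per_kwh * savings_fraction

# Hypothetical 10 GW fleet at $0.08/kWh, across Anandan's 10-30% range:
for frac in (0.10, 0.30):
    savings = annual_savings_usd(fleet_mw=10_000, usd_per_kwh=0.08,
                                 savings_fraction=frac)
    print(f"{frac:.0%} reduction -> ${savings / 1e9:.1f}B per year")
```

On those assumptions, a single 10 GW fleet saves roughly $0.7B to $2.1B per year; summed across the industry and over multi-year horizons, that is how a "10 to 30%" reduction compounds toward the tens of billions Anandan describes.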

That focus reflects a broader shift in how the industry talks about scaling AI. For years, the narrative centered on compute: faster accelerators, denser racks, and more capacity. Now, power delivery—how electricity is converted and distributed from the grid all the way to the processor—has become a strategic choke point. Peak XV’s investment is a wager that a startup can meaningfully improve that layer despite long qualification cycles and entrenched incumbents.

C2i’s near-term timeline also matters to investors. The company expects its first two silicon designs to return from fabrication between April and June, with validation planned alongside data-center operators and hyperscalers. Anandan suggested the feedback loop will be relatively short for a semiconductor bet: “We’ll know in the next six months,” he said, pointing to silicon results and early customer validation.

C2i’s Total Funding and Financial Growth

With the Series A, C2i’s total funding stands at $19 million, a notable sum for a two-year-old semiconductor startup attempting an end-to-end redesign of power delivery. The round structure also signals the kind of capital profile required for this category: system-level power solutions demand not only chip design, but packaging and architecture work that must align with how data centers are built and operated.

Funding and Next Milestones

– Series A: $15M, led by Peak XV Partners with participation from Yali Deeptech and TDK Ventures (as reported). Brings total funding to $19M and funds first silicon returning from fabrication (April–June) plus early validation with operators and hyperscalers.
– Total funding (all rounds): $19M. Supports team build-out (~65 engineers) and customer-facing operations in the U.S. and Taiwan.

The company’s growth is not described in revenue terms, but in execution milestones and organizational build-out—often the more relevant indicators at this stage for deep-tech hardware. C2i is based in Bengaluru and has assembled a team of about 65 engineers. It is also setting up customer-facing operations in the U.S. and Taiwan, positioning itself closer to the center of data-center procurement and to key parts of the semiconductor supply chain.

The funding arrives at a moment when the market’s urgency is rising. Data-center operators are under pressure to expand AI capacity, yet face constraints that are as much electrical and thermal as they are computational. In that environment, a company that can credibly reduce conversion losses can argue for a direct line to total cost of ownership: less wasted power can mean less heat to remove, lower cooling demand, and potentially better utilization of expensive compute.

Still, the financial story is inseparable from the adoption challenge. Power delivery is among the most entrenched parts of the data-center stack, dominated by large incumbents and long qualification cycles. C2i’s approach—coordinating silicon, packaging, and system architecture—can be capital-intensive and may take years to prove in production environments. The company’s funding to date suggests it has enough runway to reach first silicon and early validation, but the next phases—qualification, deployment, and scaling—are where hardware startups typically face their steepest costs and longest timelines.

Projected Electricity Consumption in Data Centers

Forecasts cited around C2i’s raise underscore why power has become the headline constraint.

The projections referenced here come from BloombergNEF and Goldman Sachs Research, as cited in reporting on C2i’s funding and positioning. In a December 2025 report, BloombergNEF projected that electricity consumption from data centers will nearly triple by 2035. That kind of growth would force difficult choices across the ecosystem: where to build, how to secure power, and how to keep operating costs from overwhelming the economics of AI services.

Operational Impact of Load Growth
A practical way to read “nearly triple by 2035” (BloombergNEF, Dec 2025) is to separate the number from what it changes operationally:
1) Source + horizon: BloombergNEF projection → 2035.
2) What scales: more total facility load (MW) and higher rack densities.
3) What becomes scarce first: grid interconnect capacity, on-site distribution capacity, and cooling headroom.
4) What efficiency buys you: more usable compute within the same power envelope (or the same compute with less power), which can delay expensive electrical upgrades. A short sketch of this point follows the list.
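
As a minimal sketch of point 4, assume a fixed grid envelope and treat the whole grid-to-GPU chain as a single delivery-efficiency figure. Both numbers below are illustrative assumptions, not operator data.

```python
# Minimal sketch: usable compute power inside a fixed grid envelope.
# The envelope size and efficiency figures are illustrative assumptions.

def usable_it_power_mw(envelope_mw: float, delivery_efficiency: float) -> float:
    """Power that actually reaches compute after grid-to-GPU conversion losses."""
    return envelope_mw * delivery_efficiency

ENVELOPE_MW = 100.0                                 # fixed grid interconnect
baseline = usable_it_power_mw(ENVELOPE_MW, 0.825)   # ~17.5% lost in conversion
improved = usable_it_power_mw(ENVELOPE_MW, 0.925)   # ~10 points reclaimed
print(f"baseline: {baseline:.1f} MW, improved: {improved:.1f} MW, "
      f"freed: {improved - baseline:.1f} MW")
```

Ten reclaimed megawatts inside a hypothetical 100 MW envelope is compute that can be added without renegotiating the grid connection, which is the sense in which efficiency can delay electrical upgrades.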

The implication is not merely that data centers will consume more electricity, but that the marginal cost of adding AI capacity will increasingly be shaped by energy availability and efficiency. When demand rises that quickly, the industry’s traditional playbook—add more servers, upgrade networking, expand floor space—runs into a physical limit: the ability to deliver and convert power safely and efficiently at higher voltages and higher densities.

This is where the “grid-to-GPU” framing becomes important. AI workloads concentrate power draw in the accelerator layer, and the path from incoming electricity to usable low-voltage power at the GPU involves multiple conversion steps. Each step introduces losses, and at scale those losses become material. As data-center consumption grows, the value of reducing waste compounds: saving a fraction of a megawatt in one facility becomes saving many megawatts across a fleet.

The projections also help explain investor interest in infrastructure-adjacent semiconductor startups. If data-center electricity use is on a trajectory to nearly triple by 2035, then efficiency improvements are not a niche optimization—they become a capacity enabler. In that context, the question is less whether the industry wants efficiency, and more whether new entrants can deliver it in a form that operators can adopt without redesigning entire facilities.

Goldman Sachs Research on Data-Center Power Demand

Goldman Sachs Research has put a sharper, nearer-term number on the surge: data-center power demand could rise 175% by 2030 from 2023 levels. The firm characterized that increase as equivalent to adding another top-10 power-consuming country—an analogy that captures both the scale and the geopolitical reality of energy competition.

Data Center Power Surge by 2030
Goldman Sachs Research estimate (as cited in reporting on C2i):
– Claim: data-center power demand could rise 175% by 2030 vs. 2023.
– Plain-language translation: that’s roughly 2.75× the 2023 level (the arithmetic is sketched after this list).
– Why the “top-10 country” analogy matters: it signals the increase is big enough to show up at national-grid scale, not just inside individual facilities.

For AI infrastructure, the significance of the 2030 horizon is that it aligns with current build cycles. Data centers planned today will still be operating then, and many will be expanded or retrofitted. If demand rises as steeply as Goldman’s estimate suggests, operators will be forced to treat power as a first-order design constraint, not an afterthought.

That shift changes what “performance” means. It’s no longer only about how many tokens per second a cluster can generate or how quickly a model can be trained. It’s also about how efficiently electricity can be converted and delivered to the compute layer, and how much of the incoming power ends up as usable work rather than heat.

The Goldman framing also helps explain why investors like Peak XV are drawn to efficiency claims that might sound incremental in isolation. In a world where power demand nearly triples in under a decade, a 10% improvement in end-to-end efficiency is not marginal—it is a way to stretch constrained capacity further. It can also influence siting decisions, because the ability to do more with the same power envelope can reduce the pressure to secure additional grid connections.

At the same time, the research highlights the risk: if power demand is rising that quickly, the industry will not wait indefinitely for new solutions to qualify. Hardware that touches the power path must prove reliability and compatibility, and it must do so on timelines that match hyperscalers’ expansion plans. That is why C2i’s upcoming silicon and validation window is being watched closely.

C2i’s Energy Loss Reduction Goals

C2i’s central claim is that it can reduce end-to-end power losses by around 10% by treating power conversion, control, and packaging as an integrated platform rather than a collection of discrete components. The company describes its approach as a single, plug-and-play “grid-to-GPU” system spanning the data-center bus to the processor itself.

The problem it targets is rooted in the physics and architecture of modern data centers. High-voltage power enters a facility and must be stepped down repeatedly before it reaches GPUs. According to C2i co-founder and CTO Preetam Tadeparthy, that conversion chain currently wastes about 15% to 20% of energy. In other words, a meaningful portion of purchased electricity never reaches the compute as usable power.

C2i’s estimate—about 100 kilowatts saved for every megawatt consumed—translates the percentage into an operator-friendly metric. At scale, that kind of reduction can have secondary effects beyond the electricity bill. Less wasted power means less heat generated, which can lower cooling requirements. It can also affect GPU utilization and overall data-center economics, because thermal and power constraints often determine how hard hardware can be driven in practice.
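
The metric itself is simple arithmetic, and attaching an assumed cooling coefficient of performance (COP) shows the second-order effect. The COP value below is an illustrative assumption, not a C2i figure.

```python
# The headline metric: reclaiming ~10% of input power means 100 kW per MW
# reaches compute instead of becoming heat. The cooling COP is an assumption.

def saved_kw(input_mw: float, loss_reduction: float = 0.10) -> float:
    """Power reclaimed from conversion losses, in kW."""
    return input_mw * 1_000 * loss_reduction            # MW -> kW

def cooling_kw_avoided(heat_kw: float, cop: float = 4.0) -> float:
    """Cooling power no longer needed to reject that heat (COP assumed)."""
    return heat_kw / cop

reclaimed = saved_kw(1.0)                               # 100 kW per MW consumed
print(reclaimed, cooling_kw_avoided(reclaimed))         # 100.0 25.0
```

Under that assumed COP, each megawatt of load would also shed roughly 25 kW of cooling demand, which is why the savings can compound beyond the raw conversion figure.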

“All that translates directly to total cost of ownership, revenue, and profitability.”
Preetam Tadeparthy, co-founder and CTO, C2i Semiconductors

Proving End-to-End Loss Reduction
How “end-to-end loss reduction” typically gets proven (and where it can fail); a measurement sketch follows the list:
1) Define boundaries: specify the measurement path (facility bus → intermediate conversion stages → point-of-load near GPU).
2) Establish baselines: measure efficiency/thermals of the incumbent setup under comparable load profiles.
3) Lab characterization: validate silicon efficiency curves across load, temperature, and transient conditions.
4) System integration check: confirm packaging, controls, and protection behavior don’t introduce new losses or instability.
5) Operator pilot: run in a real rack/pod with production-like workloads; track efficiency, temperatures, and fault behavior.
6) Reliability gates: demonstrate stable operation across power events and long-duration runs—often the step that extends timelines.
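
A minimal sketch of how steps 1 through 3 might be instrumented, assuming per-stage input/output power readings along the agreed measurement path. Stage names and readings are hypothetical.

```python
# Minimal sketch: end-to-end efficiency from per-stage power readings.
# Stage names and values are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class StageReading:
    name: str
    power_in_kw: float
    power_out_kw: float

    @property
    def efficiency(self) -> float:
        return self.power_out_kw / self.power_in_kw

def end_to_end_efficiency(stages: list[StageReading]) -> float:
    """Chained efficiency across the measured grid-to-GPU path."""
    eff = 1.0
    for stage in stages:
        eff *= stage.efficiency
    return eff

readings = [
    StageReading("ups_conditioning", 1000.0, 970.0),
    StageReading("rack_conversion", 970.0, 930.0),
    StageReading("board_regulators", 930.0, 860.0),
]
print(f"end-to-end: {end_to_end_efficiency(readings):.1%}")  # ~86.0%
```

Because the end-to-end number is just the chained product of stage efficiencies, step 1 (boundary definition) dominates the outcome: moving the measurement boundary changes the headline figure.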

C2i is also designing for a world of rising voltages. Tadeparthy noted that what used to be 400 volts has already moved to 800 volts and will likely go higher. Higher distribution voltages can reduce losses in some parts of the system, but they also raise the complexity of conversion and control closer to the load. That is part of why C2i is emphasizing system-level design rather than isolated component improvements.

The company’s goals will be tested quickly. It plans to validate performance with data-center operators and hyperscalers. In power delivery, credibility is earned in measured efficiency, thermals, and reliability under real workloads—not in lab demos alone—so those early results will shape whether the 10% loss-reduction claim holds up in the environments that matter.

Founding Background of C2i Semiconductors

C2i Semiconductors was founded in 2024 by a group with deep roots in power electronics: former Texas Instruments power executives Ram Anant, Vikram Gakhar, Preetam Tadeparthy, and Dattatreya Suryanarayana, along with Harsha S. B and Muthusubramanian N. V. The company name—C2i—stands for control, conversion, and intelligence, a shorthand for the layers it is trying to unify.

Proven Data-Center Power Expertise
Why the “former Texas Instruments power executives” detail matters in this niche:
Data-center power is a conservative, reliability-first domain. Teams that have shipped power silicon at scale tend to be fluent in the constraints operators care about—efficiency across real load ranges, protection behavior, thermal margins, manufacturability, and the long qualification cycles that can make or break adoption.

That founding mix matters because power delivery is not a greenfield domain. It is a mature, conservative part of the data-center stack where reliability expectations are unforgiving and where incumbents have decades of field experience. A team that has lived inside power semiconductor roadmaps is better positioned to navigate the tradeoffs between efficiency, cost, manufacturability, and qualification.

C2i’s strategy is also shaped by where it is being built. Peak XV’s Anandan argued that India’s semiconductor design ecosystem has matured, with a growing share of global chip designers based in the country. He also pointed to government-backed design-linked incentives that reduce the cost and risk of tape-outs, making it more viable for startups to build globally competitive semiconductor products from India rather than operate only as captive design centers.

“The way you should look at semiconductors in India is, this is like 2008 e-commerce. It’s just getting started.”
Rajan Anandan, managing director, Peak XV Partners

C2i’s operational footprint reflects an ambition to be global from the outset. While engineering is centered in Bengaluru, the company is setting up customer-facing operations in the U.S. and Taiwan—two geographies that matter for hyperscaler relationships, supply chain coordination, and the practical work of getting a new power architecture evaluated.

The near-term milestone is tangible: first silicon returning from fabrication, followed by customer validation. In semiconductors, that step is where a founding story becomes an execution story—where design intent meets manufacturing realities and where early adopters decide whether a new approach can fit into their qualification pipelines.

Current Energy Waste in Data Centers

The energy waste C2i is targeting is not primarily about generating electricity; it is about what happens after electricity arrives at the data center. Inside, high-voltage power is stepped down through multiple conversion stages, replicated across thousands of conversion points facility-wide, before it reaches GPUs. Each conversion stage introduces inefficiency, and across a facility those losses add up.

Conversion Losses Across Power Path
Where conversion-related “15%–20% waste” can show up in a grid-to-GPU path:
– Facility intake → UPS / power conditioning: conversion losses + heat.
– Distribution (higher-voltage bus) → rack-level conversion: stepping down for racks/blades.
– Rack → board-level regulators: additional conversion close to accelerators.
– Point-of-load near GPU: final regulation for low-voltage, high-current rails.
Each stage sheds some energy as heat; that heat then increases cooling load, which can further tighten the effective power budget. The compounding arithmetic is sketched below.
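
A small sketch of how individually respectable stages compound into the 15% to 20% figure. The per-stage efficiencies below are illustrative assumptions, not measurements from any vendor.

```python
# Cascading illustrative per-stage efficiencies along a grid-to-GPU path.
# Even stages in the 93-98% range compound into mid-teens total loss.

stage_efficiencies = {
    "ups_conditioning": 0.97,
    "bus_to_rack": 0.98,
    "rack_to_board": 0.96,
    "point_of_load": 0.93,
}

end_to_end = 1.0
for stage, efficiency in stage_efficiencies.items():
    end_to_end *= efficiency

print(f"end-to-end efficiency: {end_to_end:.1%}, "
      f"total loss: {1 - end_to_end:.1%}")   # ~84.9% efficient, ~15.1% lost
```

No single stage looks wasteful in isolation; the loss only becomes visible when the chain is measured end to end, which is part of C2i’s argument for treating the path as one system.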

Tadeparthy put today’s waste at roughly 15% to 20% in the conversion process. That figure is crucial because it reframes the scaling challenge. If a data center is already constrained by how much power it can draw, then wasting up to a fifth of that power in conversion is effectively leaving capacity on the table—capacity that could otherwise be used for compute.

The industry’s move to higher voltages illustrates both progress and pressure. Tadeparthy said distribution voltages have moved from 400 volts to 800 volts and will likely go higher. Higher voltages can help with distribution efficiency, but they also demand more sophisticated conversion closer to the load. As AI racks grow more power-dense, the conversion chain becomes a more prominent source of both losses and heat.
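
The distribution-efficiency side of that trade follows from basic circuit physics: for the same delivered power, doubling the voltage halves the current, and resistive losses scale with the square of the current. A simplified DC sketch, with an assumed, illustrative cable resistance:

```python
# Simplified DC view of I^2*R distribution losses at 400 V vs. 800 V for the
# same delivered power. The cable resistance is an illustrative assumption.

def distribution_loss_kw(power_kw: float, volts: float,
                         resistance_ohm: float) -> float:
    """Resistive loss in the distribution path for a given load."""
    current_a = (power_kw * 1_000) / volts            # P = V * I -> I = P / V
    return (current_a ** 2) * resistance_ohm / 1_000  # I^2 * R, back to kW

for volts in (400.0, 800.0):
    loss = distribution_loss_kw(power_kw=500.0, volts=volts,
                                resistance_ohm=0.01)
    print(f"{volts:.0f} V: {loss:.1f} kW lost")       # 15.6 kW vs. 3.9 kW
```

Quartering resistive loss by doubling the bus voltage is the upside; the cost is that the final conversion down to low-voltage, high-current GPU rails gets harder, which is the complexity the paragraph describes.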

Those losses have a compounding operational impact. Wasted electrical energy becomes heat, and heat must be removed. That means cooling systems work harder, consuming additional energy and potentially limiting how densely compute can be packed. In practice, power and cooling constraints can reduce GPU utilization—an expensive outcome when accelerators are among the most capital-intensive assets in the stack.

C2i’s pitch is that it can reduce losses and improve the economics of large-scale AI infrastructure. Whether that system-level approach can be adopted quickly is the open question, because power delivery is deeply embedded in data-center design and procurement. But the underlying waste is real, and as demand projections steepen, the incentive to reclaim even a portion of that lost energy grows stronger.

The Role of Innovation in Energy Efficiency

The numbers driving this conversation—nearly tripling data-center electricity consumption by 2035 (BloombergNEF) and a 175% surge in power demand by 2030 from 2023 levels (Goldman Sachs Research)—make clear that efficiency is no longer a “nice to have.” It is becoming a prerequisite for scaling.

C2i’s approach sits in a specific, often overlooked layer: conversion losses inside the facility. If 15% to 20% of energy is being lost before it reaches GPUs, then innovation in power electronics can function like a capacity expansion without building new generation. The industry’s shift from 400 volts to 800 volts—and likely beyond—also suggests that the power stack is already in flux, creating openings for new architectures if they can prove reliability and compatibility.

The hard part is not identifying the inefficiency; it is delivering a solution that data-center operators can qualify and deploy. Power delivery is conservative for good reason: failures are costly, and qualification cycles are long. That is why C2i’s near-term silicon return and validation plans matter as much as its efficiency claims.

Balancing Plug-and-Play Power Redesign
What has to balance out for a “plug-and-play” power redesign to win in real data centers:
– Efficiency upside vs. qualification time: better conversion is valuable, but operators may need months (or longer) of reliability evidence.
– Integration effort vs. adoption speed: system-level changes can touch mechanical, thermal, and controls interfaces—even when marketed as drop-in.
– Reliability risk vs. performance gains: power-path failures are high-impact; conservative derating can reduce the headline efficiency benefit.
– Supply-chain readiness vs. prototype success: first silicon working is necessary, but consistent manufacturing and packaging yields are what enable deployment.

C2i’s Impact on the AI Infrastructure Landscape

Peak XV’s investment frames C2i as a test case for a broader thesis: that India can produce globally competitive semiconductor startups targeting foundational infrastructure problems, not just application-layer software. The company’s Bengaluru engineering base, combined with customer-facing operations in the U.S. and Taiwan, reflects an attempt to bridge talent, market access, and supply chain realities.

If C2i can demonstrate its claimed ~10% end-to-end loss reduction—about 100 kW saved per 1 MW consumed—the impact would extend beyond the electricity bill. Lower losses can reduce cooling demand and improve the practical utilization of GPUs, tightening the link between energy efficiency and AI economics.

For now, the story remains one of execution under time pressure. Hyperscalers and operators are expanding quickly, and the industry’s power constraints are intensifying. C2i’s upcoming silicon results and early customer validation will determine whether its “grid-to-GPU” platform becomes a meaningful lever in the race to scale AI—or a reminder of how difficult it is to change the most entrenched layers of the data-center stack.

This analysis is written from the perspective of Martin Weidemann (weidemann.tech), a builder focused on the economics and execution realities of complex, regulated infrastructure—where small efficiency deltas can materially change operating constraints and total cost of ownership.

This piece reflects publicly available information and third-party projections available at the time of writing. Forecasts are inherently uncertain, and outcomes may differ based on deployment pace, grid constraints, and operator qualification timelines. Product performance expectations depend on measured results in real operator environments and may change as additional validation data becomes available.
