Rahul Dhar, President – Global Datacenter Operations & Enterprise Delivery, CtrlS, discusses adapting global SLAs to India’s climate and industrializing AI data centers
You spent a decade at Microsoft demanding flawless SLAs from operators. Now that you are on the other side of the table at CtrlS, what is the one hyperscale demand you used to make that you now realize is incredibly difficult to deliver in the Indian context? Is there a gap between what the Global CIO expects and what Indian physics/infrastructure can actually support at 45°C?
At Microsoft, SLAs were clearly defined, through close collaboration with operators, to ensure consistency, predictability, and minimal disruption. Transitioning to the operator side has deepened my awareness of how local conditions such as summer heat, grid fluctuations, and water limitations influence the engineering decisions needed to meet those SLAs reliably. The standards remain constant; the implementation must adapt to the environment in which you operate.

There is no disconnect between the goals of global CIOs and what can realistically be achieved in India. We agree with these expectations and consider them essential; the key is to develop the right strategy to achieve them. At CtrlS Datacenters, we have already integrated captive renewable energy into our portfolio and are continuously expanding it as part of our long-term energy strategy. We take a practical, phased approach: deploying high-efficiency air cooling where suitable, preparing high-density modules for liquid or immersion cooling, and prioritizing power and thermal resilience from the start. We design for high AI densities with power provisions supporting up to 100 kW per rack, with backend infrastructure that can scale further, which is why our cooling and water strategies are planned at the campus level. This approach ensures consistent SLA delivery while building in well-defined upgrade paths and sustainability considerations.
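To make the thermal side of that concrete, here is a minimal back-of-envelope sketch in Python. The rack count, loop delta-T, and hall size are illustrative assumptions, not CtrlS design figures; it simply applies Q = m·c_p·ΔT to show why 100 kW racks push water planning to the campus level.

```python
# Back-of-envelope chilled-water sizing for a high-density AI hall.
# Illustrative only: the rack count and delta-T below are assumptions,
# not CtrlS design figures.

WATER_CP_KJ_PER_KG_K = 4.186  # specific heat of water

def chilled_water_flow_lps(it_load_kw: float, delta_t_k: float = 10.0) -> float:
    """Litres/second of chilled water needed to absorb an IT heat load,
    from Q = m_dot * c_p * dT (1 kg of water is roughly 1 litre)."""
    return it_load_kw / (WATER_CP_KJ_PER_KG_K * delta_t_k)

racks, kw_per_rack = 200, 100        # hypothetical hall: 200 racks at 100 kW
it_load_kw = racks * kw_per_rack     # 20 MW of IT heat to reject
flow = chilled_water_flow_lps(it_load_kw)
print(f"IT load: {it_load_kw / 1000:.1f} MW -> ~{flow:.0f} L/s of loop flow")
# ~478 L/s for a single hall: flows at this scale are why water
# sourcing and recycling have to be planned per campus, not per building.
```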
CtrlS has built its brand on being ‘Rated-4’. But one can argue that fault tolerance is tested not when things are calm, but when the grid fails at 2 PM in May. With your new ‘Central Command Center’ initiative, are you seeing thermal events becoming the primary cause of incident tickets? Does the definition of ‘Rated-4’ need to evolve to include thermal resilience (essentially, can the building survive if the water supply for the chillers is cut for 4 hours?)
Rated-4 has always focused on proven performance under stress, not just the design intent. At CtrlS Datacenters, this involves phased redundancy across power, cooling, and connectivity, supported by systematic operational processes that keep mission-critical workloads running even during grid instability or peak summer conditions. Fault tolerance is tested in real-world scenarios, and our engineering approach accounts for these stress conditions from the start rather than treating them as rare cases.
The Central Command Center has improved our visibility and response speed. In India’s climate, water strategy is crucial for cooling resilience, which is why recycling and efficiency are integrated into availability planning. We are increasingly designing facilities with multiple cooling pathways to avoid reliance on a single thermal strategy as densities increase. Our advanced monitoring and smart cooling controls adapt to changing thermal loads, enabling early detection and a quicker operator response before deviations affect service. We also recognize that prolonged extreme heat or water shortages require thoughtful design safeguards and regular validation. Rated-4 capability must demonstrate resilience in both power and thermal performance as environmental conditions change.
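To illustrate what catching deviations before they affect service can look like, here is a minimal Python sketch of a rate-of-rise check on rack-inlet temperature. The thresholds, window size, and sampling period are assumptions for illustration, not the Central Command Center’s actual logic.

```python
# Illustrative early-deviation check on rack-inlet telemetry.
# Limits and window below are hypothetical, not the actual
# Central Command Center alerting rules.

from collections import deque

INLET_LIMIT_C = 27.0        # ASHRAE-style recommended inlet ceiling
RISE_LIMIT_C_PER_MIN = 0.5  # sustained rise worth an operator's attention

def check_inlet(samples: deque, period_s: float = 60.0) -> list[str]:
    """Flag a breach of the absolute limit, or a sustained rise that
    will breach it soon -- catching drift before it touches the SLA."""
    alerts = []
    latest = samples[-1]
    if latest >= INLET_LIMIT_C:
        alerts.append(f"inlet {latest:.1f} C at/over limit {INLET_LIMIT_C} C")
    if len(samples) >= 2:
        per_sample = (samples[-1] - samples[0]) / (len(samples) - 1)
        rate_per_min = per_sample * (60.0 / period_s)
        if rate_per_min >= RISE_LIMIT_C_PER_MIN:
            alerts.append(f"inlet rising at {rate_per_min:.2f} C/min")
    return alerts

window = deque([24.1, 24.6, 25.3, 25.9, 26.6], maxlen=5)  # one sample per minute
for alert in check_inlet(window):
    print("ALERT:", alert)  # fires on rate-of-rise before the hard limit is hit
```

The point of the second check is the operational one above: a ramp that is still inside the absolute limit is already actionable, because it buys the operator response time.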
You oversaw Microsoft’s massive cloud infrastructure in India. Now, you are leading CtrlS’ expansion into Tier-2 cities. Is it harder to build a massive hyperscale campus in Mumbai, or a small, reliable Edge facility in Lucknow? Do we underestimate the operational friction of running high-tech infrastructure in low-tech geographies?
Experience with hyperscale cloud infrastructure in India reveals a key insight: scale alone doesn’t determine complexity. Major metro campuses require extensive long-term planning, land aggregation, phased power solutions, fiber integration, water resource management, and regulatory coordination at scale. But in more emerging geographies, utility reliability itself becomes a design variable, often requiring power and infrastructure planning before construction begins. While engineering principles such as resilient design, predictable utilities, and systematic operations remain consistent, the challenges and variables differ.

Building a large hyperscale campus in Mumbai and a reliable edge facility in Lucknow each come with their own opportunities and challenges. Large campuses require careful planning for density and future growth, while smaller edge sites need stricter standardization because they lack the scale to buffer variability. As AI workloads move closer to end users, edge sites can no longer be viewed as mere extensions; they must be AI-ready from the outset. Operational consistency in Tier-2 markets is achieved through repeatability: standard design templates, disciplined delivery processes, and early utility alignment. Smaller does not mean simpler; it just shifts the complexity. With strategic planning for utilities, supply chains, and local workforce development, these sites can evolve into important hubs that complement hyperscale campuses and deliver improved, low-latency edge services.
In your previous life, standardization was perhaps key to speed. But AI clusters are custom beasts; some want liquid immersion, some want rear-door cooling. As President of Operations, how do you prevent your data centers from becoming bespoke engineering projects? How do you industrialize the AI Data Center so it doesn’t take 18 months to build?
AI workloads introduce variability across densities and cooling preferences, but not every deployment needs to become a custom engineering project. My approach is to standardize the core foundation while modularizing the flexible components. Key elements such as power architecture, structural design, network ingress, and commissioning frameworks should be consistent and validated. Variability is contained within designated liquid-ready, high-density zones that can host rear-door heat exchangers, direct-to-chip configurations, or immersion environments. By planning for this flexibility early, you avoid customization delays during the build.
CtrlS Datacenters is already building AI-ready campuses and high-density environments that support advanced cooling strategies without re-engineering the entire facility each time. The wider industry trend is clear: prefabrication, POD-based construction, and repeatable design templates are becoming crucial for shortening timelines as AI demand grows. Industrializing the AI data center is less about enforcing uniformity and more about establishing a structured delivery model, modular units, liquid-ready infrastructure, and disciplined execution, so that build cycles stay predictable even as technical requirements change.
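As a minimal sketch of that delivery model in Python: the core is defined and validated once, and only the cooling module varies per hall. Field names, module names, and density figures are illustrative assumptions, not a CtrlS reference design.

```python
# "Standardize the core, modularize the flexible parts" as a design
# template. All names and figures are illustrative assumptions,
# not a CtrlS reference design.

from dataclasses import dataclass, field

# Validated once, reused everywhere: the parts that never go bespoke.
STANDARD_CORE = {
    "power_architecture": "2N UPS, dual utility feeds",
    "network_ingress": "diverse dual-path fiber entry",
    "commissioning": "integrated systems testing framework",
}

# The one customer-variable choice, each option pre-engineered.
COOLING_MODULES = {"air": 20, "rear_door": 50, "direct_to_chip": 100}  # kW/rack

@dataclass
class HallTemplate:
    name: str
    racks: int
    cooling: str
    core: dict = field(default_factory=lambda: dict(STANDARD_CORE))

    def validate(self) -> None:
        # Reject any request outside the pre-engineered menu.
        if self.cooling not in COOLING_MODULES:
            raise ValueError(f"unsupported cooling module: {self.cooling}")

hall = HallTemplate(name="AI-Hall-1", racks=120, cooling="direct_to_chip")
hall.validate()
print(f"{hall.name}: {hall.racks} racks at "
      f"{COOLING_MODULES[hall.cooling]} kW/rack, core={list(hall.core)}")
```

The design choice is that flexibility lives only in a bounded menu of pre-validated cooling modules; anything outside that menu is rejected, which is what keeps build cycles predictable.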

