Selecting Liquid Cooling Technologies and Equipment: Decision Criteria for Critical Installations
The problem behind the choice
When a data center operator evaluates liquid cooling, the question is not which technology is best in absolute terms. The right question is which technology best solves the specific problem of that installation: its thermal load density, its existing infrastructure, its budget, and the operational capacity of its team.
The market offers four distinct technology families: rear-door heat exchangers (RDHx), direct-to-chip cooling (D2C), single-phase immersion, and two-phase immersion. Each operates within different density ranges, demands different levels of capital investment, and carries operational implications that extend across the entire service life of the installation.
Cold plate liquid cooling accounts for more than 55% of the market by value — exceeding USD 3.1 billion in 2026 according to Persistence Market Research — while two-phase immersion is recording the fastest growth rate, driven by its energy efficiency (PUE 1.02-1.03) and capacity to support extreme thermal densities.
Reaclima has worked with hybrid cooling configurations in installations such as Foxconn GDL Vesta 8 and Amazon AWS Querétaro. In both cases, technology selection was the result of a rigorous technical and economic analysis, not a market trend.
Rear-Door Heat Exchangers (RDHx): the entry point to liquid cooling
Rear-door heat exchangers are the lowest-friction adoption option. They mount in place of a standard rack's rear door and capture the heat servers would otherwise expel into the hot aisle, extracting it through an air-to-water heat exchanger before it disperses into the room.
The operating principle is straightforward: hot exhaust air passes through a coil carrying chilled water (18-25°C) and drops 10-15°C before being discharged back into the room. The water returns to the chiller or CDU (Coolant Distribution Unit) with a temperature rise (Delta T) of 8-12°C.
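To put numbers to that heat balance, the relation Q = m · cp · ΔT ties rack load, water flow, and Delta T together. A minimal sizing sketch, assuming a 25 kW rack and the water-side figures above (illustrative values, not a design calculation):

```python
# Rough RDHx water-side sizing check: Q = m_dot * cp * delta_T.
# Illustrative values only, not a substitute for vendor selection tools.

Q_RACK_W = 25_000      # rack thermal load captured by the door, W (25 kW)
CP_WATER = 4_186       # specific heat of water, J/(kg*K)
RHO_WATER = 998        # density of water, kg/m^3

for delta_t in (8, 10, 12):                  # water-side Delta T, K
    m_dot = Q_RACK_W / (CP_WATER * delta_t)  # mass flow, kg/s
    flow_lpm = m_dot / RHO_WATER * 60_000    # volumetric flow, L/min
    print(f"Delta T {delta_t:>2} K -> {flow_lpm:.1f} L/min of chilled water")
```

At the low end of the Delta T range, the pumps and pipework must move roughly 50% more water for the same load, which is why vendors quote door capacity at a stated Delta T.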
The primary advantage is that no modifications to existing servers or internal rack cabling are required. This eliminates warranty risk and enables a phased rollout. By removing between 60% and 80% of the heat generated, RDHx also reduces pressure on room-level air conditioning systems — which in many cases allows thermal density to increase without expanding the existing cooling plant.
The relevant technical limitation appears above 25-30 kW per rack: the thermal gradient available between server air and cooling water becomes insufficient to transfer the full load. At that point, the RDHx has served its purpose as a transitional solution and the analysis must move toward direct-contact technologies.
The most established vendors in this segment include Vertiv (Liebert CRV Chilled Water, 25-35 kW per rack), Schneider Electric (EcoBreeze, passive and active configuration), and Rittal (LCP Rack Door with variable flow control).
Direct-to-Chip Cooling (D2C): thermal precision at the component level
Direct-to-chip cooling via cold plates is currently the most widely deployed technology in high-density installations. Unlike RDHx, which works on the rack's exhaust air, D2C brings the coolant directly to the component generating the heat: CPU, GPU, DIMM memory modules, and voltage regulators (VRM).
A cold plate is a thermal contact base — copper or aluminum — with internal channels through which the coolant flows. The channel design determines transfer capacity: microchannels (0.5-2 mm in diameter) increase contact surface area and achieve coefficients of up to 25 W/cm²·K (Energy and Built Environment, 2024). More specialized designs such as jet impingement or vapor chambers extend that capacity for loads with localized hot spots.
The bond between chip and cold plate is achieved through thermal interface materials (TIM): high-conductivity thermal compounds (>5 W/m·K), graphite pads, or — in maximum-performance applications — liquid metal (gallium-indium).
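The thermal stack matters in practice: the junction-to-coolant temperature rise is the sum of the drop across the TIM and the drop from plate to coolant. A rough sketch, where die power, contact area, and bond-line thickness are assumptions chosen for illustration:

```python
# Junction-to-coolant temperature rise across the TIM and the cold plate.
# Die power, contact area, and bond-line thickness are illustrative assumptions.

DIE_POWER_W = 700      # assumed GPU package power, W
DIE_AREA_CM2 = 8.0     # assumed die/contact area, cm^2
H_PLATE = 25.0         # cold plate heat transfer coefficient, W/(cm^2*K)
K_TIM = 5.0            # TIM thermal conductivity, W/(m*K)
T_TIM_M = 50e-6        # assumed TIM bond-line thickness, m (50 um)
T_COOLANT_C = 35.0     # coolant supply temperature, degC

q_flux = DIE_POWER_W / DIE_AREA_CM2     # heat flux at the plate, W/cm^2
k_tim_cm = K_TIM / 100                  # convert to W/(cm*K)
r_tim = (T_TIM_M * 100) / k_tim_cm      # TIM area resistance, cm^2*K/W
dt_tim = q_flux * r_tim                 # temperature rise across the TIM, K
dt_plate = q_flux / H_PLATE             # rise from plate to coolant, K
t_junction = T_COOLANT_C + dt_tim + dt_plate
print(f"heat flux {q_flux:.1f} W/cm^2 -> TIM +{dt_tim:.1f} K, "
      f"plate +{dt_plate:.1f} K, junction ~{t_junction:.1f} degC")
```

With these assumptions the die sits roughly 12 K above the coolant, which is why TIM quality and bond-line control matter as much as the plate design itself.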
The cold plates across a rack connect through the TCS (Technology Cooling System, the rack-level coolant distribution system), which integrates manifolds with quick-disconnect fittings. These connectors allow server hot-swaps without draining the entire system — valves automatically seal both sides upon disconnection — and include temperature and flow sensors that monitor Delta T per server in real time.
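As a sketch of what that per-server monitoring can look like, the function below classifies a Delta T reading against an expected band; the band limits and failure interpretations are hypothetical, not drawn from any vendor's tooling:

```python
# Sketch of per-server Delta T monitoring from the TCS manifold sensors.
# Band limits and failure interpretations are hypothetical, not vendor data.

def check_server_delta_t(supply_c: float, return_c: float,
                         band: tuple = (4.0, 15.0)) -> str:
    """Classify a server's coolant Delta T against an expected band."""
    delta_t = return_c - supply_c
    low, high = band
    if delta_t < low:
        return f"Delta T {delta_t:.1f} K below band: possible bypass or idle node"
    if delta_t > high:
        return f"Delta T {delta_t:.1f} K above band: possible low flow or fouling"
    return f"Delta T {delta_t:.1f} K within band"

print(check_server_delta_t(supply_c=30.0, return_c=41.5))
```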
For the coolant itself, the most common choice is deionized water with glycol (30-50%), combining good thermal conductivity (>0.5 W/m·K) with corrosion protection and freeze resistance. In applications where incidental contact with electrical components is a concern, engineered dielectric fluids such as 3M Novec 7000 series or Chemours Opteon are used; they are non-conductive, but their lower specific heat and thermal conductivity (~0.06-0.08 W/m·K) demand higher volumetric flow rates for the same load.
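The flow penalty is easy to quantify with the same heat balance, V = Q / (ρ · cp · ΔT). A comparative sketch with approximate room-temperature fluid properties (assumed figures; consult the coolant datasheet for real design work):

```python
# Required volumetric flow for the same load with two coolants:
# V = Q / (rho * cp * Delta T). Property values are approximate
# room-temperature figures, assumed for illustration.

FLUIDS = {
    # name: (density kg/m^3, specific heat J/(kg*K))
    "water-glycol 30%": (1_040, 3_600),
    "engineered dielectric": (1_400, 1_200),
}

Q_W = 10_000    # heat to remove per loop, W (10 kW)
DELTA_T = 10    # allowed coolant temperature rise, K

for name, (rho, cp) in FLUIDS.items():
    flow_lpm = Q_W / (rho * cp * DELTA_T) * 60_000   # m^3/s -> L/min
    print(f"{name:<21} {flow_lpm:5.1f} L/min for 10 kW at Delta T = {DELTA_T} K")
```

With these assumed properties the dielectric loop needs roughly twice the flow of the water-glycol loop, which cascades into larger pumps, lines, and quick-disconnects.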
D2C systems remove between 70% and 80% of the server's total thermal load directly at the highest heat-generating components. This allows rack ambient air temperatures of up to 35-40°C and PUE values between 1.08 and 1.15. In the 40-80 kW per rack range, D2C typically presents a lower TCO (Total Cost of Ownership) than immersion, given its use of standard servers and less specialized maintenance requirements.
The vendor ecosystem has matured considerably. CoolIT Systems offers CDUs of up to 2 MW and its ChilledDoor solution integrates rear-door and D2C into a compact 100 kW per rack unit. Asetek developed AI-based monitoring for predictive flow optimization, with certified integrations on Dell, HPE, and Lenovo equipment. Schneider Electric, following its acquisition of Motivair Corporation in February 2025, launched CDUs of 2.5 MW integrated with its EcoStruxure platform. Boyd Thermal, incorporated into Eaton in 2025, leads the design of custom cold plates with vapor chambers for GPU clusters.
This market consolidation through M&A — Eaton-Boyd ($9.5 billion), Trane-Stellar Energy Digital, Daikin-Chilldyne — has a practical implication: it drives component standardization, which improves parts availability and long-term technical support.
Immersion Cooling: maximum density, maximum complexity
Immersion cooling removes the air layer entirely: servers are submerged directly in a bath of non-conductive dielectric fluid. No fans, no cold plates, no heat transfer to room air. The conceptual simplicity stands in contrast with the operational complexity its adoption demands.
There are two modalities with distinct thermal logic.
In single-phase immersion, the fluid remains in liquid state throughout the cycle. Servers are submerged in tanks holding 100-500 liters of fluid. Heat transfers to the fluid through natural or forced convection (via internal agitation or pumping), and the warmed fluid is pumped to an external heat exchanger or CDU where it releases energy to the facility's chilled water circuit. The absence of mechanical vibration and the uniform bath temperature reduce thermal stress on solder joints and connections — installations have historically reported extended component service life as a result. The primary limitation is the low thermal conductivity of dielectric fluids: fluids such as 3M Novec 7100 (~0.06 W/m·K) require active agitation to prevent hot spots, and in passive configuration practical density is limited to 50-60 kW per rack, reaching 80-100 kW with forced convection.
Two-phase immersion (2PIC) operates on a different principle: the dielectric fluid is selected for a low boiling point (45-65°C), so component heat drives it to boil. The generated vapor rises to a condenser at the top of the tank, cools, condenses, and returns to the bath by gravity. This phase-change cycle — evaporation and condensation — transfers heat with far greater efficiency than liquid-phase convection, enabling PUE values of 1.02-1.03 and rack densities exceeding 150 kW.
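The latent heat of the fluid sets how much vapor a given load generates, via m = Q / h_fg. A back-of-envelope sketch using approximate published figures for a Novec-649-class fluid (assumptions to verify against the datasheet):

```python
# Vapor generation in a two-phase tank: m_dot = Q / h_fg.
# Fluid properties are approximate figures for a Novec-649-class fluid
# (assumptions; check the coolant datasheet for real values).

Q_TANK_W = 100_000   # tank heat load, W (100 kW)
H_FG = 88_000        # latent heat of vaporization, J/kg (~88 kJ/kg)
RHO_LIQ = 1_600      # liquid density, kg/m^3

m_dot = Q_TANK_W / H_FG                    # vapor mass flow, kg/s
condensate_lpm = m_dot / RHO_LIQ * 60_000  # liquid returned per minute, L/min
print(f"{m_dot:.2f} kg/s of vapor, ~{condensate_lpm:.1f} L/min of condensate "
      f"returning to the bath")
```

That condensate return rate is what sizes the condenser coil and the gravity-return path at the top of the tank.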
The price of that efficiency shows up on several fronts. Fluid cost ranges from $50 to $150 USD per liter: a 200-liter tank represents an initial investment of $10,000-$30,000 USD in fluid alone, before equipment. Tanks must maintain pressure slightly above atmospheric to prevent moisture ingress, requiring volume compensation systems and pressure relief valves. Server maintenance requires tank extraction and specialized cleaning protocols. And while recent formulations such as 3M Novec 649 have reduced Global Warming Potential (GWP) to below 10 — versus values above 1,000 in earlier fluids — fluid environmental management remains a relevant operational variable.
In the immersion segment, Submer Technologies leads in Europe with 200 kW SmartPods per unit and gigawatt-scale projects in development in India. GRC (Green Revolution Cooling) operates in cryptocurrency mining and HPC cluster environments with its ElectroSafe solution. LiquidStack, acquired by Lenovo, develops two-phase systems for hyperscale data centers with active pilot projects at Microsoft Azure. Asperitas focuses on modular immersion solutions for edge computing and compact industrial installations.
Reading the trade-off between technologies
The four technologies do not compete in the same space: they operate across different density ranges, budget profiles, and operational maturity requirements, and each offers a distinct efficiency-versus-complexity profile.
RDHx works without touching the server or modifying the room infrastructure, making it the only truly retrofittable option for installations with moderate densities. Its PUE — typically between 1.3 and 1.5 — does not compete with direct-contact technologies, but its implementation speed (weeks rather than months) and lower relative CapEx make it relevant when the objective is to quickly relieve an existing thermal constraint.
D2C introduces installation complexity and requires servers to be compatible with cold plates, but delivers a substantial efficiency gain (PUE 1.08-1.15) and can scale within the same rack as density grows. It is the technology with the most consolidated vendor ecosystem today, which simplifies procurement and reduces obsolescence risk.
Single-phase immersion is an intermediate solution that trades away the simplicity of RDHx and some of the efficiency of two-phase, but eliminates mechanical noise and fan dependency entirely. Its adoption has been slower than D2C's, partly because it requires server modifications (removing fans and spinning-disk storage) and partly because maintenance protocols are not yet standardized across most data centers.
Two-phase immersion sits at the far end of the spectrum: maximum efficiency, maximum density, and maximum upfront investment. Its operational complexity demands technically trained teams and reliable fluid supply chains. It remains an early-adoption technology in the Latin American market, where local technical support is still limited.
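One way to make the efficiency spread concrete is to compare annual facility energy for a fixed IT load at representative PUE values from the ranges above; the electricity price here is an assumption chosen for illustration:

```python
# Annual facility energy for a fixed IT load at representative PUE values.
# The electricity price is an assumption chosen for illustration.

IT_LOAD_KW = 1_000       # IT load, kW (1 MW)
HOURS_YEAR = 8_760
PRICE_USD_KWH = 0.10     # assumed electricity price, USD/kWh

for tech, pue in [("RDHx", 1.4), ("D2C", 1.12), ("2-phase immersion", 1.025)]:
    facility_mwh = IT_LOAD_KW * pue * HOURS_YEAR / 1_000   # MWh per year
    cost_usd = facility_mwh * 1_000 * PRICE_USD_KWH
    print(f"{tech:<18} PUE {pue:<5} -> {facility_mwh:,.0f} MWh/yr, "
          f"${cost_usd:,.0f}/yr")
```

At these assumptions, the spread between RDHx and two-phase immersion is on the order of USD 330,000 per year per megawatt of IT load, which is the recurring saving that has to be weighed against the higher upfront investment.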
What none of these variables can resolve in isolation is long-term interoperability. This is where open standards carry increasing weight in critical infrastructure decisions.
Open standards: OCP and Google Project Deschutes
Single-vendor dependency in mission-critical infrastructure represents an operational and financial risk that compounds over time: discontinued components, rising spare parts pricing, technical support conditioned on exclusive service contracts.
Google released the specifications of Project Deschutes — its proprietary CDU design — as an open standard in 2024. The project defines standardized quick-connect fittings with a common sealing protocol, monitoring interfaces based on the Redfish API, and modular CDU dimensions compatible with 19" racks and aisle configurations. The Open Compute Project (OCP) published its Advanced Cooling Solutions specifications (v2.1) in 2025, including reference designs validated for direct liquid cooling and immersion cooling by multiple manufacturers.
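As an illustration of what a Redfish-based monitoring interface enables, the sketch below polls a CDU's health status over HTTPS; the resource path follows Redfish's thermal-equipment model, but the exact tree and field names are vendor-dependent and should be treated as assumptions to verify:

```python
# Minimal Redfish polling sketch for CDU telemetry over HTTPS.
# The resource path follows the Redfish thermal-equipment model, but the
# exact tree and field names vary by vendor; verify both before relying on it.

import requests

BASE = "https://cdu.example.local"        # hypothetical CDU address
session = requests.Session()
session.auth = ("operator", "password")   # placeholder credentials
session.verify = False                    # lab convenience only; use real TLS

resp = session.get(f"{BASE}/redfish/v1/ThermalEquipment/CDUs/1", timeout=5)
resp.raise_for_status()
cdu = resp.json()
print(cdu.get("Name"), cdu.get("Status", {}).get("Health"))
```

Because the same pattern works against any Redfish-compliant unit, monitoring code written this way survives a change of CDU vendor, which is precisely the interoperability argument behind the standard.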
Adopting these standards is not a minor technical detail: it reduces custom engineering costs, accelerates procurement by opening the field to more compatible vendors, and enables component substitution without system redesign. For installations with 10-15 year operating horizons, compatibility with open standards is a selection criterion with direct impact on TCO.
Conclusion
Liquid cooling technology selection defines parameters that extend across the entire service life of the installation: energy consumption, capacity to scale thermal density, maintenance complexity, and resilience against changes in the vendor ecosystem. There is no universal answer, but there is a method: the technical and economic analysis that simultaneously considers initial CapEx, projected 10-15 year TCO, and the operational maturity of the team that will manage the system.
In projects such as Foxconn GDL Vesta 8 and Amazon AWS Querétaro, Reaclima has supported that analysis process from thermal load assessment through system commissioning. The right technology selection is not improvised at procurement time: it is built on data, on controlled proof-of-concept testing, and on a clear understanding of the trade-offs each technology implies.
Do you have an installation with growing thermal densities or a project in the design phase? Let's talk about the technical variables relevant to your specific case.
References
- Persistence Market Research. (2026). Data Center Liquid Cooling Market Size & Forecast 2026-2033.
- Energy and Built Environment. (2024). Liquid cooling of data centers: A necessity facing challenges. ScienceDirect.
- Data Center Dynamics. (2025). Chilling out in 2025: A year in data center cooling.
- MarketsandMarkets. (2025). Data Center Liquid Cooling Market Report 2026-2033.
- AIRSYS North America. (2026). Data Center Trends & Cooling Strategies to Watch in 2026.
- Open Compute Project. (2025). Advanced Cooling Solutions Specifications v2.1.