Every year, we build a system that feels impossible to top. And the next year, we somehow manage to prove ourselves wrong.
A flagship Threadripper platform. Four RTX 5090s. Two power supplies. Thirty-six fans. A price tag of roughly ₹20 lakh as of December 2025. This is not a system you encounter casually. It is the first of its kind in India and one of very few globally.
At the core sits a Threadripper 9970X with 32 cores and 64 threads, paired with a Gigabyte TRX50 motherboard. This GPU configuration cannot function on consumer platforms like Ryzen 9 or Intel Core Ultra 9. Those platforms lack the PCIe lanes and memory architecture required to sustain four flagship GPUs without compromise.
The system runs 4×64 GB of ECC RDIMM memory, which in today’s market costs nearly as much as a single RTX 5090. Then come the GPUs themselves: four RTX 5090s, each operating at full bandwidth. At this scale, PCIe bandwidth becomes the primary constraint. Threadripper here is not a nice-to-have. It is the minimum requirement.
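The lane argument can be sketched as simple arithmetic. The lane counts below are approximate, illustrative figures (consumer desktop CPUs expose roughly 24–28 usable lanes; TRX50 Threadripper exposes far more), not exact platform specifications:

```python
# Rough PCIe lane budget for four GPUs at full x16 width.
# Lane counts are illustrative assumptions, not exact platform specs.

LANES_PER_GPU = 16          # each RTX 5090 wants a full x16 link
NUM_GPUS = 4

def lanes_needed(gpus: int, per_gpu: int = LANES_PER_GPU) -> int:
    """Total CPU PCIe lanes required to run every GPU at full width."""
    return gpus * per_gpu

CONSUMER_LANES = 24         # approximate, Ryzen 9 / Core Ultra 9 class
THREADRIPPER_LANES = 80     # approximate usable lanes on TRX50

need = lanes_needed(NUM_GPUS)   # 64 lanes just for the GPUs
print(need, need <= CONSUMER_LANES, need <= THREADRIPPER_LANES)
# 64 lanes: impossible on a consumer platform, comfortable on Threadripper
```

Even before storage and networking claim their share, four full-width GPUs alone want more lanes than a consumer CPU exposes in total.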
Power Delivery & Cooling
Power delivery is where theoretical builds usually collapse. This system uses a dual-PSU configuration because no single power supply can reliably handle a Threadripper CPU alongside the sustained and transient spikes of four RTX 5090s.
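A back-of-the-envelope power budget makes the dual-PSU decision concrete. The figures below are approximations (GPU board power, CPU TDP, system overhead, and the transient multiplier are all assumed for illustration):

```python
# Why one PSU is not enough: approximate sustained and transient draw.
# All wattage figures and the transient factor are illustrative assumptions.

GPU_POWER_W = 575        # approx. RTX 5090 board power
CPU_POWER_W = 350        # approx. Threadripper 9970X TDP
NUM_GPUS = 4
SYSTEM_OVERHEAD_W = 200  # fans, pumps, drives, memory (assumed)
TRANSIENT_FACTOR = 1.5   # modern GPUs can spike well above rated power

sustained = NUM_GPUS * GPU_POWER_W + CPU_POWER_W + SYSTEM_OVERHEAD_W
worst_case = (NUM_GPUS * GPU_POWER_W * TRANSIENT_FACTOR
              + CPU_POWER_W + SYSTEM_OVERHEAD_W)

print(f"sustained: {sustained} W, transient worst case: {worst_case:.0f} W")
# Sustained load alone is ~2850 W -- beyond any single consumer ATX PSU.
```

With transient spikes factored in, the worst case approaches 4 kW, which is why the load is split across two supplies rather than pushed through one oversized unit.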
The build uses thirty-six fans across two massive radiators with soft tubing throughout. This is not about visual excess. It is about maintaining thermal headroom under continuous, real-world load.
Four GPUs Isn’t Really New
To understand why this system exists, context matters.
In 2017, a four-GPU build would not have been surprising. Multi-GPU systems were common. We built machines with four GTX 1080 Tis, dual Titans, and many more. This was not limited to enterprises. Even enthusiasts would add a second identical GPU instead of upgrading generations. AMD went further, enabling mixed-GPU configurations. The logic was simple: add hardware, gain performance.
Then NVIDIA phased out SLI and consumer multi-GPU support, in part because developers struggled to optimize games for it. Applications shifted toward single-GPU efficiency, and over time multi-GPU disappeared from the mainstream.
Where Multi-GPU Still Makes Sense
Multi-GPU is effectively dead for gaming. It is very much alive for compute.
This system exists for GPU-bound workloads, specifically 3D rendering. Engines like OctaneRender, Redshift, V-Ray, and Blender Cycles scale in parallel by design. Offline rendering does not require tight synchronization. One GPU can render the top half of a frame while another renders the bottom. They only need to communicate at the end.
That architectural difference removes the core limitation that killed SLI. In some engines, scaling is close to linear. If one RTX 5090 takes ten minutes, four can bring that down to roughly two and a half. For animation studios, that collapses timelines from weeks to days. When time saved directly translates to revenue, the investment becomes logical.
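The scaling arithmetic above can be sketched as a one-line model. The efficiency parameter is an assumption; real scaling varies by engine and scene:

```python
# Simple model of multi-GPU render scaling: work divides across GPUs,
# discounted by a scaling-efficiency factor (assumed, engine-dependent).

def render_time(single_gpu_minutes: float, gpus: int,
                efficiency: float = 1.0) -> float:
    """Estimated wall-clock render time when a frame splits across `gpus`."""
    return single_gpu_minutes / (gpus * efficiency)

print(render_time(10, 1))        # 10.0 minutes on one RTX 5090
print(render_time(10, 4))        # 2.5 minutes with perfect linear scaling
print(render_time(10, 4, 0.9))   # ~2.8 minutes at an assumed 90% efficiency
```

Because offline renderers only merge results at the end of a frame, efficiency stays close to 1.0, which is exactly the property that real-time SLI rendering never had.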
The Reality of Building a System Like This
Designing it on paper is easy; building it most definitely isn’t.
The first challenge was GPU compatibility. Water blocks are not available for every RTX 5090 variant. We had to lock in reference PCB designs. Even then, the blocks were not available locally and had to be imported.
Once importing became unavoidable, we imported everything tied to the loop: blocks, tubing, quick disconnects, radiators, and fittings. More than half the system is sourced internationally, which adds delay and complexity.
On paper, push-pull radiator setups at the front and top worked. In reality, the tolerances didn’t align. The radiators are extremely thick, and once assembled, one section physically could not fit due to interference with a top fan. That is why a visible fan gap exists. The fan is not missing; it is impossible to place.
When Even the Screws Are a Problem
The CPU block presented another issue. The manufacturer shipped mounting hardware for every socket except Threadripper TRX50. There is no off-the-shelf replacement for those bolts.
The solution involved taking a Xeon screw from the same kit, machining it down at a local shop, and threading it precisely. Only then could proper mounting pressure and thermal contact be achieved.
This is not trivial work. You are opening graphics cards worth over ₹3 lakh and modifying cooling hardware. Errors are expensive.
Loop Design and Thermal Results
The loop layout itself went through multiple revisions for clarity and aesthetics. Manifolds were repositioned to keep the CPU visible and the system readable.
The final configuration uses extensive quick disconnects, allowing sections of the loop to be isolated without draining everything.
There are two loops:
- One loop for the CPU and one GPU
- One loop for the remaining three GPUs
Each loop is cooled by thick 480 mm radiators in push-pull. The results are exceptional. GPU idle temperatures below 30°C are rare at this power level. Threadripper still runs hot by nature, but with this much cooling capacity, it remains well within safe limits.
Sure, four 5090s are not for everyone. But if you have a workload that can benefit from multiple GPUs, we build custom workstations tailored to exactly what you need. Whether you’re rendering, developing games, or just editing videos, we’ll configure a system that fits your workflow, not just throw specs at you.
Check out our workstation builds on our website and see what’s possible when you match the right hardware to the right problem.