AMD and Meta Introduce Open Rack Design for Next-Generation AI Computing

Meta has unveiled an open rack specification for AI infrastructure at the OCP Global Summit, with AMD’s Helios system demonstrating how the design can be built out using MI400 Series GPUs. The collaboration aims to provide scalable, high-performance computing without proprietary lock-in, efficiently supporting trillion-parameter AI models.

Open Standards Drive AI Infrastructure Innovation

At the Open Compute Project (OCP) Global Summit in San Jose, Meta introduced specifications for an open rack architecture designed to enhance artificial intelligence systems, according to reports. The Open Rack Wide (ORW) design, based on open standards, serves as the foundation for AMD’s Helios rack-scale reference system, which aims to improve scalability and efficiency in large-scale AI data centers.

Data Center Industry Shifts to 21-Inch Open Rack Standard for AI Infrastructure

The data center industry is undergoing a significant transformation as cloud providers and server manufacturers adopt 21-inch Open Rack designs. According to new research, this wider format better accommodates AI server requirements and could dominate rack shipments by 2030.

Industry Transition Accelerates

The data center industry is shifting away from traditional 19-inch rack standards toward wider 21-inch Open Rack designs, according to reports from leading research firms. This transition is gaining momentum as demand for AI infrastructure continues to surge among major cloud providers and technology companies.

NVIDIA AI Server Power Consumption Surges 100x, Raising Global Energy Grid Concerns

NVIDIA’s AI server platforms have experienced a staggering 100-fold increase in power consumption between generations, according to industry analysis. The International Energy Agency projects AI alone could double global electricity demand by 2030, raising serious concerns about grid capacity and sustainability.

NVIDIA’s AI Servers See Exponential Power Demand Growth

NVIDIA’s AI server platforms have experienced a dramatic 100-fold increase in power consumption between generations, according to analysis shared by industry expert Ray Wang. The transition from Ampere to Kyber architecture marks one of the most significant power requirement jumps in artificial intelligence computing history, raising fundamental questions about the sustainability of current growth trajectories.

Is the AI Investment Bubble About to Burst? Examining the Circular Economy

With AI investments driving 40% of US GDP growth, recent circular deals between tech giants raise questions about sustainable growth. Industry analysts examine whether this represents genuine innovation or financial round-tripping.

The American economy has become a massive bet on artificial intelligence, with recent analysis showing AI investments accounting for approximately 40% of United States GDP growth projections for 2025. According to Morgan Stanley investor Ruchir Sharma, AI companies are now responsible for 80% of growth in American stocks, creating what some experts are calling an AI investment conveyor belt that may be showing signs of strain.

The Circular Deal Phenomenon in AI

Accelsius MR250 CDU Liquid Cooling System Now Generally Available

Accelsius has launched the NeuCool MR250 coolant distribution unit, providing 250kW of liquid cooling capacity per rack. The system supports high facility water temperatures and multiple refrigerants, with deployments expanding through 2026. This marks a significant advancement in scalable data center cooling technology.

Accelsius has announced the general availability of the NeuCool MR250, the company’s first row-based coolant distribution unit (CDU), which delivers 250kW of liquid cooling capacity per rack. This two-phase, direct-to-chip liquid cooling technology represents a major step forward in data center thermal management, offering flexible configurations of either one 250 kW unit or two 125 kW units per rack, according to recent analysis of cooling system capabilities.
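To put a 250 kW rack-level cooling budget in context, a rough sizing sketch can estimate how many accelerators such a CDU could support at various per-device thermal loads. The per-GPU wattages and the 15% overhead reserve below are illustrative assumptions, not figures from the article:

```python
# Rough sizing sketch for a 250 kW rack-level CDU (NeuCool MR250 capacity
# per the article). GPU TDPs and the overhead fraction are assumptions
# chosen for illustration only.

RACK_COOLING_KW = 250  # cooling capacity per rack, from the article


def max_devices(device_tdp_w: float, overhead_fraction: float = 0.15) -> int:
    """Devices supportable after reserving a fraction of capacity for
    CPUs, NICs, and power-conversion losses (assumed 15% here)."""
    usable_w = RACK_COOLING_KW * 1000 * (1 - overhead_fraction)
    return int(usable_w // device_tdp_w)


for tdp in (700, 1000, 1400):  # hypothetical per-GPU TDPs in watts
    print(f"{tdp} W per GPU -> up to {max_devices(tdp)} GPUs per rack")
```

Under these assumptions, even GPUs drawing well over a kilowatt each fit in the hundreds per rack, which illustrates why rack-scale liquid cooling at this capacity targets dense AI deployments.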

Advanced Cooling Technology for Modern Data Centers