Unlike traditional AI server deployments that rely on general-purpose GPUs, Meta is committing to custom application-specific integrated circuits (ASICs) developed in close collaboration with Broadcom.
These chips are tailored for the unique demands of Meta’s large-scale AI models, promising superior performance and energy efficiency over existing solutions.
The partnership with Quanta, a prominent Taiwanese server manufacturer, ensures these highly specialized servers are brought to market at scale, meeting Meta’s aggressive roadmap.
The new Santa Barbara servers represent a leap in both scale and technical complexity. Their thermal design power will exceed 180 kilowatts per rack, necessitating advanced liquid cooling and a bespoke rack architecture just to manage the heat and power requirements.
This marks a notable escalation from Meta’s previous generation, codenamed ‘Minerva,’ as the company looks to future-proof its infrastructure against the growing resource demands of advanced AI research and applications.
Meta’s pivot to in-house silicon does not stop at hardware. The company is also ramping up its general-purpose server fleet, largely based on AMD chips, and has doubled down on its own AI accelerator chip, MTIA, with shipments expected to double by 2026.
This is part of a wider strategy evidenced by Meta’s plan to increase capital expenditures to between $66 billion and $72 billion in 2025, with even more spending anticipated in 2026 as it seeks to recruit top AI talent and complete its data center build-outs.
For the broader industry, Meta’s move underlines a shift toward bespoke AI computing environments built for efficiency and scale.
As leading US cloud service providers continue to invest in AI server infrastructure, suppliers with the expertise and capacity to deliver custom solutions—like Broadcom for ASIC chips and Quanta for high-density server assembly—find themselves in an increasingly pivotal position.
The scale of the Santa Barbara project and Meta’s multi-billion-dollar investments not only signal an intensification of competition in AI but also set fresh benchmarks for what is possible in cloud data center engineering.