In a significant step for artificial intelligence (AI) infrastructure, Enfabrica Corporation announced at Supercomputing 2024 (SC24) that it has closed a $115 million Series C funding round and will soon launch its 3.2 Terabit-per-second (Tbps) Accelerated Compute Fabric (ACF) SuperNIC chip, which the company describes as an industry first. The announcement underscores Enfabrica’s growing influence in the AI and high-performance computing (HPC) sectors and its position as an innovator in scalable AI networking.
The oversubscribed Series C financing was led by Spark Capital, with contributions from new investors Maverick Silicon and VentureTech Alliance. Existing investors, including Atreides Management, Alumni Ventures, Liberty Global Ventures, Sutter Hill Ventures, and Valor Equity Partners, also took part in the round, underscoring widespread confidence in Enfabrica’s vision and products. This latest capital injection follows Enfabrica’s Series B funding of $125 million in September 2023, highlighting the rapid growth and sustained investor interest in the company.
“This Series C fundraise fuels the next stage of growth for Enfabrica as a leading AI networking chip and software provider,” said Rochan Sankar, CEO of Enfabrica. “We were the first to draw up the concept of a high-bandwidth network interface controller chip optimized for accelerated computing clusters. And we are grateful to the incredible syndicate of investors who are supporting our journey. Their participation in this round speaks to the commercial viability and value of our ACF SuperNIC silicon. We’re well positioned to advance the state of the art in networking for the age of GenAI.”
The funding will be allocated to support the volume production ramp of Enfabrica’s ACF SuperNIC chip, expand the company’s global R&D team, and further develop Enfabrica’s product line, with the goal of transforming AI data centers worldwide. This funding provides the means to accelerate product and team growth at a pivotal moment in AI networking, as demand for scalable, high-bandwidth networking solutions in AI and HPC markets is rising steeply.
What Is a GPU and Why Is Networking Important?
A GPU, or Graphics Processing Unit, is a specialized electronic circuit designed to accelerate the processing of images, video, and other compute-intensive workloads. Unlike traditional Central Processing Units (CPUs), which are optimized for sequential, latency-sensitive tasks, GPUs are built for massively parallel processing, making them highly effective at training AI models, performing scientific computations, and processing high-volume datasets. These properties make GPUs a fundamental tool in AI, enabling the training of large-scale models that power technologies such as natural language processing, computer vision, and other GenAI applications.
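To make the contrast concrete, the small Python sketch below (illustrative only, not vendor code) compares an element-by-element loop with a vectorized NumPy operation that libraries execute through parallel, SIMD-style kernels. GPUs apply the same data-parallel principle across thousands of cores.

```python
# Illustrative only: contrasts element-by-element (sequential) work with a
# vectorized operation that runs on parallel/SIMD kernels under the hood.
# GPUs push the same data-parallel idea to thousands of cores.
import time
import numpy as np

n = 10_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Sequential: one multiply-add per loop iteration.
start = time.perf_counter()
out_seq = [a[i] * b[i] + 1.0 for i in range(n)]
t_seq = time.perf_counter() - start

# Data-parallel: a single vectorized expression over the whole array.
start = time.perf_counter()
out_vec = a * b + 1.0
t_vec = time.perf_counter() - start

print(f"sequential loop: {t_seq:.2f}s, vectorized: {t_vec:.3f}s")
```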
In data centers, GPUs are deployed in vast arrays to handle massive computational workloads. For AI clusters to perform at scale, however, these GPUs require a robust, high-bandwidth networking fabric that moves data efficiently among the GPUs themselves and between GPUs and other components. Enfabrica’s ACF SuperNIC chip addresses this challenge by providing unprecedented connectivity, enabling seamless integration and communication across large GPU clusters.
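A rough back-of-envelope estimate shows why per-GPU link bandwidth matters so much for distributed training. In a ring all-reduce, each GPU sends and receives roughly 2 × (N − 1) / N times the gradient size per synchronization step. The sketch below uses assumed figures (model size, precision, cluster size), not Enfabrica numbers, to show how sync time shrinks as link speed rises.

```python
# Rough back-of-envelope estimate (assumed numbers, not Enfabrica figures):
# how long one gradient synchronization takes at different link speeds.
# A ring all-reduce moves roughly 2 * (N - 1) / N times the gradient size
# per GPU, so the network link quickly becomes the limiting factor.

def allreduce_seconds(param_count: int, bytes_per_param: int,
                      num_gpus: int, link_gbps: float) -> float:
    gradient_bytes = param_count * bytes_per_param
    traffic_bytes = 2 * (num_gpus - 1) / num_gpus * gradient_bytes
    link_bytes_per_s = link_gbps * 1e9 / 8
    return traffic_bytes / link_bytes_per_s

params = 70_000_000_000   # assumed 70B-parameter model
bytes_per_param = 2       # fp16/bf16 gradients
gpus = 1024               # assumed cluster size

for gbps in (100, 400, 800, 3200):
    t = allreduce_seconds(params, bytes_per_param, gpus, gbps)
    print(f"{gbps:>5} Gb/s per GPU -> ~{t:.2f} s per gradient sync")
```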
Breakthrough Capabilities of Enfabrica’s ACF SuperNIC
The newly introduced ACF SuperNIC delivers a throughput of 3.2 Tbps with multi-port 800-Gigabit Ethernet connectivity, providing four times the bandwidth of any other GPU-attached network interface controller (NIC) on the market along with multipath resiliency, and establishing Enfabrica as a frontrunner in advanced AI networking. The SuperNIC enables a high-radix, high-bandwidth network design that supports PCIe/Ethernet multipathing and data-mover capabilities, allowing data centers to scale up to 500,000 GPUs while maintaining low latency and high performance.
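As a quick sanity check on the headline figure, 3.2 Tbps of aggregate throughput corresponds, for example, to four 800 GbE ports, eight 400 GbE ports, or thirty-two 100 GbE ports. The short sketch below just does that arithmetic; the port mixes shown are illustrative, not a statement of the chip’s official configurations.

```python
# Quick arithmetic check of the 3.2 Tbps aggregate throughput.
# Port mixes are illustrative examples, not official chip configurations.
configs = {
    "4 x 800 GbE": (4, 800),
    "8 x 400 GbE": (8, 400),
    "32 x 100 GbE": (32, 100),
}

for name, (ports, gbps) in configs.items():
    total_tbps = ports * gbps / 1000
    print(f"{name:>13}: {total_tbps:.1f} Tbps aggregate")
```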
The ACF SuperNIC is the first of its kind to introduce a “software-defined networking” approach to AI networking, giving data center operators full-stack control and programmability over their network infrastructure. This ability to customize and fine-tune network performance is vital for managing large AI clusters, which require highly efficient data movement to avoid bottlenecks and maximize computational efficiency.
“Today is a watershed moment for Enfabrica. We successfully closed a major Series C fundraise and our ACF SuperNIC silicon will be available for customer consumption and ramp in early 2025,” said Sankar. “With a software and hardware co-design approach from day one, our purpose has been to build category-defining AI networking silicon that our customers love, to the delight of system architects and software engineers alike. These are the people responsible for designing, deploying and efficiently maintaining AI compute clusters at scale, and who will decide the future direction of AI infrastructure.”
Unique Features Driving the ACF SuperNIC
Enfabrica’s ACF SuperNIC chip incorporates several pioneering features designed to meet the unique demands of AI data centers. Key features include:
- High-Bandwidth Connectivity: Supports 800, 400, and 100 Gigabit Ethernet interfaces, with up to 32 network ports and 160 PCIe lanes. This connectivity enables efficient and low-latency communication across a vast array of GPUs, crucial for large-scale AI applications.
- Resilient Message Multipathing (RMM): Enfabrica’s RMM technology prevents network interruptions and AI job stalls by rerouting traffic around network failures, improving resiliency and keeping GPU utilization high. This is especially critical for uptime and serviceability in AI data centers, where continuous operation is vital; a simplified sketch of the multipath idea follows this list.
- Software Defined RDMA Networking: By implementing Remote Direct Memory Access (RDMA) networking, the ACF SuperNIC offers direct memory transfers between devices without CPU intervention, significantly reducing latency. This feature enhances the performance of AI applications that require rapid data access across GPUs.
- Collective Memory Zoning: This technology optimizes data movement and memory management across CPU, GPU, and CXL 2.0-based endpoints attached to the ACF-S chip. The result is more efficient memory utilization and higher Floating Point Operations per Second (FLOPs) for GPU server clusters, boosting overall AI cluster performance.
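The multipathing idea behind RMM can be sketched at a high level: traffic that would normally take one path is spread across surviving paths when a link fails, so a job keeps making progress instead of stalling. The toy Python sketch below illustrates only that concept; it is not Enfabrica’s RMM implementation, and all names are hypothetical.

```python
# Toy illustration of the multipath-failover concept behind RMM-style
# resiliency: spread messages over healthy paths and reroute when one fails.
# This is a conceptual sketch, not Enfabrica's implementation.
from itertools import cycle


class MultipathSender:
    def __init__(self, paths):
        self.healthy = set(paths)

    def mark_failed(self, path):
        self.healthy.discard(path)

    def send(self, messages):
        if not self.healthy:
            raise RuntimeError("no healthy paths: job would stall")
        # Round-robin messages across the surviving paths.
        assignment = {}
        for msg, path in zip(messages, cycle(sorted(self.healthy))):
            assignment.setdefault(path, []).append(msg)
        return assignment


sender = MultipathSender(paths=["plane-0", "plane-1", "plane-2", "plane-3"])
print(sender.send(range(8)))      # traffic spread over four paths

sender.mark_failed("plane-2")     # a link goes down...
print(sender.send(range(8)))      # ...and traffic is rerouted, not stalled
```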
The ACF SuperNIC’s hardware and software capabilities enable high-throughput, low-latency connectivity across GPUs, CPUs, and other components, setting a new benchmark in AI infrastructure.
Availability and Future Impact
Enfabrica’s ACF SuperNIC will be available in initial quantities in Q1 2025, with broader commercial availability anticipated through OEM and ODM system partners over the course of 2025. This launch, backed by substantial investor confidence and capital, places Enfabrica at the forefront of next-generation AI data center networking, an area critical to supporting the exponential growth of AI applications globally.
With these advancements, Enfabrica is set to redefine the landscape of AI infrastructure, providing AI clusters with unmatched efficiency, resiliency, and scalability. By combining cutting-edge hardware with software-defined networking, the ACF SuperNIC paves the way for unprecedented growth in AI data centers, offering a solution tailored to meet the demands of the world’s most intensive computing applications.