Intel Corporation on Monday announced that it has acquired Habana Labs, an Israel-based developer of programmable deep learning accelerators for the data center, for approximately $2 billion. The combination strengthens Intel’s artificial intelligence (AI) portfolio and accelerates its efforts in the nascent, fast-growing AI silicon market, which Intel expects to exceed $25 billion by 2024.
“This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need – from the intelligent edge to the data center,” said Navin Shenoy, executive vice president and general manager of the Data Platforms Group at Intel. “More specifically, Habana turbo-charges our AI offerings for the data center with a high-performance training processor family and a standards-based programming environment to address evolving AI workloads.”
Overall, Intel expects to generate over $3.5 billion in AI-driven revenue, up more than 20 percent year-over-year. According to Intel, Habana will remain an independent business unit and will continue to be led by its current management team. Habana will report to Intel’s Data Platforms Group, home to Intel’s broad portfolio of data center–class AI technologies. The combination gives Habana access to Intel’s AI capabilities, including significant resources built over the last three years, with deep expertise in AI software, algorithms and research, that will help Habana scale and accelerate.
“We have been fortunate to get to know and collaborate with Intel given its investment in Habana, and we’re thrilled to be officially joining the team,” said David Dahan, CEO of Habana. “Intel has created a world-class AI team and capability. We are excited to partner with Intel to accelerate and scale our business. Together, we will deliver our customers more AI innovation, faster.”
Habana’s Gaudi AI Training Processor is currently sampling with select hyperscale customers, while its commercially available Goya AI Inference Processor has demonstrated excellent inference performance, including throughput and real-time latency, in a highly competitive power envelope.