Intel fails at data center silicon acquisitions.
That is a harsh statement, but it is hard to feel otherwise when looking back over the past decade. The strange part is that Intel has done a decent job of identifying major industry trends and the areas where high-value silicon is needed. The other pattern is that the teams winning in those markets have not been Intel's acquired teams.
Let us look at five key acquisitions in data center silicon TAM expansion areas.
AI Silicon - This is, of course, a big one.
Habana Labs - Intel purchased this for $2B in December 2019. By 2024, it was generating under $500M of revenue.
Nervana Systems - Intel’s August 2016 purchase, for $350M to $400M, was shuttered in favor of Habana Labs.
Networking Switch Silicon - This has also been growing rapidly and will accelerate with the AI build-out.
Barefoot Networks - Intel purchased this for the P4-programmable Tofino 2 after losing the Mellanox bidding war to NVIDIA.
A few years outside of our decade window, Intel also acquired Fulcrum Microsystems and bought the QLogic InfiniBand business, which led to Omni-Path.
Programmable Silicon
Altera - From #1 in the FPGA market to a spin-off. Most would say AMD-Xilinx fared better than Altera over this period.
eASIC - This one has not been trumpeted as a massive success but is probably not a total failure.
Whether we can call these acquisitions data center acquisitions is debatable, but the rationale for buying Altera and eASIC was predicated mainly on data center forecasts.
Let us take an alternative view, starting with the AI space.
Intel’s $55B+ AI Silicon Acquisition Blunders
In 2017 and 2018, Nervana had Naveen Rao leading both the company and Intel’s AI efforts, along with an interesting architecture that might have done well.
In 2016, NVIDIA was pushing deep learning on its GPUs. AlexNet was in 2012, and I can remember having meetings in 2014 where folks at NVIDIA and Mellanox were talking regularly about what was happening in the space.
NVIDIA may have been pushing the NVIDIA Tesla P100, but realistically, people were building (for the time) substantial machine learning clusters with the NVIDIA GeForce GTX 1080 Ti. A revisionist history might say the P100 drove AI back then, but the GTX 1080 Ti and the Titan Xp (including the Jedi edition) were installed all over the AI/deep learning clusters we saw in Silicon Valley data centers, on trips to China, and even in Europe. The first time we saw folks deploy thousands of GPUs specifically for deep learning, it was with GeForce consumer cards. The payback period for buying consumer GeForce GPUs versus renting AWS data center accelerators was something like three months, as Andrej Karpathy noted on an STH article in 2017. It was such a problem that NVIDIA had to change its EULA to force companies onto its higher-ASP data center GPUs.
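To make that payback comparison concrete, here is a back-of-envelope sketch. The prices below are illustrative assumptions (a rough consumer card street price and a rough per-GPU cloud hourly rate), not figures from the article or from AWS pricing of the era:

```python
# Back-of-envelope payback sketch: buying one consumer GPU outright
# versus renting a cloud GPU around the clock.
# Both prices below are assumed for illustration only.

consumer_gpu_cost = 700.0    # assumed consumer GPU street price, USD
cloud_rate_per_hour = 0.33   # assumed per-GPU cloud rental rate, USD/hour

# Hours of rented cloud GPU time that cost as much as buying the card.
hours_to_break_even = consumer_gpu_cost / cloud_rate_per_hour

# Convert to months of 24/7 utilization (30-day months).
months_to_break_even = hours_to_break_even / (24 * 30)

print(f"Break-even after ~{hours_to_break_even:.0f} GPU-hours "
      f"(~{months_to_break_even:.1f} months of 24/7 use)")
```

With those assumed numbers, a heavily utilized consumer card pays for itself in roughly three months, which is the ballpark the article cites.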
These changes were important because, combined with hardware segmentation over time, they drove accelerator ASPs from the cost of a gaming GPU to tens of thousands of dollars each. That ASP growth has led to significant investments in AI accelerators. Not only did Intel identify the trend when it was commonplace to use consumer GPUs, it made a bet before the idea of a dedicated data center AI accelerator really took off.
Intel actually was on-trend here with Nervana. The company had an AI Day in 2016 in San Francisco. Naveen Rao took the stage and shared his neuroscience background and how that translated into his and Intel’s vision for neural networks. After Intel Data Centric Day 2018, I had the chance to chat with Naveen for some time outside, and he got it: high memory bandwidth, new numerics for computation, low-latency interconnects scaling to many accelerators, and programmers working in higher-level languages instead of CUDA. I think that Naveen Rao and Raja Koduri, if unleashed, could have legitimately built something to challenge NVIDIA, at least from a hardware perspective. The challenges they articulated map closely onto what NVIDIA’s roadmap has since tackled.
By 2019, it was clear that NVIDIA had a leadership position in deep learning with CUDA and its GPUs. Deep learning in 2019 was still not the mainstream AI push we have going on in 2025; it was still early, and there was also a big focus on areas like recommendation engines. Instead of doubling down on its existing architectures, Intel was seemingly spooked by a hyper-scale procurement process.
Facebook (now Meta) looked at the Habana Labs solution during a 2019 bake-off but ultimately decided against deploying it as its go-forward training design. Facebook/Meta apparently liked some of the Nervana solutions but did not like the Habana Labs solution enough to go all-in on the technology. It seems Intel bought Habana Labs without a commitment from Facebook/Meta that it would buy huge streams of parts if the acquisition went through.
Taking a step back, that was almost unimaginable at the time. In 2019, the AMD EPYC 7002 “Rome” launched, and AMD had a part that could easily 2x what Intel had for the next two years or so; it would take Intel another five years to get competitive again on the server CPU side. Even knowing these roadmaps, Facebook (Meta) remained a fierce Intel deployer in that era. Yet when it came to Intel buying an AI training startup, Facebook was not committing to buy huge quantities of AI training chips from its big silicon partner of the day.
Perhaps the strangest part was that in early 2021, I distinctly remember having drinks with the Intel-Habana folks in Dallas, Texas, and they felt like they had a competitive product but had been largely sidelined within Intel. Imagine being inside Intel, having a product for a super hot market that was part of a $2B acquisition, and feeling like Intel’s management was not behind you. The folks from Habana Labs will not say this publicly, but after my conversations, I almost feel like Intel bought Habana, gave up on Nervana, and then pushed Habana Labs into slow gear instead of enabling that team to beat or even keep pace with NVIDIA.
What if Intel just funneled money into NVIDIA stock instead of betting on being able to acquire talent and cultivate products inside Intel?