At CES in Las Vegas, Nvidia CEO Jensen Huang unveiled the company’s next AI computing platform, Vera Rubin, and highlighted that a significant part of its performance depends on technology developed in Israel.
Rubin is presented as the successor to Nvidia's Blackwell chips. But Huang's larger point was that the next leap in AI won't come from faster GPUs alone. It will come from designing the whole system together: the compute chips, the networking, and the data-handling components that keep everything running at full speed.
Nvidia says the Vera Rubin platform combines six new chips: the Vera CPU, Rubin GPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet Switch.
Several of these are closely tied to Nvidia's Israeli teams, especially the networking work that grew out of its Mellanox acquisition. That networking provides the system's high-speed connections, which determine whether huge clusters of GPUs work smoothly or sit idle waiting for data.
This matters because in modern AI training, the bottleneck is often not a lack of computing power but the need for thousands of chips to share data and coordinate constantly. Faster, smarter connections can make a real difference to speed and cost.
Why Nvidia is talking about “systems,” not just chips
Huang’s pitch reflects what customers are building: giant AI data centres (“AI factories”) that train models and run AI services at scale. These setups depend heavily on fast communication between machines.
Nvidia’s argument is that tighter integration across the whole stack can shorten training time and reduce the cost of running AI — which is exactly what big cloud companies and governments care about as budgets and electricity use become constraints.
Huang also emphasised “open models,” saying 2025 was a breakout year for them. Nvidia’s approach is to sell the hardware and software while supporting a wider model ecosystem that keeps demand growing. As part of that, Nvidia pointed to its support for LTX-2, a video model from Israeli company Lightricks.
Rubin is already in production, with broader commercial availability expected in the second half of 2026. Huang also discussed Nvidia’s autonomous-vehicle push, including Alpamayo, and said ongoing work with Mercedes-Benz would appear in vehicles expected to launch in early 2026.
The key takeaway: Nvidia is treating Israel as a core contributor to the technology that makes large-scale AI possible — especially the networking and system parts that keep massive AI computers running efficiently. That’s a central role in the global AI supply chain, not a side project.