
SambaNova Unveils Fastest Chip for Agentic AI, Collaborates with Intel, and Raises $350M+

  • New SN50 chip delivers max speeds up to 5X faster than competitive chips [1]
  • Runs agentic AI at a 3X lower cost than GPUs — slashing inference costs and maximizing margins [2]
  • SoftBank Corp. will be the first customer to deploy SN50 within its next‑generation AI data centers in Japan
  • SambaNova, Intel plan multi-year strategic collaboration to deliver cloud-scale AI inference to unlock multi-billion-dollar market opportunity
  • $350 million in strategic Series E financing to expand manufacturing and cloud capacity; new investors include Vista Equity Partners, Cambium Capital, Intel Capital, Battery Ventures, and accounts advised by T. Rowe Price Associates, Inc.

SambaNova today introduced its SN50 AI chip, which delivers max speeds up to 5X faster than competitive chips. The company also announced a planned collaboration with Intel to deliver high‑performance, cost‑efficient AI inference solutions, and more than $350M in investment from new and existing investors.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20260224971025/en/

SambaNova SN50 chip delivers max speeds up to 5X faster than competitive chips and runs agentic AI at a 3X lower cost than GPUs — slashing inference costs and maximizing margins.

Positioned as the most efficient chip for agentic AI, the SN50 offers enterprises a 3X lower total cost of ownership — a powerful foundation for scaling fast inference and bringing autonomous AI agents into full production. The SN50 will ship to customers later this year.

To quickly scale and distribute SN50, SambaNova is collaborating with Intel, and has obtained $350 million in strategic Series E financing to expand manufacturing and cloud capacity.

“AI is no longer a contest to build the biggest model,” said Rodrigo Liang, co‑founder and CEO of SambaNova. “With the SN50 and our deep collaboration with Intel, the real race is about who can light up entire data centers with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud.”

“Customers are asking for more choice and more efficient ways to scale AI,” said Kevork Kechichian, EVP, General Manager, Data Center Group, Intel. “By combining Intel’s leadership in compute, networking, and memory with SambaNova’s full-stack AI systems and inference cloud platform, we are delivering a compelling option for organizations looking for GPU alternatives to deploy advanced AI at scale.”

The SN50 delivers five times more compute per accelerator and four times more network bandwidth than the previous generation. It links up to 256 accelerators over a multi‑terabyte‑per‑second interconnect, cutting time‑to‑first‑token and supporting larger batch sizes. The result: Enterprises can deploy bigger, longer‑context AI models with higher throughput and responsiveness — while keeping performance high and costs and latency under control.

“AI is moving from a software story to an infrastructure story,” said Landon Downs, co-founder and managing partner at Cambium Capital. “SN50 is engineered for the real-world latency and economic requirements that will determine who successfully deploys agentic AI at scale.”

The news follows SambaNova’s record bookings and revenue as it closed out 2025, reflecting accelerating demand for production-ready AI systems across financial services, telecommunications, energy, and sovereign deployments worldwide.

Built for Agentic Production

Built on SambaNova’s Reconfigurable Data Unit (RDU) architecture, SN50 delivers:

  • Instant AI Experiences — Ultra‑low latency delivers real‑time responsiveness for next‑gen enterprise apps like voice assistants.
  • Unmatched Scale and Concurrency — Power thousands of simultaneous AI sessions with consistent high performance.
  • Breakthrough Model Capacity — Three‑tier memory architecture unlocks 10T+ parameter models and 10M+ context lengths for deeper reasoning and richer outputs.
  • Maximum Efficiency at Scale — Higher hardware utilization lowers cost‑per‑token, driving greater performance and ROI.
  • Smarter Memory, Smarter Efficiency — Resident multi‑model memory and agentic caching optimize the three‑tier architecture, cutting infrastructure costs for enterprise‑scale AI deployments.

“The new SambaNova SN50 RDU changes the tokenomics of AI inference at scale. By delivering both high performance and high throughput with a chip that uses existing power and is air cooled, SambaNova is changing the game,” said Peter Rutten, Research Vice President, Performance Intensive Computing, at IDC.

SoftBank Deploys SN50 within its AI Data Centers in Japan

SoftBank Corp. will be the first customer to deploy SN50 within its next‑generation AI data centers in Japan. The deployment will power low‑latency inference services for sovereign and enterprise customers across Asia‑Pacific, supporting both open‑source and proprietary frontier models with aggressive latency and throughput requirements.

“With SN50, we are building an AI inference fabric for Japan that can serve our customers and partners with the speed, resiliency and sovereignty they expect from SoftBank,” said Hironobu Tamba, Vice President and Head of the Data Platform Strategy Division of the Technology Unit at SoftBank Corp. “By standardizing on SN50, we gain the ability to deliver world‑class AI services on our own terms — with the performance of the best GPU clusters, but with far better economics and control.”

The SN50 deployment deepens SambaNova’s existing relationship with SoftBank Corp., which already hosts SambaCloud to provide ultra‑fast inference for developers in the region. By anchoring its newest clusters on SN50, SoftBank positions SambaNova as the inference backbone for its sovereign AI initiatives and future large‑scale agentic services.

SambaNova and Intel Plan Multi‑Year Collaboration

SambaNova and Intel have entered into a planned multi‑year strategic collaboration to deliver high‑performance, cost‑efficient AI inference solutions for AI‑native companies, model providers, enterprises, and government organizations around the world. The collaboration will give customers a powerful alternative to GPU‑centric solutions, offering optimized performance for leading open‑source models with predictable throughput and total cost of ownership.

As part of the collaboration, Intel plans to make a strategic investment in SambaNova to accelerate the rollout of an Intel‑powered AI cloud. The collaboration is expected to span three key areas:

  • AI Cloud Expansion — Scaling SambaNova’s vertically integrated AI cloud, built on Intel Xeon‑based infrastructure and optimized for large language and multimodal models. The platform will deliver low‑latency, high‑throughput AI services, supported by reference architectures, deployment blueprints, and partnerships with system integrators and software vendors.
  • Integrated AI Infrastructure — Combining SambaNova’s systems with Intel’s CPUs, accelerators, and networking technologies to power scalable, production‑ready inference for reasoning, code generation, multimodal applications, and agentic workflows.
  • Go‑to‑Market Execution — Joint co‑selling and co‑marketing through Intel’s global enterprise, cloud, and partner channels to accelerate adoption across the AI ecosystem.

Together, SambaNova and Intel aim to shape the next generation of heterogeneous AI data centers — integrating Intel Xeon processors, Intel GPUs, Intel networking and storage, and SambaNova systems — to unlock a multi‑billion‑dollar inference market opportunity.

SambaNova Raises $350M+, Led by Vista and Cambium

The oversubscribed Series E round was led by Vista Equity Partners and Cambium Capital, with strong participation from Intel Capital.

New investors joining the round include: Assam Ventures, Battery Ventures, Gulf Energy, Mayfield Capital, Saudi First Data, Seligman Ventures, and accounts advised by T. Rowe Price Associates, Inc. Existing investors participating include: A&E, 8Square, Atlantic Bridge, BlackRock, GV, Nepenthe, Nuri Capital, and Redline Capital.

As agentic workloads expand, enterprises are discovering that infrastructure optimized for training struggles to meet production latency and cost requirements. “We’re proud to be investing in SambaNova at such a pivotal time in the company’s growth,” said Monti Saroya, Partner at Vista Equity Partners. “SN50 is engineered for agentic AI systems that orchestrate multiple models and process requests in near real time, more efficiently than traditional GPU-centric systems.”

Proceeds will be used to expand SN50 production, scale SambaCloud, and deepen enterprise software integrations.

About SambaNova

SambaNova is a leader in next‑generation AI infrastructure, providing a full-stack platform that powers the fastest, most efficient AI inference for enterprises, NeoClouds, AI labs and service providers, and sovereign AI initiatives worldwide. Founded in 2017 and headquartered in San Jose, Calif., SambaNova delivers chips, systems, and cloud services that enable customers to deploy state‑of‑the‑art models with superior performance, lower total cost of ownership, and rapid time to value.

For more information, visit sambanova.ai or follow SambaNova on X and LinkedIn.

[1] SemiAnalysis InferenceX: Llama 3.3 70B max speed on Nvidia B200 at FP8 with 1K input/1K output is 184 tokens per second per user; Llama 3.3 70B max speed on SN50 at FP8 with 1K input/1K output is 895 tokens per second per user.

[2] SemiAnalysis InferenceX: Llama 3.3 70B throughput per chip on Nvidia B200 at FP8 with 1K input/1K output. Across a range of configurations, total throughput per GPU versus total throughput per RDU moves from roughly 1X (at 33 tokens per second per user) to roughly a 25X advantage for RDUs (at 184 tokens per second per user). The 3X figure is the average throughput advantage for SN50 across Llama 70B, GPT-OSS 120B, and DeepSeek 671B, assuming a latency budget.
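The “5X” headline figure can be checked directly from the two measurements quoted in footnote [1]; a minimal sketch, using only those published numbers:

```python
# Speed comparison cited in footnote [1]: SemiAnalysis InferenceX numbers
# for Llama 3.3 70B at FP8 with 1K input / 1K output.
B200_TOKENS_PER_SEC_PER_USER = 184   # Nvidia B200 max speed per user
SN50_TOKENS_PER_SEC_PER_USER = 895   # SambaNova SN50 max speed per user

speedup = SN50_TOKENS_PER_SEC_PER_USER / B200_TOKENS_PER_SEC_PER_USER
print(f"SN50 vs. B200 max speed: {speedup:.1f}X")  # ~4.9X, rounded to 5X in the headline
```

The 3X cost figure in footnote [2] is an average across three models under a latency budget, so it cannot be reproduced from the figures quoted here.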
