Intel, Cadence Expand Partnership to Enable Best-in-Class SoC Design on Intel’s Advanced Processes
February 21, 2024 | Cadence Design Systems, Inc.
Intel Foundry Services (IFS) and Cadence Design Systems, Inc. (Nasdaq: CDNS) announced they have expanded their partnership, entering a multiyear strategic agreement to jointly develop a portfolio of key customized IP and optimized design flows and techniques for Intel 18A technology, which features RibbonFET gate-all-around transistors and PowerVia backside power delivery. Joint customers will be able to accelerate their SoC project schedules on Intel 18A and subsequent process nodes while optimizing for performance, power, area, bandwidth and latency in demanding AI, HPC and premium mobile applications.
“We furthered our partnership with Intel Foundry Services through a significant strategic multiyear agreement to provide design software and leading IP at multiple Intel advanced nodes, thereby advancing Intel’s IDM 2.0 strategy and accelerating mutual customer success,” said Anirudh Devgan, president and chief executive officer at Cadence.
“We’re very excited to expand our partnership with Cadence to grow the IP ecosystem for IFS and provide choice for customers,” said Stuart Pann, Intel senior vice president and general manager of IFS. “We will leverage Cadence’s world-class portfolio of leading IP and advanced design solutions to enable our customers to deliver high-volume, high-performance and power-efficient SoCs on Intel’s leading-edge process technologies.”
Fast-growing market segments, such as AI/ML, HPC and premium mobile computing, require the latest IP standards to take advantage of advanced packaging and silicon process technologies. Cadence’s leading-edge implementations of trailblazing standards for these key segments, such as advanced memory protocols, PCI Express, UCIe and others, enable joint customers to achieve scalable, high-performance designs that accelerate their time to market on IFS’ most advanced silicon technologies and 3D-IC packaging capabilities.
Building a world-class foundry business is key to Intel’s IDM 2.0 strategy, and this agreement strengthens IFS’ offerings by making an additional portfolio of essential design tools, flows and interface IP available for foundry customers. It builds on Intel’s engagement with other industry-leading IP providers as it continues to grow the IP ecosystem for IFS customers.