South Korean semiconductor startup FuriosaAI has officially signed a supply contract with LG Electronics to provide its proprietary AI inference accelerator RNGD (Renegade) for LG’s large language model EXAONE 3.5, the company announced on July 22.
The deal follows a rigorous 7-month validation process conducted by LG AI Research, assessing RNGD's performance and efficiency in enterprise-scale AI workloads. With this agreement, FuriosaAI becomes a direct challenger to NVIDIA, Groq, and SambaNova Systems in the LLM acceleration space.
LG Partnership Seals Furiosa’s First Major Commercial Win
The integration of RNGD into LG’s EXAONE LLM marks FuriosaAI’s first large-scale commercial deployment. The company says RNGD delivers 2.25 times higher inference performance per watt than conventional GPUs, addressing key AI-infrastructure pain points such as excessive power draw, heat generation, and high total cost of ownership (TCO).
The deployment will also extend to LG’s enterprise AI assistant ChatEXAONE, with plans to expand services beyond internal use to external enterprise clients.
Rejecting Meta, Focusing on Innovation
Founded in 2017 and led by CEO June Paik, a former engineer at Samsung Electronics and AMD, FuriosaAI previously drew headlines after rejecting an $800 million acquisition offer from Meta earlier this year. The company instead chose to pursue independent growth, banking on its own silicon innovation.
With LG Electronics on board, FuriosaAI is now positioning RNGD as a more efficient and scalable alternative to NVIDIA GPUs for AI inference in data centers. Industry insiders view the LG partnership as a major validation of the startup’s technology maturity and strategic potential.
Eyeing Global Expansion
Following the LG deal, FuriosaAI plans to expand into North America, Southeast Asia, and the Middle East, with new supply agreements anticipated in the second half of 2025.
The RNGD-powered EXAONE servers are expected to support various industries—from electronics to finance—by significantly reducing power usage while maintaining high throughput in LLM operations.