It starts like any other session: the video game loads, the action begins. But this time, you don’t put on a headset. You sit back, and the audio immerses you. Not through a surround system. Not through headphones. But through adaptive beams of sound, directed precisely to your ears. As you move, the sound adapts, keeping you in the listening sweet spot. It’s like wearing invisible headphones — all the precision of a headset with no ear fatigue, wires or battery charging. You're free to talk, react and play naturally, with nothing on your ears and lifelike spatial sound filling the space.
This was the experience Audioscenic set out to create. To make it real, they needed edge-ready computing, audio-focused performance and tightly integrated system support. That’s where NXP came in.
Built on our i.MX 8M applications processors and brought to life through close collaboration between Audioscenic’s engineers and NXP’s global technical teams, the result is adaptive beamforming sound technology using listener-position sensing. This reference design can be applied to monitors and soundbars to deliver immersive 3D audio and crystal-clear voice chat.
Gamers have long faced a trade-off: speakers for immersion but poor voice comms, or headsets for clear chat and private listening but reduced spatial sound and ear fatigue.
Audioscenic envisioned a new approach: a 3D spatial audio system that steers sound to the user’s ears in real time while providing echo-cancelled voice capture. To bring the initial proof of concept to life, Audioscenic turned to the i.MX 8M Mini and i.MX 8M Nano applications processors, which offer low-latency computing, high-resolution audio and robust voice processing for seamless performance.
- i.MX 8M Mini: NXP’s first embedded multicore applications processor built using advanced 14LPC FinFET process technology, providing more speed and improved power efficiency
- i.MX 8M Nano: cost-effective integration and performance for power-efficient devices requiring graphics, vision, voice control, intelligent sensing and general-purpose processing
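The echo-cancelled voice capture mentioned above is conventionally built on an adaptive filter that models the acoustic path from the loudspeakers to the microphone and subtracts the estimated echo. A minimal normalized-LMS (NLMS) sketch in plain Python illustrates the general technique; it is not Audioscenic's or NXP's implementation, and the signal lengths and filter settings are arbitrary.

```python
import random

def nlms_echo_canceller(far_end, mic, taps=32, mu=0.5, eps=1e-6):
    """Normalized LMS adaptive filter: estimates the echo path from the
    far-end (loudspeaker) signal and subtracts that estimate from the
    microphone signal, leaving the local voice as the residual."""
    w = [0.0] * taps            # adaptive filter weights
    buf = [0.0] * taps          # recent far-end samples, newest first
    out = []
    for x, d in zip(far_end, mic):
        buf = [x] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))   # echo estimate
        e = d - y                                    # residual (cleaned) sample
        norm = eps + sum(xi * xi for xi in buf)      # input energy for step scaling
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, buf)]
        out.append(e)
    return out

# Simulated scenario: the mic picks up a delayed, attenuated copy of the
# loudspeaker signal (pure echo, no local talker).
random.seed(1)
far = [random.uniform(-1.0, 1.0) for _ in range(2000)]
mic = [0.6 * far[n - 3] if n >= 3 else 0.0 for n in range(2000)]
residual = nlms_echo_canceller(far, mic)
```

After the filter converges, the residual shrinks toward zero, which is exactly the property that lets a local voice pass through while the game audio is removed.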
But technology was just one part of the equation. Our teams partnered closely with Audioscenic throughout development — optimizing performance, adapting software frameworks and delivering integration-ready solutions. NXP’s Linux BSP compatibility across i.MX 8M devices allowed Audioscenic to quickly scale and optimize solutions. To accelerate development, NXP released a customized Immersiv3D Audio Framework Software tailored to the system architecture. This modular audio platform simplifies the integration of spatial audio algorithms and helps align hardware and software elements, significantly speeding up development time.
Bringing Audioscenic’s vision to life meant solving a uniquely complex challenge: tracking a listener’s head in real time and keeping directional audio precisely aligned to each ear – without missing a beat. This level of performance demands coordinated, low-latency processing of multiple algorithms in parallel.
Together, Audioscenic and NXP tackled the challenge through tight integration across the full solution stack. Built on the i.MX 8M Mini and Nano processors and enabled by NXP’s Immersiv3D audio framework, the system responds instantly as users turn, shift or lean. The head-tracking algorithm and post-processing stay in perfect sync, adjusting to maintain a seamless, immersive experience.
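The core idea of steering sound toward a tracked listener can be illustrated with delay-and-sum beam steering: each driver's output is delayed so that the wavefronts align at the listener's position, and the delays are recomputed whenever the head tracker reports movement. The sketch below uses a hypothetical four-driver soundbar geometry and 2D coordinates for illustration only; it is not Audioscenic's algorithm.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at room temperature
SAMPLE_RATE = 48_000    # Hz

def steering_delays(speaker_xs, listener):
    """Per-driver delays (in samples) that align each driver's wavefront
    at the listener position. Drivers lie on the x-axis; the listener is
    an (x, y) point in metres."""
    lx, ly = listener
    dists = [math.hypot(sx - lx, ly) for sx in speaker_xs]
    far = max(dists)
    # Delay the closer drivers so every wavefront arrives together.
    return [round((far - d) / SPEED_OF_SOUND * SAMPLE_RATE) for d in dists]

# Hypothetical soundbar: four drivers spaced 5 cm apart, listener 0.5 m away.
speakers = [-0.075, -0.025, 0.025, 0.075]

# Recompute delays each time the head tracker reports a new position.
centre = steering_delays(speakers, (0.0, 0.5))   # listener centred
leaning = steering_delays(speakers, (0.2, 0.5))  # listener leans right
```

Because only the delay table changes per tracker update, the steering step itself is cheap; the real-time cost lies in the filtering and rendering stages that consume these delays.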
The performance and flexibility of the i.MX platform make all of this possible.
Everything runs entirely on-device — without any cloud processing — delivering fast response times and ultra-low latency for a truly unprecedented spatial audio experience, no matter where you move.
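To give a rough sense of scale for the latency claim, the buffering delay of a block-based audio pipeline is fixed by block size and sample rate. The arithmetic below is illustrative only; the block sizes are hypothetical and no actual figures for the Audioscenic system are published here.

```python
SAMPLE_RATE = 48_000  # Hz, a common rate for high-resolution audio pipelines

def block_latency_ms(block_size, blocks_in_flight=2):
    """Buffering latency of a block-based audio pipeline: each block in
    flight adds block_size / SAMPLE_RATE seconds of delay."""
    return blocks_in_flight * block_size / SAMPLE_RATE * 1000.0

small = block_latency_ms(128)    # double-buffered 128-sample blocks: ~5.3 ms
large = block_latency_ms(1024)   # larger blocks trade latency for throughput
```

Keeping every stage on-device avoids adding network round trips on top of this buffering floor, which is why edge processing matters for head-tracked audio.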
With NXP’s mature Linux BSP and integration-ready software, Audioscenic reduced development time while retaining the flexibility to scale solutions to the required number of audio channels.
Key highlights:
- Low latency for real-time head tracking and spatial audio alignment
- On-device processing: no cloud dependency, no delays
- Head-tracking support: enables dynamic, personalized 3D audio
What started as a more immersive way to game is now paving the way for a broader audio transformation. These intelligent edge platforms have the potential not only to elevate gaming but also to unlock new possibilities in how we create, mix and experience sound across industries.
Delivering these experiences takes more than just powerful technology. It takes close collaboration, as well as aligning computing, connectivity and audio intelligence to meet the specific demands of real-world environments. This is the kind of innovation Audioscenic and NXP continue to drive together. By combining intelligent edge platforms with fully integrated systems, we’re helping shape the future of spatial audio one step at a time.
Audioscenic is an audio technology innovator based in Southampton, UK, with a growing global customer base. Audioscenic develops intensively researched audio technologies for home audio, gaming, automotive and public-space applications that excite product makers and enchant listeners.
Director IoT Segment Marketing for Home Entertainment and Hybrid Work, NXP Semiconductors
John brings 30 years of experience in the semiconductor and embedded software industries. At NXP, he supports the development of advanced solutions in audio, gaming, conferencing and computing, helping customers bring innovative products to market. He holds a Bachelor of Science in Electrical Engineering from UC San Diego and an MBA from CSU San Marcos.