Large language models, or LLMs, are behind the widespread adoption of generative AI services, which take a user's input and generate probable outputs based on deep learning models. Fine-tuning an existing model and using retrieval-augmented generation, or RAG, can yield context-aware results at a reduced resource cost.
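The core idea of RAG can be sketched in a few lines: retrieve documents relevant to the user's query and prepend them to the prompt so the model generates a grounded, context-aware answer. The snippet below is a minimal illustration, not NXP's implementation; the toy corpus and the word-overlap retriever are assumptions standing in for a real embedding-based search and an on-device LLM.

```python
# Minimal RAG sketch: keyword-overlap retrieval is a stand-in for a real
# embedding search; the final prompt would be passed to an LLM for generation.

DOCS = [
    "The eIQ Neutron NPU accelerates neural network inference on i.MX processors.",
    "Retrieval-augmented generation grounds model outputs in retrieved documents.",
    "Fine-tuning adapts a pretrained model to a domain-specific dataset.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (toy relevance score)."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend the retrieved context so the model can answer from it."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What does the Neutron NPU do?", DOCS))
```

Because the model sees the retrieved context at inference time, the knowledge base can be updated without retraining, which is what keeps resource costs down compared to fine-tuning alone.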
NXP’s eIQ GenAI Flow provides the tools to bring generative AI to the edge, leveraging the eIQ Neutron NPU in i.MX applications processors. eIQ GenAI Flow uses state-of-the-art open-source AI models optimized and maintained by NXP.
Ready to learn more? Visit nxp.com/LLM