Collecting data and training ML models are substantial investments, and data ownership is a competitive differentiator. A copyist, however, can typically duplicate a model with little effort: besides extracting a model from a device via a memory dump, published attacks show that access to the input-output behavior of the ML algorithm alone already suffices to clone it. Furthermore, copyright and patent law do not apply directly to most ML model IP.
To help developers embed a piece of copyright-protected information in an ML model, both to strengthen the copyright claim and to provide a means of proving unauthorized copying, NXP has developed a method for embedding a watermark in a model. This feature is available within the eIQ Toolkit ML Software Enablement environment.
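NXP's actual embedding technique is not detailed here. As a rough illustration of the general idea, the sketch below shows one widely published watermarking approach, trigger-set (backdoor) watermarking: the owner trains the model on a few secret, out-of-distribution inputs with deliberately chosen labels, and later proves ownership by querying those triggers. The toy 1-nearest-neighbour "model", the data, and all names are hypothetical and stand in for a real trained network.

```python
import random

def train_1nn(samples):
    # A 1-nearest-neighbour "model": memorizes its training set.
    # Stands in for a real network that would be trained on this data.
    def predict(x):
        nearest = min(samples,
                      key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))
        return nearest[1]
    return predict

# Normal task: classify 2-D points by the sign of the first coordinate.
random.seed(0)
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
task_data = [(p, int(p[0] > 0)) for p in points]

# Secret trigger set: out-of-distribution inputs with deliberately
# "unnatural" labels. Only the model's owner knows these pairs.
trigger_set = [((9.0, 9.0), 0), ((-9.0, -9.0), 1)]

# Embedding: the triggers are simply included in the training data.
model = train_1nn(task_data + trigger_set)

# Normal behavior on the task is preserved...
print(model((0.9, 0.9)))        # a point deep in the positive region

# ...while the secret triggers act as the watermark: if a suspect model
# reproduces these improbable input-label pairs, that is evidence of copying.
watermark_ok = all(model(x) == y for x, y in trigger_set)
print(watermark_ok)
```

In practice the verification step only needs query access to the suspect model, which matches the threat model above: the same input-output interface that enables cloning also enables the ownership check.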