It’s hard to deny that artificial intelligence has come very far, very fast. It’s
working its way into our lives in ways that seem so natural that we find
ourselves taking it for granted almost immediately. It’s helping us get around
town, keeping us fit and inventing new ways to help around the house.
The ever-increasing availability of affordable sensors and computing elements
will only accelerate this trend. Imagining what’s next, once the
domain of science fiction writers, is now our day-to-day reality. In fact,
Deloitte estimated that more than 750 million AI chips would be sold in
2020, and expects that number to exceed 1.5 billion by 2024.
As with any useful advancement, AI will be a little bit of a mixed bag. Most
AI applications will be designed for the greater good, but there will always
be outlying cases. The increase in autonomous applications that carry with
them the potential to put humans in danger drives home the need for a
universal code of conduct for AI development. A few years ago this would have
sounded preposterous, but things are changing fast.
The industry, together with the governments of several of the world’s leading
nations, is already developing policies that would govern AI, even going so
far as discussing a “code of conduct” for AI that focuses on safety and
privacy. But how does one make AI ethical? First, you have to define what is
ethical, and that definition isn’t as cut and dried as we might hope. Without even
considering the vast cultural and societal differences that could impact any
such code, in practical terms, AI devices require complicated frameworks in
order to carry out their decision-making processes.
The integrity of an AI system is just as important as its ethical programming,
because once a set of underlying principles is decided on, we need to be sure
they’re not compromised. Machine learning can be utilized to monitor data
streams to detect anomalies, but it can also be used by hackers to further
enhance the effectiveness of their cyberattacks. AI systems also have to
process input data without compromising privacy. Encrypting all communications
will maintain the confidentiality of data, and edge AI systems are starting to use
some of the most advanced cryptography techniques available.
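To make the monitoring idea above concrete, here is a minimal sketch of anomaly detection on a data stream: a rolling z-score detector that flags readings deviating sharply from the recent window. This is an illustrative toy, not NXP’s method; the class name, window size and threshold are assumptions.

```python
from collections import deque
from math import sqrt

class StreamAnomalyDetector:
    """Toy rolling z-score detector for a sensor stream (illustrative only).

    Flags a reading as anomalous when it lies more than `threshold`
    standard deviations from the mean of the recent window.
    """

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)  # recent readings only
        self.threshold = threshold

    def update(self, value):
        """Ingest one reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.window) >= 10:  # wait for a minimal baseline
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = sqrt(var)
            if std > 0:
                anomalous = abs(value - mean) / std > self.threshold
        self.window.append(value)
        return anomalous

# A slightly noisy baseline around 10.0, then an injected spike.
detector = StreamAnomalyDetector()
readings = [10.0, 10.1, 9.9, 10.2, 9.8] * 8 + [55.0]
flags = [detector.update(r) for r in readings]  # only the spike is flagged
```

In a real deployment the same structure applies, but the statistic would typically be a learned model rather than a z-score, which is exactly where the dual-use concern in the text comes in: the better the model, the more useful it is to both defenders and attackers.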
Perhaps the biggest challenge is that the AI ecosystem is made up of
contributions from various creators. Accountability and levels of trust
between these contributors are not uniformly shared, and any breach could have
far-reaching implications if systemic vulnerabilities are exploited.
Therefore, it’s the responsibility of the entire industry to work towards
interoperable and assessable security.
Agreement on a universal code of ethics will be difficult, and basic
provisions around safety and security still need to be resolved. In the meantime,
certification of silicon, connectivity and transactions should be a focus for
stakeholders as we collaborate to build trustworthy AI systems of the future.
At NXP, we believe it is important to uphold ethical AI principles, including
non-maleficence, human autonomy, explicability, continued attention and
vigilance, and privacy and security by design. It is our
responsibility to encourage our customers, partners and stakeholders to
support us in this endeavor.
You can read more about it in our whitepaper,
The Morals of Algorithms – A contribution to the ethics of AI systems.