Ethics must be at the heart of Artificial Intelligence technology, says Lords report
Personal data protection should be a priority as AI develops, the study says.
Artificial Intelligence (AI) must never be given autonomous power to hurt, destroy or deceive humans, a parliamentary report has said.
Ethics need to be put at the centre of the development of the emerging technology, according to the House of Lords Artificial Intelligence Committee.
With Britain poised to become a world leader in the controversial technological field, international safeguards need to be put in place, the study said.
Peers state that AI needs to be developed for the common good and that the “autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence”.
The report also stressed that AI should not be used to diminish the data rights of individuals, and that people “should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence”.
The report stated: “Many jobs will be enhanced by AI, many will disappear and many new, as yet unknown jobs, will be created.
“Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.”
Committee chairman Lord Clement-Jones said: “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.
“The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.
“AI is not without its risks and the adoption of the principles proposed by the committee will help to mitigate these. An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.”
The report said transparency in the technology was needed, and that the AI Council should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.
The study said: “It is not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users, and clarity in this area is needed.
“The committee recommend that the Law Commission investigate this issue.”