Inside Out | China-made AI sexbots: the next national security risk for US, EU?
- The rapid emergence of AI has sparked concerns about its attendant dangers, and few sectors exemplify those dangers more than the production of sex robots
When the European Union passed its Artificial Intelligence Act in March, its aim was to provide the first comprehensive legal framework intended to “foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety and ethical principles and by addressing risks of very powerful and impactful AI models”.
The act cuts to the heart of the challenges faced in managing the moral and ethical issues embedded in the development of AI. At the top of its four-tier risk pyramid sit those systems that pose an “unacceptable risk” and should be expressly prohibited.
This includes the potential for personal harm arising from manipulative, deceptive or subliminal techniques to influence someone to make a decision they would otherwise not have made; exploitation of vulnerabilities because of age, disability or specific socioeconomic status; the use of data to categorise individuals; and harmful development of facial recognition databases.
An article in the Post last week brought such unacceptable risks sharply into focus and identified the kind of development that, if fully achieved, would reflect AI’s assumption of fully human powers. Evan Lee, CEO of Shenzhen’s Starpery Technology, says his company is “developing a next-generation sex doll that can interact vocally and physically with users”.
While Lee acknowledged that technological challenges remain in achieving realistic human interaction, his company’s aim is not just humanoid robots that provide sexual services but also robots capable of doing household chores and caring for people with disabilities or the elderly. The news prompted the Post’s cartoonist to ask whether such robots would also be able to cook.
Even the most basic thought experiment makes it clear that Starpery’s ambitions – and those of its competitors – cannot be fully achieved without such robots acquiring fundamentally human capabilities that will run into the EU’s “unacceptable risk” category.