Ensuring AI is safe and beneficial to all demands more than greater regulation
- Without legally binding governance standards, the foundation of modern AI systems will remain dangerously susceptible to abuse
- However, legal means alone will not be enough. A multipronged approach drawing from diverse fields, including ethics, is needed to bring accountability to AI systems
When a lawyer from a US law firm submitted an affidavit based on spurious legal research generated by ChatGPT, many joined the chorus of concerns about the output of generative AI tools. It’s easy to see this as primarily a legal issue. Mainstream media is quick to denounce those who failed to gauge the risks. Some might even take pleasure in seeing lawyers receive their comeuppance.
No one can truly claim to know how to wield AI-powered tools to the fullest. These tools will only continue to change as they take in increasing quantities of people’s data, reaching greater levels of intelligence and threat in the process.
Fully comprehending the impact of freewheeling AI tools might be difficult, but we can’t afford to just sit back and watch. As the development of AI accelerates, the associated risks will expand. Even those standing on the sidelines could be affected as AI widens its reach.
Leaving decisions to the discretion of individual actors can be the enemy of good governance. Leaving concerns about AI unaddressed makes it difficult to grasp the risks of a doom loop – when one negative event triggers another, which in turn causes another, and so on – particularly as we become increasingly reliant on such tools to dictate how we work and live.
Even so, society could get caught flat-footed if it does not take proactive measures to manage the risks now. It is crucial for us to recognise the importance of adapting our legal systems and governance structures.
This is where casuistry – a philosophical approach to case-based moral reasoning with roots in Aristotle’s practical ethics – becomes relevant. Casuistry holds that we cannot merely apply outdated, rigid rules to contemporary challenges; instead, each new case must be reasoned through on its own particulars.
In a similar vein, a multipronged approach drawing from diverse fields such as law, ethics and technology is necessary to bring accountability to AI systems. One possible approach entails cultivating partnerships among governments, private organisations and research institutions to devise all-encompassing regulatory frameworks for AI. This would require embedding ethical norms in any new AI development while encouraging ongoing transparency and responsibility within the AI sector.
Moreover, education and public awareness campaigns should be prioritised. By raising awareness about potential bad actors, society can engage in informed discussions and help develop sensible policies that balance innovation with security. Investing in research and development would also help strengthen the safety of AI products. This should include exploring novel techniques to make AI more reliable as well as building defences against adversarial attacks and biases.
Grappling with the challenges posed by generative AI tools requires a comprehensive, forward-thinking approach that transcends traditional legal boundaries. Only then can we hope to navigate the uncertain waters of AI advancement and harness its potential for the greater good.
Adam Au is the head of legal at a Hong Kong-based healthcare group