Macroscope | Ensuring AI is safe and beneficial to all demands more than greater regulation
- Without legally binding governance standards, the foundation of modern AI systems will remain dangerously susceptible to abuse
- However, legal means alone will not be enough. A multipronged approach drawing from diverse fields, including ethics, is needed to bring accountability to AI systems
When a lawyer from a US law firm submitted an affidavit based on spurious legal research generated by ChatGPT, many joined the chorus of concern about the output of generative AI tools. It’s easy to see this as primarily a legal issue. Mainstream media was quick to denounce those who failed to gauge the risks. Some might even take pleasure in seeing lawyers receive their comeuppance.
No one can truly claim to know how to wield AI-powered tools to their fullest. These tools will keep changing as they ingest ever greater quantities of people’s data, growing more capable, and more threatening, in the process.
Fully comprehending the impact of freewheeling AI tools may be difficult, but we cannot afford to simply sit back and watch. As the development of AI accelerates, so will the associated risks. Even those standing on the sidelines could be affected as AI widens its reach.