Participants of the Saxony state government’s congress in Dresden, Germany, on artificial intelligence on May 25 stand in front of an AI avatar that combines human interactions with current knowledge. Photo: DPA
Opinion
Macroscope
by Adam Au

Ensuring AI is safe and beneficial to all demands more than greater regulation

  • Without legally binding governance standards, the foundation of modern AI systems will remain dangerously susceptible to abuse
  • However, legal means alone will not be enough. A multipronged approach drawing from diverse fields, including ethics, is needed to bring accountability to AI systems
According to Brandolini’s law, debunking misinformation demands more effort than creating it. Nowhere does this internet axiom find a more apt manifestation than in the era of artificial intelligence.

When a lawyer from a US law firm submitted an affidavit based on spurious legal research generated by ChatGPT, many joined the chorus of concern about the output of generative AI tools. It’s easy to see this as primarily a legal issue. Mainstream media was quick to denounce those who failed to gauge the risks. Some might even have taken pleasure in seeing lawyers receive their comeuppance.

It was just one more example of the flaws of ChatGPT and its peers, adding to the arguments of those who are sceptical of such tools. Yet it would be a mistake to view this turn of events solely through these narrow lenses, as the implications of generative AI extend far beyond the legal profession or people’s anxieties.

No one can truly claim to know how to wield AI-powered tools to the fullest. These tools will only continue to change as they take in increasing quantities of people’s data, reaching greater levels of intelligence and threat in the process.

In one instance, an AI-powered chatbot left its programmer in shock by demonstrating what he considered a level of humanlike sentience. These advancements leave us with much to consider.

Fully comprehending the impact of freewheeling AI tools might be difficult, but we can’t afford to just sit back and watch. As the development of AI accelerates, the associated risks will expand. Even those standing on the sidelines could be affected as AI widens its reach.

Counting on developers’ self-governance and good faith could be a recipe for disaster. Without legally binding governance standards endorsed by policymakers, AI specialists, legal experts and more, the foundation of modern AI systems will remain dangerously susceptible to abuse.

Decisions left to the discretion of individual actors can sometimes be the enemy of good governance. Leaving concerns about AI unaddressed makes it difficult to grasp the risks of a doom loop – when one negative event triggers another, which in turn causes another, and so on – particularly as we become increasingly reliant on such tools to dictate how we work and live.

Any attempt to freeze AI development in place won’t solve this problem. We owe AI inventions a debt of gratitude for drastically improving our lives over the past two decades.

How can we grapple with these dynamics? Should the government pass laws similar to the European Union’s Artificial Intelligence Act? Without knowing the limits of generative AI tools, how do we begin to draw boundaries around them?

Even so, society could get caught flat-footed if it does not take proactive measures to manage the risks now. It is crucial for us to recognise the importance of adapting our legal systems and governance structures.

However, the problem cannot be solved by legal means alone. By the time we cut through the bureaucratic red tape, AI will have ascended to another level of sophistication. The real challenge lies in addressing the inherent fragility of these systems and decoupling the misuse of such tools from their potential benefits.

This is where casuistry – a case-based approach to ethics with roots in Aristotle’s practical philosophy – becomes relevant. Casuistry holds that we cannot resolve novel dilemmas by mechanically applying outdated rules; each new case must be reasoned through on its own terms.

One such challenge is climate change. Solving this problem requires a departure from viewing it as something that can be resolved by breakthrough inventions alone. Instead, tackling climate change calls for pioneering measures such as cross-country coordination and investment in renewable energy.

In a similar vein, a multipronged approach drawing from diverse fields such as law, ethics and technology is necessary to bring accountability to AI systems. One possible approach entails cultivating partnerships among governments, private organisations and research institutions to devise all-encompassing regulatory frameworks for AI. This would require embedding ethical norms in any new AI development while encouraging ongoing transparency and responsibility within the AI sector.


Moreover, education and public awareness campaigns should be prioritised. By raising awareness about potential bad actors, society can engage in informed discussions and help develop sensible policies that balance innovation with security. Investing in research and development would also help strengthen the safety of AI products. This should include exploring novel techniques to make AI more reliable as well as building defences against adversarial attacks and biases.

Digital misinformation is a well-documented problem, but an overabundance of information could nullify any effort to counter it. Merely having data is pointless if it cannot be turned into usable results.

Grappling with the challenges posed by generative AI tools requires a comprehensive, forward-thinking approach that transcends traditional legal boundaries. Only then can we hope to navigate the uncertain waters of AI advancement and harness its potential for the greater good.

Adam Au is the head of legal at a Hong Kong-based healthcare group
