The genie is out of the bottle: time for the world to agree on AI guardrails

  • Generative AI is quickly changing our world but we still have a small window to ensure its development is aligned with our shared interests and values

In recent months, the development of artificial intelligence has accelerated considerably, with generative AI systems such as ChatGPT and Midjourney rapidly transforming a wide range of professional activities and creative processes. The window of opportunity to guide the development of this powerful technology in ways that minimise the risks and maximise the benefits is closing fast.

AI-based capabilities exist along a continuum, with generative AI systems such as GPT-4 (the latest version of ChatGPT) falling within the most advanced category. Given that such systems hold the greatest promise and can lead to the most treacherous pitfalls, they merit particularly close scrutiny by public and private stakeholders.

Virtually all technological advances have had both positive and negative effects on society. On one hand, they have bolstered economic productivity and income growth, expanded access to information and communication technologies, extended human lifespans and improved overall well-being.

On the other hand, they have led to worker displacement, wage stagnation, greater inequality and an increasing concentration of resources among individuals and corporations.

AI is no different. Generative AI systems open up abundant opportunities in areas such as product design, content creation, drug discovery and healthcare, personalised education and energy optimisation. At the same time, they may prove highly disruptive, and even harmful, to our economies and societies.

The risks already posed by advanced AI, and those that are reasonably foreseeable, are considerable. Beyond the widespread reorientation of labour markets, large-language-model systems can increase the spread of disinformation and perpetuate harmful biases.

Generative AI also threatens to exacerbate economic inequality. Such systems may even pose existential risks to humankind.

For some, this is a reason to tap the brakes on AI research. Last month, more than 1,000 AI technologists and academics, from Elon Musk to Steve Wozniak, signed an open letter recommending that AI labs “immediately pause” the training of systems more powerful than GPT-4 for at least six months. During this pause, they argue, a set of shared safety protocols – “rigorously audited and overseen by independent outside experts” – should be devised and implemented.

The open letter, and the heated debate it has triggered, underscores the urgent need for stakeholders to engage in a wide-ranging, good-faith process aimed at agreeing on robust shared guidelines for developing and deploying advanced AI.

Such an effort must account for issues like automation and job displacement, the digital divide and the concentration of control over technological assets and resources, such as data and computing power. And a top priority must be to work to eliminate systemic biases in AI training, so that systems like ChatGPT do not end up reproducing or even exacerbating them.

Proposals for AI and digital-services governance are emerging, including in the United States and the European Union. Organisations like the World Economic Forum are also making contributions.

In 2021, the forum launched the Global Coalition for Digital Safety, which aims to unite stakeholders in tackling harmful content online and facilitate the exchange of best practices to regulate online safety. The forum subsequently created the Digital Trust Initiative, to ensure that advanced technologies like AI are developed with the public’s best interests in mind.

Now, the forum is calling for urgent public-private cooperation to address the challenges that have accompanied the emergence of generative AI and to build consensus on the next steps for developing and deploying the technology.

To facilitate progress, the forum, in partnership with AI Commons – a non-profit organisation supported by AI practitioners, academia and NGOs focused on the common good – will hold a global summit on generative AI in San Francisco on April 26-28. Stakeholders will discuss the technology’s impact on business, society and the planet, and work together to devise ways to mitigate negative effects on third parties and deliver safer, more sustainable and more equitable outcomes.

Generative AI will change the world, whether we like it or not. At this pivotal moment in the technology’s development, a cooperative approach is essential to enable us to do everything in our power to ensure the process is aligned with our shared interests and values.

Klaus Schwab is founder and executive chairman of the World Economic Forum

Cathy Li is head of AI, data and metaverse and a member of the executive committee at the World Economic Forum

Copyright: Project Syndicate
