As nations battle for AI supremacy, what about the cost of creative destruction?
- Most governments seem more interested in global AI dominance than managing the cost of disruptive innovation for society
- Wise states would make AI’s potential creative destruction a strategic issue, with funding and continuous attention from the highest levels of government
Unsurprisingly, investment in AI research and development is skyrocketing across the globe – not only from companies but also from governments and public entities.
We are also seeing more doomsday thinkers who argue that AI could be (even will be) the end of humanity. Apocalyptic visions of a future dominated by AI – one might recall scenes from The Matrix or Terminator films – are seemingly pushed upon society, instilling fear.
While creative destruction has a long history in economic theory, it is most often linked to the Austrian economist Joseph Schumpeter. Briefly, creative destruction implies that disruptive innovations are drivers of sustained economic growth, even if they destroy existing economic actors like specific industries, companies and jobs.
Think about how Apple’s iPhone conquered the world, at the cost of traditional mobile phone producers like Nokia, or how cars replaced horses and carriages.
Though not necessarily Schumpeter’s perspective, one might argue that creative destruction is a natural and inherently good phenomenon in a capitalist system – disruptive innovations come at a cost but this is outweighed by the long-term benefits.
Keynes famously retorted that "in the long run we are all dead". Although he said this in response to classical economists’ views about market mechanisms and government intervention, his words are very applicable to AI and the creative destruction it could bring.
The aim here is not to instil fear, but to push governments to action. We must acknowledge not only what we could gain from AI but also what we could lose; framing the losses should spark some loss aversion and prod policymakers into action. There is no stopping the AI revolution, and it can bring massive benefits to society.
But these benefits will come at the cost of creative destruction, and governments need to better prepare to manage that cost. Yet, most seem more interested in helping their country or region become the dominant AI player – as opposed to carefully thinking about what AI could mean for their institutions and society at large.
Strategically planning for AI’s creative destruction implies a number of necessary actions and focus points, especially at a national or regional level of government. These include:
First, making AI’s potential creative destruction a strategic issue, which means it receives continuous attention from the highest levels of government. The goal should not only be funding and becoming the dominant AI player, but becoming the player that best manages the institutional and societal transformation (and creative destruction) AI could bring.
Second, appointing an “AI strategic foresight committee”, which includes a broad range of experts particularly adept at understanding the institutional and societal impact of AI – so, not just natural scientists, engineers and computer experts, but also, for instance, legal experts, economists, psychologists, sociologists and philosophers.
This committee could develop different scenarios about the creative destruction that AI could bring about, and which strategies could help cope with the different scenarios.
Third, appointing an “AI strategic planning committee”, which should include actors from the major institutional and societal players in a nation or region. This task force needs to develop (and monitor) implementable strategies based on the recommendations and insights of the above-mentioned committee.
It should include high-level political leaders, representatives of government agencies and lower-level governments, as well as representatives of different sectors like justice, education, healthcare and manufacturing.
The above recommendations should not be considered one-off activities but should be embedded in continuous government activity. Indeed, it would be foolish to think that AI’s creative destruction is a problem we already fully understand.
New ideas and unforeseen challenges are likely to emerge as specific policies are implemented. Governments must therefore remain agile when tackling AI issues: these committees should have clear mandates and be governed by openness to learning and bottom-up initiatives, short feedback loops, and limited red tape and procedural burdens.
Of course, these recommendations are nothing more than starting points to ensure increased policy awareness and attention to AI’s potential for creative destruction. There are many other recommendations out there for policymakers to look at.
The benefits of AI are becoming clear, especially in the minds of governments across the globe, but governments also need to be clear about the costs they are willing to accept.
Bert George is an applied economist, strategic planning expert and associate professor at the Department of Public and International Affairs, City University of Hong Kong