Advertisement

Microsoft’s head of Responsible AI flags cybersecurity dangers and benefits of the new tech at HSBC summit

  • Generative AI can be used to find new types of attacks, but companies are also using the tech to assess these threats, said Microsoft’s Sarah Bird
  • The tech community is calling for more regulatory clarity, as generative AI can be applied to different industries with disparate regulations, she added

Reading Time:2 minutes
Why you can trust SCMP
0
ChatGPT, from Microsoft-backed OpenAI, has ignited a generative AI arms race, and the technology is now having cybersecurity implications. Photo: AP
The use of generative artificial intelligence (AI), the powerful tool behind OpenAI’s ChatGPT, could push the capabilities of cyberattacks to new heights while also offering new defence mechanisms, but most organisations are still learning to harness the tool, according to one of Microsoft’s leading AI experts.
“AI is an incredibly powerful technology, and so it’s unfortunately a very exciting tool, for example, in cybersecurity for threat actors,” Sarah Bird, Microsoft’s chief product officer of Responsible AI, said on Wednesday during a panel discussion at the Global Investment Summit organised by HSBC in Hong Kong.

Amid a frenzy of AI development worldwide, international technology companies are trying to speed up research and development as they push to develop their own large models in what has become a highly competitive field. But Bird warned it is also crucial to think about “how to build with the technology responsibly and safely”.

“Like any new technology … [AI] has some limitations,” Bird said.

AI can generate harmful content and code, according to Bird, possibly making systems more susceptible to new types of attacks, such as prompt injection attacks and jailbreaking, which allow attackers to bypass software restrictions.

Bird noted, though, that AI can be both the cause of and solution to tackling these new cybersecurity challenges. Microsoft is currently using AI to help security analysts assess the number of threat signals in an attack to help the company respond more effectively, according to Bird.

Advertisement