Microsoft’s head of Responsible AI flags cybersecurity dangers and benefits of the new tech at HSBC summit
- Generative AI can be used to find new types of attacks, but companies are also using the tech to assess these threats, said Microsoft’s Sarah Bird
- The tech community is calling for more regulatory clarity, as generative AI can be applied to different industries with disparate regulations, she added

Amid a frenzy of AI development worldwide, international technology companies are trying to speed up research and development as they push to develop their own large models in what has become a highly competitive field. But Bird warned it is also crucial to think about “how to build with the technology responsibly and safely”.
“Like any new technology … [AI] has some limitations,” Bird said.
AI can generate harmful content and code, according to Bird, possibly making systems more susceptible to new types of attacks, such as prompt injection attacks and jailbreaking, which allow attackers to bypass software restrictions.
Bird noted, though, that AI can be both the cause of and solution to tackling these new cybersecurity challenges. Microsoft is currently using AI to help security analysts assess the number of threat signals in an attack to help the company respond more effectively, according to Bird.