Can generative AI change security?

By IT Pro

Artificial intelligence (AI) has moved from being a sci-fi staple to a tool in widespread use. Headlines today are dominated by news of generative AI services such as ChatGPT and Bard, the latest and greatest large language models from OpenAI and Google. These promise users incredibly capable text generation and interpretation, built on vast training datasets, using nothing more complicated than a browser and a keyboard.

As powerful tools for text generation, large language models also carry the risk of empowering threat actors. With more people accessing these systems than ever before, security firms are assessing how chatbots could bolster malicious activity.

In this episode, Jane and Rory speak to Hanah Darley, head of threat research at cybersecurity firm Darktrace, about the potential misuse of generative AI models and the role the technology can play as part of a wider AI defence arsenal at the enterprise level.

For more information, read the show notes here.