Artificial Intelligence and Health Security: Managing the Risks

Series
Evidence-Based Health Care
Professor Karl Roberts, University of New England, NSW, Australia, gives a talk on generative AI and large language models as applied to healthcare.
Professor Karl Roberts is Head of the School of Health and Professor of Health and Wellbeing at the University of New England, NSW, Australia. He has over thirty years' experience working in academia at institutions in Australia, the UK and the USA. He has also acted as an advisor to various international bodies and governments on issues related to wellbeing, violence prevention and professional practice. Notably, this has included working with policing agencies to develop policy and practice on suicide, stalking and homicide prevention; with Interpol, developing guidance for organisational responses to deliberate events such as biological weapon use; with the UK government's SAGE advisory group throughout the Covid-19 pandemic, focusing on security planning; with the European Union, advising on biological terrorism and extremist use of AI; and with the World Health Organisation, where he worked in a unit developing policy and practice related to deliberate biological threat events.

There has been substantial recent interest in the benefits and risks of artificial intelligence (AI). Views range from extolling its virtues as a harmless aid to decision making, a tool in research, and a means of improving economic productivity, to claims that unchecked AI is a significant threat to human wellbeing and could pose an existential threat to humanity. One area of significant recent advancement in AI has been the field of Large Language Models (LLMs). Exemplified by tools such as ChatGPT and DALL-E, these so-called generative AI models allow individuals to generate new outputs by interacting with the models using simple natural-language inputs. Various versions of LLMs have been applied to healthcare and have been shown to be useful in areas as diverse as case formulation, diagnosis, novel drug discovery, and policy development. However, as with any new technology, there is a potential 'dark side', and it is possible to use these tools for nefarious purposes. This talk gives a brief introduction to generative AI and large language models as applied to healthcare. It then discusses the potential for misuse of these models, highlighting how they may be misused and how significant a threat they could pose to health security. Finally, it considers strategies for managing the risks, set against the possible benefits of generative AI. The talk is based on work carried out by the author and colleagues at the World Health Organisation and the Royal United Services Institute.

More in this series

Evidence-Based Health Care

Evidence-based dentistry: The building of the Dental Fact Box repository – OHA!

An introduction to OHA!, a tool currently in development that aims to help dentists access the most reliable evidence on the effectiveness of common dental treatments.
Evidence-Based Health Care

How stories shaped every aspect of our mixed methods study

Kirsten Prest discusses the 'Encompass' study on care for disabilities in Uganda and its wider application in the NHS, where narrative-driven mixed methods research shaped every phase from grant application to implementation.

Episode Information

Series
Evidence-Based Health Care
People
Karl Roberts
Keywords
EBM
Evidence-Based Medicine
Primary Care
Health Sciences
EBHC
Evidence-Based Health Care
Medical Statistics
AI
Department: Medical Sciences Division
Date Added: 17/04/2024
Duration: 00:50:38
