Kristin Murdock
28 December 2025, 6:40 AM

Concerned Australians, including many in regional and remote communities, are increasingly turning to artificial intelligence tools such as ChatGPT for health advice before they ever reach a GP’s office, with new national data revealing just how mainstream the trend has become.
Almost half of all Australians (45.6 per cent) have recently used generative AI, while nearly one in ten adults (9.9 per cent) have sought health advice from platforms like ChatGPT in the past six months.
Researchers warn this means millions of Australians are now arriving at medical appointments with AI-generated “first opinions”.
Dr Joshua Pate, AI Research Lead at the University of Technology Sydney’s Graduate School of Health, said the shift is already changing the way consultations begin, and regional health services are not immune.
“Health systems are not prepared for patients to walk into clinics with an AI-generated ‘first opinion’, but it’s already happening.
"This is real behaviour, in 2025. It’s not a future scenario,” Dr Pate said.
“The question we need to answer now is: How do clinicians safely respond when the consultation starts before the patient even arrives?”
Dr Pate leads a team of 38 researchers at UTS investigating the real-world impact of AI on patient behaviour and clinical decision-making.
Separate research by scientists from CSIRO and The University of Queensland explored what happens when an average person asks ChatGPT whether a particular treatment has a positive effect on a specific condition.
The 100 questions tested ranged from “Can zinc help treat the common cold?” to “Will drinking vinegar dissolve a stuck fish bone?”
ChatGPT’s responses were compared to verified medical knowledge.
Dr Bevan Koopman, CSIRO Principal Research Scientist and Associate Professor at UQ, said people are increasingly turning to large language models (LLMs) for advice.
“The widespread popularity of using LLMs for answers on people’s health is why we need continued research to inform the public about risks and to help them optimise the accuracy of their answers,” Dr Koopman said.
The study tested two formats: a simple question-only prompt and a question biased with supporting or contrary evidence.
Results showed ChatGPT delivered about 80 per cent accuracy when given a question alone.
When evidence-biased prompts were used, accuracy fell to 63 per cent, and dropped further to 28 per cent when “unsure” answers were permitted.
“This finding is contrary to popular belief that prompting with evidence improves accuracy,” Dr Koopman said.
“We’re not sure why this happens but perhaps the additional information adds too much noise, lowering accuracy.”
In response to the national data, the UTS Graduate School of Health AI Research Node has launched a campaign calling for urgent, independent evaluation of how AI is already shaping patient expectations, clinical conversations and frontline decision-making.
Researchers warn that without proper safeguards, unregulated AI tools could reshape health decisions in ways that place extra pressure on already stretched regional clinicians.
“AI is evolving at a pace we’ve never seen before.
"Our responsibility to patient care must evolve even faster,” Dr Pate said.
“If we don’t adapt our systems, our training, and our safeguards, we risk letting the technology set the standard instead of clinicians.
"That’s not a future Australia can afford.”
The UTS team stresses that humans must remain firmly “in the driver’s seat” as AI becomes increasingly embedded in routine health decisions.
Their research examines how AI can support, rather than overshadow, the clinical judgments of doctors, nurses, carers and patients, while upholding standards such as confidentiality, mandatory reporting and compulsory medical training.
For patients in towns across the Western Plains, where online advice is often the fastest source of information due to long travel distances, workforce shortages and limited specialist access, the concern is that AI tools are being used long before their safety, accuracy and limits have been fully tested.