

Australia’s quest for ethical AI

01 May 2024 | Analysis | By Ayesha Siddiqui

Australia is at the forefront of Artificial Intelligence (AI) innovation in healthcare and recognises evolving risks associated with its use. With a series of initiatives, the country is spearheading efforts to ensure the safe and ethical implementation of AI in healthcare practices.

Image credit: Shutterstock

Australia, with its ageing population and advanced healthcare systems, has embraced AI technologies to enhance patient care and improve overall health outcomes. Patients are also eager and receptive to personalised medicine and digital health technologies. 

According to recent government-commissioned survey findings, Australians are increasingly seeking personalised healthcare services that offer more choices. Approximately seven in 10 consumer respondents stated that technology gives them ‘more control over their daily lives’. Meanwhile, healthcare professionals expect digital health solutions to streamline health information access, enhance care delivery, and align with their workflows, with about eight in 10 considering these technologies imperative for healthcare providers.

In response to the growing demand and recognising the transformative potential of AI in healthcare, Australia has launched several initiatives aimed at accelerating AI adoption.

The Australian government earmarked over $1 billion in the May 2023 Federal Budget for various digital health initiatives, including modernising My Health Record, renewing the Intergovernmental Agreement on National Digital Health, and enhancing electronic prescribing capabilities.

Moreover, Australia has unveiled comprehensive strategies to drive digital health transformation. This includes Australia’s Digital Health Blueprint and Action Plan 2023-2033, and the National Digital Health Strategy 2023-2028, accompanied by a Strategy Delivery Roadmap. These strategic documents outline the Australian Government's vision for digital health over the next decade, emphasising the importance of safe patient information sharing, accurate diagnosis, and empowering patients to manage their health effectively.

As part of the National Digital Health Capability Action Plan (CAP), the Australian Digital Health Agency partnered with AIDH (Australasian Institute of Digital Health) to establish an online hub. This hub, which went live in December 2023, is designed to support both clinical and non-clinical professionals in developing their career pathways and enhancing digital health capabilities, including proficiency in AI technologies.

In addition, recognising the potential of AI in the public health system, Australia has established a new task force dedicated to guiding its use.

"In February this year, the Australian Government launched the National Digital Health Strategy, signifying its belief in the importance of digital health solutions. They note that digital solutions hold the potential to overcome some substantial healthcare challenges, such as equitable access, rising healthcare costs, and the management of chronic diseases," said Dimitry Tran, Co-founder and Deputy CEO, Annalise.ai.


Ethical considerations

AI is utilised across healthcare pathways, spanning from screening to diagnosis, treatments, and personalised therapeutics, and has greatly benefitted the health system. But there are legitimate concerns around data governance, safety, and ethical use. 

In an article in the Medical Journal of Australia, experts calling for a national strategy on AI in healthcare propose solutions to address gaps in Australia’s capacity to fully leverage the benefits of AI and manage the evolving risks associated with the technology. “Our health system is unprepared to take advantage of AI’s benefits nor face the rapidly evolving risks,” they warn.

Similarly, in a submission to a Department of Industry, Science and Resources discussion paper, Supporting responsible AI, the Australian Medical Association (AMA) said the key challenge with AI in Australia is that it remains largely unregulated, with a lack of transparency around AI developers’ ethical principles and no real governance arrangements in place. Last August, the AMA published its first position statement on AI, outlining the need for regulation to be put in place before the widespread use of the technology in healthcare.

To address these challenges, the country has launched a National Policy Roadmap for AI in Healthcare that identifies current gaps in Australia’s capability to translate AI into effective and safe clinical services. The roadmap’s vision is an AI-enabled healthcare system that delivers personalised and effective care safely, ethically, and sustainably; its mission is to put in place a fully funded national plan by 2025 to realise that system.

Prof. Karin Verspoor, Dean of the School of Computing Technologies at RMIT University, who also served on the roadmap committee, explained this in detail. She said, “Under the leadership of the Australian Alliance for Artificial Intelligence in Healthcare, led by Prof. Enrico Coiera (Macquarie University), Dr David Hansen (Australian e-Health Research Centre at CSIRO), and me, and in consultation with representatives from stakeholders across the health sector, we developed the National Policy Roadmap for AI in Healthcare. The roadmap provides recommendations that specifically address safe and ethical use of AI in healthcare. These recommendations include the establishment of a National AI in Healthcare Council, and direct collaboration with the broader national initiatives from the Department of Industry, Science, and Resources to establish a national AI ethical framework.”

She added, “The aim is to ensure that a risk-based safety framework, and practice standards relating to the use of AI in healthcare services are put in place. Furthermore, through Standards Australia, a subcommittee of the general Health Informatics standards committee has been established to address AI in Healthcare (technical subcommittee IT-014-21), to provide guidance on best practices for the clinical use of AI. There is significant innovation already underway, and active conversations throughout the health sector as well as community groups about opportunities and use cases for AI to improve patient care, outcomes, and experiences. These initiatives seek to help ensure that this innovation truly results in positive benefits for Australians through evidence-based and safe use of these powerful technologies.”

In January 2024, the Australian government published its interim response to community feedback on its safe and responsible AI in Australia discussion paper.

“In the government’s January 2024 ‘Safe and Responsible AI in Australia’ interim response, the government noted the potential of AI to ‘uplift’ healthcare, but also the technical risks it posed. The government further observed that healthcare is an industry where existing laws may need to be updated to manage risks,” said Sidney Kung, a senior associate in Baker McKenzie’s IP and Healthcare and Life Sciences Practice in Sydney. 

His colleague Toby Patten, a partner in Baker McKenzie's Data & Technology and Healthcare and Life Sciences Practice in Melbourne, echoed similar sentiments: “Ultimately, the Australian government has indicated it will take a ‘risk based’ approach to regulating AI. This is likely to mean that measures taken will be reflective of the risks posed by AI use, rather than a top-down imposition of mandatory rules for AI generally. In this way, Australia can be seen as forging a course between the highly prescriptive and centralised regulatory approach of the EU and the more relaxed, sectoral approach of the UK.”

The University of Wollongong (UOW) has also launched a project that seeks to establish a new interdisciplinary research programme at UOW that addresses the ethical, legal and social implications (ELSI) of using AI in health and social care. 

“As the government continues to formulate its position on the regulation of AI, some of the specific risks they will need to consider in a healthcare setting are the risk of patient harm due to inaccurate diagnoses, recommendations and provision of erroneous results; biased algorithms due to reliance on non-representative datasets which do not reflect the Australian population or healthcare system; and data breaches during collection, storage, or transmission of patient data, and use of data without patient approval,” said Sidney. 

"Australia is a great place for doing the fundamental research crucial to ethical AI development. Our tertiary education systems and universities have extensive expertise in medical research, along with the deep research capabilities critical to ongoing AI innovation. When it comes to ethical AI, it’s crucial to first consider the problems we are trying to solve. One of the biggest problems facing healthcare systems across the globe today is that of capacity. Currently, the demand for healthcare services way outstrips supply in many parts of the world, including Australia," said Dimitry Tran.

There's no denying that AI presents tremendous opportunities for revolutionising healthcare. However, as with any technological advancement, the implementation of robust strategies is paramount for safeguarding patient privacy, maintaining data integrity, and upholding ethical standards in AI-driven healthcare initiatives. Australia has taken a step in the right direction in this regard.

 

Ayesha Siddiqui
