
Healthcare AI Regulations and Ethics – 2024 Health IT Predictions

As we kick off 2024, we wanted to start the new year with a series of 2024 Health IT predictions. We asked the Healthcare IT Today community to submit their predictions and received a wide-ranging set of responses that we grouped into a number of themes. In fact, we got so many that we had to narrow them down to just the best and most interesting. Check out our community’s predictions below and be sure to add your own thoughts and/or disagreements in the comments and on social media.


And now, check out our community’s Healthcare AI Regulations and Ethics predictions.

Anika Heavener, Vice President of Innovation and Investments at The SCAN Foundation

Health equity continued to be a hot topic in 2023, but even with all the advancements in healthcare technology, older marginalized adults still aren’t getting the care they deserve. What’s contributing to the delay? The lack of quality standards for collecting health and social data, and for its subsequent application, limits how healthcare technology can prioritize and elevate the needs of older adults. Representation through data is the key to practically enabling health equity. The AI/ML healthcare evolution will only achieve meaningful impact if its foundation is data that is accurate, comprehensive, and scalable. In 2024, we need to see more investment and accountability in the use and application of data to truly serve the most vulnerable.

Christine Swisher, PhD, Chief Scientific Officer at Ronin

At Ronin, we are committed to developing and delivering safe, equitable, and effective machine learning systems and believe responsible AI use in healthcare demands a three-pronged approach:

  1. Rigorous model validation to ensure high performance and prevent foreseeable issues.
  2. Continuous performance monitoring to detect low-performing algorithms and drift early.
  3. Rapid issue correction triggered by changes in performance, involving root-cause analysis, data updates, and model retraining to uphold accuracy.
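The monitoring step (2) can be sketched with a simple statistical check. The sketch below uses a Population Stability Index (PSI) comparison between baseline and live model scores; Ronin’s actual monitoring stack is not public, so the function names and the conventional 0.2 alert threshold here are illustrative assumptions.

```python
# Minimal drift-monitoring sketch: compare the distribution of live model
# scores against a validation-time baseline using PSI. Illustrative only.
from math import log
from typing import List

def psi(expected: List[float], observed: List[float], bins: int = 10) -> float:
    """Population Stability Index between baseline and live score samples."""
    lo = min(min(expected), min(observed))
    hi = max(max(expected), max(observed))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def frac(sample: List[float], b: int) -> float:
        in_bin = sum(
            1 for x in sample
            if lo + b * width <= x < lo + (b + 1) * width
            or (b == bins - 1 and x == hi)  # top bin is right-closed
        )
        return max(in_bin / len(sample), 1e-6)  # floor avoids log(0)

    return sum(
        (frac(observed, b) - frac(expected, b))
        * log(frac(observed, b) / frac(expected, b))
        for b in range(bins)
    )

def drift_alert(baseline: List[float], live: List[float],
                threshold: float = 0.2) -> bool:
    """Flag drift when PSI crosses a conventional 'significant shift' level."""
    return psi(baseline, live) > threshold
```

An alert from a check like this would then trigger step (3): root-cause analysis and, if needed, data updates and retraining.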

This strategy has helped our company foster trust between clinician users and our AI-driven platform, and it holds the potential to transform clinical outcomes and patient experiences while reducing healthcare costs. Responsible AI paves the way for a future where technology and human expertise collaborate seamlessly to enhance patient well-being.

Robert Connely, Global Market Leader for Healthcare at Pega

With AI and technology deployed more pervasively in healthcare, there will be increasing pressure on healthcare AI vendors to address organizations’ needs for AI model auditability, particularly given the increase in AI regulations. Healthcare organizations will prioritize tracking and understanding the operations of AI models to ensure accurate decision-making, safeguard patient data, and maintain full transparency and accountability. As a result, the industry will move toward a more secure, transparent, and patient-centered era of healthcare delivery.

Douglas Grimm, Partner and Health Care Practice Leader at ArentFox Schiff LLP

As AI products continue to demonstrate the potential for operational efficiencies and cost savings, there will be increased use and speedy implementation. While an overall regulatory framework for AI is still under development, the use of AI will lead to increased data privacy and security scrutiny for providers. Both HIPAA and related state laws create strict guidelines and restrictions on collecting, using, and maintaining patients’ protected health information.

Healthcare providers should be mindful of how an AI product addresses data privacy and security, particularly when integrating AI into the architecture of existing information systems. Litigation between providers and AI developers that market non-compliant or less stable products will undoubtedly arise as breaches and security incidents occur. The Office for Civil Rights, the federal agency responsible for HIPAA enforcement, will monitor providers’ AI information platforms and take robust enforcement actions as warranted.

Joe Ganley, VP of Government and Regulatory Affairs at athenahealth

Why 2024 AI regulations in healthcare shouldn’t be a one-size-fits-all approach: The race to leverage AI in healthcare and other industries has brought a similarly strong interest in government regulation. However, looking for a single law to govern AI is fundamentally the wrong approach. In 2024, regulators should focus on AI’s role within specific use cases, not on the technology as a whole. Rather than a single AI law, we need updates to the existing regulations that already govern the activities where AI is used.

For example, if there is bias inherent in tools being used for hiring, the EEOC will step in and change the requirements. Like any technology, AI is complex, so future regulation of this emerging technology should balance risks and benefits and be done thoughtfully, involving a broad cross-section of stakeholders. If we attempt to regulate AI too quickly, we will fail; if we do it right, we will prevent harm and harness the enormous potential of this technology.

Frederico Braga, Head of Digital and IT at Debiopharm

I expect 2024 will emphasize the need to consolidate AI for multiple applications within the healthcare space, including enhanced detection of disease and identification of cancer cells. Companies will integrate various capabilities to deliver more streamlined point solutions to patients. As economic pressures loom, a big focus will also be on gaining efficiencies through the use of digital health devices.

From a biopharmaceutical industry perspective, I envision we will see the biggest impacts of AI through target identification within pre-clinical research. However, these impacts may be delayed due to the regulated nature of the activities, such as ICH E5 guidelines. I also foresee companies becoming better at understanding human biology and the impact of genetic mutations on target identification. In the regulatory space, I anticipate a quiet but pervasive adoption of AI to support activities such as writing meeting minutes and classifying patient profiles. These are mostly productivity-related tasks, but they will ensure clinical development professionals can focus on value-adding activities and efficiencies.

Jason Schultz, Partner at Barnes & Thornburg LLP

Generally, healthcare providers will become increasingly interested in artificial intelligence in 2024 due to the need to drive efficiency and profit margins during a period of increasing labor costs and flat or declining healthcare reimbursement. Simultaneously, regulatory barriers to entry will likely increase for healthcare AI technology as President Biden’s executive order mandates increased research, standardization, data collection, and safety testing. In addition to the federal government’s initiatives, more states will likely begin to regulate AI technology independently, which will further complicate AI technology development in healthcare. The battle between rapid innovation and safety will be the highlight of 2024.

Alison Sloane, General Manager, Vigilance Detect at IQVIA

In the coming year, we should expect to see an increased focus on patient-centricity within all facets of patient support and engagement. This focus is a catalyst for applying artificial intelligence (AI) to safety risk monitoring, freeing healthcare professionals (HCPs) from administrative tasks while they interact with their patients. Applying mechanisms to detect safety risks such as adverse events (AEs), product quality complaints (PQCs), and off-label use is increasingly beneficial for optimizing patient-centricity. Given the rapid advancement of AI and generative AI (GenAI), technology can be leveraged to auto-identify safety risks in audio files, AI agent interactions, and live agent chat, all of which are sources of patient safety information.

Though it may be too early to expect any formal AI regulation this year from governing bodies in life sciences, we all know it is coming. Those who already apply sound, established best practices within their workflows and in any application of technology will be well positioned once regulations take effect. Themes have been emerging from initial discussions, and future regulations are anticipated to include:

  • Human-led governance, accountability, and transparency: These practices would need to ensure increased transparency, detailing how AI is used, developed, and performs. They would also include measures to ensure the traceability and auditability of results produced by AI, meaning organizations would need to be able to reproduce or replicate those results to show consistency and validity. Vendor oversight and control are also key topics, including monitoring and documenting results, taking corrective actions, and ensuring intended results are achieved.
  • Data quality, reliability, and representativeness: These standards are already shaping how companies validate and verify code and train their models, with requirements to ensure the provenance, relevance, and replicability of data, and record trails to show its origin through to its present use. Ensuring adequate data and consistent results will be a requirement and will most likely be brought into legislation.

Performance of models, and their development and validation practices, will likely be regulated as well, with requirements around measuring performance to ensure reliable and consistent results from a pharmacovigilance perspective. In addition, improved performance over time will be expected, with practices in place for feedback, retraining, correction, and data reviews by governance.

As AI continues to enter life sciences and reshape patient support workflows and pharmacovigilance, a degree of augmented intelligence will certainly be added to processes, with an element of human-in-the-loop remaining alongside any automation. For example, GenAI can be leveraged for data summarization and extraction, facilitating current practices such as detecting safety risks upstream of pharmacovigilance processing within patient support programs, as well as downstream data entry, quality control, and medical review, with human pharmacovigilance expertise and oversight remaining.

Jens-Olaf Vanggaard, General Manager, Regulatory Technology at IQVIA

Today, the regulatory industry is looking at generative artificial intelligence as a golden hammer, with every issue perceived as a nail. However, this is not universally applicable. Certain regulatory challenges demand an alternative approach; just as a screw requires twisting rather than hitting, some problems require solutions beyond generative AI. This will become evident in 2024 as organizations recognize the capabilities of the tools already at their disposal.

Increasingly, life science organizations are turning toward automation for their regulatory processes, but the industry will not be fully reliant on automated processes in the coming year. Mechanical aspects of the regulatory process will be automated, such as data entry and document processing, but regulatory professionals, the “human in the loop,” will be necessary for content review and finalization.
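The division of labor described here, where automation handles the mechanical steps but a regulatory professional makes the final call, can be sketched as a minimal pipeline. The status values, the `Submission` type, and the stubbed extraction below are illustrative assumptions, not drawn from any specific regulatory platform.

```python
# Illustrative human-in-the-loop pipeline: automation drafts, a person finalizes.
from dataclasses import dataclass, field

@dataclass
class Submission:
    raw_document: str
    extracted_fields: dict = field(default_factory=dict)
    status: str = "received"

def auto_extract(sub: Submission) -> Submission:
    """Mechanical step: parse structured fields from the document (stubbed here)."""
    sub.extracted_fields = {"product": sub.raw_document.split()[0]}
    sub.status = "pending_review"  # automation never finalizes on its own
    return sub

def human_finalize(sub: Submission, approved: bool) -> Submission:
    """A regulatory professional reviews the draft and makes the final decision."""
    sub.status = "finalized" if approved else "returned_for_correction"
    return sub
```

The key design point is that `auto_extract` can only move a submission into a review queue; only `human_finalize` can move it out.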

In 2024, organizations will embark on a period of exploration and testing. This will involve evaluating the effectiveness of different approaches to regulatory processes.

Erik Littlejohn, CEO at CloudWave

Navigating the opportunities, challenges, and cybersecurity risks of AI in healthcare: As the healthcare industry continues to embrace the transformative power of AI, there will be exciting opportunities and formidable challenges. CIOs and healthcare executives will need to simultaneously understand how to harness the potential of AI while safeguarding against emerging cybersecurity threats.

AI is no longer confined to a niche but has become a boardroom topic, with CIOs facing the inevitable question of their organization’s AI strategy. The more prominent players in the industry have the financial might to make substantial investments in AI, positioning themselves as early adopters, while others may find themselves relying on vendors to stay competitive. CIOs must be equipped to convey how their chosen vendors are addressing the challenges and opportunities presented by AI.

At the same time, there are growing concerns about the malicious use of AI in cyberattacks. Beyond conventional threats like ransomware, there is a new frontier where threat actors leverage AI to craft more sophisticated spear-phishing emails. AI’s ability to quickly parse stolen data and launch targeted attacks poses a significant challenge to cybersecurity efforts.

Questions about the security of AI tools used by clinicians, especially those handling protected health information (PHI), will become a key area of focus. Ensuring these tools meet stringent security standards is crucial for maintaining patient trust and compliance with privacy regulations. In 2024, healthcare organizations must adopt a multifaceted approach to AI security to navigate this evolving threat landscape. This includes safeguarding against external threats and scrutinizing the AI tools integrated into internal processes.

Ty Greenhalgh, Industry Principal, Healthcare at Claroty

Today, AI is the dumbest it will ever be. The rapid pace of AI development and adoption in the healthcare sector will leave providers extremely vulnerable to cyberattacks in 2024. If the industry does not take the proper precautions to implement robust security protocols throughout the AI adoption and deployment stages, bad actors will capitalize on these new attack surfaces. Gaining access to hospital building management systems (BMS) or patient care systems can impact operations and patient care, or worse, potentially put lives at risk.

Be sure to check out all of Healthcare IT Today’s Healthcare AI Regulations and Ethics content and all of our other 2024 healthcare IT predictions.
