In this Help Net Security interview, Assaf Mischari, Managing Partner at Team8 Health, discusses the risks associated with GenAI healthcare innovation and its impact on patient privacy.
What are the key challenges for cybersecurity in healthcare in the context of GenAI, and how can they be effectively addressed?
The healthcare industry faces many of the same challenges other industries face due to the impact of emerging technologies, with nuances that need to be considered and addressed.
For example, consider the differences between the basic data points you want to keep private. Comparing PII and PHI reveals that PII is broader in scope, less regulated, processed and accessed by a wider range of organizations, and (for now) easier to monetize. PHI, however, is richer in content and can be used more effectively for phishing and healthcare fraud.
Healthcare providers also lag behind when it comes to modern infrastructure and cybersecurity measures. This combination makes PHI vulnerable to attackers.
When we look at how AI models are developed, they can reflect many historical and societal biases regarding race, ethnicity, and gender. Algorithmic fairness in healthcare is critical because the decisions of AI models can directly impact a patient’s health, treatment recommendations, and overall well-being. Biased or inequitable models in healthcare can lead to misdiagnosis and inappropriate treatment, further compounding the impact of algorithmic bias.
How do you think GenAI will transform healthcare operations and patient care, especially when it comes to efficiency and decision-making?
GenAI will have a major impact on healthcare professionals. Implementing these tools will reduce the administrative burden that often prevents health professionals from working at the top of their qualifications.
For example, AI-powered tools can automate data entry, extraction, and analysis of electronic health records (EHRs). NLP technology can extract relevant information from unstructured clinical notes and populate structured fields in the EHR. Predictive analytics allows healthcare professionals to predict patient flow, staffing needs, and resource utilization, enabling proactive decision-making and resource allocation. Appointment scheduling and reminders, pre-authorization and billing, and workflow optimization are all ripe for digital transformation.
GenAI will eventually become the preferred diagnostic method, as AI algorithms can analyze vast amounts of patient data such as medical records, image scans, and test results. Machine learning algorithms can identify subtle patterns and correlations in data that are difficult for humans to detect, and AI models can provide consistent and objective diagnoses based on input data, reducing the potential for human bias.
At this point, it's important to recognize that humans have a contextual advantage over pure data analysis when it comes to data collection during patient encounters. This may suggest that future diagnostics will be collaborative.
Given the sensitivity of medical data, what measures should be taken to ensure that GenAI innovations do not violate patient privacy?
Although PHI is largely unstructured, many measures can be used to reduce risk while enabling innovation. For example, both PII and PHI data will need to be anonymized or de-identified.
Anonymization and de-identification techniques (data masking, tokenization, or encryption) that remove personally identifiable information (PII) from medical data before it is used for GenAI training ensure compliance with privacy regulations such as HIPAA and GDPR. This, in turn, will increase both the adoption of and trust in GenAI tools in healthcare.
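The tokenization approach mentioned above can be sketched in a few lines. This is a minimal illustration, not a production de-identification pipeline: the field names, the secret key handling, and the `deidentify` helper are all assumptions for the example; real systems would manage keys in a vault and follow a vetted de-identification standard such as HIPAA Safe Harbor.

```python
import hashlib
import hmac

# Assumption for illustration only: in practice this key would live in a
# secrets manager / KMS, never in source code.
SECRET_KEY = b"example-key-kept-in-a-vault"


def tokenize(value: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token.

    A keyed hash (HMAC-SHA256) means the same patient always maps to the
    same token, so records stay linkable for training without exposing PII.
    """
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]


def deidentify(record: dict, pii_fields: set) -> dict:
    """Tokenize PII fields; leave clinical content intact for model training."""
    return {k: (tokenize(v) if k in pii_fields else v) for k, v in record.items()}


record = {"name": "Jane Doe", "ssn": "123-45-6789", "diagnosis": "type 2 diabetes"}
clean = deidentify(record, pii_fields={"name", "ssn"})
```

Because the token is deterministic, the same patient's records remain joinable across datasets, which is what distinguishes tokenization from simple redaction.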
Improved privacy-preserving techniques should be leveraged to enable training of GenAI models without directly accessing or sharing raw patient data. For example, federated learning, differential privacy, and homomorphic encryption all offer security benefits.
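Of the techniques listed above, differential privacy is the easiest to show in miniature. The sketch below, a toy and not a vetted DP library, adds Laplace noise calibrated to a privacy budget epsilon to a simple count query; the function names and the sensitivity parameter are assumptions for the example.

```python
import random


def laplace(scale: float) -> float:
    # A Laplace(0, scale) sample, built as the difference of two
    # independent exponential samples (a standard construction).
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)


def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Return a count with Laplace noise calibrated for epsilon-DP.

    For a counting query, one patient joining or leaving changes the
    result by at most 1, so sensitivity defaults to 1. Smaller epsilon
    means more noise and stronger privacy.
    """
    return true_count + laplace(sensitivity / epsilon)


# E.g. releasing "how many patients in the cohort have this diagnosis"
noisy = dp_count(100, epsilon=1.0)
```

The noisy answer is close to the truth on average, but no single patient's presence in the data can be inferred from it; production systems would use an audited library rather than this hand-rolled sampler.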
And it goes without saying that secure data storage and access control are non-negotiable. Medical data should be stored in secure, encrypted databases with strict access controls and strong authentication methods such as multi-factor authentication (MFA).
Can you discuss the ethical implications of GenAI in healthcare, especially regarding data bias and ensuring fair treatment outcomes?
Historical medical data carries many built-in biases regarding race, ethnicity, and gender, and a GenAI model's biases can be introduced through the training dataset, feature selection, data collection, the labeling process, and even the model architecture itself.
The decision-making process of a GenAI model should be transparent and explainable to healthcare providers and patients. Black-box models without interpretability can hinder trust and accountability, making it difficult to identify and address bias and errors. Developing explainable AI technology and providing clear explanations for GenAI-generated recommendations can foster trust and enable informed decision-making.
How do you think the current regulatory framework needs to evolve to accommodate GenAI's rapid advances in medicine?
The healthcare industry has taken a huge leap forward, but it's not the first time. We experienced a similar giant leap many years ago with the introduction of clinical trials for devices and drugs. A standardized "ML clinical site" that does not rely on vendor-collected data will likely require a more robust infrastructure. For example, a controlled sandbox with datasets vetted for bias and tested for transparency and explainability would accelerate the creation of ML models and improve their overall quality.
What steps should healthcare organizations take to build and maintain public trust using GenAI technology?
In my opinion, healthcare organizations need to be transparent about how they use GenAI and the various impacts that adopting this technology may have on patients. Prioritizing patient safety and privacy and continuously monitoring, improving, and quickly addressing AI concerns is imperative.