The healthcare industry has wholeheartedly embraced AI, and the beneficial possibilities are exciting to imagine. It is important, though, for healthcare professionals and business leaders to understand that there are thousands of AI tools: some are good, and some are buggy and inaccurate.
There are many different ways in which AI can be used to support better healthcare, and GenAI tools and applications in particular are increasingly being used. In fact, National Institutes of Health research finds that healthcare pros of all types are using GenAI tools in a wide variety of ways, across all types of treatment, payment, and operations activities.
To be beneficial across the wide range of healthcare activities, such as detecting threats to health and detecting unauthorized access to patient data, GenAI must be accurate, secure, and privacy-protective. If not, it could cause much more harm than good. Before choosing an AI tool, it is important to vet it thoroughly to confirm that it is accurate and protects privacy. It is also important for all healthcare organizations to understand that cybercriminals can use GenAI to harm healthcare organizations, and organizations must take action to mitigate such risks.
Using GenAI to detect health threats
GenAI tools that have been validated as accurate are particularly effective when used to evaluate patterns in large patient databases to identify possible abnormalities that could represent a wide range of threats to patient health. For example, exploited vulnerabilities in the medical devices that patients depend upon could change drug dosages, alter settings for life-preserving actions, and even digitally shut off a life-supporting device. Accurate GenAI tools can be used in a wide range of ways to identify such threats, and AI-based tools can then block the threats from disrupting digital healthcare treatments and connections to life-supporting devices. A few examples include the following.
- Intrusion and Data Breach Detection and Prevention. GenAI tools are being used in intrusion detection systems (IDS), intrusion prevention systems (IPS), and PHI breach detection and prevention to recognize abnormal patterns in network traffic and data flows. GenAI tools are additionally being used to identify specific types of data within the network that could indicate an intrusion.
- Data Encryption and Privacy. GenAI-driven encryption systems are in the early stages of use for a variety of purposes. For example, such systems can ensure patient data is encrypted when a real-time risk assessment indicates that a network intruder may be targeting PHI.
- Detection of Data Access Pattern Anomalies. GenAI is being used to monitor and analyze the types of access, and access patterns, in patient health databases, and to send alerts when it detects unusual activity (a minimal sketch of this approach follows this list).
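To make the access-pattern item above concrete, here is a minimal sketch of the underlying anomaly-flagging workflow. Commercial GenAI products wrap far more sophisticated models, so treat this only as an illustration: the column names, the send_alert helper, and the contamination threshold are assumptions, not any vendor's actual API.

```python
# Minimal sketch: flag unusual patient-record access patterns per user per day.
# Column names (user_id, records_accessed, after_hours_accesses, distinct_patients)
# are illustrative assumptions about what an access log might contain.
import pandas as pd
from sklearn.ensemble import IsolationForest

def flag_unusual_access(access_log: pd.DataFrame, contamination: float = 0.01) -> pd.DataFrame:
    """Return the rows whose daily access pattern looks anomalous."""
    features = access_log[["records_accessed", "after_hours_accesses", "distinct_patients"]]
    model = IsolationForest(contamination=contamination, random_state=42)
    scored = access_log.copy()
    scored["anomaly"] = model.fit_predict(features)  # -1 means anomalous
    return scored[scored["anomaly"] == -1]

# Usage (hypothetical): route flagged users to a privacy officer for review.
# for row in flag_unusual_access(todays_access_log).itertuples():
#     send_alert(f"Unusual PHI access pattern for user {row.user_id}")
```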
Using GenAI to detect unauthorized access to patient data
Beyond the benefits described above, accurate GenAI tools can also be used to identify and prevent activities that could result in a wide range of fraud and other crimes, including:
- Unauthorized access to protected health information (PHI)
- Unexpected or unusual use of PHI
- Attempts to exfiltrate PHI and other types of sensitive data, such as medical intellectual property (IP) (see the sketch below)
- Cybersecurity incidents resulting from leaked IT specifications, administrative settings, etc.
- The creation of additional attack vectors that would allow hackers to enter the healthcare organization’s digital ecosystem
- Leaking network and system parameters, access points, etc.
- Commercial losses from stolen, unreleased products and treatments, pricing plans, etc.
- Violations of security and privacy legal requirements
These actions can help prevent medical identity fraud, IP violations, noncompliance with legal requirements, and a wide range of other harms, not only to the associated individuals but also to the healthcare organizations themselves.
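As one illustration of the exfiltration item above, much of this detection reduces to baselining normal behavior and alerting on large deviations. The sketch below is a deliberately simple, hypothetical example: the field names, the z-score threshold, and the surrounding data pipeline are assumptions, and a real GenAI-driven tool would consider many more signals.

```python
# Minimal sketch: flag possible PHI exfiltration when a user's outbound data
# volume today far exceeds their own historical baseline.
from statistics import mean, stdev

def exfiltration_alerts(history_mb: dict[str, list[float]],
                        today_mb: dict[str, float],
                        z_threshold: float = 4.0) -> list[str]:
    """Return user IDs whose outbound volume today is a statistical outlier."""
    flagged = []
    for user, volumes in history_mb.items():
        if len(volumes) < 5:            # not enough history to form a baseline
            continue
        baseline = mean(volumes)
        spread = stdev(volumes) or 1.0  # avoid dividing by zero
        if (today_mb.get(user, 0.0) - baseline) / spread > z_threshold:
            flagged.append(user)
    return flagged

# Usage (hypothetical): alerts = exfiltration_alerts(last_90_days_mb, todays_totals_mb)
```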
Using GenAI to harm healthcare activities
Cybercrooks love healthcare data because they can use it to commit a much wider range of crimes than they could with only basic, more widely collected personal data, and they can sell it at a much higher price than other types of personal data. GenAI gives these health-data-loving cybercrooks another type of tool they love almost as much as the data itself.
Healthcare leaders, as well as cybersecurity and privacy pros, need to understand how these capabilities can impact the security and integrity of their healthcare digital ecosystems. Healthcare business associates (BAs) also need to stay on top of AI-driven threats and the tools that support them, and must not use such tools with the data their covered entities (CEs) have entrusted to them.
GenAI tools are being used by those health-data-loving cybercrooks to trick victims with new and more effective social engineering (phishing) tactics, adding to their landscape of attack tools. For example:
- AI tools can quite convincingly impersonate the images and voices of healthcare leaders, such as hospital CEOs and medical directors. For example, an impersonated hospital CEO could direct staff to send all patient data to a specific address, website, or fax number for a valid-sounding reason (e.g., a merger with another hospital system), when the request actually comes from a criminal, competitor, or other type of threat actor.
- Cybercrooks use GenAI to find the open digital windows and unlocked digital doors in organizations’ networks, and they can do so from the other side of the world. GenAI has made it much easier for crooks to find even more such vulnerabilities than ever before, and they can then easily exploit those vulnerabilities to load ransomware, steal patient health databases, inject malware into medical devices to cause malfunctions during surgeries, and more.
- AI tools can be used by cybercrooks to cause a wide range of other harms. For example, they can alter patient health data in ways that could physically harm the associated patients, or build apps and websites that are mistaken for valid healthcare software and then carry out an almost unlimited range of harmful activities.
Thoughtful, risk-aware use of GenAI can improve threat detection and fraud mitigation
Ultimately, every healthcare organization must establish rules and policies for the use of all kinds of AI within the organization, covering both the risks and the benefits, provide training on GenAI issues, and include AI tools and their use within the scope of its risk management program. Security leaders play a pivotal role in ensuring such actions are taken.
A final warning: Always test any AI tool that claims to provide benefits to ensure that it
- provides accurate results (see the sketch below),
- will not negatively impact the performance of the associated network,
- does not put PHI at risk by exposing or inappropriately sharing it, and
- does not violate the organization’s legal requirements for patient data.
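For the accuracy check, one practical approach is a small pre-deployment vetting harness run against a labeled test set that the organization controls. The sketch below assumes a placeholder classify_with_tool callable standing in for whatever interface the candidate tool actually exposes, and the minimum thresholds are illustrative, not regulatory requirements.

```python
# Minimal sketch: vet a candidate AI detection tool against labeled test cases
# before deployment. classify_with_tool is a hypothetical placeholder for the
# tool's real interface; threshold values are illustrative only.
def vet_tool_accuracy(test_cases: list[tuple[str, bool]], classify_with_tool,
                      min_recall: float = 0.95, min_precision: float = 0.90) -> bool:
    """Return True only if the tool meets minimum recall and precision."""
    tp = fp = fn = 0
    for sample, is_threat in test_cases:
        predicted = bool(classify_with_tool(sample))
        if predicted and is_threat:
            tp += 1
        elif predicted and not is_threat:
            fp += 1
        elif not predicted and is_threat:
            fn += 1
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return recall >= min_recall and precision >= min_precision
```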
#Healthcare #HIPAA #PHI #PrivacyRule #SecurityRule #Security #Privacy #PatientData #AI #GenAI #Compliance #RiskManagement #CyberCrime