Every cybersecurity professional faces the challenge of managing the risks associated with emerging technologies such as artificial intelligence (AI).  Most experts agree that even the best cybersecurity solutions are only one step ahead of (or sometimes behind) the capabilities of attackers.  This challenge is especially acute when the pace of adoption of the emerging technology is high.

Life sciences and healthcare are embracing AI as a cornerstone of medical innovation.  And the past year has produced an unprecedented level of interest in AI applications, especially the “generative AI” techniques found in solutions such as ChatGPT, GitHub Copilot, Bard, DALL-E 2, Midjourney, and other offerings.  The sophisticated functionality and broad availability of these products are having a disruptive effect on many life sciences organizations, introducing a wide array of risks that bad actors can exploit.

Nine AI-related Cybersecurity Risks

Predictive analytics, including artificial intelligence and machine learning methods, are far from new.  Given the life-critical nature of analytical and decision support applications within life sciences and healthcare, AI risk analysis in these industry settings often focuses on the AI model’s data and resulting performance (e.g., biased or erroneous data, ethical considerations, incorrect and opaque algorithms, improperly tuned models, AI hallucinations).

For cybersecurity and risk management professionals, many emerging risks also relate to how users – both employees and external stakeholders – are adopting and using these new technology capabilities.  If the risks seem overstated, consider that Apple, Samsung, Amazon, Verizon, Citigroup, Goldman Sachs, Wells Fargo, and Accenture, among others, have already restricted the use of these technologies.

What risks are they concerned about?  Broadly speaking, AI-related risks commonly fall into nine categories:

  1. Loss of intellectual property.  In a recent survey, approximately half of the senior executive respondents believed their company data had been inadvertently shared with ChatGPT.  Employees using generative AI tools – especially those hosted outside their corporate infrastructure – may provide copies of corporate data as inputs to the AI system without realizing that they are disclosing confidential information to an unauthorized vendor.  Such information then becomes subject to further disclosure through system hacking, product vulnerabilities, or use of the system by competitors.  And when such information is governed by regulations, the risks increase further.  (A simple sketch of how such prompts might be screened before leaving the organization appears after this list.)
  2. Contractual and regulatory violations.  Disclosing some information to AI systems may also violate contractual terms or regulatory obligations.  For example, most companies have non-disclosure agreements in place with their data providers.  And data critical for developing and training many industry-specific AI algorithms – patient data, for example – is protected by HIPAA, which restricts both the use and disclosure of that data.  This issue is particularly sensitive when dealing with any data that carries privacy concerns.
  3. Social engineering.  Attackers can leverage generative AI tools in social engineering attacks, tricking users into revealing sensitive information or performing malicious actions.  Because AI models can conjure highly realistic imitations of individual leaders and employees within an organization, these phishing attacks are harder for the average user to detect.  Attackers are increasingly running their email campaigns through natural language models to eliminate the broken English and misspellings that have long been tell-tale signs of phishing attempts.  And with a few minutes of recorded audio of an individual’s voice, AI voice generators can recreate any voice message desired (e.g., “this is the CEO and I need a favor urgently for this afternoon’s board meeting”).
  4. Copyright infringement.  Generative AI models are trained on existing works that are usually copyrighted.  The boundaries concerning the appropriate use of copyrighted works as a foundation for new works are currently unclear.  If a computer generates a new work by mimicking an existing work, does the new work infringe on the existing work’s copyright?  Like most questions in copyright law, these issues are expected to take considerable time to settle.
  5. Larger pool of stronger adversaries.  The broad and open availability of sophisticated AI tools creates more opportunities for new bad actors (e.g., hackers, malicious content creators) and software threats (e.g., malware, bots) to emerge.  Individual hackers, organized groups, extremists, and geopolitical/nation-state actors have more tools in their toolbox today.  These tools can increase the effectiveness of their attacks (e.g., AI-driven vulnerability discovery and attack planning) and are increasingly easy for novices to use.
  6. Code vulnerabilities.  AI can also be used to analyze third-party software code in web applications, websites, and network configurations and identify vulnerabilities, which means attackers can find and exploit zero-day vulnerabilities much faster.  In addition, most of the AI community relies on open-source software development tools and code libraries to build new AI capabilities.  The continuously evolving state of these systems and code bases creates a potential exposure point for enterprises that introduce them into their infrastructure (e.g., software defects, malicious code insertion).
  7. Newer forms of viruses and malware.  Any machine infected with malicious code is obviously a threat.  As viruses and malware incorporate AI capabilities, these threats will become more sophisticated, able to learn from new experiences and adapt their behavior.  For example, malicious code can leverage AI to change its digital fingerprint, enabling it to evade detection by security software.
  8. Misinformation campaigns.  The ability of generative AI to create realistic imitations is not limited to social engineering hacks.  Competitors, activists, and disgruntled employees now have access to tools that can fabricate audio and video assets capable of damaging an organization’s reputation and market valuation.  Though popular concern about this issue has largely focused on political and regulatory matters, the risks are real for any business, organization, or individual.
  9. Device vulnerabilities.  Specialized computing hardware, medical devices, and other solutions that include embedded AI algorithms need to be managed effectively over their lifecycle.  AI models need to be tuned periodically.  Operating systems and firmware need to be hardened.  Application software and interfaces need to be patched.  Organizations that deploy these assets without a strong lifecycle management process in place introduce vulnerabilities to their operational infrastructure as the technologies age.
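As a concrete illustration of the pre-submission screening mentioned in risk 1, the sketch below shows one minimal way an organization might flag sensitive content before a prompt is sent to an externally hosted generative AI service.  The patterns, labels, and the `submit_to_external_ai` helper are hypothetical examples rather than any specific product; a production deployment would be driven by the organization’s own data classification rules and a proper data loss prevention (DLP) solution.

```python
import re

# Hypothetical patterns for illustration only.  A real deployment would use the
# organization's own data classification rules and dedicated DLP tooling.
SENSITIVE_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "medical record number (example format)": re.compile(r"\bMRN[-:\s]?\d{6,10}\b", re.IGNORECASE),
    "internal project code (example format)": re.compile(r"\bPROJ-[A-Z]{2,5}-\d{3,}\b"),
}

def screen_prompt(prompt: str) -> list:
    """Return the labels of any sensitive patterns detected in the prompt text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def submit_to_external_ai(prompt: str) -> None:
    """Check a prompt before it leaves the corporate boundary (hypothetical helper)."""
    findings = screen_prompt(prompt)
    if findings:
        print("Blocked: prompt appears to contain " + ", ".join(findings))
    else:
        print("No sensitive patterns detected; prompt may be sent to the vendor API.")
        # vendor_api.complete(prompt)  # placeholder -- the actual call depends on the vendor

if __name__ == "__main__":
    submit_to_external_ai("Summarize the discharge notes for patient MRN 00123456.")
```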

Addressing AI-related Cybersecurity Risks

If the list of risks above seems daunting, managing them need not be.  There are proven solutions and practices that help organizations address these concerns.  Key components of an effective cybersecurity approach for managing AI-related risks include the following:

  1. Cybersecurity Strategy: Do you have a strong security and privacy program today?  Are your threat operations positioned to respond to emerging risks?  For many organizations, increasing their readiness for AI starts with strengthening their overall cybersecurity posture.  A well-formed cybersecurity strategy provides a solid framework for securing assets, managing operations, and responding to threats.
  2. Enterprise Architecture: Every life sciences organization needs a strong IT strategy and associated enterprise architecture.  Well-managed enterprise architectures provide a wealth of cybersecurity benefits – minimizing security perimeters, maximizing protective measures, standardizing security methods, and more – that serve to make the organization’s infrastructure more resilient to attacks.
  3. Data Mapping & Governance: What are your organization’s high-value data assets, and how are they protected?  How do you operationally mitigate the risk of accidental disclosure?  The answers to these critical questions vary widely by company, and effective answers require a deep understanding of each organization’s mission, products, operating model, customers, and regulatory requirements.
  4. Risk Management: Countering AI cyber threats requires smart risk management.  The risk landscape shifts weekly as AI innovations continue to advance.  Until cybersecurity technologies catch up, the best defense remains sound risk management practice.
  5. User Awareness & Training: The first, last, and best line of defense against AI-related risks is educated users.  Many leaders and front-line users are simply not aware of the risks associated with using generative AI tools and other emerging technologies.  Though traditional cybersecurity awareness training does not cover AI, organizations like CREO offer specific training on the new attack vectors presented by AI and on how users can help the organization manage these evolving threats.

Finally, it is worth remembering that AI is not just a source of risk – it is also a highly valuable defense tool.  A modernized arsenal of cybersecurity technologies includes AI-powered capabilities in real-time risk assessment and quantification; real-world security telemetry, detection, and response; self-healing systems; endpoint intelligence; zero-trust asset management; and intelligent vulnerability prioritization and management.  When these capabilities are coupled with the practices above, life sciences and healthcare organizations can control the risks introduced by AI while reaping the benefits of these powerful technologies.
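To make the defensive side more tangible, the following minimal sketch shows one way AI-style techniques can be applied to security telemetry: an isolation forest model (from the open-source scikit-learn library) scores synthetic account activity and flags anomalies for analyst review.  The features, thresholds, and data here are illustrative assumptions only; a real deployment would draw telemetry from the organization’s SIEM or EDR platform and use purpose-built detection tooling.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic telemetry for illustration only: each row represents one account's
# hourly activity as (logins, MB transferred, distinct hosts contacted).
rng = np.random.default_rng(7)
baseline = rng.normal(loc=[5.0, 50.0, 3.0], scale=[2.0, 15.0, 1.0], size=(500, 3))

# Fit the model on baseline (presumed-normal) activity.
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=7)
model.fit(baseline)

# Score two new observations: one ordinary, one clearly unusual.
new_activity = np.array([
    [6.0, 55.0, 3.0],     # looks like normal behavior
    [40.0, 900.0, 25.0],  # unusually high volume -- worth an analyst's review
])
labels = model.predict(new_activity)        # 1 = inlier, -1 = anomaly
scores = model.score_samples(new_activity)  # lower scores are more anomalous

for row, label, score in zip(new_activity, labels, scores):
    status = "ANOMALOUS" if label == -1 else "normal"
    print(f"activity={row.tolist()} score={score:.3f} -> {status}")
```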