
Risks Lurking in the “Shadows”: Shadow IT and Shadow AI

Written by Jill Martucci | Sep 4, 2024

You may have heard the saying: “Change is the only constant in life.” This is certainly true of the information technology industry, which, in turn, has a ripple effect on the technology, services, risk, and regulatory requirements that impact your organization and its environment.

Over the past several years, there has been a large focus on the risks that “bring your own device” (BYOD) practices, and the associated apps, pose to organizations. This falls under “shadow IT,” or, as explained by Cisco, “the use of IT-related hardware, software, or services by a department or individual without the knowledge of the IT or security group within an organization.”

While BYOD and unapproved apps and software remain a high risk for many organizations, we should also be concerned with BYO-AI, or “bring your own artificial intelligence.” This trend has caused a stir industrywide as organizations realize there is now “shadow AI”: employees using AI technology and tools without the organization’s knowledge.

Anton Chuvakin and Marina Kaganovich from Google Cloud summarize it by saying: “Over the past two decades, we’ve seen the challenges of ‘bring your own device’ and the broader ‘bring your own technology’ movement, also known as ‘shadow IT.’ Today, organizations are grappling with an emerging artificial intelligence trend where employees use consumer-grade AI in business situations that we call ‘shadow AI.’”

They continue: “The incredibly rapid adoption of generative AI poses a significant challenge when employees want to use gen AI tools that haven’t been explicitly approved for corporate use. The use of shadow AI is likely to increase the risks that an organization faces, raising serious questions about data security, compliance, and privacy.”

AI tools can potentially help team members perform their job duties; however, they also carry many risks. Shadow AI can pose considerable security risks, including unauthorized access to, and exposure of, sensitive or proprietary information, not to mention its impact on areas such as brand reputation, data integrity, and regulatory compliance.

CIOs and similar leaders should consider whether they would rather have employees use a sanctioned tool or ban the use of AI entirely. Of course, banning its use may result in end users breaking organizational policy and guidelines, leaving the organization in the dark about which tools are being used and what risks are lurking.

As such, we put together a few items to consider when looking to integrate AI into your business operations:

  • Implement a policy documenting your organization’s stance on leveraging AI in the workplace and related data governance. Like an Acceptable Use Policy, the AI Usage Policy should cover the authorized and prohibited use cases for AI technologies.
  • End users should be required to request and obtain approval for each specific tool and use case. If a tool is approved, ensure it is properly tracked and inventoried, including the purpose of the tool, product owner, expected inputs and outputs, needed safety measures, training requirements, potential risks and risk rankings, and any other ongoing licensing and reviews needed (see the sketch after this list).
  • An important, but often overlooked, area is workforce training. Forrester found that 59% of leaders believe they’ve given staff sufficient training in gen AI, but only 45% of employees say they’ve had any formal training. Training should be provided and tailored to the individual or group, including those developing applications, employees using AI, and senior management and/or the board, who must provide appropriate oversight.
  • AI plans and updates should be presented to senior management and, if applicable, the board of directors alongside other IT-related reporting, so that proper governance and risk-based evaluation can be established.
  • If AI products are not developed in-house but are instead provided by a third party, ensure you perform proper due diligence on the vendor before contracting and regularly thereafter.
  • All AI-generated content should be properly cited when used as a resource for organizational work product. Similarly, because generative AI may produce content that is plagiarized (including copyrighted work) or inaccurate (e.g., outdated, misleading, or, in some cases, fabricated), all AI-generated content must be reviewed for accuracy before it is relied on for work purposes.
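To make the tracking and inventorying item above concrete, here is a minimal sketch of what a single AI tool inventory record might look like if maintained in code. All class names, field names, and the example tool are illustrative assumptions, not a prescribed schema; the same fields could just as easily live in a spreadsheet or a GRC platform.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskRank(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIToolRecord:
    """One entry in an AI tool inventory (illustrative schema, not a standard)."""
    tool_name: str
    purpose: str                        # the approved use case
    product_owner: str                  # who is accountable for the tool
    expected_inputs: list[str]          # data the tool is allowed to receive
    expected_outputs: list[str]         # what the tool is expected to produce
    safety_measures: list[str]          # e.g., "no client data", "human review"
    training_required: bool
    risks: dict[str, RiskRank] = field(default_factory=dict)  # risk -> ranking
    approved: bool = False
    license_expires: date | None = None
    next_review: date | None = None


# Example: registering a hypothetical approved tool
record = AIToolRecord(
    tool_name="ExampleGenAI Assistant",  # hypothetical product name
    purpose="Drafting internal documentation",
    product_owner="IT Operations",
    expected_inputs=["non-confidential internal text"],
    expected_outputs=["draft documents for human review"],
    safety_measures=["no client or proprietary data", "output reviewed before use"],
    training_required=True,
    risks={"data leakage": RiskRank.HIGH, "inaccurate output": RiskRank.MEDIUM},
    approved=True,
    license_expires=date(2025, 9, 1),
    next_review=date(2025, 3, 1),
)
```

However the inventory is captured, the point is the same: every approved tool should carry a named owner, a documented use case, a risk ranking, and a review date, so nothing slips back into the shadows after its initial approval.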

Overall, the goal should be for team members to use AI ethically for work-related purposes so they can become more efficient while protecting employees, clients, suppliers, customers, and the organization from harm.

If you need assistance with a baseline AI Usage Policy, please reach out to Avalon Cyber today.