Governance Considerations in the Age of AI

There has been a lot of talk recently about artificial intelligence (AI), especially around ChatGPT, a chatbot that interacts in a conversational way. As a broad category, AI is the simulation of human intelligence processes by machines and computer systems. Business use cases include leveraging AI to provide fast, accurate responses to customer inquiries, assist with topic research, or review patient medical history to support diagnosis. As the technology continues to evolve, businesses are finding more and more use cases for AI programs.

With AI becoming mainstream in processes across a variety of industries, organizations must consider acceptable use, restrictions, and proper governance. Doing so helps ensure suitable use by employees and proper oversight by management, and demonstrates appropriate preparation to auditors and regulators as AI makes its way into their audit programs.

If you currently use or are contemplating the use of AI, here are some suggestions:

  • Implement a policy documenting your company’s stance on leveraging AI in the workplace and the related data governance. Document approved use cases and ensure employees know not only what is acceptable but also which actions are prohibited or restricted. Prohibited activities might include a lawyer using ChatGPT to prepare contract documents for a client or a student drafting a research paper.
  • Companies may also wish to specifically state in their policies that intellectual property (e.g., core application code, manufacturing details), client information, and other protected data must not be loaded into a chatbot like ChatGPT.
  • For authorized uses, establish a form of “checks and balances” to review the output of the activity or the data produced.
  • Always review potential technical implementations with the appropriate personnel, such as management, compliance, and IT, to confirm that the tools not only meet the needs of the business but also comply with applicable laws, regulations, and standards.
  • Include AI plans and updates in the IT-related reporting provided to senior management and, if applicable, the board of directors, so that proper oversight and risk-based evaluation and guidance can be established.
  • As a companion to the AI policy, create principles and procedures covering the design, development, deployment, and use of AI, including privacy.
  • Once an AI application is implemented, ensure it is properly tracked and inventoried, including the purpose of the tool, product owner, expected inputs and outputs, needed safety measures, training requirements, potential risks and risk rankings, and any ongoing licensing and reviews. (A minimal inventory record sketch follows this list.)
  • Like any application or cybersecurity program, perform a risk assessment prior to implementation and regularly thereafter. Classify applications based on criticality and rank them based on risk factors. (A simple scoring sketch also follows this list.)
  • Training should be provided and tailored to the individual or group. This includes those developing applications, employees using AI, and senior management and/or the board who must provide appropriate oversight.
  • While you may already have plans in place, such as incident response and business contingency plans, these should be updated to address events arising from AI-related technologies, including those with technical and legal implications. Plans should be tested regularly, including against AI scenarios. (Read Avalon Cyber’s white paper on cybersecurity tabletop exercises.)
  • If AI products are not developed in-house but are instead provided by a third party, perform proper due diligence before contracting with the vendor and regularly thereafter, with frequency based on criticality and risk rank, as mentioned in the risk assessment bullet above. Vendors should hold appropriate attestation documentation and maintain a mature cybersecurity program, and any required contract language should be discussed and included in agreements. Statements regarding the use and accuracy of AI-provided products should be made available publicly or upon request.
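
To make the inventory bullet more concrete, here is a minimal sketch in Python of what a single inventory record could capture. The AIToolRecord class, its field names, and the example entry are illustrative assumptions rather than a required schema; the point is simply to track each attribute listed above in a consistent, reviewable format.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative AI tool inventory record; fields mirror the attributes
# listed above. Names and example values are assumptions, not a standard.
@dataclass
class AIToolRecord:
    name: str
    purpose: str
    product_owner: str
    expected_inputs: list[str]
    expected_outputs: list[str]
    safety_measures: list[str]
    training_requirements: list[str]
    risks: dict[str, str]              # risk description -> ranking, e.g., "high"
    next_review: date | None = None    # ongoing licensing/review checkpoint

# Hypothetical example entry for a customer-support chatbot.
support_bot = AIToolRecord(
    name="SupportChatAssistant",
    purpose="Draft first-pass responses to customer inquiries",
    product_owner="Customer Service Manager",
    expected_inputs=["customer question text"],
    expected_outputs=["suggested reply for agent review"],
    safety_measures=["no client data submitted", "human review before sending"],
    training_requirements=["acceptable-use training for support agents"],
    risks={"inaccurate response reaches a customer": "medium"},
    next_review=date(2025, 1, 15),
)
```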
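
Likewise, a risk classification can start as a simple weighted checklist. The sketch below is one possible approach under assumed factors, weights, and thresholds; your own risk methodology should determine the actual values and tiers.

```python
# Illustrative risk-ranking sketch. The factors, weights, and tier
# thresholds below are assumptions for demonstration only.
RISK_FACTORS = {
    "handles_confidential_data": 3,
    "customer_facing": 2,
    "output_used_without_human_review": 3,
    "third_party_hosted": 1,
}

def risk_tier(app_profile: dict[str, bool], business_critical: bool) -> str:
    """Return a coarse risk tier for an AI application."""
    score = sum(weight for factor, weight in RISK_FACTORS.items()
                if app_profile.get(factor, False))
    if business_critical:
        score += 2  # criticality raises the ranking
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Example: a third-party chatbot that sees customer data but keeps human review.
print(risk_tier(
    {"handles_confidential_data": True, "customer_facing": True,
     "third_party_hosted": True},
    business_critical=False,
))  # -> "high" (score 6)
```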

The time when prohibiting the use of AI tools altogether was realistic seems to be fading, and a time when a robust AI program is expected is emerging. While the points above are not all-inclusive, they will put you well on the path to reducing risk and improving the efficiency and effectiveness of AI-driven processes that, when governed appropriately, can deliver significant business value.

Need help documenting an AI / ChatGPT program policy? Contact our experts today.
