(18 May 2023) Legal and compliance leaders should address their organization’s exposure to six specific ChatGPT risks, and what guardrails to establish to ensure responsible enterprise use of generative AI tools, according to Gartner, Inc.
“The output generated by ChatGPT and other large language model (LLM) tools are prone to several risks,” said Ron Friedmann, senior director analyst in in the Gartner Legal & Compliance Practice. “Legal and compliance leaders should assess if these issues present a material risk to their enterprise and what controls are needed, both within the enterprise and its extended enterprise of third and nth parties. Failure to do so could expose enterprises to legal, reputational and financial consequences.”
The six ChatGPT risks that legal and compliance leaders should evaluate include:
Risk 1 – Fabricated and Inaccurate Answers
Perhaps the most common issue with ChatGPT and other LLM tools is a tendency to provide incorrect – although superficially plausible – information.
“ChatGPT is also prone to ‘hallucinations,’ including fabricated answers that are wrong, and nonexistent legal or scientific citations,” said Friedmann. “Legal and compliance leaders should issue guidance that requires employees to review any output generated by ChatGPT for accuracy, appropriateness and actual usefulness before being accepted.”
Risk 2 – Data Privacy and Confidentiality
Legal and compliance leaders should be aware that any information entered into ChatGPT, if chat history is not disabled, may become a part of its training dataset.
“Sensitive, proprietary or confidential information used in prompts may be incorporated into responses for users outside the enterprise,” said Friedmann. “Legal and compliance need to establish a compliance framework for ChatGPT use, and clearly prohibit entering sensitive organizational or personal data into public LLM tools.”
Risk 3 – Model and Output Bias
Despite OpenAI’s efforts to minimize bias and discrimination in ChatGPT, known cases of these issues have already occured, and are likely to persist despite ongoing, active efforts by OpenAI and others to minimize these risks.
“Complete elimination of bias is likely impossible, but legal and compliance need to stay on top of laws governing AI bias, and make sure their guidance is compliant,” said Friedmann. “This may involve working with subject matter experts to ensure output Is reliable and with audit and technology functions to set data quality controls.”
Risk 4 – Intellectual Property (IP) and Copyright risks
ChatGPT in particular is trained on a large amount of internet data that likely includes copyrighted material. Therefore, it’s outputs have the potential to violate copyright or IP protections.
“ChatGPT does not offer source references or explanations as to how its output is generated,” said Friedmann. “Legal and compliance leaders should keep a keen eye on any changes to copyright law that apply to ChatGPT output and require users to scrutinize any output they generate to ensure it doesn’t infringe on copyright or IP rights.”
Risk 5 – Cyber Fraud Risks
Bad actors are already misusing ChatGPT to generate false information at scale (e.g., fake reviews). Moreover, applications that use LLM models, including ChatGPT, are also susceptible to prompt injection, a hacking technique in which malicious adversarial prompts are used to trick the model into performing tasks that it wasn’t intended for such as writing malware codes or developing phishing sites that resemble well-known sites.
“Legal and compliance leaders should coordinate with owners of cyber risks to explore whether or when to issue memos to company cybersecurity personnel on this issue,” said Friedmann. “They should also conduct an audit of due diligence sources to verify the quality of their information.”
Risk 6 – Consumer Protection Risks
Businesses that fail to disclose ChatGPT usage to consumers (e.g., in the form of a customer support chatbot) run the risk of losing their customers’ trust and being charged with unfair practices under various laws. For instance, the California chatbot law mandates that in certain consumer interactions, organizations must disclose clearly and conspicuously that a consumer is communicating with a bot.
The press release in full is here.