Editor’s Note: From time to time, ComplexDiscovery highlights publicly available or privately purchasable announcements, content updates, and research from cyber, data, and legal discovery providers, research organizations, and ComplexDiscovery community members. While ComplexDiscovery regularly highlights this information, it does not assume any responsibility for content assertions.

Contact us today to submit recommendations for consideration and inclusion in ComplexDiscovery’s data and legal discovery-centric service, product, or research announcements.


Background Note: This article, based on Gartner’s announcement of the report “Gartner Identifies Six ChatGPT Risks Legal and Compliance Leaders Must Evaluate,” underscores the importance of assessing and mitigating six identified risks associated with the use of generative AI tools like ChatGPT. These risks include incorrect or fabricated responses, breaches of data privacy and confidentiality, inherent model bias, potential IP and copyright infringement, cyber fraud, and consumer protection concerns.

This summary is significant for cybersecurity, information governance, and legal discovery professionals because it brings to light potential vulnerabilities that could be exploited, leading to misinformation, data breaches, or even cyber fraud. It emphasizes the need to ensure compliance with AI bias laws, copyright and IP laws, and consumer protection laws, thus highlighting the important role of these professionals in setting guidelines and controls for the usage of such AI tools. Additionally, the article underscores the importance of continuous auditing, monitoring, and improvement in managing these AI-related risks.

Industry Backgrounder*

Managing ChatGPT in the Corporate Environment? Six Crucial Risk Factors Highlighted by Gartner

ComplexDiscovery Summary of Gartner Recommendations from Ron Friedmann

In a world where AI tools like ChatGPT, a large language model (LLM), are increasingly used, Gartner Inc., a research and advisory company, stresses the importance of vigilance. Ron Friedmann, Senior Director Analyst in Gartner’s Legal & Compliance Practice, has raised concerns that outputs from ChatGPT and similar tools are susceptible to numerous risks. He emphasizes that legal and compliance leaders must determine whether these concerns pose significant threats to their organizations and devise appropriate strategies to manage them, in order to avert serious legal, reputational, and financial consequences.

One risk Friedmann points out is that ChatGPT and other LLM tools can generate responses that seem plausible yet are incorrect. To counter this, he advises organizations to require that ChatGPT outputs be assessed for accuracy and relevance before they are accepted.

A further risk lies in data privacy and confidentiality. Information entered into ChatGPT could potentially become part of its training dataset, raising alarm about the misuse of sensitive, proprietary, or confidential information. To guard against this, Friedmann suggests that legal and compliance departments implement strict compliance frameworks that prohibit the input of sensitive personal or organizational data into public LLM tools.

Despite efforts by OpenAI to limit bias in ChatGPT, there have been known instances of bias, which constitutes another risk. Friedmann urges legal and compliance leaders to stay informed about laws governing AI bias and ensure their guidelines are compliant.

Risks also arise in the realm of intellectual property (IP) and copyright. Because ChatGPT is trained on extensive internet data, including copyrighted content, its outputs could infringe copyright or IP protections. Friedmann therefore emphasizes the importance of leaders staying current on copyright law developments relevant to ChatGPT’s output.

Given the instances of misuse of ChatGPT by malicious actors for the spread of false information, there is a real threat of cyber fraud. To counter this, Friedmann suggests that legal and compliance leaders work closely with cyber risk managers to keep company cybersecurity personnel well-informed.

Lastly, a failure to disclose the usage of ChatGPT, especially when it functions as a customer support chatbot, can result in consumer protection risks. Businesses might face legal action and lose customer trust. In light of this, Friedmann advises that organizations ensure their usage of ChatGPT complies with all relevant laws and proper disclosures have been made to customers.

For further details about these risks, Gartner’s clients can refer to the company’s report titled “Quick Answer: What Should Legal and Compliance Leaders Know About ChatGPT Risks?”. Non-clients can gain more information from another of Gartner’s reports, “5 Corporate Governance Trends Affecting the Board’s Oversight Role in 2023”.

Read the original announcement.

Reference: Gartner Identifies Six ChatGPT Risks Legal and Compliance Leaders Must Evaluate (May 18, 2023). Gartner. Available at: https://www.gartner.com/en/newsroom/press-releases/2023-05-18-gartner-identifies-six-chatgpt-risks-legal-and-compliance-must-evaluate (Accessed: May 20, 2023).


*Assisted by GAI and LLM Technologies


Source: ComplexDiscovery
