
AI and Compliance: Navigating Challenges, Embracing Ethics

Just over a year after generative AI entered the mainstream, legal and compliance teams are grappling with integrating the technology into daily operations while preparing for a surge in regulations and requirements aimed at it.

The year 2023 raised substantial questions about how AI should be managed internally and where it is best used. While the potential of AI to reshape operational landscapes isn't lost on compliance officers, the central question remains: will 2024 see the practical integration of AI into Governance, Risk, and Compliance (GRC) technology?


AI in Compliance: Anticipated Progress

In the GRC landscape, many vendors boast AI integration, but these claims have so far materialized mostly in beta tests or specialized trials. The dream scenario in which compliance officers seamlessly query GRC tools for detailed risk assessments on third parties remains on the horizon.

From due diligence to Know Your Customer (KYC) controls and sanctions screening, AI is conceivably applicable across diverse compliance areas. Companies large and small are vigorously developing such capabilities – ChatGPT-like tools in particular – sparking debate over the pace of adoption and the enthusiasm of compliance officers.


Implementation trends

Contrary to the buzz, many compliance professionals have yet to harness AI's potential in their day-to-day operations. A survey conducted by Compliance Week last November found that 59 percent of respondents were not using AI to assist with compliance obligations.

Among the respondents who said their compliance departments are relying on AI, the most popular use cases included improving policies and procedures (19 percent), monitoring third parties and communications (12 percent each), and keeping pace with regulatory change (11 percent).

Another recent survey by Bloomberg Law, conducted among legal professionals across various specialties, revealed that despite lingering concerns among many lawyers regarding AI’s risks, such as its potential threat to confidential information and propensity for errors, the industry is starting to embrace the technology.

Fifty-three percent reported utilizing AI for legal research, 42 percent for summarizing legal narratives, 34 percent for reviewing legal documents, and 21 percent for due diligence. These figures indicate an across-the-board increase in AI utilization since last summer.


Challenges and Realities

However, certain barriers will likely hinder the widespread adoption of generative AI in 2024. Most businesses are not yet prepared: much of the value will come from training models on in-house data, but many companies lack the data capabilities and technical know-how to do so. Coupled with the high cost of the technology and uncertainty about how best to integrate it into everyday business, this means the generative AI revolution is unlikely to arrive quickly.

Another significant concern is privacy risk. AI companies – both firms developing AI solutions and businesses using them – need to actively strengthen their compliance programs to ensure they continue to meet the GDPR and other data privacy rules. Existing safeguards – privacy by design, encryption, data anonymization and minimization, robust data-handling policies, privacy settings for user consent, and user training programs – must also be rigorously applied to AI. Regulators are playing catch-up, so companies should expect and prepare for frequent rule changes as new regulations are introduced.
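Two of the safeguards mentioned above, pseudonymization and data minimization, can be illustrated in a few lines of code. The sketch below is a minimal, hypothetical example (the field names and the hard-coded key are assumptions for illustration; a real deployment would fetch the key from a managed secret store), not a complete GDPR-compliant pipeline.

```python
import hashlib
import hmac

# Hypothetical secret key for illustration only; in practice this would
# come from a key vault or secrets manager, never from source code.
SECRET_KEY = b"replace-with-managed-secret"


def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (pseudonymization).

    A keyed HMAC, unlike a plain hash, cannot be reversed by an attacker
    who does not hold the key, even for guessable inputs like emails.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()


def minimize(record: dict, allowed_fields: set) -> dict:
    """Keep only the fields the processing purpose actually requires."""
    return {k: v for k, v in record.items() if k in allowed_fields}


# Example record with more personal data than the task needs.
record = {"name": "Jane Doe", "email": "jane@example.com", "risk_score": 0.82}

# Minimization: drop the name, which this processing purpose does not need.
clean = minimize(record, {"email", "risk_score"})

# Pseudonymization: the email survives only as a keyed hash.
clean["email"] = pseudonymize(clean["email"])
```

The same record can still be linked across systems via the hashed email, but the direct identifiers never leave the boundary, which is the point of both controls.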


Strategic Oversight

When it comes to the role of corporate governance and, more specifically, the boards, this year should be about getting back to basics. Boards should ask fundamental questions such as: How do the board and management perceive AI – as a force for good, a global threat, or somewhere in between? Should we leverage AI to enhance employee productivity and efficacy? Just because you can does not mean you should.

Boards should adopt a cautious strategy, initially treating AI as a company risk rather than an opportunity: let management present the potential benefits of adoption while the board assesses worst-case scenarios. Consider, for instance, the implications of a data breach or a malicious deepfake video featuring the CEO, either of which could swiftly erode a company's hard-earned reputation. Legal and compliance teams can be of great help in running such red-teaming and pre-mortem exercises.



Written by: Vera Cherepanova

Source: INmagazine, Issue 33

For other issues: INmagazine

Note: The opinions and comments in the articles belong to the author or authors and do not reflect the opinions of the Ethics and Reputation Association on the subject.