
Managing AI Laws: Maintaining Security & Compliance in an Algorithmic World

October 4, 2024

Generative artificial intelligence (AI), the decade's defining technology, has seen remarkable uptake in the past year, with McKinsey reporting that 65% of global companies have deployed two or more AI applications to support key business functions.


As we forge ahead into an algorithmic society, the question isn't whether AI will be regulated, but how organizations will navigate the scope of that regulation. With many regulations moving through ratification, any organization involved with AI, whether as a developer, service provider, or deployer, should prepare for the wave of rules soon to be in force across multiple regions.

 

Is the corporate world ready for increased regulatory scrutiny?

In most cases, no.

Many organizations lack the governance and accountability infrastructure needed to meet regulatory standards such as the EU AI Act or Federal Trade Commission (FTC) enforcement against unfair, deceptive, or abusive practices. This is evident from widespread non-compliance with the General Data Protection Regulation (GDPR), despite its having been enacted over six years ago.

The recently introduced EU AI Act, though often overlooked, imposes even more exhaustive requirements. It specifies over 50 distinct requirements for an ethics officer alone, for example, expertise few organizations have on staff and readily available. While much of the act’s enforcement is deferred until 2026, preparing organizations to meet these requirements remains a hurdle.

 

Cybersecurity concerns

Generative AI opens new attack vectors across training data, model outputs, and the models themselves. Every AI deployment begins with data, much of which is never reviewed by human eyes; indeed, eliminating that manual review is a driving motivator for AI adoption in many businesses.

As a result, outdated information or personal data is often inadvertently fed into models, introducing skewed results and opportunities for data theft. Businesses must think proactively about prevention to stay ahead of these threats, for instance by screening data before it ever reaches a model, as sketched below.
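As a purely illustrative sketch, not an ABBYY product or a prescribed method, such a pre-ingestion screening step might look like the following Python snippet; the regex patterns, staleness cutoff, and function names are all assumptions made for the example.

```python
import re
from datetime import date, timedelta

# Illustrative PII patterns; a real pipeline would use a vetted
# PII-detection library tuned to the jurisdictions you operate in.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

# Assumed retention policy: anything untouched for ~3 years is "outdated".
STALENESS_CUTOFF = date.today() - timedelta(days=3 * 365)

def screen_record(text: str, last_updated: date) -> list[str]:
    """Return the reasons a record should be quarantined before training."""
    reasons = [f"possible {name}" for name, pattern in PII_PATTERNS.items()
               if pattern.search(text)]
    if last_updated < STALENESS_CUTOFF:
        reasons.append("outdated: last updated before retention cutoff")
    return reasons

# Usage: quarantine flagged records instead of feeding them to the model.
flags = screen_record("Contact jane.doe@example.com re: invoice", date(2019, 5, 1))
if flags:
    print("Quarantine:", flags)  # -> possible email, outdated record
```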

 

Inherent AI biases

Key tech figures, including OpenAI CEO Sam Altman, have conceded that AI carries an inherent risk of devastating consequences. Mitigating AI’s potential for harmful bias “is just one dimension to harnessing its potential.” Regardless of regulatory compliance, organizations must be answerable for managing AI risks. The non-profit ForHumanity has proposed 20 areas of responsibility for top managers and supervisory bodies, illustrated below.

[Figure: AI management pillars of oversight]

But are businesses equipped with the experts needed to enforce these pillars? They need two types: those who understand algorithmic risk and those who can manage ethical challenges. While the former are likely scattered throughout organizations, the latter are more elusive. Nonetheless, teams composed of both are needed to collaborate with management on implementing the twenty pillars of accountability.

 

Promoting a “security first” culture

A security-first approach demands constant commitment and communication from top to bottom. Cybersecurity training should be tailored to what the business does, rather than simply going through the motions to check a box. Additionally, security should be considered at every stage of a new initiative. It's crucial to remember that ethical decisions will inevitably be made alongside technological choices.

 

Risk management

AI risk management frameworks should take a human-centric approach. The tech sector's typical “move fast and break things” philosophy doesn't align with this need, so a cultural shift towards constant, committed responsibility is essential.

To manage risk, start by understanding the context and purpose of the AI use case, then identify and analyze the risks before devising control measures. Repeat this cycle continuously, supported by communication, consultation, monitoring, and review. A minimal sketch of one pass through such a cycle follows.
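To make the cycle concrete, here is a hypothetical sketch of a single pass; the likelihood-times-impact scoring and the threshold are illustrative assumptions for the example, not part of any specific framework mentioned here.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    likelihood: float   # 0..1, analyst's estimate
    impact: float       # 0..1, analyst's estimate
    controls: list[str] = field(default_factory=list)

    @property
    def score(self) -> float:
        # Assumed scoring model: likelihood x impact.
        return self.likelihood * self.impact

def run_cycle(context: str, risks: list[Risk], threshold: float = 0.25) -> None:
    """One pass: establish context, analyze risks, flag those needing
    control measures, and report everything for monitoring and review."""
    print(f"Context: {context}")
    for risk in sorted(risks, key=lambda r: r.score, reverse=True):
        if risk.score >= threshold and not risk.controls:
            risk.controls.append("TODO: devise and assign a control measure")
        status = risk.controls if risk.controls else "accepted (below threshold)"
        print(f"- {risk.description}: score={risk.score:.2f}, {status}")

# Usage: rerun every review period with updated estimates and findings.
risks = [
    Risk("Personal data leaks via model output", likelihood=0.4, impact=0.9),
    Risk("Outdated training data skews results", likelihood=0.6, impact=0.3),
]
run_cycle("Invoice-processing model, EU deployment", risks)
```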

 

Independent audit of AI

Despite the best intentions, AI compliance can sometimes take a back seat to other projects. Independent auditing of AI systems can help shift this mindset and promote compliance by design. As in financial auditing, where the introduction of internal audit fundamentally changed attitudes and approaches towards compliance, independent AI audits will likely do the same, fostering a mutual sense of accountability between businesses and individual auditors.

 

ABBYY’s trust-based AI culture

ABBYY fosters a trustworthy AI culture by ensuring everyone understands the organization's goals and objectives. Honesty, evidence-based decision making, and openness about the implications of AI solutions are key pillars of this culture. Everyone should be prepared to dig deep, seek information, scrutinize basic issues, and provide feedback to truly understand AI system usage.

ABBYY recently hosted four webinars for our yearly Intelligent Automation Month. The session on the inevitable wave of AI regulations featured insights from ABBYY AI Ethics Evangelist Andrew Pery, ABBYY Chief Information Security Officer Clayton C. Peddy, and ForHumanity Executive Director Ryan Carrier. To explore the topic in greater detail, request access to the recording.

Interested in intelligent automation solutions, but not sure where to start? Tell us about your business process challenges and we can find a solution together. We look forward to learning about your digital transformation journey.

Contact us