Building Trust through Security at ABBYY
by Clayton Peddy, Chief Information Security Officer
The Managing Director of the International Monetary Fund, Kristalina Georgieva, asserted in January 2024 that artificial intelligence (AI) will affect almost 40% of jobs around the world, replacing some and complementing others. Nearly a year later, AI has become ubiquitous within companies, with enterprise AI usage skyrocketing by 595%. This is forcing legislative bodies, governing boards, and companies to strike a careful balance of policies to tap its potential.
Security ranks chief among both the potential benefits of and the concerns about AI technology. While 93% of security professionals say that AI can strengthen cybersecurity, 77% of organizations find themselves unprepared to defend against AI threats. Compounding the conundrum is the level of trust enterprise decision makers have in AI. In the latest ABBYY State of Intelligent Automation Report – AI Trust Barometer, 50% of those who don’t trust AI cite concerns about cybersecurity and data breaches. Furthermore, 47% and 38% cite concerns about the accuracy of AI models’ interpretation and analysis, respectively.
At ABBYY, we understand that trust is paramount for leading companies, governments, and institutions worldwide when implementing AI. In this edition of The Intelligent Enterprise—the Security issue—we’ll explore security and trust when using AI in automation. Topics include guidance on overcoming risk and uncertainty with GenAI, taking the guesswork out of when and where to use AI to improve business processes, and key action items to ensure privacy from The Privacy Guy.
The ABBYY AI approach
As we explore AI security and trust-related topics, it’s important we reiterate that our purpose-built AI solutions are founded on our unwavering commitment to transparency in our ethical AI principles, data usage, security, and compliance. Our approach to trustworthy AI is rooted in six essential ethical principles that guide every aspect of our product development:
- Transparency: Informing our customers about our data security and trustworthy AI practices.
- Fairness and bias mitigation: Striving to prevent biases in AI outputs.
- Accountability: Ensuring responsibility for outcomes at every level.
- Privacy and data protection: Prioritizing customer rights with stringent data protection practices.
- Robustness and reliability: Implementing industry standards, best practices, and regular auditing.
- Human-centric design: Keeping our customers’ needs at the core of innovation.
Absolute data integrity
Additionally, customers can rest assured that we have instituted rigorous technological and organizational measures to safeguard customer data. We thoroughly inform customers about how their data is processed and protected, ensuring they understand every step involved. All of our intelligent automation solutions—from process intelligence to intelligent document processing—give customers the power to fully control their data and ensure its security.
Whether deploying on-premises or cloud-based solutions, we steadfastly adhere to industry standards and best practices, undergo comprehensive, regular external audits, and implement strong encryption and staunch access controls to ward off potential threats. Furthermore, we collaborate with non-profit organizations to guide customers through the AI regulatory space, helping them navigate the audit processes required for compliance with global data privacy and cybersecurity standards.
Purpose-built AI for today's challenges
For the past 35 years, our innovations in AI, security, and compliance have provided our customers with the tools necessary to focus on what truly matters—helping them achieve their goals with confidence in their data's security and integrity.
We recognize that the threat landscape evolves continuously. To stay ahead, we’ll continue to adapt our security measures to surpass emerging data protection regulations and client needs. At ABBYY, we don’t just build AI; we build trust.