
ABBYY Trust Center Exemplifies Our Commitment to Security, Privacy, and Ethical Use of AI

Clayton Peddy

December 5, 2024

The rate of artificial intelligence (AI) innovation is expected to compound exponentially as we enter the Intelligent Age. Rapid advancements in AI, quantum computing, and blockchain are converging to empower people to live and work in ways we never could have imagined. In particular, advances in deep learning, machine learning, natural language processing, and computer vision are transforming AI and automation into intelligent automation.

Many anticipate that a lighter regulatory touch on AI technologies in the US will accelerate investment in AI infrastructure and strengthen American competitiveness. However, a pro-innovation AI strategy does not necessarily mean an absence of guardrails against the adverse impacts of AI.

Beyond the regulations instituted by European governing bodies and US federal and state agencies, companies must also be proactive in striking a balance between AI innovation and safeguarding the fundamental human and economic rights of consumers.

At ABBYY, we are committed to advancing reliable AI through ethical AI principles and best practices for transparency, confidentiality, privacy, security, and integrity in our software development processes and use of data. With transparency in mind, we have opened the ABBYY Trust Center, a centralized platform showcasing our commitment to reliable AI. This initiative streamlines access to important compliance and security information. You’ll find:

  • Compliance reports
  • Security policies
  • Security environment details
  • Legal and privacy practices
  • ESG information

In addition to the documentation available in the ABBYY Trust Center, you can learn more about ABBYY’s approach to ethical AI in our day-to-day operations, privacy by design, and why we’re trusted by global enterprises here.

You can also read more about our advocacy for trustworthy AI in the following articles:

Is Generative AI Trustworthy?
Can AI regulations help build trust in the technology?
Bridging the AI Trust Gap - Unite.AI
The Ethical AI Dilemma: Navigating the Future of AI Privacy and Compliance
Banks Meeting Compliance & Fighting Fraud with AI
Learning to trust generative AI - LeadDev

We believe collaboration will be key to successful adherence to AI regulations and the practice of trustworthy AI policies. ABBYY supports the non-profit public charity ForHumanity in helping financial services organizations meet new regulations on the use of artificial intelligence (AI). ABBYY co-leads a working group focused on establishing audit criteria for the use of Artificial Intelligence, Algorithmic, and Autonomous systems (AAA), and supports the development of independent audit criteria.

We strongly believe organizations must be attentive to the possible adverse consequences of decisions based on biased or misused AI models, which can lead to financial loss, poor business and strategic decision-making, or damage to a banking organization’s reputation. Ensuring the ethical use of AI is paramount to consumer trust and safety.

Visit the ABBYY Trust Center
Clayton Peddy

Clayton Peddy is Chief Information Security Officer (CISO) at ABBYY, bringing over two decades of experience in cybersecurity, technology leadership, and software development. He has held key roles at leading industry players such as OutSystems and Citrix. As ABBYY's CISO, Clayton leads the company's information security initiatives, reinforcing its commitment to maintaining the highest standards of data protection, regulatory compliance, and innovation for its customers and partners. Follow Clayton on LinkedIn.