Positive Business Outcomes Require Trustworthy AI and Intelligent Process Automation
Andrew Pery
August 18, 2022
“An integral part of AI-enabled process automation is implementation of trustworthy AI best practices.”
Andrew Pery, AI Ethics Evangelist and Digital Transformation Consultant
Recently, I had the opportunity to attend and participate in the Corporate Innovation Summit, held in Toronto, which is part of the annual Collision Conference that brought together over 30,000 industry leaders, academics, and thought leaders to address the future of innovation. Key themes included building frameworks for algorithmic governance, digital trust and identity management, taking responsibility for the unintended consequences of tech innovation, 5G, blockchain, business in the metaverse, and marketing in the digital future.
I had the privilege of participating in three roundtable discussions that explored how AI-driven digital identity, customer onboarding, and regulatory compliance impact consumer experiences within the financial services, insurance, and retail market segments. It was clear that AI could either positively impact consumer outcomes or exploit consumers, depending on how ethically it is applied. The takeaways below explore:
- Digital Trust and Identity Management
- AI Governance and Regulatory Frameworks
- Democratization of AI
- Three Recommendations to Fully Benefit From the Business Value of AI Technologies
Digital Trust and Identity Management: Adoption trends and challenges associated with digital identity
COVID-19 accelerated the trend toward AI-powered chatbots, virtual financial assistants, and touchless customer onboarding using AI-based biometric identity verification. This trend is confirmed by Cap Gemini research i, which shows that 78% of surveyed consumers plan to increase their use of AI technologies, including digital identity management, in their interactions with financial services organizations. Biometric identity verification is more secure and removes onboarding friction, as evidenced in a PYMNTS.com survey ii, which revealed that nearly 75% of US consumers “rely on their memory to recall passwords” and 90% reuse the same password across many sites.
These inherent benefits notwithstanding, the roundtable discussion raised a number of challenges. Chief among them is continued consumer distrust of AI technologies and the impact of their ubiquitous nature on privacy and security rights: 30% of survey respondents indicated that they would be more comfortable sharing their biometric information if their financial service providers were more transparent in explaining how that information is collected, managed, and secured. Another challenge discussed was the development and adoption of international standards iii for digital identity management and the need for collaboration iv among diverse stakeholders, including regulators. Consumers also need better control over their digital identities.
Today, digital identity management is typically a centralized process, which is prone to cyberattacks and to privacy breaches. There is momentum toward a more decentralized digital identity management framework that gives consumers more control over their own digital identities without dependence on third-party service providers.
AI Governance and Regulatory Frameworks: AI governance best practices designed to minimize AI bias and engender consumer trust
While, according to the OECD’s tracker of national AI policies, there are over 700 AI regulatory initiatives under development in over 60 countries, no legislatively mandated AI regulations are yet in place. There are, however, voluntary codes of conduct and ethical AI principles developed by standards organizations such as the Institute of Electrical and Electronics Engineers (IEEE) v and the National Institute of Standards and Technology (NIST) vi. The AI regulatory framework is nonetheless rapidly evolving, and AI regulation is inevitable, as evidenced by recent developments at the European Commission, the Canadian federal government, the US Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), and the Federal Reserve Board. All are flexing their regulatory muscles through enforcement mechanisms to protect consumers against adverse impacts of AI applications that may produce discriminatory, albeit unintended, outcomes. The bottom line is that implementation of AI governance best practices is no longer a nice-to-have initiative but an imperative.
Democratization of AI: How AI is rapidly proliferating from a highly technical domain of data scientists to line of business users
The shift is toward no-code / low-code AI application development, a market which, according to Business Wire vii, is forecast to reach $45.5 billion by 2025. The main driver is hyper-productivity: Forrester viii estimates that low-code development improves application development productivity by a factor of ten and delivers faster time to value. While no-code / low-code development of AI-based applications offers unprecedented speed and ease of use, it also creates potential challenges, foremost among them AI governance, risk, and compliance risks that arise when applications escape scrutiny by data scientists and IT professionals.
“Low code can be like fast food: delivered quickly and in bright packaging, but bad for you, your community, and your ecosystem.”
Sean O'Brien
As Sean O’Brien ix, an academic at Yale Law School and the founder of Yale’s Privacy Lab, recently warned: “Low code can be like fast food: delivered quickly and in bright packaging, but bad for you, your community, and your ecosystem.” While no-code / low-code AI development delivers inherent efficiency improvements and faster time to value, its implementation must include comprehensive testing to ensure that the application performs in accordance with its design objectives, that potential bias in the training data set is removed, and that it is secure from adversarial AI attacks that can undermine algorithmic outcomes.
Three recommendations organizations may want to consider to fully benefit from the business value of AI technologies
1. Take a data-driven approach
A data-driven approach helps determine where applications of AI technologies may have the greatest impact before proceeding with implementation. Is the goal to improve customer engagement, to realize operational efficiencies, or to mitigate compliance risks? Each of these business drivers requires an understanding of how the underlying processes execute. For example, customer onboarding processes are case-based, with a high degree of variability in process execution. Here, task mining enables organizations to capture interactions associated with onboarding processes and surface the time customer-facing staff spend on onboarding tasks. It shows how escalations and exceptions are handled and identifies variations in process execution, roadblocks, and their root causes. Based on such data-driven analysis, organizations can make informed business decisions about the impact of implementing AI-based customer onboarding solutions.
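The kind of task-mining analysis described above can be sketched in a few lines. This is a minimal illustration, not a real task-mining product: the event records, task names, and field layout are all hypothetical assumptions standing in for the desktop interactions such tools actually capture.

```python
from collections import defaultdict

# Hypothetical task-mining event records captured from customer-facing
# desktops: (timestamp, agent, task, duration_seconds, was_exception).
events = [
    ("2022-08-01T09:00", "agent_1", "identity_verification", 240, False),
    ("2022-08-01T09:05", "agent_1", "document_upload",        600, True),
    ("2022-08-01T09:20", "agent_2", "identity_verification", 180, False),
    ("2022-08-01T09:30", "agent_2", "document_upload",        540, True),
]

# Aggregate total time and exception rate per onboarding task -- the
# data-driven signal used to prioritize automation candidates.
totals = defaultdict(lambda: {"seconds": 0, "count": 0, "exceptions": 0})
for _, _, task, seconds, exception in events:
    t = totals[task]
    t["seconds"] += seconds
    t["count"] += 1
    t["exceptions"] += int(exception)

# Report tasks in descending order of total time spent.
for task, t in sorted(totals.items(), key=lambda kv: -kv[1]["seconds"]):
    rate = t["exceptions"] / t["count"]
    print(f"{task}: {t['seconds']}s total, exception rate {rate:.0%}")
```

On this toy data, document upload surfaces as the most time-consuming and exception-prone task, which is exactly the sort of finding that would justify automating it first.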
2. Combine task mining and process mining
Second, combining task mining with process mining can further help organizations gain insight into end-to-end process execution by visualizing the flow of work through process stages and exposing delays, bottlenecks, and outliers. Process mining gives line-of-business users facts and figures from real-time event log data to back up their decisions, assess the value of AI automation opportunities, and continuously monitor the performance of AI systems. Combined with machine learning, process mining can deliver highly integrated, automated insights that forecast processes in their future state, so organizations can take action to ensure positive outcomes.
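At its core, process mining reconstructs process flows from event logs of (case, activity, timestamp) rows. The sketch below, with an entirely hypothetical onboarding log, shows how average waiting times between consecutive stages expose the bottleneck; production tools do far more, but the underlying idea is this simple.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log: (case_id, activity, timestamp) rows, the
# shape of data process-mining tools typically consume.
log = [
    ("case_1", "application_received", "2022-08-01T09:00"),
    ("case_1", "kyc_check",            "2022-08-01T10:30"),
    ("case_1", "account_opened",       "2022-08-01T17:30"),
    ("case_2", "application_received", "2022-08-01T09:15"),
    ("case_2", "kyc_check",            "2022-08-01T12:15"),
    ("case_2", "account_opened",       "2022-08-02T09:15"),
]

# Group each case into its ordered trace of (activity, timestamp) pairs.
traces = defaultdict(list)
for case_id, activity, ts in log:
    traces[case_id].append((activity, datetime.fromisoformat(ts)))

# Waiting time (hours) between consecutive stages, collected across cases;
# the transition with the largest average marks the likely bottleneck.
waits = defaultdict(list)
for steps in traces.values():
    steps.sort(key=lambda s: s[1])
    for (a, t1), (b, t2) in zip(steps, steps[1:]):
        waits[(a, b)].append((t2 - t1).total_seconds() / 3600)

for (a, b), hours in sorted(waits.items(),
                            key=lambda kv: -sum(kv[1]) / len(kv[1])):
    print(f"{a} -> {b}: avg {sum(hours) / len(hours):.1f}h wait")
```

Here the KYC-to-account-opening transition dominates the waiting time, pointing to where automation would pay off most.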
3. Implement intelligent process automation
Once organizations have the benefit of data-driven insights into automation opportunities, they should consider implementing more advanced intelligent process automation solutions. In particular, AI applications are adept at automating highly labor-intensive and error-prone case-based and document-centric processes such as compliance auditing and KYC/AML in financial services. Organizations can take advantage of no-code / low-code AI applications that deliver pre-trained skills designed to understand and extract information from all types of documents, such as invoices, purchase orders, receipts, W-2 forms, utility bills, and insurance claims, saving development time and gaining quicker ROI. AI-driven document skills are poised to help organizations automate up to 95% of document processing, minimize repetitive work, and reduce document processing time by 50%.
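The essence of a document skill is turning unstructured text into structured fields. The toy sketch below does this with regular expressions over a made-up invoice; real pre-trained skills use machine-learning models over document images and layouts, so the field names, labels, and patterns here are purely illustrative assumptions.

```python
import re

# Made-up invoice text; a production "document skill" would classify the
# document and locate fields with pre-trained ML models, not regexes.
invoice_text = """
Invoice Number: INV-2022-0042
Invoice Date: 2022-08-18
Total Due: $1,250.00
"""

# Hypothetical field names and label patterns for this toy layout.
patterns = {
    "invoice_number": r"Invoice Number:\s*(\S+)",
    "invoice_date":   r"Invoice Date:\s*([\d-]+)",
    "total_due":      r"Total Due:\s*\$([\d,.]+)",
}

# Extract each field into a structured record a downstream process
# (e.g. an AP or KYC workflow) could consume.
fields = {}
for name, pattern in patterns.items():
    match = re.search(pattern, invoice_text)
    if match:
        fields[name] = match.group(1)

print(fields)
```

The payoff of the pre-trained approach is precisely that no one has to hand-write and maintain patterns like these for every document type and layout variation.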
The common theme from the conference is that an integral part of AI-enabled process automation is implementation of trustworthy AI best practices. Designing ethics into AI starts with determining what matters to stakeholders such as customers, employees, regulators, and the general public. Ethical use of AI ought not to be considered only a legal and moral obligation, but a business imperative: being transparent in the application of AI makes good business sense, fosters trust, and engenders brand loyalty.
Sources:
i How to drive AI at scale to transform the financial services customer experience | Cap Gemini
ii AI Removes Friction From Challenger Bank Onboarding | PYMNTS.com, 2021
iii Digital Identity Guidelines | National Institute of Standards and Technology (NIST)
iv "Digital ID: Three big opportunities – and three challenges" by Neil Butters | Interac.ca, 2019
vi Artificial Intelligence | National Institute of Standards and Technology (NIST)
ix "Low Code: Satisfying Meal or Junk Food?" by Pam Baker | Information Week, 2022