Intro: From AI Compliance to Competitive Advantage

In Brief

  • As companies deploy AI for a growing range of tasks, adhering to laws, regulations and ethical standards will be critical to building a sound AI foundation.

  • 80% of companies plan to increase investment in Responsible AI, and 77% see regulation of AI as a priority.

  • Most companies (69%) have started implementing Responsible AI practices, but only 6% have operationalized their capabilities to be responsible by design.

The rewards of Responsible AI

In a recent report, The Art of AI Maturity, Accenture identified a small group (12%) of high-performing organizations that are using AI to generate 50% more revenue growth while outperforming on customer experience (CX) and Environmental, Social and Governance (ESG) metrics. Among other success factors that have a combinatorial impact on business results, these Achievers are, on average, 53% more likely than others to be responsible by design. That means that they apply a responsible data and AI approach across the complete lifecycle of all their models, helping them engender trust and scale AI with confidence.

Being responsible by design will become more beneficial over time, especially as governments and regulators consider new standards for the development and use of AI. Countries such as the United Kingdom, Brazil, and China are already taking action, either by evolving existing requirements related to AI (for example, in regulation such as GDPR), or through the development of new regulatory policy.

Will the regulation of AI limit innovation?

Some people are concerned that regulation might stifle innovation, but I don't think it needs to be that way. Consider the following analogy. If you build a fence at the edge of a cliff, people can go right up to the fence and look over it. But if there's no fence, people have to stand well back from the edge. Regulation is a little bit like that fence for AI: having regulation or strong guidelines lets companies know just how far they can push their innovation, rather than holding back out of uncertainty.

How will the proposed regulations change how organizations view responsible AI?

Previously, many organizations were motivated to do the right thing by the risk of reputational damage if they didn't. But the EU has now introduced draft AI legislation, which has really changed the dynamic, and other governments around the world are also considering what regulation, guidance or standards they should introduce. These EU regulations, when they do come into law, will most likely set a kind of global standard for the development of secure, trustworthy and ethical AI. That means noncompliance is no longer just an oversight: companies that don't build the right governance and controls into AI products and services risk a fine as high as 6% of their global annual turnover. So ultimately, if they want to protect their profits, they have to take Responsible AI seriously.

We surveyed 850 C-suite executives across 17 geographies and 20 industries to understand organizations’ attitudes toward AI regulation and assess their readiness to embrace it. Here’s what we learned.

The role of regulation

Our research shows that awareness of AI regulation is generally widespread and that organizations are well-informed.

  • Nearly all (97%) respondents believe that regulation will impact them to some extent

  • 95% believe that at least part of their business will be affected by the proposed EU regulations specifically

Interestingly, many organizations see regulatory compliance as an unexpected source of competitive advantage. The ability to deliver high-quality, trustworthy AI systems that are regulation-ready will give first movers a significant advantage in the short term, enabling them to attract new customers, retain existing ones and build investor confidence.

  • 43% think it will improve their ability to industrialize and scale AI

  • 36% believe it will create opportunities for competitive advantage/differentiation

  • 41% believe it can help attract/retain talent

Our research also reveals that organizations are prioritizing AI compliance and want to invest. Coupled with the opinion that Responsible AI can fuel business performance, it's unsurprising that the majority of respondents plan to increase investment in Responsible AI.

  • 77% indicated that future regulation of AI is a current company-wide priority

  • More than 80% say that they’ll commit 10% or more of their total AI budget to meeting regulatory requirements by 2024

Responsible AI readiness

However, most organizations have yet to turn these favorable attitudes and intentions into action.

  • Alarmingly, we found that only 6% of organizations have built their Responsible AI foundation and put their principles into practice. Organizations in this category are prepared to accommodate near-term and ongoing regulatory changes. Because they’re responsible by design, these companies can move past compliance and focus on competitive advantage.

  • A majority of respondents (69%) have some dimensions in place but haven’t operationalized a robust Responsible AI foundation. This group understands the value of Responsible AI, but they have yet to embed it across their entire organization.

  • Finally, 25% of respondents have yet to establish any meaningful Responsible AI capabilities. This group will have the most work to do to prepare their organizations for regulatory change.

While most companies have begun their Responsible AI journey, the majority (94%) are struggling to operationalize across all key elements of Responsible AI.

The question becomes: why? We identified a few primary barriers.

The biggest barrier lies in the complexity of scaling AI responsibly — an undertaking that involves multiple stakeholders and cuts across the entire enterprise and ecosystem. Our survey revealed that nearly 70% of respondents do not have a fully operationalized and integrated Responsible AI Governance Model. As new requirements emerge, they must be baked into product development processes and connected to other regulatory areas, such as privacy, data security and content.

Additionally, organizations may be unsure what to do while they wait for AI regulation to be defined. Uncertainty around rollout process/timing (35%) and the potential for inconsistent standards across regions (34%) were the largest concerns in relation to future AI regulation. This lack of clarity can lead to strategic paralysis as companies adopt a “wait and see” approach. As experienced with GDPR, reactive companies have little choice but to be compliance-focused, prioritizing the specific requirements rather than the underlying risk, which can lead to problems down the road…and value left on the table.

Consider these common challenges:

  • Responsible AI is cross-functional, but typically lives in a silo.

    Most respondents (56%) report that responsibility for AI compliance rests solely with the Chief Data Officer (CDO) or equivalent, and only 4% of organizations say that they have a cross-functional team in place. Having buy-in and support from across the C-suite will establish priorities for the rest of the organization.

  • Risk management frameworks are a requirement for all AI, but they aren’t one-size-fits-all.

    Only about half (47%) of the surveyed organizations have developed an AI risk management framework. What's more, we learned that 70% of organizations have yet to implement the ongoing monitoring and controls required to mitigate AI risks. AI integrity cannot be judged at a single point in time; it requires ongoing oversight.

  • There is power in the AI ecosystem, but you’re only as strong as your weakest partner.

    AI regulation will require companies to think about their entire AI value chain (with a focus on high-risk systems), not just the elements that are proprietary to them. 39% of respondents see one of their greatest internal challenges to regulatory compliance arising from collaborations with partners, and only 12% have included Responsible AI competency requirements in supplier agreements with third-party providers.

  • Culture is key, but talent is scarce.

    Survey respondents reported a lack of talent familiar with the details of AI regulation, with 27% citing this as one of their top three concerns. Plus, more than half (55.4%) do not yet have specific roles for Responsible AI embedded across the organization. Organizations must consider how to attract or develop the specialist skills required for Responsible AI roles, keeping in mind that teams responsible for AI systems should also reflect a diversity of geography, backgrounds and 'lived experience'.

  • Measurement is critical, but success is defined by non-traditional KPIs.

    The success of AI can't be measured solely by traditional KPIs such as revenue generation or efficiency gains, yet organizations often fall back on these familiar benchmarks. In 30% of companies, there are no active KPIs for Responsible AI. Without established technical methods to measure and mitigate AI risks, organizations can't be confident that a system is fair. To our previous point, specialist expertise is required to define and measure the responsible use and algorithmic impact of data, models and outcomes; algorithmic fairness, for example, can be quantified directly, as the sketch after this list shows.
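
To make that last point concrete, here is a minimal sketch of what one technical method for measuring fairness can look like: computing the demographic parity difference, the largest gap in positive-decision rates between groups, over a model's binary outputs. Everything in it (the function, the toy data, the group labels) is an illustrative assumption, not something drawn from the survey or from any specific regulation.

```python
from collections import defaultdict

def demographic_parity_difference(decisions, groups):
    """Largest gap in positive-decision rates between any two groups.

    decisions: iterable of 0/1 model outputs (e.g., loan approvals)
    groups:    iterable of group labels aligned with decisions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy data: approval decisions for applicants in two groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.60 - 0.40 = 0.20
```

A metric like this is one candidate for the non-traditional KPIs described above: it can be tracked for every model release rather than judged once at deployment.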

While there’s no set way to proceed, it’s important to take a proactive approach to building Responsible AI readiness to overcome or avoid the barriers above.

Becoming responsible by design

Based on our experience helping organizations across the globe scale AI for business value, we’ve defined a simple framework to help companies become responsible by design. This framework consists of four key pillars:

  • Define and articulate a Responsible AI mission and principles (supported by the C-suite), while establishing a clear governance structure across the organization that builds confidence and trust in AI technologies.

  • Strengthen compliance with stated principles and current laws and regulations while monitoring emerging ones; develop policies to mitigate AI risk; and operationalize those policies through a risk management framework with regular reporting and monitoring.

  • Develop tools and techniques to support principles such as fairness, explainability, robustness, accountability and privacy, and build these into AI systems and platforms (a sketch of one such check follows this list).

  • Empower leadership to elevate Responsible AI as a critical business imperative and provide all employees with training to give them a clear understanding of Responsible AI principles and how to translate these into actions.
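
As a hedged illustration of the third pillar, the sketch below shows one way such a tool might be built into a platform: an automated drift check that compares a model's live score distribution against a reference distribution using the population stability index (PSI), a common monitoring measure. The bin edges, the 0.2 alert threshold and all names here are assumptions chosen for the example, not something the framework prescribes.

```python
import math

def population_stability_index(expected, actual, bin_edges):
    """Population stability index between two score samples.

    Readings above ~0.2 are conventionally treated as significant
    drift, a signal that the model may need review or retraining.
    """
    def bin_fractions(scores):
        counts = [0] * (len(bin_edges) - 1)
        for score in scores:
            for i in range(len(counts)):
                if bin_edges[i] <= score < bin_edges[i + 1]:
                    counts[i] += 1
                    break
        # Small floor avoids log(0) when a bin is empty.
        return [max(c / len(scores), 1e-6) for c in counts]

    expected_frac = bin_fractions(expected)
    actual_frac = bin_fractions(actual)
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected_frac, actual_frac))

# Scores from validation (reference) vs. scores seen in production.
reference  = [0.10, 0.20, 0.25, 0.40, 0.50, 0.55, 0.70, 0.80]
production = [0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95]
edges = [0.0, 0.25, 0.50, 0.75, 1.01]  # top edge > 1 so 1.0 is binned

psi = population_stability_index(reference, production, edges)
if psi > 0.2:  # assumed alert threshold
    print(f"PSI {psi:.2f}: drift detected, escalate for review")
```

In practice, a check like this would run on a schedule and feed the regular reporting and monitoring called for in the second pillar.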

Organizations can use this framework to inform a Responsible AI foundation that allows them to quickly assess the impact of any new regulation and respond to compliance requirements without starting from scratch each time.

All roads lead to responsibility

Scaling AI can deliver high performance for customers, shareholders and employees, but organizations must overcome common hurdles to apply AI responsibly and sustainably. While they’ve historically cited lack of talent and poor data quality/availability as their biggest barriers to AI adoption, “managing data ethics and responsible AI, data privacy and information security” now tops the list.

Being responsible by design can help organizations clear those hurdles and scale AI with confidence. By shifting from a reactive AI compliance strategy to the proactive development of mature Responsible AI capabilities, they’ll have the foundations in place to adapt as new regulations and guidance emerge. That way, businesses can focus more on performance and competitive advantage.


Ray Eitel-Porter, Managing Director – Applied Intelligence, Global Lead for Responsible AI

Ulf Grosskopf, Managing Director – Accenture Strategy, Data for Growth

Source: From AI Compliance to Competitive Advantage | Accenture