Is Innovation Outpacing Responsible AI?

AI investment and use have accelerated in response to generative AI’s wildfire adoption. Responsible AI is lagging, but it’s gaining momentum.

Lisa Morgan, Freelance Writer

April 15, 2024

5 Min Read

Ethical AI, responsible AI, trustworthy AI. These terms had little meaning just a few years ago, but their acceptance and momentum have been steadily increasing. The point of responsible AI is to ensure that AI efforts don’t cause harm to individuals or society at large, though the topic is not limited to risk mitigation. Responsible AI can also drive business value.

One catalyst igniting responsible AI is regulation, such as the EU’s AI Act. 

“It’s more about anticipating regulations and making sure you are prepared to be compliant. It’s no longer about waiting until the regulations are fully baked and out,” says Beena Ammanath, technology trust ethics leader at Deloitte and author of Trustworthy AI: A Business Guide for Navigating Trust and Ethics in AI. “I’ve seen [an increasing number] of AI trainings for board members, members of the C-suite and employees from an upskilling or reskilling perspective in terms of AI fluency, how to use AI most effectively and balancing risk with the optimal use of AI.” 

Responsible AI involves several considerations, as discussed in Deloitte’s recent report on building trustworthy generative AI. They include:

  • Mapping trust domains to generative AI 

  • Fairness and impartiality 

  • Transparency and explainability 

  • Safety and security 

  • Accountability 

  • Responsibility 

  • Privacy 


Deloitte is also a founding sponsor of the World Economic Forum Centre for Trustworthy Technology, which promotes and facilitates ethical technology.


“Organizational leaders are looking for the tangible things they can do today, whether it’s training their workforce on AI ethics, putting out AI use policies and guidelines or making sure they have checks and balances in place when AI is being used,” says Ammanath. “There’s much more bias towards action -- what we as an organization can do now.” 

AI Adoption Versus Responsible AI Adoption 

While the adoption of both AI and responsible AI has increased, companies mitigate risk and drive business value more easily when they prioritize responsible AI.

For example, Boston Consulting Group (BCG) and MIT teamed up on some surveys, one of which found that 20% of companies have responsible AI programs, 30% have nothing, and the rest fall somewhere in between. 

“There wasn’t a lot of correlation between AI maturity and responsible AI maturity. Interestingly, you’d think all the really mature [organizations] would also have mature responsible AI,” says Steven Mills, managing director and partner and chief AI ethics officer at BCG. “When we dug into the data, we found that it was advantageous to scale responsible AI ahead of AI.” 


The reason is that by tackling responsible AI first, system lapses happen less frequently, and organizations are able to drive more value from their AI investments.

“It’s less about guardrails and policy, and more thinking about use cases and outcomes. I’ve always told people the policies you create, the risk frameworks you create cannot be static, they have to evolve over time as you learn,” says Mills.

Mills set that example at BCG himself: he updated the policy four or five times during the first 12 months of the generative AI explosion because there were use cases that needed more eyes on them.

“You have to accept that you learn as you go. You also need a tracking function to understand how the technology is evolving. What new risks are popping up?” says Mills. 

It also helps to have someone who’s responsible and accountable for a responsible AI program, such as a chief ethics officer. If that’s not practical, a CIO or CTO might spearhead the effort, or in the case of small companies, the CEO. However, driving value requires the support of the organization, meaning the chief ethics officer or other responsible person must have the financial and human resources they need to drive action.


It turns out that the companies best equipped to manage AI risks out of the gate operate in highly regulated industries, because they already have risk management disciplines in place.


“I always stress to companies that it’s a journey to build a mature responsible AI program. We figure it takes two or three years to go from zero to really mature but you don’t have to wait two or three years to realize any benefit,” says Mills. “Particularly if you accelerate the review of use cases, you will realize benefit very quickly and it’s the right thing to do.” 

Bear in mind that responsible AI is not just about regulatory compliance and risk management. It can be a source of business value. According to the BCG/MIT research referenced above, half of responsible AI leaders report having developed better products and services, and nearly as many say they’re achieving better brand recognition. Slightly fewer (43%) cite accelerated innovation.

Bottom Line 

Today’s organizations are more likely to have a responsible AI program than they were just a couple of years ago because they recognize AI-related risks, anticipate regulations that will require compliance, and want to drive as much value as possible. Mature responsible AI programs are more likely to achieve all three, though most companies still lack one.

While not all organizations have the deep pockets larger enterprises enjoy, they are nevertheless wise to put a responsible AI program in place and appoint someone to lead the effort. Just remember that that person needs adequate resources and the authority to run a successful program.

 

About the Author(s)

Lisa Morgan

Freelance Writer

Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit. Frequent areas of coverage include big data, mobility, enterprise software, the cloud, software development, and emerging cultural issues affecting the C-suite.
