
A new study shows that banks still have a long way to go on tackling responsible AI.  

As artificial intelligence usage swells within financial firms, those firms remain woefully behind on ensuring the tech is deployed safely. Executives need to be rigorous about maturing their responsible AI strategies.

Financial services firms need to drastically increase their focus on responsible AI, according to a recent survey by industry giant FICO.  

The survey found that 27% of organizations have yet to start developing responsible AI capabilities, and only 8% describe their responsible AI strategy as “mature.”

Responsible AI standards encompass a firm’s commitment to making sure its AI systems are explainable and auditable, which, in turn, reduces reputational or regulatory risks and protects customers.  

“Many AI systems lack a way to trace or explain how they make specific predictions,” according to Insight Partners’ managing director, Lonne Jaffe. “This makes it hard to test these systems and also makes it difficult for these systems to engender trust and meet regulatory requirements.” 
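To make that gap concrete, here is a minimal, hypothetical sketch of what per-prediction explainability can look like, using the open-source SHAP library on a toy credit model. The features, data, and model are illustrative assumptions only, not a description of how any vendor mentioned here actually implements it.

```python
# Illustrative sketch only: per-prediction feature attribution with SHAP,
# one common way to make a model's individual decisions traceable.
# The model, features, and data below are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))  # e.g. income, utilization, file age, inquiries
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # attribution for a single applicant

# Each value shows how much a feature pushed this one prediction up or down,
# giving an auditable record of why the model scored this applicant as it did.
print(dict(zip(["income", "utilization", "file_age", "inquiries"], shap_values[0])))
```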

Companies such as Fiddler and Zest AI, both Insight portfolio firms, have emerged to manage the process of building explainable and responsible AI. After all, ethically monitoring and managing these systems should be non-negotiable:

“It’s just part of the cost of doing business,” State Street managing director Dan Power told FICO. “If you’re going to have models, you’re going to have to govern them, and monitor them, and manage that evolution.”

While the study found that financial firms are generally behind on self-regulation, they are treading carefully when it comes to one of the buzziest new AI applications, ChatGPT: a handful of banks have banned employees from using the tool.