ChatGPT – the AI-powered tool that can respond to prompts and queries in a human-like way – puts data privacy, security, and control at risk for big FinServs, experts tell us. While banning employees from using the tool may offer short-term protection, banks must also advance their overall AI strategies to harness its potential, they add.
JPMorgan, Citi, Goldman Sachs, and other big financial institutions have restricted employees from using ChatGPT at work, and two experts at tech consultancy Capco tell Insights Distilled that the move carries both benefits and risks.
ChatGPT threatens data privacy and security, according to R&D lead Ryan Favro, and its algorithmic decision-making also “raises questions about accountability and control.”
There are copyright concerns as well, adds technology delivery principal Luke Penca, and employers may worry that workers will use the tool to cut corners and disengage from their responsibilities.
Still, banning ChatGPT outright could lead to missed opportunities and frustration among employees (which could lead to riskier workarounds).
“An immediate drawback to the ‘burn it with fire’ mentality about a technology we barely understand is that our risk adverseness will impede that understanding and stall the arrival of potential benefits to utility and productivity that these tools may hold,” Penca said.
Similarly, Favro cautions that banks can avoid the “unintended consequences” of a ban by considering how ChatGPT fits into their larger AI strategy and by developing approaches that “balance risk management with innovation” through protections for data privacy and security.
For example, Swedish investment firm EQT is using ChatGPT internally to help its deal makers query a giant, proprietary data platform without relying on data scientists.