
Banks need to tread carefully when deploying customer service chatbots, according to US regulators. 

The US Consumer Financial Protection Bureau (CFPB) recently warned financial institutions that it’s on the lookout for “poorly deployed chatbots,” as the generative AI wave continues.  

The US government has its eye on how FinServs are using artificial intelligence for customer service after receiving numerous complaints about the technology.  

The CFPB recently released an advisory that raised concerns about effectiveness, privacy, and false information in AI-powered customer service.  

“To reduce costs, many financial institutions are integrating artificial intelligence technologies to steer people toward chatbots,” CFPB Director Rohit Chopra said in a statement. “A poorly deployed chatbot can lead to customer frustration, reduced trust and even violations of the law.” 

Here’s what to keep in mind to protect consumers (and your own institution):  

Keep humans accessible. Even if a chatbot is the first line of customer service, consumers should be able to easily reach a real person if they want to. The best chatbots make it simple and intuitive to switch to a human agent instead of trapping customers in repetitive loops.
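That handoff can be as simple as a couple of rules. Here's a minimal sketch (the keyword list and failure threshold are illustrative assumptions, not anything prescribed by the CFPB): escalate when the customer asks for a person, or after the bot has failed to resolve the issue a couple of times.

```python
# Hypothetical escalation logic: hand the conversation to a live agent
# when the customer asks for one, or after repeated failed turns.
HUMAN_KEYWORDS = {"agent", "human", "representative"}
MAX_FAILED_TURNS = 2  # illustrative threshold

def should_escalate(message: str, failed_turns: int) -> bool:
    """Return True when the conversation should go to a live agent."""
    asked_for_human = any(word in message.lower() for word in HUMAN_KEYWORDS)
    stuck_in_loop = failed_turns >= MAX_FAILED_TURNS
    return asked_for_human or stuck_in_loop
```

The point of the second condition is exactly the "repetitive loop" problem regulators flagged: the bot should notice it isn't helping, rather than making the customer say the magic word.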

Safeguard data. Protecting customer data is key. Chat logs should be secure and private. This risk is especially noteworthy for FinServs working with third-party providers (the government also recently released guidance on how banks need to conduct due diligence on fintechs or other vendors).
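One common safeguard is redacting sensitive details before a transcript is stored or shared with a vendor. A minimal sketch, assuming simple pattern-based masking (the patterns below are illustrative, not a complete PII policy):

```python
import re

# Hypothetical redaction pass: mask SSN-shaped strings and long digit
# runs that look like account or card numbers before a chat transcript
# is written to storage or handed to a third-party provider.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
ACCOUNT_RE = re.compile(r"\b\d{9,16}\b")

def redact(transcript: str) -> str:
    """Return the transcript with sensitive number patterns masked."""
    transcript = SSN_RE.sub("[SSN REDACTED]", transcript)
    transcript = ACCOUNT_RE.sub("[ACCOUNT REDACTED]", transcript)
    return transcript
```

Real deployments would pair something like this with encryption and access controls; regex masking alone isn't a privacy program.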

Know which problems to tackle. Chatbots are well-suited for simple tasks like retrieving account balances, looking up recent transactions, and paying bills. Be wary of assigning chatbots complex problems, like explaining the nuances of new products or services, where inaccurate information could lead to fees or other penalties.
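One way to enforce that boundary is an allowlist router: the bot answers only well-bounded intents and defers everything else to a person. A minimal sketch (the intent names and account structure are hypothetical):

```python
# Hypothetical intent router: the chatbot handles only a short allowlist
# of simple, well-bounded tasks; anything else goes to a human agent.
SIMPLE_INTENTS = {
    "check_balance": lambda acct: f"Your balance is ${acct['balance']:.2f}",
    "recent_transactions": lambda acct: ", ".join(acct["transactions"][-3:]),
}

def route(intent: str, account: dict) -> str:
    """Answer simple intents directly; defer complex ones to a person."""
    handler = SIMPLE_INTENTS.get(intent)
    if handler is None:
        # Product nuances, disputes, fee questions, etc. get a human.
        return "Let me connect you with a representative."
    return handler(account)
```

Defaulting unknown intents to a human is the conservative choice here: a wrong answer about fees is costlier than a handoff.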