
AI Adoption in Financial Services: Reality Check from SEC Roundtable

Financial services firms remain in the early stages of implementing generative and agentic artificial intelligence (AI), despite the optimistic outlook presented by AI vendors and industry leaders at a roundtable discussion organized by the Securities and Exchange Commission (SEC) on March 27.

Industry experts, including Sarah Hammer from the Wharton School and Hardeep Walia of Charles Schwab, acknowledged the potential of generative AI to enhance operational efficiency in areas such as compliance and human resources. However, they also pointed out that many financial institutions are progressing at a slower pace than their technology counterparts.

Walia noted that adoption among financial firms has not been as swift. ‘All of us are in those early innings. We’re all experimenting, we’re all doing evaluations, we’re all trying to calculate the ROI,’ he said, adding that most current use cases still involve a human operator.

Even with generative AI’s troubleshooting benefits in areas like clearing and settlement, Hammer highlighted a common struggle to quantify return on investment. ‘Companies are still thinking about value,’ she said, pointing to the high cost of advanced AI technology.

Panelists noted that those costs are trending downward, driven in part by open-source AI models such as DeepSeek. MIT researcher Peter Slattery raised concerns about the limitations of generative AI, saying that while it can approximate a human worker’s performance to a degree, a level of quality that could replace human workers is still far off.

Broadridge’s Tyler Derr similarly pointed out that AI’s primary purpose is not to replace workers but to augment their workflows. ‘It’s more about enhancing human operations,’ he said.

Exploring AI Risks

Alongside the discussions of benefits, experts also addressed the risks associated with AI implementations. Slattery leads an initiative at MIT called the AI Risk Repository, which monitors new risks introduced by AI adoption. He noted that the lines defining liability in scenarios involving AI are becoming increasingly blurred due to the complexity of interactions between AI systems.

‘There’s fundamentally new risks there, and a lot we need to think about,’ Slattery said. Hammer reinforced the point, stressing the need for robust governance policies to navigate the evolving landscape of AI regulation.

Moving forward, the panelists stressed the importance of collaboration between financial services firms and regulators, saying that improved communication and shared insights will be crucial for successful AI integration.