Emerging Trend in GenAI: Observations on AI Agents
By Greg Ruppert, Executive Vice President and Chief Regulatory Operations Officer, FINRA
After 17 years at the FBI, I learned a lot about agents and the important work they do. Now, as the co-lead for FINRA’s enterprise-wide initiative on GenAI, I have seen AI agents emerge as a focal point in the financial services landscape.
An agent is someone who is authorized to act on behalf of a person or organization. But when we talk about AI agents—or Agentic AI—we’re clearly not referring to people. AI agents are systems or programs that can perform and complete tasks autonomously, without human intervention. They can plan, make decisions, and take actions to achieve goals without predefined rules or logic programming. AI agents can enhance GenAI capabilities by providing additional opportunities to automate tasks and interact with a wider range of data and systems, both faster and at a potentially lower cost than traditional process automation.
AI agents operate with varying degrees of autonomy and oversight. Yet unlike human employees, AI agents lack tacit knowledge, and they may also lack the transparency and predictability that traditional supervisory and governance practices assume. Some potential risks associated with AI agents were shared in this year’s Annual Regulatory Oversight Report and include:
- Autonomy: AI agents may act autonomously without human validation or approval.
- Scope and Authority: Agents may act beyond the user’s actual or intended scope and authority.
- Auditability and Transparency: Complicated, multi-step agent reasoning tasks can make outcomes difficult to trace or explain, complicating auditability.
- Data Sensitivity: Agents operating on sensitive data may unintentionally store, expose, disclose or misuse sensitive or proprietary information.
- Domain Knowledge: General-purpose AI agents may lack the necessary domain knowledge to effectively and consistently carry out complex and industry-specific tasks.
- Rewards and Reinforcement: Misaligned or poorly designed reward functions could result in the agent optimizing decisions that could negatively impact investors, firms or markets.
- Unique Risks of GenAI: The risks unique to GenAI, such as bias, hallucinations, and privacy concerns, also remain present and applicable to GenAI agents and their outputs.
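Several of the risks above, particularly autonomy, scope and authority, and auditability, are commonly mitigated with control patterns such as action allow-lists, human-in-the-loop approval for higher-risk actions, and audit logging of every agent decision. The sketch below illustrates those patterns in miniature; all names and the risk taxonomy are hypothetical and illustrative, not a FINRA-prescribed or industry-standard design.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str           # e.g. "draft_email", "execute_trade"
    risk_level: str     # "low" or "high" (illustrative taxonomy)

# Scope and authority: actions the agent is authorized to take at all.
ALLOWED_ACTIONS = {"summarize_document", "draft_email", "execute_trade"}

def review(action: ProposedAction, approver) -> bool:
    """Return True if the proposed action may proceed."""
    if action.name not in ALLOWED_ACTIONS:
        return False                 # out of scope: reject outright
    if action.risk_level == "high":
        return approver(action)      # human validation before acting
    return True                      # low-risk: may proceed autonomously

def run(action: ProposedAction, approver) -> str:
    """Gate the action and log the decision for auditability."""
    decision = "approved" if review(action, approver) else "blocked"
    print(f"AUDIT: {action.name} ({action.risk_level}) -> {decision}")
    return decision
```

In this sketch, `approver` stands in for whatever human-review workflow a firm uses; the audit log line is what makes a multi-step agent run traceable after the fact.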
Observations Regarding Member Firms’ Use of AI Agents
As discussed in Robert Cook’s recent blog post, FINRA is committed to engaging with our member firms on their use of GenAI to stay abreast of new developments. Where possible, we also look to share that information back with the industry and our fellow regulators to help inform their discussions and activities. Following the release of FINRA's Generative AI Member Firm Use Portfolio, which catalogued some of the most common GenAI use cases, we are continuing our commitment to facilitating informed industry dialogue on AI technologies.
Recently, as part of our risk monitoring activities and other member engagement efforts, FINRA’s regulatory staff have spoken to numerous member firms about their use of AI agents to better understand this emerging trend. Building on those prior efforts, we are sharing observations from these discussions. Specifically, as FINRA observes agentic AI move from conceptual discussion into early practical deployment among member firms, below are some of the different types of AI agents that the industry is beginning to explore. We hope sharing these observations can help establish common terminology, build a better understanding of emerging practices, and support industry awareness as firms evaluate their own agentic AI strategies and associated risks.1
Types of AI Agents
- Conversational Agents: Autonomous software systems that interact through natural language (text or voice) to understand user intent, execute tasks, and provide responses by accessing and integrating information across multiple systems and data sources.
- Software Development Agents: AI systems that autonomously perform coding, testing, debugging, and infrastructure management tasks throughout the software development lifecycle, with varying degrees of human oversight and intervention.
- Fraud Detection and Prevention Agents: Agentic AI software programs that autonomously identify and analyze data to execute routine fraud detection workflows with faster escalation protocols.
- Trade and AML Surveillance Agents: AI systems that autonomously monitor alerts or trading activity to identify potential market manipulation, insider trading, and other prohibited practices, utilizing adaptive technology with varying levels of human oversight.
- Process Automation and Optimization Agents: Agentic AI systems that autonomously execute, optimize, and adapt business workflows by analyzing data, making decisions within defined parameters, and coordinating actions across multiple systems with minimal human intervention.
- Trade Execution Agents: Autonomous systems that analyze market conditions, generate trading strategies, and execute transactions, operating with varying levels of human oversight.
Next Steps
Given the rapid growth in GenAI capabilities, AI agents will likely become more prevalent and sophisticated in the years ahead. As their performance improves, AI agents will likely handle increasingly complex and sensitive functions. Member firms that establish robust GenAI supervision and governance frameworks may be better positioned to capitalize on GenAI’s opportunities and benefits while managing the associated new and unique risks.
We welcome feedback from firms regarding agentic AI implementations in your organization and encourage you to proactively engage with FINRA as your strategies develop, as noted in Regulatory Notice 24-09. This ongoing dialogue will inform future guidance and support the industry's responsible deployment of these technologies.
Disclaimer: This publication reflects FINRA's observations of Agentic AI use cases among our member firms. It does not create new legal or regulatory requirements or new interpretations of existing requirements, nor does it relieve firms of any existing obligations under federal securities laws and regulations. FINRA reminds its member firms that FINRA’s rules, which are intended to be technology neutral, and the securities laws more generally, continue to apply when member firms use Generative AI or similar technologies in the course of their businesses. For more information, see Regulatory Notice 24-09.
1 2026 FINRA Annual Oversight Report – Emerging Trends in GenAI: Agents
