
GenAI: Continuing and Emerging Trends

NEW FOR 2026

Regulatory Obligations

FINRA’s rules—which are intended to be technologically neutral—and the securities laws more generally, continue to apply when firms use GenAI or similar technologies in the course of their businesses, just as they apply when firms use any other technology or tool. It is important for firms to consider how they will comply with applicable regulations, including FINRA rules, when evaluating GenAI tools prior to testing and deployment within their business environment.

For example, using GenAI can implicate rules regarding supervision, communications, recordkeeping and fair dealing. Pursuant to FINRA Rule 3110 (Supervision), a member firm must have a reasonably designed supervisory system tailored to its business. If a firm is relying on GenAI tools as part of its supervisory system, its policies and procedures may consider the integrity, reliability and accuracy of the AI model.

How FINRA Member Firms Use GenAI

Generative AI (GenAI) use cases are emerging quickly in the financial industry. This portfolio of use cases reflects some of the most common GenAI use cases FINRA has observed among our member firms. We are sharing how we categorize and define these uses because a shared terminology may help our fellow regulators, member firms and others discuss this fast-evolving technology.

Use Case Types (GenAI)

Emerging Trends and Current Practices

Through its survey of firms and engagement with other regulators, FINRA has noted that:

  • firms have started to implement GenAI solutions with a focus on efficiency gains, particularly with respect to internal processes and information retrieval; and
  • the top GenAI use case among FINRA member firms is “Summarization and Information Extraction,” which refers to condensing large volumes of text and extracting specific entities, relationships or key information from unstructured documents. 
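
To illustrate the summarization and information extraction pattern described above, the sketch below condenses a document and pulls out key entities. It assumes a hypothetical llm client exposing a generate(prompt) method rather than any particular vendor API, and any production use would add the supervision, testing and monitoring controls discussed below.

```python
# Minimal sketch of the "Summarization and Information Extraction" use case.
# `llm` is a hypothetical client with a generate(prompt) -> str method.

from dataclasses import dataclass

@dataclass
class ExtractionResult:
    summary: str
    entities: str  # e.g., parties, dates and amounts found in the document

def summarize_and_extract(llm, document_text: str) -> ExtractionResult:
    # Condense a large volume of text into a short summary.
    summary = llm.generate(
        "Summarize the following document in three sentences:\n" + document_text
    )
    # Extract specific entities and key information from the same document.
    entities = llm.generate(
        "List the parties, dates and dollar amounts mentioned below, one per line:\n"
        + document_text
    )
    return ExtractionResult(summary=summary, entities=entities)
```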

Firms contemplating using GenAI tools and technologies may want to consider the following:

  • General:
    • Developing enterprise-level supervisory processes for the development and use of GenAI.
    • Approaches to identify and mitigate associated risks, including, but not limited to, accuracy (e.g., hallucinations) and bias:
      • Hallucinations refer to instances where the model generates information that is inaccurate or misleading, yet is presented as factual information.
        • Misrepresentation or incorrect interpretation of rules, regulations or policies or inaccurate client or market data can impact decision making.
      • Bias refers to situations where a model’s outputs are skewed or incorrect due to model design decisions or data that is limited or inaccurate, including outdated training data leading to concept drift.
        • GenAI outputs and decision making could be influenced by historical data, model design or limited/skewed training data.
    • Assessing whether the firm’s cybersecurity program appropriately contemplates:
      • risks associated with the firm’s and its third-party vendors’ use of GenAI; and
      • how its technology tools, data provenance controls and processes identify whether and how threat actors are using AI or GenAI against the firm or its customers.
  • Supervision & Governance:
    • Implementing formal review and approval processes, involving both business and technology experts, to assess and evaluate GenAI opportunities and the controls needed to manage their unique risks.
    • Establishing a supervision, governance or model risk management framework with clear policies and procedures to develop, implement, use and monitor GenAI, while maintaining comprehensive documentation throughout.
  • Testing: Robust testing of GenAI to understand the capabilities, limitations and performance of the model. Areas to consider include privacy, integrity, reliability and accuracy.
  • Monitoring: Ongoing monitoring of prompts, responses and outputs to confirm the GenAI solution continues to perform as expected and results in compliant behavior. This may include storing prompt and output logs for accountability and troubleshooting; tracking which model version was used and when; and validation and human-in-the-loop review of model outputs, including performing regular checks for errors or bias.
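
As one way to approach the logging, version tracking and human-in-the-loop review described in the monitoring item above, the sketch below appends each prompt/response pair to an audit log. The record fields, file format and needs_review flag are illustrative assumptions, not a prescribed design.

```python
# Minimal sketch of prompt/output logging with model-version tracking and a
# human-in-the-loop review flag. Field names and the log format are assumptions.

import json
import time
import uuid

def log_interaction(log_path: str, model_version: str, prompt: str,
                    output: str, needs_review: bool) -> str:
    """Append one prompt/response record to an audit log and return its id."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),          # when the model was called
        "model_version": model_version,    # which model version produced the output
        "prompt": prompt,
        "output": output,
        "needs_review": needs_review,      # route to human-in-the-loop review
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

# Example: flag a client-facing output for manual review before use.
# log_interaction("genai_audit.jsonl", "model-v1.2", prompt, output, needs_review=True)
```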

Emerging Trends in GenAI: Agents

AI agents are systems or programs that are capable of autonomously performing and completing tasks on behalf of a user. An AI agent can interact with an environment, plan, make decisions and take action to achieve specific goals without predefined rules or logic programming. These agents can enhance GenAI capabilities by providing users with additional opportunities for task automation and the ability to interact with a wider range of data and systems faster and at a potentially lower cost than more traditional process automation.
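
To make the plan, act and observe cycle concrete, the sketch below shows a minimal agent loop. The planner object, its next_step method and the tools mapping are hypothetical placeholders rather than a specific framework.

```python
# Minimal sketch of an agent loop: plan, act via tools, observe, repeat.
# `planner` and the functions in `tools` are hypothetical stand-ins.

def run_agent(planner, tools: dict, goal: str, max_steps: int = 5) -> str:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        # The planner proposes the next action and its input based on history.
        action, arg = planner.next_step(history)
        if action == "finish":
            return arg  # agent's final answer
        result = tools[action](arg)                      # act in the environment
        history.append(f"{action}({arg}) -> {result}")   # observe the outcome
    return "stopped: step limit reached"
```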

While AI agents may offer many potential benefits, there are also notable risks and challenges that could result in adverse impacts to investors, firms or the markets:

  • Autonomy: AI agents may act autonomously without human validation or approval.
  • Scope and Authority: Agents may act beyond the user’s actual or intended scope and authority.
  • Auditability and Transparency: Complicated, multi-step agent reasoning tasks can make outcomes difficult to trace or explain, complicating auditability.
  • Data Sensitivity: Agents operating on sensitive data may unintentionally store, expose, disclose or misuse sensitive or proprietary information.
  • Domain Knowledge: General-purpose AI agents may lack the necessary domain knowledge to effectively and consistently carry out complex and industry-specific tasks.  
  • Rewards and Reinforcement: Misaligned or poorly designed reward functions could result in the agent optimizing decisions that could negatively impact investors, firms or markets.
  • Unique Risks of GenAI: The unique risks of GenAI (bias, hallucinations, privacy) also remain present and applicable for GenAI agents and their outputs.

Firms exploring and developing AI agents may wish to consider whether the autonomous nature of AI agents presents the firm with novel regulatory, supervisory or operational considerations. The rapidly evolving landscape and capabilities of AI agents may call for supervisory processes that are specific to the type and scope of the AI agent being implemented. Considerations may include:

  • how to monitor agent system access and data handling;
  • where to apply “human-in-the-loop” agent oversight protocols or practices;
  • how to track agent actions and decisions; or
  • how to establish guardrails or control mechanisms to limit or restrict agent behaviors, actions or decisions.
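
As a hedged example of the guardrails and control mechanisms noted above, the sketch below gates a proposed agent action against an allowlist and a notional approval threshold. The action names, threshold and approval hook are illustrative assumptions.

```python
# Minimal sketch of a guardrail that restricts agent actions to an allowlist
# and requires human approval above a notional threshold. The action names,
# threshold and approval hook are illustrative assumptions.

ALLOWED_ACTIONS = {"retrieve_document", "draft_summary", "propose_order"}
APPROVAL_THRESHOLD = 10_000  # e.g., notional value requiring human sign-off

def gate_action(action: str, amount: float, request_approval) -> bool:
    """Return True only if the proposed agent action may proceed."""
    if action not in ALLOWED_ACTIONS:
        return False                                 # block out-of-scope actions
    if amount >= APPROVAL_THRESHOLD:
        return request_approval(action, amount)      # human-in-the-loop sign-off
    return True
```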

FINRA will continue to engage with firms on GenAI and emerging trends as the technology progresses.

Additional Resources