Key Challenges and Regulatory Considerations

AI-based applications offer several potential benefits to both investors and firms, many of which are highlighted in Section II. Potential benefits for investors include enhanced access to customized products and services, lower costs, access to a broader range of products, better customer service, and improved compliance efforts leading to safer markets. Potential benefits for firms include increased efficiency, increased productivity, improved risk management, enhanced customer relationships, and increased revenue opportunities. 

However, use of AI also raises several concerns, some wide-ranging across industries and some specific to the securities industry. Over the past few years, numerous incidents have been reported of AI applications that may have been fraudulent, nefarious, discriminatory, or unfair, highlighting the issue of ethics in AI applications. As such, several organizations have established initiatives or developed principles to promote the ethical use of AI.29

AI-based applications present some particular challenges that securities market participants may wish to consider as they explore and adopt related technology tools. Specifically, where applicable, factors for market participants to consider when seeking to adopt AI-based applications include model risk management, data governance, customer privacy, and supervisory control systems. Other factors for potential consideration include cybersecurity, outsourcing/vendor management, books and records, and workforce structure. This section provides a brief discussion of each of these factors and highlights certain related regulatory considerations.30  

While this section highlights certain key thematic areas, it is not meant to be an exhaustive list of all factors or regulatory considerations associated with adopting AI-based applications. Broker-dealers should conduct their own assessments of the implications of AI tools, based on their business models and related use cases.

Model Risk Management

Firms that employ AI-based applications may benefit from reviewing and updating their model risk management frameworks to address the new and unique challenges AI models may pose. These challenges may include those related to model explainability, data integrity, and customer privacy. Model risk management becomes even more critical for ML models due to their dynamic, self-learning nature.

A comprehensive model risk management program typically includes areas such as model development, validation, deployment, ongoing testing, and monitoring. Where applicable, the following are potential areas for firms to consider as they update their model risk management programs to reflect the use of AI models:

  • Update model validation processes to account for the complexities of an ML model.31 This includes reviewing the input data (e.g., review for potential bias), the algorithms (e.g., review for errors), any parameters (e.g., verify risk thresholds), and the output (e.g., determine explainability of the output).
  • Conduct upfront as well as ongoing testing, including tests that experiment with different and stressed scenarios (e.g., unprecedented market conditions) and new datasets.
  • Employ current and new models in parallel and retire current models only after the new ones are thoroughly validated.
  • Maintain a detailed inventory of all AI models, along with any assigned risk ratings such that the models can be appropriately monitored and managed based on their risk levels.
  • Develop model performance benchmarks (e.g., number of false negatives) and an ongoing monitoring and reporting process to ensure that the models perform as intended, particularly when the models involved are self-training and evolve over time (a minimal monitoring sketch follows this list).
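
To make the last bullet above concrete, below is a minimal Python sketch, under stated assumptions, of a benchmark check for a hypothetical binary classification model (e.g., a surveillance alert model): it computes the share of actual positives the model missed and flags a breach. The function names, sample data, and five percent benchmark are illustrative assumptions, not prescribed values.

    # Minimal sketch: monitor a false negative rate benchmark for a
    # hypothetical binary classification model. Names, sample data, and
    # the threshold are illustrative assumptions only.
    from dataclasses import dataclass
    from typing import Sequence

    @dataclass
    class MonitoringResult:
        false_negative_rate: float
        breached: bool

    def false_negative_rate(y_true: Sequence[int], y_pred: Sequence[int]) -> float:
        """Share of actual positives (label 1) that the model predicted as 0."""
        positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
        if not positives:
            return 0.0
        return sum(1 for _, p in positives if p == 0) / len(positives)

    def check_benchmark(y_true, y_pred, max_fnr: float = 0.05) -> MonitoringResult:
        """Compare the observed false negative rate against a benchmark so
        breaches can be escalated through the model risk management process."""
        fnr = false_negative_rate(y_true, y_pred)
        return MonitoringResult(false_negative_rate=fnr, breached=fnr > max_fnr)

    # Labeled outcomes from a hypothetical monitoring window.
    print(check_benchmark(y_true=[1, 0, 1, 1, 0], y_pred=[1, 0, 0, 1, 0]))
    # MonitoringResult(false_negative_rate=0.333..., breached=True)

In practice, such checks would run on a schedule against fresh labeled data, which is particularly important for self-training models whose behavior evolves over time.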

Model Explainability

Many ML models allow for some level of explainability with respect to the underlying assumptions and factors considered in making a prediction. Some ML models, however, are described as “black boxes”32 because it may be difficult or impossible to explain how the model works (i.e., how its predictions or outcomes are generated).33

While firms indicated that operational deployment of black box models in the near term within the securities industry was unlikely, they also noted that some cutting-edge applications of AI had presented explainability challenges.

An appropriate level of explainability may be particularly important in AI applications that have autonomous decision-making features (e.g., deep learning-based AI applications that trigger automated investment decision approvals). Against this backdrop, firms noted that their compliance, audit, and risk personnel would generally seek to understand the AI models to ensure that they conform to regulatory and legal requirements, as well as the firms’ policies, procedures, and risk appetites, before deployment.

FINRA Rule 3110 (Supervision) requires a firm to establish and maintain a system to supervise the activities of its associated persons that is reasonably designed to achieve compliance with the applicable securities laws and regulations and FINRA rules. This rule applies to all activities of a firm’s associated persons and its businesses, regardless of the use of technology. As such, in supervising activities related to AI applications, firms have indicated that they seek to understand how those applications function, how their outputs are derived, and whether actions taken pursuant to those outputs are in line with the firm’s legal and compliance requirements.

The following are some potential areas for firms to consider, as applicable, when establishing policies and procedures that address concerns related to explainability.

  • Incorporating explainability as a key consideration in the model risk management process for AI-based applications. This may involve requiring application developers and users to provide a written summary of the key input factors and the rationale attributed to the outputs. The models can then be tested independently by the model validation teams or by external parties. Some firms noted that they test certain ML models using techniques that involve isolating specific data variables or features in the model to determine their impact on the output (see the first sketch after this list). For instance, if eliminating an important feature does not significantly change the output, it may indicate that the model is not appropriately incorporating that feature in its decision-making. Some firms also noted introducing new datasets during the model validation process to ensure that the model operates consistently. However, in considering factors around explainability, it is important to guard against ex-post rationalization of ML models based on correlations that may not link to any underlying causality.34
  • Building a layer of human review of the model outputs, where applicable, to ensure that the results are in line with business goals, as well as firms’ internal policies, procedures, and risk appetite. In our discussions with industry participants, the vast majority noted that their ML-based applications do not involve autonomous action but are instead used to aid human decision-making.
  • Establishing appropriate thresholds and guardrails where ML models trigger actions autonomously (see the second sketch after this list). For example, some firms exploring ML for trading applications indicated that they have risk-based limitations built into those applications, such as amount or threshold limits for trade orders.
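
The feature-isolation testing described in the first bullet above can be approximated with a simple permutation test: shuffle one feature at a time and measure how much model accuracy degrades. The sketch below uses a synthetic dataset and a scikit-learn logistic regression purely for illustration; it shows one possible technique, not necessarily the approach any particular firm uses.

    # Minimal permutation-importance sketch: break one feature's link to the
    # label by shuffling it, then compare accuracy against the baseline.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic data: feature 0 drives the label, feature 1 is pure noise.
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    baseline = model.score(X, y)

    for j in range(X.shape[1]):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, j])  # isolate feature j by destroying its signal
        drop = baseline - model.score(X_perm, y)
        print(f"feature {j}: accuracy drop when permuted = {drop:.3f}")
    # Expect a large drop for feature 0 and near zero for feature 1; a
    # near-zero drop for a supposedly important feature is a red flag.

As the bullet cautions, a large accuracy drop evidences correlation with the output, not causation, so tests like this support rather than replace independent validation.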
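
For the thresholds and guardrails noted in the last bullet, the second sketch illustrates a pre-trade limit check applied to a model-generated order. The order fields and limit values are hypothetical placeholders, not suggested limits.

    # Minimal sketch of a pre-trade guardrail for a model that proposes orders
    # autonomously. Order fields and limits are hypothetical placeholders.
    from dataclasses import dataclass

    @dataclass
    class Order:
        symbol: str
        quantity: int
        notional: float  # quantity * price, in dollars

    MAX_ORDER_QUANTITY = 10_000      # hypothetical per-order share limit
    MAX_ORDER_NOTIONAL = 250_000.0   # hypothetical per-order dollar limit

    def passes_guardrails(order: Order) -> bool:
        """Accept a model-generated order only if it is within firm limits;
        anything else is blocked and routed for human review."""
        return (order.quantity <= MAX_ORDER_QUANTITY
                and order.notional <= MAX_ORDER_NOTIONAL)

    proposed = Order(symbol="XYZ", quantity=50_000, notional=1_500_000.0)
    if not passes_guardrails(proposed):
        print(f"Order for {proposed.symbol} blocked: exceeds risk limits")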

There are several efforts underway in the financial services industry, as well as more broadly, to develop tools that can provide transparency and explainability for AI models. One such notable effort is the Explainable AI (XAI) program undertaken by the Defense Advanced Research Projects Agency (DARPA) to “[p]roduce more explainable models, while maintaining high levels of learning performance (prediction accuracy); and [e]nable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.”35

Data Governance

Data is the lifeblood of any AI application. AI applications benefit from large amounts of data to train and retrain models, conduct comprehensive analyses, identify patterns, and make predictions. Accordingly, the quality of the underlying dataset is of paramount importance in any AI application.

Industry participants noted that one of the most critical steps in building an AI application is to obtain and build the underlying database such that it is sufficiently large, valid, and current. Depending on the use case, data scarcity may limit the model’s analysis and produce results that are narrow or irrelevant. On the other hand, incorporating data from many different sources may introduce new risks if the data is not tested and validated, particularly if new data points fall outside of the dataset used to train the model. In addition, continuous provision of new data, both raw and feedback data, may aid in the ongoing training of the model.
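
One basic validation step implied by the paragraph above is to flag incoming data points that fall outside the range observed in the training set, since out-of-distribution inputs can undermine a model’s outputs. The following is a deliberately simple sketch of that idea, using hypothetical training data.

    # Minimal sketch: flag features of an incoming data point that fall
    # outside the per-feature range seen in (hypothetical) training data.
    import numpy as np

    train = np.array([[1.0, 10.0], [2.0, 12.0], [1.5, 11.0]])  # training set
    lo, hi = train.min(axis=0), train.max(axis=0)

    def out_of_range(x: np.ndarray) -> np.ndarray:
        """Boolean mask marking features outside the training range."""
        return (x < lo) | (x > hi)

    incoming = np.array([1.2, 25.0])  # second feature far above training range
    mask = out_of_range(incoming)
    if mask.any():
        print(f"Out-of-range features at indices {np.where(mask)[0].tolist()}")

More sophisticated checks (e.g., statistical drift tests) follow the same pattern: compare incoming data against properties of the training set before it reaches the model.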

Data Bias

Data integrity is a key area of consideration for ML models. An important factor for maintaining data integrity centers on considering issues of data bias.36 One component of data bias involves datasets that may not include relevant information or that include a skewed subset of information, resulting in potential distortions in the output produced by ML models. To limit the potential for this type of data bias, it is helpful to have data that is complete, current, and accurate. Another component of data bias involves demographic biases.37 Such biases may relate to individuals or institutions, may exist in data due to historical practices, and could foster discriminatory outcomes if not appropriately addressed.38

Data bias, in general, may also be inadvertently introduced during the data preparation process, as data scientists determine which data fields and related features to incorporate in the ML model. Any biases in the underlying data may propagate through the ML model and may lead to inappropriate decision-making.

FINRA Rule 2010 requires firms, in the conduct of their business, to observe high standards of commercial honor and just and equitable principles of trade. These general requirements apply to all activities a firm engages in, including, where applicable, those involving AI applications.

When reviewing and modifying data governance policies and procedures to address potential data-related risks that may emerge in AI applications, some areas for consideration, where applicable, include:

  • Data review for potential bias: When building an AI application, it is important to review the underlying dataset for any potential built-in biases. Some firms undertake steps during the testing process to review for potential biases. For example, some firms adjust or eliminate certain data features of the AI model to see how the changes impact the model output (a minimal outcome-rate comparison sketch follows this list). Using such data filters may provide indications of potential biases in the model output based on those features. Testing the models using proxies instead of demographic data is another technique that may reveal potential biases. Firms may also involve multiple participants representing different functions to review the dataset as well as to test the outputs of the models. Recent reports have noted that introducing diversity in the staffing teams that build and test AI systems may provide wider perspectives and enhanced reviews for potential bias in the data.39 “A more diverse AI community would be better equipped to anticipate, review and spot bias and engage communities affected.”40 Furthermore, providing training on this issue to all individuals who are involved in the development and testing process will likely make them more cognizant of the issue. Firms may also consider using open-source tools created by large technology companies to assist in identifying unwanted bias in data.41
  • Data source verification: AI models often incorporate data from both internal and external sources. As firms compile data, many indicated that they regularly review the legitimacy and authoritativeness of data sources. This is particularly important where the data is obtained from external sources (e.g., open-source data libraries, market data providers, data aggregators, and social media channels). When sourcing data from open-source platforms or social media, firms benefit from incorporating appropriate verification steps, particularly given the proliferation of deep fakes42 and social media manipulation.43
  • Data integration: As firms tap into various data sources to power their AI applications, they seek to obtain and integrate the data effectively into their systems so that it can be leveraged across their organizations. While data has traditionally resided in silos in different parts of an organization, firms are now creating central data lakes to ensure consistency in data usage, to maintain appropriate entitlement and access levels, and to create synergies in data usage.
  • Data security: Another key consideration is the security of the data that is made available to various stakeholders, both internal and external, in order to develop, test, and use AI applications. It is critical that firms develop, maintain, and test appropriate entitlement, authentication, and access control procedures, as well as use encryption techniques for sensitive data. As discussed in the following section, ensuring customer data privacy is a key objective in establishing data security measures.
  • Data quality benchmarks and metrics: As part of a comprehensive data governance strategy, firms may also wish to consider developing and monitoring benchmarks and metrics to measure and assess the effectiveness of their data governance programs.
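
As one concrete illustration of the bias review described in the first bullet above, the following sketch compares positive-outcome rates across groups defined by a (possibly proxy) attribute; a large gap may warrant further review. The group labels, outcomes, and tolerance are hypothetical, and no single metric establishes or rules out bias.

    # Minimal sketch: compare a model's positive-outcome rates across groups.
    # Groups, outcomes, and the tolerance are hypothetical illustrations.
    from collections import defaultdict
    from typing import Dict, Sequence

    def outcome_rates(groups: Sequence[str], outcomes: Sequence[int]) -> Dict[str, float]:
        """Positive-outcome rate per group (e.g., share of approvals)."""
        totals: Dict[str, int] = defaultdict(int)
        positives: Dict[str, int] = defaultdict(int)
        for g, o in zip(groups, outcomes):
            totals[g] += 1
            positives[g] += o
        return {g: positives[g] / totals[g] for g in totals}

    groups = ["A", "A", "A", "B", "B", "B"]
    outcomes = [1, 1, 0, 1, 0, 0]  # 1 = favorable model output

    rates = outcome_rates(groups, outcomes)
    gap = max(rates.values()) - min(rates.values())
    print(rates, f"gap={gap:.2f}")
    if gap > 0.20:  # hypothetical tolerance, not a regulatory standard
        print("Outcome-rate gap exceeds tolerance; flag for further review")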

Customer Privacy

AI applications used in the securities industry may involve the collection, analysis, and sharing of sensitive customer data, as well as ongoing monitoring of customer behavior. For example, AI-based customer service tools may involve collection and use of personally identifiable information (PII) and biometrics. Similarly, certain customer-focused AI applications monitor information such as customer website or app usage, geospatial location, and social media activity. Some tools also involve recording written, voice, or video communications with customers. While AI tools based on these types of information may offer firms insights into customer behavior and preferences, they also may pose concerns related to customer privacy if the information is not appropriately safeguarded. Broker-dealers benefit from considering the applicability of relevant customer privacy rules when developing and using such applications, both with respect to the data that is used in AI models and the information that is made available by their outputs.

Protection of financial and personal customer information is a key responsibility and obligation of FINRA member firms. As required by SEC Regulation S-P (Privacy of Consumer Financial Information and Safeguarding of Personal Information), broker-dealers must have written policies and procedures in place to address the protection of customer information and records.44 In addition, as detailed in NASD Notice to Members 05-49 (Safeguarding Confidential Customer Information), firms are required to maintain policies and procedures that address the protection of customer information and records, and ensure that their policies and procedures adequately reflect changes in technology. Firms also should provide initial and annual privacy notices to customers describing information sharing policies and informing customers of their rights. Additionally, SEC Regulation S-ID (the Red Flags Rule) requires broker-dealer firms that offer or maintain covered accounts to develop and implement written “Identity Theft Prevention Programs.” Moreover, numerous international, federal, and state regulations and statutes set forth specific rules and requirements related to customer data privacy. Firms should assess the applicability of these laws as they build their AI applications and any underlying infrastructures.

Firms also should update their written policies and procedures with respect to customer data privacy, to reflect any changes in what customer data and information is being collected in association with AI applications, and how that data is collected, used, and shared. In this regard, below are some questions for firms to consider.

  • Have appropriate consents been obtained from customers to track, collect, and monitor their information, including information obtained directly from customers (e.g., PII and biometrics) as well as other sources (e.g., website usage, social media platforms, or third-party vendors)?
  • Has the applicable data been authorized for each relevant use case related to an AI application?
  • Have user entitlements and access procedures been updated as new, shared databases or centralized data lakes are created?
  • Has sensitive data been obfuscated45 (as needed), and does the data continue to remain protected as it is applied across different AI applications? (A simplified tokenization sketch follows this list.)
  • Has the data governance framework been appropriately updated to reflect any changes related to customer data privacy policies and procedures?
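
On the obfuscation question above, the following is a deliberately simplified tokenization sketch: a direct identifier is replaced with a keyed hash before the record enters a shared data lake. Production systems typically rely on vaulted tokens or managed encryption keys; the hard-coded key below is for illustration only.

    # Simplified tokenization sketch: replace PII with a keyed hash so records
    # can still be joined across datasets without exposing the raw identifier.
    # The key handling is illustrative only; real systems use managed secrets.
    import hashlib
    import hmac

    SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical; never hard-code

    def tokenize(pii_value: str) -> str:
        """Deterministic keyed hash: the same input always yields the same
        token, preserving joins across datasets."""
        return hmac.new(SECRET_KEY, pii_value.encode("utf-8"),
                        hashlib.sha256).hexdigest()

    record = {"account_id": "123-456-789", "balance": 1000.0}
    record["account_id"] = tokenize(record["account_id"])
    print(record)  # the raw account number no longer appears in the record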

Supervisory Control Systems

FINRA rules require firms to establish and maintain reasonable supervisory policies and procedures related to supervisory control systems in accordance with applicable rules (e.g., FINRA Rules 3110 and 3120). This includes having reasonable procedures and control systems in place for supervision and governance of AI-based tools and systems across applicable functions of a broker-dealer.

As discussed earlier in this report, use of AI-based applications may pose unique and complex challenges, such as those related to model explainability and bias. As noted in FINRA’s 2020 Risk Monitoring and Exam Priorities Letter, “Firms’ increasing reliance on technology for many aspects of their customer-facing activities, trading, operations, back-office, and compliance programs creates a variety of potential benefits, but also exposes firms to technology-related compliance and other risks.”

As broker-dealer firms employ AI-based tools and services across the firm, they should update and test related supervisory procedures and reflect those updates in their written supervisory procedures (WSPs). In addition to the topics discussed earlier in this report, some areas for consideration, where applicable, when adopting AI applications are noted below.46

  • Establish a cross-functional technology governance structure: As previously stated by FINRA, firms may find it beneficial to establish a cross-disciplinary technology governance group to oversee the development, testing, and implementation of AI-based applications.47 Such a group could include representation from different functions across the organization, including business, technology, information security, compliance, legal, and risk management. FINRA has previously stated in the trading context that: “[A]s the use of algorithmic strategies has increased, the potential of such strategies to adversely impact market and firm stability has likewise grown. When assessing the risk that the use of algorithmic strategies creates, firms should undertake a holistic review of their trading activity and consider implementing a cross-disciplinary committee to assess and react to the evolving risks associated with algorithmic strategies.”48
  • Conduct extensive testing of applications: Testing new tools and applications across various stages of their lifecycle can help identify potential concerns in a timely manner and limit potential issues. This could involve extensive testing of the applications by various user groups, using new datasets and new scenarios in the testing process. It could also include maintaining existing systems in parallel as firms test new ones.
  • Establish fallback plans: Establishing back-up plans in the event an AI-based application fails (e.g., due to a technical failure or an unexpected disruption) can help ensure that the relevant function is carried on through an alternative process. FINRA Rule 4370 (Business Continuity Plans and Emergency Contact Information) requires firms to create and maintain a written business continuity plan with procedures that are reasonably designed to enable firms to meet their obligations to customers, counterparties, and other broker-dealers during an emergency or significant business disruption.
  • Verify personnel registrations: The skillsets of securities industry personnel are evolving rapidly to keep pace with the adoption of emerging technologies. Technical and operational roles are starting to blend, as information technologists and data scientists are playing key roles in operational functions like trading and portfolio management. Firms may need to evaluate the roles of these personnel to ensure that they have the appropriate FINRA licenses and registrations. For instance, as stated in Regulatory Notice 16-21, FINRA requires registration of associated persons involved in the design, development, or significant modification of algorithmic trading strategies. Furthermore, FINRA Rule 1220(b)(3) and FINRA Regulatory Notice 11-33 state that certain firm personnel engaged in “back office” covered functions must qualify and register as Operations Professionals.

AI technology has the potential to disrupt and transform supervisory functions within a broker-dealer. Firms may benefit from conducting an overall assessment of the functions and activities that are employing AI-based tools, and updating their supervisory procedures accordingly. The following are examples of some areas that firms may wish to review.

  • Trading applications: FINRA has previously stated in Regulatory Notice 15-09, “In addition to specific requirements imposed on trading activity, firms have a fundamental obligation generally to supervise their trading activity to ensure that the activity does not violate any applicable FINRA rule, provision of the federal securities laws or any rule thereunder.” As firms adopt AI algorithms and strategies in their trading functions, they benefit from reviewing and testing their supervisory controls to ensure that there is continued compliance with applicable rules and regulations, including but not limited to FINRA Rules 5210 (Publication of Transactions and Quotations), 6140 (Other Trading Practices) and 2010 (Standards of Commercial Honor and Principles of Trade), SEC Market Access Rule, and SEC Regulation NMS, Regulation SHO and Regulation ATS.
  • Funding and liquidity risk management: As firms employ AI applications across functions like liquidity and cash management, portfolio management, and trading, they may wish to consider reviewing their supervisory procedures to ensure that the applications and the underlying models incorporate appropriate risk thresholds and relevant regulatory requirements, and do not create an environment of excessive risk-taking. This is particularly relevant where AI tools are used for liquidity and cash management, cases in which the models may generate aggressive recommendations for liquidity and leverage or may lead to unsound recommendations in unforeseen or stressed situations. As firms review their supervisory procedures, some factors to review include adequacy of existing controls, monitoring tools, and reporting tools to manage such risks. As previously stated in Regulatory Notice 15-33, “As part of a firm's obligation to supervise the businesses in which it engages, FINRA expects each firm to regularly assess its funding and liquidity risk management practices so that it can continue to operate under adverse circumstances, whether these result from an idiosyncratic or a systemic event.” Further, as stated in Regulatory Notice 10-57, “FINRA expects broker-dealers affiliated with holding companies to undertake these efforts at the broker-dealer level, in addition to their planning at the holding-company level.”49
  • Investment advice tools: Market participants are exploring the use of AI tools that generate client risk profiles and potential investment recommendations. These tools may aid in developing a new investment strategy, rebalancing portfolios, suggesting specific products or asset classes, or offering tax-minimization strategies. Market participants may benefit from considering how SEC Regulation Best Interest (BI) and FINRA Rule 2111 (Suitability) would apply in these contexts. In addition, as noted in FINRA’s Report on Digital Investment Advice, firms should ensure “sound governance and supervision, including effective means of overseeing suitability of recommendations, conflicts of interest, customer risk profiles and portfolio rebalancing.”

Additional Considerations

  • Cybersecurity: As noted in Section II of this report, cybersecurity remains a key threat area for the financial services industry. While AI technology empowers the industry to identify potential security threats and attacks in real time, use of related applications may also pose new vulnerabilities and threats. For instance, AI-based applications that pull in data from multiple sources may expose the firm to new security risks. In addition, customer-facing tools offered by firms on third-party platforms (e.g., virtual assistants offered on third-party consumer devices) could also pose security risks, such as those introduced through vulnerabilities of those third-party platforms or through inadequate customer authentication procedures. Firms would benefit from incorporating cybersecurity as a critical component of the evaluation, development, and testing process of any AI-based application. For additional resources on this topic, including applicable rules, guidance, and FINRA’s report on Cybersecurity Practices, refer to FINRA’s webpage on cybersecurity.
  • Outsourcing and vendor management: As firms look to take advantage of the benefits offered by AI-based tools, many are choosing to outsource specific functions or purchase turnkey applications from vendors. Some vendors are developing niche products that leverage AI for specific activities (e.g., financial crime monitoring and trade surveillance). Use of such vendor tools can be appealing to both small and large firms that seek to implement AI-based technology with low upfront capital investment and faster implementation time.50 Firms are reminded that outsourcing an activity or function to a third-party does not relieve them of their ultimate responsibility for compliance with all applicable securities laws and regulations and FINRA rules. As such, firms should review and update their WSPs to ensure that they appropriately address outsourcing arrangements (see, e.g., Notice to Members 05-48 (Outsourcing)) and to ensure that the security of the third-party meets or exceeds that expected by the firms. Firms may also wish to consider introducing language in contracts with third-party vendors that includes, but is not limited to, requiring vendors to notify firms in the event of a security breach and giving firms the right to audit, including the ability to review third-party System and Organization Controls (SOC) reports.
  • Books and records: The use of AI applications may lead to the creation of new records. Firms should review the use of their AI tools and systems to ensure compliance with recordkeeping obligations, such as those associated with Exchange Act Rules 17a-3 and 17a-4 and FINRA Rule 4510 (Books and Records Requirements). For example, the use of AI tools with respect to chatbots and virtual assistants may create novel issues in the context of compliance with applicable recordkeeping requirements.

  • Workforce structure: Adoption of AI tools and services could potentially impact a firm’s workforce in multiple ways. AI-based applications may conduct certain tasks previously performed manually in a more effective manner and in a fraction of the time. This could result in a reduced number of jobs requiring certain skillsets and an increased number requiring others. This transition may be challenging for many reasons, including a significant shortage of individuals with AI-related skills in the industry.51 Firms may also face challenges in adjusting the culture of their technology functions as they shift away from a traditional waterfall methodology to a more agile process. Accordingly, as firms seek to adopt AI-based technology tools, they may want to consider reviewing the potential impact on their workforce and take appropriate steps to, for example, address any new staffing needs or training required for the use of new applications.


29 For instance, in Apr. 2019, the European Commission published a set of non-binding “Ethics guidelines for trustworthy AI”, prepared by the Commission's High-Level Expert Group on AI. European Commission, Ethics Guidelines for Trustworthy AI, Apr. 8, 2019, https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.

30 Supra note 1.  While the paper highlights certain regulatory and implementation areas that broker-dealers may wish to consider as they adopt AI, the paper does not cover all applicable regulatory requirements or considerations. FINRA encourages firms to conduct a comprehensive review of all applicable securities laws, rules, and regulations to determine potential implications of implementing AI-based applications.

31 Model validation refers to “the set of processes and activities intended to verify that models are performing as expected, in line with their design objectives and business uses. Effective validation helps to ensure that models are sound, identifying potential limitations and assumptions and assessing their possible impact.” Board of Governors of the Federal Reserve System, Supervisory Letter (SR 11-7) on Guidance on Model Risk Management, Apr. 4, 2011, https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm.

32 In its February 2020 white paper, the European Commission also noted the ‘black box effect’ as one of its top concerns: “opacity, complexity, unpredictability, and partially autonomous behavior may make it hard to verify compliance with, and may hamper the effective enforcement of, rules of existing EU law.” European Commission, White Paper on Artificial Intelligence – A European Approach to Excellence and Trust, Feb. 19, 2020, https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf.

33 In simple machine learning models (e.g., models that use traditional statistical methods, such as logistic regression or decision trees), one can follow the logic used by the models and the factors that contribute to the final outcome. However, more complex models (e.g., deep learning models) involve multiple layers and a dynamic, iterative learning process, where the internal learnings are opaque, making it difficult to identify the specific factors and their interrelationships that lead to the final outcome. Despite the challenges, these more complex AI models continue to garner interest from the industry because they offer the potential to be more powerful in identifying patterns and making more precise predictions relative to simpler models.

34 Several research studies have noted that data mining can lead to incorrect or misleading results because of the identification of spurious correlations. See, for instance, Hou, Kewei and Xue, Chen and Zhang, Lu, Replicating Anomalies, Working Paper No. 2017-03-010 presented at the 28th Annual Conference on Financial Economics and Accounting, Fisher College of Business, June 12, 2017, https://ssrn.com/abstract=2961979.

35 Matt Turek, Explainable Artificial Intelligence (XAI), Defense Advanced Research Projects Agency, https://www.darpa.mil/program/explainable-artificial-intelligence.

36 MIT Technology Review, Digital Challenges: Overcoming Barriers to AI Adoption, May 28, 2019, https://www.technologyreview.com/s/613501/digital-challenges-overcoming-barriers-to-ai-adoption/ (in which EY and the Massachusetts Institute of Technology conducted a survey at the 2019 EmTech Digital Conference and found that “42% cite a lack of quality, unbiased data as the greatest barrier to AI adoption.”).

37 For example, Federal Study Confirms Racial Bias of Many Facial Recognition Systems, Casts Doubt on Their Expanding Use, The Washington Post, Dec. 19, 2019, https://www.washingtonpost.com/technology/2019/12/19/federal-study-confirms-racial-bias-many-facial-recognition-systems-casts-doubt-their-expanding-use/.

38 Over the last few years, there have been several reports and notable incidents across various industries of ML models producing allegedly biased and discriminatory results. For example, the European Commission, in its February 2020 white paper, noted its concern about bias and discrimination: “The use of AI can affect the values on which the EU is founded and lead to breaches of fundamental rights, including the rights to freedom of expression, freedom of assembly, human dignity, nondiscrimination based on sex, racial or ethnic origin, religion or belief, disability, age or sexual orientation, as applicable in certain domains, protection of personal data and private life, or the right to an effective judicial remedy and a fair trial, as well as consumer protection. These risks might result from flaws in the overall design of AI systems… or from the use of data without correcting possible bias…” European Commission, White Paper on Artificial Intelligence – A European Approach to Excellence and Trust, Feb. 19, 2020, https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf.

39 Rubin Nunn, Workforce Diversity Can Help Banks Mitigate AI Bias, American Banker, May 30, 2018, https://www.americanbanker.com/opinion/workforce-diversity-can-help-banks-mitigate-ai-bias. See also, Jake Silberg & James Manyika, Tackling Bias in Artificial Intelligence (and in Humans), June 2019, https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans.

40 James Manyika et al., What Do We Do About the Biases in AI?, Harvard Business Review, Oct. 25, 2019, https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.

41 Some examples of tools that reportedly assist in identifying potential bias in data include IBM’s AI Fairness 360 and Google’s Responsible AI Practices.

42 Deep fakes refer to a form of synthetic or manipulated communication in which an existing video, image, or audio clip is replaced or superimposed with someone else’s likeness in order to create false impressions or communications.

43 FINRA, Social Sentiment Investing Tools – Think Twice Before Trading Based on Social Media, Apr. 2019, https://www.finra.org/investors/alerts/social-sentiment-investing-tools.

44 U.S. Securities and Exchange Commission, Final Rule: Privacy of Consumer Financial Information (Regulation S-P), https://www.sec.gov/rules/final/34-42974.htm.

45 Obfuscation may be accomplished with encryption, tokenization, or anonymization techniques.

46 These are some of many possible areas that broker-dealers may wish to consider as they explore adjusting their supervisory processes. This does not express any legal position, does not create any new requirements or suggest any change in any existing regulatory obligations, nor does it provide relief from any regulatory obligations. It is not intended to cover all applicable regulatory requirements or considerations. FINRA encourages firms to conduct a comprehensive review of all applicable securities laws, rules, and regulations to determine potential implications of implementing AI-based tools and systems.

47 FINRA RegTech White Paper.

48 FINRA, Regulatory Notice 15-09 on Effective Supervision and Control Practices for Firms Engaging in Algorithmic Trading Strategies, Mar. 2015, https://www.finra.org/rules-guidance/notices/15-09.

49 FINRA, Regulatory Notice 10-57 on Funding and Liquidity Risk Management Practices, https://www.finra.org/rules-guidance/notices/10-57; FINRA, Regulatory Notice 15-33 on Guidance on Liquidity Risk Management Practices, https://www.finra.org/rules-guidance/notices/15-33.

50 However, some market participants have raised concerns that use of a limited number of vendors to develop AI tools for an industry, if not managed appropriately, may create overreliance and concentrate risks related to errors or malfunctioning of AI systems.

51 Scott Likens, How Artificial Intelligence Is Already Disrupting Financial Services, Barron’s, May 16, 2019, https://www.barrons.com/articles/how-artificial-intelligence-is-already-disrupting-financial-services-51558008001 (stating that “[A]lmost a third of the financial executives in the AI survey are worried that they won't be able to meet the demand for AI skills over the next five years.”).