Illuminating the Path: A Guiding Light in the AI Financial Frontier
March 19th, 2024

Last year, the SEC proposed a rule to address conflicts of interest associated with using predictive data analytics (PDA) and similar technologies. The 60-day comment period closed in October 2023, and the staff is currently reviewing the submitted comments. Commissioners, including Hester Peirce, are meeting with industry representatives to discuss whether rules are needed in this area and, if so, what form they should take. This three-part series highlights the SEC's concerns with PDA and artificial intelligence and suggests ways the Commission could address those concerns without adopting a rule that is a solution in search of a problem.

Commissioner Peirce advocates a more targeted approach in which rules address specific, well-defined issues. In the context of AI, those issues are not yet well defined, at least not in the financial services space, which does not necessarily suffer from some of the privacy, data, and intellectual property issues inherent in consumer-oriented products. Peirce cautions against hastily adopting rules to address AI simply because it is a trending topic. She suggests that existing rules first be evaluated for their applicability to AI-related issues, and that tailored solutions be developed only if genuinely new issues arise.

With that in mind, we dive into some of the concerns raised by the SEC's rule proposal and how the industry could address them without the SEC's help. We begin with the complexity of compliance.

Complexity of Compliance

The proposed SEC rules aim to address conflicts of interest associated with the use of PDA and similar technologies, including AI, by requiring firms to identify and then eliminate or neutralize conflicts that arise from these technologies. However, the compliance requirements could be particularly burdensome for smaller firms, which may lack the resources to evaluate, test, and monitor AI technologies.

Some suggestions for addressing these concerns include:

Partner with Reputable Vendors

While this sounds simple, most development in this space will occur at large asset management firms such as BlackRock, Fidelity, and Vanguard. These global firms have the resources to hire engineers, lawyers, and other staff to develop products that give them a competitive edge. Additionally, these global behemoths have vast amounts of data on which they can train their models. Smaller firms do not have such resources and will depend on third-party tools, including open-source libraries and models.

While a certification process would help smaller firms identify reputable vendors, such a process is beyond the SEC's mandate. It represents a level of intrusion that is anathema to our capital markets.

Use Explainable AI (XAI) Tools

Since the SEC either cannot or should not develop a certification process for reputable vendors, could it require such vendors to meet specific XAI standards? The answer is again no since the SEC's jurisdiction does not allow it to oversee non-registered service providers. However, below are some industry practices that could alleviate some of the SEC's concerns.

  • Model Transparency: Vendors could explain to asset managers how the vendor's AI models make decisions, including the logic and reasoning behind predictions and outputs. This could involve using simpler, more interpretable models or providing detailed documentation and explanations for more complex ones.
  • Feature Importance: Vendors could disclose feature importance to explain how each input variable influences the AI model's predictions or decisions, helping users understand which factors drive the model's behavior. Such disclosure would be essential if the asset manager uses these tools in its portfolio management process, and the resulting feature-importance summaries could support adequate disclosure to investors through offering documents or Form ADV (a minimal sketch follows this list).
  • Data Provenance and Audit Trails: Vendors could enforce strict guidelines on documenting the data used to train and test AI models, including sourcing, handling, preprocessing, and any transformations applied. This ensures that the data feeding into AI systems is traceable, reliable, and free from biases that could skew model outputs. It would also facilitate bias audits and other reviews.
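
For illustration only, here is a minimal sketch of how a vendor might surface feature importance, using scikit-learn's permutation importance on synthetic data. The feature names, model, and dataset are assumptions for the example, not any particular vendor's tooling.

```python
# Hypothetical illustration: surfacing which inputs drive a model's predictions.
# Feature names and data are synthetic; this is not a production disclosure tool.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["momentum", "volatility", "valuation", "liquidity"]  # assumed inputs
X = rng.normal(size=(500, len(feature_names)))
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.1, size=500)  # synthetic target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does the test-set score drop when a feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Output of this kind, translated into plain language, is the sort of material that could flow into offering documents or Form ADV disclosures.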

Asset managers could also require comprehensive audit trails that record the decision-making process of AI systems. This would enable retrospective analysis of how and why particular decisions were made, which is crucial for accountability and compliance.
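
As a rough sketch of what a decision-level audit record might capture, consider the following. The field names and the write_audit_record helper are hypothetical, not a prescribed schema.

```python
# Illustrative only: logging a single AI-driven decision so it can be reconstructed later.
# Field names and the write_audit_record helper are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

def write_audit_record(model_version, input_features, output, rationale, path="audit_log.jsonl"):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # A hash of the inputs lets reviewers verify what the model actually saw
        # without relying solely on the raw values stored below.
        "input_hash": hashlib.sha256(json.dumps(input_features, sort_keys=True).encode()).hexdigest(),
        "input_features": input_features,
        "output": output,
        "rationale": rationale,  # e.g., top contributing features or a narrative explanation
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a hypothetical portfolio rebalancing recommendation.
write_audit_record(
    model_version="rebalance-model-2024.03",
    input_features={"momentum": 0.42, "volatility": 0.18},
    output={"action": "reduce_position", "weight_change": -0.02},
    rationale="momentum and volatility were the dominant factors in this recommendation",
)
```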

In addition, asset managers could encourage or mandate adherence to internationally recognized standards and frameworks for responsible AI, such as those developed by the Institute of Electrical and Electronics Engineers (IEEE), the International Organization for Standardization (ISO), or industry-specific guidelines.

  • Performance Metrics: Vendors could provide comprehensive performance metrics that go beyond accuracy to cover fairness, bias, and robustness, and could update those metrics regularly.
  • Human-Readable Explanations: Vendors could build AI systems that include mechanisms to generate human-readable explanations for model outputs, enabling non-expert users to understand and trust AI decisions. This could be in the form of narrative explanations, decision trees, or visualizations.
  • User Feedback Loops: Vendors could embed mechanisms that allow users to provide feedback on AI decisions, contributing to continuous improvement and alignment with human expectations and ethical standards.
  • Ethical and Bias Assessment: Asset managers could require vendors to conduct periodic assessments of AI systems for ethical considerations and bias, including external audits by independent parties. This mirrors a common practice among other service providers, such as fund administrators, accountants, and transfer agents, who undergo Service Organization Controls (SOC) 1 Type 2 audits to test their internal controls for weaknesses. An AI-based SOC 1 audit could evaluate the AI's impact on various demographic groups to prevent discrimination and ensure fairness, as illustrated in the sketch below this list.
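
To make the bias-assessment idea concrete, below is a minimal sketch of one check an independent reviewer might run, comparing favorable-outcome rates across demographic groups. The data is synthetic, and the group labels and the 0.8 "four-fifths" benchmark are assumptions for illustration.

```python
# Illustrative bias check: compare favorable-outcome rates across groups.
# Data, group labels, and the 0.8 threshold are assumptions, not regulatory guidance.
import numpy as np

rng = np.random.default_rng(1)
groups = rng.choice(["group_a", "group_b"], size=1000)                    # synthetic demographics
approved = rng.random(1000) < np.where(groups == "group_a", 0.55, 0.45)  # synthetic model outcomes

rates = {g: approved[groups == g].mean() for g in np.unique(groups)}
ratio = min(rates.values()) / max(rates.values())  # disparate-impact ratio

for group, rate in rates.items():
    print(f"{group}: favorable-outcome rate = {rate:.2%}")
print(f"disparate-impact ratio = {ratio:.2f}"
      + (" (below the commonly cited 0.8 benchmark; flag for review)" if ratio < 0.8 else ""))
```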

Conclusion

By establishing strong industry standards, regulated entities could address conflicts and other issues without rulemaking. They would also create a well-informed community that could collaborate with the SEC, contributing to a regulatory environment that supports capital market growth and innovation while ensuring market integrity and investor protection. Such an approach reflects Commissioner Peirce's regulatory philosophy: precise, problem-oriented, and conducive to innovation, particularly in the rapidly evolving area of AI and technology.
