Ethical and Regulatory Risks of Artificial Intelligence in Advisory Practice
Guest Expert: Richard Chen, Esq., Brightstar Law Group



Discussions & Comments

missy@financialexpertsnetwork.com 5 days 3 hours ago
A few comments from listeners when they were asked what they learned from the webinar:

More need to pay attention to AI data sources and conclusions, which may not be factual.
- John L.

I didn't realize the many divisions there are in the AI field with respect to regulatory oversight.
- Lois B.

The AI issue is something I know we need to implement more thoroughly and we now have an asset we can use to keep us within the appropriate guardrails. We are very cautious with cyber security and have held back some for that reason.
- Paul L.




This session, led by Richard Chen, provided a comprehensive framework for understanding how artificial intelligence (AI)—particularly generative AI and large language models (LLMs)—intersects with the regulatory obligations of financial advisors. The central takeaway is that AI does not introduce entirely new regulatory regimes, but rather amplifies existing fiduciary, compliance, and supervisory responsibilities under laws such as the Investment Advisers Act and Regulation S-P. 

Chen emphasized that AI tools should be treated similarly to junior employees—fast, capable, and helpful, but requiring supervision, verification, and controls. Advisors who rely on AI outputs without appropriate diligence risk violating core obligations related to accuracy, client confidentiality, and fair dealing.

The session also highlighted that the risks of AI are not theoretical. Inaccurate outputs (“hallucinations”), data leakage through integrations, biased recommendations, and inadequate recordkeeping can all lead to regulatory exposure. Advisors must implement governance frameworks, vendor oversight, employee training, and audit processes to responsibly integrate AI into their practices.

From a regulatory standpoint, key governing frameworks include:

  • Investment Advisers Act of 1940 (fiduciary duty, marketing rule, anti-fraud provisions) 
  • Regulation S-P (privacy and safeguarding of client information) 
  • Books and Records Rule (Rule 204-2) (record retention requirements) 

These rules collectively require that AI use be accurate, supervised, secure, documented, and in the best interest of clients.


Key Topics and Expanded Insights

1. Understanding AI Architecture: Why It Matters for Risk Management

Chen outlined a practical framework for understanding how AI tools function, breaking them into three core layers:

  • Large Language Model (LLM): Generates outputs based on probability and training data 
  • Orchestration Layer: Coordinates inputs, instructions, and data retrieval 
  • Context Layer: Supplies data from sources such as documents, databases, or connectors 
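The layered structure above can be sketched in minimal Python. All class and function names here are illustrative assumptions for exposition, not any vendor's actual API:

```python
# Minimal sketch of the three-layer framework described above.
# All names are hypothetical, for illustration only.

def context_layer(query: str, documents: dict) -> str:
    """Context layer: supplies data from documents, databases, or connectors."""
    return documents.get(query, "")

def llm(prompt: str) -> str:
    """LLM layer: in a real system this is a probabilistic model call;
    stubbed here only to show where the model sits in the stack."""
    return f"[model output for: {prompt}]"

def orchestrate(query: str, documents: dict) -> str:
    """Orchestration layer: coordinates instructions, retrieval, and the model."""
    context = context_layer(query, documents)
    prompt = (
        "Instructions: answer using only the context.\n"
        f"Context: {context}\n"
        f"Question: {query}"
    )
    return llm(prompt)

docs = {"client risk profile": "Moderate risk tolerance, 10-year horizon."}
print(orchestrate("client risk profile", docs))
```

Note that if `context_layer` returns nothing (poor or missing inputs), the model still produces a confident-sounding answer, which is exactly the "garbage in, garbage out" risk discussed below.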

A key insight is that AI does not “reason” in the human sense—it predicts likely outputs based on patterns. This has several implications:

  • Outputs may sound authoritative but still be incorrect 
  • Accuracy depends heavily on the quality of inputs (“garbage in, garbage out”) 
  • Context limitations (token limits) can cause incomplete or distorted outputs 

Example: If an advisor uploads incomplete client data or relies on unverified web sources, the AI may produce a confident but flawed recommendation.

Planning implications:

  • Advisors must understand how AI tools work—not just how to use them 
  • Prompt design and input quality are critical to output reliability 
  • Overreliance on AI without understanding its mechanics increases compliance risk 

2. Fiduciary Duty and AI: Duty of Care and Duty of Loyalty

AI use directly implicates the advisor’s fiduciary obligations:

Duty of Care

Advisors must ensure recommendations are:

  • Based on accurate information 
  • Suitable for the client’s objectives and circumstances 

Using AI-generated research or recommendations without verification can violate this duty. 

Duty of Loyalty

Advisors must:

  • Avoid or disclose conflicts of interest  
  • Ensure advice is not biased or self-serving 

AI tools may introduce bias by:

  • Reinforcing prior user behavior or preferences 
  • Drawing from biased datasets 
  • “Pleasing” the user with favorable outputs 

Example: An AI tool trained on prior firm recommendations may disproportionately favor products that generate higher fees, creating undisclosed conflicts.

Planning implications:

  • AI outputs must be independently validated before use in advice 
  • Advisors should consider whether AI use itself is a material disclosure issue 
  • Bias mitigation should be an explicit part of compliance processes 

3. Accuracy, Hallucinations, and Fact-Checking Obligations

A major risk discussed was AI-generated inaccuracies (“hallucinations”), including fabricated facts or citations.

Chen noted that:

  • Many inaccuracies stem from poor inputs or unreliable sources 
  • AI tools are designed to sound confident, increasing the risk of misuse 

Real-world example referenced:
Law firms have been sanctioned for submitting court filings citing non-existent cases generated by AI.

Regulatory implications:

  • Violations of the SEC Marketing Rule if claims are not substantiated 
  • Anti-fraud violations if false information is presented to clients 
  • Breach of fiduciary duty if advice is based on inaccurate data 

Best practices:

  • Verify outputs using primary sources (e.g., IRS, SEC) 
  • Use AI tools that provide citations and review them directly 
  • Apply “front-end and back-end review” (input quality + output validation) 
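One way to operationalize back-end review is to flag any AI-cited source that is not a recognized primary authority. The helper and domain list below are illustrative assumptions, not a complete compliance control:

```python
from urllib.parse import urlparse

# Hypothetical back-end review helper: flag AI-cited URLs whose domain is
# not on an approved list of primary sources (illustrative list only).
APPROVED_PRIMARY_SOURCES = {"irs.gov", "sec.gov", "ecfr.gov"}

def unverified_citations(citations: list[str]) -> list[str]:
    """Return citations whose domain is not an approved primary source."""
    flagged = []
    for url in citations:
        host = urlparse(url).hostname or ""
        domain = ".".join(host.split(".")[-2:])  # "www.sec.gov" -> "sec.gov"
        if domain not in APPROVED_PRIMARY_SOURCES:
            flagged.append(url)
    return flagged

cites = [
    "https://www.sec.gov/rules/final/2020/ia-5653.pdf",
    "https://example-blog.com/ai-tax-tips",
]
print(unverified_citations(cites))  # flags the non-primary source
```

Flagged citations would then go to a human reviewer, who confirms the source actually exists and says what the AI claims it says.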

Planning implications:

  • Advisors should prioritize fact-checking for high-impact outputs (e.g., tax advice, investment recommendations) 
  • AI should assist—not replace—professional judgment 
  • Firms should formalize verification procedures 

4. Confidentiality and Data Security Risks (Regulation S-P)

Confidentiality was identified as the highest-risk area for AI adoption. 

Key risks include:

  • Sharing client data with AI tools that may store or train on that data 
  • Exposure through integrations (e.g., CRM, email, note-taking tools) 
  • Data leakage across multiple vendors in the AI stack 

Regulation S-P requires firms to implement:

  • Administrative safeguards 
  • Technical controls 
  • Physical protections for client data 

High-risk scenarios:

  • Uploading financial plans or client profiles into AI tools 
  • Using AI note-takers during client meetings 
  • Granting AI access to email or CRM systems 

Best practices:

  • Avoid entering non-public client information when possible 
  • Use enterprise-grade AI tools with contractual data protections 
  • Implement access controls and data classification systems 
  • Conduct vendor due diligence (e.g., SOC 2 reports) 
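Data minimization can be enforced mechanically before a prompt ever leaves the firm. The sketch below strips obvious identifier-like patterns; the regexes are simplistic placeholders, not a complete PII filter:

```python
import re

# Illustrative data-minimization step: replace SSN- and account-number-like
# substrings before text reaches an external AI tool. These patterns are
# placeholders only; a production filter would be far more thorough.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\bacct\s*#?\s*\d{6,}\b", re.IGNORECASE), "[REDACTED-ACCT]"),
]

def minimize(text: str) -> str:
    """Redact identifier-like substrings before the prompt leaves the firm."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

prompt = "Summarize plan for client, SSN 123-45-6789, acct #12345678."
print(minimize(prompt))
```

A filter like this complements, rather than replaces, contractual protections and enterprise-grade tooling: even redacted prompts should only go to vendors that have passed due diligence.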

Planning implications:

  • Data minimization is a key risk mitigation strategy 
  • Vendor selection should be treated as a compliance function 
  • Cybersecurity and AI governance are now intertwined 

5. Vendor Due Diligence and Third-Party Risk

AI workflows often involve multiple vendors (LLMs, orchestration tools, connectors), increasing complexity.

Advisors must:

  • Evaluate vendor security practices 
  • Understand how data is stored, shared, and processed 
  • Ensure contractual protections (e.g., no data training clauses) 

Chen emphasized:

  • The importance of SOC 2 reports and security audits 
  • The need for 72-hour breach notification provisions under updated regulatory expectations 
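A firm might track these diligence points in a simple structured record. The field names and thresholds below are illustrative assumptions reflecting the session's emphasis on SOC 2 reports, no-training clauses, and 72-hour breach notice:

```python
from dataclasses import dataclass

# Hypothetical vendor-review record; fields are illustrative, not a
# regulatory checklist.
@dataclass
class VendorReview:
    name: str
    has_soc2_report: bool
    no_training_clause: bool   # contract bars training on client data
    breach_notice_hours: int   # contractual notification window

    def open_issues(self) -> list[str]:
        issues = []
        if not self.has_soc2_report:
            issues.append("missing SOC 2 report")
        if not self.no_training_clause:
            issues.append("no contractual bar on training with client data")
        if self.breach_notice_hours > 72:
            issues.append("breach notice window exceeds 72 hours")
        return issues

review = VendorReview("NoteTakerAI", True, False, 96)
print(review.open_issues())
```

Reviews like this should be refreshed periodically, consistent with the point below that vendor due diligence is ongoing, not one-time.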

Example risk:
A note-taking AI tool with weak security could expose sensitive client conversations.

Planning implications:

  • Vendor due diligence should be ongoing, not one-time 
  • Firms may engage third parties to assess vendor compliance 
  • Cyber insurance may mitigate financial risk but not reputational damage 

6. Recordkeeping and Supervision Requirements

Under the Books and Records Rule, firms must retain:

  • Communications with clients 
  • Marketing materials 
  • Research and recommendations 
  • Compliance documentation 

AI introduces new recordkeeping challenges:

  • Prompts and outputs may constitute records 
  • Off-platform (“shadow AI”) use is difficult to monitor 
  • Data may reside outside firm-controlled systems 

Regulatory requirement:
Records must generally be retained for at least five years. (https://www.ecfr.gov/current/title-17/section-275.204-2)

Best practices:

  • Use enterprise AI tools with audit logs and archiving 
  • Prohibit or restrict personal AI tool usage 
  • Maintain access to records even after vendor termination 
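If prompts and outputs may constitute records, they should be captured the moment they occur. A minimal append-only log might look like the sketch below; the schema and storage are illustrative assumptions:

```python
import time

# Sketch of an append-only prompt/output log supporting the retention and
# audit points above; schema and storage are illustrative assumptions.
def log_interaction(log: list, user: str, prompt: str, output: str) -> dict:
    """Append a timestamped record of an AI interaction to the firm's log."""
    record = {
        "timestamp": time.time(),
        "user": user,
        "prompt": prompt,
        "output": output,
    }
    log.append(record)
    return record

audit_log: list = []
log_interaction(audit_log, "advisor1", "Draft client letter", "[draft text]")
# In practice, records would be archived in firm-controlled storage for the
# required retention period (generally at least five years under Rule 204-2).
print(audit_log[0]["user"])
```

The same log doubles as the audit trail compliance teams can review for shadow-AI detection and bias monitoring.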

Planning implications:

  • AI governance must include record retention policies 
  • Compliance teams should monitor AI usage through audit logs 
  • Firms should prepare for SEC examination scrutiny 

7. Bias and Conflicts of Interest in AI Outputs

Bias can arise from:

  • Training data 
  • User history and preferences 
  • Firm-specific inputs 

AI tools may:

  • Reinforce existing beliefs 
  • Favor certain products or strategies 
  • Generate outputs aligned with perceived user preferences 

Regulatory concern:
Bias can lead to undisclosed conflicts of interest, violating fiduciary duty. 

Mitigation strategies:

  • Use neutral, objective prompts 
  • Provide diverse data inputs 
  • Conduct periodic output reviews 
  • Monitor audit logs for patterns 
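A periodic output review can be as simple as measuring how concentrated AI-assisted recommendations are in particular products. The threshold and product names below are illustrative assumptions:

```python
from collections import Counter

# Illustrative periodic review: compute each product's share of
# AI-assisted recommendations to spot concentration that could signal bias.
def product_concentration(recommendations: list[str]) -> dict[str, float]:
    """Return each product's share of total recommendations."""
    counts = Counter(recommendations)
    total = len(recommendations)
    return {product: n / total for product, n in counts.items()}

recs = ["Fund A", "Fund A", "Fund A", "Fund B"]
shares = product_concentration(recs)
# Flag any product dominating the mix (50% is an arbitrary example threshold).
flagged = [p for p, share in shares.items() if share > 0.5]
print(flagged)  # → ['Fund A']
```

Concentration alone does not prove bias, but it tells the compliance team where to look first, and which outputs to re-run with neutral prompts.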

Planning implications:

  • Bias detection should be part of compliance testing 
  • Advisors should remain skeptical of “too perfect” answers 
  • Disclosure of AI use may become a best practice 

8. Agentic AI and Automation Risks

Agentic AI tools can:

  • Execute actions (e.g., send emails, update CRM) 
  • Automate workflows 
  • Interact with external systems 

While powerful, they introduce heightened risks:

  • Unauthorized actions 
  • Data corruption 
  • Unintended client communications 

Key caution:
“AI amplifies efficiency—but also amplifies risk.” 

Best practices:

  • Limit access and permissions 
  • Monitor actions through logs 
  • Conduct periodic audits 
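The "limit access and permissions" point can be implemented as a dispatch gate: low-risk actions execute automatically, while high-risk ones are held for human sign-off. Action names and risk tiers below are illustrative assumptions:

```python
# Sketch of permission gating for agentic actions: low-risk actions run
# automatically; high-risk ones are queued for human approval.
# Action names and risk tiers are illustrative assumptions.
HIGH_RISK_ACTIONS = {"send_client_email", "execute_trade"}

def dispatch(action: str, pending_review: list) -> str:
    """Route an agent-requested action based on its risk tier."""
    if action in HIGH_RISK_ACTIONS:
        pending_review.append(action)   # hold for human sign-off
        return "queued_for_review"
    return "executed"                   # e.g. internal note, draft update

queue: list = []
print(dispatch("update_crm_note", queue))    # → executed
print(dispatch("send_client_email", queue))  # → queued_for_review
```

Which actions count as high-risk is a governance decision; the set should expand or contract as the firm gains experience with a given tool.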

Planning implications:

  • Automation should be phased and controlled 
  • High-risk functions require human oversight 
  • Governance frameworks must evolve with capabilities 

Practical Advisor Takeaways

Treat AI as a supervised tool, not an autonomous decision-maker. It should enhance—not replace—professional judgment.

Implement a formal AI governance framework, including:

  • Approved tools list 
  • Usage policies 
  • Training requirements 
  • Audit and monitoring processes 

Prioritize data protection:

  • Avoid sharing client-sensitive information when possible 
  • Use enterprise solutions with contractual safeguards 
  • Conduct vendor due diligence 

Adopt rigorous verification practices:

  • Fact-check critical outputs using primary sources 
  • Review citations and underlying data 
  • Use dual-review processes for high-risk applications 

Control and monitor usage:

  • Restrict “shadow AI” 
  • Use tools with audit logs and access controls 
  • Maintain records in compliance with regulatory requirements 

Address bias proactively:

  • Use objective prompts 
  • Diversify inputs 
  • Monitor outputs for patterns 

Prepare for regulatory scrutiny:

  • Document policies and procedures 
  • Maintain records of AI usage 
  • Be able to demonstrate supervision and controls 

External Reference Sources

U.S. Securities and Exchange Commission, Investment Advisers Act of 1940
https://www.sec.gov/about/laws/iaa40.pdf

U.S. Securities and Exchange Commission, Marketing Rule (Rule 206(4)-1)
https://www.sec.gov/rules/final/2020/ia-5653.pdf

U.S. Securities and Exchange Commission, Books and Records Rule (Rule 204-2)
https://www.ecfr.gov/current/title-17/section-275.204-2

U.S. Securities and Exchange Commission, Regulation S-P (Privacy of Consumer Financial Information)
https://www.sec.gov/rules/final/34-42974.htm

Federal Trade Commission, Safeguards Rule
https://www.ftc.gov/business-guidance/privacy-security/gramm-leach-bliley-act

National Institute of Standards and Technology (NIST), AI Risk Management Framework
https://www.nist.gov/itl/ai-risk-management-framework

Cybersecurity and Infrastructure Security Agency (CISA), Data Security Guidance
https://www.cisa.gov/cybersecurity