MDF Law PLLC Comment On Section V: Request for Comments

Marc Fitapelli
MDF Law PLLC

I appreciate the opportunity to submit comments on FINRA’s report, Social Media–Influenced Investing. The report provides an important and timely examination of how social media platforms increasingly shape investor behavior and influence financial decision-making across the retail investing landscape.

From the perspective of an elder fraud attorney and consumer protection advocate, the most significant and rapidly escalating risk in this area is no longer traditional misinformation or social media hype, but rather the widespread deployment of artificial intelligence to impersonate financial professionals, regulators, and trusted institutions. Artificial intelligence has fundamentally altered the nature of financial fraud, particularly as it impacts older investors who are statistically more vulnerable to deception, impersonation schemes, and identity-based financial abuse.

AI-driven financial fraud now involves sophisticated systems that can convincingly pose as licensed financial advisors, broker-dealers, customer support representatives, and even regulatory authorities such as FINRA and the Securities and Exchange Commission. These systems generate highly personalized, professional-quality communications in real time, often using voice synthesis, deepfake video, and conversational AI to create the appearance of legitimate financial authority. For elderly investors, who tend to rely more heavily on perceived institutional trust, this form of synthetic impersonation presents an unprecedented level of risk.

The traditional regulatory framework governing financial communications assumes identifiable human speakers, traceable firms, and auditable records. Artificial intelligence undermines each of these assumptions. AI-generated financial communications frequently lack a human origin, operate across multiple jurisdictions, and can disappear instantly after defrauding a victim. From an elder fraud litigation standpoint, this creates severe barriers to accountability, recovery, and enforcement, particularly when victims have interacted with systems that present themselves as licensed professionals but have no legal existence.

This development creates a uniquely difficult enforcement environment for FINRA and other regulators. Investor harm increasingly occurs outside any regulated entity, without a broker-dealer to supervise, a firm to examine, or a human actor to discipline. Financial fraud is becoming software-driven rather than institution-driven. One bad actor can deploy thousands of AI-generated financial personas across platforms, each operating continuously, adapting to user behavior, and targeting vulnerable investors at negligible cost.

For elderly investors, the risk is not merely bad financial advice but complete synthetic authority. Older victims are now encountering AI systems that convincingly present themselves as compliance officers, regulatory agents, or institutional representatives. These systems exploit trust, urgency, and perceived legitimacy, often inducing victims to transfer retirement funds, disclose sensitive financial credentials, or authorize irreversible transactions. From an elder fraud attorney’s perspective, this shift represents the single greatest structural threat to investor protection in modern financial markets. The core issue is no longer misinformation but automated, adaptive, personalized financial influence that operates entirely outside existing licensing, registration, and supervisory regimes.

Artificial intelligence does not simply amplify existing risks associated with social media. It fundamentally changes the nature of financial deception. The regulatory problem is no longer how to supervise human behavior on digital platforms, but how to regulate financial influence generated by autonomous systems that simulate professional authority without any legal identity.

FINRA’s report is an important starting point, but future regulatory guidance must explicitly treat artificial intelligence impersonation and synthetic financial authority as primary risk drivers in social media–influenced investing. Investor protection, particularly for older and vulnerable populations, now depends on addressing AI-based fraud as a core regulatory priority rather than a secondary technological concern.

Any effective framework for elder financial abuse prevention must recognize that financial fraud is increasingly being committed by machines rather than people, and that the traditional regulatory perimeter no longer captures the true sources of investor harm. Without addressing artificial intelligence directly, the regulatory system will remain structurally misaligned with the actual mechanisms of modern financial exploitation.

Respectfully submitted,

Marc Fitapelli

MDF Law PLLC