
FTC Takes Aim at Tech Platform Content Moderation

The Federal Trade Commission under new Chair Andrew Ferguson is wasting little time scrutinizing how technology platforms moderate content. On February 20, 2025, the Commission invited public comment “to better understand how technology platforms deny or degrade (such as by 'demonetizing' and 'shadow banning') users’ access to services based on the content of the users’ speech or their affiliations, including activities that take place outside the platform.” The Request for Information (RFI) warns that the FTC could deem these content moderation practices unfair or deceptive acts or practices, or unfair methods of competition, if they harm consumers.

How did we get here?

After Twitter began fact-checking President Trump's posts in 2020, the administration issued an Executive Order directing the FTC to prohibit unfair and deceptive acts or practices that restrict speech. Almost five years later, the Commission is acting on this directive.

The previous Commission under Chair Lina Khan expanded the use of RFIs, seeking public input on serial acquisitions, roll-up strategies, and surveillance pricing. Although Chair Ferguson rescinded many RFIs initiated by his predecessor, it appears he will continue the strategy, putting businesses on notice of potential enforcement priorities. Given this practice and the Commission's continued adherence to the 2023 Merger Guidelines, the change in administration may not bring the complete reversal of FTC practices that many expected.

What might this RFI mean going forward?

The future of content moderation under this administration is uncertain, but the RFI offers insight into likely focus areas. It defines technology platforms to include companies providing social media, video sharing, photo sharing, ride sharing, event planning, internal or external communications, or other internet services. This expansive definition suggests the inquiry may reach businesses not typically targeted over their content moderation.

As is common in FTC actions, the RFI focuses on a company's public representations and the harm misrepresentations could cause consumers. Platform content moderation often rests on an exceptionally complex web of policies and procedures that gives platforms wide latitude in their editorial decisions. Businesses should review those policies and procedures carefully against the RFI's topics and questions, particularly regarding user appeals processes and documentation of the reasons for adverse actions.

Question 5(a) examines whether platform policies were adopted, or adverse actions taken, in response to pressure from advertisers or other businesses. After the brand exodus from X/Twitter that followed Elon Musk's purchase of the platform and removal of content moderation support, Musk and conservative groups scrutinized advertiser practices. Musk sued the World Federation of Advertisers, leading to the sudden dissolution of the Global Alliance for Responsible Media. Advertiser boycotts of platforms over content moderation policies may also be a target of the Ferguson FTC.

While any government attempt to regulate platform policies will likely face significant constitutional challenges, the FTC's inquiry signals a new era of regulatory scrutiny for content moderation practices. Technology platforms should review their policies carefully and prepare for potential enforcement. The public comment period closes on May 21, 2025, and the Commission's next steps may well shape the future of online content moderation.