Congress passes ‘Take It Down’ Act to fight deepfakes

by Kylie Bower


Congress has passed a bill that forces tech companies to take action against certain deepfakes and revenge porn posted on their platforms.

In a 409-2 vote on Monday, the U.S. House of Representatives passed the “Take It Down” Act, which has received bipartisan support. The bill also received vocal support from celebrities and First Lady Melania Trump. The bill already passed the Senate in a vote last month.

The Take It Down Act will now be sent to President Donald Trump, who is expected to sign it into law.

First introduced by Republican Senator Ted Cruz and Democratic Senator Amy Klobuchar in 2024, the Take It Down Act would require that tech companies take quick action against nonconsensual intimate imagery. Platforms would be required to remove such content within 48 hours of a takedown request. The Federal Trade Commission could then sue platforms that do not comply with such requests.

In addition to targeting tech platforms, the Take It Down Act also carves out punishments, which include fines and potential jail time, for those who create and share such imagery. The new law would make it a federal crime to publish — or even threaten to publish — explicit nonconsensual images, which would include revenge porn and deepfake imagery generated with AI.

Digital rights groups have shared their concerns regarding the Take It Down Act. Activists have said that the bill could be weaponized to censor legally protected speech, and that legal content could be inaccurately flagged for removal.

Despite these concerns, the Take It Down Act has also received support from some of the tech platforms it seeks to police, such as Snapchat and Roblox.

Congress isn’t finished addressing AI and deepfakes this year either. Both the NO FAKES Act of 2025 and the Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025 have also been introduced this session. The former seeks to protect individuals from having their voice replicated by AI without their consent, while the latter aims to protect original works and require transparency around AI-generated content.
