Summary
Fake AI video malware is spreading Noodlophile Stealer through scam editing sites, stealing logins and crypto wallets. This guide explains how the attack works and the safety steps to take.

As AI tools continue to shape content creation, cybercriminals are exploiting the hype with a new and highly deceptive tactic. In May 2025, researchers uncovered a campaign using fake AI video and image generation platforms to distribute a previously undocumented malware strain called Noodlophile Stealer. 

These fraudulent websites are being promoted through viral social media activity, including Facebook groups that attract thousands of views. Victims are encouraged to upload personal images and videos, expecting AI-generated content in return. Instead, they are redirected into downloading a malicious file disguised as their “processed” output.

This threat connects directly with the wider deepfake cybersecurity risks guide, especially as AI content becomes harder to verify.

Key takeaways inside the story

  • Fake AI platforms are being used to distribute a new infostealer called Noodlophile Stealer 
  • The malware is designed to steal browser credentials, cookies, and crypto wallet data 

  • Some infections also deploy XWorm, giving attackers deeper remote control

How the fake AI video platforms lure users

What makes this campaign dangerous is how realistic the websites look. Instead of classic phishing emails or cracked software downloads, attackers created convincing AI-themed platforms that mimic legitimate content generation services. 

One social media post reportedly crossed 62,000 views, showing how quickly these scams can spread when creators and small businesses look for free AI video tools. 

Users are typically baited with promises such as:

  • AI-powered video transformation
  • AI image editing or enhancement
  • Fast video generation “for free”

After uploading media, the victim is prompted to download the “generated video,” which is where the real attack begins.


How do fake AI video sites infect your computer
They trick users into downloading a file that looks like an AI-generated video output, but the download is actually a malware loader hidden inside a ZIP archive.

What gets downloaded and why it looks harmless

The malicious download is often delivered as a ZIP file named VideoDreamAI.zip. Inside it is an executable designed to look like a video file, using a misleading name such as:
Video Dream MachineAI.mp4.exe

This filename is intentionally crafted to confuse users, because it appears to be an MP4 video at first glance. In reality, it is a Windows executable that launches the infection chain.
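The double-extension trick is easy to check for programmatically. The sketch below is illustrative (not part of the researchers' tooling) and assumes a simple allowlist of media extensions and a blocklist of executable ones:

```python
from pathlib import Path

# Extensions Windows will actually execute
EXECUTABLE_EXTS = {".exe", ".scr", ".com", ".bat", ".cmd", ".pif"}
# Extensions attackers commonly fake (media and documents)
DECOY_EXTS = {".mp4", ".avi", ".mov", ".pdf", ".docx", ".jpg", ".png"}

def is_deceptive_name(filename: str) -> bool:
    """Flag names like 'Video Dream MachineAI.mp4.exe' where a
    media/document extension is immediately followed by an executable one."""
    suffixes = [s.lower() for s in Path(filename).suffixes]
    return (
        len(suffixes) >= 2
        and suffixes[-1] in EXECUTABLE_EXTS
        and suffixes[-2] in DECOY_EXTS
    )

print(is_deceptive_name("Video Dream MachineAI.mp4.exe"))  # True
print(is_deceptive_name("holiday.mp4"))                    # False
```

Because Windows hides known extensions by default, the victim only sees the `.mp4` part; a check like this makes the trailing `.exe` impossible to miss.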

Researchers noted that the file is a 32-bit C++ application built from a repurposed copy of CapCut (version 445.0) and signed with a fraudulent certificate to help evade detection.


What is Noodlophile Stealer
Noodlophile Stealer is a newly documented information-stealing malware that targets browser credentials and cryptocurrency wallet data, and may also deploy XWorm for remote access.

The attack chain explained in simple steps

Once the fake “video output” is executed, the infection becomes multi-stage and heavily obfuscated.

Researchers reported the ZIP contains a hidden folder named 5.0.0.1886 that stores key components used during execution. 

The main components include:

  • CapCut.exe: a large C++ wrapper that loads malicious .NET code in-memory
  • AICore.dll: used for executing external commands
  • Document.pdf: a Base64-encoded and password-protected RAR archive disguised as a PDF
  • Document.docx: actually a batch script disguised as a Word document
  • meta: a WinRAR utility renamed during execution
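A common thread in these components is that the file extension lies about the contents. One hedged defensive sketch (the signature table and mapping here are illustrative, not a complete format database) is to compare a file's leading "magic bytes" against its claimed extension:

```python
# Leading "magic byte" signatures for a few common formats
MAGIC = {
    b"%PDF": "pdf",
    b"MZ": "exe",          # Windows executable
    b"PK\x03\x04": "zip",  # also the container for docx/xlsx
    b"Rar!": "rar",
}

def sniff_type(data: bytes) -> str:
    """Return the format implied by the leading bytes, or 'unknown'."""
    for sig, kind in MAGIC.items():
        if data.startswith(sig):
            return kind
    return "unknown"

def extension_mismatch(filename: str, data: bytes) -> bool:
    """True when the claimed extension disagrees with the magic bytes."""
    claimed = filename.rsplit(".", 1)[-1].lower()
    actual = sniff_type(data)
    if actual == "unknown":
        return False  # can't judge without a recognized signature
    expected = {"pdf": "pdf", "exe": "exe", "zip": "zip",
                "rar": "rar", "docx": "zip"}
    return expected.get(claimed) != actual

# A "PDF" that is really a Windows executable sniffs as 'exe':
print(extension_mismatch("Document.pdf", b"MZ\x90\x00..."))  # True
```

In this campaign the "Document.pdf" would fail such a check, since its real content is Base64 text rather than a `%PDF` header.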

The reported infection flow works like this:

1. CapCut.exe launches and invokes a loader component
2. The loader checks connectivity by pinging google.com
3. It renames the disguised files and triggers the batch script
4. The script decodes and extracts the hidden archive and sets persistence via the Windows Registry
5. A Python payload is downloaded to deploy Noodlophile and, in some cases, XWorm

[Image: a laptop screen showing a fraudulent AI video site, with malware and warning icons illustrating the 2025 campaign]

What the malware steals and where it sends the data

Noodlophile Stealer is built to harvest high-value information, including:

  • Browser credentials
  • Saved passwords and autofill data
  • Cookies and session tokens
  • Cryptocurrency wallet data 

Researchers also reported that stolen data can be sent through a Telegram bot, allowing attackers to exfiltrate information quietly without traditional command-and-control infrastructure. 
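One defensive implication is that outbound connections to Telegram's bot API from unexpected processes are worth flagging. The sketch below assumes you already have a log of (process, hostname) lookups; the host and process lists are illustrative assumptions, not an official indicator feed:

```python
# Hosts associated with Telegram bot exfiltration (illustrative)
SUSPICIOUS_HOSTS = {"api.telegram.org"}
# Processes expected to contact Telegram legitimately (assumption)
ALLOWED_PROCESSES = {"telegram.exe"}

def flag_exfil_candidates(dns_log: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return (process, host) pairs where an unexpected process
    resolved a Telegram API host."""
    return [
        (proc, host)
        for proc, host in dns_log
        if host in SUSPICIOUS_HOSTS and proc.lower() not in ALLOWED_PROCESSES
    ]

log = [
    ("chrome.exe", "example.com"),
    ("CapCut.exe", "api.telegram.org"),   # loader name seen in this campaign
    ("telegram.exe", "api.telegram.org"), # legitimate client traffic
]
print(flag_exfil_candidates(log))  # [('CapCut.exe', 'api.telegram.org')]
```

Because the exfiltration rides on a legitimate service, blocking by IP reputation alone is unreliable; correlating host with process is the more useful signal.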

Open-source intelligence investigations linked the malware to underground marketplaces and malware-as-a-service distribution models, with indicators suggesting the actor may be Vietnamese based on language and social patterns observed in the campaign. 


Why is this campaign dangerous for creators and small businesses
It targets users actively searching for productivity tools, and once credentials or cookies are stolen, attackers can access email accounts, cloud platforms, and financial assets.

What to do if you downloaded the file

If you clicked and ran the downloaded “video output,” treat it as a serious incident even if nothing obvious happened.

Immediate actions that reduce damage:

  • Disconnect your device from the internet
  • Run a full security scan using trusted endpoint protection
  • Change passwords for email and important accounts from a clean device
  • Log out of active browser sessions and regenerate security keys where possible
  • Monitor crypto wallets and financial accounts closely

Stopping malware campaigns also requires awareness and rapid response, and the cybersecurity skills gap makes both harder for many teams.

How to avoid fake AI video malware in 2026

The key lesson from this campaign is simple: attackers are now packaging malware as “AI-generated output,” not only as traditional apps.

In 2026, the safest approach is to tighten your download hygiene and stop trusting AI tools found through random social links.

Practical 2026 prevention practices include:

  • Avoid downloading “output” files from unknown AI sites, especially ZIP archives
  • Always verify the platform domain through trusted sources, not social media comments 
  • Enable file extension visibility on Windows so .mp4.exe is obvious
  • Block executable downloads on personal systems used for work content creation
  • Use sandboxing or a separate virtual machine if you must test new tools
  • Keep your browser, OS, and security tools updated to catch multi-stage loaders 

Key takeaway inside the content
If a website makes you download a ZIP file to “receive your AI video,” treat that as a red flag, because real AI platforms rarely deliver output through disguised executables.

Final thoughts

The fake AI video malware 2025 campaign proves how quickly cybersecurity threats evolve alongside new technology trends. By exploiting public excitement around AI creation tools, attackers are reaching a wider and less suspicious audience, making these scams especially dangerous.

For creators and businesses, the best defense is awareness, strict download discipline, and verifying platforms before you upload personal files or install anything. As AI adoption grows into 2026, scams like these are likely to become more frequent, making proactive protection a necessity, not an option.