ChatGPT scams are the new crypto scams, Meta warns | Engadget

As the buzz around ChatGPT and other generative AIs increases, so does the interest of scammers in the technology. In a new report published by Meta, the company says it has seen a sharp increase in malware masquerading as ChatGPT and similar artificial intelligence software.

In a statement, the company said that since March 2023 alone, its researchers have discovered “ten malware families that use ChatGPT and other similar themes to compromise accounts on the Internet” and that it has blocked more than 1,000 malicious links on its platform. According to Meta, the scams often involve mobile apps or browser extensions posing as ChatGPT tools. And while in some cases the tools offer some ChatGPT-like functionality, their real purpose is to steal users’ account credentials.

In a call with reporters, Meta chief information security officer Guy Rosen said that the scammers behind these exploits are taking advantage of the surging interest in generative AI. “As an industry, we have seen this in other topics that are popular at the time, such as cryptocurrency scams fueled by the immense interest in digital currency,” Rosen said. “So from a bad actor’s perspective, ChatGPT is the new crypto.”

Meta noted that people who run businesses on Facebook or use the platform for work have been particular targets. Scammers often go after users’ personal accounts to gain access to a connected business page or ad account, which is more likely to have a credit card linked to it.

To combat this, Meta said it plans to introduce a new type of account for businesses called “Meta Work” accounts. These accounts will allow users to access Facebook Business Manager tools without a personal Facebook account. “This will help keep business accounts more secure in cases where attackers start with a personal account compromise,” the company said in a statement. Meta said it will begin a “limited” trial of the new work accounts this year and will expand it “over time.”

Additionally, Meta is rolling out a new tool that will help companies detect and remove malware. The tool “guides people step-by-step on how to identify and remove malware, including using third-party antivirus tools” to help prevent businesses from repeatedly losing account access.

Meta researchers are not the first to warn about fake ChatGPT tools leading to hacked accounts. Researchers recently warned of a Chrome extension posing as ChatGPT software that led to the hacking of several Facebook accounts. The exploit became known as the “Lily Collins” hack because the names on the victims’ accounts were changed to “Lily Collins.”

During the call with journalists, Meta’s head of security policy, Nathaniel Gleicher, said that these attacks also often target people connected to business accounts. “What they will want to do is lock down that personal account to cut off the legitimate user’s access and prevent them from logging back in,” he said. “One of the tactics we are seeing now is that they will take the personal account and rename it to have the name of a prominent celebrity in the hopes that that account will be removed.” He added that the new work accounts would help prevent similar attacks in the future.
