New research shows that threat actors are exploiting the increasingly popular ChatGPT to write usable malware and share their results on the dark web. The study, based on recent findings from Cybernews, profiled three distinct cases in which less experienced cybercriminals, guided by the AI-based chatbot, could easily recreate workable malware strains capable of infiltrating a network.

What Were the Threat Actors Able to Achieve?

These malware strains can encrypt critical data, steal files and transfer them to a remote server, phish a system for user passwords, and even encrypt an entire network in order to demand a ransom.

Since the New Year, more advanced threat actors have also been spotted posting their ChatGPT query results on several underground community forums on the dark web. According to researchers, it is only a matter of time before these malware strains are deployed in the wild.

Guided by AI

The publicly accessible AI chatbot offered cybercriminals step-by-step instructions on how to achieve their goal of replicating malware strains. The Cybernews research team uncovered earlier this week that ChatGPT would, upon request, provide step-by-step instructions on numerous ways to successfully hack a website. The virtual training platform Hack the Box served as the site for the ethically conducted experiment. The team completed the hack in under 45 minutes using the AI's supplied instructions.

The chatbot also provided hackers with instructions on how to create a dark web marketplace for conducting illegal cyber activities, such as trading and selling stolen bank accounts and files, as well as other fraudulent schemes, including API-based cryptocurrency payment capabilities.

ChatGPT’s Developers Speak Up

ChatGPT, short for Generative Pre-trained Transformer, was launched in November 2022 by the artificial intelligence research and deployment company OpenAI. A frenzy of social media news and followers ensued after its release. To date, more than a million users have registered to test out the AI chatbot.

OpenAI addressed the problem on its website, stating that the ChatGPT model is trained to reject inappropriate requests. However, cybersecurity researchers had no difficulty obtaining the information.

According to Cybernews, when asked directly about its own policy on the matter, ChatGPT provided the following statement:

Threat actors may use artificial intelligence and machine learning to carry out their malicious activities…Open AI is not responsible for any abuse of its technology by third parties.

ChatGPT Bot (Source: Cybernews)

The company is reportedly on track to reach $1 billion in revenue by 2024.


