
By

Cornelius Eichhorn

The Five Ways Hackers Utilize AI Like ChatGPT To Attack Businesses

AI is revolutionizing cybercrime - from deepfake scams to polymorphic malware. Learn how hackers weaponize ChatGPT and how to protect your business from AI-powered attacks.

Key Takeaways (TL;DR)

  • Software like ChatGPT can assist hackers with phishing messages and social engineering attacks

  • AI and ML help with the automation of cyber attacks

  • Hackers might use AI to generate or optimize malware

  • The game of password cracking just got leveled up by AI

  • Maybe the most dangerous of all, deepfake systems can create face and voice clones 

Welcome to the Dark Side of AI

AI can be a great tool, yet in the wrong hands it can also be a powerful weapon. While you draft your shopping list with ChatGPT before heading to the office, where you use AI to automate a load of emails and brainstorming sessions, somewhere a hacker is leveraging the same technology for their own purposes. Half an hour later, you receive twenty emails, two of them phishing attempts. And before you know it, the boss on the other end of the phone is a deepfake. AI has quickly become one of the easiest tools for hackers to adopt. Keep reading to find out how cybercriminals make AI their own - and bypass common cybersecurity systems.


Background Knowledge

Machine Learning (ML)

Artificial Intelligence (AI) systems utilize Machine Learning (ML). The more data you give them, and the more opportunities they get to try out different approaches, the more they advance. Without ever being explicitly programmed to do so, these systems improve - and can become more dangerous - over time.

Generative AI

Generative AI employs ML to create content, such as texts, videos, or images.

Large Language Model (LLM)

An LLM, or large language model, is an AI system trained on massive amounts of text to understand and generate human language. It can perform tasks like language translation, text analysis, and content creation. LLMs are based on deep learning techniques.

Social Engineering (Phishing)

Broadly speaking, social engineering refers to manipulating people into performing a certain act. Phishing, for example, refers to emails and texts that ask you to click on a malicious link, provide sensitive information, or carry out a transaction. Deepfakes, which will be explained later on, are also a type of social engineering.


The Ways That Hackers Use AI


Automated Attacks

Hackers increasingly use AI to automate cyberattacks, making them faster and more precise. AI-driven systems can scan networks and identify vulnerabilities more quickly than humans. Once these weaknesses are detected, AI tools can launch automated attacks such as brute-force password cracking (see below) or Distributed Denial-of-Service (DDoS) attacks, which overwhelm a system with traffic in seconds until it becomes inaccessible to legitimate users. AI can even adapt its tactics mid-attack, reacting in real time to security measures. While automation increases the scale and efficiency of attacks, it also makes detection harder, creating a more challenging landscape for cybersecurity professionals.


While AI systems cannot and will not replace hackers - at least as of today - they can help skilled cybercriminals work more efficiently and avoid human mistakes.
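The reconnaissance step described above is easy to automate. The sketch below is a minimal, illustrative Python example of an automated port scan - the kind of loop an attacker would script (and a defender can use to audit their own systems). The function name `scan_ports` is our own illustration, not a real tool; only ever run scans against systems you own or are authorized to test.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection is accepted,
            # i.e. when something is listening on that port
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: check a few well-known ports on your own machine
# scan_ports("127.0.0.1", [22, 80, 443])
```

A real attack framework wraps this loop in threading and feeds the results into exploit selection - which is exactly the automation step AI accelerates.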


Phishing/Social Engineering Emails

Read this example, fabricated by ChatGPT in 3 seconds:

A professional phishing email creating urgency

Now, imagine you were Mr. Morgan - would you click the link? Judging by how common phishing scams are, many of us might. With a hectic day and hundreds of emails in our inbox, it can be hard to scrutinize every message diligently. It’s much easier to simply click the link. Hackers from anywhere in the world, with any level of education, can get ChatGPT to write something like that. The easiest signs for detecting a phishing email - grammar and spelling mistakes - can be avoided by using AI. Luckily, phishing attacks are something you can actively fight against. By being cautious and understanding the signs of a typical phishing email, you can avoid falling into the trap. Beware of mismatched URLs, generic greetings, urgency tactics, and unexpected attachments to stay safe from phishing.
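The warning signs listed above can even be turned into simple, automatable checks. The sketch below is a toy heuristic filter, not a production spam gate; the function name `phishing_signals` and the word list are our own illustrative assumptions. It flags urgency language, generic greetings, and links whose visible text points to a different address than the actual URL:

```python
import re

# Illustrative list of urgency words commonly seen in phishing emails
URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify", "expires"}

def phishing_signals(subject, body, links):
    """links: list of (display_text, actual_url) pairs found in the email.

    Returns a list of human-readable warning signs, empty if none found.
    """
    signals = []
    text = f"{subject} {body}".lower()
    if any(word in text for word in URGENCY_WORDS):
        signals.append("urgency language")
    if re.search(r"dear (customer|user|sir|madam)", text):
        signals.append("generic greeting")
    for display, url in links:
        # Visible link text names one address, but the link goes elsewhere
        if display.startswith("http") and not url.startswith(display):
            signals.append(f"mismatched URL: {display} -> {url}")
    return signals
```

Real mail filters use far richer signals, but the principle is the same: the red flags you check by eye can also be checked by code.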


Malware

Malware is the general term for software designed to harm your computer system, whether it's ransomware or some other form of dangerous software.

AI can help create malware that slips past even strict security systems. One particularly dangerous type of malware that hackers can create with ChatGPT is known as “polymorphic malware,” which constantly changes its form to avoid detection. It alters its code, making it hard for antivirus programs to recognize it. In February 2023, CyberArk researchers reported that “[...] by continuously asking ChatGPT and rendering a new piece of code every time, users can create highly evasive polymorphic malware. Polymorphic viruses can be extremely dangerous.” (TechTarget)

As you will see if you confront ChatGPT with a prompt like “write me a piece of malware,” there are certain restrictions that make it harder to instrumentalize the tool in a malicious way. Yet it is still possible.
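Why does constantly rewritten code defeat antivirus software? Classic scanners match files against known signatures, often derived from hashes of the code. Here is a tiny, entirely benign illustration of the principle: two byte strings that would behave identically differ by a single comment, yet their hashes - and therefore any hash-based signatures - are completely different.

```python
import hashlib

# Two functionally identical snippets; only a comment differs.
# A hash-based signature scanner sees two unrelated files.
variant_a = b"print('hello')  # v1"
variant_b = b"print('hello')  # v2"

def signature(payload: bytes) -> str:
    """Return a SHA-256 hex digest, standing in for an AV signature."""
    return hashlib.sha256(payload).hexdigest()

sig_a = signature(variant_a)
sig_b = signature(variant_b)
# sig_a and sig_b share no resemblance despite identical behavior
```

This is the core trick polymorphic malware exploits at scale: regenerate the code on every infection, and every copy carries a brand-new signature.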



Password Cracking

Another method that might even seem old-fashioned nowadays is password cracking. Hackers use the same login and password-reset portals you use and try out different combinations of letters and numbers to get into your accounts. This practice is nothing new, but now AI can help them do it on a much larger scale. By leveraging input from the hacker and learning common password patterns, AI can be repurposed into a tool to crack these codes. Since it is constantly learning and improving through ML, this is one of the easier ways for AI to assist a hacker.
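At its core, password cracking is guess-and-check, usually against a stolen password hash rather than a live login form. The toy dictionary-attack sketch below (the function name `crack` and the wordlist are our own illustration) shows the basic loop; what AI adds in practice is smarter candidate generation on top of it, guessing likely passwords first instead of working through every combination.

```python
import hashlib

def crack(target_hash, wordlist):
    """Try each candidate password; return the one whose SHA-256 hash
    matches `target_hash`, or None if the wordlist is exhausted."""
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

# A hypothetical leaked hash and a tiny guess list
leaked = hashlib.sha256(b"sunshine123").hexdigest()
guesses = ["password", "letmein", "sunshine123", "qwerty"]
# crack(leaked, guesses) recovers "sunshine123"
```

This is also why long, random, unique passwords matter: they simply never appear in any guess list, no matter how cleverly it was generated.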


A man holding a mask in front of a camera

Deepfakes

Finally, there are deepfakes: what was once a fictional plot device in action and spy thrillers is now a reality we must all deal with. With only one to two minutes of audio and a handful of publicly available images, AI makes it possible to create believable clones of faces, voices, and movements.

Our classic example for this case is a video conference or a call with your boss. They are someone you trust, maybe even someone you want to impress. And of course they are real - aren’t they?


Deepfakes, as mentioned, make it possible for hackers to imitate people you work with - whether during a Zoom conference or a phone call. At the same time, these cloning technologies open up the possibility of bypassing biometric security systems. With those uses, deepfakes have reportedly become the second most common kind of security incident in North America. It is immensely important to educate yourself and others on the dangers of deepfakes. If you want to learn more about this type of cybercrime, visit our friends over at www.breacher.ai. Book a free demo today to see how deepfakes work and how they can be used against your own business.


Conclusion

AI, like most technologies, is a tool that can be used for good or bad. While the number of social engineering incidents such as phishing and deepfake scams rises every year, AI also offers hackers assistance with malware, password cracking, and automated attacks. With the multiple ways that cybercriminals may use ChatGPT against your company, is your business ready to face those challenges?
