AI's ability to rapidly identify and exploit vulnerabilities has made it a potent tool in the hands of malicious actors.
One of the most prevalent tactics employed by these cybercriminals is the use of AI to crack passwords. A landmark event occurred in July 2024 when the largest-ever compilation of passwords was leaked online, comprising billions of credentials. A study conducted by Kaspersky revealed a pervasive issue of compromised passwords, affecting both startups and industry leaders alike.
Alexey Antonov, Head of Data Science at Kaspersky, noted that a significant portion of these passwords was insufficiently complex. Although stored in hashed form, many could be recovered within an hour using specialized cracking algorithms.
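To illustrate why weak, unsalted hashes fall so quickly, the minimal Python sketch below checks leaked hashes against a small candidate wordlist; the MD5 hashes and the wordlist are illustrative assumptions, not data from the actual leak.

```python
import hashlib

# Hypothetical leaked, unsalted MD5 hashes (illustrative values only).
leaked_hashes = {
    "5f4dcc3b5aa765d61d8327deb882cf99",  # "password"
    "e10adc3949ba59abbe56e057f20f883e",  # "123456"
}

# A tiny stand-in for the multi-gigabyte wordlists real cracking tools use.
candidate_passwords = ["password", "123456", "qwerty", "letmein"]

def crack(hashes: set[str], candidates: list[str]) -> dict[str, str]:
    """Return a mapping of hash -> recovered password for every candidate that matches."""
    recovered = {}
    for candidate in candidates:
        digest = hashlib.md5(candidate.encode()).hexdigest()
        if digest in hashes:
            recovered[digest] = candidate
    return recovered

print(crack(leaked_hashes, candidate_passwords))
# On a GPU with a real wordlist, this lookup runs billions of times per second,
# which is why simple passwords rarely survive an hour.
```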
Cybersecurity experts caution that, with AI, malicious actors can craft deceptive content spanning text, imagery, audio, and video to orchestrate non-technical, social-engineering attacks.
Large language models such as ChatGPT are being weaponized to generate highly sophisticated fraud scenarios and messages, and can even mimic the communication style of potential victims, making fraudulent activity increasingly difficult to detect.
Moreover, cybercriminals are deploying AI technologies to expedite the creation of novel malware, generating intricate and multifaceted attack strategies.
A 2024 cybersecurity report by Bkav Security Co. underscored that the primary threats posed by AI today are phishing attacks and Advanced Persistent Threat (APT) attacks, which are becoming increasingly complex, especially when combined with deepfakes and ChatGPT. AI's ability to gather and analyze user data facilitates the creation of highly targeted phishing campaigns, making them more difficult to detect.
An AI-fueled cyberattack typically progresses through the following stages (a brief defensive-mapping sketch follows the list):
- Data Collection: Compile intelligence from social platforms, open-source repositories, and comprehensive databases.
- AI Training: Configure AI models to identify vulnerabilities and formulate attack methodologies.
- Attack Generation: Construct fraudulent emails, malicious software, or deepfake content.
- Attack Deployment: Disseminate fraudulent campaigns, implement malware, or execute Distributed Denial of Service (DDoS) attacks.
- Adaptive Response: Dynamically adjust tactics to circumvent security protocols.
- Exploitation: Extract sensitive data or systematically disrupt critical infrastructure.
- Feedback and Improvement: Analyze outcomes to refine and optimize future attack strategies.
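As a rough illustration of how defenders reason about this chain, the Python sketch below models the stages as an enumeration and pairs each with the kind of telemetry that can surface it; the stage-to-telemetry mapping is an assumption for illustration, not a standard taxonomy.

```python
from enum import Enum

class AttackStage(Enum):
    DATA_COLLECTION = "Data Collection"
    AI_TRAINING = "AI Training"
    ATTACK_GENERATION = "Attack Generation"
    ATTACK_DEPLOYMENT = "Attack Deployment"
    ADAPTIVE_RESPONSE = "Adaptive Response"
    EXPLOITATION = "Exploitation"
    FEEDBACK = "Feedback and Improvement"

# Hypothetical mapping of each stage to telemetry defenders can watch for it.
DEFENSIVE_TELEMETRY = {
    AttackStage.DATA_COLLECTION: "scraping and reconnaissance patterns in web logs",
    AttackStage.AI_TRAINING: "little direct visibility; mitigated by limiting exposed data",
    AttackStage.ATTACK_GENERATION: "threat-intel feeds, newly registered phishing domains",
    AttackStage.ATTACK_DEPLOYMENT: "mail gateway, WAF, and DDoS-protection alerts",
    AttackStage.ADAPTIVE_RESPONSE: "repeated probes that change after each block",
    AttackStage.EXPLOITATION: "data-exfiltration volumes, privilege-escalation events",
    AttackStage.FEEDBACK: "recurring campaigns with refined lures or payloads",
}

for stage in AttackStage:
    print(f"{stage.value}: {DEFENSIVE_TELEMETRY[stage]}")
```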
Technical Director Vu Ngoc Son of the National Cybersecurity Technology Corporation (NCS) observes that cybercriminals are increasingly weaponizing AI for destructive purposes. The rapid generation of hyper-realistic fraudulent emails, even mimicking the communication styles of prominent figures, represents an escalating threat landscape.
Cybercriminals are also leveraging AI to rapidly generate new malicious code without the traditionally time-intensive programming process. The result is continuously morphing malware that complicates sample collection and the development of anti-malware tools, making detection substantially harder.
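A minimal sketch of why constant mutation frustrates signature-based tools: two functionally identical (and harmless) scripts that differ only in a variable name yield entirely different file hashes, so a signature keyed to one sample never matches the next variant. The snippet and hashes below are illustrative assumptions, not real malware samples.

```python
import hashlib

# Two functionally identical, harmless script variants that differ only in a
# variable name -- the kind of trivial mutation automated tooling can apply endlessly.
variant_a = b"total = sum(range(10))\nprint(total)\n"
variant_b = b"result = sum(range(10))\nprint(result)\n"

# Signature-based detection often keys on a file hash; a one-token change breaks the match.
print(hashlib.sha256(variant_a).hexdigest())
print(hashlib.sha256(variant_b).hexdigest())
print(hashlib.sha256(variant_a).hexdigest() == hashlib.sha256(variant_b).hexdigest())  # False
```

This is one reason defenders increasingly lean on behavioral analysis rather than static signatures alone.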
Attackers can also exploit software vulnerabilities, using AI to generate customized exploit code and analyze the responses it triggers in order to refine the attack iteratively. The same approach could potentially be used to uncover zero-day vulnerabilities in widely used commercial software.
Statistics from Imperva reveal that AI-powered cyberattacks are on the rise, with retail websites facing over 500,000 such attacks daily. Cybercriminals leverage AI tools like ChatGPT and Gemini, alongside specialized bots designed to harvest website data for large language model (LLM) training, to launch sophisticated attacks, including business logic abuse, DDoS attacks, and API exploitation. As AI becomes more accessible, retailers face a growing threat landscape, necessitating robust security measures to protect against these advanced attacks.
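One basic building block in defending against such bot-driven abuse is per-client rate limiting. The sketch below is a minimal token-bucket limiter with illustrative thresholds; the class and parameter names are assumptions, not any vendor's actual configuration.

```python
import time

class TokenBucket:
    """Simple per-client token-bucket rate limiter (illustrative, in-memory only)."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec      # tokens added per second
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Example: allow each client 5 requests/second with a burst of 10.
buckets: dict[str, TokenBucket] = {}

def handle_request(client_ip: str) -> str:
    bucket = buckets.setdefault(client_ip, TokenBucket(rate_per_sec=5, burst=10))
    return "200 OK" if bucket.allow() else "429 Too Many Requests"

# A scraper or DDoS bot hammering one endpoint quickly exhausts its bucket.
print([handle_request("203.0.113.7") for _ in range(12)].count("429 Too Many Requests"))
```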
The Vietnam Information Security Association (VNISA) and various security organizations have developed comprehensive AI-powered countermeasures, adopting the principle of countering AI-driven attacks with AI-enabled defenses.
Contemporary AI solutions are deployed to harden network systems, spanning intrusion detection, behavioral analysis, malware detection, and automated response; these mechanisms learn continuously, strengthening overall defenses against evolving cyber threats.
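As a minimal sketch of the behavioral-analysis idea, the example below trains an unsupervised anomaly detector on simple per-session traffic features and flags outliers. The features, thresholds, and the choice of scikit-learn's IsolationForest are assumptions for illustration, not a description of any specific product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Illustrative per-session features: [requests per minute, failed logins, MB transferred].
# Normal sessions cluster around modest values.
normal_sessions = np.column_stack([
    rng.normal(20, 5, 500),    # requests/min
    rng.poisson(0.2, 500),     # failed logins
    rng.normal(2, 0.5, 500),   # MB transferred
])

# Train an unsupervised detector on (presumed) baseline traffic.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# New activity: one typical session and one bot-like session (high rate, many failures).
new_sessions = np.array([
    [22, 0, 2.1],      # resembles the baseline
    [400, 30, 50.0],   # aggressive scraping / credential-stuffing pattern
])

# predict() returns 1 for inliers and -1 for anomalies to escalate for review.
print(detector.predict(new_sessions))   # expected: [ 1 -1]
```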