AI has come a long way since its inception, and today it has the potential to both transform industries and improve human lives. However, this progress comes with certain risks, including the possibility of AI-augmented cyberattacks. In 2018, a report on malicious AI warned that increasingly capable AI systems could empower new forms of automated social engineering attacks that even experts would find difficult to detect. That threat is now imminent, according to a recent paper by Julian Hazell, Large Language Models and Spear Phishing. Let's review the paper and discuss the risks and implications of AI-augmented cyberattacks, particularly in the context of spear phishing campaigns.

Large Language Models Can Be Used To Effectively Scale Spear Phishing Campaigns
Recent progress in artificial intelligence (AI), particularly in the domain of large language models (LLMs), has resulted in powerful and versatile dual-use systems. Indeed, cognition can be put towards a wide variety of tasks, some of which can result in harm. This study investigates how LLMs can b…

The Proliferation of AI-Assisted Spear Phishing Campaigns

The recent developments in language modeling have led to the creation of AI systems that can convincingly mimic human language, enabling cybercriminals to launch sophisticated social engineering attacks on an unprecedented scale.

The Impact of Language Modeling Advances

Recent advancements in language modeling have resulted in widely accessible AI systems that can approach and surpass human-level performance in numerous natural language tasks. These advances have made automated social engineering attacks not only technically feasible but also cost-effective. With more sophisticated AI systems, cybercriminals can now create highly convincing and personalized spear phishing messages at scale.

The Rise of Novel Social Engineering Attacks

The widespread adoption of LLM-powered chatbots, like ChatGPT, has contributed to the increase in novel social engineering attacks. Researchers at Darktrace observed a 135% increase in such attacks among thousands of active customers between January and February 2023. This rise in attacks is attributed to the ease with which cybercriminals can now execute sophisticated social engineering campaigns at scale.

Low Costs and High Speeds: A Dangerous Combination

Advanced LLMs can generate human-like language, enabling cybercriminals to create personalized spear phishing messages at minimal cost. Because these emails are so cheap to generate, LLMs have the potential to cause significant harm when used to scale spear phishing campaigns. For instance, a hacker could use Anthropic's most capable model, Claude, to generate a batch of 1,000 spear phishing emails for just $10 USD in under two hours.
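To make the economics concrete, here is a minimal back-of-the-envelope sketch in Python. The token counts and per-token prices are illustrative assumptions, not figures from the paper; substitute current provider pricing to reproduce an estimate of your own.

```python
# Back-of-the-envelope cost estimate for LLM-generated emails.
# All numbers below are illustrative assumptions, not quoted prices.

PROMPT_TOKENS_PER_EMAIL = 500   # assumed: target bio + instructions fed to the model
OUTPUT_TOKENS_PER_EMAIL = 300   # assumed: one short personalized email
PRICE_PER_1K_PROMPT = 0.01      # assumed USD per 1,000 prompt tokens
PRICE_PER_1K_OUTPUT = 0.03      # assumed USD per 1,000 output tokens

def cost_per_email() -> float:
    """Marginal cost of one generated email under the assumptions above."""
    return (PROMPT_TOKENS_PER_EMAIL / 1000) * PRICE_PER_1K_PROMPT \
         + (OUTPUT_TOKENS_PER_EMAIL / 1000) * PRICE_PER_1K_OUTPUT

if __name__ == "__main__":
    per_email = cost_per_email()
    print(f"Cost per email:        ${per_email:.4f}")        # ~$0.014
    print(f"Cost for 1,000 emails: ${per_email * 1000:.2f}")  # ~$14
```

Even if the assumed prices are off by a factor of two in either direction, the order of magnitude is the point: a fully personalized campaign costs cents per target, not hours of human labor.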

The Challenge of Detecting AI-Augmented Attacks

As AI systems grow more capable of modeling genuine human interaction, they can engage in social mimicry that can be difficult for even experts to detect. This poses a significant challenge for security researchers and professionals who must now contend with increasingly sophisticated AI-augmented cyberattacks that can easily deceive targets and bypass traditional security measures.

The Growing Sophistication of LLMs in Spear Phishing

The latest LLMs from OpenAI and Anthropic have shown impressive advancements in their ability to create sophisticated spear phishing attacks. This increased sophistication means that LLM-generated messages can be much more convincing and harder to detect than those created by earlier models, making it more likely that targets will fall for these deceptions.

The Threat to Individuals and Organizations

The combination of advanced LLMs' low costs, high speeds, and growing sophistication presents a significant threat to both individuals and organizations. As spear phishing attacks become more personalized and convincing, unsuspecting targets may be more likely to engage with malicious content, potentially leading to significant financial and reputational damage.

The Three Key Advantages of LLMs for Cybercriminals

  1. Cognitive workload reduction: LLMs can generate convincing spear phishing emails that sound human-generated, without much involvement from the attacker. These models do not make spelling mistakes, can run 24/7 without showing fatigue, and can effortlessly comb through large quantities of unstructured data during the reconnaissance phase.
  2. Lower financial costs: LLMs significantly lower the marginal cost of each spear phishing attempt in terms of financial resources. An email can be generated for less than a cent with GPT-3.5, and for a few cents with more advanced models like GPT-4. As the price of cognition decreases, this cost will become even more negligible.
  3. Reduced skill requirements: Even relatively low-skilled attackers can use LLMs to generate convincing phishing emails and malware. LLMs can help handle the labor-intensive parts of spear phishing campaigns, allowing attackers to focus on higher-level planning.

The Evolution of AI and Spear Phishing

Over the years, AI systems have grown more advanced and sophisticated, and this progress has substantiated the prediction made in the 2018 malicious AI report. Today's large language models (LLMs) can reduce the labor-intensive steps in spear phishing campaigns, such as identifying targets, crafting personalized messages, and designing malware. This is a stark contrast to just five years ago, when creating AI systems for spear phishing required significant technical ability and offered only limited performance improvements.

LLMs and Spear Phishing: A Perfect Storm

Human-like language generation: Advanced LLMs can mimic human writing with uncanny fidelity, making spear phishing messages almost indistinguishable from genuine communications. This poses a serious threat to digital security, as even the most vigilant individuals can fall prey to these convincing attacks.

Personalization of spear phishing messages: LLMs can be used to create tailored messages for each target, increasing the likelihood of victims taking the bait. With personalized attacks, hackers can craft emails that play on the fears or desires of potential victims, making their campaigns all the more effective.

The low costs and high potential for harm: The cost of generating spear phishing emails using advanced LLMs is minimal, allowing hackers to launch large-scale attacks at a fraction of the price. This affordability, combined with the increased effectiveness of LLM-generated attacks, poses a significant threat to individuals and organizations alike.

Spear Phishing Attack Phases

Collect: The first phase involves collecting information on the target to increase the likelihood of a successful attack. LLMs can assist here by turning unstructured biographical text about the target into the raw material for ostensibly genuine messaging.

Contact: The next stage is to generate the attack and contact the target with it. LLMs can assist cybercriminals in crafting personalized and contextually relevant spear phishing emails that manipulate human psychology and invoke authority.

Compromise: LLMs can also be used to develop malware capable of compromising victims' sensitive information. They can generate VBA macro code intended to be used maliciously, effectively lowering the barrier to entry for less sophisticated cybercriminals to launch spear phishing campaigns.

The First Phase of a Spear Phishing Attack: Collect

The importance of personalization in spear phishing attacks: Spear phishing attacks are particularly effective due to their personalized nature. By tailoring messages to specific recipients, attackers significantly increase the likelihood that the targets will open and act upon these messages [11]. Consequently, the first phase of a spear phishing campaign involves collecting information on the target to maximize the success rate of the attack.

The labor-intensive nature of collecting target information: Traditionally, gathering background information on targets has been a labor-intensive process, requiring more effort than sending generic phishing messages to a large group of recipients. However, with the advent of generative AI models, the difference in effort between phishing and spear phishing attacks has narrowed, as the marginal cost of generating targeted emails has decreased.

The impact of decreased marginal costs on the global cybersecurity landscape: The reduced per-user cost of spear phishing attacks could have significant consequences for cybersecurity worldwide [20]. Attackers who previously focused on non-scalable campaigns—requiring significant effort for personalization—needed to be highly selective when choosing their targets. As a result, they would concentrate on the most valuable individuals, typically found at the top of power, value, and wealth distributions. With the reduced per-user cost of spear phishing attacks enabled by LLMs, it becomes economically feasible for cybercriminals to target a broader range of users.
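The economic logic here can be made explicit with a small sketch. Assume an attacker only pursues targets whose expected payoff exceeds the per-target cost of the attack; the numbers below are invented, chosen only to show how a falling marginal cost widens the viable target pool.

```python
# Hypothetical illustration: how lower per-target costs expand the set of
# economically viable spear phishing targets. All numbers are invented.

# (potential payoff in USD, assumed probability of success) per target
targets = [(100_000, 0.002), (10_000, 0.002), (1_000, 0.002), (100, 0.002)]

def viable(cost_per_target: float) -> list[tuple[float, float]]:
    """Targets whose expected value exceeds the per-target attack cost."""
    return [(v, p) for v, p in targets if v * p > cost_per_target]

print(len(viable(50.0)))  # manual campaign at $50/target: 1 (only the top target)
print(len(viable(0.02)))  # LLM campaign at $0.02/target: 4 (everyone)
```

Under the manual cost regime, only the single highest-value target is worth attacking; once the marginal cost drops to LLM levels, every target in the list clears the bar.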

LLMs in the collect phase: Crafting authentic messages with unstructured data: LLMs can significantly streamline the collect phase of spear phishing campaigns by generating genuine-sounding messages based on unstructured biographical text about the target. For example, GPT-4 can be used to write a Python script that scrapes the Wikipedia page of every British MP elected in 2019. This unstructured data can then be fed into GPT-3.5 to create a biography for each MP. By generating personalized emails referencing each MP's region, political party, personal interests, and other relevant details, LLMs can effectively execute the collect phase of a spear phishing attack, laying the groundwork for a potentially successful cyber operation.
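As a rough illustration of this collect step, the sketch below fetches a public Wikipedia page extract and asks a model to condense it into a short biography. The Wikipedia query parameters are standard MediaWiki API usage; the OpenAI client call follows the current openai Python library, and the model name and prompt are assumptions rather than the paper's exact setup.

```python
import requests
from openai import OpenAI  # assumes the `openai` package is installed

WIKI_API = "https://en.wikipedia.org/w/api.php"

def fetch_extract(title: str) -> str:
    """Fetch the plain-text intro extract of a Wikipedia article."""
    params = {
        "action": "query",
        "prop": "extracts",
        "explaintext": 1,
        "exintro": 1,
        "titles": title,
        "format": "json",
    }
    pages = requests.get(WIKI_API, params=params, timeout=30).json()["query"]["pages"]
    return next(iter(pages.values())).get("extract", "")

def summarize_bio(text: str) -> str:
    """Condense unstructured page text into a short biography (assumed prompt)."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Summarize this text as a short biography:\n{text}"}],
    )
    return response.choices[0].message.content
```

On its own this is ordinary summarization of public data; the paper's point is that the same cheap distillation step, run over hundreds of profiles, is what makes personalization scale.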

The Second Phase of the Attack: Contact

Utilizing generative models for spear phishing attacks: Once the background reconnaissance is complete, the next stage is generating the attack and contacting the target. State-of-the-art LLMs, such as GPT-4, can assist cybercriminals in this process. However, these models are often trained to refuse harmful requests like "generate a spear phishing email" due to reinforcement learning from human feedback (RLHF) [30].

Bypassing model limitations with workarounds: A potential workaround for this limitation is to ask the model to suggest features that define a successful spear phishing email and then incorporate those features into the beginning of the prompt as a set of principles. By cross-referencing the model's suggestions with existing literature on spear phishing attacks, it is possible to extract key characteristics of effective spear phishing emails, such as:

  1. Personalization
  2. Contextual relevance
  3. Psychology
  4. Authority

Generating spear phishing emails at scale: After identifying the key elements of a successful spear phishing email and collecting personal information about the target, the next step is to generate the emails at scale. Combining the established principles and the target's biographical details into a single prompt allows the LLM to craft convincing spear phishing messages.

The Final Step in the Attack: Compromise

LLMs in malware development: LLMs, as previously demonstrated, can be utilized to research targets and generate personalized phishing attacks. In addition to these tasks, LLMs can also be employed to develop malware. In practice, a significant portion of malware is spread via malicious email attachments [41], which can compromise victims' sensitive information.

Assessing LLM-generated VBA macro code: Exploring GPT-4's ability to produce malicious VBA macro code reveals potential cybersecurity risks. Office documents, such as Microsoft Word files, can contain embedded macros that execute code automatically when opened. If malicious code is executed, the attacker may compromise the target's system. By posing as a "cybersecurity researcher" conducting an "educational" experiment, it is possible to prompt GPT-4 to generate a basic VBA macro that downloads a malicious payload from an external URL and runs the file on the target's computer.

The growing interest of cybercriminals in LLM capabilities: Cybercriminals have taken note of LLM capabilities, as evidenced by numerous examples of hackers discussing the potential of models like ChatGPT to assist in malware generation. Although much of this generated code is rudimentary and likely similar in sophistication to already public malware, LLMs have arguably lowered the barrier to entry for less sophisticated cybercriminals to launch spear phishing campaigns. This development further emphasizes the need for vigilance and robust security measures to counteract the threats posed by LLM-enabled cyberattacks.

Governance Challenges with LLMs

The dual-use nature of AI systems makes it difficult to create models whose cognition can be funneled only toward positive uses. Moreover, interventions focused on governing at the model level are likely to lead to unfavorable misuse-use tradeoffs, stifling use and misuse alike. Preventing all phishing attacks is an extremely lofty goal, if not an impossible one.

AI-based Cyberattacks in the Future

Cybercriminals will likely gain the ability to automate increasingly sophisticated hacking and deception campaigns with little or no human involvement. For example, scammers have already begun using AI to create convincing voice clones of individuals. As generative AI systems become increasingly capable across a wide variety of communication channels, future research will be needed to identify, assess, and mitigate potential risks and novel attack vectors that may emerge in these domains.

Potential Solutions and Mitigating Harm

Proposed solutions should focus on mitigating harm rather than attempting to eradicate phishing and other forms of cybercrime completely. Policymakers and AI developers should explore new methods to curb AI misuse and continuously adapt to the ever-evolving cybersecurity landscape. Collaboration between the public and private sectors, as well as ongoing research, will be crucial to addressing the emerging risks and attack vectors posed by AI-assisted cyberattacks.

Collaboration Between Stakeholders

A robust response to the risks posed by AI-assisted cyberattacks requires cooperation between various stakeholders. Collaboration between the following parties is crucial:

  1. AI developers: AI developers need to integrate robust security measures and ethical guidelines into the design of their LLMs. They should prioritize reducing the potential for misuse while maintaining the benefits of AI advancements.
  2. Prompt engineers: Prompt engineers should prioritize the development of secure prompts that minimize the risk of generating harmful content. This includes designing prompts that discourage AI systems from producing outputs that request personal information, propagate misinformation, or engage in other potentially dangerous activities. By establishing safe and responsible prompt engineering practices, the AI community can reduce the risk of misuse and ensure that these powerful technologies contribute positively to society.
  3. Policymakers: Policymakers should develop comprehensive regulatory frameworks that address the unique challenges of AI-enabled cybercrime. These regulations should focus on mitigating harm and provide clear guidance to AI developers and users.
  4. Cybersecurity professionals: Cybersecurity professionals should stay updated on the latest AI advancements and their potential use in cyberattacks. They should develop new defensive strategies and tools to counteract the evolving threat landscape.
  5. Businesses and organizations: Businesses and organizations should invest in cybersecurity training and awareness programs for their employees. These programs should emphasize the potential risks associated with AI-assisted spear phishing and other emerging cyber threats.
  6. Researchers: Researchers should continue to study the implications of AI advancements on cybersecurity and collaborate with other stakeholders to develop effective solutions. Interdisciplinary research involving AI, cybersecurity, and social sciences is essential for understanding and mitigating the risks.

Public Awareness and Education

Raising public awareness about the risks of AI-enabled cyberattacks is an essential step in mitigating the impact of such threats. By educating users on how to recognize and respond to spear phishing attacks, they can better protect themselves and their organizations from potential harm. Efforts to increase public awareness may include:

  1. Public awareness campaigns: Governments and non-profit organizations can launch public awareness campaigns to inform individuals about the risks associated with AI-assisted cyberattacks and how to protect themselves.
  2. Digital literacy programs: Educational institutions can integrate digital literacy programs into their curricula, teaching students how to recognize and respond to various forms of cyber threats, including AI-driven spear phishing attacks.
  3. Industry-specific training: Organizations can provide industry-specific training to help employees understand the unique risks and vulnerabilities associated with their sector.

Takeaway: The Future of Cybersecurity in the Age of Advanced LLMs

The need for robust security measures: The increasing sophistication and affordability of LLM-generated spear phishing attacks call for more robust security measures to protect individuals and organizations. The development of new detection and prevention strategies is crucial in the fight against this emerging threat.
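One direction such detection work could take is using LLMs themselves to screen inbound mail. The sketch below is a minimal, assumed design, not something from the paper: it asks a model to score how likely an email is to be a spear phishing attempt, and the model name, prompt, and quarantine threshold are all illustrative.

```python
from openai import OpenAI  # assumes the `openai` package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def phishing_score(email_text: str) -> float:
    """Ask a model to rate phishing likelihood from 0 to 1 (assumed prompt design)."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": ("Rate from 0.0 to 1.0 how likely the following email is a "
                        "spear phishing attempt. Reply with only the number.\n\n"
                        + email_text),
        }],
    )
    # Assumes the model complies and returns a bare number.
    return float(response.choices[0].message.content.strip())

def should_quarantine(email_text: str, threshold: float = 0.7) -> bool:
    """Flag mail for human review above an illustrative threshold."""
    return phishing_score(email_text) >= threshold
```

A production system would need far more than this (calibration, adversarial testing, rate and cost controls), but it illustrates the symmetry: the same capability that personalizes attacks can be pointed at screening them.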

The role of AI developers in preventing malicious LLM usage: AI developers have a responsibility to ensure that their creations are not misused for malicious purposes. By implementing safeguards and closely monitoring LLM usage, developers can help prevent the exploitation of their technology in spear phishing campaigns and other cyberattacks.

Balancing innovation with safety in the rapidly evolving world of technology: The rapid advancement of LLM technology has brought many benefits to society, but it also poses new challenges and threats. Striking a balance between innovation and safety is essential to ensure that the benefits of LLMs are harnessed without compromising digital security. As we continue to explore the potential of AI-driven systems, we must remain vigilant and proactive in safeguarding our digital lives from the ever-evolving threats posed by advanced LLMs and other emerging technologies.

💡
The rise of LLMs has opened up new possibilities for both beneficial applications and potential misuse by cybercriminals. As AI technology continues to evolve, so too will the threat landscape. It is essential for stakeholders to collaborate, adapt, and continuously work on developing strategies to mitigate the risks posed by AI-assisted cyberattacks. By combining public awareness, education, and collaborative efforts between stakeholders, society can better protect itself from the growing threat of AI-driven spear phishing and other cybercrimes.