AI as Weapons of Mass Destruction
Introduction
On 27-28 March 2025, the United Nations Institute for Disarmament Research (UNIDIR), an autonomous institution within the United Nations that conducts independent research on disarmament and international security, held its Global Conference on AI, Security and Ethics 2025. UNIDIR states that “Artificial intelligence will revolutionise how militaries operate and how future wars will be fought. Competitive pressures in today’s fraught global security environment will only accelerate this trend. With AI technology advancing at scale, the time to act is now. AI will help to increase the speed and accuracy of military decision-making, planning and operations. In combination with robotic platforms and next-generation sensor technology, it will open up a vast spectrum of military applications, from AI-enabled logistical support, early warning systems, and intelligence gathering to AI-supported command structures, cyber operations and autonomous weapons systems” (A Transformative Technology with Myriad Benefits and Significant Risks, n.d.).
The draft resolution presented to the United Nations General Assembly First Committee during its seventy-ninth session focuses on the implications of artificial intelligence (AI) in the military domain for international peace and security. The resolution emphasises the importance of ensuring that the use of AI in military applications complies with international law, including the UN Charter, international humanitarian law, and international human rights law, throughout all stages of the AI life cycle in the military context.
It highlights both the potential benefits of AI, such as improving compliance with international humanitarian law and protecting civilians, and the challenges and risks, including ethical, legal, security, and technological concerns. These risks involve the possibility of an arms race, miscalculation, escalation of conflicts, and proliferation to non-state actors, as well as issues related to bias in AI systems.
The resolution encourages States to engage in national and international efforts to manage AI's military applications, promote responsible knowledge sharing, and bridge the divides between developed and developing countries. It also calls for multilateral dialogue, involvement of various stakeholders (including civil society, academia, and the private sector), and further assessment of AI's impact on peace and security. Additionally, it requests that the UN Secretary-General seek and report on the views of Member States regarding AI in the military domain, excluding lethal autonomous weapons systems, to inform future discussions (Artificial Intelligence in the Military Domain and Its Implications for International Peace and Security, 2024).
The intersection of Artificial Intelligence (AI) and weapons of mass destruction (WMD) remains a critical and evolving area with significant implications for global security. The integration of AI into military and security domains has accelerated in recent years, with its potential applications in WMD contexts becoming increasingly apparent. The rapid advancement of AI, particularly Large Language Models (LLMs) and generative AI, has raised concerns about proliferation risks while also offering tools for enhanced monitoring and risk mitigation. This dual-use nature has prompted a global response, with organisations like the International Atomic Energy Agency (IAEA), the Organisation for the Prohibition of Chemical Weapons (OPCW), and the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO) adapting their strategies to address AI’s impact.
For the purpose of this study, AI is defined as: "A digital technology that imitates human cognitive skills and, by doing so, affects most aspects of the social realm. AI emerges as an agent rather than a tool and, as such, posits far-reaching consequences that are likely to fundamentally transform the world as we know it, bringing both positive and negative effects. Consequently, utopian and dystopian elements are likely to shape the future of humanity".[1]
This paper starts from one fundamental assumption. Unlike most existing literature, which views Artificial Intelligence (AI) as another variable complicating the control, use, and perhaps the design and production of Weapons of Mass Destruction (WMD), the author of this research project proceeds from the hypothesis that AI is a WMD in itself. To test this hypothesis, we will examine eight key features of AI, focusing on its nature and consequences: Distraction, Disinformation, Distribution, Distress, Ambiguity, Dis-comprehension, Dis-attribution, and Dehumanisation. Two further notions fundamental to WMDs will also be explored: Mutually Assured Destruction (MAD) and Deterrence. First, let us examine the existing literature on WMD in the field of international security.
Weapons of Mass Destruction – Key Characteristics
Weapons of Mass Destruction (WMDs) are a critical concern in international security, encompassing nuclear, biological, chemical, and radiological weapons. Their proliferation, strategic implications, regional roles, and comparative policies shape global stability and security frameworks.
WMDs are categorised into four primary types, each with distinct technical characteristics and potential for mass destruction. Nuclear weapons derive their destructive power from nuclear reactions, either fission or fusion. They release vast amounts of energy, causing catastrophic damage through blast, heat, electromagnetic pulse (EMP), and radiation. These weapons are the most lethal type of WMD, with the potential to destroy entire cities and ecosystems. The development and possession of nuclear weapons are regulated by the Nuclear Non-Proliferation Treaty (NPT), which allows only five nations—China, France, Russia, the United Kingdom, and the United States—to possess them legally. However, other states, such as North Korea, have pursued nuclear weapons programs despite international sanctions and diplomatic efforts (Les, 2024).
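To convey the scale involved, a back-of-the-envelope calculation may help (the author's illustration, not drawn from the cited sources): complete fission of one kilogram of uranium-235, at roughly 200 MeV released per fission event, yields

$$
E = \frac{1000\ \mathrm{g}}{235\ \mathrm{g/mol}} \times 6.022\times10^{23}\ \mathrm{mol^{-1}} \times 200\ \mathrm{MeV} \times 1.602\times10^{-13}\ \mathrm{J/MeV} \approx 8.2\times10^{13}\ \mathrm{J},
$$

that is, roughly 20 kilotons of TNT equivalent (1 kt TNT = 4.184 × 10^12 J), on the order of the Nagasaki bomb's yield, from a single kilogram of fissile material.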
Biological weapons utilise pathogens or toxins to cause disease and death. These weapons are unique because they exploit living organisms, making them potentially more insidious than chemical weapons. Despite their high lethality, biological weapons have not been widely used in modern warfare due to ethical norms and international prohibitions, such as the Biological Weapons Convention (BWC). However, their potential for misuse by non-state actors, such as terrorist groups, remains a significant concern (Jordan et al., 2016).
Chemical weapons use toxic chemicals to cause harm or death. These weapons were extensively used during World War I and have been employed in regional conflicts, such as the Iran-Iraq War. The Chemical Weapons Convention (CWC) bans the production, stockpiling, and use of chemical weapons, but compliance remains inconsistent. For instance, Syria's use of chemical weapons in the 2010s highlighted the challenges of enforcement (Abdullah et al., 2024).
Radiological weapons disperse radioactive materials without producing a nuclear explosion. These weapons can cause long-term environmental contamination and health effects. While not as destructive as nuclear weapons, their potential for terror and economic disruption makes them a concern for international security (Faden, 2023).
The strategic implications of WMDs are profound, influencing global politics, military strategies, and the international system. Nuclear weapons, in particular, have played a central role in global power dynamics since the Cold War. The concept of mutually assured destruction (MAD) has deterred the use of nuclear weapons, as their deployment would result in catastrophic consequences for both attackers and defenders. However, the proliferation of nuclear weapons to additional states, such as North Korea and potentially Iran, complicates this balance of power and raises the risk of regional conflicts (Lodgaard, 2007).
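The deterrence logic of MAD can be made explicit as a stylised two-player game (the payoff numbers are the author's arbitrary illustration, not taken from the cited literature). Each state chooses to Restrain or to Strike; a credible second-strike capability means that any profile containing a strike ends in mutual destruction:

$$
\begin{array}{c|cc}
 & \text{Restrain} & \text{Strike} \\ \hline
\text{Restrain} & (0,\ 0) & (-100,\ -100) \\
\text{Strike} & (-100,\ -100) & (-100,\ -100)
\end{array}
$$

Under these payoffs neither side can gain by striking first, so mutual restraint is the only rational outcome; the equilibrium holds, however, only as long as retaliation remains assured and credible, which is precisely what further proliferation complicates.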
The threat of WMDs extends beyond state actors to include non-state actors, such as terrorist groups. These groups may seek to acquire WMDs to inflict mass casualties and create fear. For example, the use of anthrax or other biological agents by terrorists could have devastating consequences. This has led to increased focus on counterterrorism measures and international cooperation to prevent the proliferation of WMDs (Ziska, 2022).
The proliferation of WMDs in regions such as the Middle East and East Asia has heightened tensions and destabilised international security. For instance, North Korea's nuclear program has raised concerns about regional security and the potential for nuclear conflict. Similarly, the presence of chemical and biological weapons in the Middle East has exacerbated existing conflicts and posed challenges for international diplomacy (Bennett, 2004).
The Middle East is a critical region for WMD proliferation, with countries such as Iran, Israel, and Syria being focal points. Iran's nuclear program has been a major concern, with allegations of covert enrichment activities raising fears of nuclear weapons development. Israel, while not a signatory to the NPT, is widely believed to possess nuclear weapons. The region's strategic importance and ongoing conflicts have made it a hotspot for WMD-related tensions (Bahgat, 2005).
East Asia, particularly the Korean Peninsula, is another critical region for WMD proliferation. North Korea's nuclear and missile programs have been a major focus of international concern, with the country conducting multiple nuclear tests despite global condemnation. The region's geopolitical dynamics, including the presence of major powers such as China, Japan, and the United States, further complicate the security landscape (Huang, 2024).
South Asia, particularly India and Pakistan, has also been a region of concern for WMD proliferation. Both countries possess nuclear weapons and have engaged in periodic conflicts, raising the risk of nuclear escalation. The region's instability is further exacerbated by the presence of non-state actors and the potential for WMDs to fall into their hands (Les, 2024).
The policies and approaches to addressing WMD proliferation vary significantly across regions and countries, reflecting differing priorities and strategic concerns. The United States has historically played a leading role in non-proliferation efforts, advocating for the strengthening of international regimes, such as the Nuclear Non-Proliferation Treaty (NPT). However, its policies have also been criticised for inconsistencies, such as its support for India's nuclear program despite India's non-membership in the NPT. The U.S. has also pursued unilateral measures, such as the Proliferation Security Initiative (PSI), to interdict WMD-related materials (Dokos, 2007).
The European Union has adopted a more multilateral approach to WMD proliferation, emphasising the importance of international treaties and regimes. The EU has been a strong supporter of the NPT and has engaged in diplomatic efforts to address the nuclear programs of countries such as Iran. However, the EU's approach has sometimes been criticised for being too reliant on diplomacy, with limited emphasis on enforcement mechanisms (Knopf, 2012).
China and Russia have pursued more pragmatic policies on WMD proliferation, often balancing their support for non-proliferation with their strategic interests. For example, both countries have been criticised for their reluctance to impose stringent sanctions on North Korea despite its nuclear program. Their policies reflect a focus on maintaining regional stability and avoiding confrontation with key allies (Huang, 2024).
NATO actively engages in preventing the spread of weapons of mass destruction (WMD) through a dynamic political agenda that spans arms control, disarmament, and non-proliferation, alongside defence and deterrence strategies. Member nations align and coordinate their WMD policies, collaborating in various international platforms to limit proliferation. They also work together to develop and, when necessary, deploy chemical, biological, radiological, and nuclear (CBRN) defence capabilities in line with political decisions. Combating proliferation necessitates a blend of political and military strategies, as well as building resilience among Allies against potential WMD threats. These efforts are part of a holistic strategy that involves engagement and collaboration with NATO partners and international organisations. The foundation of NATO’s strategy is detailed in the 2009 “Comprehensive, Strategic-Level Policy for Preventing the Proliferation of WMD and Defending against CBRN Threats,” approved by the heads of state and government and reaffirmed at the Brussels Summit in 2018 (NATO’s Response to the Threats Posed by Weapons of Mass Destruction, 2018).
Weapons of Mass Distraction
Artificial Intelligence (AI) has become an integral part of modern life, influencing both individual behaviour and social dynamics. While AI offers numerous benefits, its role as a tool for distraction has emerged as a significant concern. AI-driven technologies, particularly through personalised content and notifications, have been shown to significantly impact individual behaviour, often leading to digital addiction and decreased productivity (Jawad et al., 2024).
AI technologies, such as those used in social media and mobile applications, employ sophisticated algorithms to capture user attention. These algorithms often prioritise engagement over other considerations, leading to prolonged screen time and digital addiction. For instance, AI-driven notification systems optimise when and how notifications are delivered, increasing the frequency of interactions with mobile devices. This can deepen digital addiction as users become conditioned to respond to constant stimuli (Madleňák & Hladíková, 2024).
AI personalisation algorithms curate content based on user preferences, creating an echo chamber effect that can lead to compulsive consumption of information. This personalised content can distract individuals from tasks that require focused attention, as the constant availability of tailored content reduces the need for self-regulation. Studies have shown that individuals exposed to personalised content exhibit lower self-esteem and higher anxiety levels, further exacerbating the potential for distraction (Jawad et al., 2024).
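The reinforcement dynamic described above can be illustrated with a minimal simulation (the author's hypothetical sketch in Python, not any platform's actual system): a recommender repeatedly serves the items closest to a user's preference vector, the preference vector drifts toward whatever is served, and the resulting information diet narrows relative to a non-personalised control.

```python
import numpy as np

rng = np.random.default_rng(0)

n_topics = 5
items = rng.normal(size=(200, n_topics))            # catalogue of items as topic vectors
items /= np.linalg.norm(items, axis=1, keepdims=True)
initial_user = rng.normal(size=n_topics)
initial_user /= np.linalg.norm(initial_user)

def mean_similarity(personalised, steps=60):
    """Serve items for `steps` rounds; return mean pairwise similarity of consumed items."""
    user, consumed = initial_user.copy(), []
    for _ in range(steps):
        if personalised:
            pool = np.argsort(items @ user)[-10:]   # only the ten closest matches are eligible
        else:
            pool = np.arange(len(items))            # control: anything in the catalogue
        pick = rng.choice(pool)
        consumed.append(items[pick])
        user = 0.9 * user + 0.1 * items[pick]       # preferences drift toward what is served
        user /= np.linalg.norm(user)
    c = np.array(consumed)
    return (c @ c.T).mean()                         # higher = narrower information diet

print("random serving      :", round(mean_similarity(False), 3))
print("personalised serving:", round(mean_similarity(True), 3))
```

In this toy model, the mean similarity among consumed items rises sharply once serving is personalised, a crude proxy for the echo-chamber effect the cited studies describe.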
AI companions, such as chatbots and virtual assistants, offer a space for self-expression and exploration of identity. However, this can lead to a form of individual distraction, as users may spend significant time interacting with these AI entities, potentially at the expense of real-world interactions. While these interactions can enhance emotional well-being, they may also lead to a form of emotional dependency, where individuals rely on AI for validation and support rather than engaging with others (Kouros & Papa, 2024).
AI's impact on social dynamics is equally profound, influencing how individuals interact with each other and their communities. The rise of AI companions has been marketed as a solution to social isolation, offering constant availability and non-judgmental support. However, while these technologies can provide short-term comfort, they may exacerbate long-term social isolation by reducing the need for human interaction. Users may find it easier to interact with AI companions than with other humans, leading to a decline in social skills and emotional intelligence. This phenomenon is particularly concerning among adolescents, who are at a critical stage of social development (Savic, 2024).
AI personalisation algorithms can strengthen group identity, leading to greater polarisation and social fragmentation. By curating content that aligns with pre-existing beliefs and preferences, these algorithms can create echo chambers that reinforce in-group biases and reduce exposure to diverse viewpoints. This can lead to a form of social distraction, where individuals become more focused on their online communities than on engaging with broader society. Studies have shown that personalised content exposure predicts stronger in-group bias and reduced interaction with out-groups, further polarising social dynamics (Jawad et al., 2024).
AI tools also influence communication styles and self-presentation on social media. While these technologies can enhance user engagement, they may also diminish the authenticity of social interactions. For example, AI-driven filters and recommendation systems can lead to a more curated and less spontaneous form of communication, potentially reducing the richness of social connections. This can create a form of social distraction, where individuals become more focused on presenting a digital image than on engaging in meaningful interactions (Zulfiqar et al., 2024).
The ethical implications of AI-induced distraction are substantial, necessitating careful consideration of both the potential risks and benefits. Transparency and user awareness are critical, as users must be informed about the artificial nature of their interactions with AI companions to prevent unrealistic expectations or emotional dependency. Additionally, the commodification of care raises concerns about the authenticity and sustainability of emotional support provided by AI entities (Savic, 2024).
Weapons of Mass Disinformation
AI-generated disinformation can significantly influence political outcomes by spreading false or misleading information. For instance, deepfakes—AI-generated videos or images that can manipulate reality—pose a significant threat to electoral processes. These technologies can create convincing but false content, such as fake speeches or endorsements, which can sway public opinion and undermine trust in political leaders and institutions (Jones, 2023).
Moreover, AI-driven bots and algorithmic manipulation can amplify disinformation campaigns, targeting specific demographics with personalised messages. This microtargeting exploits emotional vulnerabilities, creating echo chambers that polarise opinions and erode confidence in democratic processes (Olanipekun, 2025). For example, during critical events, GenAI can flood information ecosystems with high-quality, convincing content, making it difficult to distinguish fact from fiction (Lucas et al., 2024).
Deepfakes have the potential to disrupt political stability by creating deceptive content that can damage reputations or manipulate public perception. For instance, a deepfake video of a political leader making inflammatory statements could escalate tensions or sway election results (Shoaib et al., 2023).
Foreign actors can also weaponise AI-generated disinformation to interfere in domestic politics. By leveraging AI tools, malicious actors can create and disseminate disinformation at scale, undermining trust in democratic institutions and exacerbating societal divisions (Mega, 2023). This clearly underscores the importance of international cooperation and cybersecurity measures to counter such threats (Yu, 2024).
AI-generated disinformation exacerbates polarisation by creating and amplifying divisive content. Algorithmic manipulation and information bubbles fuelled by AI can deepen societal divisions, as individuals are exposed to content that reinforces their existing beliefs while isolating them from opposing viewpoints (Lapchit et al., 2024). This polarisation can lead to the erosion of trust in institutions, including media, government, and civil society, further destabilising social cohesion (Summerfield et al., 2024).
AI technologies, particularly LLMs, can craft emotionally charged narratives that exploit psychological vulnerabilities. For example, fear-based messaging can be tailored to specific audiences, creating widespread anxiety or panic. This manipulation can have profound psychological effects, including heightened stress, decreased civic engagement, and diminished faith in democratic processes (Liu et al., 2024).
The dissemination of AI-generated disinformation can also fuel radicalisation by spreading extremist ideologies. By creating and amplifying false or misleading content, malicious actors can recruit individuals into extremist groups or movements, further fragmenting society (Barman et al., 2024).
AI-generated disinformation can also have substantial economic consequences, particularly in financial markets. For instance, deepfakes or false news about companies or economic indicators can lead to stock market volatility, damaging investor confidence and destabilising economies (Jones, 2023). Similarly, disinformation campaigns targeting businesses can harm reputations, leading to financial losses and economic disruption (Goldstein et al., 2023).
Weapons of Mass Distribution
The proliferation of AI is a fact. Artificial intelligence is being integrated into nearly every aspect of our lives, from healthcare, education, and finance to cyber and national security. Most of us, especially in the developed world, use AI algorithms in one way or another and have been adapting to the new digital environment quite effectively.
Artificial Intelligence (AI) has revolutionised numerous sectors, offering unprecedented advancements in efficiency, productivity, and innovation. However, the rapid development and deployment of AI technologies have also introduced significant risks to cybersecurity, privacy, and national security.
AI has enabled cybercriminals to develop more sophisticated and targeted attacks. For instance, AI-enhanced cyber-attacks can now impersonate humans, manipulate data, and exploit system vulnerabilities on an unprecedented scale (Çalışkan, 2024). These attacks are more challenging to detect due to their automated nature, making traditional cybersecurity measures less effective. The SolarWinds hack and DeepLocker ransomware are prime examples of how AI-driven attacks have demonstrated advanced capabilities, such as precision targeting and data exfiltration (Karthikeyan, 2024).
The weaponisation of AI in cyberspace has introduced new challenges for national security. AI-powered tools, such as Occupy AI, have been specifically engineered to automate and execute cyberattacks, including phishing, malware injection, and system exploitation (Usman et al., 2024). These tools can bypass ethical and privacy safeguards, making them highly effective in generating and automating cyberattacks. The misuse of AI in this context underscores the urgent need for robust cybersecurity measures and ethical AI practices.
AI-powered deepfake technology has emerged as a significant threat to privacy and security. Deepfakes can manipulate audio and video content to create convincing but false information, leading to identity theft, political manipulation, and social unrest (Montasari, 2023; Rao et al., 2024). The ability of AI to generate realistic media has also been exploited in disinformation campaigns, further exacerbating the challenge of digital trust and authenticity (Rao et al., 2024).
The use of AI in processing and analysing vast amounts of data has introduced significant privacy risks. AI systems often require access to sensitive data, which can be misused if proper safeguards are not in place. For instance, AI systems can be used to infer sensitive information about individuals through techniques like model inversion and membership inference (Al-Kharusi et al., 2024). Moreover, the lack of transparency in AI decision-making processes can lead to biased outcomes, further undermining privacy and trust (Vashishth et al., 2024).
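A membership inference attack, in its simplest form, exploits the fact that overfit models are more confident about records they were trained on. The sketch below (the author's illustration on synthetic data, not a description of any deployed system) flags a record as a training-set "member" whenever the model's confidence exceeds a threshold:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a sensitive dataset; label noise makes memorisation visible.
X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.1, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# A deliberately overfit model: fully grown trees memorise their training records.
model = RandomForestClassifier(n_estimators=100, max_depth=None, random_state=0)
model.fit(X_in, y_in)                              # X_in are the "members"

conf_in = model.predict_proba(X_in).max(axis=1)    # confidence on members
conf_out = model.predict_proba(X_out).max(axis=1)  # confidence on unseen records

threshold = 0.9                                    # the attacker's decision rule
print(f"members flagged as members    : {(conf_in > threshold).mean():.0%}")
print(f"non-members flagged as members: {(conf_out > threshold).mean():.0%}")
```

On the overfit model, the hit rate on members noticeably exceeds the false-alarm rate on non-members, and that gap is exactly the signal an attacker needs to infer whether a specific individual's record was part of the training data.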
Weapons of Mass Distress
The author of this study contends that AI is the latest manifestation of the ongoing technological revolution, most likely the Fourth Industrial Revolution (Schwab, 2017). The development of AI has also been analysed in the context of the next Revolution in Military Affairs (RMA).
Firstly, the role of artificial intelligence: the increasing use of artificial intelligence in military applications raises questions about the potential for a new RMA. While the swift embrace of AI technologies is poised to revolutionise how states engage in warfare on multiple levels, this AI revolution also introduces new risks that need to be addressed as algorithmic systems become more integrated into military operations. Although cutting-edge technology promises to alter the nature of military strength in certain areas, it also complicates the cognitive elements of decision-making and bureaucratic processes within security organisations. The rapid pace at which complex AI systems facilitate entirely new forms of warfare may also lead to a potentially destabilising separation of human agency from the execution of military actions on various fronts. Mitigating the adverse effects of these "ghosts in the machine" will require substantial efforts to educate leaders, ensure accountability, and prevent the reckless use of AI (Jensen, Whyte, & Cuomo, 2019).
Secondly, autonomous weapons systems: their development is another area of active debate, with some arguing that it could lead to a significant shift in the character of warfare. Scholars assert that the rapid deployment of cyber weapons, robotics, and artificial intelligence will significantly transform the nature of warfare. The way wars are fought will affect the dynamics between policy, civilian populations, and military forces in ways that are just beginning to be understood. Progress in artificial intelligence will bring a new form of rationality to warfare, altering the traditional elements of Clausewitz's trinity. In the coming decades, this new source of logical calculation and innovation might be recognised as a distinct component of warfare (Hoffman, 2019).
Thirdly, cyber warfare: its growing importance is also considered a potential driver of future RMAs. The intersection of cyber warfare and artificial intelligence is a rapidly evolving field of study, with AI being explored for both offensive and defensive cyber capabilities, leading to a complex and dynamic landscape. Current research suggests several key themes worth further exploration, chief among them AI-enabled cyber weapons: the existing literature strongly suggests that the first militarily significant autonomous weapons system enabled by AI will likely be deployed in cyberspace. This raises concerns about the potential for rapid escalation and the need for international norms and regulations (Strategic Comments, 2019).
To sum up, AI adds significantly to the ongoing arms race, not only through the integration of this new and emerging technology into existing WMD systems but also by establishing a whole new level of competition: an AI race, led by the US and China, whose winner is likely to secure a dominant position in the international system.
Weapons of Mass Ambiguity
The concept of the Ambiguous Symbolism of Weapons (ASW) posits that all weapons and weapon systems inherently possess both defensive and offensive capabilities. ASW is a core aspect of international security, producing what scholars label irresolvable uncertainty. Closely related is the other minds problem (OMP): no state can directly observe another's intentions. In the author's view, both notions are as pertinent, if not more so, in relation to AI.
In the context of security studies, scholars often discuss the “double-edged sword” characteristics that AI technology presents. Artificial intelligence (AI) has become a critical component of modern military strategy, enhancing decision-making, operational efficiency, and battlefield capabilities. Autonomous weapon systems powered by AI can process vast amounts of data and make decisions in real-time, enabling faster and more precise military operations (Chalagashvili, 2024). AI is also being used to improve military logistics, surveillance, and intelligence operations. For instance, AI algorithms can analyse satellite imagery and predict trends in criminal activities, enabling preemptive responses to security challenges (Montasari, 2023).
Despite its protective capabilities, AI can also be weaponised to launch more sophisticated cyberattacks. Adversaries can use AI to impersonate humans, manipulate data, and exploit system vulnerabilities on an unprecedented scale (Çalışkan, 2024). For example, AI-powered phishing attacks have become increasingly prevalent, leveraging machine learning to craft compelling emails and social engineering campaigns (Kolade et al., 2025). The automation of cyberattacks has made them more targeted and challenging to detect. AI can enable adversaries to adapt their tactics in real time, making it difficult for traditional cybersecurity measures to keep up (Jensen et al., 2020). This has raised concerns about the potential for AI-driven cyberattacks to destabilise economies and undermine diplomatic relations.
AI has become a cornerstone of modern intelligence gathering, enabling the analysis of vast amounts of data to identify patterns and predict potential threats. AI-driven Open Source Intelligence (OSINT) tools, for example, have been instrumental in detecting and mitigating cyberattacks by analysing publicly available data (Malec, 2024). AI algorithms are also being used to enhance surveillance and threat assessment systems. By analysing communication traffic, satellite images, and social media posts, AI can identify potential security threats and enable preemptive responses (Montasari, 2023).
Weapons of Mass Dis-comprehension
The fog of war is a concept attributed to the Prussian military theorist Carl von Clausewitz; it highlights the uncertainty and incomplete information that commanders face on the battlefield. The fog of war refers to the lack of clear, accurate information about the enemy's position, intentions, and capabilities, as well as the unpredictable nature of events during combat. Clausewitz also discussed "friction," which encompasses the physical and logistical challenges that impede military operations, further contributing to the overall uncertainty and difficulty of war.
As much as technological development is hoped to lift the fog of war, it may well exacerbate the problem. AI systems rely on data to make decisions, but in military operations, data is often incomplete, noisy, or uncertain. For example, sensor data may be incomplete due to limitations in sensor coverage or the presence of adversaries actively attempting to disrupt or deceive sensors. Noisy data can lead to incorrect or misleading conclusions, which can further obscure the commander's understanding of the battlefield (Judd et al., 2019).
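How quickly noise corrodes an automated picture of the battlefield can be seen in a deliberately simple sketch (the author's toy example, not a model of any fielded system): a threshold detector that is near-perfect on clean sensor returns degrades toward guesswork as measurement noise grows.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth over 10,000 readings: 1 = target present, 0 = absent.
truth = rng.integers(0, 2, size=10_000)
clean_signal = truth.astype(float)              # idealised sensor return

for noise_std in (0.1, 0.5, 1.0):
    reading = clean_signal + rng.normal(0.0, noise_std, size=truth.size)
    decision = (reading > 0.5).astype(int)      # simple threshold detector
    print(f"noise sigma={noise_std}: error rate {(decision != truth).mean():.1%}")
```

Even moderate noise pushes the detector's error rate toward a coin flip, and an adversary deliberately injecting noise or decoys can degrade the picture far faster than nature.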
AI systems can perpetuate and amplify biases present in the data used to train them. If the training data is biased or incomplete, the AI system may produce outputs that are systematically incorrect or misleading. This can lead to a situation where the biases of the AI system distort the commander's understanding of the battlefield (Oimann & Salatino, 2024).
Many AI systems, particularly those based on machine learning, operate as "black boxes," making it difficult for commanders to understand how decisions are being made. This lack of transparency can lead to a loss of trust in AI systems and create new sources of uncertainty, as commanders may be uncertain whether to rely on the AI system's outputs (Salmela et al., 2025).
In military operations, adversaries may attempt to manipulate AI systems through adversarial attacks, which involve feeding the system inputs designed to cause it to make incorrect decisions. These attacks can further obscure the commander's understanding of the battlefield and create new sources of uncertainty (Johnson, 2021).
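A canonical example of such manipulation is the fast gradient sign method (FGSM), in which an attacker nudges every input feature by a small amount in whichever direction most increases the model's loss. The sketch below (the author's minimal illustration against a toy linear classifier, not an attack on any military system) shows accuracy collapsing as the perturbation budget epsilon grows:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Fast Gradient Sign Method (FGSM): perturb each input by epsilon in the
# direction that increases the classifier's loss on the true label.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = LogisticRegression().fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

def fgsm(X, y, eps):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of class 1
    grad = (p - y)[:, None] * w[None, :]     # gradient of log-loss w.r.t. the input
    return X + eps * np.sign(grad)           # one signed step per feature

for eps in (0.0, 0.1, 0.3):
    print(f"epsilon={eps}: accuracy {clf.score(fgsm(X, y, eps), y):.2f}")
```

The unsettling property is that each perturbed input remains almost indistinguishable from the original, so a human looking over the system's shoulder sees nothing obviously wrong.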
One of the key challenges in AI-driven military operations is building and maintaining trust in AI systems. Commanders must have confidence in the accuracy and reliability of the information provided by AI systems, as well as the decisions recommended by these systems. However, establishing this trust can be challenging, particularly if the AI system is opaque or if its outputs are inconsistent with the commander's understanding of the situation (Oimann & Salatino, 2024).
Weapons of Mass Dis-attribution
To be continued…
References
- Les, I. (2024). The international legal regime of non-proliferation of nuclear weapons. Analìtično-Porìvnâlʹne Pravoznavstvo. https://doi.org/10.24144/2788-6018.2024.01.123
- Jordan, D., Kiras, J. D., Lonsdale, D. J., Speller, I., Tuck, C., & Walton, C. D. (2016). Weapons of mass destruction: radiological, biological and chemical weapons (pp. 379–404). Cambridge University Press. https://doi.org/10.1017/CBO9781316460276.018
- Abdullah, M. M., Ftayh, R. F., Khalaf, L. R., & Al-Obaidi, B. S. H. (2024). Weapons of Mass Destruction and International Law. Journal of Ecohumanism, 3(5), 511–524. https://doi.org/10.62754/joe.v3i5.3920
- Faden, D. L. (2023). Weapons of Mass Destruction (pp. 522–534). Routledge eBooks. https://doi.org/10.4324/9781003266365-43
- Lodgaard, S. (2007). Nuclear Proliferation and International Security. https://doi.org/10.4324/9780203089033
- Ziska, L. H. (2022). 21. Weapons of Mass Destruction (pp. 312–328). Oxford University Press eBooks. https://doi.org/10.1093/hepl/9780198862192.003.0021
- Bennett, B. W. (2004). Weapons of Mass Destruction: The North Korean Threat. Korean Journal of Defense Analysis, 16(2), 79–108. https://doi.org/10.1080/10163270409464066
- Bahgat, G. (2005). Nuclear Proliferation and the Middle East. Journal of Social, Political, and Economic Studies, 30(4), 401. https://www.questia.com/library/journal/1P3-948618451/nuclear-proliferation-and-the-middle-east
- Huang, Y. (2024). Nuclear Proliferation and International Security: Challenges and Perspectives. https://doi.org/10.62051/ijsspa.v4n2.52
- Dokos, T. P. (2007). Countering the Proliferation of Weapons of Mass Destruction: NATO and EU Options in the Mediterranean and the Middle East. https://www.taylorfrancis.com/books/mono/10.4324/9781003062578/countering-proliferation-weapons-mass-destruction-thanos-dokos
- Knopf, J. W. (2012). Multilateral Cooperation on Nonproliferation. https://apps.dtic.mil/sti/tr/pdf/ADA569925.pdf
- NATO’s Response to the Threats Posed by Weapons of Mass Destruction. (2018). In North Atlantic Treaty Organization. NATO. https://www.nato.int/nato_static_fl2014/assets/pdf/2018/10/pdf/1810-factsheet-wmd-en.pdf
- A transformative technology with myriad benefits and significant risks. (n.d.). UNIDIR. Retrieved November 6, 2025, from https://unidir.org/focus-area/artificial-intelligence/
- United Nations General Assembly. (2024, October 16). Artificial intelligence in the military domain and its implications for international peace and security (A/C.1/79/L.43). https://undocs.org/A/C.1/79/L.43
- Jawad, M., Talreja, K. R., Bhutto, S. A., & Faizan, K. (2024). Investigating how AI Personalization Algorithms Influence Self-Perception, Group Identity, and Social Interactions Online. Review of Applied Management and Social Sciences. https://doi.org/10.47067/ramss.v7i4.397
- Madleňák, A., & Hladíková, V. (2024). Manifestations of Technological Interference Associated with the Development of Artificial Intelligence Technologies for Automating Communication Processes in the Digital Environment. Deleted Journal, 459–466. https://doi.org/10.34135/mmidentity-2024-47
- Prayoga, H., & Wakhid, Z. N. (2024). AI chatbot distractions and academic triumphs: a mediation approach with self-control and coping skills. Journal of Accounting and Investment, 25(2), 673–691. https://doi.org/10.18196/jai.v25i2.20755
- Savic, M. (2024). Artificial Companions, Real Connections? M/C Journal, 27(6). https://doi.org/10.5204/mcj.3111
- Zulfiqar, N., B, S., Shah, S. A., & Awan, A. (2024). Investigating how AI Technologies Influence Changing Social Norms and Behaviors on Social Media, Particularly in Areas like Communication and Self-Presentation. Review of Education, Administration and Law, 7(4), 167–183. https://doi.org/10.47067/real.v7i4.371
- Jones, N. (2023). How to stop AI deepfakes from sinking society — and science. Nature, 621, 676–679. https://doi.org/10.1038/d41586-023-02990-y
- Olanipekun, S. O. (2025). Computational propaganda and misinformation: AI technologies as tools of media manipulation. World Journal Of Advanced Research and Reviews, 25(1), 911–923. https://doi.org/10.30574/wjarr.2025.25.1.0131
- Lucas, J. S., Maung, B., Tabar, M., McBride, K., & Lee, D. (2024). The Longtail Impact of Generative AI on Disinformation: Harmonizing Dichotomous Perspectives. IEEE Intelligent Systems, 39(5), 12–19. https://doi.org/10.1109/mis.2024.3439109
- Shoaib, M. R., Wang, Z., Taleby Ahvanooey, M., & Zhao, J. (2023). Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models. https://doi.org/10.1109/icca59364.2023.10401723
- Mega, R. A. Y. S. (2023). Countering Democratic Disruption Amid The Disinformation Phenomenon Through Artificial Intelligence (Ai) In Public Sector. Jurnal Manajemen Pelayanan Publik. https://doi.org/10.24198/jmpp.v7i1.48125
- Yu, C. (2024). How Will AI Steal Our Elections? https://doi.org/10.31219/osf.io/un7ev
- Lapchit, S., Suktam, W., Supsin, J., & Kenaphoom, S. (2024). Artificial Intelligence and Information Bubbles. Practice, Progress, and Proficiency in Sustainability, 161–178. https://doi.org/10.4018/979-8-3693-7989-9.ch009
- Summerfield, C., Argyle, L., Bakker, M. A., Collins, T., Durmus, E., Eloundou, T., Gabriel, I., Ganguli, D., Hackenburg, K., Hadfield, G., Hewitt, L., Huang, S., Landemore, H., Marchal, N., Ovadya, A., Procaccia, A., Risse, M., Schneier, B., Seger, E., … Botvinick, M. M. (2024). How will advanced AI systems impact democracy? arXiv.Org, abs/2409.06729. https://doi.org/10.48550/arxiv.2409.06729
- Liu, X., Lin, Y., Jiang, Z., & Wu, Q. (2024). Social Risks in the Era of Generative AI. Proceedings of the Association for Information Science and Technology, 61(1), 790–794. https://doi.org/10.1002/pra2.1103
- Barman, D., Guo, Z., & Conlan, O. (2024). The Dark Side of Language Models: Exploring the Potential of LLMs in Multimedia Disinformation Generation and Dissemination. Machine Learning with Applications. https://doi.org/10.1016/j.mlwa.2024.100545
- Goldstein, J. A., Sastry, G., Musser, M., DiResta, R., Gentzel, M., & Sedova, K. (2023). Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. arXiv.Org, abs/2301.04246. https://doi.org/10.48550/arXiv.2301.04246
- Çalışkan, E. M. (2024). The Threat of Tomorrow: Impacts of Artificial Intelligence-Enhanced Cyber-attacks on International Relations. Güvenlik Stratejileri Dergisi, 0 (War and International System), 109–130. https://doi.org/10.17752/guvenlikstrtj.1491683
- Karthikeyan, S. P. (2024). Rising Threat of AI-Driven Cybersecurity Attacks: Implications for National Security. International Journal For Science Technology And Engineering, 12(9), 756–762. https://doi.org/10.22214/ijraset.2024.64042
- Usman, Y., Upadhyay, A., Gyawali, P., & Chataut, R. (2024). Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks. https://doi.org/10.48550/arxiv.2408.12806
- Montasari, R. (2023). Internet of Things and Artificial Intelligence in National Security: Applications and Issues. Advances in Information Security, 27–56. https://doi.org/10.1007/978-3-031-21920-7_3
- Rao, V. N. T., G., H., Sai, K. A. N., & Sangers, B. (2024). Future Trends and Trials in Cybersecurity and Generative AI. Advances in Information Security, Privacy, and Ethics Book Series, 465–490. https://doi.org/10.4018/979-8-3693-5415-5.ch013
- Al-Kharusi, Y., Khan, A., Mughal, M. R., & Bait‐Suwailam, M. M. (2024). Open-Source Artificial Intelligence Privacy and Security: A Review. Computers, 13(12), 311. https://doi.org/10.3390/computers13120311
- Vashishth, T. K., Sharma, V., Samania, B., Sharma, R., Singh, S., & Jajoria, P. (2024). Ethical and Legal Implications of AI in Cybersecurity. Advances in Computational Intelligence and Robotics Book Series, 387–414. https://doi.org/10.4018/979-8-3693-7540-2.ch017
- Schwab, K. (2017). The Fourth Industrial Revolution. Crown Currency.
- Jensen, B., Whyte, C., & Cuomo, S. (2019). Algorithms at War: The Promise, Peril, and Limits of Artificial Intelligence. Oxford University Press.
- Hoffman, F. G. (2019). Squaring Clausewitz’s Trinity in the Age of Autonomous Weapons. Orbis. https://www.fpri.org/article/2019/01/squaring-clausewitzs-trinity-in-the-age-of-autonomous-weapons/
- Artificial intelligence and offensive cyber weapons. (2019). Strategic Comments, 25(10), x–xii. https://doi.org/10.1080/13567888.2019.1708069
- Chalagashvili, I. (2024). AI Era in Modern Warfare. Social Science Research Network. https://doi.org/10.2139/ssrn.4813807
- Kolade, T. M., Obioha-Val, O., Balogun, A. Y., Gbadebo, M. O., & Olaniyi, O. O. (2025). AI-Driven Open Source Intelligence in Cyber Defense: A Double-edged Sword for National Security. Asian Journal of Research in Computer Science, 18(1), 133–153. https://doi.org/10.9734/ajrcos/2025/v18i1554
- Jensen, B., Whyte, C., & Cuomo, S. (2020). Algorithms at War: The Promise, Peril, and Limits of Artificial Intelligence. International Studies Review, 22(3), 526–550. https://doi.org/10.1093/ISR/VIZ025
- Malec, N. (2024). Sztuczna inteligencja a bezpieczeństwo państwa [Artificial intelligence and state security]. Prawo i Bezpieczeństwo, 1(2024), 20–24. https://doi.org/10.4467/29567610pib.24.002.19838
- Judd, G., Szabo, C., Chan, K. S., Radenovic, V., Boyd, P., Marcus, K., & Ward, D. (2019). Representing and reasoning over military context information in complex multi domain battlespaces using artificial intelligence and machine learning. 11006, 1100607. https://doi.org/10.1117/12.2518580
- Oimann, A., & Salatino, A. (2024). Command responsibility in military AI contexts: balancing theory and practicality. AI and Ethics. https://doi.org/10.1007/s43681-024-00512-8
- Salmela, L., Okkonen, J., & Heikkiniemi, R. (2025). State-of-the-art in human-centric studies of AI-enhanced situational awareness within the security domain. AHFE International, 160. https://doi.org/10.54941/ahfe1005808
- Johnson, J. (2021). Inadvertent Escalation in the Age of Intelligence Machines: A new model for nuclear risk in the digital age. European Journal of International Security, 1–23. https://doi.org/10.1017/EIS.2021.23
[1] Author’s own definition.