We track organizations that develop national security and defense policies in order to collect trending defense AI and arms control policies. Fresh reports and analysis from tracked sources are automatically collected, filtered, and categorized, and our collection is updated daily.
commentary
The Eurofighter Typhoon, a joint project of the UK, Germany, Italy, and Spain, represents a landmark in European defense integration. The advent of artificial intelligence (AI) has led many European defense analysts to call for an entirely new Eurofighter: a sixth-generation aircraft that incorporates capabilities such as AI.
2024-11-01
commentary
Establishing a taxonomy for AI risks would enable researchers, policymakers, and industries to communicate effectively and coordinate their efforts.
2024-10-21
statement
Read the Stop Killer Robots statement to First Committee at the 79th United Nations General Assembly.
2024-10-17
meeting report
Recent advances in artificial intelligence (AI) have created powerful tools for research, particularly ... , health-related data in the Department of Defense (DoD). Discussions explored researchers’ ethical...
2024-10-15
open forum
The rapid technological advancement of AI in the civilian sector is accompanied by accelerating attempts to apply this technology in the military sector. This study focuses on the argument that AI-equipped let...
2024-10-15
commentary
First-person drone piloting is yesterday’s news. Drones are becoming smarter as the electronic environment around them makes operator communication more difficult.
2024-10-10
commentary
Future jets could be fully unmanned, with rapid design and production through 3D printing and AI-driven simulations.
2024-09-27
analysis
In this post, Erica Harper sets out the possible implications of AI-enabled military decision-making as this relates to the initiation of war, the waging of conflict, and peacebuilding. She highlights that while such use of AI may create positive externalities — including in terms of prevention and harm mitigation — the risks are profound. These include the potential for a new era of opportunistic warfare, a mainstreaming of violence desensitization and missed opportunities for peace. Such potential needs to be assessed in terms of the current state of multilateral fragility, and factored into AI policy-making at the regional and international levels.
2024-09-26
analysis
In this post, Matthias Klaus, who has a background in AI ethics, risk analysis and international security studies, explores the ethical challenges associated with a military AI application often overshadowed by the dominant concern about autonomous weapon systems (AWS). He highlights a number of ethical challenges associated specifically with decision support systems (DSS), which are often portrayed as bringing more objectivity, effectiveness and efficiency to military decision-making. However, they could foster forms of bias, infringe upon human autonomy and dignity, and effectively undermine military moral responsibility by resulting in peer pressure and deskilling.
2024-09-24
open forum
New technologies with military applications may demand new modes of governance. In this article, we develop a taxonomy of technology governance forms, outline their strengths, and red-team their weaknesses.
2024-09-19
toolkit
While parliamentarians won’t participate directly in the negotiation of an international treaty banning and regulating killer robots – that is done by diplomats under instruction from their governments – parliamentarians will have a vital role in rejecting the automation of killing, ensuring meaningful human control over the use of force, and building momentum towards a treaty. Engaging with them is therefore an important part of our lobbying efforts. This guide aims to assist campaigners in parliamentary engagement. Approaching and engaging parliamentarians has its specificities, and you will have to adjust to your national context, but parliamentary outreach should be a critical part of your overall advocacy and lobbying activities.
2024-09-19
commentary
Today, it is far too easy for reckless and malicious actors to get their hands on the most advanced and potentially lethal machine-learning algorithms.
2024-09-05
analysis
As AI-based decision support systems (AI DSS) are increasingly used on contemporary battlefields, Jimena Sofía Viveros Álvarez, member of the United Nations Secretary-General’s High-Level Advisory Body on AI, REAIM Commissioner and OECD.AI expert, argues against reliance on these technologies to support the target identification, selection and engagement cycle: their risks and inefficacies cannot be ignored and threaten to exacerbate civilian suffering.
2024-09-04
analysis
Algorithmic bias has long been recognized as a key problem affecting decision-making processes that integrate artificial intelligence (AI) technologies. The increased use of AI in making military decisions relevant to the use of force has sustained such questions about biases in these technologies and in how human users programme with and rely on data based on hierarchized socio-cultural norms, knowledges, and modes of attention.
2024-09-03
report
Recent advances in the capabilities of artificial intelligence (AI) have increased state interest in leveraging AI for military purposes. Military integration of advanced AI by nuclear-armed states has the potential to have an impact on elements of their nuclear deterrence architecture such as missile early-warning systems, intelligence, surveillance and reconnaissance (ISR) and nuclear command, control and communications (NC3), as well as related conventional systems.
2024-09-01
analysis
Militaries are incorporating increasingly complex forms of artificial intelligence-based decision support systems (AI DSS) into their decision-making processes, including decisions on the use of force. What is new is that the way these AI DSS function challenges humans’ ability to exercise judgement in military decision-making processes. This potential erosion of human judgement raises several legal, humanitarian and ethical challenges and risks, especially in relation to military decisions that have a significant impact on people’s lives, their dignity, and their communities. It is in light of this development that we must urgently and in earnest discuss how these systems are used and their impact on people affected by armed conflict.
2024-08-29
statement
Read the statement in full from Stop Killer Robots to the first 2024 session of the Group of Governmental Experts of the High Contracting Parties related to emerging technologies in the area of lethal autonomous weapons systems (LAWS).
2024-08-26
report
Stop Killer Robots’ research and monitoring team has produced a publication summarising State submissions to the highly anticipated UN Secretary-General’s report on Autonomous Weapons.
2024-08-16
analysis
How Project Maven and the U.S. 18th Airborne Corps Operationalized Software and Artificial Intelligence for the Department of Defense
2024-08-14
commentary
There are still concerns about leaving too much control to the machines, lest stories from science fiction become self-fulfilling prophecies. But the fact remains that the use of drones could help keep soldiers out of harm's way.
2024-08-14
commentary
While the seventh generation isn't yet defined, it may feature autonomous capabilities, advanced materials, and multinational collaboration. However, such advancements could be decades away, possibly emerging in the 2070s or later.
2024-08-06
commentary
Militaries need to show it’s possible to build ethical killer robots that don’t say no, or engineer a safe right-to-refuse while keeping humans in the loop.
2024-08-06
report
As the global conversation on how to address the challenges posed by autonomous weapon systems (AWS) evolves, there is now growing support among states that one possible way to proceed is through a ‘two-tiered approach’. Such an approach would, on the one hand, prohibit certain types and uses of AWS and, on the other hand, place limits and requirements on the development and use of all other AWS. A critical task facing states is to agree on how such a two-tiered approach could be enacted.
2024-08-01
original research
Artificial intelligence (AI) has found extensive applications to varying degrees across diverse domains, including the possibility of using it within military contexts for making decisions that can have moral consequences. A recurring challenge in this area concerns the allocation of moral responsibility in the case of negative AI-induced outcomes.
2024-07-22
commentary
Retired U.S. Army General Mark Milley predicts that robots and autonomous systems could comprise up to one-third of the U.S. military by 2039, potentially operated and commanded by artificial intelligence (AI).
2024-07-17
commentary
The B-21 Raider stealth bomber, still in development, faced a $1.6 billion cost overrun in late 2023, raising concerns about its expense amid the rise of drone warfare. Despite this, the B-21 is intended to replace the B-2 Spirit, whose stealth capabilities are becoming outdated.
2024-07-14
commentary
As warfare evolves, traditional military strategies and platforms must adapt or face obsolescence. The U.S. military, heavily reliant on aircraft carriers, now confronts the growing threat of anti-access/area denial (A2/AD) systems.
2024-07-07
analysis
How to govern artificial intelligence is a concern that is rightfully top of mind for lawmakers and policymakers. To govern AI effectively, regulators must: 1) know the terrain of AI risk and harm by tracking incidents and collecting data; 2) develop their own AI literacy and build better public understanding of the benefits and risks; and 3) preserve adaptability and agility by developing policies that can be updated as AI evolves.
2024-07-01
original article
Artificial intelligence has been a hot topic in recent years, particularly as it relates to warfare and military operations. While rational choice approaches have been widely used to understand the causes of war, there is little literature on using the rational choice methodology to investigate the role of AI in warfare systematically.
2024-06-19
analysis
To help states elaborate on possible elements of a two-tiered approach to the governance of AWS, Laura Bruun from the Stockholm International Peace Research Institute (SIPRI) points to three lessons from past arms control negotiations that can be applied to the AWS debate.
2024-06-13
annual report
The 2023 annual report provides an overview of activities carried out by the Campaign to Stop Killer Robots from April 2023 to March 2024.
2024-06-10
annual report
The 2022 annual report provides an overview of activities carried out by the Campaign to Stop Killer Robots from April 2022 to March 2023.
2024-06-10
annual report
The 2021 annual report provides an overview of activities carried out by the Campaign to Stop Killer Robots from April 2021 to March 2022.
2024-06-10
commentary
That should be seen as a danger not just for those in Ukraine today, but perhaps all of humanity in the not-so-distant future.
2024-06-05
commentary
Manned aircraft are still relevant, and with the arrival of the B-21 they should remain so. But automation and artificial intelligence are coming, and will one day encroach upon the pilot’s job security.
2024-05-21
response paper
The submission from Stop Killer Robots to the United Nations Secretary-General in response to Resolution 78/241 on autonomous weapons systems.
2024-05-15
commentary
A 2020 study identified at least 12 NATO member states as using social media to spread computational propaganda and disinformation, while two (the UK and USA) were shown to have high “cyber troop” (government or political party actors tasked with manipulating public opinion online) capacity. Such activities appear to be connected to US special forces and intelligence agencies, and are being linked to private sector initiatives using artificial intelligence.
2024-05-10
commentary
The future loyal wingmen of the United States Air Force are inching closer to becoming a reality, and more importantly, the artificial intelligence (AI) controlled aircraft could be on track to be as good as any human pilot. That was the assessment from Air Force Secretary Frank Kendall, who recently took flight in an autonomously controlled X-62A VISTA (Variable In-flight Simulation Test Aircraft), a modified F-16 Fighting Falcon.
2024-05-10
commentary
Israel and the Gulf States are betting on artificial intelligence to help them fend off Iranian drones and proxies.
2024-05-09
commentary
Generative AI can wreak havoc in many ways but it’s not an existential threat any more than computer code is.
2024-04-29
commentary
The U.S. Air Force is advancing artificial intelligence (AI) capabilities within its ranks by incorporating AI pilots into F-16 combat aircraft as part of DARPA's Air Combat Evolution (ACE) program.
2024-04-21
commentary
In the recently disclosed flights, the ACE AI algorithms took control of a specially modified F-16 Fighting Falcon test aircraft designated the X-62A, or VISTA (Variable In-flight Simulation Test Aircraft), at the Air Force Test Pilot School at Edwards Air Force Base (AFB), California. The demonstrations of autonomous combat maneuvers began last year.
2024-04-18
commentary
Surprisingly, people serving in the US military are less likely than the general public to support using unmanned vehicles in military operations, even when doing so could save soldiers’ lives.
2024-04-17
original paper
In the military domain, numerous bodies have argued that autonomous and AI-enabled weapon systems ought not incorporate unexplainable AI
2024-04-16
commentary
As countries prepare to deploy lethal autonomous weapon systems at scale, artificial intelligence is being integrated into drone operations and to support human decision-making in conflicts around...
2024-04-09
commentary
The U.S. Air Force is advancing its Next Generation Air Dominance (NGAD) program by integrating autonomous capabilities into older F-16 Fighting Falcons as part of the VENOM-AFT program.
2024-04-03
commentary
Two research institutes will collaborate on AI safety tests, among other things.
2024-04-03
commentary
Senators are seeking more information about AI safety within the AUKUS program.
2024-03-27
article
The UN General Assembly announced the unanimous adoption of a 13-point resolution aimed at regulating and ensuring the security of the field of artificial intelligence (AI). The resolution was...
2024-03-22
analysis
Last week, states parties met for the first session of the Group of Governmental Experts ...
2024-03-14
research article
This article provides a methodology for the interpretation of AI ethics principles to specify ethical criteria for the development and deployment of AI systems in high-risk domains.
2024-03-13
commentary
The Defense Department’s key research arm will experiment with ethical chatbots and new robot super pilots.
2024-03-13
report
There is an urgent need for a more thorough and comprehensive examination of enabling technologies as well as their potential impacts on international security.
2024-03-06
statement
Read the statement in full from Stop Killer Robots to the first 2024 session of the Group of Governmental Experts of the High Contracting Parties related to emerging technologies in the area of...
2024-03-04
commentary
WARFARE EVOLUTION BLOG. In our last escapade, we investigated the worldwide market for warships and submarines. Out of respect for the literary principle of subject matter continuity, we are forced...
2024-02-29
main paper
There are a wide variety of potential applications of artificial intelligence (AI) in Defence settings, ranging from the use of autonomous ... assurance relating to the development and use of AI in military setti...
2024-02-25
open forum
The rapid diffusion of artificial intelligence (AI) technologies in the defence domain raises challenges for the ethical governance...what to the how of AI ethics sees a nascent body of literature published by defence
2024-02-19
commentary
The proliferation of autonomous weapons systems (AWS)—often (mis)labeled ‘killer robots’—is a modern concern.
2024-02-16
commentary
New technologies for anti-missile defense are challenging the assumed priority of offense over defense.
2024-02-09
commentary
As tensions between Washington and Beijing continue to ramp up, the arms race to develop the world’s first next-generation fighters is on. From submarines and fighters to bombers and main battle tanks, the U.S. and China are prioritizing the development of advanced and cutting-edge technologies. Perhaps the most anticipated sixth-generation designs are the upcoming Next-Generation Air Dominance (NGAD) program, the B-21 stealth bomber and the F/A-XX airframe.
2024-01-30
commentary
The impact of generative AI on Asian deterrence is not well understood and may create greater risks of conflict.
2024-01-30
commentary
AI-based autonomous weapon systems (AWS) have the potential of weapons of mass destruction. While the military and the arms industry deploying these systems downplay their effects, it is also argued that AWS can be built on the basis of a ‘responsible’ or ‘trustworthy’ artificial intelligence (AI).
2024-01-11
news
Countries that approved the first-ever United Nations General Assembly resolution on “killer robots” should promote negotiations on a new international treaty to ban and regulate these weapons,...
2024-01-03
commentary
While sixth-generation fighters like the NGAD, Tempest, and F/A-XX are all the rage, a seventh-generation fighter is already being considered in some defense circles.
2024-01-03
commentary
Could the new B-21 Raider stealth bomber be the last U.S. Air Force bomber that has pilots at the control of this expensive U.S. Air Force warplane?
2023-12-19
commentary
The military rationale of a pre-emptive strike is predicated upon the calculation and anticipation of threat. The underlying principle of anticipation, or prediction, is foundational to the operative logic of ...
2023-12-05
perspective
With the ongoing AI arms race in the Russia-Ukraine War, it is expected that AI-powered lethal weapon systems will become commonplace in warfare.
2023-12-02
report
The Primer explores existing data challenges, both technical and organizational, introduces key technical characteristics and methods of generating synthetic data, and analyzes implications of using synthetic data in the context of international security, including for autonomous systems and in the cyber realm.
2023-11-30
commentary
US military officers can approve the use of AI-enhanced military technologies that they don't trust. And that's a serious problem.
2023-11-29
main paper
Organizations that develop and deploy artificial intelligence (AI) systems need to manage the associated risks—for economic, legal, and ethical reasons. However, it is not always clear who is responsible for A...
2023-11-27
article
This short explainer paper discusses autonomous weapons in the context of digital dehumanisation.
2023-11-14
briefing paper
This short briefing paper addresses the need for a prohibition on autonomous weapons systems designed or used to target humans, and the digital dehumanisation inherent in such systems.
2023-11-14
report
This paper presents an examination of convergences in state positions on human control in the context of autonomy in weapons systems.
2023-11-14
news
The Pentagon just released a new set of “ethical artificial intelligence tools” to help users apply the technology more responsibly.
2023-11-14
analysis
IHL calls for a ‘human-centered’ approach to the development and use of AI in armed conflict – to try to preserve humanity in what is already an inhumane activity.
2023-10-24
interview
In this interview, AI godfather Yoshua Bengio discusses attention-grabbing headlines about AI, taboos among AI researchers, and why top AI researchers may disagree about the risks AI may pose to humanity...
2023-10-17
analysis
Australia, Canada, Japan, the United Kingdom, and the United States emphasize principles of accountability, explainability, fairness, privacy, security, and transparency in their high-level AI policy documents. But while the words are the same, these countries define each of these principles in slightly different ways that could have large impacts on interoperability and the formulation of international norms.
2023-10-16
news
United Nations Secretary-General António Guterres and President of the International Committee of the Red Cross Mirjana Spoljaric have a new message for governments: “act now to preserve human...
2023-10-06
commentary
The flights with the Osprey MK III were the first major experimental effort for the new ADAx proving ground, but others are soon to follow.
2023-10-04
q&a
A resolution on Autonomous Weapons Systems (AWS) will be tabled at the First Committee of the 78th Session of the United Nations General Assembly in October 2023. This document provides brief answers to some frequently asked questions on the issue.
2023-10-02
policy brief
The First Committee session of the 78th Session of the United Nations General Assembly (UNGA) in October 2023 provides a crucial opportunity for progress.
2023-10-02
commentary
The National Security Agency is standing up an artificial intelligence security center, with an end goal of promoting the secure development, integration, and adoption of AI capabilities within national security systems, and our defense industrial base.
2023-09-28
commentary
The future of warfare will certainly be data-driven and AI-enabled, and, in many ways, it already is.
2023-09-23
commentary
There's a three-dimensional solution to manage the evolving dual-use concern of AI: advance states-centric monitoring and regulation, promote intellectual exchange between the non-proliferation...
2023-09-11
commentary
The author of "Four Battlegrounds: Power in the Age of Artificial Intelligence" surveys in matter-of-fact detail the struggle for world leadership in AI—especially as it relates to US-China power...
2023-09-11
commentary
LLM-based chatbots and bio-design tools influence the biosecurity landscape in different ways and require independent governance.
2023-09-01
commentary
Despite the dramatic pace of discoveries in the life sciences, the regulatory systems established for other dual-use risk domains, such as chemical and nuclear research, remain far more mature than...
2023-08-31
conference paper
This contribution provides an overview of nuclear risks emerging from the militarization of AI technologies and systems. These include AI enhancements of cyber threats to nuclear command, control and communica...
2023-08-25
commentary
Artificial intelligence is capable of amplifying the risks of other technologies, and demands a reevaluation of the standard policy approach.
2023-08-18
commentary
AI is making its way into decision-making in battle. Who’s to blame when something goes wrong?
2023-08-16
commentary
As AI becomes part of military decision-making, it’s important to be wary of the pristine ideas of how technology can transform conflict.
2023-08-14
report
This report explores how the United States can manage strategic risks—defined as increased risks of armed conflict or the threat of nuclear war—that could be created or exacerbated by military AI in its relationship with China...
2023-07-25
commentary
China is directing more of its AI-related research into defense applications than the United States, whose tech sector is more focused on consumer AI services such as ChatGPT.
2023-07-19
commentary
The reasons not to integrate AI into comprehensive nuclear command, control, and communications systems are manifold. They involve increased speed of warfare, accidental escalation, misperception...
2023-07-17
analysis
The UN Security Council will discuss the implications of artificial intelligence for the maintenance of international peace and security for the first time in July 2023. The impact on arms control is a crucial element. So far, though, discussions have been limited and disjointed.
2023-07-13
report
The paper provides a cautionary tale regarding the mainstreaming of AI-driven technological solutions into security and defence across the EU, noting that this legitimizes a specific geopolitical and militaristic imaginary of innovation that might not be compatible with the EU’s promotion of responsible, trustworthy and human-centric visions of such systems...
2023-07-01
commentary
Transparency in scientific research is undeniably valuable. But it would be a mistake for AI research to be completely transparent. To minimize harm, dual use technologies—especially those like AI...
2023-06-08
news
Autonomous weapons systems could help automate Israel’s uses of force. These uses of force are frequently unlawful and help entrench Israel’s apartheid against Palestinians. Without new international law to avert the dangers this technology poses, the autonomous weapon systems Israel is developing today could contribute to their proliferation worldwide and harm the most vulnerable...
2023-06-06
news
Although political and procedural hurdles have impeded progress on addressing autonomous weapons systems, proponents of a new treaty should look to the success of the Convention on Cluster Munitions, and the negotiations that led to it, for inspiration...
2023-05-30
opinion
Defeating autonomous weapons requires a constant, preventative effort, as technology development can sometimes outpace politics. Governments, civil society organizations, researchers, and industry players must work together to navigate this complex topic properly and ensure the right and ethical implementation of emerging technology …
2023-05-23
event
On May 31, 2023, the III Scientific and Practical Conference on the Development of Robotics in the Field of Life Safety, known as «RoboEmercom», will take place. The conference will discuss experience in the use of Robotics and Technical Systems (RTS) in special military operations, along with associated problems and potential solutions...
2023-05-19
commentary
If the United States decides to send cluster munitions to Ukraine, it should consider investing in autonomous capabilities for demining.
2023-05-17
commentary
The difference between AI that’s a boon to society or a curse lies in truthfulness, a uniquely human concept.
2023-05-16
paper
Familiarity plays little role in support for AI-enabled military applications, for which opposition has slightly increased over time.
2023-05-12
commentary
Commercial competition, politics, and public opinion are driving AI development in the United States—and unnecessarily escalating the AI arms race with China.
2023-05-12
report
This resource paper offers a comparative analysis of the content of the different proposals related to emerging technologies in the area of lethal autonomous weapon systems (LAWS) submitted by...
2023-05-10
commentary
Among the major powers that have recognized the significance of AI in shaping the future of global supremacy dynamics are the United States, China, and Russia. Aiming to gain an edge over one another, these countries have made significant investments in AI research and development.
2023-05-06
commentary
What should be done to manage AI and other technological advances that pose catastrophic risks? What the world should have done with nuclear technology: Expand scientific collaboration and avoid...
2023-04-12
article
In this blog post, Palantir Global Director of Privacy & Civil Liberties Engineering Courtney Bowman and Privacy & Civil Liberties Government and Military Ethics Lead Peter Austin explore the ethical role of technology providers in the defense domain. Future posts will explore Palantir’s work supporting defense workflows in the most consequential settings.
2023-04-07
commentary
The creators of ChatGPT decided to test whether their AI systems can teach someone to build and use nuclear and biological weapons. Was it enough?
2023-03-30
commentary
Pop culture influences how people think about artificial intelligence, and that spills over to how military planners think about war—obscuring the more mundane ways AI is likely to be used.
2023-03-28
commentary
As cutting-edge AI-powered chatbots like ChatGPT come online, observers have begun to worry about the implications of content producing AI in areas like employment and disinformation...
2023-03-24
speech
Thank you to Costa Rica and FUNPADEM for organizing this important conference. I will address some of the social and humanitarian consequences of autonomous weapons systems. By autonomous weapons...
2023-03-08
news
The push to prohibit and regulate autonomous weapons systems made significant progress last month when nearly every country in Latin America and the Caribbean endorsed a new communiqué calling for the “urgent negotiation” of a binding international treaty.
2023-03-06
news
Last month, members of the Stop Killer Robots campaign met in Costa Rica with 68 campaigners from 29 countries for their first in-person global conference since the Covid-19 pandemic. A central...
2023-03-03
analysis
We argue that looking at how responsibility for IHL violations is currently ascribed under international law is critical not only to ensuring accountability but also to identifying clearer limits and requirements for the development and use of AWS.
2023-03-02
analysis
Militaries seek to harness artificial intelligence for decision advantage. Yet AI systems introduce a new source of uncertainty in the likelihood of technical failures. Such failures could interact with strategic and human factors in ways that lead to miscalculation and escalation in a crisis or conflict. Harnessing AI effectively requires managing these risk trade-offs by reducing the likelihood, and containing the consequences of, AI failures.
2023-03-01
report
It is undisputed that the development and use of autonomous weapon systems (AWS) must comply with international humanitarian law (IHL). However, how IHL rules should be interpreted and applied in the context of AWS remains, in some respects, unclear or disputed. With a particular focus on human–machine interaction, this report aims to facilitate a deeper understanding of this issue. The report provides a baseline for policymakers to advance discussions around what types and uses of AWS are (or should be) prohibited or regulated under existing IHL.
2023-03-01
commentary
The logic supporting the development and deployment of autonomous weapon systems (AWS) is a continuation of the escalatory deterrence strategy that characterized the Cold War, and fails to grasp how such systems will change the conduct of warfare...
2023-02-23
policy
Government representatives meeting at the REAIM summit have agreed a joint call to action on the responsible development, deployment and use of artificial intelligence (AI) in the military domain.
2023-02-16
commentary
State Department and Pentagon officials hope to illuminate a contrast between the United States and China on AI.
2023-02-16
commentary
Increasing automation within nuclear weapon command systems means putting faith, and lives, in the hands of algorithms we may never fully understand.
2023-02-16
original paper
The ongoing debate on the ethics of using artificial intelligence (AI) in military contexts has been negatively impacted by the predominant focus on the use of lethal autonomous weapon systems (LAWS) in war. H...
2023-02-14
news
A new United States Department of Defense directive concerning development of autonomous weapons systems is an inadequate response to the threats posed by removing human control from the use of...
2023-02-14
news
A new directive on autonomy in weapons systems issued on January 25, 2023 shows the United States Department of Defense (DoD) is serious about ensuring it has policies and processes in place to...
2023-02-14
original paper
Artificial Intelligence (AI) offers numerous opportunities to improve military Intelligence, Surveillance, and Reconnaissance operations, and modern militaries recognize the strategic value of reducing civili...
2023-02-13
news
It is a time for innovation, especially in addressing the risks and dangers posed by autonomous weapons systems...
2023-02-02
speech
Remarks by NATO Secretary General Jens Stoltenberg at the CHEY Institute during his visit to the Republic of Korea
2023-01-30
policy
On January 26, 2023, NIST released the AI Risk Management Framework (AI RMF 1.0) along with a companion NIST AI RMF Playbook, AI RMF Explainer Video, an AI RMF Roadmap, AI RMF Crosswalk, and various Perspectives.
2023-01-26
commentary
In November 2021 NATO Watch published a critique of NATO's approach to the use of artificial intelligence (AI) for military purposes. This article provides a brief update to the critique following...
2023-01-06
research article
AI has numerous applications in various fields, including the military domain. The increase in the degree of ... is the assignment of moral responsibility for some AI-based outcomes. Several authors claim tha...
2023-01-05
essay
This is the introduction to the report “Meeting China’s Emerging Capabilities: Countering Advances in Cyber, Space, and Autonomous Systems.”
2022-12-15
essay
This essay examines India’s key concerns about China’s growing technological prowess in the areas of cyberspace, outer space, and artificial intelligence and automation; the Indian response; and...
2022-12-15
essay
This essay explores the implications of the use of established and emerging technologies by the People’s Republic of China (PRC), the Philippines’ limited response in countering the PRC’s...
2022-12-15
report
In this NBR report, experts from Australia, India, Japan, the Philippines, Taiwan, and Vietnam discuss China’s emerging cyber, space, and autonomous weapons capabilities. They examine regional...
2022-12-15
essay
This essay examines Japan’s perceptions of and responses to major threats posed by China’s emerging capabilities in space, cyber, and autonomous weapons systems and considers policy options for...
2022-12-15
interview
World Geostrategic Insights interview with Fatima Roumate on the use of artificial intelligence in the Ukrainian conflict. Fatima Roumate Ph.D. is a Full Professor of International Law at the Faculty of Law, Economic and Social Sciences Agdal, Mohammed V University, Rabat, Morocco. Founding President of the International Institute of Scientific Research, Marrakech since 2010. She is a Member of …
2022-12-09
essay
Without a doubt, the most complex global governance challenges surrounding AI today involve its application to defence and security.
2022-11-28
essay
If ubiquitous sensors result in a tsunami of real-time data, AI might provide the analytic potency needed to anticipate an adversary’s next step, down to the very minute.
2022-11-28
essay
Each attack during 2022 has acted as a pertinent reminder of what happens when state-manufactured advanced weapons technologies fall — or are perhaps placed — into the hands of hostile non-state organizations.
2022-11-28
essay
Assessing cyberthreats and gaps in legal protection in the biosecurity sector would therefore gain from being considered by technical and legal experts in the field.
2022-11-28
essay
The incentives to network and link military systems have resulted in civilian objects...increasingly becoming dual-use and thus possibly targetable infrastructure.
2022-11-28
essay
Who is to be held accountable for civilians who are hurt or killed and civilian infrastructure that is damaged or destroyed?
2022-11-28
video
Current applications of automated systems to many aspects of war and conflict have opened a new Pandora’s box. Systems operating autonomously with little human intervention raise ethical and legal concerns. Ethicists, international legal experts and international affairs specialists have been sounding the alarm on the potential misuse of this technology and the lack of any regulations governing its use.
2022-11-28
video
When deciding how much power an autonomous system has, governments need to consider the impacts of international humanitarian law and ethics, because allowing AI complete, unregulated control could be a runaway nightmare.
2022-11-28
video
When states consider deploying modern autonomous systems powered by artificial intelligence (AI), they must consider the legal and ethical concerns in addition to the technical specifications of the tool.
2022-11-28
opinion
Humans have to decide what, when and where to engage, in particular when an application of military force could endanger human life.
2022-11-28
report
In a report for the Stockholm International Peace Research Institute (SIPRI), Marta Bo with Laura Bruun and Vincent Boulanin tackle how humans can be held responsible for violations of...
2022-11-24
article
Advanced autonomy is key to the Air Force’s future drone plans, but humans will still make key decisions, such as when to fire weapons.
2022-11-19
commentary
The offices are meant to hone NATO's technological edge by working with private sector companies and academics. Their mandate is to engage with both high-tech startups and established companies to...
2022-11-19
report
The increasing autonomy of nuclear command and control systems stemming from their integration with artificial intelligence (AI) stands to have a strategic level of impact that could either increase nuclear stability or escalate the risk of nuclear use.
2022-11-18
research article
This article describes the traditional weapons review process and explains why this process may need to be modified to adequately evaluate autonomous weapon systems (AWS).
2022-11-18
policy
Drawing on its own policies and practices, and with reference to useful international experience, China published a position paper covering regulation, research and development, utilization, and international cooperation.
2022-11-17
speech
The United Nations Office for Disarmament Affairs (UNODA) and the European Commission co-hosted a workshop on "Ethics and Emerging Technologies in Weapons Systems" in April 2022. Prof. Yi Zeng, director of the Center for Long-term AI, was invited as a speaker. The following is a recording of his speech.
2022-11-11
policy
The military applications of Artificial Intelligence (AI) have already introduced great risks and challenges to the world. We should therefore be vigilant about military AI lowering the threshold of war, and actively work to prevent avoidable disasters. The "Defense Artificial Intelligence and Arms Control Network" published principles with which the design, research, development, use, and deployment of military AI should comply throughout the whole life cycle.
2022-11-07
interview
World Geostrategic Insights interview with Fatima Roumate on the main opportunities, challenges, and concerns related to the application of Artificial Intelligence (AI) in international relations and global governance, as well as the malicious uses of AI and the impact of AI in the Russia-Ukraine war. Fatima Roumate Ph.D. is a Full Professor of International Law …
2022-10-31
article
The use of autonomous weapons is becoming one of the most significant threats to humanity in today’s society. One of the major issues confronting the use of autonomous weapons is that of command...
2022-10-28
press release
The Center for Strategic and International Studies (CSIS) is pleased to announce the formation of the CSIS AI Council.
2022-10-26
article
"A Manifesto on Enforcing Law in the Age of 'Artificial Intelligence'" was recently presented at a gathering in Rome, with a focus on the design of ...
2022-10-25
commentary
Budget documents reveal plans for the Super Swarm project, a way to overwhelm defenses with vast numbers of drones attacking simultaneously.
2022-10-24
opinion
“The truth is that in this digital world, we all live in prison and under surveillance 24/7. Yes, in the prison of smartphones, SIM cards, social applications and, definitely, artificial intelligence (AI).” ~ Dr. Rana Danish Nisar ~ Welcome to the digital cold war: the truth is bitter, and the world ready for this …
2022-10-22
commentary
DILEMA Lecture on the topic of ‘The Ethics of Human–Robot Interaction and Traditional Moral Theories’.
2022-10-20
statement
Read a copy of the statement delivered by Stop Killer Robots at the 77th UN General Assembly (UNGA) First Committee on Disarmament and International Security.
2022-10-14
opinion
Summary of NATO's Autonomy Implementation Plan
2022-10-13
report
Advances in artificial intelligence (AI) pose immense opportunity for militaries around the world. With this rising potential for AI-enabled military systems, some activists a...
2022-10-12
commentary
In a new podcast episode by On Air, Asser Institute researcher Taylor Woodcock discusses today’s ‘overshadowing focus on autonomous weapon systems (aws) in warfare’, and the consequential lack of...
2022-10-11
commentary
About damn time. That was the response from AI policy and ethics wonks to news last week that the Office of Science and Technology Policy, the White House’s science and technology advisory agency, had unveiled an AI Bill of Rights.
2022-10-10
white paper
This paper highlights ten weapons systems with features that might be informative to considerations around autonomy in weapons systems. It seeks to showcase the diversity of types of weapon systems...
2022-10-06
white paper
This paper provides an initial sketch of responses to AI and automated decision-making in wider society while contextualising these responses in relation to autonomy in weapons systems.
2022-10-06
policy
To advance President Biden’s vision, the White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values. Responding to the experiences of the American public, and informed by insights from researchers, technologists, advocates, journalists, and policymakers, this framework is accompanied by From Principles to Practice—a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process. These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.
2022-10-01
report
It is undisputed that humans must retain responsibility for the development and use of autonomous weapon systems (AWS) because machines cannot be held accountable for violations of international humanitarian law (IHL). However, the critical question of how, in practice, humans would be held responsible for IHL violations involving AWS has not featured strongly in the policy debate on AWS. This report aims to offer a comprehensive analysis of that very question.
2022-10-01
report
In this article we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). To do so, we suggest that the responsibility gap can be closed if human...
2022-09-26
commentary
In a new paper, Asser Institute researcher Marta Bo examines when programmers may be held criminally responsible for harms caused by self-driving cars and autonomous weapons.
2022-09-23
report
The capabilities of Artificial Intelligence (AI) evolve rapidly and affect almost all sectors of society. AI has been increasingly integrated into criminal and harmful activities, expanding...
2022-09-16
commentary
In a new paper, Asser Institute senior researcher Bérénice Boutin explores the conditions and modalities under which a state can incur responsibility in relation to violations of international law involving military applications of artificial intelligence (AI) technologies.
2022-09-13
commentary
Berenice Boutin has recently published a new article entitled ‘State Responsibility in Relation to Military Applications of Artificial Intelligence’.
2022-09-13
paper
This article explores the conditions and modalities under which a state can incur responsibility in relation to violations of international law involving military applications of artificial...
2022-09-12
research article
“Autonomous weapon systems” (AWS) have been subject to intense discussions for years. Numerous political, academic and legal actors are debating their consequences, with many calling for strict regulation or e...
2022-09-02
report
If the United States is to maintain a constructive role in preventing the outbreak of a cross-Strait war, it will need to implement a strategy to deter Chinese aggression against Taiwan that is consistent with U.S. interests and capabilities, and that provides clarity around the existentially important matter of preventing nuclear escalation, in the event a conflict does occur.
2022-09-01
original paper
Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty ...
2022-08-24
original paper
Defense organizations are increasingly developing ethical principles to ... the design, development, and use of responsible AI, most notably for defense, security, and intelligence uses. While these ... lead to m...
2022-08-10
statement
ICRC statement following the final 2022 session of the Group of Governmental Experts on lethal autonomous weapons systems of the UN Convention on Certain Conventional Weapons (CCW), held from 25 to 29 July.
2022-08-08
commentary
Artificial intelligence (AI) increasingly executes tasks that previously only humans ... medical operation. However, as the very best AI systems tend to be the least controllable ... longer be morally responsible...
2022-07-29
article
Autonomous weapons are an immediate cause of humanitarian concern. Senior scientific and policy adviser at the ICRC, Neil Davison, explains.
2022-07-26
commentary
Advanced algorithms equipped with databases of terrain maps, weather, and navigation information can help an aircraft correct its flight path without human intervention.
2022-07-26
paper
Recent years have seen a spotlight aimed at new technologies and how they might be used by the military. Scholars and policymakers have given much attention to autonomous weapons systems and...
2022-07-22
review
Artificial intelligence and its societal and ethical implications are complicated and conflictingly interpreted. Surveillance is one of the most ethically challenging concepts in AI. Within the domain of artifici...
2022-07-19
commentary
The invasion of Ukraine has prompted militaries to update their arsenals—and Silicon Valley stands to capitalize.
2022-07-07
briefing paper
This briefing paper sets out a positive vision to encourage governments to commence negotiations on a new treaty on autonomous weapons systems.
2022-06-29
research article
The current reanimation of artificial intelligence includes a resurgence of investment in automating military intelligence on the part of the US Department of Defense. A series of programs set forth a technopolitical imaginary of fully integrated, ...
2022-06-23
policy
This strategy sets out how we will adopt and exploit AI at pace and scale, transforming Defence into an ‘AI ready’ organisation and delivering cutting-edge capability...
2022-06-15
commentary
If autonomous weapons are the future of warfare, then the United States has no choice but to grapple with their complexities.
2022-06-13
commentary
The United States must refine its investments to incorporate a deliberate and sustained campaign of mission engineering to accelerate and improve the delivery of trustworthy AI.
2022-06-12
report
In this chapter we provide an overview of Estonia’s current AI landscape, detailing a number of public sector use-cases and developments across both industry and the military to examine AI in a...
2022-06-08
commentary
The updated policy will hopefully reflect developments in the field and incorporate recent DoD initiatives, paving the way for what future governance of emerging capabilities should look like.
2022-06-08
commentary
In November 2012, the Department of Defense (DOD) released its policy on autonomy in weapons systems: DOD Directive 3000.09 (DODD 3000.09). Despite being nearly 10 years old, the policy remains frequently misunderstood, including by leaders in the U.S. military. For example, in February 2021, Colonel Marc E. Pelini, who at the time was the division chief for capabilities and requirements within the DOD’s Joint Counter-Unmanned Aircraft Systems Office, said, “Right now we don't have the authority to have a human out of the loop. Based on the existing Department of Defense policy, you have to have a human within the decision cycle at some point to authorize the engagement."
2022-06-06
policy
Dstl exploits the latest in robotics and AI to create effective and trustworthy uncrewed platforms and autonomous systems for the UK’s security and defence.
2022-06-01
statement
Below is the statement of the High Representative for Disarmament Affairs to the 23rd session of the Human Rights Council, on the topic of lethal autonomous robotics. It was delivered on behalf of the High Representative by Mr. Jarmo Sareva, Director of the Geneva Branch of UNODA.
2022-05-29
commentary
Creating autonomous teams in contested environments will be a challenge of technology—and policy.
2022-05-25
report
Recent years have seen growing attention for the use of AI technologies in warfare, which has been rapidly advancing. This chapter explores in what ways such military AI technologies might...
2022-05-24
article
This article proposes an identity-based analysis of the Russian position in the global debate on autonomous weapons systems (AWS). Based on an interpretation of Russian written and verbal...
2022-05-19
original research
A human-centric approach to the design and deployment of AI systems aims to support and augment human ... But what could this look like in a military context? We explored a human-centric approach...
2022-05-18
commentary
As part of the Asser Institute research paper series, Asser researchers Berenice Boutin, Taylor Woodcock and Tomasz Zurek from the research strand ‘Regulation in the public interest: Disruptive...
2022-05-18
report
The concept of ‘meaningful human control’ (MHC) has progressively emerged as a key frame of reference to conceptualize the difficulties posed by military applications of artificial intelligence...
2022-05-13
commentary
Berenice Boutin and Taylor Woodcock have recently published a new Chapter entitled ‘Aspects of Realizing (Meaningful) Human Control: A Legal Perspective’.
2022-05-13
research article
This article discusses an important limitation on the degree of autonomy that may permissibly be afforded to autonomous weapon systems (AWS) in the context of an armed conflict: the extent to which international humanitarian law (IHL) requires that human beings be able to intervene directly in the operation of weapon systems in the course of an attack.
2022-05-11
report
In this article we address the possibility of using Lethal Autonomous Weapon Systems (LAWS) in compliance with the jus in bello principle of distinction. This principle requires that parties to an...
2022-05-06
report
In this article we focus on the jus in bello principle of necessity for guiding the use of autonomous weapon systems (AWS). We begin our analysis with an account of the principle of necessity as it...
2022-05-06
report
In this article we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). We begin our analysis with a description of the ‘responsibility gap’ and the...
2022-05-04
paper
This research will, firstly, try to analyze and shed light on the recent entry of autonomous weapons, together with the issues pertaining to the usage of these lethal weapons and the...
2022-05-02
commentary
America needs to enlist its oldest allies and new partners to build a safer and freer world for the AI era.
2022-04-30
article
The need for normative change is rarely self-evident but requires the sustained efforts of actors to create a demand for action. With emerging technologies such as autonomous weapons...
2022-04-29
commentary
It is widely acknowledged that high-level AI principles are difficult to translate into practices via explicit rules and design guidelines. Consequently, many AI research and development groups that claim to a...
2022-04-28
commentary
The race to build autonomous weapons will have as much impact on military affairs in the twenty-first century as aircraft did on land and naval warfare in the twentieth century.
2022-04-15
report
This paper aims to provide a brief descriptive overview of potential scenarios enabled by AI for the development of authoritarian states by reviewing and discussing recent literature on the impact...
2022-04-15
report
The advent of autonomous weapons brings intriguing opportunities and significant ethical dilemmas. This article examines how increasing weapon autonomy affects approval of military strikes...
2022-04-08
report
In this article we focus on the scholarly and policy debate on autonomous weapons systems (AWS) and particularly on the objections to the use of these weapons which rest on jus ad bellum principles...
2022-04-04
commentary
If open-source analysts are right, a loitering munition capable of using AI to pick a target--a killer robot--was used in the Russia-Ukraine conflict. Autonomous weapons using artificial...
2022-03-15
statement
The International Committee of the Red Cross (ICRC) welcomes the continued work of the Group of Governmental Experts (GGE) and urges the High Contracting Parties to the CCW to take their important work forward in line with one of the main purposes of this Convention, namely "the need to continue the codification and progressive development of the rules of international law".
2022-03-11
original research
Artificial Intelligence (AI) seems to be impacting all industry sectors ... a motor for innovation. The diffusion of AI from the civilian sector to the defense sector, and AI’s dual-use potential has drawn attent...
2022-03-08
original paper
Business, management, and business ethics literature pay little attention to the topic of AI robots. The broad spectrum of potential ethical issues pertains to using driverless cars, AI robots in care homes, a...
2022-03-02
commentary
Generally speaking, it is my view that international arms control law has lost too much public support in the past decade, so one of my goals is to make people aware of the relevance of the arms control field. In collaboration with political activists and government experts, I want to contribute to this field's potential to enhance international peace and security.
2022-02-28
commentary
Presentation by Kathleen Lawand, head of the arms unit, ICRC. Seminar on fully autonomous weapon systems, Mission permanente de France, Geneva, Switzerland.
2022-02-22
commentary
On June 24, 2014, the ICRC Vice-President, Ms Christine Beerli, opened a panel discussion on...
2022-02-22
commentary
Geneva (ICRC) – Addressing a meeting of experts at the United Nations in Geneva this week, the International Committee of the Red Cross (ICRC) will urge governments to focus on the issue of human control over the use of force in their deliberations on autonomous weapons.
2022-02-22
commentary
The ICRC spoke at the meeting of experts on lethal autonomous weapons systems held in the framework of the Conventional Weapons Convention in Geneva from 13 to 16 May 2014.
2022-02-22
report
Techno-ethics is the area in the philosophy of technology which deals with emerging robotic and digital AI technologies. In the last decade, a new techno-ethical challenge has emerged: Autonomous...
2022-02-22
report
The total people's defense and security system (Sistem pertahanan dan keamanan rakyat semesta-Sishankamrata) is an implementation of the total defense system in Indonesia. A lethal autonomous...
2022-02-22
report
Lethal Autonomous Weapon Systems (LAWS) are discussed and considered in relation to the principles of International Humanitarian Law (IHL) and International Human Rights Law (IHRL). In line with legal, moral and...
2022-02-22
report
The theme of self-produced weapons intertwines diversified ideas of an ethical, legal, engineering and data science nature. The critical starting point concerns the use of 3D printing for the self-...
2022-02-22
report
After a campaign calling for a ban on Lethal Autonomous Weapons Systems, an expert meeting was held in Geneva in May. The introduction of LAWS raises legal and ethical questions. It should be noted...
2022-02-22
report
The role of AI in cybercrime and in hampering national security: “The development of full AI could spell the end of the human race... It would take off on its own and re-design itself at an ever increasing...
2022-02-22
report
The advent of Lethal Autonomous Weapon Systems (LAWS) is rapidly becoming a matter of scholarly and public interest. This research primarily focuses on the implications of LAWS using Artificial Intelligence (AI)...
2022-02-22
commentary
The ethical and legal implications of the potential use of AI technologies in the military has been on the agenda of the United Nations, governments, and non-governmental organisations for several...
2022-02-22
white paper
The possible consequences have a particular scope: malicious attacks can manipulate AI systems - and thus also the actions of people who use the technology as the basis for certain decisions. Similarly, given a lack of safeguards, AI systems can be used to monitor people, for industrial espionage, or as weapons. Protecting AI systems from misuse by criminals, terrorists, competitors, or employers is therefore a highly relevant task for responsible use of the technology.
2022-02-22
article
The Stanley Center, in partnership with The Origins Project at Arizona State University and the Bulletin of the Atomic Scientists, will co-host a workshop to consider the risks and opportunities...
2022-02-22
article
Advances in artificial intelligence (AI), deep-learning, and robotics are enabling new military capabilities that will have a disruptive impact on military strategies. The effects of these...
2022-02-22
commentary
The conference will address the multiple challenges raised by the increasing use of artificial intelligence (AI) in the public sector. As AI is progressively deployed in various domains such as...
2022-02-21
article
The Harvard Strike in the spring of 1969 emerged out of what we students perceived as the university’s complicity in the Vietnam War. After Harvard ...
2022-02-18
essay
A few years back, the rapid progress of international efforts to ban lethal autonomous weapon systems (LAWS) left arms controllers amazed: only five years after the founding of the International Committee for ...
2022-02-16
analysis
This year marks the 160th anniversary of the publication of Henri Dunant’s classic text, ‘A Memory of Solferino’, in 1862. Dunant’s powerful book ...
2022-02-10
report
Amidst fears over artificial intelligence ‘arms races’, much of the international debate on governing military uses of AI is still focused on preventing the use of lethal autonomous weapons systems...
2022-02-10
commentary
If an autonomous nuclear weapon concluded with 99 percent confidence a nuclear war is about to begin, should it fire?
2022-02-01
original research
Who is responsible for the events and consequences caused by using artificially intelligent tools, and is there a gap between what human agents can be responsible for and what is being done using artificial in...
2022-01-25
opinion
Keynote speech by NATO Deputy Secretary General Mircea Geoană at the Cybersec Global 2022 event
2022-01-25
research article
The emergence of autonomous weapons remains a hot topic in international humanitarian law. Much has been said by States, international organisations, non-governmental organisations and academics on the matter in recent years. However, no agreement has been reached on how best to regulate this nascent technology.
2022-01-21
analysis
Debates concerning the regulation of choices made by States in conducting hostilities are often limited ...
2022-01-20
analysis
Current practices related to the use of weaponised AI are already impacting European stability and security. The OSCE is a promising platform to build on the stalled discussions at the CCW, because it has a history of acting as a bridge between various perspectives of European security.
2022-01-18
commentary
At the Asser Institute, we start the new year with a brand-new research agenda (2022-2026), entitled ‘Rethinking public interests in international and European Law: Pairing critical reflection with perspectives for action’. It is organised around questions pertaining to the public interest in international and European public and private law.
2022-01-17
open forum
This article argues that an artificial superintelligence (ASI) emerging in a world where war is still normalised constitutes a catastrophic existential risk, either because the ASI might be employed by a nation-s...
2022-01-11
commentary
The government’s three approaches will profoundly shape how algorithms are regulated within China and around the world.
2022-01-04
report
New and emerging technologies impact the ways in which military operations are conducted. Notable quantum leaps are being achieved in three fields: autonomous weapon systems, military use of...
2021-12-30
commentary
The Bulletin produced a lot of great coverage of biosecurity, lethal autonomous weapons, and more. Take a look at some of our best disruptive technology stories of the year.
2021-12-29
commentary
AI in particular is seen as a “game-changing” critical strategic technology.
2021-12-28
statement
ICRC Head of the Arms and Conduct of Hostilities Unit Laurent Gisel on humanitarian concerns raised by the use of certain conventional weapons at the 6th Review Conference of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons
2021-12-24
research
The EU’s pursuit of a single European defense market necessitates stronger democratic oversight. Members of the European Parliament and national legislative bodies should play a more proactive role...
2021-12-20
commentary
With Boeing's ATS, the operating air force would increase the risk to an enemy entering airspace within the ATS's radius. Either the enemy's fighter force would be burdened with more escort work, or vulnerable aircraft might simply have to be kept out of the area.
2021-12-19
report
The emerging international regulatory framework for lethal autonomous weapon systems (LAWS) relies on the continuing applicability of international law and the maintenance of human control and...
2021-12-15
policy
The rapid development and wide applications of AI technology has profoundly changed the way people work and live, bringing great opportunities as well as unforeseeable security challenges to the world. One particular concern is the long-term impacts and potential risks of military applications of AI technology in such aspects as strategic security, rules on governance, and ethics.
2021-12-14
statement
Responsible choices about the future of warfare are needed, including clear and legally binding boundaries to prohibit autonomous weapons systems that are unpredictable or designed to target humans, and strict regulation of the design and use of all others.
2021-12-13
research article
Autonomous drones raise important judicial and ethical issues about responsibility for unintentional harm which will be discussed in this paper.
2021-12-08
opinion
Remarks by NATO Secretary General Jens Stoltenberg in a panel discussion at the Friedrich-Ebert-Stiftung Symposium in Berlin
2021-12-08
perspective
The publication of the UK’s National Artificial Intelligence (AI) Strategy represents a step-change in the...signalling’ document. Indeed, we read the National AI Strategy as a vision for innovation and... We pro...
2021-12-06
report
The 2020 annual report provides an overview of activities carried out by the Campaign to Stop Killer Robots from April 2020 to March 2021.
2021-12-01
commentary
The F-16 may soon operate within a complex digital ecosystem.
2021-11-19
report
This chapter outlines different implications of artificial intelligence for national security. It argues that AI overlaps with many challenges to national security arising from cyberspace, but...
2021-11-18
commentary
The controversy over Project Maven shows the department has a serious trust problem. This is an attempt to fix that.
2021-11-16
original research
Over the past few years, there has been a proliferation of artificial intelligence (AI) strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential har...
2021-11-12
analysis
Intergovernmental discussions on the regulation of emerging technologies in the area of (lethal) autonomous weapon ...
2021-11-11
opinion
Keynote speech by NATO Deputy Secretary General Mircea Geoană at the GoTech World 2021 Conference
2021-11-11
briefing paper
This pamphlet provides guidance for policy makers around the world in developing a new international treaty to overcome the dangers posed by autonomy in weapon systems.
2021-11-10
article
The Sixth Review Conference of the Convention on Certain Conventional Weapons (CCW), in December 2021 in Geneva, is a key moment for High Contracting Parties to take stock of, and build on, the important role the CCW has played in minimizing suffering in armed conflict.
2021-11-08
commentary
The October 2021 meeting of NATO's Defence Ministers in Brussels (see NATO Watch Briefing no.87) saw Ministers agreeing to adopt NATO’s new strategy for Artificial Intelligence (AI). The strategy...
2021-11-08
research article
This article tests the proposition that new weapons technology requires Christian ethics to dispense with the just war tradition (JWT) and argues for its development rather than dissolution. Those working in the JWT should be under no illusions, however, ...
2021-11-01
commentary
An analysis of the NATO Defence Ministers Meeting, Brussels, 21-22 October 2021
2021-11-01
commentary
The Marker is expected to become the foundation for testing the interaction between ground robots, unmanned aerial vehicles and special operations forces.
2021-10-26
article
At their October 2021 meeting, Allied Defence Ministers formally adopted an Artificial Intelligence Strategy for NATO. Current and former NATO staff with direct involvement in the development and...
2021-10-25
opinion
Press conference by NATO Secretary General Jens Stoltenberg ahead of the meetings of NATO Defence Ministers on 21 and 22 October at NATO Headquarters
2021-10-20
commentary
The Asser Institute invites abstracts on the topic of ‘Law and ethics of artificial intelligence in the public sector: From principles to practice and policy’, for an interdisciplinary conference...
2021-10-18
commentary
The conference seeks to address the multiple challenges raised by the increasing use of artificial intelligence (AI) in the public sector. As AI is progressively deployed in various domains such as...
2021-10-18
research article
Defence agencies across the globe identify artificial intelligence (AI) as a key technology to maintain an ... a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Un...
2021-10-13
report
In this chapter, we explore a role for the North Atlantic Treaty Organization (NATO) in the emerging military artificial intelligence (AI) governance architecture. As global powers compete for...
2021-10-11
statement
Civil Society Statement to the UN General Assembly First Committee on Disarmament and International Security delivered on 8 October 2021.
2021-10-08
analysis
Alongside the urbanization of armed conflict lies a second trend: the increase in the use ...
2021-10-07
commentary
The development of autonomous weapons and robotics technology is rapidly advancing and poses hard questions about how their use and proliferation should be governed. Existing arms-control regimes may offer a model for how to govern autonomous weapons.
2021-10-05
commentary
Topics of interest within the scope of this lecture series include technical perspectives on military applications of AI, philosophical enquiries into human control and human agency over technologies, analyses of international law in relation to (military) AI, including international humanitarian law and international human rights law, and interdisciplinary contributions related to these topics.
2021-10-04
commentary
Here is how the weapons could be more destabilizing than nukes.
2021-10-01
analysis
For almost eight years now, the international community at the United Nations (UN) has been ...
2021-09-29
opinion
Remarks by NATO Deputy Secretary General Mircea Geoană at the AI & Cyber Conference titled “An Abundance of Potential”
2021-09-27
policy
The National Governance Committee for the New Generation Artificial Intelligence published the “Ethical Norms for the New Generation Artificial Intelligence”. It aims to integrate ethics into the entire lifecycle of AI, to provide ethical guidelines for natural persons, legal persons, and other related organizations engaged in AI-related activities.
2021-09-25
response paper
This paper sets out the Campaign to Stop Killer Robots’ response to the additional questions circulated by the Chair of the Group of Governmental Experts on 12th August 2021.
2021-09-12
report
The race for military AI is in full swing. Militaries around the world are developing and deploying various AI applications including tools for the advancement of surveillance, command and control...
2021-09-08
commentary
The IDF has broader ambitions to eventually integrate the Jaguar into its conventional warfighting capabilities.
2021-08-21
report
This draft Code of Conduct for AI-enabled military systems is the product of a two-year consultation process among Chinese, American,…
2021-08-18
original research
The rapid advances in the development and rollout of artificial intelligence (AI) technologies over the past years have triggered a frenzy of regulatory initiatives at various levels of government and the priv...
2021-08-17
open forum
This paper investigates four questions related to ethical issues associated with the involvement of engineers and scientists in 'military work', including the influence of ethical ... )-centred systems perspectiv...
2021-08-16
analysis
The rules and standards of war are not self-correcting. Contradictions, gaps, and ambiguities often endure until an external pressure makes them salient. This ...
2021-08-12
commentary
Supposedly, Jaguars will assume routine patrol duties for the Gaza division, reducing by one battalion the forces deployed to guard the barrier.
2021-08-11
statement
The ICRC recommends that states adopt new, legally binding rules to regulate autonomous weapon systems to ensure that sufficient human control and judgement is retained in the use of force. It is the ICRC's view that this will require prohibiting certain types of autonomous weapon systems and strictly regulating all others.
2021-08-03
analysis
Allies of the United States have begun to develop their own policy approaches to responsible military use of artificial intelligence. This issue brief looks at key allies with articulated, emerging, and nascent views on how to manage ethical risk in adopting military AI. The report compares their convergences and divergences, offering pathways for the United States, its allies, and multilateral institutions to develop common approaches to responsible AI implementation.
2021-08-01
analysis
The Department of Defense can already begin applying its existing international science and technology agreements, global scientific networks, and role in multilateral institutions to stimulate digital defense cooperation. This issue brief frames this collection of options as a military AI cooperation toolbox, finding that the available tools offer valuable pathways to align policies, advance research, development, and testing, and to connect personnel, albeit in more structured ways in the Euro-Atlantic than in the Indo-Pacific.
2021-08-01
advisory note
This advisory note, circulated to campaigners and diplomats, provides the basis for a prohibition on autonomous weapon systems that target humans.
2021-07-30
report
The evolution of artificial intelligence (AI) over the years has brought the dream of robot-human interaction closer to realization. This idea of robot-human interaction on a whole new level has...
2021-07-28
commentary
With another year behind us, the Asser Institute highlights its achievements of the past year in academic research, collaborations, events and publications through its 2020 Annual Report.
2021-07-28
commentary
A new podcast series 'Lethal Autonomous Weapons: 10 things we want to know' was launched with Asser researcher Marta Bo. The podcast series is a part of the LAWS & War Crimes research project at...
2021-07-27
article
The ability of machines to make truly independent and autonomous decisions is a goal of many, not least of military leaders who wish to take the human out of the loop as much as possible, claiming...
2021-07-21
article
There has been much speculation about the power and dangers of artificial intelligence (AI), but it’s been primarily focused on what AI will do to our jobs in the very near future. Now, there’s...
2021-07-02
article
Should we be scared of artificial intelligence (AI)? Since recent developments have made super-intelligent machines possible much sooner than initially thought, the time is now to determine what dangers artificial intelligence poses.
2021-07-02
article
Now that artificial intelligence (AI) is no longer just a what-if scenario that gets tech gurus frenzied with the possibilities, but is in use and impacting our everyday lives, there is renewed...
2021-07-02
article
In this post I offer a quick trip through time to examine the origins of machine learning as well as the most recent milestones.
2021-07-02
article
Are artificial intelligence (AI) and superintelligent machines the best or worst thing that could ever happen to humankind? This has been a question in existence since the 1940s when computer...
2021-07-02
report
This document interprets the key elements of a treaty through a Southeast Asian perspective, recognising the diversity of national interests in the region.
2021-07-01
report
An interdisciplinary bibliography of resources relating to lethal autonomous weapon systems produced by the LAWS & War Crimes research project team at the Graduate Institute of International and...
2021-06-29
statement
This statement was delivered to CCW participants at the informal discussions on autonomous weapon systems on 29 June 2021.
2021-06-29
commentary
The Jaguar's role in a border patrol and possibly anti-riot capacity will likely continue to receive scrutiny as public security services across the world explore deploying unmanned systems with offensive capabilities.
2021-06-27
analysis
Over the past decade, several States have begun to develop military cyber elements capable of ...
2021-06-24
research article
This paper analyzes the excessive epistemic narrowing of debate about lethal autonomous weapon systems (LAWS), and specifically the concept of meaningful human control, which has emerged as central to regulatory debates in both the scholarly literature and policy fora.
2021-06-23
commentary
AI safety is often overlooked in the private sector, but Deputy Secretary Kathleen Hicks wants the Defense Department to lead a cultural change.
2021-06-22
analysis
As the global geo-political landscape continues to experience increasing fragmentation, cyberspace grows in importance as ...
2021-06-17
analysis
In today’s armed conflicts, cyber operations are increasingly used in support of and alongside kinetic ...
2021-06-15
commentary
In an interview with the University of Amsterdam, project leader Dr Berenice Boutin discussed some of the challenges associated with military AI and how the DILEMA research project seeks to address them.
2021-06-14
policy
Stai, Nora Kristine & Bruno Oliveira Martins (2021) Norway’s Policy on Emerging Military Technologies: Widening the Debate on AI and Lethal Autonomous Weapon Systems, PRIO Policy Brief, 11. Oslo: PRIO.
2021-06-11
article
The rapid developments in Artificial Intelligence (AI) and the intensification in the adoption of AI in domains such as autonomous vehicles, lethal weapon systems, robotics and the like pose...
2021-06-04
commentary
After a recent UN report suggested that a Turkish-made Kargu-2 had autonomously hunted down retreating troops in Libya, numerous media outlets devoted coverage to the issue of so-called lethal...
2021-06-04
report
Compliance with international humanitarian law (IHL) is recognized as a critical benchmark for assessing the acceptability of autonomous weapon systems (AWS). However, in certain key respects, how and to what extent existing IHL rules provide limits on the development and use of AWS remains either subject to debate or underexplored.
2021-06-01
report
The intention of this study is to find an audience amongst policy-making circles, academia, and those interested in the topic of future warfare. It will aim to elucidate and incorporate novel...
2021-05-28
advisory note
This advisory note, circulated to campaigners and diplomats, provides recommendations for the normative and operational framework for autonomous weapon systems.
2021-05-28
research article
During times of domestic turmoil, the use of force abroad becomes an appealing strategy to US presidents in hopes of diverting attention away from internal conditions and toward a foreign policy success. Weaponized drone technology presents a low cost and ...
2021-05-26
commentary
The Turkish made Kargu-2 drone can operate in autonomous mode and may have been used to attack retreating soldiers fighting against the UN-recognized government in Libya. There's an ongoing global...
2021-05-20
position paper
The International Committee of the Red Cross (ICRC) has, since 2015, urged States to establish internationally agreed limits on autonomous weapon systems to ensure civilian protection, compliance with international humanitarian law, and ethical acceptability. With a view to supporting current efforts to establish international limits on autonomous weapon systems that address
2021-05-17
report
The May edition of the Internationella Kvinnoförbundet för Fred och Frihet (IKFF) membership magazine focuses on autonomous weapons.
2021-05-17
commentary
Experts said the group’s unique stature might get governments to the negotiating table at last.
2021-05-13
position paper
With a view to supporting current efforts to establish international limits on autonomous weapon systems that address the risks they raise, ICRC recommends that States adopt new legally binding rules, in this position and background paper.
2021-05-12
statement
Speech given by Mr Peter Maurer, President of the International Committee of the Red Cross (ICRC), during a virtual briefing on the new ICRC position on autonomous weapon systems.
2021-05-12
article
Lethal autonomous weapon systems (LAWS) – robotic weapons that have the ability to sense and act unilaterally depending on how they are programmed – will be capable of selecting targets and...
2021-05-06
report
Comprehending and analysing Artificial Intelligence (AI) is fundamental to meeting the challenges of the future, specifically for the defence sector. Developments in this sector will involve...
2021-05-04
report
Swedish-language document introducing autonomous weapons and the moral, ethical, humanitarian, operational and legal challenges they present.
2021-05-01
report
These seven new principles concentrate on the responsible use of autonomous functionalities in armed conflict in ways that preserve human judgment and responsibility over the ...
2021-04-28
commentary
When it comes to future autonomous weapons, many governments say they want to ensure humans remain in control over lethal force. The example of the heavily automated air defense systems that...
2021-04-21
brief
This Brief outlines the major space threats and makes concrete suggestions on how space can support the EU's Strategic Compass.
2021-04-15
commentary
Drone swarms are getting larger and, coupled with autonomous capability, they could pose a real threat. Think “Nagasaki” to get a sense of the death toll a massive drone swarm could theoretically...
2021-04-05
commentary
The future is drones and modern warfare will never be the same.
2021-04-03
analysis
The law plays a vital role in how artificial intelligence can be developed and used in ethical ways. But the law is not enough when it contains gaps due to lack of a federal nexus, interest, or the political will to legislate. And law may be too much if it imposes regulatory rigidity and burdens when flexibility and innovation are required. Sound ethical codes and principles concerning AI can help fill legal gaps. In this paper, CSET Distinguished Fellow James E. Baker offers a primer on the limits and promise of three mechanisms to help shape a regulatory regime that maximizes the benefits of AI and minimizes its potential harms.
2021-04-01
report
The objective of this analysis note is to go further by examining the information provided by half a dozen producers of autonomous weapons, in particular loitering munitions.
2021-03-29
commentary
On Friday, March 26, Janne E. Nijman, chair of the board and academic director of the Asser Institute, will convene the online closing plenary of the 2021 virtual annual meeting of the American Society...
2021-03-25
commentary
JAIC leader stresses that AI ethics guidelines don’t slow down the United States. In fact, they are essential.
2021-03-23
report
This document addresses the issues of international politics and the different positions and strategies of the main international actors regarding the evolution of Lethal Autonomous Weapons Systems...
2021-03-16
report
Lethal Autonomous Weapons Systems (LAWS) refer to military systems that employ human-made algorithms to independently identify, search for, and engage targets without human intervention. LAWS refer...
2021-03-10
report
Since 2017, the United Nations (UN) has regularly convened a group of government experts (GGE) to explore the technical, legal, and ethical issues surrounding the deployment of lethal autonomous...
2021-03-05
report
Autonomous Weapon Systems (AWS) are defined as robotic weapons that have the ability to sense and act unilaterally depending on how they are programmed. Such human-out-of-the-loop platforms will be...
2021-03-03
commentary
If the international community doesn’t properly manage the development, proliferation, and use of military AI, international peace and stability could be at stake.
2021-03-03
report
This paper is a primer for those seeking to engage with current debates on nuclear risk in Europe. It demystifies and contextualizes the challenges posed by emerging and disruptive technologies in the nuclear realm. It looks in detail at five significant and potentially disruptive technological developments—hypersonic weapons, missile defence, artificial intelligence and automation, counterspace capabilities, and computer network operations (cyber)—to highlight often-overlooked nuances and explain how some of the challenges presented by these developments are more marginal, established and manageable than is sometimes portrayed. By emphasizing the primacy of politics over technology when it comes to meeting nuclear challenges, this paper also seeks to provide a basis for targeted risk reduction and arms control, as well as normative recommendations for policymakers and professionals working across Europe.
2021-03-01
position paper
At a time of increasing conflict and rapid technological change, the International Committee of the Red Cross (ICRC) needs both to understand the impact of new technologies on people affected by armed conflict and to design humanitarian solutions that address the needs of the most vulnerable.
2021-03-01
book chapter
Reichberg, Gregory M. & Henrik Syse (2021) Applying AI on the Battlefield: The Ethical Debates, in von Braun, Joachim; Margaret S. Archer; Gregory M. Reichberg; & Marcelo Sánchez Sorondo, eds, Robotics, AI, and Humanity: Science, Ethics, and Poli...
2021-02-22
commentary
A data-illiterate culture in the military is widening the gap between the United States and its competitors. Success will require deeper and more direct congressional action.
2021-02-15
commentary
Azerbaijan showed during the battle for Nagorno-Karabakh that even an old Soviet-era crop duster could be repurposed and used effectively in drone warfare—another example of how militaries continue...
2021-02-10
commentary
Waiting to act on AI integration into our weapons systems puts us behind the technological curve required to effectively compete with our foes.
2021-02-05
analysis
The OSCE should develop CBMs for partially autonomous weapons systems. Such CBMs should provide information about AWS features and doctrine for their use, to increase transparency and build trust between states.
2021-02-04
analysis
The rapid integration of artificial intelligence into military systems raises critical questions of ethics, design and safety. While many states and organizations have called for some form of “AI arms control,” few have discussed the technical details of verifying countries’ compliance with these regulations. This brief offers a starting point, defining the goals of “AI verification” and proposing several mechanisms to support arms inspections and continuous verification.
2021-02-01
report
This primer lays out the basics and issues of lethal autonomous weapons and their relevance to Cambodian policy and law.
2021-01-20
report
This primer lays out the basics and issues of lethal autonomous weapons and their relevance to Indonesian policy and law.
2021-01-12
commentary
The Biden administration has an opportunity to foster international cooperation on military AI to reduce the risk of inadvertent conflict while still pursuing US military leadership in AI.
2021-01-12
commentary
While the main threat to military facilities may come from enemy ballistic and cruise missiles, it is time to consider the possibility of unconventional attacks involving small drones and infiltrators.
2021-01-11
report
This primer lays out the basics and issues of lethal autonomous weapons and their relevance to Thai policy and law.
2021-01-11
report
This second edition of the primer lays out the basics and issues of lethal autonomous weapons and their relevance to Philippine policy and law.
2021-01-10
commentary
U.S. military installations, command and control centers and even air, ground and sea war platforms could themselves quickly fall victim to drone swarm strikes.
2021-01-09
report
This primer lays out the basics and issues of lethal autonomous weapons and their relevance to Nepalese policy and law.
2021-01-09
report
This paper argues for a legally-binding instrument on lethal autonomous weapons systems (LAWS) and for strong positive obligations to ensure meaningful human control over the use of force.
2021-01-01
report
The United Nations and the North Atlantic Treaty Organization (NATO) have put in place systems governing the integration and deployment of Artificial Intelligence (AI) in their Peace Support...
2020-12-31
commentary
The service will need newer, better high-tech drones to help fight future conflicts.
2020-12-30
commentary
DILEMA Lecture on the topic of ‘Remote, Autonomous Weapons and Human Agency’.
2020-12-18
commentary
In a contribution to international law blog OpinioJuris, Asser researcher Marta Bo writes that international criminal law could provide guidance for operationalising the concept of meaningful human...
2020-12-17
commentary
What does it mean to have autonomy in the age of AI? How are remote, autonomous weapons regulated under international law? On Monday 22 February 2021 (16.00 CET / 15.00 GMT), Professor Bill Boothby...
2020-12-15
report
The 2019 annual report provides an overview of activities carried out by the Campaign to Stop Killer Robots from April 2019 to March 2020.
2020-12-15
report
This paper looks at the role of multilateral verification bodies in dealing with compliance and enforcement, the extent to which they achieve ‘agency’ and ‘influence’ in doing so, and whether and how such capacities might be enhanced.
2020-12-14
article
The deputy head of NATO’s Innovation Unit lays out current efforts to develop Artificial Intelligence policy at NATO.
2020-11-24
essay
Although COVID-19 has highlighted new and formidable challenges for our globalized society, foreign influence operations that capitalize on moments of global uncertainty are far from new.
2020-11-23
essay
AI now exceeds our performance in many activities once held to be too complex for any machine to master.
2020-11-23
essay
Public-private collaboration is essential to creating innovative governance solutions that can be adapted as the technology develops.
2020-11-16
essay
Legislatures across the globe should be preparing to amend their laws, and possibly adopt new ones, governing autonomous technologies.
2020-11-16
essay
The oft-used phrase that data is the new oil is, in the context of AI, probably wrong.
2020-11-16
essay
Despite the headlines and the catchy titles, the nature and the extent of the AI arms race are hard to discern at this stage.
2020-11-09
commentary
While AI applications are expected to have a significantly positive impact on our lives, those same applications will also likely be abused or manipulated by bad actors.
2020-11-09
essay
The dream of the intelligent machine now propels computer science, and therefore regulatory systems, around the world.
2020-11-09
book review
In recent years, AI has become a hotly debated topic across different disciplines and fields of society. Rapidly advancing technological innovations, especially in areas such as machine learning (as well as increasingly widespread uses of AI-based systems), have brought about a growing awareness of the need for AI ethics, whether in politics, industry, science, or in society at large.
2020-11-06
commentary
The DILEMA Project, led by Asser senior researcher Dr Berenice Boutin, is launching a new lecture series on legal, ethical, and technical perspectives on human agency over military Artificial Intelligence (AI).
2020-11-05
commentary
The DILEMA Lecture Series will regularly invite academics and other experts working on issues related to the project to present their work and share reflections with a general audience comprising...
2020-11-05
policy brief
This policy brief highlights the need for states to outline parameters of unacceptability for autonomous weapon systems and steps to ensure meaningful human control over the use of force.
2020-11-02
statement
This statement highlights the challenges that autonomous weapons present and the urgent need for a treaty.
2020-11-02
report
In 2018 the United Nations Secretary-General identified responsible research and innovation (RRI) in science and technology as an approach for academia, the private sector and governments to work on the mitigation of risks that are posed by new technologies.
2020-11-01
report
The military use of artificial intelligence (AI) has become the focus of great power competition. In 2019, several European Union (EU) member states called for greater collaboration between EU member states on the topic of AI in defence. This report explores why the EU and its member states would benefit politically, strategically and economically from developing principles and standards for the responsible military use of AI. It maps what has already been done on the topic and how further expert discussions within the EU on legal compliance, ethics and technical safety could be conducted. The report offers concrete ways for the EU and its member states to work towards common principles and best practices for the responsible military use of AI.
2020-11-01
commentary
All the service branches understand that both their legacy fixed facilities (such as Guam) and new expedient bases will be subject to an expanded array of threats. Given this, they must be rendered more secure.
2020-10-24
research article
In this Connexions essay, we focus on intelligent agent programs that are cutting-edge solutions of contemporary artificial intelligence (AI). We explore how these programs become objects of desire that contain a radical promise to change organizing and ...
2020-10-22
report
This report outlines how legal and policy precedent can serve as a foundation for constructing a legally binding instrument without starting from scratch.
2020-10-20
commentary
As countries around the world race to incorporate AI and greater autonomous functionality into weapons, the years-long debate at the United Nations over what, if anything, to do about lethal...
2020-10-16
statement
This statement was delivered to delegates attending the 75th UN General Assembly (UNGA) First Committee on Disarmament and International Security on 13 October 2020.
2020-10-13
discussion
Panelists discuss how cyber became the weapon of choice for nonstate actors and states alike. Directed by John Maggio and based on the book of the same name by David Sanger, The Perfect Weapon explores the rise of cyber conflict as a primary way in which nations now compete with and sabotage one another.
2020-10-09
commentary
This comes against the backdrop of growing interest in global technology cooperation.
2020-09-26
statement
This statement was delivered to delegates attending the CCW meeting on lethal autonomous weapons systems on 24 September 2020.
2020-09-24
statement
This statement was delivered to delegates attending the CCW meeting on lethal autonomous weapons systems on 21 September 2020.
2020-09-21
report
This Spanish-language publication explores the potential consequences of lethal autonomous weapons among marginalized populations from an intersectional Latin American perspective.
2020-09-20
research article
This article discusses how the first-person genre, especially a Gazan wartime diary, allows both writer and reader to imagine new possibilities for understanding contemporary colonial drone warfare, which is instrumental in the strategic silencing and ...
2020-09-19
report
A report covering the key concerns that development of artificial intelligence, emerging technologies, and autonomous weapons presents to the Asia region.
2020-09-18
report
This article discusses how risks normally associated with battlefield considerations of the Law of Armed Conflict must be considered and addressed during the design of autonomous platforms'...
2020-09-11
original paper
The purpose of this article is to provide a multi-perspective examination of one of the most important contemporary security issues: weaponized, and especially lethal, artificial intelligence. This technology ...
2020-08-29
commentary
It’s a simple question: should robots kill by themselves? The technology is here. Unmanned systems, both ground and air robots, can autonomously seek, find, track, target and destroy enemies without human intervention.
2020-08-19
report
This report elaborates country positions on banning fully autonomous weapons and retaining human control.
2020-08-18
analysis
The international market in armed drones is booming, creating risks of widespread proliferation, especially to non-state actors or states known for their lack of respect for the laws of warfare. This paper analyses these proliferation risks and formulates recommendations on how to mitigate them.
2020-08-10
commentary
Even if AI development becomes Russia’s highest priority, Moscow has no chance of catching up with Washington and Beijing in this field. Under favorable conditions, however, Russia is quite capable...
2020-08-05
report
This primer lays out the basics and issues of lethal autonomous weapons, and other emerging technologies in the field of weapons development, and their relevance to Philippine policy and law.
2020-08-02
article
This article from the International Security Program examines the proliferation, development, and use of armed drones.
2020-07-30
research
The European Parliament should be an important source of democratic oversight and accountability as the EU continues to pursue greater security and defense integration.
2020-07-20
report
A prominent recent development in governance of artificial intelligence is the White House Office of Management and Budget’s 2020 Guidance for Regulation of Artificial Intelligence Applications...
2020-07-07
report
In this paper, I argue that there is no theoretical bar to the development of autonomous weapon systems, and that their practical benefits must be considered. Further, I argue that meaningful human...
2020-07-02
report
Data and national security have a complex relationship. Data is essential to national defense — to understanding and countering adversaries. Data underpins many modern military tools from drones to...
2020-07-01
research
As fears rise over disinformation and influence operations, stakeholders from industry to policymakers need to better understand the effects of such activity. This demands increased research...
2020-06-25
commentary
Manuals on the law of armed conflict come in different guises. The most common one is the military manual, which is a publication issued by a State’s Ministry of Defence or a branch of the armed...
2020-06-23
original paper
In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence (AI), entitled ‘New Generation Artificial Intelligence Development Plan’ (新一代人工智能发展规划). This strategy ...
2020-06-17
publication
The development of autonomous weapon systems raises the prospect of the loss of human control over weapons and the use of force.
2020-06-12
publication
The ICRC convened an international expert meeting on autonomous weapon systems from 26 to 28 March 2014. It brought together government experts from 21 States and 13 individual experts, including roboticists, jurists, ethicists, and representatives from the United Nations and non-governmental organizations.
2020-06-10
article
Limits on Autonomy in Weapon Systems: Identifying Practical Elements of Human Control
2020-06-02
report
This report aims to offer the reader a concrete understanding of how the adoption of artificial intelligence (AI) by nuclear-armed states could have an impact on strategic stability and nuclear risk and how related challenges could be addressed at the policy level. The analysis builds on extensive data collection on the AI-related technical and strategic developments of nuclear-armed states. It also builds on the authors’ conclusions from a series of regional workshops that SIPRI organized in Sweden (on Euro-Atlantic dynamics), China (on East Asian dynamics) and Sri Lanka (on South Asian dynamics), as well as a transregional workshop in New York. At these workshops, AI experts, scholars and practitioners who work on arms control, nuclear strategy and regional security had the opportunity to discuss why and how the adoption of AI capabilities by nuclear-armed states could have an impact on strategic stability and nuclear risk within or among regions.
2020-06-01
report
There is wide recognition that the need to preserve human control over weapon systems and the use of force in armed conflict will require limits on autonomous weapon systems (AWS).
2020-06-01
article
Revolutionary technologies hold much promise for humanity. When taken up for military uses, they can affect international peace and security. The challenge is to build understanding among...
2020-06-01
report
Technology does not exist in a vacuum; it is mediated by individual and institutional choices about development and use. In the case of autonomous weapon systems (AWS), which select military...
2020-05-28
research article
Achieving the global benefits of artificial intelligence (AI) will require international cooperation on many areas of governance and ethical standards, while allowing for diverse cultural perspectives and prio...
2020-05-15
commentary
The report argues that the crossovers between these technologies, such as with data, AI and autonomy, would be highly influential on the development of future military capabilities. Commenting on...
2020-05-09
article
Exponential technological progress, especially in the digital domain, is affecting all realms of life. Emerging mainly from the commercial sector, it has led to a democratisation of technologies...
2020-05-05
report
Artificial intelligence offers great promise for national defense. For example, a growing number of robotic vehicles and autonomous weapons can operate in areas too hazardous for soldiers. But what are the ethical implications of using AI in war or even to enhance security in peacetime?
2020-04-28
commentary
A threat to NATO?
2020-04-24
briefing paper
A Frequently Asked Questions paper expands on the Campaign’s position and addresses questions raised by the Key Elements of a Treaty proposal.
2020-04-21
research article
Automated technologies populating today’s online world rely on social expectations about how “smart” they appear to be. Algorithmic processing, as well as bias and missteps in the course of their development, all come to shape a cultural realm that in turn determines what they come to be about. It is our contention that a robust analytical frame could be derived from culturally driven Science and Technology Studies while focusing on Callon’s concept of translation. Excitement and...
2020-04-17
report
This edited volume is the third in a series of three. The series forms part of a SIPRI project that explores regional perspectives and trends related to the impact that recent advances in artificial intelligence could have on nuclear weapons and doctrines, as well as on strategic stability and nuclear risk. This volume assembles the perspectives of eight experts on South Asia on why and how machine learning and autonomy may become the focus of an arms race among nuclear-armed states. It further explores how the adoption of these technologies may have an impact on their calculation of strategic stability and nuclear risk at the regional and transregional levels.
2020-04-01
original paper
Cybersecurity protects citizens and society from harm perpetrated through computer networks. Its task is made ever more complex by the diversity of actors—criminals, spies, militaries, hacktivists, firms—opera...
2020-03-31
research article
Most artificial intelligence technologies are dual-use. They are incorporated into both peaceful civilian applications and military weapons systems. Most of the existing codes of conduct and ethical principles on artificial intelligence address the former while largely ignoring the latter.
2020-03-31
report
The project brought together 198 experts from 75 different countries to discuss the technical, military and legal implications of introducing autonomy in various steps of the targeting cycle.
2020-03-31
commentary
A remote-controlled tank could be in the near future, but a killer robot tank is likely still many years away.
2020-03-24
report
What should be the role of law in response to the spread of Artificial Intelligence in war? Fuelled by both public and private investment, military technology is accelerating towards increasingly...
2020-03-24
report
The 2018 activity report provides an overview of activities carried out by the Campaign to Stop Killer Robots from April 2018 to March 2019.
2020-03-15
brief
Digital technologies can vastly improve the operational readiness and effectiveness of Europe’s armed forces. As this Brief shows, however, the EU needs to better understand the risks and opportunities involved in the digitalisation of defence and it needs to financially invest in its technological sovereignty.
2020-03-11
report
Autonomous weapons systems have presented an accelerated development in recent years. The use of this type of weapon in scenarios of armed conflict is not expressly regulated...
2020-03-09
brief
In the wake of the Artificial Intelligence Strategy unveiled by the US Department of Defense in 2019, this Brief examines the implications of the initiative for Europe and for transatlantic defence cooperation. It argues that Europeans need to develop a strategy for military innovation, including Artificial Intelligence (AI), while the transatlantic partners need to design a common approach to AI governance.
2020-03-05
toolkit
A toolkit for new and existing campaigners to get an overview of the key issues and steps to take action to prohibit autonomous weapons.
2020-02-27
report
This new PAX action kit provides background reading and resources in order to take action and save universities from killer robots.
2020-02-26
commentary
The Pentagon adopted a set of ethical guidelines on the use of AI.
2020-02-25
commentary
Sources say the list will closely follow an October report from a defense advisory board.
2020-02-18
report
This report investigates how universities are contributing to the development of autonomous weapons.
2020-02-12
toolkit
This publication explores why intersectionality is important when we are discussing killer robots and racism.
2020-02-06
analysis
As the old year bids farewell and the new year takes shape, we tend to ...
2020-01-21
report
A new resource guide for the WILPF network and broader public about autonomous weapon systems, also known as killer robots, bringing a gender lens to the issue.
2020-01-17
commentary
Although activists are calling for an international ban on lethal autonomous weapons, incorporating AI into weapons systems may make them more accurate and result in fewer civilian casualties...
2020-01-10
commentary
On AI’s potential, military uses, and the fallacy of an AI “arms race.”
2019-12-31
commentary
Some say trying to use the Convention on Certain Conventional Weapons to pre-emptively ban lethal autonomous weapons systems has failed—and consequently should be abandoned. This argument is wrong.
2019-12-18
report
This chapter focuses on the legal challenges posed to States by new technologies in relation to the education and training of the personnel of the armed forces. In recent decades, new technologies...
2019-12-14
commentary
The United States should apply lessons from the 70-year history of governing nuclear technology by building a framework for governing AI military technology. An AI for Peace program should articulate the dangers of this new technology, principles to manage the dangers, and a structure to shape the incentives for other states.
2019-12-13
analysis
Today we launch the 33rd International Conference of the Red Cross and Red Crescent, a ...
2019-12-10
news release
As more Artificial Intelligence (AI) technologies are being integrated into different areas of our lives, and as much benefit as that brings to our societies, there are also challenges. In the military context, which is the focus of Boutin’s project, AI technologies have the potential to greatly improve military capabilities and offer significant strategic and tactical advantages. At the same time, the increasing use of autonomous technologies and adaptive systems in the military context poses profound ethical, legal, and policy challenges.
2019-11-27
statement
Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious.
2019-11-14
statement
This statement was delivered to delegates attending the CCW annual Meeting of High Contracting Parties on 14 November 2019.
2019-11-14
report
This report analyses developments in the arms industry, pointing to areas of work that have potential for applications in lethal autonomous weapons and shows the trend of increasing autonomy in...
2019-11-11
commentary
Active US engagement in negotiating a relatively modest treaty offers the best hope for mitigating the humanitarian risks of autonomous weapons.
2019-11-07
commentary
AlphaStar cooperated with itself to learn new strategies for conquering the popular galactic warfare game.
2019-10-30
original research
This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence (AI) technologies. It is assumed that only humans can be responsible agents; yet this alone already r...
2019-10-24
commentary
Policymakers are under the false impression that they are coming too late and that there is no time for regulating new dual-use technologies. But that impression is misleading.
2019-10-23
commentary
AI warfare is beginning to dominate military strategy in the US and China, but is the technology ready?
2019-10-21
statement
Statement to UN General Assembly First Committee: General debate on all disarmament and international security agenda items
2019-10-18
statement
This statement was delivered to delegates attending the 74th UN General Assembly (UNGA) First Committee on Disarmament and International Security on 18 October 2019.
2019-10-18
commentary
Sovereignty and strategic autonomy are felt to be at risk today, being threatened by the forces of rising international tensions, disruptive digital transformations and explosive growth of cybersecurity incide...
2019-10-11
commentary
Modern warfare will never be the same.
2019-10-10
report
This edited volume is the second of a series of three. They form part of a SIPRI project that explores regional perspectives and trends related to the impact that recent advances in artificial intelligence could have on nuclear weapons and doctrines, as well as on strategic stability and nuclear risk. This volume assembles the perspectives of 13 experts from East Asia, Russia and the United States on why and how machine learning and autonomy may become the focus of an arms race among nuclear-armed states. It further explores how the adoption of these technologies may have an impact on their calculation of strategic stability and nuclear risk at the regional and transregional levels.
2019-10-01
analysis
Those wishing to control the spread and use of autonomous weapons systems generally favour pre-emptive ...
2019-09-26
report
The rhetoric of the race for strategic advantage is increasingly being used with regard to the development of artificial intelligence (AI), sometimes in a military context, but also more broadly....
2019-09-25
report
The militarization of artificial intelligence (AI) is well under way and leading military powers have been investing large resources in emerging technologies. Calls for AI governance at...
2019-09-20
commentary
It is only natural that advances in the intelligent autonomy of digital systems attract the attention of Governments, scientists and civil society concerned about the possible deployment and use of lethal autonomous weapons. What is needed is a forum to discuss these concerns and construct common understandings regarding possible solutions. ...
2019-09-10
commentary
Today we are at the dawn of an age of unprecedented technological change. In areas from robotics and artificial intelligence (AI) to the material and life sciences, the coming decades promise innovations that can help us promote peace, protect our planet and address the root causes of suffering in our world. ...
2019-09-10
position paper
This position paper discusses the requirements and challenges for responsible AI with respect to two interdependent objectives: (i) how to foster research and development efforts toward socially beneficial app...
2019-09-03
analysis
As we marked the 70th anniversary of the Geneva Conventions last month, I want to ...
2019-09-03
research
The array of new technologies emerging on the world stage, the new threats they can pose, and the associated governance dilemmas highlight a set of common themes.
2019-08-28
article
As harsh as it may sound, what do you think is "better"? Being killed by a human being or by a robot? If international humanitarian law (IHL) applies to humans and they are obliged to respect it, what body of law prohibits armed drones or robots from killing people? In the context of cyber warfare and autonomous weapons, is IHL still relevant? Or, is it too old to adapt to the
2019-08-26
q&a
Cortney Weinbaum studies topics related to intelligence and cyber policy as a senior management scientist at RAND. In this interview, she discusses challenges facing the intelligence community, the risks of using AI as a solution, and ethics in scientific research.
2019-08-23
commentary
Additive manufacturing is being adopted by nuclear programs to improve production capabilities, yet its impact on strategic stability remains unclear. This article uses the security dilemma to...
2019-08-22
statement
This statement was delivered to delegates participating at the CCW GGE meeting on lethal autonomous weapons systems on 20 August 2019.
2019-08-20
research
Artificial intelligence, or AI, has become a major source of economic value, contributing as much as $2 trillion to today’s global economy. Sophisticated machine learning technology is driving this...
2019-08-05
report
This entry puts forth a proposed definition of autonomous weapons, explains the basis of that definition, distinguishes autonomous weapons from drones and explains how autonomous weapons are not...
2019-07-19
report
Military technology continues to outpace the law. Recent developments in cyber warfare, space warfare and air and missile warfare have generated creative initiatives by groups of lawyers,...
2019-07-03
commentary
The Syrian civil war gave Russia the chance to test and purportedly improve its robotic and autonomous weapons. Weapons makers showcased some of their products at a recent convention in Moscow.
2019-06-28
commentary
The future is now?
2019-06-27
policy
In order to promote the healthy development of the new generation of AI, better balance development and governance, ensure the safety, reliability and controllability of AI, support the economic, social, and environmental pillars of the UN Sustainable Development Goals, and jointly build a human community with a shared future, all stakeholders concerned with AI development should observe the following principles:
2019-06-17
report
Lethal autonomous weapons — machines that might one day target and kill people without human intervention or oversight — are gaining attention on the world stage. While their development,...
2019-06-16
commentary
Asser Institute and Graduate Institute researcher Dr Marta Bo and Taylor Woodcock argue, in a blog post written for The Global, that there is a lack of discussion on autonomous weapons and criminal...
2019-05-28
report
While the applications of artificial intelligence (AI) for militaries are broad and go beyond the battlefield, autonomy on the battlefield, in the forms of lethal autonomous weapon systems (LAWS),...
2019-05-23
commentary
The Air Force Artificial Intelligence Incubator aims to develop technologies that serve the “public good,” not weapons development.
2019-05-20
commentary
The rapid pace of advances in technology, from artificial intelligence to military robotics, raises the question of whether it is too late to begin regulating emerging technologies.
2019-05-09
commentary
The United Nations has debated whether to ban lethal autonomous weapons for years now. As countries make rapid progress in the autonomous capabilities of weapons systems, will any ban be too late...
2019-05-09
analysis
Arguably, international humanitarian law (IHL) evolves with the development of emerging technologies. The history of ...
2019-05-02
report
This edited volume focuses on the impact of artificial intelligence (AI) on nuclear strategy. It is the first instalment of a trilogy that explores regional perspectives and trends related to the impact that recent advances in AI could have on nuclear weapons and doctrines, strategic stability and nuclear risk. It assembles the views of 14 experts from the Euro-Atlantic community on why and how machine learning and autonomy might become the focus of an arms race among nuclear-armed states, and how the adoption of these technologies might impact their calculation of strategic stability and nuclear risk at the regional and transregional levels.
2019-05-01
report
Lethal Autonomous Weapon Systems (LAWS) are essentially weapon systems that, once activated, can select and engage targets without further human intervention. While these are neither currently...
2019-04-26
commentary
This post is the second entry in the blog series Transformative Technology, Transformative Governance, which examines the global implications of emerging technologies, as well as measures to mitigate their risks and maximize their benefits.
2019-04-24
analysis
Editor’s note: For those interested in the topic of legal reviews of weapons, it is ...
2019-04-18
analysis
Editor’s note: In this post, Tess Bridgeman continues the discussion on detention and the potential use of ...
2019-04-08
analysis
The meeting of the Lethal Autonomous Weapon Systems (LAWS) Group of Governmental Experts (GGE) has been taking place in Geneva this week. This ...
2019-03-28
analysis
Editor’s note: In this post, as part of the AI blog series, Lorna McGregor continues the discussion on ...
2019-03-28
commentary
One Defense Department advisor suggests that “constructive engagement” will be more successful than opting out.
2019-03-26
analysis
Editor’s note: As part of this AI blog series, several posts focus on detention and the ...
2019-03-25
analysis
What are some of the chief concerns in contemporary debates around legal reviews of weapons, ...
2019-03-21
analysis
Recent advances in artificial intelligence have the potential to affect many aspects of our lives ...
2019-03-19
commentary
Ever since 9/11, drones have been amongst the most visible, and often controversial, signs of American power in the Middle East and beyond. But as regional powers look to chart their own course, a new generation of cheaper unmanned aerial vehicles - Chinese or locally built, with far fewer restrictions on their use - are taking to the skies.
2019-03-06
report
Technological advances in the biological sciences have long presented a challenge to efforts to maintain biosecurity and prevent the proliferation of biological weapons. The convergence of developments in biotechnology with other, emerging technologies such as additive manufacturing, artificial intelligence and robotics has increased the possibilities for the development and use of biological weapons.
2019-03-01
commentary
In 1997, IBM’s “Deep Blue” computer defeated grandmaster Garry Kasparov in a match of chess. It was a historic moment, marking the end of an era in which humans could defeat machines in complex strategy games. Today, artificial intelligence (AI) bots can defeat humans in not only chess, but nearly every digital game that exists…
2019-02-18
commentary
Through an executive order, President Donald Trump launched the American AI Initiative, further underscoring the importance of a group of technologies that are reshaping everything from medical...
2019-02-11
commentary
A new study on naval drones warns the real problem with autonomous drones isn’t going berserk, but rather the inability to adapt to the unexpected.
2019-02-10
commentary
A new report shows that a more literal AI arms race is also under way.
2019-02-07
commentary
Is China open to arms control over AI weapons development? The United States should find out.
2019-02-01
commentary
On Wednesday 13 February, the Asser Institute will be hosting a panel discussion on the topic of ‘Human Control over Autonomous Military Technologies’ from 14:30 to 17:00. The event is organised in...
2019-01-29
commentary
The impact of cyberweapons on strategic stability is a growing problem that extends well beyond the security of the control and communication systems of nuclear forces.
2019-01-21
analysis
At the turn of the twentieth century, many engineers with fertile imaginations—from France’s Gustave Gabet to America’s Orville Wright—hoped that their inventions would ...
2019-01-18
commentary
Instead of worrying about an artificial intelligence “ethics gap,” U.S. policymakers and the military community could embrace a leadership role in AI ethics. This may help ensure that the AI arms race doesn't become a race to the bottom.
2019-01-11
report
Background: This paper aims to move the debate forward regarding the potential for artificial intelligence (AI) and autonomous robotic surgery with a particular focus on ethics, regulation and...
2019-01-10
commentary
Last year a string of controversies revealed a darker (and dumber) side to artificial intelligence.
2019-01-07
analysis
The debate about the way the international community should deal with autonomous weapon systems has ...
2018-12-11
article
Two categories of ethical questions surrounding military autonomous systems are discussed in this article. The first category concerns ethical issues regarding the use of military...
2018-12-11
commentary
The Air Force and DARPA are now testing new hardware and software configured to enable 4th-Generation aircraft to command drones from the cockpit in the air, bringing new levels of autonomy, more attack options and a host of new reconnaissance advantages to air warfare.
2018-12-10
commentary
Is artificial intelligence (AI) a threat to our way of life, or a blessing? AI seeks to replicate, and maybe replace, what human intelligence does best: make complex decisions. Currently, human...
2018-12-07
commentary
Fully autonomous weapons are not only inevitable; they have been in America’s inventory since 1979.
2018-12-02
news release
Should a weapon system be able to make its own “decision” about who to kill?
2018-11-20
commentary
How can civilian agencies in the national-security space leverage artificial intelligence to fortify security interests, with far fewer resources than their heavyweight military and intelligence counterparts...
2018-11-13
commentary
Advances in computer power, processing speed and AI are rapidly changing the scope of what platforms are able to perform without needing human intervention.
2018-11-12
commentary
Andrew Moore says getting the technology to work in businesses is a huge challenge.
2018-11-08
commentary
Know this: if autonomous weapons are developed and introduced into the world’s arsenals, then they are unlikely to immediately revolutionize warfare.
2018-10-21
discussion
A panel discussion entitled Retaining Meaningful Human Control of Weapons Systems was held on the side of the First Committee on Disarmament and International Security.
2018-10-18
statement
United Nations General Assembly, 73rd Session, First Committee. Statement delivered by Ms. Kathleen Lawand, Head of Arms Unit, ICRC.
2018-10-17
report
Lethal Autonomous Weapon Systems (LAWS) are essentially weapon systems that, once activated, can select and engage targets without further human intervention. While these are neither currently...
2018-10-12
analysis
Terrorist groups, illicit organisations, and other non-state actors have a long fascination with advanced weapons technologies. However, international efforts to restrict proliferation of such weapons are currently lagging behind the emergence of new, potentially equally destructive technologies. In particular, the last few years have marked the rapid development of lethal autonomous weapons systems (LAWS).
2018-10-01
commentary
The Air Force and DARPA are now testing new hardware and software configured to enable 4th-Generation aircraft to command drones from the cockpit in the air.
2018-09-13
commentary
DARPA, the US Defense Department’s research arm, will spend $2 billion over the next five years on military AI projects.
2018-09-10
analysis
This week, the Group of Governmental Experts (GGE) on lethal autonomous weapon systems (LAWS) is holding their third meeting at the UN Certain ...
2018-08-29
commentary
What today's campaigners against the battlefield use of A.I.-powered autonomous robots can learn from the successful antinuclear movements of yesteryear.
2018-08-29
analysis
Automated decision algorithms are currently propagating gender and race discrimination throughout our global community. The ...
2018-08-28
analysis
International humanitarian law (IHL) regulates the use of force in armed conflict. It inherently provides ...
2018-08-23
commentary
The Air Force and DARPA are now testing new hardware and software configured to enable 4th-Generation aircraft to command drones from the cockpit in the air, bringing new levels of autonomy, more attack options and a host of new reconnaissance advantages to air warfare.
2018-08-16
analysis
For the second time this year, States will come together in the UN Convention on ...
2018-08-15
commentary
The author of a new book on autonomous weapons says scientists working on artificial intelligence need to do more to prevent the technology from being weaponized.
2018-08-14
report
This paper addresses the challenge of accountability that arises in relation to autonomous weapon systems (AWS), a challenge which focuses on the hypothesis that AWS will make it impossible to...
2018-08-09
commentary
Until Congress straightens its never-ending fiscal rollercoaster and Army leadership demonstrates that it has learned from its past, the success of Futures Command remains dubious.
2018-07-29
analysis
Concerns about ensuring sufficient human control over autonomous weapon systems (AWS) have been prominent since ...
2018-07-18
analysis
The article discusses artificial intelligence, human activities and future autonomous weapons systems.
2018-07-12
commentary
India has a vast talent pool and a burgeoning start-up scene which, if properly tapped and encouraged, could not only provide indigenous military solutions, but could also create significant...
2018-07-01
analysis
The existing national and international tools used to control the emergence and use of weapons that may contravene international humanitarian law (IHL) have ...
2018-06-28
report
Lethal Autonomous Weapon Systems (LAWS) are essentially weapon systems that, once activated, can select and engage targets without further human intervention. While these are neither currently...
2018-06-11
commentary
The event brought together speakers from both Geneva and The Hague, to explore and highlight the role of these two UN Cities in linking research to policy in the areas of peace and justice.
2018-06-11
interview
Paul Scharre, senior fellow and director of the technology and national security program at the Center for a New American Security (CNAS), discusses autonomous weapons and the changing nature of warfare with CFR's James M. Lindsay.
2018-06-01
commentary
AI Weapons: China and America Are Desperate to Dominate This New Technology
2018-05-30
commentary
In recent years numerous developments have again highlighted the importance of Weapons Law for preventing and regulating armed conflict. The use of chemical weapons in Syria, the ups-and-downs of...
2018-05-29
research article
In this paper, I use The New York Times’ debate titled, “Can predictive policing be ethical and effective?” to examine what are seen as the key operations of predictive policing and what impacts they might have in our current culture and society.
2018-05-02
report
Over the past decade there has been much written on lethal autonomous weapons systems (LAWS) commonly known as “killer robots”. This includes legal, ethical and moral concerns as well as issues...
2018-04-29
commentary
If machines that autonomously target and kill humans are fielded by one country, it could be quickly followed by others, resulting in destabilizing global arms races. And that’s only a small part...
2018-04-27
commentary
Why UN discussions on the management of lethal autonomous weapons need greater participation by the scientific and research communities and representatives of the private sector. Statements of...
2018-04-26
commentary
Amid the published angst about AI and its hypothetical threats, more attention ought to be given to the threat that AI-enabled entertainment poses to our brains and our civilization.
2018-04-25
commentary
Over the course of this week, the Bulletin, in partnership with the Stanley Foundation, is publishing top experts on how to manage the explosion of military AI research and development around the...
2018-04-24
commentary
The promise of AI—including its ability to improve the speed and accuracy of everything from logistics and battlefield planning to human decision making—is driving militaries around the world to...
2018-04-23
report
Autonomous Weapon Systems (AWS) are essentially weapon systems that, once activated, can select and engage targets without further human intervention. While these are neither currently fielded nor...
2018-04-17
analysis
For the fifth year in a row, government delegates meet at the United Nations in ...
2018-04-11
commentary
For this first edition, the Winter Academy will include general sessions on legal and theoretical perspectives on AI (including on legal personality, collective agency, human control,...
2018-04-11
analysis
In the opening scene of Christopher Nolan’s Dunkirk, six British soldiers, looking for food and ...
2018-04-10
statement
Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts on Lethal Autonomous Weapons Systems, statement of the ICRC. The International Committee of the Red Cross (ICRC) is pleased to contribute its views to this second meeting of the Group of Governmental Experts on "Lethal Autonomous Weapon Systems".
2018-04-09
commentary
Government indifference toward AI could let the US lose ground to rival countries. But what would a good AI plan actually look like?
2018-04-06
commentary
If your university partners with a defense contractor to research autonomous weapons, do not expect AI researchers to sit still for it.
2018-04-06
commentary
Tech firms and universities interested in building AI-powered weapons for lucrative military contracts are, predictably, facing some significant pushback.
2018-04-04
article
As part of continuing reflections on the legal and ethical issues raised by autonomous weapons systems, the ICRC convened a round-table meeting in Geneva from 28 to 29 August 2017 to explore the ethical aspects. This report, "Ethics and autonomous weapon systems: An ethical basis for human control?", summarizes the discussions and highlights the ICRC's main conclusions.
2018-04-03
analysis
The requirement for human control: the risks of functionally delegating complex tasks—and associated decisions—to sensors ...
2018-04-03
article
As major militaries progress toward the introduction of artificial intelligence (AI) into intelligence, surveillance, and reconnaissance, and even command systems, Petrov’s decision should serve as a potent reminder of the risks of reliance on complex systems in which errors and malfunctions are not only probable, but probably inevitable.
2018-04-01
report
This paper is an introductory primer for non-technical audiences on the current state of AI and machine learning, designed to support the international discussions on the weaponization of increasingly autonomous technologies.
2018-03-28
commentary
The Army is massively speeding up its early prototyping of weapons and technology for its Next-Gen Combat Vehicle.
2018-03-12
research
Rather than use Cold War principles, nuclear states should shift their nuclear doctrines and capabilities to strategic deterrence as needed by the twenty-first century.
2018-03-06
commentary
The controversies surrounding autonomous weapons must not obscure the fact that like most technologies, AI has a number of non-lethal uses for militaries across the world, and especially for the...
2018-03-05
report
LAWS are a threat to humanity and, following an objective analysis free of preconceived attachment to a particular outcome, are prohibited by the lex lata. The analysis is not conducted in a...
2018-02-26
report
This chapter, in the forthcoming book Complex Battlespaces: The Law of Armed Conflict and the Dynamics of Modern Warfare (published by Oxford University Press) explores the application of a key...
2018-02-21
article
The United Nations Office for Disarmament Affairs published a collection of articles: "Perspectives on Lethal Autonomous Weapon Systems"
2018-01-31
video
Dr Hugo Slim, Head of Policy and Humanitarian Diplomacy at the ICRC, visited New Delhi this week to speak at the Raisina Dialogue organised by the Ministry of External Affairs of India and the Observer Research Foundation 16-18 January 2018.
2018-01-19
commentary
It is necessary to move past the idea of artificial intelligence being a replacement for humans across the board, and begin having a deeper conversation about its effectiveness as a tool in the...
2018-01-11
interview
A former Army Ranger—who happens to have led the team that established Defense Department policy on autonomous weapons—explains in a Bulletin interview what these weapons are good for, what they’re...
2018-01-10
report
This article considers the recent literature concerned with establishing an international prohibition on autonomous weapon systems. It seeks to address concerns expressed by some scholars that such...
2017-12-28
commentary
Looking back at our best coverage in 2017 of emerging technological threats.
2017-12-21
commentary
If you never dreamed that toy-like drones from off the shelf at the big-box store could be converted—with a bit of artificial intelligence and a touch of shaped explosive—into face-recognizing...
2017-12-05
report
Article 36 of the 1977 Additional Protocol to the 1949 Geneva Conventions imposes a practical obligation on states to determine whether ‘in the study, development, acquisition or adoption of a new weapon, means or method of warfare’ its use would ‘in some or all circumstances be prohibited by international law’. This mechanism is often colloquially referred to as an ‘Article 36 review’.
2017-12-01
perspective
Perspectives on Lethal Autonomous Weapon Systems
2017-11-30
commentary
Given the importance of artificial intelligence (AI) in the coming years, India must keep a wary eye on Chinese developments in this field, and develop its own strategic vision of how AI...
2017-11-17
report
the interaction of cyber operations and increasingly autonomous physical weapon systems may give rise to new security challenges, as these interactions can multiply complexity and introduce new vulnerabilities.
2017-11-16
statement
The ICRC welcomes this first meeting of the Group of Governmental Experts on "Lethal Autonomous Weapons Systems".
2017-11-15
commentary
As multiple militaries have begun to use AI to enhance their capabilities on the battlefield, several deadly mistakes have shown the risks of automation and semi-autonomous systems, even when human...
2017-11-15
analysis
Ethics evolves, the law changes. In this way, moral progress may occur. Yet the relation ...
2017-11-14
report
The Mapping the Development of Autonomy in Weapon Systems report presents the key findings and recommendations from a one-year mapping study on the development of autonomy in weapon systems.
2017-11-01
discussion
On 16 October 2017, the Permanent Mission to the United Nations of Mexico partnered with the International Committee for Robot Arms Control, Human Rights Watch, Seguridad Humana en Latinoamérica y el Caribe and the Campaign to Stop Killer Robots to host a panel discussion entitled “Pathways to Banning Fully Autonomous Weapons” as part of the First Committee side event series for the 72nd Session General Assembly.
2017-10-23
commentary
As technology progresses, particularly in the realm of autonomous systems, many wonder if a laser-drone weapon will soon have the ability to find, acquire, track and destroy an enemy target using sensors, targeting and weapons delivery systems – without needing any human intervention.
2017-10-23
discussion
On 5 October 2017, the United Nations Institute for Disarmament Research (UNIDIR) hosted a side event, “Autonomous Weapons Systems: Learning Algorithms and Bias” at the United Nations Headquarters in New York.
2017-10-13
commentary
Should legal and regulatory norms be adjusted to address the threat of hyperintelligent autonomous weapons in the future? Maybe—but dumb autonomous weapons are altering norms right now.
2017-10-12
analysis
In this blog post, I look at the ethical and legal ramifications of distance in ...
2017-10-06
analysis
Autonomous weapon systems & the dictates of public conscience: An ethical basis for human control? On 28–29 August 2017, the ICRC convened a ...
2017-10-04
report
Conceived as a comprehensive introduction to a field central to the work of the United Nations, Disarmament: A Basic Guide aims to provide a useful overview of the nuanced challenges of building a more peaceful world in the twenty-first century.
2017-10-01
commentary
Will Beijing build them?
2017-09-24
research article
How will the robot age transform warfare? What geopolitical futures are being imagined by the US military? This article constructs a robotic futurology to examine these crucial questions. Its central concern is how robots – driven by leaps in artificial ...
2017-08-31
commentary
Other countries are competitive when it comes to artificial intelligence and robotics, and much of the skill and technology is available in the private sector - not controlled by governments.
2017-08-31
research article
This paper relates the results of deliberation of youth juries about the use of autonomous weapons systems (AWS). The discourse that emerged from the juries centered on several key issues. The jurors expressed the importance of keeping the humans in the decision-making process when it comes to militarizing artificial intelligence, and that only humans are capable of moral agency.
2017-08-30
commentary
“Lethal autonomous weapons threaten to become the third revolution in warfare.”
2017-08-24
commentary
It is necessary to be open-eyed and clear-headed about the practical benefits and risks associated with the increasing prevalence of artificial intelligence.
2017-08-07
report
The development of military technology during the 20th century has increased the capabilities of machines and computers while reducing the number and complexity of tasks conducted by the...
2017-08-01
article
Nuclear weapons have been around for 70 years. They are an old technology, and the norms and institutions that govern them are fairly well established. Emerging technologies, however, could create...
2017-08-01
article
The possibility of life-or-death decisions someday being taken by machines not under the direct control of humans needs to be taken seriously. Over the last few years we have seen a rapid...
2017-07-28
commentary
The present trajectory of AI advancement indicates that future economies and national security will be defined by it, making it among a handful of technologies that will shape global politics.
2017-07-21
report
This Article explores the interaction of artificial intelligence (AI) and machine learning with international humanitarian law (IHL) in autonomous weapon systems (AWS). Lawyers and scientists...
2017-07-05
report
The focus of scholarly inquiry into the legality of autonomous weapon systems (AWS) has been on compliance with IHL rules on the conduct of hostilities. Comparatively little attention has been given...
2017-05-24
report
The legality of autonomous weapon systems (AWS) under international law is a swiftly growing issue of importance as technology advances and machines acquire the capacity to operate without human...
2017-05-05
commentary
The failure of the chemical weapons ban in Syria is not a strike against a proposed global ban on autonomous weapons. Bans derive their strength from morality, not practicality.
2017-04-24
commentary
No one really knows how the most advanced algorithms do what they do. That could be a problem.
2017-04-11
commentary
The legalities of the use of Autonomous Weapon Systems (AWS) in space warfare are examined. While manuals currently exist for air and missile warfare, naval warfare and cyber warfare, a clear gap in the literature is the absence of a manual for space warfare.
2017-04-07
report
One of the few convergent themes during the first two United Nations Meeting of Experts on autonomous weapons systems (AWS) in 2014 and 2015 was the requirement that there be meaningful human...
2017-03-15
article
The US and Chinese militaries are starting to test swarming drones – distributed collaborative systems made up of many small, cheap, unmanned aircraft. This new subset of independently...
2017-02-28
commentary
The future looks HUGE for the U.S. Air Force.
2017-02-23
commentary
A big development is almost here.
2017-02-22
analysis
How has warfare changed over the past 100 years? Is the international community still sufficiently equipped to reasonably minimize its negative effects on ...
2017-02-16
commentary
Thanks to DARPA and BAE Systems.
2017-02-15
commentary
There is a new arms race taking shape, centered around three interconnected technologies: autonomous weapons, swarms, and cyberwarfare.
2017-02-06
commentary
The pace of progress in AI development, the expanding scope for its application, and the growing intensity of the current research effort all suggest that it may not be too soon to revisit and...
2017-02-05
commentary
India’s elevation as chair of a group designed to kick-start talks on lethal autonomous weapon systems gives it the unique opportunity to take a leadership role in global debates on the issue.
2017-01-05
report
Automated warfare including aerial drones that are extensively used in ongoing armed conflicts is now an established part of military technology worldwide. It is only logical to assume that the...
2017-01-04
commentary
In addition to using drones for reconnaissance in Iraq, ISIL has been sending them out with bombs attached.
2017-01-03
commentary
Algorithms are progressing to the point wherein they will be able to allow an Abrams tank crew to operate multiple nearby “wing-man” robotic vehicles in a command and control capacity while on the move in combat.
2016-12-27
report
The legality of autonomous weapon systems (AWS) under international law is a swiftly growing issue of importance as technology advances and machines acquire the capacity to operate without human control. This paper argues that the existing laws are ineffective and that a different set of laws is needed. It examines several issues that are critical for the development and use of AWS in warfare.
2016-12-22
report
The ongoing international humanitarian law (IHL) discussion predominantly centers on whether States’ development and employment of AWS can comply with certain fundamental obligations contained in...
2016-12-19
report
Since 2013 the governance of lethal autonomous weapon systems (LAWS) has been discussed under the framework of the 1980 United Nations Convention on Certain Conventional Weapons (CCW). The discussion is still at an early stage, with most states parties still in the process of understanding the issues at stake—beginning with the fundamental questions of what constitutes ‘autonomy’ and to what extent it is a matter of concern in the context of weapon systems and the use of force. A number of states parties have stressed that future discussions could usefully benefit from further investigation into the conceptual and technical foundations of the meaning of ‘autonomy’.
2016-12-01
report
Since 2013 the governance of lethal autonomous weapon systems (LAWS) has been discussed internationally under the framework of the 1980 United Nations Convention on Certain Conventional Weapons (CCW). Thus far, the discussion has remained at the informal level. Three informal meetings of experts (held in 2014, 2015 and 2016) have been convened under the auspices of the CCW to discuss questions related to emerging technologies in the area of LAWS. Several delegations have, however, already indicated that they have concerns as to the impact that a new protocol on LAWS could have on innovation, particularly in the civilian sphere, since, arguably, much of the technology on which LAWS might be based could be dual use.
2016-12-01
commentary
The U.S. military should balance Americans' ethical concerns over computers making life and death decisions with the need to maintain an edge in the face of rapid advances in artificial intelligence and machine learning across the globe.
2016-11-04
commentary
Ethical concerns over computers making life and death decisions are real, and they’re important
2016-11-03
commentary
The United States has put artificial intelligence at the center of its defense strategy, with weapons that can identify targets and make decisions.
2016-10-25
report
The long and tragic history of human warfare manifests an endless quest for more effective ways to conduct attacks and defeat adversaries. This has in turn driven innovation in means and methods of...
2016-10-16
article
International humanitarian law, its applicability to new weapons, means and methods of warfare and the influence of remote-controlled and autonomous weapon systems on international humanitarian law were among the topics on the agenda for a recent seminar in Seoul.
2016-10-11
analysis
Article 36 of the Additional Protocol I to the Geneva Conventions (AP I) states that each State Party is required to determine whether ...
2016-10-06
commentary
What should we make of the 'AI' cruise missile?
2016-09-05
commentary
Today, the Defense Science Board (DSB) released a long-awaited study, simply titled Autonomy. Since the late 1950s, the DSB has consistently been at the forefront of investigating and providing policy guidance for cutting-edge scientific, technological, and manufacturing issues.
2016-08-23
commentary
If you’re a dictator who can’t trust your own people in the military, you can still trust a machine to do your dirty work.
2016-07-29
commentary
The defense community has already begun a healthy dialogue about the ethics of AI in combat systems
2016-07-18
commentary
There's a lot of talk about regulating autonomous weapons, but thoughtful, effective policy will be difficult to make if we can’t even agree on what they are
2016-06-24
report
The United Nations (UN) Convention on Certain Conventional Weapons (CCW) discussions on lethal autonomous weapons (LAWS) have been confused, not constructive, and largely for the same definitional...
2016-06-19
analysis
As weapon systems take over more and more functions in the targeting cycle that used to be fulfilled by humans, it is increasingly ...
2016-06-01
report
This entry in the Encyclopedia of Public International Law explores the development toward and the international legal challenges posed by autonomous weapon systems (AWS). The article covers...
2016-05-30
article
In April 2016, head of ICRC arms unit Kathleen Lawand was invited to London to give the keynote address at the annual meeting of the International Committee for Robot Arms Control (ICRAC) Summit, held at Goldsmith University. In her presentation, Lawand presented the ICRC's views on autonomous weapon systems, i.e. weapons that can select and fire upon targets on their own,
2016-05-06
commentary
Air-launched laser weapons firing from fighter jets and drones would quickly incinerate a wide range of on-the-move enemy air and ground targets such as aircraft, drones, vehicles, buildings and ground forces.
2016-05-04
article
As a contribution to ongoing discussions in the CCW, this paper highlights some of the key issues on autonomous weapon systems from the perspective of the ICRC, and in the light of discussions at its recent expert meeting.
2016-04-11
article
Autonomous weapon systems is one emerging category of weapon of particular concern in Africa. As rapid advances continue to be made in new and emerging technologies of warfare, notably those relying on information technology and robotics, it is important to ensure informed discussions of the many and often complex challenges raised by these new developments.
2016-04-11
article
Convention on Certain Conventional Weapons - statement of the ICRC, read at the Meeting of Experts on Lethal Autonomous Weapons Systems.
2016-04-11
analysis
In recent years, a wide array of new technologies have entered the modern battlefield, giving rise to new means and methods of warfare, ...
2016-04-08
report
Nations from around the world met at the United Nations in Geneva, Switzerland to discuss autonomous weapons, potential future weapons that would select and engage targets on ...
2016-04-07
report
In most circumstances AWS are incapable of complying with rules of International Humanitarian Law and International Human Rights Law leading to violations of important rights like the right to...
2016-03-31
report
States have an obligation to conduct a legal review of all new weapons to ascertain the legality of the weapons and also to determine whether their use will, in all or some circumstances, violate...
2016-03-28
report
The emerging notion of ‘Meaningful Human Control’ (MHC) was suggested by NGO Article 36 as a possible solution to the challenges that are posed by Autonomous Weapon Systems (AWS). Various states,...
2016-03-27
report
In this paper I discuss the relevance of the African notion of ‘ubuntu’ or humanity to the current on-going AWS debate. After tracing the notion of ubuntu back to the pre-colonial time in Zimbabwe...
2016-03-27
report
Since 2013 the governance of lethal autonomous weapon systems (LAWS) has been discussed within the framework of the 1980 United Nations Convention on Certain Conventional Weapons (CCW). The discussion is at an early stage, with most states still in the process of understanding the issues at stake. Extended discussions will be necessary to resolve contentious issues and generate a constructive basis for any potential formal negotiation.
2016-03-01
report
20YY Future of Warfare Initiative Director Paul Scharre examines the risks in future autonomous weapons that would choose their own targets and the potential for catastrophic ...
2016-02-29
research article
The possibility that today’s drones could become tomorrow’s killer robots has attracted the attention of people around the world. Scientists and business leaders, from Stephen Hawking to Elon Musk, recently signed a letter urging the world to ban ...
2016-02-16
report
This paper applies the author's previously published model for evaluating weapons' susceptibility to attempts to generate international regulations on autonomous weapons. The paper concludes that...
2015-11-11
report
Article 36 of Additional Protocol I of the 1949 Geneva Conventions requires states to conduct legal reviews of all new weapons, means and methods of warfare in order to determine whether their use is prohibited by international law. However, reviewing the legality of weapons with automated and autonomous features presents a number of technical challenges. Such reviews demand complex procedures to test weapon performance and to evaluate the risks associated with unintended loss of control. As such assessments require significant technical and financial resources, there is a strong incentive for deepening cooperation and information sharing between states in the area of weapon reviews. Increased interaction can facilitate the identification of best practices and solutions to reduce costs associated with test and evaluation procedures.
2015-11-01
report
Various States are developing increasingly autonomous weapon systems which promise vast changes in the conduct of armed conflict over the coming years, but there remain significant unanswered...
2015-10-26
research article
Autonomous weapons would have the capacity to select and attack targets without direct human input. One important objection to the introduction of such weapons is that they will make it more difficult to identify and hold accountable those responsible for ...
2015-10-01
article
Observations on the ‘Robots-don’t-Rape’ argument.
2015-08-05
commentary
The debate over using artificial intelligence to control lethal weapons in warfare is more complex than it seems.
2015-08-03
article
Technological advances in weaponry mean that decisions about the use of force on the battlefield could increasingly be taken by machines operating without human intervention. A recent event in Canberra, Crossing the Rubicon: the path to offensive autonomous weapons, focused on the range of issues associated with the potential use of these types of systems. Following the event,
2015-07-10
commentary
Autonomous weapons could be a military game changer that many want banned. Before considering such a move, we need to refine the debate—and America must demonstrate leadership.
2015-06-26
report
While robots are still absent from our homes, they have started to spread over battlefields. However, the military robots of today are mostly remotely controlled platforms, with no real autonomy....
2015-05-05
video
Autonomous weapons are an emotive subject, with the potential to change the whole nature of warfare. Could machines one day be able to carry out killing without human control, and what should we do about the legal and moral implications of that possibility? A five-day conference in Geneva has been looking at just such issues. Kathleen Lawand, Head of the ICRC's arms unit and
2015-04-17
statement
Convention on Certain Conventional Weapons (CCW), Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), 13 - 17 April 2015, Geneva. Statement of the ICRC
2015-04-13
report
The prospect of the use of so-called autonomous weapon systems has raised significant legal and moral concerns. This chapter contributes to the debate by providing an alternative perspective to the...
2015-04-13
commentary
To forestall threats from future killer robots, don’t ignore today’s “semi-autonomous” weapons
2015-04-12
report
CNAS experts Paul Scharre, Michael Horowitz, and Kelley Sayler provide a primer for UN delegates on autonomous weapons...
2015-04-07
report
Although many are concerned that autonomous weapon systems may make war “too easy,” no one has addressed how their use may alter the distribution of the constitutional war power. Drones, cyber...
2015-02-26
commentary
The campaign against fully autonomous weapons may be a road map for confronting tomorrow’s dangerous technologies
2015-02-22
report
20YY Warfare Initiative Director Paul Scharre and Adjunct Senior Fellow Michael Horowitz discuss future military systems incorporating greater autonomy....
2015-02-13
commentary
The very first resolution of the General Assembly of the United Nations, in January 1946, addressed the problems raised by the discovery of atomic energy. Despite civil society's efforts, led by scientists and women's peace organizations, leaders of the United States and the Soviet Union rejected measures to curb nuclear ambitions. ...
2015-01-28
commentary
China's military is certainly developing some deadly capabilities. Here are five ways it could become even deadlier.
2015-01-21
report
This comment explores the legal implications and predicted efficacy of autonomous weapon systems, as well as the effects that their use might have on both international humanitarian law and the...
2015-01-21
report
This article briefly describes why the State parties to the Convention on Certain Conventional Weapons rejected human rights groups’ call for a ban on so called “killer robots.” This article...
2015-01-10
commentary
"We are standing at the cusp of a momentous upheaval in the character of warfare, brought about by the large-scale infusion of robotics into the armed forces."
2014-11-20
report
The possibility that today’s drones could become tomorrow’s killer robots has attracted the attention of people around the world. Scientists and business leaders from Stephen Hawking to Elon Musk...
2014-11-13
report
This UNIDIR paper examines what may be understood by “meaningful human control”, its strengths and weaknesses as a framing concept for discussions on autonomy and weapon systems...
2014-11-13
article
A challenge to human control over the use of force. Technological advances in weaponry mean that decisions about the use of force on the battlefield could increasingly be taken by machines operating without human intervention. Here, we examine the potential implications of such a profound change in the way war is waged, and caution against the use of such weapons unless
2014-11-12
report
CNAS experts Michael Horowitz, Paul Scharre and Kelley Sayler examine the issues facing U.N. delegates, along with recommendations for action....
2014-11-10
report
Expert meeting report The ICRC convened an international expert meeting on autonomous weapon systems from 26 to 28 March 2014. It brought together government experts from 21 States and 13 individual experts, including roboticists, jurists, ethicists, and representatives from the United Nations and non-governmental organizations. The aim was to gain a better understanding of
2014-11-01
report
Although remote-controlled robots flying over the Middle East and Central Asia now dominate reports on new military technologies, robots that are capable of detecting, identifying, and killing...
2014-06-11
article
Sandvik, Kristin Bergtora & Nicholas Marsh (2014) Lethal Autonomous Weapons: Issues for the International Community, Security & Defence Agenda, 9 May.
2014-05-09
commentary
The campaign against lethal autonomous weapons is making progress, but to succeed it will have to answer some questions, including: What distinguishes killer software from non-killer software?
2014-05-07
article
On April 10th 2014, the American Society of International Law and the International Law Association organized a joint Conference in Washington DC on autonomous weaponry and armed conflict. The panel addressed the legal, ethical and political challenges posed by the development of increasingly autonomous weapons systems. Analyzing automated weapons systems through the lenses of
2014-04-10
policy
Marsh, Nicholas (2014) Defining the Scope of Autonomy, PRIO Policy Brief, 2. Oslo: PRIO.
2014-02-22
commentary
The decisions US leaders make now over unmanned aerial vehicles will have enormous consequences.
2013-11-22
conference paper
Sandvik, Kristin Bergtora (2013) Unmanned Aerial Vehicles and Autonomous Weapons, presented at PhD Course: Emerging Military Technologies - New Normative Challenges, Oslo, 13/11/13 – 15/11/13.
2013-11-13
report
Scientific research on fully autonomous weapons systems is moving rapidly. At the current pace of discovery, such fully autonomous systems will be available to military arsenals within a few...
2013-08-22
report
Even though most robotics experts predict fully autonomous weapon systems will not be available for use on the battlefield for a number of years, the debate over the lawfulness of such systems has...
2013-07-21
report
Once confined to science fiction, killer robots will soon be a reality. Both the USA and the UK are currently developing weapons systems that could be capable of autonomously targeting and killing...
2013-07-10
commentary
Robotics is akin to gunpowder, the steam engine, or the computer. It’s a game-changing technology not merely because of its power, but because of its impact both on and off the battlefield.
2012-10-26
commentary
The inherent advantages of U.S. Medium Altitude Long Endurance (MALE) drones—like the Predator, Reaper, and Global Hawk—for spying on potential adversaries and attacking suspected militants have made them the default counterterrorism tools for the Obama administration.
2012-09-04
commentary
Two recent conferences in Europe highlight the growing concerns among civil society about the military use of unmanned aerial vehicles (UAVs), known as drones. NATO Watch shares these concerns and is calling for a global moratorium on the deployment of armed drones and for urgent international discussions to develop an arms control regime to regulate the use, development and transfer of these weapon systems.
2010-09-27
report
This Review Essay surveys the recent literature on the tensions between autonomy and accountability in robotic warfare. Four books, taken together, suggest an original account of fundamental...
2010-06-07
report
The impact of information technology in the field of military decision-making is superficially less visible than that of a number of other weapon developments, but its importance has grown steadily since the beginning of the 1980s. It is now the focus of special interest and efforts because of its potential role in modern weapon systems and the prospect of its inclusion as an essential ingredient in many military projects such as the Strategic Defense Initiative.
1987-01-01
report
What is the present likelihood of war? Which elements need to be taken into account when making such an assessment? As the world enters the post-nuclear, or 'second' nuclear age, it has to take account of a new revolution in military affairs.
1987-01-01