
William A. Schabas on Lethal Autonomous Weapon Systems and existing legal frameworks

Autonomous weapons can be programmed to abide by international humanitarian law, and if they are, they will. Human beings are programmed to abide by international humanitarian law, but they do not all the time abide by it. In that sense, autonomous weapons are superior.

William A. Schabas is professor of international law at Middlesex University in London, emeritus professor of international criminal law and human rights at Leiden University and distinguished visiting faculty at the Paris School of International Affairs, Sciences Po. He is also professor emeritus at the National University of Ireland Galway and honorary chairman of the Irish Centre for Human Rights. He is the author of more than twenty books in the fields of human rights and international criminal law, his most recent being The Customary International Law of Human Rights and the published version of the course he delivered at the Hague Academy of International Law in January 2021 entitled Relationships between International Criminal Law and Other Branches of Public International Law. Professor Schabas prepared the quinquennial reports on the death penalty for the United Nations Secretary-General in 2020, 2015 and 2010. He was a member of the Sierra Leone Truth and Reconciliation Commission. Professor Schabas is an Officer of the Order of Canada and a member of the Royal Irish Academy, a laureate of the Vespasian Pella Medal, and he holds several honorary doctorates.


Interview conducted by Yasmin de Fraiture (2022)

 

The interview was conducted in the spring of 2022 and has been edited for clarity.


Imagine going about your day and suddenly becoming the victim of a Lethal Autonomous Weapon System. How realistic is it that this will happen? And if you are a victim, where can you go? Which legal frameworks are in place to help you? Artificial Intelligence (AI) is increasingly used to improve weapon systems, and there has been a significant amount of innovation in weapon technology over the last decades. Some believe that we are moving towards a time when warfare will be more and more automated. This has made several Non-Governmental Organizations (NGOs) gravely concerned about the development of weapons that can function without human intervention and that could cause harm or death to people, whether inside or outside of a war.


In this interview, William A. Schabas discusses the status of Lethal Autonomous Weapon Systems (also known as LAWS or ‘killer robots’) in International Humanitarian Law and shares his views on the role of humans, victims and responsibility. He also considers whether victims' rights are properly addressed in existing legal frameworks, what meaningful human control of weapon systems entails, whether a machine can commit a crime, and the lack of intent within the system itself.


Do you find that the phenomenon of LAWS is properly addressed in international law and if so, to what extent?


I do not think it is really addressed in international law, except in the sense of the general prohibition on weapons that cause unnecessary suffering or superfluous harm, or that are indiscriminate. I think it is one of those weapons that is not specifically regulated, although there have been calls to do so.


Are those strong calls, is it something of concern?


For some. This is a question where I do not have a firm view myself, because there are some people who think this is a very important issue that needs to be resolved, and I am not convinced by that argument. First of all, I think there have always been weapons that are to some extent autonomous; an anti-personnel mine is really an autonomous weapon. It is targeted in a sense, but then it is left, and when it is employed, it is not controlled by a human being. So it is not a new phenomenon, although it is obviously much more sophisticated now. The fact is that weapons can be autonomous, and people raise the point that by depriving them of human judgment they cannot show mercy. They cannot be humanitarian because they cannot bring human emotions into their decision-making. At the same time, I think the opposite can be said, and that is that they cannot be unnecessarily cruel either.


Autonomous weapons can be programmed to abide by international humanitarian law, and if they are, they will. Human beings are programmed to abide by international humanitarian law, but they do not all the time abide by it. In that sense, autonomous weapons are superior.


Then do you think that the laws that exist now, which address certain elements of it, suffice?


It depends on what the laws would do. You can have a law that prohibits the use of certain types of weapons. Personally, I would prohibit all weapons, but International Humanitarian Law cannot do that. It cannot prohibit all weapons, because that would amount to prohibiting war, and International Humanitarian Law regulates war rather than prohibiting it.


So then the issue is on what basis you would prohibit it. I would question whether there is really a stronger basis for prohibiting it than there is for prohibiting any other weapon.


How do you look at the element of intent, which you need in order to establish responsibility for committing a war crime? A machine does not have this mens rea [1] element, so where do we find the responsibility? A machine cannot commit a crime.


There would be levels. At some point there will be stages where the machine is programmed. We are talking in generalities now, but if we take the anti-personnel mine as a very simple version of an autonomous weapon, somebody has to target it by placing it in a location, and it is there that it is arguably indiscriminate or causes unnecessary suffering. You could consider the person who targets it. I do not think it is the manufacturers, although they might be accomplices in war crimes.


They might have known what it was made for?


Well yes, but that is similar to the manufacturer of a firearm. You cannot say that the manufacturer commits a war crime unless they are directly involved in developing the firearm and giving it to somebody, knowing that they will likely commit a war crime; then you can start to talk about criminal responsibility. However, I do not think we are going to do this for developing weapons, which after all can be weapons for defensive purposes too.


Do you think, as some civil society organizations are pushing for, that it is realistic to have a full prohibition of these types of weapons or that this will never be an option?


We have not got a full prohibition on anti-personnel mines. We do not have a prohibition on cluster munitions. We have treaties on anti-personnel mines and cluster munitions, but they have not been ratified by many countries, so whether it is realistic is hard to say.


Meaningful human control and human-machine interaction are terms that get a lot of attention within the debate on LAWS. Which elements would that entail? How can you define that human role?


You could look at an anti-personnel mine: if it is not used indiscriminately, it means that whoever places it knows where it is and can pick it up when it is no longer needed. And of course, a lot of the use of anti-personnel mines involves mapping where they are placed, because the people using them need to know where they are, so they do not go walking in the minefield. As we said with cluster munitions, part of regulating those is making them detectable. If they are not used, you can find them after the war is over. Otherwise, they are a danger, so I think you can do things like that.

Knowing how to stop the thing would seem to be part of it, because if you have a robot out there fighting a battle, how does the robot know when the war is over?


Perhaps someone can programme it to do a specific task, and then have it self-destruct when it is done?


Yes, but you still have the case where there is an agreement on a ceasefire and the robot is out there and one does not know how to stop it. I think that is probably an important requirement for using them: you must be able to stop them. So if you have that, it means that they are not entirely autonomous robots.


If the war stops before they are done with their mission, you still have to stop them.


There are things like a guided missile that can steer itself. It is probably somewhat robotic, and it would be near impossible to stop; we are talking about a time horizon of minutes. But if you have a robot out on the battlefield, you have to be able to tell the robot that there is a ceasefire and that it has to stop fighting.


Maybe there could be human responsibility in the sense that someone should intervene if they know an attack will take place that might fall outside of this war. If you can stop the machine, you should do it. However, is this an element that can be addressed in a treaty? Is making non-intervention criminally punishable an option?


Absolutely. Then you could start talking about criminal responsibility, if you make it a requirement that there be an ability to eliminate the threat. I am sure people have thought this through more than I have, but with an anti-personnel mine, it is the simple thing of knowing where it is and being able to remove it. With cluster munitions, it is having the bomblets be identifiable and capable of being detected and removed once the battle is over. Or maybe you just have to be able to draw a map, draw a line around where you have used them, and say nobody should ever go there again. So it is about warning, even if you are not in a war. Suppose, as an experiment, you put some anti-personnel mines in the Jardin du Luxembourg and did not keep track of where they were. When the experiment is over and you go to retrieve them, you cannot find two of them. If you just walked away, and then a little kid stepped on one, you would be responsible for leaving it there. However, if you went to the park’s authorities and told them that there are two bombs in that area that cannot be found, so that they can put a fence around it and warn people not to go there, then you are probably not going to be held criminally responsible. So it is that kind of level of responsibility.


Finally, there are people in Yemen who went to court in Germany because of United States drone attacks on their village, in which German intelligence was used to support the drone activity. Would a similar prosecution be likely if there were, hypothetically, victims of a LAWS attack? Considering the very high use of artificial intelligence within these drones, they might be comparable in some ways to LAWS.


I do not know how autonomous drones are. I think most drones are controlled and targeted. It is like the debate about driverless cars. Personally, I look forward to driverless cars. Not only will I enjoy not having to drive anymore, but I think it will be safer.


People say of driverless cars: how will they be able to react to emergencies? They will have shortcomings, but they will not be able to get drunk on Saturday night and drive home because they think they are still able to drive.


They will not be able to fall asleep while driving a car on the motorway, so I do not know. I do not know how to regulate them. With drones generally, I do not see how you are going to prohibit them or regulate them, because all weapons use artificial intelligence in some way or another. And they are still being controlled by people. Most airplanes you fly now are pretty much running on artificial intelligence.

 

About the interviewer: Yasmin de Fraiture graduated with a Master’s in International Security, with concentrations in Human Rights and Diplomacy, from Sciences Po Paris. Previously, she obtained her Bachelor’s in Political Science from Leiden University. Her experience includes internships with the Dutch Ministry of Foreign Affairs, NATO and the OECD Nuclear Energy Agency. She is interested in the legal, societal and ethical consequences of the increased use of Artificial Intelligence (AI) in weapon systems.

 

[1] Mens rea is the criminal intention or knowledge that an act is wrong (Collins Dictionary, 2022).



