Two major ethical human-factors issues involving the use of Unmanned Aerospace Systems (UAS) in remote warfare are responsibility and accountability. This is especially true the more autonomous UASs become and the further removed humans are from their operation. There are similarities between human control of unmanned and manned combat aircraft: in both, the pilot in command (PC) makes the decision to engage the enemy in real time, so accountability and responsibility rest with the pilot and, in most military operations, with those who gave the order to engage the enemy with lethal force.
But who bears responsibility and accountability when a more autonomous UAS decides on its own to fly into enemy territory to engage and kill what it perceives as the enemy? Is it the PC or the commander? Is it the programmer who wrote the code the UAS runs on and that operators program with? Or is it the UAS itself? In principle, humans retain overall control of, and responsibility for, the machines or “robots” they create. However, establishing the same for highly autonomous machines is not that simple. Johnson and Noorman’s study “Responsibility Practices in Robotic Warfare” (2014) discusses these key issues as they pertain to three different conceptions of machine autonomy: high-end automation, autonomy as something other than automation, and collaborative autonomy (p. 14).
While low-end automation leaves most operations and decisions with the human, high-end automation accomplishes most of the machine's operations with very little human input. These are processes, or series of processes, that render operations automatic (as with an autopilot); the machine's autonomy extends only as far as the process itself.
In the conception of autonomy as something other than automation, the machine performs actions directed by humans without having to be told how to do them. Humans do not have to instruct the machine in the individual processes required to achieve its goal. “Machine autonomy, from this perspective, refers to robotic systems that would somehow be more flexible and unpredictable, compared to automated systems, in deciding how to operate - given predefined goals, rules, or norms” (Johnson & Noorman, 2014). Many would categorize this type of autonomy as artificial intelligence.
Both conceptions above may seem to take the human out of the loop. The collaborative autonomy conception, by contrast, suggests that humans remain the key decision-makers in most machine functions. Much research into human-computer interaction focuses on terms such as collaborative control, adaptive autonomy, or situated autonomy, which stress the responsiveness of machines to humans. Here the machine or robot seeks direction from the human operator on major decisions, thereby keeping the human in the loop. This conception offers the most promise for maintaining human control, so that responsibility and accountability rest with the human operator of UASs used in remote warfare. Collaborative autonomous capabilities will greatly enhance the continued use of UAS in remote warfare in the years to come.
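The human-in-the-loop pattern described above can be sketched in a few lines of code. This is a minimal, purely illustrative sketch, not anything from Johnson and Noorman's article: all names (`decide`, `ROUTINE`, `MAJOR`, `human_approves`) are hypothetical, and a real system would involve far more than a single gate.

```python
# Illustrative sketch of collaborative autonomy: the machine acts on its
# own for routine tasks, but must obtain explicit human authorization
# before any major (e.g., lethal) action. All names are hypothetical.

ROUTINE, MAJOR = "routine", "major"

def decide(action, severity, human_approves):
    """Return a (status, action, authority) record for an action request.

    human_approves is a callable(action) -> bool standing in for the
    human operator kept 'in the loop'.
    """
    if severity == ROUTINE:
        # Autonomy extends only to routine operations.
        return ("executed", action, "machine")
    if human_approves(action):
        # The machine seeks command from the operator for major decisions.
        return ("executed", action, "human-authorized")
    # Denied major actions are never carried out autonomously.
    return ("aborted", action, "human-denied")

# Navigation proceeds autonomously; engagement requires operator approval.
print(decide("adjust altitude", ROUTINE, lambda a: False))
print(decide("engage target", MAJOR, lambda a: False))
```

The design point the sketch makes is that authority, and therefore accountability, is recorded with every executed action: a major action can only ever carry a human authority label.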
Reference
Johnson, D. G., & Noorman, M. E. (2014, May). Responsibility Practices in Robotic Warfare. Military Review, pp. 12-21. Retrieved from US Army Combined Arms Center.