ANU Coral Bell School of Asia Pacific Affairs – Seminar Series
Evan Schaefer
Technical Specialist, Simulation Services Delivery
Philip Sammons
Mechatronics Engineer
Militaries around the world are increasingly using Artificial Intelligence (AI) in various aspects of their operations, including the management and decision-making processes of military systems. However, Dr Ben Zala argues that the rate of adoption and development of AI is outpacing the required oversight of those systems. This could potentially lead the world into the next 'Cuban Missile Crisis', but this time without the rational thinking of people like Vasily Aleksandrovich Arkhipov.
Introduction
As further improvements are made in the development of AI, at what point should we become concerned about how it is utilised by a Nation's Military in the management and potential use of both conventional and nuclear capabilities? Dr Ben Zala argues that the time for concern is now, as the progress and integration of AI into military systems rapidly outpaces any possibility of control over its use, and of decision makers understanding how it could affect the accuracy of the information presented to them.
As Dr Ben Zala goes on to explain, the world has not been in such disarray since the 1960s. There are no longer any effective nuclear arms reduction agreements in place between the Major Powers, especially since Russia recently suspended its participation in the New Strategic Arms Reduction Treaty (New START).
With the increasing prevalence of AI on the battlefield, as highlighted in the recent Ukraine conflict, should AI be given access to, or be involved in, military decisions over the use of conventional and nuclear arms during conflict? Dr Ben Zala argues that we should consider this question very carefully.
What is Artificial Intelligence (AI) and how is it used by Military?
In the most simplistic terms, AI is computer code that can react to external stimuli and make decisions based on those inputs. AI is generally broken down into four classifications.
Reactive Machines – Computer code with no memory, designed to be task specific. Examples include autonomous systems that are specifically programmed to react to inputs in a specific way or filter data based on programmed algorithms.
Limited Memory Machines – Computer code that can remember some of its previous inputs and actions and use this information to shape future outcomes. This category has given rise to the large language models and image recognition systems that have contributed to the explosion of AI use. A minimal sketch contrasting these first two types follows this list.
Theory of Mind – The first of the theoretical types of AI, where the computer code has the potential to understand the world and its inhabitants, recognising that they have emotions and can be affected by the actions of others.
Self-awareness – The last of the theoretical types of AI. This is where the computer code is so complex that it becomes completely aware of itself and its surroundings and can think like a human.
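To make the difference between the two existing categories concrete, here is a minimal illustrative sketch in Python. The class names, inputs and threshold values are invented purely for illustration and are not drawn from any real military system.

```python
# Illustrative sketch only: contrasts a stateless "Reactive Machine" with a
# stateful "Limited Memory Machine". All names and values are hypothetical.
from collections import deque


class ReactiveFilter:
    """Reactive machine: no memory, so the same input always yields the same output."""

    def classify(self, radar_return: float) -> str:
        # The decision depends only on the current input.
        return "THREAT" if radar_return > 0.8 else "CLEAR"


class LimitedMemoryFilter:
    """Limited memory machine: recent inputs influence the current decision."""

    def __init__(self, window: int = 5):
        self.history = deque(maxlen=window)  # remembers a short window of past inputs

    def classify(self, radar_return: float) -> str:
        self.history.append(radar_return)
        # The decision uses a rolling average, so past inputs shape the outcome.
        average = sum(self.history) / len(self.history)
        return "THREAT" if average > 0.8 else "CLEAR"


if __name__ == "__main__":
    reactive, limited = ReactiveFilter(), LimitedMemoryFilter()
    for reading in (0.9, 0.5, 0.85):
        print(reading, reactive.classify(reading), limited.classify(reading))
```

Given the same final reading of 0.85, the reactive filter reports a threat while the limited memory filter, remembering the earlier low readings, does not: the same input can produce different outputs once memory is involved.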
In the Military, AI is used in multiple aspects on the battlefront, from systems used for decision making over the movement of personnel and equipment, through to systems designed to automatically target and shoot down incoming missiles. In the context Dr Zala refers to, AI could play a role in the decision-making cycle and potential use of Nuclear weapons by Military Commanders and/or the Commander in Chief of a Nation.
Where is the world now with AI and what are the concerns over it with Nuclear weapons?
As the world has progressed the development of AI and begun integrating it with everyday systems like mobile phones and robotic vacuums, Dr Zala argues that individuals risk becoming complacent about what it can be used for and how. Without any understanding of the computer code that makes up an AI system, we are integrating it with whatever we can, assuming it will always operate in a way we expect and trust, rather than going to the effort of verifying that for ourselves.
Instead, we use AI to gather and process information on our behalf, trusting that the recommendations it presents are accurate and in accordance with our own expectations, all in order to reduce the time taken to make decisions, such as whether or not to press the big red launch button.
But what happens if the AI has been altered, or fed incorrect information in the first place? Should we then trust that what the AI presents to us is the truth, or is it an altered view of the truth?
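A toy sketch of this concern, in Python. The decision logic, sensor values and threshold below are entirely hypothetical; the point is only that identical code produces a different recommendation when its inputs are tampered with upstream.

```python
# Toy illustration of the trust problem: the decision logic is identical,
# but input data tampered with upstream silently changes the recommendation
# shown to the human. All values and names here are hypothetical.

def recommend(sensor_readings: list[float]) -> str:
    """Recommend an action based on the fraction of 'hostile' readings."""
    hostile_fraction = sum(r > 0.7 for r in sensor_readings) / len(sensor_readings)
    return "ESCALATE" if hostile_fraction > 0.5 else "HOLD"


genuine = [0.2, 0.3, 0.4, 0.6, 0.3]    # what the sensors actually reported
tampered = [r + 0.5 for r in genuine]  # the same feed, altered before processing

print(recommend(genuine))   # -> HOLD
print(recommend(tampered))  # -> ESCALATE: same code, poisoned inputs
```

The decision maker sees only the recommendation, not the poisoned feed behind it, which is precisely the trust problem Dr Zala describes.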
What could be done and by when?
To avoid another situation like the Cuban Missile Crisis of the 1960s, Dr Zala recommends that the world work towards developing new arms control agreements, or unilateral arms reductions, in an environment of open communication between Nations, supported by less formal disarmament agreements and an “I’ll go first” approach to the reduction of Nuclear arms.
So, should AI stay or should it go?
If appropriately utilised and clearly understood, AI can be, and has been, of great assistance in the development of technology for the modern age. Just as the harnessing of electricity transformed the way we live today, AI has the potential to greatly enhance everyday living, from something as simple as choosing a TV show to watch through to driving us to the shops. However, as with all emerging and disruptive technology, it is crucial that we understand what it is and how it works, and probably just don’t give it the keys to a Nuke.
Meikai Group
Meikai is a Professional Services Consultancy dedicated to facilitating and solving capability problems and challenges for our clients. Meikai specialises in the provision of engineering, project management and program delivery services to support the implementation of emerging and disruptive technology within the ICT, simulation, and training domains.
Meikai holds a Futures portfolio that explores emerging technology. It fosters cutting-edge thinking, skills and competence in our workforce, allowing us to continue providing value and quality to our clients. Meikai conducts research into Blockchain, Web 3.0 and NFTs as part of the Futures portfolio.
About the Authors
Evan Schaefer – Technical Specialist, Simulation Services Delivery
Evan Schaefer is a Simulation Services Delivery Specialist with over a decade of experience in IT Service Management (ITSM) and Simulation systems in Defence. His extensive hands-on experience and qualifications in both Military Simulation and the various IT Service Management systems in Defence and Industry have allowed him to be deeply involved in the design and management of complex distributed simulation concepts to support military training.
Philip Sammons – Mechatronics Engineer
Philip Sammons is a highly skilled professional at Meikai Group in Australia with a Bachelor of Engineering (Mechatronics) and a Master of Engineering. His extensive experience spans over a decade across domains such as Autonomous Robots, Simulation Integration, Business Process Automation, Simulation-Enabled Training, Building Security, Systems Engineering, and Information and Communications Technology (ICT) Architecture, Delivery and Management.
References
Office of the Historian. “The Cuban Missile Crisis, October 1962”. Milestones: 1961-1968. https://history.state.gov/milestones/1961-1968/cuban-missile-crisis
ANU. Coral Bell School of Asia Pacific Affairs. Discussing AI, Automated Systems and the Future of War Seminar Series. “Should AI Stay or Should AI Go? First Strike Incentives & Deterrence Stability in the Third Nuclear Age”. 4 Dec 2023. https://bellschool.anu.edu.au/event/should-ai-stay-or-should-ai-go-first-strike-incentives-deterrence-stability-third-nuclear-age
The Atlantic. “Never Give Artificial Intelligence the Nuclear Codes”. 2 May 2023. https://www.theatlantic.com/magazine/archive/2023/06/ai-warfare-nuclear-weapons-strike/673780/
Coursera. Articles. “4 Types of AI: Getting to Know Artificial Intelligence”. 30 Nov 2023. https://www.coursera.org/articles/types-of-ai