
Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence

Based on:

Journal Article (2022)

Open access

This research discusses what international relations measures are required for an artificial superintelligence (ASI) and why a Universal Global Peace Treaty would help ensure a peacebuilding principle is adopted by all parties, including the ASI.

Research collaborators:
Elias Carayannis
Cite page
Draper, John. 'Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence'. Acume. https://www.acume.org/r/optimising-peace-through-a-universal-global-peace-treaty-to-constrain-the-risk-of-war-from-a-militarised-artificial-superintelligence/
Peace, Justice and Strong Institutions

With the increasing application of artificial intelligence (AI) to warfare, attempts to develop advanced applications of artificial superintelligence (ASI) in this field are almost inevitable. At the moment, the ASI precursor technology, artificial general intelligence (AGI), is for the most part not being engineered for warfare; instead, it is being created for civilian purposes. However, because of its potential for technological supremacy, once achieved, ASI will certainly be applied to, or developed for, warfare.

AGI technology is being developed by many projects globally, with Silicon Valley and China the closest to success. AGI will certainly be informed by principles and values. For instance, if Russia were the first to develop this technology, it could be informed by Cosmism (or Putinism). Given that Russian expansionist nationalism involves 'hot' war, a Cosmist ASI could direct a global war.

Preloading principles into an ASI thus seems essential. For instance, if an emerging ASI endorsed a 'Star Trek module', it could adopt an optimistic vision of how to develop Earth, ourselves, the solar system, and the galaxy, with one or more ASIs working together with humanity as friendly partners.

This is why Eliezer Yudkowsky and others in Silicon Valley are trying to hard-code 'friendliness' into an ASI so that it could never turn on humanity, i.e., by hard-coding it to always act altruistically via high-level philosophical principles, or 'supergoals'.

The obvious threat is that an ASI could still be used by one party to wage catastrophic war on another through political subversion of its supergoals. Turchin and Denkenberger's position is that politico-military subversion of an ASI will always be attempted; this is also our position in this paper, as nothing at present prevents such subversion. With this context in mind, a 'Terminator' situation is a possible worst-case scenario. Hot, and possibly even cold, wars could indicate to an ASI that countries prefer war and violence over peace, and 'winning' a game of global war could be what the ASI understands to be its own role.

With regard to peacebuilding, the current UN system has not been good enough at preventing major conflicts, especially if we bear in mind the latest Russia-Ukraine war and the possibility of larger scale warfare arising from the New Cold War.

The end goal of peacebuilding, with regard both to international relations and to humanity's relations with an ASI, is a universal, global peace. A draft Universal Global Peace Treaty, prepared by the Center for Global Nonkilling, already exists. It would serve as a diplomatic symbol for peace understandable by an ASI, especially if the ASI's creating nation state were a signatory, and all the more so if the ASI itself were also a signatory, as the rules and principles of diplomacy would then hopefully apply. The closest we have come so far to such an instrument is the UN-backed Global Ceasefire of 2020, which emerged seemingly from nowhere in response to COVID-19.

The article also discusses what diplomatic rules could be deployed. We suggest conforming instrumentalism, the dynamic underpinning the postwar-era Geneva Conventions and likely the UN itself.

Our position is that hard-coding 'peace' as a supergoal for humanity via a Universal Global Peace Treaty (UGPT) would force an artificial superintelligence (ASI) to consider whether peace might also be its own supergoal (hopefully via conforming instrumentalism), and would so act as a check on the ASI waging war, even if directed to do so by a belligerent nation state, and if necessary irrespective of whether the belligerent is a signatory. The UGPT could also set aside a signatory position for the ASI, which would concretise the possibility of it acting independently to transcend any warlike ambitions on the part of its nation-state creator.

Thus, the purpose of this research was to understand the threat from ASI-directed or ASI-enabled warfare and whether a UGPT is needed to help reduce the risk of an antagonistic ASI. The article argues that a UGPT could provide the necessary context to inform an ASI that humans are indeed peaceful creatures who want peace as one of their core principles.

Academically, this research examined whether this peacebuilding measure would be viable through the lens of conforming instrumentalism, a new branch of international relations theory. 'Conforming' refers to the idea that states either conform upwards to civilisational ideals or conform downwards towards, for instance, genocide, while 'instrumentalism' refers to the fact that treaties like the UGPT are negotiable diplomatic tools. Conforming downwards would be a 'Terminator' scenario (humanity being killed by an ASI), whereas establishing agreements and conventions that prioritise, for instance, humanity's peaceful expansion into space in partnership with an ASI while solving climate change would be conforming upwards.

 

Key findings

  • Through conforming instrumentalism, a Universal Global Peace Treaty (UGPT) could help to constrain the risk of war and promote peaceful decision-making by an artificial superintelligence (ASI), by putting in place conditions under which the majority of states' priority is peace.

    A global agreement signed by the majority of countries would inform an ASI that humanity prioritises building peace. By providing concrete evidence that peace is an important human value, this would decrease the risk of an antagonistic relationship between humanity and an ASI.

  • A UGPT needs to be in place before 2045, the target year for many companies seeking to realise artificial general intelligence (AGI); after that, it will be too late.

    Not having a UGPT in place would greatly increase the risk of antagonistic decisions being made by an artificial superintelligence. However, with Russia, a permanent member of the Security Council, currently waging war in Ukraine, no movement on this is possible unless a strong advocacy coalition emerges.

Proposed action

  • Peace is a top-level value for much of humanity, but not for all the planet's nation states. To re-prioritise peace and avoid ASI-directed or ASI-enabled conflict, we must prove that we want a peaceful world through peaceful actions such as conventions, treaties and agreements, with a Universal Global Peace Treaty (UGPT) being essential to revitalising the United Nations. This concrete action would communicate to a future ASI that our priority is peace, enabling partnership and constraining the risk of war.

  • As part of this research, we drafted a Cyberweapons and Artificial Intelligence Convention, which we hope can serve as a prelude to the ASI component of the UGPT. This is a first step to build on.

  • A UGPT could be brought to the bargaining table when the Russia-Ukraine war is over, in an attempt to revitalise the UN system.

  • We need more regulation of the development of artificial superintelligence. Such regulation would communicate to politicians that development is safe, bringing them on board while dissuading them from subverting AGI. It would also help ensure that development is safe by advancing a peace-based AI infrastructure tied to an international treaty-based approach, the UGPT. Principles-based approaches such as Eliezer Yudkowsky's concept of Friendly AI should also embrace international relations theory, especially concepts that might enable 'best practices' for human-ASI relations, like conforming instrumentalism.

  • In historical terms, there exists only a very short window of time for countries to sign such a treaty. A draft of the treaty already exists, but it needs to be progressed urgently, certainly before 2045, many companies' target for achieving AGI. One way to accelerate the process is to find a mechanism enabling every country to understand that, as with the race to draft and sign the Geneva Conventions, it is in a race to shape the UGPT and hence the principles that an ASI might be persuaded to adopt.


Acknowledgements

We would like to extend a special thank you to Esther Feeken for their invaluable assistance in preparing this research summary.



Brief created by: Dr John Draper | Year brief made: 2022

Original research:

  • Carayannis, E., & Draper, J. (2022). 'Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence'. AI & Society. https://doi.org/10.1007/s00146-021-01382-y – https://link.springer.com/article/10.1007/s00146-021-01382-y



"Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence"

Cite paper

E. C., & Draper, J., ‘Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence’ https://doi.org/10.1007/s00146-021-01382-y.

Published in AI & Society.
Peer Reviewed

DOI: 10.1007/s00146-021-01382-y
🔗 Find full paper (Open access)
Methodology
This is qualitative research based on a literature review.

The research was guided by an international relations analytical lens called conforming instrumentalism, and the aim was to understand whether a Universal Global Peace Treaty (UGPT) could positively influence global peacebuilding. It was based on desk research: a literature review and the application of the analytical lens across the literature. As part of the research deliverables, we also drafted a Cyberweapons and Artificial Intelligence Convention.

A weakness of this research is that we do not yet have a state sponsor for the UGPT, and some key organisations developing artificial superintelligence are very private about their operations and may not wish to join an advocacy coalition.



Funding

This research was conducted independently and did not receive funding from outside the university.
