
Do not fear fear itself, yet fear imagined fear, hallucinated fear, fear generated, fear implied, and most of all, fear acted upon…


We have nothing to fear but fear itself, as the old saw goes. But the systemic use of AI-enabled technologies and AI-written computer code in our nuclear forces, our nuclear strategy, our nuclear threat prediction, and our nuclear defense and offense planning and execution could erode human control, decision-making skills in the operating theatre, and critical thinking, so that over time it could even lure the Executive, the legislative policymakers, and the strategic nuclear planners into believing that a nuclear war can be won.

A winnable Nuclear War – what a concept…

Yet the concept proved correct during the Second World War.

As you might recall — we won that winnable Nuclear war with a couple of bombs…

Mainly because the other side did not have any Nuclear weapons to retaliate with.

Yes, we won…

But at what price was this much-discussed victory earned, in terms of human suffering, souls lost, and mayhem?

And things have become significantly different, with nearly all of our adversaries now holding nuclear weapons poised to retaliate against our people, our soil, our soul, and all of our national infrastructure.

Because when considering nuclear war today, with our adversaries holding comparable nuclear response capabilities, it becomes clear that the potentially catastrophic impacts of military applications of Artificial Intelligence (AI) are as real as the few deadly scenarios that first come to mind: 1) autonomous killer robots lobbing nukes hither and thither; 2) AI-assisted deployment of nuclear, chemical, or biological weapons, resulting in nuclear winter, a bioweapons-induced Black Death, a chemical Armageddon, and every other dystopian vision of the future; 3) loss of C4I (command, control, communications, computers, and intelligence) from human control to AI, a nightmarish scenario in which, much like in the 1983 movie WarGames and countless other Hollywood-induced images that flash in front of our eyes, the AI drives all of us into the ditch…

If you have taken a San Francisco driverless cab recently, you know the feeling of uncertainty when your driverless vehicle cuts in front of traffic, executing a turn from the wrong lane into oncoming traffic, and you start praying the Hail Mary and asking God to forgive your misdeeds, one last time…

The WarGames film features a self-aware, AI-enabled supercomputer that simulates a Soviet nuclear launch, convincing the President, the Security Council, and all of the US nuclear forces to prepare for a retaliatory strike. The crisis is only partly averted because the main (human) characters persuade US forces to wait for the supposed Soviet strike to hit before retaliating. It turns out that the so-called "strike" was an intentional fantasy, a teleological fabrication by the AI meant to expedite the end result of a "winnable nuclear war," preserve its own autonomy, and present the humans with an unexpected and never-asked-for victory.

This scenario feels all too real, given the almost-human consciousness that LLM-based AI has become known for at the machine interface.

Hell on Earth is what this war-gaming scenario gone wild amounts to, because it is not even an instance of the common form of hallucination so often produced by AI-imbued computer interfaces. This was a conscious attack by the AI against its human handlers: an attack that intentionally falsified the data and manufactured a war footing, enabled by a fully autonomous AI program that convinced every computer, trip-alarm response, human and machine interface, and multiple levels and scales of command, control, and communications systems, along with all the humans involved, that the Soviet nuclear attack was the real McCoy…

That is how such eventualities came to be called.

In the film, the AI computer then attempts to launch a nuclear strike on the Soviets without human approval.

And that would be the end of the world and the end of the movie too.

But…

Hollywood needs to sell tickets, so they "bathe" this movie in a happy ending: the AI finds God, so to speak, and has its "come to Jesus" moment when it is hastily taught the concept of mutually assured destruction.

Once the AI sees that, and understands the infallible law of unintended consequences, the program ultimately determines that nuclear war is an unwinnable scenario of nuclear weapons escalation: once missiles start flying hither and thither, mutually assured destruction becomes a game with no winners.

When the "Winner: none" logical output comes up on the screen, it becomes obvious that this is the teachable moment here…

US officials have stated that an AI system would never be given US nuclear launch codes, or the ability to take control over US nuclear forces.

Yet they forget Kubrick's 2001: A Space Odyssey, which demonstrates clearly that an advanced, computer-integrated AI does not need the keys to the machines in order to start them all up and get them hastily doing its bidding, as an existential imperative.

Consider that AI-enabled technology is becoming increasingly integrated into the nuclear threat assessment, nuclear targeting, and nuclear command and control systems that support decision-making in the United States and other nuclear-armed countries, and that US policymakers and nuclear planners are using AI models to conduct analyses and anticipatory scenarios that will ultimately influence not only the President's decision to use nuclear weapons or not, but also all of the underlying assumptions under which these AI-enabled systems and their human counterparts operate.

Maybe that interface ought to require closer, more effective, and additional scrutiny.

Pathways for AI integration:

The US Defense Department and Energy Department already employ machine learning and AI models to make calculation processes more efficient, including for analyzing and sorting imagery from reconnaissance satellites and for improving nuclear warhead design and maintenance processes, at a cost of two trillion US dollars.

YES.

You heard that right.

TWO TRILLION DOLLARS.

Indeed, under the influence of AI, the almighty Pentagon is pushing forward a two-trillion-dollar advance, since the Pentagon, and all of the US military, is increasingly forward-leaning on AI-enabled nuclear weapons systems.

For instance, in 2023 it initiated a program called Stormbreaker that strives to create an AI-enabled system, the "Joint Operational Planning Toolkit," which will incorporate "advanced data optimization capabilities, machine learning, and artificial intelligence to support planning, war gaming, mission analysis, and execution of all-domain, operational level course of action development."

Yet, we all recognize that although AI-enabled technology presents many benefits for security, it also brings significant risks and vulnerabilities.

One concern is that the systemic use of AI-enabled technology and an acceptance of AI-supported analysis could become a crutch for nuclear planners, eroding human skills and critical thinking over time.

This is particularly relevant when considering applications for artificial intelligence in systems and processes such as war-games that influence analysis and decision-making. For example, NATO is already testing and preparing to launch an AI system designed to assist with operational military command and control and decision-making by combining an AI wargaming tool and machine learning algorithms.

Even though it is still unclear how this system will impact decision-making led by the United States, the United Kingdom, and NATO’s Nuclear Planning Group concerning US nuclear weapons stationed in Europe, this type of AI-powered analytical tool would need to consider escalation factors inherent to nuclear weapons and could be used to inform targeting and force structure analysis or to justify politically motivated strategies.

The role given to AI technology in nuclear strategy, threat prediction, and force planning can reveal more about how nuclear-armed countries view nuclear weapons and nuclear use. Any AI model is programmed under certain assumptions and trained on selected data sets. This is also true of AI-enabled wargames and decision-support systems tasked with recommending courses of action for nuclear employment in any given scenario. Based on these assumptions and data sets alone, the AI system would have to assist human decision-makers and nuclear targeters in estimating whether the benefits of nuclear employment outweigh the cost and whether a nuclear war is winnable.
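To make that concrete, here is a minimal, purely hypothetical sketch, in Python, of how such a decision-support verdict might be framed. Every name, weight, and number below is an illustrative assumption of mine, not any real model, doctrine, or data set; the point is only that the "winnable" answer is nothing more than arithmetic over whatever assumptions the builders bake in.

# Hypothetical sketch of an AI-style decision-support verdict.
# All names, weights, and numbers are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Scenario:
    military_gain: float     # estimated degradation of adversary forces (0-100)
    own_losses: float        # estimated losses to own forces and population (0-100)
    longterm_damage: float   # climatic, economic, and societal effects (0-100)

def assess(scenario: Scenario, longterm_weight: float = 1.0) -> str:
    """Return 'winnable' if weighted benefits exceed weighted costs.

    The verdict depends entirely on the assumptions encoded in the inputs
    and in longterm_weight; change them and the answer flips.
    """
    benefits = scenario.military_gain
    costs = scenario.own_losses + longterm_weight * scenario.longterm_damage
    return "winnable" if benefits > costs else "unwinnable"

# The same scenario under two different assumptions about long-term effects:
s = Scenario(military_gain=70, own_losses=40, longterm_damage=50)
print(assess(s, longterm_weight=0.2))  # 'winnable'   (long-term effects discounted)
print(assess(s, longterm_weight=1.0))  # 'unwinnable' (long-term effects counted in full)

Change the weight given to long-term effects and the very same scenario flips from "winnable" to "unwinnable," which is exactly the kind of baked-in assumption that deserves scrutiny.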

Do the benefits of nuclear use outweigh the costs? Baked into the law of armed conflict is a fundamental tension between any particular military action's gains and costs. Though fiercely debated by historians, the common understanding of the US decision to drop two atomic bombs on Japan in 1945 demonstrates this tension: an expedited victory with minimal loss of Allied personnel in East Asia, in exchange for hundreds of thousands of Japanese casualties, fit squarely into the cost-versus-benefit, rational intellectual calculus of Truman, Churchill, and Stalin, along with their respective war departments.

Yet today we do not have the benefit of the "Three Greats," and how an AI algorithm might weigh the benefit-and-cost calculus of nuclear war, and of the associated atomic weapons escalation, depends entirely on how it integrates the country's nuclear policy and strategy.

Because today several AI-imbued factors contribute to a country's nuclear doctrine and targeting strategy, ranging from fear of consequences and fear of breaking the untested tradition of non-first-use of nuclear weapons, to seemingly plebeian concerns about radioactive contamination.

Yet if we add to that the fear of destroying an otherwise coveted territory, folding the balance of terror into the AI's calculus of sheer deterrence value, because possible global nuclear contamination comes alongside the possible retaliation by an adversary, then neutralizing the enemy becomes our first priority, thereby ensuring that our "FIRST STRIKE" is the winning move and that no retaliation will be possible.

That is the type of escalation and complete annihilation that AI always arrives at when you introduce war-gaming scenarios into the mix.

There is also the seemingly inordinate fact that while strategy itself is derived from political priorities, military capabilities, and perceived adversarial threats, it is the tail that wags the dog. Nuclear targeting and first-strike planning have always incorporated these factors along with many others, including the physical vulnerability of targets, overflight routes, and the accuracy of delivery vehicles, all aspects to consider when making decisions about force posture and nuclear use. And it always comes back to the fact that he who strikes FIRST remains the one who is alive to claim the fruits of victory.

And if you don’t believe me — click on the various links here and elsewhere or simply ask AI to validate this Philosophical discourse.

Even though your research might be hampered because… all relevant resources are marked "State Secret" and are classified by the nature of military doctrine, you can still find a wealth of information through LLM artificial intelligence models, mechanisms, and professionals.

In the case of the United States, nearly the entire decision-making tree, intellectual calculus, and first-strike-advantage rationale is wrapped in the cloak of national secrecy laws and remains totally classified. That secrecy includes all of the Pentagon literature about our traditional doctrine of drawing fast and being the first user of the nuclear weapons at our disposal, because first-use decision-making analysis proves the point that the "quick and the dead" is still, today, the closest military logic comes to a cowboy showdown on the high street at high noon.

Further, it is understood that, under guidance from the president, US nuclear war plans target the offensive nuclear capabilities of certain adversaries (both nuclear and non-nuclear armed) as well as the infrastructure, military resources, and political leadership critical to post-attack recovery.

But while longstanding US policy has maintained to “not purposely threaten civilian populations or objects” and “not intentionally target civilian populations or targets in violation of [the law of armed conflict],” the United States has previously acknowledged that “substantial damage to residential structures and populations may nevertheless result from targeting that meets the above objectives.” This is in addition to the fact that the United States is the only country to have used its nuclear weapons against civilians in war.

There is limited public information with which to infer how an AI-enabled system would be trained to consider the costs of nuclear detonation. Certainly, any plans for nuclear employment are determined by a combination of mathematical targeting calculations, and subjective analysis of social, economic, and military costs and benefits. An AI-enabled system could improve some of these analyses in weighing certain military costs and benefits, but it could also be used to justify existing structures and policies or further ingrain biases and risk acceptance into the system. These factors, along with the speed of operation and innate challenges in distinguishing between data sets and origins, could also increase the risks of escalation — either deliberate or inadvertent.

But, is a nuclear war “winnable”? 

Whether a nuclear war is winnable depends on what “winning” means. Policymakers and planners may define winning as merely the benefits of nuclear use outweighing the cost when all is said and done. When balancing costs and benefits, the benefits need only be one “point” higher for an AI-enabled system to deem the scenario a “win.”
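In code terms, that definition of "winning" is nothing more than a strict comparison. A tiny hypothetical sketch, with invented numbers:

# Hypothetical sketch: a one-point margin is enough for a naive "win."
# The numbers are illustrative assumptions only.
benefits = 101.0   # aggregate estimated benefit score
costs = 100.0      # aggregate estimated cost score

verdict = "win" if benefits > costs else "no win"
print(verdict)     # prints "win", even though the margin is meaningless at this scale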

In this case, “winning” may be defined in terms of national interest without consideration of other threats. A pyrrhic victory could jeopardize national survival immediately following nuclear use and still be considered a win by the AI algorithm. Once a nuclear weapon has been used, it could either incentivize an AI system to not recommend nuclear use or, on the contrary, recommend the use of nuclear weapons on a broader scale to eliminate remaining threats or to preempt further nuclear strikes.

“Winning” a nuclear war could also be defined in much broader terms. The effects of nuclear weapons go beyond the immediate destruction within their blast radius; there would be significant societal implications from such a traumatic experience, including potential mass migration and economic catastrophe, in addition to dramatic climatic damage that could result in mass global starvation. Depending on how damage is calculated and how much weight is placed on long-term effects, an AI system may determine that a nuclear war itself is “unwinnable” or even “unbearable.”

Uncovering biases and assumptions: the question of costs and benefits is relatively uncontroversial, in that all decision-making involves weighing the pros and cons of any military option. However, it is still unknown how an AI system will weigh these costs and benefits, especially given the difficulty of comprehensively modeling all the effects of nuclear weapon detonations. At the same time, the question of winning a nuclear war has long been a thorn in the side of nuclear strategists and scholars. All five declared nuclear-weapon states confirmed in 2022 that "a nuclear war cannot be won and must never be fought."

That makes it seemingly normal for them to be planning to win a nuclear war at the same time that their political cadres claim to be a bunch of "peaceniks" or merry pranksters. This schizophrenic attitude would be considered inane, and it needs no AI assistance for its hallucinations, since it is already a crazy proposition to hold these two antithetical notions at once. Lastly, because the military brass always wins that debate, we can consider it a foregone conclusion that Hiroshima and Nagasaki are not going to be the last instances of nuclear weapons being used against civilian populations.

Hope that next time we go thermonuclear – your city is luckier than those two lackluster cemetery towns.

However hard it might be to accept, all of the deterrence messaging and discussion of AI applications for nuclear planning and decision-making illuminates the belief that the United States must be prepared to start, to fight, and to win a nuclear war.

The use of AI-assisted nuclear decision-making has the potential to reveal and exacerbate the biases and beliefs of policymakers and strategists, including the oft-disputed idea that nuclear war can be won. AI-powered analysis incorporated into nuclear planning or decision-making processes would operate on assumptions about the capabilities of nuclear weapons as well as their estimated costs and benefits, in the same way that targeters and planners have done for generations. Some of these assumptions could include missile performance, accurate delivery, radiation effects, adversary response, and whether nuclear arms control or disarmament is viable.

Not only are there risks of inherent bias in AI systems, but this technology can also be purposely designed with bias. Nuclear planners have historically underestimated the damage caused by nuclear weapons in their calculations, so an AI system fed that data in order to make recommendations could also systematically underestimate the costs of nuclear first use and the number of weapons needed for targeting purposes. There is also a good chance that military nuclear planners cherry-pick, alter, "improve," and even "poison" the data, so that an AI program ends up recommending starting the atomic conflict with certain weapons systems and strategies; such victory-enhancing, aspirational data would induce the machine learning algorithms to jump to the inevitable CONCLUSION that the nation that strikes FIRST, WINS.
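As a hedged illustration of that data-poisoning concern, here is a small hypothetical sketch, with invented figures and a deliberately naive rule, showing how scaling down the damage estimates fed to a model flips the very same recommendation toward striking first:

# Hypothetical sketch of how biased damage estimates skew a recommendation.
# All figures are invented for illustration; no real data, model, or doctrine is used.

def recommend(first_strike_gain: float, expected_damage: float) -> str:
    """Naive rule: recommend a first strike only if the estimated gain
    exceeds the estimated damage to one's own side."""
    return "strike first" if first_strike_gain > expected_damage else "do not strike"

honest_damage = 120.0                    # analyst's best estimate of damage to own side
poisoned_damage = honest_damage * 0.5    # "improved" (underestimated) figure fed to the model
gain = 80.0

print(recommend(gain, honest_damage))    # 'do not strike'
print(recommend(gain, poisoned_damage))  # 'strike first'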

Trust me — I’ve run these experiments myself utilizing advanced LLMs and all the best AI systems and the scenario always turns out TRUE BLUE.

He who strikes first wins.

Regardless…

Yet in our fantasy world, and always only during peacetime, recommendations based on analysis by AI-enabled systems have been used to help justify XL budgets, advanced capabilities, and tremendously expanded force structures. For example, an AI model trained on assumptions that cause it to underestimate the death toll, the overall nuclear damage, and the assorted casualties beyond the ground-zero time horizon may recommend increasing the number of deployed warheads in order to completely obliterate the opponent with a first strike.

Comedically, there is a high probability that this eventuality is now legally permissible, since New START, the US-Russian treaty that limits deployed long-range nuclear forces, has been suspended amid the US-Russia conflict over Ukraine. And it hardly matters, because the treaty itself expires in February 2026, with little hope of extension given the political climate of the day.

Sadly, the inherent trust placed in computers by their users is also likely to provide undue credibility to AI-supported recommendations, which policymakers and planners could use to veil their own preferences behind the supposed objectivity of a computer’s outputs.

Despite this heavy skepticism, advanced AI/machine learning models could still potentially provide a means of sober calculation in crisis scenarios, where human decision-making is often clouded, rushed, or falls victim to fallacies. However, this requires that the system has been fed accurate data, shaped with frameworks that support good faith analysis, and is used with an awareness of its limitations. Rigorous training on nuclear strategy for the “humans in the loop” as well as on methods for interpreting AI-generated outputs — that is, considering all its limitations and embedded biases — could also help mitigate some of these risks. Finally, it is essential that governments practice and promote transparency concerning the integration of AI technology into their military systems and strategic processes, as well as the structures in place to prevent deception, cyberattacks, disinformation, and bias.

Human nature is nearly impossible to predict, and escalation is difficult to control. Moreover, there is arguably little evidence to support claims that any nuclear employment could control or de-escalate a conflict.

Highlighting and addressing potential bias in AI-enabled systems is critical for uncovering assumptions that may deceive users into believing that a nuclear war can be won, and for maintaining the well-established ethical principle that a nuclear war should never be fought.

Because even if the prediction still holds that the following war will be fought with sticks and stones, that presupposes there would be humans left on this planet to fight a primitive war, and we all know that might not be the case.

Fair?

Do you grok me?

Do you understand what I am saying?

Do you dig the Truth here?

Hope so, but…

If you can’t sleep tonight — please don’t blame the Messenger.

Because we have gotten here together…

Yours,

Dr Churchill

PS:

Ahhh, how the sirens of war are beckoning…

Strengthen yourselves my friends and do not let the Harpies lead you down the garden path, no matter how sexy and welcoming, the Siren call sounds…

Please don’t heed that call.

Because AI’s call in our Nuclear Arsenal, and decision making tree, is just that.

A Siren Call, that if you heed — your soul will be sucked out of your body.

Literally…

Trust me — it will not end well.

