Genesis: Artificial Intelligence, Hope, and the Human Spirit

December 2024

In Henry Kissinger's last book before his passing, written with technologists Eric Schmidt and Craig Mundie, the trio combines the statesman's mastery of applied history with the scientific knowledge necessary to envision the future. The rapid pace of disruption makes practical frameworks necessary to reap the benefits of AI without committing irreversible errors that could mean nothing less than the end of humanity.

Just three years ago, in 2021, when the first volume, The Age of AI, was published, Kissinger and Schmidt introduced the reader to then-obscure products like GPT-3. A lot has happened since ChatGPT texted itself into the public consciousness. Some people already see an end to the AI hype, but the authors argue that we are still at the very beginning—the genesis of something truly transformational.

With an intellectual capacity surpassing any human polymath and future machine scientists acting autonomously, AI systems could bridge the gap between seemingly unrelated fields by finding the hidden truths of the universe. Connecting the social to the natural sciences, these discoveries are both promising and frightening to society. From curing diseases to unprecedented military threats, our world has never been more potent yet fragile. In order to evolve, "Homo Technicus" will have to live in symbiosis with machine technology, neither succumbing to fatalistic submission nor outright rejection.

By developing systems for distribution, participation, and education, the benefits of AI could be shared with the whole population. The authors imagine that in an abundant world, people will work for pleasure and pride instead of paychecks. In fact, the prevalence of machines could give new value to authentic, unassisted human achievements. But what about working with AI systems?

One such early experiment was Project Cybersyn by the socialist Chilean government from 1971–1973, employing a network of telex machines, software for monitoring factory performance, and an economic simulator to develop self-regulation of factories. Still relying on human operators to interpret the data and make decisions, it nevertheless shared many conceptual similarities with humanity's visions for AI.

That this project never came to fruition is partly Kissinger's fault, according to some. Fearing a Marxist government, the then-National Security Advisor supported efforts to provoke a military coup. After these failed and the new president, Salvador Allende, assumed office, National Security Decision Memorandum 93, written by Kissinger, suggested cutting off economic cooperation, almost incidentally creating the climate for the indigenous coup that was to follow.

It will only be a matter of time until AI systems are widely implemented in governmental processes. The authors warn of both over-restrictive policies, currently exemplified by the European Union¹, and the irresponsible innovation that some Silicon Valley companies covertly envision. The former would withhold the great prosperity that could reach a wide population if redistributed correctly², while the latter leaves out the human element. What is left unwritten are the different incentives driving public and private actors. Even partnerships between profit-driven companies such as OpenAI and Microsoft might be doomed to fail in a "Winner Takes All" economy. Either way, successful revolutions in the past have been incremental rather than everywhere all at once.

Proposals for a new administrative division of France during the first year of the Revolution, 1789, are amusing in retrospect. In the name of progress and science, the country was to be split into 81 geometrically perfect squares³, resembling a chessboard. Taking no account of natural boundaries such as rivers and mountains, or of any cultural factors, the plan was rightfully rejected by the Assembly. Silicon Valley often supposes that its technical brilliance is applicable everywhere.

In the medium to long term, AI will be able to change not just humans' natural and social environment but also our biology, possibly turning us into superhumans. Are we then to become pawns in the chess game of an unknown entity, divided between those who play by the rules and those who fall victim to them? Or should we resist this change and view machines as nothing more than inanimate objects akin to literary characters?

Assuming an AI can feel as much emotion as Hamlet—only existing in the abstract—it can nevertheless write its own story without Shakespeare. Kissinger compares this loss of human control to eighteenth-century European leaders unlocking the mechanical forces of self-interest. The enlightened absolutists of that era fit Plato's philosopher-king vision, as a layer of idealism was added to Realpolitik. The early Islamic thinker Al-Farabi argued that it would be highly unlikely for one person to fulfill both the philosopher and the king roles; therefore, they must be split between two people. Aristotle went further: his call for participation by the masses fits our modern understanding of statecraft. The question is which model, if any of the selection, is suitable for machines surpassing us in almost all cognitive tasks while acting on our behalf.

The organizational structure is also up for debate. Stable institutions are typically hierarchical and might be the most effective way to organize a low-trust society uncertain about the future. But while armies win wars, they are inherently uncreative. A network structure with the capacity for self-regulation and without central coordination lends itself better to the technologically driven innovation of the 21st century. Is the latter model of shared agency through different human and machine actors the most beneficial for society as a whole, or can shared power also corrupt absolutely? Schmidt notes that AI proliferation leads to centralization in the current model. Whether this is good or bad remains up for debate.

To prevent a misalignment of AI, it is crucial, the authors argue, to create a common doxa of human truths in addition to feeding it formal laws⁴. The instilling of doxa is achieved through observation rather than articulation of human behavior. What is left open are the measures AI systems would take if confronted with no written laws and conflicting unquestioned truths. Within a society, disagreements such as the Sunni-Shia divide offer no agreed-upon solution. Should the position of an AI at an international conference mirror the distribution of the populations in question (that is, 90% Sunni and 10% Shia), or, when this is inherently contradictory, should it take a secular outside perspective that might enrage all parties?

Where the book has its shortcomings is in its assessment of the human traits that make us different from AI and therefore worth protecting. The authors propose a Kantian concept of dignity as one of those pillars, in which mortal creatures “programmed” for survival can nevertheless act autonomously against their evil instincts. While acknowledging that this good-versus-bad judgment is subjective and that mentally disabled people still deserve respect, it is hard to say whether any of our choices are truly autonomous, or who gets to make that judgment.

This should not reopen the age-old debate about free will. Humans have a variety of options to choose from, but even the seemingly free ones may be less unnatural than they appear.

Let us imagine a seemingly selfless act: a mother jumps into a turbulent river to save her infant son from drowning. Knowing the high risk of being submerged and killed herself, she manages to grab the boy and push him to shore while succumbing to the water's force. The natural instinct for self-preservation seems to have been overridden by the human instinct for altruism.

But what if her intuitive calculation, made in a split second, was just as rational as her possible decision to refuse the self-sacrifice⁵? Depending on the scope, preserving life can mean preserving bloodlines but also nations (ideological warfare) or all of nature (environmentalism). The latter illustrates the ironic conclusion of people refusing to preserve bloodlines for the supposed greater good. Do these different conclusions have anything in common, and, if so, how can it be quantified to distinguish us from AIs?

From the general input “preserve life,” ingrained in our DNA, follows the benign addition “and reduce suffering,” which requires a complex pain awareness only found in mammals and birds⁶. Throughout history, great suffering has been inflicted because of different interpretations of the worthiness of certain life or because of rigid obedience. Pure sadism, an evolutionary dead end, is only rarely exhibited.

If we take Kissinger’s centennial life as a baseline for the average life expectancy of future generations, we arrive at a measure of the value of life that, free from theological sanctity, offers a practical framework for biological and artificial decision-makers. When faced with the trolley problem, such a system could add up the number of expected years remaining and save the group with the higher total. A corrupted system might add in factors such as creditworthiness or political affiliation.
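To make the arithmetic concrete, here is a minimal sketch of such a life-years tally, assuming the centennial baseline described above. All function names and ages are hypothetical illustrations, not anything proposed in the book itself.

```python
# Illustrative sketch only: a toy "life-years" calculus for the trolley
# problem, using Kissinger's centennial life as the baseline lifespan.

BASELINE_LIFESPAN = 100  # years

def expected_years_remaining(age: int) -> int:
    """Years left under the baseline, never negative."""
    return max(BASELINE_LIFESPAN - age, 0)

def choose_group(group_a: list[int], group_b: list[int]) -> str:
    """Save the group with the larger sum of expected remaining years."""
    score_a = sum(expected_years_remaining(age) for age in group_a)
    score_b = sum(expected_years_remaining(age) for age in group_b)
    return "A" if score_a >= score_b else "B"

# Five 80-year-olds (5 x 20 = 100 years) vs. one 20-year-old (80 years):
print(choose_group([80, 80, 80, 80, 80], [20]))  # prints "A"
```

The sketch also makes the review's warning tangible: a corrupted system would simply weight each person's years by creditworthiness or political affiliation, and the same tidy arithmetic would produce very different victims.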

While life and death is the purest binary distinction, decisions of that kind will fortunately remain rare in an era of abundance. Instead, smaller units of suffering that I would call discomforts will shape the day-to-day operations.

Bureaucracies that require citizens to fill out time-consuming and arbitrary forms when opening a business may be dissolved if no value is gained. Applied to policymaking, which requires assessments without complete information, Kissinger’s doctoral dissertation, A World Restored (1954), noted that bureaucracies’ quest for calculability turns decision-makers into prisoners of events. The disastrous choices during the first decade of US involvement in Vietnam illustrate this. However, I would argue that discomforts offering valuable learning experiences for the public, or even delaying certain irreversible decisions for government officials, could be nudged by a well-meaning system.

For qualitative assessment, I would recommend the introduction of Special Zones, operating under reduced or partially randomized laws to test which incentives and restrictions work best for broader adoption. Zones would also allow the free movement of citizens among areas with different belief sets. Religious individuals who see the modification of the human body as sinful, for example, could relocate to the nearest zone where it is outlawed instead of inciting violent uprisings.

The pace of change has accelerated. From taking thousands of years between the Stone, Bronze, and Iron Ages to taking 20 years from the Computer Age to the Networked Age⁷, major breakthroughs nowadays can happen in hours and even minutes. By segregating the innovations to Special Zones for beta testing and restricting the global distribution of transformative technology at first, societal confusion can be avoided and powerful applications withheld from malign actors.

Cyber attacks will reach a new level of danger once Artificial General Intelligence becomes available, possibly within the next 5 to 10 years, as such systems could find vulnerabilities invisible to the human eye. Bringing down the global financial system might be only one command away. Because of the devastating effects, it is unlikely that great powers will resort to such measures. The fact that the Cold War remained bloodless in the main theaters shows the ironic logic of mutually assured destruction. Schmidt therefore sees a threat not in antagonistic superpowers but in nihilistic terrorists.

I would recommend the implementation of trace marks embedded in metadata so that, if damage cannot be prevented, the responsible individual or entity can be located by intelligence agencies. Every operation by an AI system would be logged in an immutable ledger linking the output to the responsible entity, a timestamp, and the method of generation. This would not only ensure accountability but also enable content verification and counter misinformation. The intelligence agencies with access to this sensitive information must be democratically accountable to a supranational body such as the United Nations in order to prevent abuse of power by authoritarian regimes.
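The ledger proposed above can be sketched with a simple hash chain, where each entry commits to the one before it so that any later alteration becomes detectable. This is a minimal illustration under my own assumptions; a real system would need signed entries and distributed storage, and the field names here are hypothetical.

```python
# Minimal sketch of an append-only provenance ledger: each entry links an
# output to a responsible entity, a timestamp, and the method of generation,
# and is hash-chained to the previous entry to make tampering evident.
import hashlib
import json
import time

class ProvenanceLedger:
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value before any entries exist

    def log(self, entity: str, output: str, method: str) -> dict:
        """Record one AI operation, chained to the previous entry."""
        entry = {
            "entity": entity,          # responsible individual or organization
            "output_digest": hashlib.sha256(output.encode()).hexdigest(),
            "method": method,          # how the content was generated
            "timestamp": time.time(),
            "prev_hash": self._prev_hash,
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; an altered entry breaks every later link."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            prev = hashlib.sha256(json.dumps(e, sort_keys=True).encode()).hexdigest()
        return True
```

Because each entry stores only a digest of the output, the ledger can support content verification without the custodian ever holding the sensitive content itself, which narrows what a supervising body would need to access.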

Genesis is more than armchair philosophy. Its broad questions and awareness of the unknown unknowns make it a timeless guide, helping readers adjust their inner compass toward the promised land. Rather than attempting to predict every stone or bend in the path, it embraces the reality that the natural contours of the journey—like rivers to be crossed or mountains to be circumnavigated—may necessitate unexpected detours. It teaches us to navigate with purpose and adaptability, without being shackled by the illusion of total foresight.

The book ties in with Kissinger’s first thesis, The Meaning of History, written over 70 years ago but asking similar questions about how technology changes human perception. Had the book not been a collaborative effort with tech optimists Schmidt and Mundie, its subtitle "Hope and the Human Spirit" might have been replaced with "Despair and Loss of Humanity." Although the avid reader may guess who wrote what, it is nevertheless a cohesive work that benefits from each contributor.

Footnotes

¹ The EU AI Act requires, for example, that systems be able to explain what they do, which is currently impossible. Brexit could inadvertently give the UK an economic advantage.

² One solution proposed would be an AI patent system in which great profits can still be made, but the invention would go into the public domain after a certain number of years.

³ Each square was further divided into nine districts, with each district containing nine square cantons. Every Frenchman should be able to reach the departmental capital within a day by horse (assuming the geography allowed it).

⁴ The writer of this article is currently working on a Cultural Taxonomy to formalize all recursive human endeavors.

⁵ Opening yet another debate: In which situations is inaction an active choice with agency? Does it have to be conscious, or can it be intuitive and even subconscious? Literary critics of Bartleby, the Scrivener might have the answers.

⁶ The somatosensory and neural systems responsible for this are less developed in reptiles, amphibians, and fish and are almost nonexistent in most invertebrates (the octopus being an exception).

⁷ Starting in the early 1990s, this age is marked by the evolution of connectivity, data, and interactivity beyond mere computation. Better name suggestions, apart from Networked Age, can be sent to the author of this article.
