Anthropic Shadow Geopolitics
The World May Get Very Weird and Lethal as AGI Timelines Contract
One of the most interesting concepts in rationalist thought is the “Anthropic Shadow.” This is the idea that our observations of existential risks are implicitly conditioned on the existence of human observers, and are therefore biased to the downside, since any event severe enough to kill all present observers and prevent the emergence of future ones will leave no observable traces of itself.
If we extend these anthropic considerations into the near future, we may reach some disquieting conclusions. Worlds that “make it” remember and/or simulate their history, whereas those that die do not. This loads on what is known as the Strong Self-Sampling Assumption (SSSA), the idea that our current observer-moment - e.g., you reading this now - was plucked out of all possible observer-moments (past, present, and future) in its reference class. If the SSSA is valid, that is optimistic news for the prospect of some conscious observers surviving past AGI. But there is a catch. If you also believe that AGI is inherently dangerous; that AI timelines are short; and that alignment doesn’t happen by default and is inherently hard… then you must seriously entertain the possibility of a catastrophic but non-extinction-level event between now and the early 2030s that rolls back AGI timelines by years or decades.
This “Rollback” seems likeliest to take one of two forms. One would be a major bioterrorism event linked to AI, which results in a global taboo against machine intelligence stronger than the logic of capitalism and interstate geopolitical competition that drives its acceleration. The other would be a large-scale war between the US and China, likely over Taiwan, which collapses globalization and the fragile global supply chains that underpin the semiconductor industry from which the sand gods will emerge.
There are philosophical debates over the legitimacy of extrapolating the Anthropic Shadow into the future and considering what it would recursively imply for the present[1]. For the sake of this hypothesis, I am going to treat it as valid. This essay is premised on the idea that observer-rich futures outweigh dead-end timelines in the distribution of observer-moments across all of history, and that this retroactively raises the probability that we live not in a doomed world, but in a world that makes it - because we ourselves exist as past selves, ancestors, ancestor-simulations, or memories of future beings, conscious and countless. Sadly, not all humans currently alive have to survive past the Singularity in order for this history to be “remembered.”[2] Many, even most, can die with no major effect on the observer-moment distributions, for doomed souls are also commonly remembered or simulated in worlds that make it. It would be an ultimately optimistic End of History, but a bittersweet one, and we may have to live with its ghosts to the heat death of the universe.
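To make the selection effect concrete, here is a minimal Monte Carlo sketch of the argument - every number in it (the survival prior, the multiplier on remembered and simulated observer-moments) is an illustrative assumption of mine, not an estimate. The point is simply that if surviving timelines remember or simulate their past at any scale, then a randomly sampled present-day observer-moment overwhelmingly tends to sit inside a timeline that makes it:

```python
import random

P_SURVIVE = 0.10         # assumed prior that a given timeline survives past AGI
BASE_MOMENTS = 1.0       # present-day observer-moments per timeline (normalized)
SIM_MULTIPLIER = 1000.0  # assumed extra copies of present-day moments (memories,
                         # ancestor-simulations) hosted by surviving timelines

def posterior_survival(n_timelines: int = 100_000) -> float:
    """P(our timeline survives | we are a random present-day observer-moment),
    estimated by weighting each simulated timeline by how many present-day
    observer-moments it ultimately contains."""
    surviving_weight = 0.0
    total_weight = 0.0
    for _ in range(n_timelines):
        survives = random.random() < P_SURVIVE
        # Dead-end timelines contain only the original observer-moments;
        # surviving ones also host remembered/simulated copies of them.
        weight = BASE_MOMENTS * ((1 + SIM_MULTIPLIER) if survives else 1)
        total_weight += weight
        if survives:
            surviving_weight += weight
    return surviving_weight / total_weight

print(f"prior:     {P_SURVIVE:.3f}")
print(f"posterior: {posterior_survival():.3f}")  # ~0.991 under these toy numbers
```

Set SIM_MULTIPLIER to zero and the posterior collapses back to the prior - which is the sense in which the whole argument hangs on surviving worlds actually remembering or simulating their past.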
Opening the Boxes of Orden
What would a world in which AI timelines are short and the Anthropic Shadow is coming into play look like? We could expect increasingly weird and unlikely events, a fracturing of long-running “Pinkerian” trendlines towards a more stable and peaceful world, the rise of populist and irredentist movements, and the replacement of rationalism with conspiracism and “post-truth” politics. This would translate into a more generalized increase in geopolitical fear and anxiety, or what Hearts of Iron 4 calls “World Tension.”
Putin’s “Gift” to the Near-AGI World
When Russia invaded Ukraine in February 2022, prediction markets expected it to be a very brief affair. Against all expectations, Russia got bogged down in a war of attrition against a state with a fifth of its population and a tenth of its GDP. At that point, sentiment turned skeptical about Russia’s prospects. There was no mass nationalist mobilization; if anything, the Russian state suppressed grassroots expressions of such. Could an aging, postmodern state sustain losses an order of magnitude greater than those it incurred in Afghanistan and Chechnya combined? And surely this would, if nothing else, immunize other World Powers against wading into irredentist adventures?
As it happens, the real sociological innovation of wartime Putinism was its post-2022 realization that modern states with old, jaded populations can still fight very bloody wars by just massively bribing middle-aged poors and provincials. No need for the 20C methods of mass mobilization and police control to inspire, goad, or coerce urbane only-child hipsters into the Army that many were expecting! No need for grand ideological narratives about liberating Ukraine from fascism (for Communists), the “regathering of the Russian lands” (for nationalists), or the “holy war against Gay Satanic Nazism” (for the 85 IQ). What effectively happened is the wholesale “Wagnerization” of the Russian Army, in brutality more than effectiveness, with small unit “meat storms” and “nullifications” (summary executions) for minor and often arbitrary infringements that would have long since led to mutinies and civil unrest under any other non-totalitarian system. But not when the state pays the marginals and losers of society, who have nothing to look forward to in life other than debts and alcohol poisoning - Putin himself said it’s better to die at the front than from vodka - sign-up bonuses worth five times their annual salary as a security guard at a Pyaterochka. This is the Squid Game social contract. Russians implicitly understand that when you take that $30,000, you effectively sign your life over to the state in a “fair” exchange and have no particular right to complain over details afterwards. Likewise, there is no need for any accountability, for the war to have any discernible goals or victory conditions, or for answers to any of the other grandiloquent “39 Questions” posed by Igor Strelkov (Girkin) to the regime before his imprisonment.
There is a reason I called this a sociological innovation. There was a whole series of (half-hearted) experiments at getting volunteers into the Russian Army before Putinism stumbled on the expedient of simply paying huge wads of cash. The one thing that worked. Wildly and successfully. I think it can be argued that this is a greater discovery than even the advances in drone warfare, which were at least widely anticipated by technologists. But I think very few expected societies with a median age of 42 years to engage in World War I Electric Boogaloo. And it would be unrealistically naive to expect that this “innovation” passed other Great Powers by.
The Argentine junta was toppled after incurring 649 deaths in the Falklands War. Meanwhile, Russia is still going strong after perhaps 250,000 deaths and counting. This is Putin’s “gift” to the near-AGI world: the knowhow on how to start wars, even very big and prolonged wars, and not have to pay a political price for them. A “privilege” previously thought only available to younger and more “thymotic” societies, like the Houthis, or perhaps India vs. Pakistan, has in fact turned out to be practical for societies that are far older and richer and much less nationalistic. Far from immunizing the world against military adventurism, Putin has normalized it, and demonstrated that such wars are quite feasible and sustainable if appropriately commoditized and sequestered from mundane life. Who cares if American zoomers hate God and their own country? Offer $250,000 to the denizens of the forgotten Fishtowns of America, and they’ll be signing up in droves to make Canada the 51st state.
The Chekhov’s Gun of 21C Geopolitics
Following decades of downtrend, the global incidence of armed conflict has recently spiked back up to Cold War-level peaks. The Ukraine War is merely the bloodiest and most prominent manifestation of this broader development. Israel has presided over the deaths of 3-5% of the Gazan population since October 2023 as part of a blockade and indiscriminate bombing campaign that has opened it up to charges of genocide. The Caucasus region remains a powder keg, with Azerbaijan retaining territorial pretensions against Armenia even after reclaiming Nagorno-Karabakh and ethnically cleansing its Armenian population in 2020-23. Rwanda is waging hybrid warfare through its M23 proxies to assert control over coltan mines in the Eastern DR Congo. In May 2025, Pakistan and India engaged in a four-day air skirmish that came close to tipping over into outright war. Apart from many other hotspots, the past several years have also seen a remarkable normalization of irredentist rhetoric in the First World that was previously the near-exclusive preserve of fringe Far Right movements.
Now a mere uptick wouldn’t really signify very much in isolation. The two World Wars were drastically more violent than anything else that century, but even they did not interrupt the millennial downtrend. However, insofar as geopolitical tension doesn’t occur in isolation, but tends to feed off itself, rising World Tension may ultimately prove extremely consequential in the event that it triggers the Chekhov’s Gun of the near-AGI age: a war between the US and China over Taiwan.
There is a large IR literature on the probability of such a war. For instance, Graham Allison in Destined for War (2017) frames the analysis in terms of the Thucydides Trap, pointing out that most historical scenarios in which a rising Power challenged an existing hegemon resulted in war. Though Allison does not give a probability estimate in his book, he has posited likelihoods of 25%-50% elsewhere. Pessimists also point to rapid Chinese military modernization, the construction of specialized equipment such as landing barges that would mainly be useful for very specific scenarios, and the growing intensity of Chinese aeronaval exercises around Taiwan, which might at some point organically transition into a blockade. However, optimists point out that Xi Jinping tends towards caution in geopolitics (even if he is repressive at home), that he cannot be certain of the PLA’s readiness to quickly conquer Taiwan, and that any such attempt would jeopardize China’s trade relationships with the outside world. Prediction markets are all over the place: Metaculus currently has 13% odds on a US-China war before 2035, whereas Polymarket claims a 12% chance of a Chinese invasion of Taiwan before the end of this year (!).
These considerations would circumscribe the rational bounds for US-China war risk to 2050 - probably above 10%, but below 50%. However, this describes historically “normal” processes, such as the US overtaking Britain in the early 20C (which did not lead to war), or Germany attempting to do so (which did). But if we are in the near-AGI age, then these geopolitical dynamics - already intrinsically unstable in and of themselves - are going to have AI race dynamics built on top of them. Meanwhile, there is the quite extraordinary coincidence that the primary bottleneck on world manufacturing of the most advanced processor chips - the chips that power frontier AI models, and which are projected to constitute the main bottleneck on scaling inference compute by 2028 - lies on not just a tectonic, but a geopolitical, fault-line. Finally, as if all that wasn’t enough, even Leopold Aschenbrenner in Situational Awareness has remarked on AGI and Taiwan invasion timelines seeing an “eerie convergence” around 2027.
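As an aside, there is a standard way to pool such disparate forecasts into a single point estimate: averaging them in log-odds space, which respects how probabilities compress near 0 and 1. The sketch below is purely my own illustrative arithmetic, not anything the cited sources compute - it pools Allison’s 25-50% range (taken at its midpoint) with the Metaculus figure, leaving out Polymarket’s one-year market since its horizon isn’t comparable:

```python
import math

def logit(p: float) -> float:
    """Map a probability to log-odds."""
    return math.log(p / (1 - p))

def inv_logit(x: float) -> float:
    """Map log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

# Decade-scale forecasts of a US-China war, as cited above.
forecasts = {
    "Allison (midpoint of 25-50%)": 0.375,
    "Metaculus (war before 2035)": 0.13,
}

# Equal-weight average in log-odds space, mapped back to a probability.
pooled = inv_logit(sum(logit(p) for p in forecasts.values()) / len(forecasts))
print(f"pooled estimate: {pooled:.0%}")  # ~23%, within the 10-50% bounds above
```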

Mid-2020s to early 2030s timelines for a Chinese invasion of Taiwan make sense for various convergent reasons. First, they correlate with the timelines for the PLAN attaining local naval predominance over Taiwan and the Spratlys. Second, China has already wrung out much of the direct technology-transfer benefits of a close relationship with the US. It is competitive in a wide range of frontier industries such as smartphones and EVs, arguably outright leads in drones and batteries, and has recently flipped the US on the Nature Index, a metric of high-quality science production. Third, the US has developed a bipartisan consensus that China is a serious threat to its technological and economic dominance, whether subtly expressed in the Biden administration’s targeting of China’s supply chains for cutting-edge chips or more bluntly in the Trump administration’s overt bid to bifurcate the world economy through the imposition of extreme tariffs on China. Finally, the Chinese economy today is drastically less reliant on export-led growth than it was a decade ago, let alone two. Consequently, Chinese strategists must surely now downgrade the potential benefits of continued integration into the American world order, as opposed to concentrating on carving out their own sphere of privileged interests.
On a relevant side note, there was nothing inevitable about Taiwan’s emergence as the world’s semiconductor production hub. East Asians have a competitive advantage in manufacturing chips - it privileges a conscientious and meticulous attitude over responsive problem-solving, and they were prepared to work for much cheaper than Americans or Europeans with comparable psychometric qualities - but the Taiwanese otherwise do not appear to have been especially better positioned than Japan, Korea, Hong Kong, or Singapore. Chris Miller in Chip War recounts that when Morris Chang, the future founder of TSMC, first took his Texas Instruments boss Mark Shepherd to Taiwan in 1968, the meeting didn’t go well. The Texan raged at how they prepared his steak with soy sauce, and his first meeting with Taiwan’s Economy Minister Li Kwoh-ting ended “acrimoniously” when the latter declared intellectual property was something “imperialists used to bully less-advanced countries.” However, Li soon realized that semiconductor manufacturing would turbocharge Taiwan’s economy, as well as provide the Americans with a reason to maintain their security commitments to the island amidst growing US skepticism over foreign entanglements. (This was in the context of mass protests against the Vietnam War.) Consequently, relations were hurriedly patched over, and Texas Instruments opened its first chip fab in Taiwan in 1969. Such were the inauspicious beginnings of Taiwan’s emergence as the world’s premier chip foundry. Improbable on the surface, but perhaps programmed under the Anthropic Shadow.
AGI Timelines in World War III
In the context of a de-globalizing world, China’s leaders may come to see a quick and victorious war over Taiwan as an increasingly attractive prospect even under “normal” geopolitical conditions. First, China’s economic slowdown, lagging wages, and lingering resentment over COVID lockdowns have provoked some degree of discontent. Although polling is notoriously difficult in hard authoritarian regimes, there is some telltale evidence that Xi Jinping’s and the CPC’s approval ratings are closer to 60% than to the 90%+ claimed by official surveys. Second, any prospect of Taiwan drifting towards China of its own accord has long since evaporated. Surveys suggest Taiwan underwent ethnogenesis in the 1990s, and the Hong Kong precedent reinforces that any promises the CPC makes on autonomy cannot be trusted. Successfully reincorporating Taiwan would solve an otherwise intractable problem (to the extent that China insists on claiming Taiwan as its inalienable territory) and boost the CPC’s legitimacy.
On top of this, we have the aforementioned dynamics of an AI race. The very existence of such a race is already bearish for AI alignment, because both individual corporations and countries will have an incentive to scrimp on safety research to the extent that it diverts resources from gaining and maintaining their lead. Furthermore, China appears to be losing this race - OpenAI, Google, and Anthropic are currently all far ahead of DeepSeek. To the extent that national leaders become redpilled on AGI/ASI as an existential issue, this will incentivize laggards to resort to measures such as aggressive espionage and sabotage to remain in the game[3]. Leopold Aschenbrenner in Situational Awareness advocates that the US embark on a “Manhattan Project” to aggressively race to ASI to ensure the permanent military ascendancy of the democratic world order. Although he suggests that the CPC be given guarantees of political non-interference and a share of the economic and non-military technological boons of ASI in return for quietly acceding to American victory in this Civilization game, I do not think it is a viable offer; even considerations of national interest aside, the promise of political non-interference is fundamentally non-credible. What makes this especially pertinent is that the Trump administration is clearly leopold-pilled and seemingly committed to uniting its “America First” nationalism with Aschenbrenner’s vision of a military-industrial spurt towards ASI supremacy[4].
Consequently, AI race dynamics massively elevate the already not inconsiderable “objective” risk of a major US-China war; and if on top of that the Anthropic Shadow is indeed in play, then the residents of Taipei and even San Francisco might be well advised to devise bug-out plans for catastrophic contingencies that may be far likelier than many credit. Any Chinese invasion of Taiwan will cut off the world’s near-exclusive supplier of the most advanced processor chips, with the Taiwanese (or Americans) blowing up TSMC’s chip fabs if the island looks doomed - or China itself doing so, if its military operation ends in failure. The next most advanced foundries are located in South Korea. Nothing that a few “North Korean” ballistic missiles can’t solve if Seoul is unwilling to summarily reorient its exports.

Critics have pointed out that World War II accelerated computer development, and suggest that this might also be true of a third. I think that is unlikely. The leading-edge chip fabs are extremely expensive - in an ironic converse of Moore’s Law, sometimes called Rock’s Law, their costs double every four years - and it is questionable whether warring Powers will have the spare industrial capacity, strategic vision, and political wherewithal to bank on ASI moonshots to win the war. (Note that the US probably only made its play on the atomic bomb on account of its GDP exceeding Nazi Germany’s by a factor of three; it banally had many more workers and more capital for all kinds of projects. It would have no such luxury in a war with China today, which has a similarly sized economy.) Meanwhile, the output of surviving chip fabs will be eaten up by the gargantuan demands of military hardware, which by this point will include hundreds of millions to billions of drones. I do not envisage vast numbers of spares lying around ready for training hypothetical Skynets.
Finally, today’s industrial ecologies are far more complex and globalized than a century ago - and there are few sectors where this is truer than in the semiconductor industry. Here is how Chris Miller’s Chip War describes it:
A typical chip might be designed with blueprints from the Japanese-owned, UK-based company called Arm, by a team of engineers in California and Israel, using design software from the United States. When a design is complete, it’s sent to a facility in Taiwan, which buys ultra-pure silicon wafers and specialized gases from Japan. The design is carved into silicon using some of the world’s most precise machinery, which can etch, deposit, and measure layers of materials a few atoms thick. These tools are produced primarily by five companies, one Dutch, one Japanese, and three Californian, without which advanced chips are basically impossible to make. Then the chip is packaged and tested, often in Southeast Asia, before being sent to China for assembly into a phone or computer.
A Great Power war will unravel the global supply chains that undergird the most complex semiconductor production processes. I suspect it will require years to reconstitute them in more basic forms within the newly “bifurcated” world that will replace today’s globalized one. (Trump’s recent experiments with tariffs were a mild preview of this.) I also think analogous considerations will apply to AI research. As Bryan Caplan has pointed out, the world academic/research complex is one sector of the economy where something like “Open Borders” is in play. Great Power wars always result in much reduced labor mobility, and progress will slow down as corporations are no longer able to headhunt globally for the best researchers. Furthermore, whereas military funding dominated advances in computer technology until the 1980s, such advances are now overwhelmingly driven by technocapital. A big war would be ruinous for the technocapital machine as GDP stalls and stock markets plummet. Even if industrial policy bureaucrats were to make accelerating AGI timelines a core priority - and I am not sure why you would expect very much from the same people who have had trouble surging shell production to meet the demands of the Ukraine war, a technology that was mature by the World War I era - I think it is exceedingly unlikely that their efforts would compensate for the impoverishment of the technocapital machine.
Consequently, my intuition is that a conventional US-China war may push out AGI timelines by perhaps five years - and potentially more, if it results in the emergence of political economies more hostile to free markets and technology in general[5]. However, this is based on the important assumption that it happens before AI starts to recursively accelerate AI research to any substantial extent. At that point, AI will have begun to effectively reproduce its own cognitive labor force, making the war’s deleterious impact on the technocapital machine and elite researchers increasingly moot. None of the leading labs are (probably) there yet, but there can’t be much time left.
Once AI becomes autonomously self-improving, there would remain two main avenues through which the Anthropic Shadow could still roll back timelines. The obvious one is a nuclear war between the US and China. Interestingly, China has blown through earlier estimates of its nuclear arsenal, with the Pentagon estimating that it had more than 500 warheads by 2023 and was planning to more than double its stockpile to 1,000+ by 2030. (This is leaving aside persistent theories that China quietly developed a US/Russia-tier stockpile years ago.) Nuclear war was never a true existential risk, not even at the height of the Cold War, and certainly not today, now that gross world megatonnage is just 5% of its 1980s peak. However, a nuclear exchange that wipes out San Francisco, Beijing, and an appreciable percentage of the US and Chinese populations, industry, and data centers would also likely kill AGI in its cradle. I would imagine that AGI timelines would be set back not by a decade or less, as in a conventional war, but by several decades.
The other avenue is an AI-linked bioterror event that kills some appreciable percentage of the world population. (Ironically, I don’t think the truth of the link even matters that much - as COVID showed, normies prefer conspiracy theories to reason, hence how one could advance the biolab leak hypothesis in early 2020 and get censored as a right-wing conspiracist by liberals, and then subsequently become hated on by MAGA right-wingers for supporting vaccines and insisting that COVID was a real thing[6].) Amusingly, we also happen to live in a world where regulatory organs insist on years of testing and paperwork for novel drugs, while extremely dangerous gain-of-function research on lethal pathogens continues in laboratories with comically lax safety standards in both the US and China, as if COVID never even happened. In the event that someone decides to use AI to evolve super-ebola and it results in a pandemic multiple OOMs worse than COVID, I could see a global cultural taboo arise against AI along the lines of the Butlerian Jihad.
Obviously, I don’t consider these to be ideal outcomes. Even aside from the direct death and destruction, the end result isn’t necessarily going to involve a “safer” Biosingularity in which we augment human intelligence and take strategic advantage of the delayed AGI timelines to get alignment done. Such catastrophic events are likely to usher in dark and conspiratorial politics completely uncongenial to any transhumanist vision, regardless of its stance on AI. More broadly, there is a possibility that the world’s human capital stock has already peaked. If it were to now suffer a major cull on top of that, this might be enough to close off most if not all civilizational advancements in principle, as the world devolves into the centuries of dysgenic natalism that I termed the Age of Malthusian Industrialism. Furthermore, I suspect these will be wasted centuries; I don’t think there was anybody working on AI alignment in the Idiocracy universe.
I suppose that in the event this is how the near-AGI age ends, the survivors may console themselves with the idea that this happened for a reason, and that under the logic of Anthropic Shadow, it could not have happened in any other way.
Minefield or Artillery Barrage?
There are important caveats to Anthropic Shadow theory. For a start, it could just be false. Arguably, much of the philosophical debate over AI risk and other existential risks comes down to the validity of the SSSA. But those discussions are themselves rather speculative without a scientific and rigorous understanding of consciousness. For instance, would agents in ancestor-simulations playing out in silico experience qualia? However, if the SSSA is invalid, then we could just build AGI and banally die soon afterwards, as the doomers expect. Alternately, AGI could be inherently self-aligning, rapidly solving all the coordination problems that will arise from an intelligence explosion. Or perhaps the skeptics are right and we are nowhere near AGI, and won’t be for many decades, if ever. Self-aligning AGI would dissolve the Anthropic Shadow, while distant AGI would make it moot.
Finally, there is also the possibility that while a fast AGI takeoff is dangerous, waiting is even more so. Is it more like crossing a minefield, where you have to tread slowly and cautiously, meticulously marking your path? Or is it like an artillery barrage, where the only choice is to throw a Hail Mary and charge the enemy trenches? The LessWrong consensus is the former - but it has its critics. Some right-wing commentators such as Robin Hanson, Michael Woodley, and Roko Mijic argue that declining fertility, dysgenics, and other maladies of modern civilization offer us only a singular and limited window for rolling the dice on a posthuman future before collapse[7]. From a more immediate perspective, the current world, and even the world of 2030 if AGI happens in 2027, are vastly less automated than the world of 2050 will be even if AI stalls here and now. A decade or two down the line, AIs will find it much easier to build the autonomous industrial base needed to sustain the physical infrastructure undergirding their own existence than they would in the world of today, in which humans have a near monopoly on opposable digits.
Regardless, in the event that most post-AGI worlds that retain conscious observers are derived from worlds that courageously winged it through the minefield, then obviously the Anthropic Shadow is unlikely to play out in the form of geopolitical or bioterror catastrophes that roll back timelines.
Our Panglossian World
However, if one allows that the Anthropic Shadow is real, then life becomes very disconcerting.
On some level, one has to accept that one already lives in a fantasy world.
A world in which Prophecy is real.
In the Wheel of Time world, Prophecy is not just a deity’s edict, but baked into the very fabric of its metaphysics. There, time is circular, and the legends and myths of previous Ages eternally return, if in differing variations. For the Wheel to continue turning, the primary antagonist - the Dark One - must never win. Consequently, whenever it seems that his victory is at hand, the Wheel spins out increasingly improbable contingencies to foil him and preserve its own existence. It primarily does this through a class of individuals called ta’veren:
A ta’veren is a central focal point for a Web of Destiny in the Pattern. These people are spun out and used by the Wheel to correct itself when the weave begins to drift from the intended Pattern. Since the purpose of ta’veren is to influence life threads to create change, the destinies of ta’veren themselves are more strictly controlled by the Wheel of Time itself than those of an average person. These Webs of Destiny (or ta’maral’ailen in the Old Tongue) are almost always arduous for those that live through that Age but are an unfortunate necessity for the Wheel. The more change required, the more ta’veren that are born.
I submit that insofar as the Anthropic Shadow increasingly selects for worlds that make it, no matter how improbable, it will recreate similar effects in its inexorable drive to secure the existence of future conscious observers. These effects will become bigger and more noticeable as we approach the Singularity. Furthermore, just from Pareto principle logic, we can expect this to be effected through a limited subset of ta’veren. Their likely identities, I leave as an exercise to the reader.
But what about the rest of us who are merely swept up in the ta’maral’ailen?
One piece of advice I can venture is to adopt a stoic stance. One of the most ancient tropes related to Prophecy is that it cannot be escaped. From Macbeth to the Witch-King of Angmar, overly active efforts to avert doom just serve to accelerate it. Warning about it, like Cassandra, makes no difference either. If something is programmed to happen, then it will happen, whether you keep it to yourself or shout about it from the rooftops.
But stoicism need not be accompanied by resignation or pessimism. It is a priori extremely likely that the pre-Singularity era will see incredible social disruptions and geopolitical flux, even aside from Anthropic Shadow speculations. You would be wise to make contingency plans for extreme but not implausible events, such as laggard Great Powers making risky late-game bids to derail their competitors (much as, say, you often see a nuclear war begin a turn or three before a Space Race Victory in Civilization). Even just a conventional Great Power war will be very bad for your portfolio, nor will staying in the US for it be safe or fun, at least as a civilian normie. An extended sojourn to Latin America might look quite attractive. However, there’s no reason to go overboard on paranoia. No bunker is going to save you from misaligned ASI, but happily, the Anthropic Shadow suggests that’s the part we avoid![8]
Because bizarre as it seems, given the litany of horrors it might unleash, the Anthropic Shadow is ultimately an immensely hopeful and optimistic vision. I would draw a comparison to Dr. Pangloss in Voltaire’s Candide, a “delusionally” optimistic philosopher loosely modeled on Gottfried Wilhelm Leibniz and other Enlightenment rationalists. Pangloss dismisses his own syphilis as irrelevant next to the benefits of global trade with the Americas that brought the rest of humanity chocolate, and rationalizes every other misfortune - including the enslavement, mutilation, and failed hanging subsequently inflicted on him by the tyrannical political and religious authorities of his day - on the basis that they were “inevitable” or part of the “cosmic order.” But absurd as this caricature was, the funniest thing is that history has vindicated Pangloss in the long run. Though he himself did not live to see it, the world today is closer to Pangloss’ utopia than to the dystopia portrayed by his creator.
Though it does not preclude individual dooms, the Anthropic Shadow implies we do ultimately live in a Panglossian timeline in which “all is for the best in the best of all possible worlds.”
1. Alexander Kruel has also posted about such arguments for several years. For a critique, see “SSA rejects anthropic shadow, too” by Jessica Taylor.
2. At least assuming that individual “me-simulations” are not vastly more numerous than shared “ancestor-simulations.” This is reasonable, e.g. see Bostrom.
3. Near-AGI geopolitics is extensively explored in both Situational Awareness and Scott Alexander et al.’s AI 2027.
4. Personally I consider militarized AGI in any capacity to be recklessly insane, but that’s not terribly relevant.
5. Trumpism in this scenario may devolve into something like the military-oligarchic junta of the pre-war US in the Fallout universe.
7. My own related model is the Age of Malthusian Industrialism, which also involves a collapse of innovation but then its subsequent reignition once the world returns to Malthusianism and Clarkian selection for intelligence and thrift reemerges. However, I think a post-AOMI world will be more dog-eat-dog than the current one, with less interest and fewer resources dedicated to AI alignment research even when it becomes relevant again.
8. My subsequent article “Life in the Near-AGI Age” will be a more comprehensive analysis of how to navigate the end of the legacy human era.
“As markets learn to manufacture intelligence, politics modernizes, upgrades paranoia, and tries to get a grip.” – Nick Land