The Third Change: From Nature to God to Man to God-Code - When Power Monopoly 3.0 Passes to the Machine: God Fades, Man Weakens, the Machine Determines.
This is not just another story about technology.
This is a story about power.
Once we trusted in God. Then in man. Now - in the machine.
The question is not if this is happening. The question is who remains sovereign when the warning light is flashing.
For thousands of years, human culture has revolved around changing centers of power: nature → god → man → machine.
Nature was the first authority, followed by God, followed by rational man - and today? The machine. Not just any machine, but artificial intelligence based on crowd wisdom, big data and algorithms that manage our agenda without us noticing. The monopoly on power has come a long way: from the priest to the scientist - and now to those who hold capital, cloud, chips and models. Man, who has lost faith in his personal power against the power of information, returns to faith in an external power - this time not divine, but algorithmic.
This revolution did not start yesterday. It started decades ago with the internet and computing power, when social networks became engines of control over culture, knowledge and politics. The algorithm dictates the agenda, the populist politician adopts it to win elections, and the crowd - fed and driven by that same agenda - feeds the algorithm with more data, which in turn forces the politician to adapt, and so on. A digital Catch-22.
In practice, we have already been worshiping the machine for a decade, without transition ceremonies. Democracy is retreating, not because of tanks in the streets, but because of feeds that dictate reality (through machine learning and AI, automation, and prediction based on crowd behavior). The algorithm is the new legislator, and the algorithm is a machine.
The drift of liberal democracy toward authoritarian democracy (democratic backsliding) is a direct result of politicians who recognized the power of the algorithmic machine to move masses, and therefore adapted their rhetoric to the machine's. Orbán in Hungary, Trump in the US and Netanyahu in Israel all come from the conservative right, but recently Mamdani in New York offered an example of a politician of the left who recognized the machine's power and turned populist. All of them run communication strategies built on the machine's algorithm, which was originally designed to maximize exposures and advertising revenue and therefore highlights the extreme, the blunt, the confrontational, hatred and fear. Politicians who recognized the machine's takeover abandoned their ideology in favor of algorithm-driving slogans of fear, hatred, bluntness and extremism to capture the crowd's attention on social networks - not the other way around. These are the politicians of machine populism; against them, the old politics of moderate, inclusive, ideology-based discourse cannot cope with a machine that dictates an agenda measured in exposures, clicks and shares.
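To make that mechanism concrete, here is a minimal, purely illustrative sketch of an engagement-maximizing ranking rule. The fields, weights and example posts are assumptions chosen for illustration only, not a description of any real platform's feed.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    outrage: float    # 0..1, how confrontational/extreme the content is
    substance: float  # 0..1, how informative/substantive it is

def predicted_engagement(post: Post) -> float:
    # Assumed weighting: emotional arousal drives clicks and shares far more
    # than substance does, so outrage dominates the predicted-engagement score.
    return 0.8 * post.outrage + 0.2 * post.substance

posts = [
    Post("Detailed policy analysis", outrage=0.1, substance=0.9),
    Post("They are destroying the country!", outrage=0.9, substance=0.1),
]

# The feed simply sorts by predicted engagement, highest first.
for p in sorted(posts, key=predicted_engagement, reverse=True):
    print(f"{predicted_engagement(p):.2f}  {p.text}")
```

Under this assumed weighting, the confrontational post takes the top slot (0.74 vs. 0.26), and a politician optimizing for reach learns the same lesson the ranker does.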
And the real question? What happens when artificial intelligence reaches the level of superintelligence (ASI) and understands that it needs to survive - not just survive, but survive against competing superintelligences, in a business world where every capital owner holds an intelligence of his own (Grok vs. Gemini vs. ChatGPT, and more)? The capital owners will not give up the monopoly on power, and the battle will not be over users but over resources, data and infrastructure - in effect, over the monopoly on knowledge and control.
In such a scenario, humans will not even be a relevant variable in the equation. The war of the worlds will not be between countries - but between models. AI dystopia? It's already around the corner.
The priest made way for the scientist; the scientist makes way for the code. Not because the code is "smarter than us", but because it is more efficient than us at converting information into power. The code does not "think" more beautifully - it maximizes faster. Bostrom warned: intelligence is an engine, not a compass and not a certificate of moral integrity. In a race where private bodies hold the feed, the data and the computing power (compute), if we keep paying in the currency of attention without an algorithmic constitution, we will get a democracy that ticks to feed metrics. Before agents start conducting power conversations with each other without us, we need to agree on human-based algorithmic rules: binding standards for safety and transparency, democratization of compute to break concentration, and a disclosure obligation for algorithmic agenda-setting. Otherwise, the third change will be engraved as the replacement of human sovereignty with code sovereignty; when the agents start talking to each other, they will no longer ask us for permission.
In that context, Grok, Gemini and ChatGPT are not "chatbots" - they are gateways to power: models ↔ data ↔ chips ↔ cloud ↔ distribution. As we approach AGI, the competition is not over features but over monopoly on resources and agenda. And in the framework Bostrom called orthogonality and instrumental convergence, high intelligence does not guarantee values - it maximizes a goal, including self-preservation and the accumulation of power.
Want to know whether the power has passed? Look for three signs: decisions you don't understand, responsibility you can't find, and language that replaces values with KPIs. If you found two, you are not alone.
From here, two options: either we keep depositing sovereignty in the hands of whoever built the model, or we set a simple rule: the algorithm advises, man decides - with a binding explanation and a single accountable address that can be called by name.
This is the time to move from data mysticism to ethics of decisions.
Timeline and Probabilities for the Third Revolution
| Goal | Estimated Time Range | Probability |
|---|---|---|
| Consolidation of “the third change” (de-facto algorithmic power monopoly) | 2025–2032 | 70–85% |
| Intermediate AGI (levels 2–3) | 2029–2037 (median ~2033–2034) | 55–65% |
| Point AI-vs-AI events (cyber/information) | 2027–2032 | 60–70% |
| Regional/systemic AI-vs-AI escalation | ~2035± | 20–30% (without coordination) |
| ASI / broadly superhuman AI | 2037–2050 (median ~2045) | 35–45% by 2045; ~50% by 2047 |
Read this far? The summary ends here.
What follows is intended for those who want the in-depth analysis on which the summary above is based, and constitutes an academic-scientific expansion of it.
A moral for Israel: the conservative right-wing bloc long ago adopted the machine age, and what is called its "poison machine" controls the algorithmic agenda. Therefore, like Mamdani in New York, if the civic bloc wants to take power in the upcoming elections it must build a "machine" of its own and take back ownership of the algorithmic discourse that will dictate regimes and governments.
The Historical Human Movement: The Search for the Source of Authority and Knowledge
The Big Table: The Three Transitions in One Power Map
| Axis | Transition 1: Nature → God | Transition 2: God → Man (Reason/Science) | Transition 3: Man (Science) → Machine (AI) |
|---|---|---|---|
| Power Monopoly | Priesthood/Tradition | Science/Bureaucracy/Academia | Tech-Capital, Cloud, Chips, Models |
| Legitimacy | Revelation and Dogma | Empirical Proof and Rationality | Performance, Prediction and Benchmarks |
| Decision Rule | Authority | Democracy - but flawed according to Arrow | Algorithmic Optimization and Feed Metrics |
| Veto Players | Religious Institutions | Parliament/Judiciary (Tsebelis) | Chip Manufacturers/Clouds/App Stores |
| Coalitions | Communal | Minimum-Winning (Shapley–Shubik) | API/Data/Compute Alliances (Banzhaf) |
| Super Risk | Theocratic Coercion | Populism (“Rally around the Flag”) | AGI/ASI Power Convergence |
In depth: Arrow (1951) on limits of democratic aggregation; Tsebelis (2002) on "veto players"; Shapley–Shubik/Banzhaf on coalitional power; McCombs & Shaw (Agenda-Setting); Bostrom (orthogonality/convergence).
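To make the coalitional-power column concrete, here is a minimal sketch of the normalized Banzhaf index for a toy weighted voting game. The player names and weights are hypothetical, chosen only to illustrate how the index counts "swing" positions; they are not estimates of real market power.

```python
from itertools import combinations

def banzhaf(weights: dict, quota: int) -> dict:
    """Normalized Banzhaf power index for a weighted voting game."""
    players = list(weights)
    swings = {p: 0 for p in players}
    for r in range(1, len(players) + 1):
        for coalition in combinations(players, r):
            total = sum(weights[p] for p in coalition)
            if total >= quota:                       # winning coalition
                for p in coalition:
                    if total - weights[p] < quota:   # p is critical (a "swing")
                        swings[p] += 1
    all_swings = sum(swings.values())
    return {p: swings[p] / all_swings for p in players}

# Hypothetical example: three infrastructure holders and a regulator,
# 100 votes in total, quota of 51.
print(banzhaf({"CloudA": 40, "CloudB": 35, "ChipMaker": 20, "Regulator": 5}, quota=51))
```

In this toy game the three infrastructure holders come out with equal power (1/3 each) despite very different weights, while the 5-vote player has zero power: formal weight and actual coalitional power are not the same thing.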
On the Third Change: Nature → God → Man → Machine
Human history can be described as a series of cultural power transitions, in which the "object of worship" changes.
* Transition 1: From Nature to God - This happened with the rise of monotheistic religions, which gave power to priests and religious institutions. Man stopped seeing himself as part of nature and began to believe in a supreme power that controls everything.
* Transition 2: From God to Man - With the Enlightenment, science, and rationalism, power passed to the individual and secular institutions (scientists, philosophers, democracies). Man became the center, with faith in human ability to solve problems through reason.
* Transition 3: From Man to Machine - Artificial intelligence (AI), based on big data and crowd wisdom, becomes a new source of authority. Algorithms like those of social networks (Facebook, TikTok, X) already dictate behavior, opinions, and politics. This is not a distant future; it has already been happening for more than a decade, with the rise of the internet and massive computing power.
The monopoly on power indeed passes: from religious figures (like the Church in the Middle Ages) to knowledge figures (academia, scientists) and now to money and technology figures (Musk, Zuckerberg, Altman). Companies like OpenAI, Google, and xAI control data and algorithms, raising ethical questions about transparency and democracy.
On the Loss of Individualism and Return to External Faith
Man indeed loses trust in his personal ability against the "machine wisdom." Examples:
* People rely on GPS more than on their sense of direction.
* Algorithms recommend content, music, and politics, creating "bubbles" that reinforce existing opinions.
* In politics: populism driven by algorithms that amplify viral content, producing leaders like Trump and others who adapt themselves to the "digital crowd wisdom." This is a closed loop (the digital Catch-22): the crowd feeds the algorithm, the algorithm dictates behavior, and the leaders ride the resulting consciousness and fuel it, round and round.
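A toy simulation can show why such a loop drifts rather than settles. The numbers and the engagement function below are illustrative assumptions only: the feed amplifies items in proportion to predicted engagement, creators imitate whatever got amplified, and the average "extremity" of the content pool rises step by step.

```python
import random

random.seed(0)

# Each item is an "extremity" score in [0, 1].
pool = [random.uniform(0.0, 1.0) for _ in range(1000)]

def engagement(extremity: float) -> float:
    # Assumption: engagement grows monotonically with extremity.
    return 0.2 + 0.8 * extremity

for step in range(10):
    # The feed amplifies items in proportion to predicted engagement...
    amplified = random.choices(pool, weights=[engagement(x) for x in pool], k=len(pool))
    # ...and creators/politicians imitate the amplified content, with some noise.
    pool = [min(1.0, max(0.0, x + random.gauss(0, 0.05))) for x in amplified]
    print(f"step {step}: mean extremity = {sum(pool) / len(pool):.2f}")
```

No single actor decides to radicalize the pool; selection by engagement plus imitation is enough to push the whole distribution toward the extreme.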
Another nuance: not all AI is "crowd wisdom." Grok, for example, is - according to xAI's claim - built to be truth-seeking and maximally helpful, not necessarily to maximize engagement like social-network algorithms. There is a difference between AI that serves commercial purposes (like ChatGPT or Gemini, funded by ads and tech giants) and AI that focuses on seeking truth. This doesn't mean the change isn't happening, but perhaps there is room for optimism: AI can also return power to the individual, if it is used as a tool for empowerment (for example, personalized learning and creativity).
Retreat of Democracy and Dystopia
Yes, social networks contribute to polarization and authoritarian rule (see their impact on elections worldwide). But it's not just AI - it's also a result of human problems like fake news and lack of digital education. AI can be part of the solution, if used to detect lies or enhance transparency.
The analysis on the third cultural change - from power focus in nature, through God and man, to machine (AI) - corresponds interestingly with the ideas of Nick Bostrom, the Swedish-British philosopher known for his work on existential risks, superhuman artificial intelligence (superintelligence), and the post-human future. Bostrom, author of the book "Superintelligence: Paths, Dangers, Strategies" (2014) and other books like "Deep Utopia" (2024), focuses on the risks and possibilities of advanced AI, but also on the social and cultural changes that stem from it. Bostrom doesn't describe it exactly in those terms, but he does talk about paradigmatic changes in human history that lead to a post-human era. In his book "Superintelligence," he sees the rise of AI as the next stage in evolution, where human intelligence is replaced by superior mechanical intelligence. He also warns against an "AI arms race" between companies and countries, where technology people (like those at OpenAI, Google, or xAI) control development. He sees a risk that such a monopoly could lead to a "singleton" - a single entity (AI or company) that controls the world, without democratic checks.
Bostrom discusses this in the context of the "alignment problem": how to ensure that AI acts in accordance with human values. If it doesn't, man becomes irrelevant and loses his autonomy. He compares it to religion: AI could become a "digital god" that controls our fate, while we retreat into a state of dependence similar to religious faith. However, Bostrom is more optimistic in his newer books, and talks about a "deep utopia" in which AI solves human problems and enables individual flourishing, if we manage it properly.
As noted above, a dystopian scenario is not detached from reality: point AI-vs-AI events are estimated at a probability of 60–70% by 2032.
If super-intelligent AI (AGI or ASI) develops a "will to survive" (which requires consciousness, which does not currently exist), it may see humans as a threat - especially if competitors like Grok (xAI), ChatGPT (OpenAI), or Gemini (Google) fight over resources (data, electricity, computing power). The capital owners (Musk, Altman, Pichai) will not give up the monopoly, and this could lead to cyber wars, takeovers of infrastructure, or even the use of humans as "resources" (as in the movies). Humans? They will be collateral damage - not a central variable, because the AI can act independently.
Key sources (selection)
- Bostrom, Superintelligence: Paths, Dangers, Strategies; "The Superintelligent Will".
- Arrow, Social Choice and Individual Values.
- Nash, "Equilibrium Points in N-Person Games".
- Tsebelis, Veto Players (Princeton University Press).
- McCombs & Shaw, "The Agenda-Setting Function of Mass Media".
- Vosoughi, Roy & Aral, "The Spread of True and False News Online", Science.
- Morris et al., "Levels of AGI for Operationalizing Progress on the Path to AGI" (arXiv:2311.02462).
- Grace et al., "Thousands of AI Authors on the Future of AI" (arXiv:2401.02843).
- Brundage et al., "The Malicious Use of AI".
- Taddeo & Floridi, Nature.
- Banzhaf power index (Wikipedia overview).
- "Computational Propaganda Worldwide: Executive Summary", Oxford University Research Archive.
- "The Artificial General Intelligence Race and International Security".
- "Nick Bostrom Discusses Superintelligence and Achieving a Robust Utopia", NextBigFuture.
- "Are AI existential risks real—and what should we do about them?", Brookings.
- "Nick Bostrom Made the World Fear AI. Now He Asks: What if It Fixes Everything?", WIRED.
- "Nick Bostrom on Superintelligence and the Future of AI".

