
Disruptive, not destructive: What AI should be and not be


Yuval Noah Harari’s warning at the 2026 World Economic Forum emphasizes that AI, like writing, is a transformative force that requires a balanced approach to governance, safety, and societal responsibility.

By Romaric Jannel | March 01, 2026
Reading time: 5 min

At the 2026 World Economic Forum Annual Meeting in Davos, Yuval Noah Harari warned that artificial intelligence (AI) could reshape humanity in ways that we may not be prepared to handle. Regardless of whether one shares his outlook, his warning is useful because it changes the way we approach such technologies.


A force for change

AI is not just another tool. It is a force capable of reorganizing knowledge, work, education, and power.

These concerns predate modern computing. When new technologies emerge, societies do not only debate convenience. They also debate the impact of technology on the mind, on authority, and on the bonds that hold a community together. The Greek tradition captured this fear. In Plato’s Phaedrus (274c–275b), Socrates recounts an Egyptian myth in which King Thamus criticizes writing itself. If people can rely on marks on a page, Thamus argues, they will stop exercising their memory. Even worse, they may start to appear wise, able to repeat what they read without doing the hard work of understanding. Writing would be an aid to recollection, not a substitute for wisdom.

Throughout history, writing has proven invaluable. It has expanded knowledge beyond the limits of individual memory, supported science, preserved law, and enabled entire civilizations to coordinate on a large scale. However, the significance of writing goes even further. Writing is connected to cognitive habits and social structures. When knowledge is externalized, who controls access? Who gets to interpret it? Who benefits from it? What happens to the distinction between “having information” and “knowing”?


From writing to AI: The evolution of knowledge

AI is the next chapter of that story, yet with a twist. While writing stores and transmits information, AI can also produce it. Generative systems do not just retrieve facts. They synthesize answers, craft arguments, and imitate expertise with remarkable fluency. Therefore, the risk is not only that individuals “forget,” but also that societies lose their reliable signals of trust. Who actually knows? Who verified it? Who is accountable? When persuasive text, images, audio, and video can be produced at scale, the problem becomes both epistemic and political.

This is one reason why public debate has become so polarized. In one camp are catastrophic narratives: AI is an existential threat. In the other camp are dismissive narratives: AI is a wonderful productivity and efficiency booster, and critics are irrational or nostalgic. However, both sides miss something important. Disruption is real and sometimes necessary, but disruption without safeguards can be destructive. The relevant question is not “Should we use AI?” but rather, “Which capabilities should be deployed, by whom, under what constraints, for whose benefit, and with what recourse when harm occurs?”


AI's impact beyond chatbots: A deeper challenge

There is another reason why the debate often feels confusing. “AI” is often used as a synonym for chatbots. However, AI is not limited to large language models. Many systems do not generate content to be read, viewed, or listened to. Instead, they produce outputs such as predictions, recommendations, or decisions that influence real or virtual environments. While these systems may not appear wise, they can automate judgment and concentrate power.

This broader perspective changes the definition of responsible governance. If AI were only about, for instance, synthetic text, the main focus would be provenance, authenticity, and quality. However, when AI is used to rank job candidates, flag “risk,” or control machines, governance must cover not only what AI says, but also what it does, as well as the implicit decisions it makes on our behalf.


AI and governance: Ensuring oversight

The key to a sustainable approach is finding the right balance between panic and complacency. This means developing and implementing AI while treating governance as an integral part of the product, not an afterthought. Furthermore, governance cannot be a single, universal set of rules descending from “above.” In practice, it is a layered system comprising technical, institutional, and social elements because different actors control different levels.

Start with the technical layer. Safety by design is an ethical engineering practice: it involves rigorous pre-release evaluations and security measures to prevent theft or abuse. However, safety by design must also address decision-oriented AI systems. This includes robustness, defenses against manipulation, and monitoring. Systems should support rather than replace human judgment, providing clear signals of uncertainty and override paths when needed.

Next is the institutional layer. Markets reward speed, and in such a configuration, relying on voluntary “principles” is rarely sufficient. In high-impact areas such as security and healthcare, institutions require clear standards, including transparency regarding how systems are used and accountability for predictable harm. Governance also involves processes such as impact assessments before deployment and practical contestability: an intelligible process by which a person can challenge an outcome and have it reviewed by a human. When serious failures occur, institutions should treat them as safety incidents.

Finally, the social layer. King Thamus did not try to ban writing. Rather, he warned that writing cannot replace human memory or teaching that fosters real understanding. In the AI era, it is necessary to “teach verification”: how to check claims, trace sources, detect manipulation, and understand what these systems can and cannot guarantee. However, when AI makes decisions rather than merely generating content, verification is not enough. People also need literacy in rights and recourse.

Education is a clear example of the stakes. When used effectively, AI can tutor students and provide practice exercises at the appropriate level. However, when used poorly, AI becomes an essay machine that rewards surface fluency and trains students to outsource thinking: Thamus’ fear, updated for the age of AI.


Striking the right balance: Innovation with responsibility

The point is not to demonize or glorify AI as an inevitable step in progress. It is to emphasize a distinction that is often overlooked: disruption can be valuable, but destruction is optional. AI should be as disruptive as writing and printing were. However, it should not destroy trust, accountability, human agency, or the condition of human life itself. The essential task is to design and achieve a feasible compromise: innovation with safeguards, power with responsibility, and progress compatible with an interconnected world.

    • Romaric Jannel
      French US AI expert