AI is forcing a complete reimagining and reconfiguration of education

A new study says AI is reviving ideological competition and forcing education to take on a new civic role.

Written By: Shy Cohen
Edited By: Joseph Shavit
New research argues AI is reshaping political legitimacy, national power, trust, and education worldwide. (CREDIT: Shutterstock)

A little more than three years after ChatGPT went public, the argument over artificial intelligence is no longer just about better tools or faster work. In a new paper, Professor Yilei Shao of East China Normal University argues that AI is starting to reopen a question many once thought was settled: which kind of political system can claim legitimacy in the modern world.

That claim sits at the center of a study published in ECNU Review of Education. Shao revisits Francis Fukuyama’s famous “End of History” thesis, the idea that liberal democracy had emerged from the Cold War as the final form of political and ideological development. Instead of reinforcing that outcome, Shao argues, recent AI breakthroughs are creating new alternatives, new tensions, and a widening gap between fast-moving technology and slower-moving institutions.

The paper frames the present moment as an “intelligence transition period.” Since late 2022, foundation models such as ChatGPT and DeepSeek have spread quickly, while the cost of reasoning and generation from large models has fallen and intelligent systems have moved deeper into industry and public life.

At the same time, governments and global bodies have started setting policy. The White House released America’s AI Action Plan in 2025. China’s State Council issued guidance in August 2025 to deepen its “AI+” initiative. The United Nations General Assembly passed its first global AI resolution in March 2024.

The three elements of the new national capacities. (CREDIT: ECNU Review of Education)

Where the old argument starts to crack

Shao’s point is that this wave of AI has not produced a stronger democratic consensus. It has done the opposite.

The paper argues that AI systems now do more than assist people. They shape attention, influence narratives, and are approaching the ability to carry out tasks on their own. In that setting, the key political contest is no longer only about ideals and institutions. It is also about technology and governance.

Shao describes two competing paths. One is a “liberal–algorithmic” model built around openness, transparency, privacy protection, and public participation. The other is an “authoritarian–algorithmic” model based on centralized data collection, black-box decisions, and uniform order over personal expression. In that sense, AI becomes more than a market technology. It starts to look like infrastructure for ideological competition.

The study also argues that national power is being redefined. Instead of relying only on industrial output, revenue, or military strength, countries are increasingly shaped by three strategic resources: compute, data, and intelligence.

A new kind of national power

Shao compares compute to coal and steel in the 19th century, data to oil in the 20th, and intelligence to a foundational force that will circulate through society in the decades ahead. Countries with strength in those areas, the paper says, can gain what it calls a new kind of “political surplus,” not from shipping more goods, but from shaping institutions, standards, and public narratives.

Technology–governance dual curve and the legitimacy gap. (CREDIT: ECNU Review of Education)

That shift also changes how individual people fit into society. Fukuyama’s “last man” was a figure dulled by comfort and consumption. Shao says AI raises a sharper possibility: the rise of the “predicted man,” a person made transparent by data trails, behavioral profiling, recommendation systems, credit scoring, and risk models.

In this account, freedom does not simply shrink through force. It narrows through prediction. People may find themselves guided by systems that seem efficient and convenient, while the space for uncertainty, self-direction, and resistance quietly weakens.

Trust becomes the real battleground

The paper argues that the biggest institutional problem in the AI era is trust. Technology has advanced quickly, from content generation to systems that steer attention and organize action. Governance has not kept up.

Shao points to a 2025 study by KPMG and the University of Melbourne covering 47 countries and 48,000 respondents. It found that although most people remained optimistic about AI, only about 46% said they were willing to trust AI systems. The public, the paper says, broadly believes AI needs oversight and governance.

That helps explain why major AI companies are increasingly competing not just on capability, but on trust. The paper cites Google’s use of risk taxonomies, prerelease evaluations, and mitigation measures, OpenAI’s global consultation on its Model Specification, and Anthropic’s Constitutional AI approach.

“Can we trust an AI?” Shao presents this as one of the central institutional questions of the age.

The answer, the paper suggests, lies in what it calls “algorithmic constitutionalism.” Instead of asking only who governs, societies also need to ask how algorithms govern, whether their decisions can be explained, audited, corrected, and appealed.

Latour's actor-network of trust across science, AI, and society. (CREDIT: ECNU Review of Education)

Why the paper turns to education

Shao ultimately places education at the center of the response. The study says AI has opened three structural gaps: an interpretive gap, where facts multiply but meaning thins out; a normative gap, where models can act before societies decide what they should do; and an order gap, where institutions lack everyday ways to challenge or repair algorithmic harms.

That is why the paper treats education not as a side issue, but as the main site of repair. “At precisely this moment, more than ever before, we need new forms of explanation, ability, legitimacy, and governance to fill the void of thoughts, trust, and policy,” Shao said. “This is the fundamental reason for the humanities and social sciences, and education to reconstitute themselves as the ‘new-quality infrastructure’ of a human–machine symbiotic society in front of us all.”

The proposed fix is a dual shift in education: civic literacy for AI society, and the ability to work with AI systems without surrendering human agency. “Education must now shoulder the fundamental task of guiding societies through legitimacy crises, rebuilding public trust, and cultivating a new civic literacy for the AI era,” Shao concluded.

Practical implications of the research

The paper suggests that schools, universities, and lifelong learning systems may need to do far more than teach technical AI skills.

They may also need to prepare people to question algorithmic decisions, understand their rights, and take part in shaping the rules that govern AI in daily life.

In Shao’s view, the future of AI governance may depend as much on civic education and public trust as on the technology itself.

Research findings are available online in the journal ECNU Review of Education.

The original story "AI is forcing a complete reimagining and reconfiguration of education" is published in The Brighter Side of News.





Shy Cohen
Science and Technology Writer

Shy Cohen is a Washington-based science and technology writer covering advances in artificial intelligence, machine learning, and computer science. He reports news and writes clear, plain-language explainers that examine how emerging technologies shape society. Drawing on decades of experience, including long tenures at Microsoft and work as an independent consultant, he brings an engineering-informed perspective to his reporting. His work focuses on translating complex research and fast-moving developments into accurate, engaging stories, with a methodical, reader-first approach to research, interviews, and verification.