AI meets morality: Scientists are rethinking technology in ‘smart cities’

Researchers propose a moral framework to guide AI decisions in smart cities, ensuring fairness, safety, and transparency.


A new ethics model aims to teach artificial intelligence in smart cities how to make moral decisions that reflect human values. (CREDIT: Shutterstock)

Every streetlight, traffic camera, and trash can in tomorrow's cities could be part of one massive digital nervous system. Already, these devices record data on traffic, air quality, and even trash to make life more efficient. Yet, as cities get "smarter," the greatest challenge is not merely amassing data — it's figuring out how to get technology to make moral choices.

That's what philosophers Daniel Shussett and Veljko Dubljević set out to examine in their study, "Applying the Agent-Deed-Consequence (ADC) Model to Smart City Ethics," published in the journal Algorithms. Their paper asks a fundamental question: how can cities ensure artificial intelligence behaves sensibly and reflects human values?

As cities adopt smart systems to oversee everything from traffic management to law enforcement, the authors argue that technology itself must be guided by moral judgment. Smart cities, they say, need more than sensors and servers — they need a conscience.

Redefining What Makes a City "Smart"

"Smart city" has been a catchphrase for data-driven, automated urban hubs for decades. City planners typically describe the phrase as cities that use digital technologies to upgrade services and the quality of life. Shussett and Dubljević, however, warn that the name can be misleading. A city can be teeming with technology and still make unethical decisions if the technology is blind to fairness, inclusion, and sustainability.

"Smart city" has been a catchphrase for data-driven, automated urban hubs for decades. (CREDIT: Shutterstock)

Critics have identified four ethical fault lines: privacy and surveillance, democracy and decision-making, social inequality, and environmental sustainability. Each demands judgment that goes beyond algorithmic calculation. The researchers assert these questions need moral reasoning, not purely technical fixes.

The Agent-Deed-Consequence Model

At the heart of the study is the ADC model, which combines three ethical traditions into one framework. It draws on virtue ethics (the moral character of the agent), deontology (whether the action adheres to moral rules), and utilitarianism (how the action's consequences affect others).

The model separates moral judgment into three parts:

  • Agent: Who is performing the action, and what is their intention?
  • Deed: What is done, and is it within ethical parameters?
  • Consequence: What are the effects, and who loses or benefits?

Each component is assigned a value reflecting its moral significance. Those values are then combined into a single judgment as to whether an act is right or wrong. The result is a moral decision-making process that can be measured, something algorithms can be programmed to follow.
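
To make the combination step concrete, here is a minimal sketch of how such a judgment could be computed. The scoring scheme, weights, and names are illustrative assumptions, not the paper's actual formalism:

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    """One moral judgment split into the three ADC components.
    Scores are signed: positive is morally favorable, negative
    is morally unfavorable. All values here are illustrative."""
    agent: float        # character and intention of the actor
    deed: float         # whether the act follows moral rules
    consequence: float  # net effect on those affected

def adc_judgment(e: Evaluation, w_agent: float = 1.0,
                 w_deed: float = 1.0, w_consequence: float = 1.0) -> str:
    """Combine weighted component scores into a single verdict.
    A weighted sum stands in for whatever aggregation a real
    deployment would have to justify and calibrate."""
    total = (w_agent * e.agent + w_deed * e.deed
             + w_consequence * e.consequence)
    return "acceptable" if total > 0 else "unacceptable"

# A legitimate emergency request scores well on all three parts:
print(adc_judgment(Evaluation(agent=1.0, deed=1.0, consequence=1.0)))
# A spoofed request fails on agent and deed, so the verdict flips:
print(adc_judgment(Evaluation(agent=-1.0, deed=-1.0, consequence=-0.5)))
```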

Pre-history and history of smart cities. (CREDIT: MDPI Algorithms)

"The ADC model enables us to encode not only what is the case, but what is to be done," Shussett, a postdoctoral researcher at North Carolina State University, says. "That's significant because it leads to action and enables AI systems to choose between legitimate and illegitimate requests."

Turning Moral Reasoning Into Code

Using a form of logic called deontic logic, the researchers translate the ADC model's moral reasoning into mathematical formulas that a machine can interpret. This allows AI systems to make decisions in line with human ethics without sacrificing autonomy.

Dubljević, a professor of philosophy at NC State, uses a simple example to show the value of this approach: if an ambulance with flashing lights is approaching an intersection, an AI controlling the traffic signals may recognize it as a legitimate emergency and change the lights. But if someone mounts fake flashing lights in a bid to fool the system, the AI should refuse to comply.

"With humans, you can inform them what needs to and ought not to be done," Dubljević describes. "Yet computers need to have a chain of reasoning that explains that logic. The ADC model allows us to create that formula."

Symbolic representation of the ADC model and deontic operators. (CREDIT: MDPI Algorithms)

Bringing Ethics to Everyday Urban Systems

Smart cities rely on sensors, surveillance, and automated responses, often in situations with ethical ramifications. Should AI-activated cameras alert the police to loud noises that sound like gunshots? What if the system is wrong and innocent people are harassed?

These are the kinds of ethical gray areas that Shussett and Dubljević hope cities will think through carefully. The ADC model helps governments formalize how technology should behave in situations where intent, action, and outcome are all significant.

In public safety scenarios, for example, more invasive surveillance could be justified when there is an immediate threat. But when enforcing minor infractions, like littering or parking violations, the same level of intrusion would be unethical. The model helps differentiate between these contexts.
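
As a toy illustration of that context-sensitivity (the rule and scores are invented for illustration, not drawn from the study), the same level of intrusion can flip from acceptable to unacceptable as the severity of the threat changes:

```python
def proportionality_check(intrusiveness: float, threat_severity: float) -> bool:
    """Illustrative rule: an intrusive measure is acceptable only when
    the threat it addresses is at least as severe as the intrusion."""
    return threat_severity >= intrusiveness

SURVEILLANCE_LEVEL = 0.8  # hypothetical score for highly invasive monitoring

print(proportionality_check(SURVEILLANCE_LEVEL, threat_severity=0.9))  # immediate threat: True
print(proportionality_check(SURVEILLANCE_LEVEL, threat_severity=0.1))  # littering: False
```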

Keeping Humans in the Loop

One of the most important lessons from the study is that technology should never replace human judgment. Instead, the ADC model keeps people central to moral decision-making while still drawing on AI's efficiency and consistency.

Practically, that would mean automated systems handling routine cases — like managing traffic lights or balancing energy use — and sending borderline cases to human overseers. The hybrid approach pairs automation with oversight and offers what Dubljević calls a "moral safety net."
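
A minimal sketch of that routing layer, assuming a signed ADC score like the one above (the escalation band and names are hypothetical):

```python
ESCALATION_BAND = 0.25  # hypothetical: verdicts this close to neutral go to a human

def route_decision(adc_score: float, action: str) -> str:
    """Automate clear-cut cases; escalate borderline ones to a human overseer."""
    if abs(adc_score) < ESCALATION_BAND:
        return f"ESCALATE: '{action}' referred to a human overseer"
    if adc_score > 0:
        return f"EXECUTE: '{action}' approved automatically"
    return f"REFUSE: '{action}' denied automatically"

print(route_decision(0.9, "extend green light for transit bus"))    # routine: automate
print(route_decision(0.1, "dispatch police to suspected gunshot"))  # borderline: human
```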

Ethical Collaborations Between Humans and Machines

The study sees the human-technology relationship as a partnership rather than a hierarchy. In the smart city, humans and AI systems can function as a single decision-making entity. Humans bring empathy and context; AI brings speed and precision.

When the two work together as what the authors call a "group agent," responsibility doesn't disappear; it expands. The city itself becomes a moral agent, with a duty to act in the public's best interest.

This framework could improve how cities respond to emergencies, distribute resources, or react to inequality. By building ethical consideration into the process, cities can make decisions that address both short-term needs and long-term values.

The Road Ahead

While the approach is promising, the researchers recognize that the challenges are formidable. Complex moral principles are tricky to map onto machine logic. Cities must also decide how much weight to give intention, action, and outcome in a given situation. And, crucially, systems need clear boundaries for when humans should take over.

Still, the authors are optimistic. Their first order of business is to run simulations across a range of technologies, from transportation to surveillance, to determine whether the model yields consistent, explainable results. If it does, they hope to test it on real city systems.

To quote Dubljević, "Our work provides a roadmap for how we can both specify what an AI's values ought to be and actually encode those values in the system."

Practical Implications of the Research

The ADC model could reshape how cities plan for and run technology. By embedding ethical consideration into AI, policymakers can create systems that are not only operationally effective but also promote fairness, privacy, and sustainability. That could mean smarter policing software, more equitable resource allocation, and greater public trust in automation.

More broadly, it offers a way of humanizing artificial intelligence, of making it more empathetic as it grows more powerful.

Research findings are available online in the journal Algorithms.






Shy Cohen
Science & Technology Writer

Shy Cohen is a Washington-based science and technology writer covering advances in AI, biotech, and beyond. He reports news and writes plain-language explainers that analyze how technological breakthroughs affect readers and society. His work focuses on turning complex research and fast-moving developments into clear, engaging stories. Shy draws on decades of experience, including long tenures at Microsoft and an independent consulting practice, to bridge engineering, product, and business perspectives. He has crafted technical narratives, multi-dimensional due-diligence reports, and executive-level briefs, experience that informs his source-driven journalism and rigorous fact-checking. He studied at the Technion – Israel Institute of Technology and brings a methodical, reader-first approach to research, interviews, and verification. Comfortable with data and documentation, he distills jargon into crisp prose without sacrificing nuance.