With over 25 years of leadership across Fortune 500 companies and more than 200 patents to his name, Mr. Shekhar Natarajan, Founder & CEO of Orchestro.AI, has been at the forefront of building large-scale, real-world technology systems. Today, he is also pioneering the concept of Angelic Intelligence, a forward-thinking approach to AI that embeds ethics, trust, and human values into the very architecture of intelligent systems. For him, the conversation on AI goes beyond optimisation and efficiency; it is about a more fundamental question: what should AI do, and who decides that?
For years, the technology world has operated on a long-held belief: data is power. Organisations have raced to collect, store, and process as much data as possible, assuming that scale alone would determine leadership in AI. To a large extent, that has been true. The more data you have, the better your models, the stronger your market position, and the deeper your influence.
But that phase, as Mr. Shekhar Natarajan points out, is reaching its limits. A new kind of war is silently taking shape. On one side is data power, focused on scale, speed, and optimisation. On the other is moral power, driven by judgment, responsibility, and trust. It may not be a loud or visible conflict, but it will decisively shape the future of AI.
As Mr. Shekhar Natarajan points out, the next phase of AI will not be defined only by who has more data. It will be defined by who decides what that data is used for, and how those decisions impact human lives.
Today’s AI systems are efficient, fast, and highly capable. Yet, they are largely indifferent to consequence. They optimise for what they are told—cost, speed, conversion—without an inherent understanding of fairness or dignity. Ethics is often introduced after the system is built, as a form of compliance.
This is where the emerging tension between data power and moral power becomes impossible to ignore.
Data power pushes for faster decisions, greater automation, and scale at any cost. Moral power demands pause, accountability, and the ability to explain and justify decisions—especially to those most affected by them. One side optimises outcomes while the other questions whether those outcomes are just.
Such amoral optimisation may work in low-stakes environments, but it begins to fail in areas like healthcare, governance, and financial access, where decisions directly impact human lives.
This is why, in Mr. Shekhar Natarajan’s view, the idea of moral power becomes important. Moral power is not abstract philosophy but a design discipline. It is about embedding values such as fairness, dignity, and accountability into the architecture of AI systems from the beginning. Just as a bridge is designed for load-bearing from the start, and not reinforced after cracks appear, AI systems must be built to carry ethical weight from day one. This philosophy forms the basis of the Angelic Intelligence framework he is building, where systems are designed not just to perform, but to act responsibly at scale.
In the Indian context, Mr. Shekhar Natarajan believes this shift has deeper relevance. Our diversity, scale, and socio-economic complexity demand systems that are inclusive by design. A one-size-fits-all algorithm built on narrow datasets cannot serve a billion people fairly. We need systems that are conscious of context and sensitive to impact.
This is not a rejection of data power. Data will always remain critical. But without moral direction, data-driven systems risk becoming efficient tools with unintended consequences.
The real opportunity ahead lies in integrating both—building AI that is intelligent as well as trustworthy. Because in the long run, it is not just the systems with the most data that will lead, but the ones that people can rely on with confidence.
That is the shift we must prepare for.