Monday, February 16, 2026

“You can automate processes, but cannot automate trust”

Srikanth Iyengar, CEO of upGrad Enterprise, speaks to us about why competitive advantage lies in responsible deployment, and why leadership mindset is the ultimate barrier.

In the rush to adopt Generative AI, many enterprises prioritize velocity over validity. But according to Srikanth Iyengar, CEO of upGrad Enterprise, this is a strategic error. With AI-focused training demand doubling globally, Iyengar argues that true organizational resilience is built on fairness, not just algorithms. In this exclusive conversation, he details why “equitable AI” is a leadership mandate, how to bridge generational skilling gaps, and why responsible implementation—not just speed—is the ultimate competitive advantage in the talent marketplace. 

As a leader in digital upskilling, how do you ensure your own AI-driven learning platforms are free from bias and offer truly equitable career paths? 

At upGrad Enterprise, equitable AI isn’t a feature; it’s a design principle. Bias doesn’t disappear because you build a smarter algorithm; it disappears when you draw on datasets from a more representative learning ecosystem. Our AI models are trained on diverse learner and enterprise data across industries, levels and geographies, and are continuously benchmarked to ensure they don’t reinforce legacy inequities.

We combine this with human oversight: faculty and mentors act as a counterbalance to purely machine-led decisions. The strongest validation comes from enterprises themselves; over 85% of upGrad Enterprise’s revenue last year was repeat business, which signals that our AI-driven training and learning pathways are actually widening opportunity, not narrowing it. I’ve personally found it fascinating that we see interest in AI from hitherto unexpected segments of the population; one of my colleagues mentioned that his mother, who is not very tech-savvy, uses ChatGPT every day.

We also spoke to HR leaders across India, the UK, the USA and the EU for our upcoming study, ‘The Power Skills Imperative: Global Outlook 2026’, on how soft skills are driving workforce dynamics. Findings on the most valued power skills are consistent globally: problem-solving, adaptability, collaboration, communication and resilience are now considered critical to both personal and organisational success. So, for us, equitable AI means giving every learner a clear line of sight to both technical capability and human capability.
 

How do you convince a CEO client that they cannot build a resilient workforce with AI if they don’t first build for equitable AI? 

We work closely with large Indian conglomerates, GCCs and global enterprises, and we often have an interesting conversation about how one can automate processes but cannot automate trust. And trust comes from fairness. A workforce, especially its younger colleagues, won’t adopt AI if it believes the system is opaque or uneven. That’s why building an AI-enabled organisation starts with building equitable AI practices.

We’re seeing this shift at scale. AI-focused training demand has doubled in the last year across India, the US, Europe and now the Middle East as organisations prepare for AI-led roles, workflows and leadership models. And in every boardroom discussion, there is a strong focus on culture; leaders want AI that strengthens confidence, transparency and mobility within the workforce. Enterprises that use AI to broaden opportunity, not compress it, are the ones building truly resilient talent systems.

What is the greater business liability for an enterprise today: the speed of their AI adoption, or the equity of its implementation? 

Speed without guardrails is the bigger liability. Deploying AI quickly is easy; deploying it responsibly is where real competitive advantage sits. The risk is rarely adoption; it’s uneven implementation. That’s why our partners are not just scaling AI, they’re building capability pipelines around it. From GenAI for software teams to LLM-focused upskilling, QA automation, data storytelling and leadership programs, we’ve built one of the world’s most extensive AI skilling portfolios. This ensures AI isn’t rolled out faster than the organisation’s ability to absorb it.

Our upcoming global report also highlights regional variations in how human capability is prioritised: India over-indexes on problem-solving but under-indexes on influence and empathy; the US prioritises emotional intelligence and teamwork; the UK leans on influence; and the EU lags on creativity and critical thinking.
 

Is the primary barrier to “fair AI” a technological challenge that can be fixed with more training, or is it a leadership and culture challenge? 

Technology is rarely the barrier; leadership mindset usually is. One can retrain models and fix datasets, but unless leaders accept that current datasets most likely carry some element of historical bias, and consciously strive to embed fairness, transparency and inclusive workflows, AI will simply mirror existing patterns.

The most forward-thinking enterprises we work with globally invest as much in leadership readiness as they do in technical readiness, because they know AI transformation is a culture question as much as a capability question.

That’s why we’re building end-to-end solutions, including personalised labs, real-world simulations and Hire-Train-Deploy pipelines, to align capability development with organisational needs. And our upcoming report reinforces this: while awareness of ‘power skills’ varies across regions, recognition of their value is universal. That signals a mindset opportunity, not a technical gap.

Another report from earlier this year, ‘Skilling Smarter: A Strategic Guide to Training Across Generations’, surveyed over 12,300 professionals and revealed important areas of evolution. Many employees still train only when required, and learning preferences differ sharply by generation: Gen Z wants on-demand, immersive formats, while Gen X prefers expert-led models, and organisations are still in the early stages of tailoring experiences accordingly. This is a reflection of where most enterprises are on their maturity journey.

As organisations modernise their skilling strategies, they naturally strengthen the foundation needed for fair and responsible AI adoption. So yes, one can fix a model. But shaping a fair AI system requires leadership maturity, cultural intent and a commitment to building human capability alongside technological capability. 
