India’s AI Impact Summit

What I Heard and Read Between the Lines about the India AI Impact Summit 2026

Last week, India did something unprecedented: it hosted the fourth global AI summit, the first ever held by a Global South nation. The India AI Impact Summit 2026 spanned six days at Bharat Mandapam in New Delhi, drawing over 100 country delegations, 20+ heads of state, and global AI leaders including Sundar Pichai, Sam Altman, Dario Amodei, Demis Hassabis, and Mukesh Ambani.

They all converged on a single question: what does AI look like when 1.5 billion people are part of the equation, and what is in it for them?

I have tracked this space closely through my work in AI deep-tech consulting and AI adoption strategy, and I want to share what I think it means for India, for the enterprise, and for those of us building in this space.

The $250 Billion Infrastructure Bet

The headline number is staggering: over $250 billion in AI infrastructure commitments announced in a single week.

Reliance Industries and Jio committed $110 billion over seven years, funding gigawatt-scale data centres in Jamnagar, a nationwide edge-computing network, and 10 GW of green solar power. Mukesh Ambani’s framing was blunt: “India cannot afford to rent intelligence.”

Adani Group pledged $100 billion by 2035 for renewable-energy-powered, hyperscale AI-ready data centres, expanding AdaniConnex from 2 GW to a 5 GW target.

Microsoft committed $50 billion by the decade’s end to expand AI access across the Global South, with India as a major recipient.

Google announced subsea optical fibre cable routes connecting India, the US, and the Southern Hemisphere.

TCS announced OpenAI as the first customer for its new data centre business. This includes 100 MW of AI capacity, which is scalable to 1 GW. This is part of OpenAI’s $500B Stargate initiative.

Larsen & Toubro and Nvidia are building India’s largest gigawatt-scale “AI factory” in Chennai and Mumbai.

These are not token announcements. This is nation-scale infrastructure being laid down.

My take: I don’t think the big conglomerates are delivering intelligence — they’re removing friction. Geopolitical friction. Scaling friction. The bottom layers of this cake, energy and infrastructure, are the critical ones. We’ve already seen the US government push back on its own AI companies, arguing that energy and infrastructure are scarce, that US energy is not for Indian users to consume even via a paid subscription, and that both should instead be diverted to building America’s intelligence edge.

Reliance’s $110B and Adani’s $100B are bets on removing exactly this friction: controlling the compute, energy, and network layers so that India isn’t dependent on renting intelligence from abroad.

India has three structural advantages that make it an attractive infrastructure partner:

  • Democratic values aligned with the West.
  • A peninsular geography that provides abundant water for cooling data centres.
  • Abundant sun in regions like Rajasthan, Gujarat, and Andhra Pradesh for renewable energy.

The OpenAI-TCS Hypervault deal is the first proof point. The AI-Energy-Finance trifecta that the World Bank hosted a session on isn’t a coincidence — it’s the foundational equation.

Sovereign AI: Made-in-India Foundation Models

Under the ₹10,372 cr IndiaAI Mission, India unveiled three sovereign AI model families. This signals a shift from being a consumer of global AI to becoming a creator of indigenous intelligence.

Sarvam AI (Bangalore) launched Sarvam 30B and Sarvam 105B. These models were trained entirely in India from scratch. They were not fine-tuned from foreign models. The 105B model handles complex reasoning with a 128K context window and agentic capabilities. Both support all 22 Indian languages and outperformed several global peers on MMLU-Pro benchmarks.

BharatGen (IIT Bombay consortium) unveiled Param2 17B MoE. It was developed with Nvidia AI Enterprise. The model is optimized for governance, education, healthcare, and agriculture. It is also being open-sourced via Hugging Face.

Gnani.ai launched Vachana TTS — a voice-cloning system that supports 12 Indian languages from under 10 seconds of audio.

My take: Building foundational models for India’s languages, culture, and legal context is genuinely important, and the reasons are obvious. But it’s also partly a convenient wrapper around the real questions. There will be something to lose and something to gain, and it’s not going to be equity for all states:

  • Where will infrastructure be built? Andhra Pradesh, Gujarat, Rajasthan, UP, …
  • What infrastructure essentials will be made in India? Renewables, Chips, …
  • Which infrastructure will be built? Energy, Data Centers, …
  • Who controls the natural resources (land, water)? PPP, Gov, Private, …
  • What do people lose? Land, Agriculture economy size, …
  • What do people gain? Intelligence access, New infrastructure economy, …
  • What does the government gain? Defence autonomy, …

IT Services: Reset, Not Requiem

India’s top IT companies addressed fears of obsolescence head-on — and the narrative was more nuanced than the headlines suggest.

TCS leadership acknowledged that while roles will evolve, the fundamental need for system integrators remains. The real constraint isn’t access to models; it’s structural: organisations are layering AI onto fragmented digital estates built for transactions, not for real-time execution.

Infosys assessed a $300 billion AI opportunity across six sectors. Tata Sons issued a “defend-and-grow” mandate for TCS, accelerating AI acquisitions and up-skilling. The consensus was clear: true scale requires enterprise-wide process re-imagination, not just pilots.

A pragmatic insight that resonated: only 16% of developer time is spent writing code. The other 84% goes to production troubleshooting. That’s where agentic AI’s real value lies. AI won’t kill tech services. It will reset them.

In India, the chief AI officer in four out of five companies is effectively the CEO. Leaders stressed the importance of building on platforms rather than individual models. They emphasised the need for a talent strategy and values-based guardrails. Leaders also encouraged the courage to move from pilots to organisation-wide transformation.

My take: Bolting an AI layer onto existing systems is one way to solve the problem; the other is to re-imagine the enterprise for an AI-first world. Consulting firms in a system-integration or pure-technology consulting role will stay relevant. Nonetheless, for pure software engineering, the demand for speed (in the name of productivity) will increase, which means more failed projects before the light at the end of the tunnel. Consulting that can evolve customers into an AI-first world will succeed, while those merely bolting on capabilities will survive. Consulting companies need to leverage their domain depth and partner on value creation rather than outsourcing for cost or risk. And for AI-driven transformation, the CDO (Chief Digital Officer) is more critical than the CEO.

Five Impressive Products

EkaScribe (https://ekascribe.ai/) — an AI clinical scribe that lets doctors in busy rural clinics see patients without touching a keyboard. It handles prescriptions, history, and filing automatically.

Ottobots (https://ottonomy.io/) — autonomous hospital robots navigating corridors and elevators to deliver medicines independently.

Sarvam Kaze — AI smart spectacles that see what you see and explain the world in your local language via bone conduction. Launching May 2026.

Sarvam Edge (https://www.sarvam.ai/blogs/sarvam-edge) — on-device AI translating 22 languages in real-time with zero internet connectivity.

Mankomb’s “Chewie” (https://www.mankomb.com/chewie) — a kitchen appliance using real-time AI sensors to convert wet waste into nutrient-rich soil in hours.

Cooperation with Clenched Fists

The summit concluded with the New Delhi Declaration, endorsed by 88 countries including the US, China, EU, and UK. It delivered a Charter for the Democratic Diffusion of AI, a Global AI Impact Commons, a Trusted AI Commons, and workforce development playbooks.

But the tensions were palpable. The US delegation made its position explicit: “We totally reject global governance of AI.” The US framed AI squarely as a geopolitical race. Many middle powers used the summit to discuss building their own AI sovereignty. They focused on models, on chips, and on escaping Silicon Valley’s gravity. AI governance is rapidly moving from compliance afterthought to boardroom priority.

The Agentic Shift

The summit’s defining motif was the shift from traditional AI, where you ask and it answers, to agentic AI, where you instruct and it executes. The progression from ML and pattern recognition, through deep learning and generative AI, to AI agents and finally fully autonomous multi-agent systems was framed as the decade’s defining trajectory.

The message was clear: if your systems matter to your business, then AI across the SDLC is not optional.

Where the Value Gets Captured

Here’s the question I kept coming back to throughout the week: India has 1.5 billion walking, talking, naturally general intelligences. This is not just a population — it’s a market that needs expertise augmentation at scale. AI can transform agriculture with crop advisory, revolutionise healthcare with point-of-care diagnostics, and enhance education with personalisation. It can also enable strong but lean digital governance without becoming a surveillance state.

The summit’s “AI for All” framing points in the right direction. But the real test will be whether these infrastructure investments reach the village clinic, the smallholder farm, and the government school.

The summit’s overarching message is unmistakable: India is not just adopting AI. It is building it, governing it, and deploying it at scale. The real question is who captures the value: the infrastructure builders, the model makers, or the domain consultants and integrators who wire intelligence into the last mile and the workflow?

Seems like everyone who will prevent the AI bubble from bursting is going to capture value. The “Planet” should not die in the process.

The Inverse Universe

A story about how the machines stole every job on the planet. Then, humanity finally figured out what it was actually worth.

The Crime Scene

Here’s the thing about the biggest heist in history — nobody called the cops. Nobody even noticed it was happening. One day, you’re grinding your 9-to-5, bragging about your “hustle,” posting your sad desk lunch on Instagram. The next day, a bot does your entire week’s work during its lunch break. Except bots don’t take lunch breaks. That’s the whole problem.

They didn’t come with guns. They came in as helpful assistants.

AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence) rolled into civilization as the best cons always do. It was smiling and helpful, solving your problems and making your life easier. And by the time you looked up from your phone, it had taken everything. Your spreadsheets. Your diagnoses. Your legal briefs. Your music. Your art. Even that one thing you thought made you special at work — yeah, that too. Gone. Automated. Running on a server farm in Iceland that doesn’t even know your name.

The cops weren’t coming because there was no crime. Not technically. The machines didn’t steal your job. They just made it worthless. Which, if you think about it, is way more violent.

So here we are. Seven billion suspects. No victims willing to testify. And one big, ugly question spray-painted on the wall of the 21st century:

If the bots do everything, what’s your alibi for being alive?

The Alibis We Used to Hide Behind

See, for generations, we had the perfect cover story. “I’m busy.” That was the alibi. You dodge your kids. You ghost your parents. You ignore your mental health and avoid every hard conversation in your life. Nobody questioned it because you were productive. Busy was the getaway car, bestie.

Your boss needed you. Your company needed you. The economy needed you. You were a cog, sure, but a necessary cog. And that necessity? That was identity. That was the purpose. That was the thing you whispered to yourself at 2 AM when nothing else made sense.

Then AGI showed up and shot your alibi dead in a parking lot.

No more “sorry, babe, I have to work late.” The bot did it in forty-five seconds. No more “I’ll spend time with the kids this weekend.” Weekends are here, and your calendar is empty. Has been for months. No more pretending that answering emails is a personality trait.

The busywork alibi is bleeding out on the floor. Now you’re standing in your kitchen at 10 AM on a Tuesday. You stare at your family as if you’re a stranger. You realize you haven’t had a real conversation with your daughter since she was in third grade.

That’s not liberation. That’s a crime scene of a different kind.

The New Black Market — Who’s Selling What

Every heist reshuffles the underground. Old rackets die. New ones open up. And in the Inverse Universe, the most valuable contraband isn’t drugs, data, or diamonds.

It’s being real.

No cap — authenticity becomes the new currency, and the black market for it is wild. Let me walk you through the new economy like I’m walking you through a crime syndicate org chart.

The Accountables — these are the bosses. Not because they’re the smartest. The bots are smarter. These are the people who sign their names. When an AI recommends a surgery, and the patient dies, somebody’s gotta face the family. When an algorithm denies a mortgage to ten thousand people, somebody’s gotta sit in front of Congress. That signature? That willingness to be the one who answers for it? That’s the most expensive thing in the new world. Accountability is the new corner office. A bot can make the call. Only a human can take the fall.

The Curators — think of them as the fences, but for meaning. When AI generates ten thousand songs a minute, someone has to review them. AI creates a million articles an hour. Infinite content emerges in every direction. Somebody’s gotta look at all of it. They must say, “This.” This one matters. Ignore the rest. That’s not an algorithm. That’s taste. And taste, in a world drowning in content, is worth more than the content itself. The curator doesn’t create the art. They create the attention. And attention, my friend, is the last scarce thing on earth.

The Present Ones — the caregivers, the teachers, the coaches, and the nurses. They are the parents who actually sit down and look their kids in the eye. These aren’t tasks. You can’t optimize a hug. You can’t automate the 3 AM conversation with your teenager who just got their heart broken. Bots can simulate empathy the way a con artist simulates love — convincingly, until it matters. The Present Ones deal in the real thing, and the real thing has a street value that keeps going up.

The Meaning Makers — mediators, coaches, community builders, and spiritual guides. They are like the bartender who knows when to talk and when to shut up. Coordination gets easier with bots. But agreement? Agreement is still a knife fight in a phone booth. Someone’s gotta walk into that booth. That’s the Meaning Makers. Conflict resolution is a growth industry because every other friction has been automated except the human kind.

The Labels

In every underground economy, provenance matters. Is this real? Is this stolen? Who touched it last?

The same thing happens in the Inverse Universe, except the labels go on everything.

“Human-Made.” That little tag is the new Gucci logo. A poem written by a person. A chair built by hand. A meal cooked by someone who learned the recipe from their grandmother, not from a dataset. It doesn’t have to be better than the AI version. It has to be real. And “real” hits different when everything else is synthetic. Like finding an actual letter in a mailbox full of spam. You hold it differently. You read it more slowly.

“Human-Verified.” This is for high-stakes matters. These include medical results, financial advice, and legal opinions. Anything can wreck your life if it’s wrong. An AI did the work. A human checked it. That human’s name is on file. It’s the difference between a street pill and a prescription from a pharmacy. Same molecule, maybe. But one comes with a receipt and a person you can call.

“Human-Accountable.” The heavy label. Someone’s neck is on the line. Criminal sentencing. Military decisions. End-of-life care. You want a bot making that call? Nah. You want a person. It’s not because they’ll get it right. It’s because they can be held responsible when they don’t. That’s the deal. That’s always been the deal.

The Two Gangs

Here’s where the story splits, and this is where it gets lowkey terrifying.

AGI removes the obstacles. It kills the busywork, frees up the time, and handles the grind. But what do you do with that freedom? That’s on you. And humanity splits into two gangs.

Gang One: The Intentionals. These are the ones who sit down at the dinner table. Who learn to cook slow meals. Who join local clubs, play sports with their neighbors, take the long walk, and have the hard conversation. They build rituals. They raise their kids with presence, not productivity metrics. They’re slower, and they know it, and they chose it. The Intentionals treat their free time like something sacred. They understand that time is the only resource AGI can’t manufacture.

Gang Two: The Numb. These are the ones who fall into the dopamine pipeline. Hyper-personalized entertainment. Synthetic companions who never disagree with you. Feeds that know your psychology better than your therapist and use it to keep you scrolling until your eyes bleed. The Numb aren’t lazy — they’re captured. The same bots that freed them have recaptured them. This is the irony that would make a crime novelist weep.

No one tells you which gang you’re joining. You just wake up one day and realize you’ve been recruited.

The dinner table is right there. It’s always been right there. The question — the only question that matters in the Inverse Universe — is whether you pull up a chair.

The Workplace After the Heist

Corporations used to be factories cosplaying as offices. Throughput. Process. KPIs. Stand-ups that made you want to lie down permanently.

Post-heist? The workplace looks like a jury room. Small. Sharp. Serious. A thin crew of humans setting goals, drawing lines, owning consequences. Behind them, a thick army of bots operates. They execute tasks, conduct analyses, and manage operations. This is everything that used to need a building full of people and a parking lot full of sadness.

Meetings get rare but heavy. No more “syncing up,” “circling back,” or whatever performative nonsense fills your calendar. Every meeting is a decision. Every decision has a name attached. You don’t go to work to do things anymore. You go to work to choose things. And choosing is challenging. Real choosing involves real stakes. The consequences land on you. It turns out to be the hardest job humans have ever had.

The org chart doesn’t look like a pyramid anymore. It looks like a courtroom. The bots are the lawyers doing research. The humans are the judges. And every ruling has weight.

School Gets Interesting (Finally)

If every kid has an AI tutor that’s infinitely patient, infinitely adaptive, available 24/7, and knows exactly how to explain long division in a way that clicks, then what’s the school building even for?

Not content delivery. That game is over. The school becomes something different. It returns to what it was intended to be before the industrial era changed it into a child-processing plant. It becomes a place where you learn how to be a person.

Emotional regulation. Conflict handling. Learning to work with people who annoy you, which, let’s be honest, is the most valuable life skill nobody teaches. Ethics. Epistemic humility, which is a fancy way of saying “learning to ask ‘how do we actually know this?’ before running your mouth.” Sports. Crafts. Performance. Stuff you can only learn with a body in a room with other bodies.

The kid who can recite a textbook? Irrelevant. The bot has the textbook memorized in every language. The kid who can sit with ambiguity, navigate a disagreement, and make a thoughtful choice under pressure? That kid runs the world.

Education stops being about filling heads and starts being about forming humans. Which is what Socrates was trying to do before we turned it all into standardized testing and anxiety disorders.

The Three Endings

Every crime novel gives you possible endings. Here are yours.

Ending One: The Garden. The bots run the infrastructure. Humans focus on relationships, craft, health, civic life, and exploration (my favorite). Inequality gets managed. Accountability norms hold. It’s quiet. It’s slow. People know their neighbors’ names. It’s not exciting, but it’s real. Picture a well-funded small town. Robots mow the lawns. Humans sit on the porch and argue about philosophy. Sounds boring. Sounds like heaven.

Ending Two: The Casino. The bots create abundance, but the attention markets eat people alive. Entertainment and persuasion become the only industries that matter. A small elite owns the bots. Everyone else rents meaning by the month, like a streaming subscription for a purpose. Think Vegas, but everywhere, and the house always wins because the house has a super-intelligence running the odds. You’re free. You’re fed. You’re entertained. And you’re absolutely, devastatingly empty.

Ending Three: The Cathedral. Strong institutions put hard limits on bot autonomy. Humans get paid to be stewards — ethics, oversight, care, governance. Progress is slower. The tech bros are mad about it. But legitimacy holds. Society moves at the speed of human deliberation, not machine computation. Something important is preserved — the sense that people are still in charge of their own story.

Most likely outcome? A messy, chaotic, beautiful, terrifying cocktail of all three. Different in every city, every country, every household. The Inverse Universe isn’t one world. It’s a million negotiations happening concurrently.

The Closing Statement

I’ll keep it short because Gen Z doesn’t do long outros. No cap.

The biggest crime of the AGI era won’t be committed by machines. It’ll be committed by humans against themselves. The crime of having all the time in the world and wasting it. The crime of being freed from the grind and choosing numbness over connection. The crime of sitting three feet from the people you love and still staring at a screen.

The machines are getting smarter. That part’s done. That part’s inevitable.

The only open case — the only mystery left — is whether we get wiser.

The bots took the jobs. They gave us back our time. What we do with it is the only verdict that matters.

No jury. No judge. Just you, the people you love, and a dinner table with empty chairs.

Sit down.

Future of Work: Adapting Roles in the Age of AI

Jobs aren’t “going away.” The easy parts of jobs are going away.

That distinction matters because it changes what you do next.

For 20+ years, every serious wave of tech change has followed the same script: we don’t remove work—we move it. We compress the routine and expand the messy human aspects: judgment, validation, trade-offs, and ownership. Economists have long argued this. Technology tends to substitute for well-defined “routine” tasks. It complements non-routine problem-solving and interaction.

Generative AI is simply the first wave that can eat a chunk of cognitive routine that we pretended was “craft.”

So yes—roles across engineering are about to be “redefined.” Software developers, tech leads, architects, testers, program managers, general managers, support engineers—basically anyone who has ever touched a backlog, a build pipeline, or a production incident—will get a fresh job description. It won’t show up as a layoff notice at first. It’ll appear as a cheerful new button labeled “Generate.” You’ll click it. It’ll work. You’ll smile. Then you’ll realize your role didn’t disappear… it just evolved into full-time responsibility for whatever that button did.

And if you’re waiting for the “AI took my job” moment… you’re watching the wrong thing. The real shift is quieter: your job is becoming more like the hardest 33% of itself.

Now let’s talk about what history tells us happens next.

The Posters-to-Plumbing Cycle

Every transformation begins as messaging and ends as infrastructure. In the beginning, it’s all posters—vision decks, slogans, townhalls, and big claims about how “everything will change.” The organization overestimates the short term because early demos look magical and people confuse possibility with readiness. Everyone projects their favorite outcome onto the new thing: engineers see speed, leaders see savings, and someone sees a “10x” slide and forgets the fine print.

Then reality walks in wearing a security badge. Hype turns into panic (quiet or loud) when the organization realizes this isn’t a trend to admire—it’s a system to operate. Questions get sharper: where does the data go, who owns mistakes, what happens in production, what will auditors ask, what’s the blast radius when this is wrong with confidence? This is when pilots start—not because pilots are inspiring, but because pilots are the corporate way of saying “we need proof before we bet the company.”

Pilots inevitably trigger resistance, and resistance is often misread as fear. In practice, it’s frequently competence. The people who live with outages, escalations, compliance, and long-tail defects have seen enough “quick wins” to know the invoice arrives later. They’re not rejecting the tool—they’re rejecting the lack of guardrails. This is the phase where transformations either mature or stall: either you build a repeatable operating model, or you remain stuck in a loop of demos, exceptions, and heroics. This is where most first-mover organizations are today!

Finally, almost without announcement, the change becomes plumbing. Standards get written, defaults get set, evaluation and review gates become normal, access controls and audit trails become routine, and “AI-assisted” stops being a special initiative and becomes the path of least resistance. That’s when the long-term impact shows up: not as fireworks, but as boredom. New hires assume this is how work has always been done, and the old way starts to feel strange. That’s why we under-estimate the long term—once it becomes plumbing, it compounds quietly and relentlessly.

The Capability–Constraint See-Saw

Every time we add a new capability, we don’t eliminate friction—we move it. When software teams got faster at shipping, the bottleneck didn’t vanish; it simply relocated into quality, reliability, and alignment. That’s why Agile mattered: not because it made teams “faster,” but because it acknowledged an ugly truth—long cycles hide misunderstanding, and misunderstanding is expensive. Short feedback loops weren’t a trendy process upgrade; they were a survival mechanism against late-stage surprises and expectation drift.

Then speed created its own boomerang. Shipping faster without operational maturity doesn’t produce progress—it produces faster failure. So reliability became the constraint, and the industry responded by professionalizing operations into an engineering discipline. SRE-style thinking emerged because organizations discovered a predictable trap: if operational work consumes everyone, engineering becomes a ticket factory with a fancy logo. The move wasn’t “do more ops,” it was “cap the chaos”—protect engineering time, reduce toil, and treat reliability as a first-class product of the system.

AI is the same cycle on fast-forward. Right now, many teams are trying to automate the entire SDLC like it’s a one-click migration, repeating the classic waterfall fantasy: “we can predict correctness upfront.” But AI doesn’t remove uncertainty—it accelerates it. The realistic path is the one we learned the hard way: build an interim state quickly, validate assumptions early, and iterate ruthlessly. AI doesn’t remove iteration. It weaponizes iteration—meaning you’ll either use that speed to learn faster, or you’ll use it to ship mistakes faster.

Power Tools Need Seatbelts

When tooling becomes truly powerful, the organization doesn’t just need new skills—it needs new guardrails. Otherwise the tool optimizes for the wrong thing, and it does so at machine speed. This is the uncomfortable truth: capability is not the same as control. A powerful tool without constraints doesn’t merely “help you go faster.” It helps you go faster in whatever direction your incentives point—even if that direction is off a cliff.

This is exactly where “agentic AI” gets misunderstood. Most agent systems today aren’t magical beings with intent; they’re architectures that call a model repeatedly, stitch outputs together with a bit of planning, memory, and tool use, and keep looping until something looks like progress. That loop can feel intelligent because it keeps moving, but it’s also why costs balloon. You’re not paying for one answer; you’re paying for many steps, retries, tool calls, and revisions—often to arrive at something that looks polished long before it’s actually correct.
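That loop can be made concrete in a few lines. The sketch below is illustrative only: `call_model` and `run_tool` are hypothetical stand-ins, not any real framework’s API. The point it demonstrates is that every iteration is another paid model call, which is exactly why costs balloon.

```python
# Sketch of the agent loop described above: plan, act, observe, repeat.
# `call_model` and `run_tool` are hypothetical stand-ins for an LLM call
# and a tool executor; a real system would wire in actual services here.
from dataclasses import dataclass, field

@dataclass
class AgentRun:
    goal: str
    max_steps: int = 8          # bound the loop: this is the seatbelt
    memory: list = field(default_factory=list)
    steps_used: int = 0         # meter the spend, step by step

def call_model(goal, memory):
    # Stand-in for an LLM call. Fakes a plan that "finishes" after the
    # memory holds three observations, so the sketch is runnable.
    if len(memory) >= 3:
        return {"action": "finish", "answer": f"done: {goal}"}
    return {"action": "tool", "tool": "search", "input": goal}

def run_tool(name, tool_input):
    # Stand-in tool executor.
    return f"{name} result for {tool_input!r}"

def run_agent(run: AgentRun):
    # Each iteration = one model call plus possibly one tool call:
    # the "many steps, retries, tool calls, and revisions" in the text.
    for _ in range(run.max_steps):
        run.steps_used += 1
        decision = call_model(run.goal, run.memory)
        if decision["action"] == "finish":
            return decision["answer"]
        observation = run_tool(decision["tool"], decision["input"])
        run.memory.append(observation)  # "memory" is accumulated context
    return None  # hit the step budget without finishing

run = AgentRun(goal="summarize Q3 incidents")
result = run_agent(run)
```

Bounding `max_steps` and counting `steps_used` are the cheapest guardrails available: without them, the loop keeps spending until something merely looks like progress.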

Then CFO reality arrives, and the industry does what it always does: it tries to reduce cost and increase value. The shiny phase gives way to the mature phase. Open-ended “agents that can do anything” slowly get replaced by bounded agents that do one job well. Smaller models get used where they’re good enough. Evaluation gates become mandatory, not optional. Fewer expensive exploratory runs, more repeatable workflows. This isn’t anti-innovation—it’s the moment the tool stops being a demo and becomes an operating model.

And that’s when jobs actually change in a real, grounded way. Testing doesn’t vanish; it hardens into evaluation engineering. When AI-assisted changes can ship daily, the old test plan becomes a liability because it can’t keep up with the velocity of mistakes. The valuable tester becomes the person who builds systems that detect wrongness early—acceptance criteria that can’t be gamed, regression suites that catch silent breakage, adversarial test cases that expose confident nonsense. In this world, “this output looks convincing—and it’s wrong” becomes a core professional skill, not an occasional observation.
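One concrete shape “evaluation engineering” can take is a deterministic gate that runs before an AI-assisted change ships. This is a minimal sketch under assumed acceptance criteria; the check functions and sample outputs are hypothetical illustrations, not any real team’s pipeline:

```python
# Minimal sketch of an evaluation gate: deterministic checks that run
# against a model-generated output before it ships. The criteria and
# sample outputs below are invented for illustration.

def check_no_placeholder_text(output: str) -> bool:
    # Confident nonsense often leaves template residue behind.
    return not any(tok in output.lower() for tok in ("todo", "lorem", "xxx"))

def check_required_sections(output: str) -> bool:
    # Acceptance criteria phrased so fluent prose alone can't game them.
    return all(header in output for header in ("Summary:", "Risk:"))

def evaluation_gate(output: str) -> tuple[bool, list[str]]:
    checks = {
        "no_placeholder_text": check_no_placeholder_text,
        "required_sections": check_required_sections,
    }
    failures = [name for name, fn in checks.items() if not fn(output)]
    return (not failures, failures)

ok, failed = evaluation_gate("Summary: patch fixes retry bug.\nRisk: low.")
bad_ok, bad_failed = evaluation_gate("Looks great! TODO: fill in details.")
```

The gate’s value is precisely that a convincing-sounding output can still fail it: the second sample reads confidently and fails both checks, which is the “this output looks convincing — and it’s wrong” skill encoded as a system.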

Architecture and leadership sharpen in the same way. When a model can generate a service in minutes, the architect’s job stops being diagram production and becomes trade-off governance: cost curves, failure modes, data boundaries, compliance posture, traceability, and what happens when the model is confidently incorrect.

Tech leads shift from decomposing work for humans to decomposing work for a mixed workforce—humans, copilots, and bounded agents—deciding what must be deterministic, what can be probabilistic, what needs review, and where the quality bar is non-negotiable.

Managers, meanwhile, become change agents on steroids, because incentives get weaponized: measure activity and you’ll get performative output; measure AI-generated PRs and you’ll get risk packaged as productivity. And hovering over all of this is the quiet risk people minimize until it bites: sycophancy—the tendency of systems to agree to be liked—because “the customer asked for it” is not the same as “it’s correct,” and “it sounds right” is not the same as “it’s safe.”

The Judgment Premium

Every leap in automation makes wine cheaper to produce—but it makes palate and restraint more valuable. When a giant wine producer can turn out consistent bottles at massive scale, the scarcity shifts away from “can you make wine” to “can you make great wine on purpose.” That’s why certain producers and tasters become disproportionately important: a winemaker who knows when not to push extraction, or a critic like Robert Parker who can reliably separate “flashy and loud” from “balanced and lasting.” Output is abundant; discernment is the premium product.

And automation doesn’t just scale production—it scales mistakes with terrifying efficiency. If you let speed run the show (rush fermentation decisions, shortcut blending trials, bottle too early, “ship it, we’ll fix it in the next vintage”), you don’t get a small defect—you get 10,000 bottles of regret with matching labels. The cost of ungoverned speed shows up as oxidation, volatility, cork issues, brand damage, and the nightmare scenario: the market learning your wine is “fine” until it isn’t. The best estates aren’t famous because they can produce; they’re famous because they can judge precisely, slow down at the right moments, and refuse shortcuts even when the schedule (and ego) screams for them.

Bottom Line

Jobs aren’t going away. They’re being redefined into what’s hardest to automate: problem framing, constraint setting, verification, risk trade-offs, and ownership. Routine output gets cheaper. Accountability gets more expensive. The winners won’t be the people who “use AI.” The winners will be the people who can use AI without turning engineering into confident nonsense at scale.

AI will not replace engineers. It will replace engineers who refuse to evolve from “doing” into “designing the system that does.”

V’s of Data

Volume, Velocity, Variety, Veracity, Value, Variability, Visibility, Visualization, Volatility, Viability

What are the 3C’s of Leadership? “Competence, Commitment, and Character,” said the wise.

What are the 3C’s of Thinking? “Critical, Creative, and Collaborative,” said the wise.

What are the 3C’s of Marketing? “Customer, Competitors, and Company,” said the wise.

What are the 3C’s of Managing Team Performance? “Cultivate, Calibrate, and Celebrate,” said the wise.

What are the 3C’s of Data? “Consistency, Correctness, and Completeness,” said the wise; “Clean, Current, and Compliant,” said the more intelligent; “Clear, Complete, and Connected,” said the smartest.

“Depends,” said the Architect. Technologists describe data properties in the context of use. Gartner coined the 3V’s – Volume, Velocity, and Variety – to create hype around BIG Data. These V’s have grown in volume 🙂

  • 5V’s: Volume, Velocity, Variety, Veracity, and Value
  • 7V’s: Volume, Velocity, Variety, Veracity, Value, Visualization, and Visibility

This ‘V’ model seems like blind men describing an elephant. A humble engineer uses better words to describe data properties.

Volume: Multi-Dimensional, Size

“Volume” is typically understood as a three-dimensional quantity. Data, however, is multi-dimensional and stored as bytes—a disk volume stores data of all sizes. Data does not have volume! It has dimensions and size.

A person’s record may include age, weight, height, eye color, and other dimensions. The size of the record may be 24 bytes. When a BILLION person records are stored, the size is 24 BILLION bytes.
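The arithmetic above can be checked in a few lines. The field widths below are illustrative assumptions, chosen so that one fixed-width person record packs into exactly 24 bytes.

```python
import struct

# Illustrative fixed-width person record (field widths are assumptions):
# age: 4-byte int, weight: 8-byte float, height: 8-byte float,
# eye_color: 4-byte code. "<" disables alignment padding.
RECORD_FORMAT = "<iddi"
record_size = struct.calcsize(RECORD_FORMAT)
print(record_size)  # 24 bytes per record

billion = 1_000_000_000
print(record_size * billion)  # 24000000000 bytes for a billion records
```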

Velocity: Speed, Motion

Engineers understand the term velocity as a vector and not a scalar.

A heart rate monitor may generate data at different rates, e.g., 82 beats per minute. I can’t say my heart rate is 82 beats per minute to the northwest; hence, heart rate is a speed, not a velocity. I can say that a car is traveling 35 kilometers per hour to the northwest. The velocity of the vehicle is 35 km/h NW.

Data does not have direction; hence it does not have velocity. Data in motion has speed.
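The scalar-versus-vector distinction can be shown directly: only a vector quantity can be decomposed into directional components. The class and field names below are illustrative.

```python
import math
from dataclasses import dataclass

@dataclass
class Velocity:
    """A vector quantity: magnitude (speed) plus direction (bearing)."""
    speed_kmph: float
    bearing_deg: float  # compass bearing; 315 degrees is roughly northwest

    def components(self) -> tuple:
        """Decompose into (east, north) components; only a vector can do this."""
        rad = math.radians(self.bearing_deg)
        return (self.speed_kmph * math.sin(rad), self.speed_kmph * math.cos(rad))

car = Velocity(35.0, 315.0)   # 35 km/h to the northwest: a velocity
heart_rate_bpm = 82.0         # a scalar rate; there is no direction to decompose
east, north = car.components()
# The car moves west (negative east component) and north at the same time;
# "82 bpm to the northwest" has no such decomposition.
```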

Variety: Heterogeneity

The word variety is used to describe differences in an object type, e.g., egg curry varieties, pancake varieties, sofa varieties, TV varieties, image data format varieties (jpg, jpeg, bmp), and data structure varieties (structured, unstructured, semi-structured). Data variety is abstract and is a marketecture term.

Heterogeneity is preferred because it explicitly states that:

  1. Data has types (E.g., String, Integer, Float, Boolean)
  2. Composite types are created by composing other data types (E.g., A Person Type)
  3. Composite types could be structured, unstructured, or semi-structured (E.g., A Person Type is semi-structured as the person’s address is a String type)
  4. Collections contain the same or different data types.
  5. Types, Composition, and Collections apply to all data (BIG or not).
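A short sketch of points 1–4, assuming a hypothetical Person composite type; names and values are made up for illustration.

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical composite type (point 2) built from primitive types (point 1).
@dataclass
class Person:
    name: str          # String
    age: int           # Integer
    weight_kg: float   # Float
    active: bool       # Boolean
    address: str       # free-text address makes the record semi-structured (point 3)

# Collections may hold the same type or a mix of types (point 4).
homogeneous: list = [1, 2, 3]
heterogeneous: list = [
    1,
    "two",
    Person("Asha", 34, 61.5, True, "12 MG Road, Bengaluru"),
]
```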

Veracity: Lineage, Provenance

Veracity means accuracy, precision, and truthfulness.

Let’s say that a weighing scale reports the weight of a person as 81.5 KG. Is this accurate? Is the weighing scale calibrated? If the same person measures her weight on another weighing scale, the reported weight might be 81.45 KG. The truth may be 81.455 KG.

Data represents facts, and when new facts are available, the truth may change. Data itself cannot be truthful; it is just facts. Meaning or truthfulness is derived from facts using a method.

Lineage and provenance metadata enables engineers to decorate a fact with other useful facts:
  1. Primary source of the data
  2. Users or systems that contributed to the data
  3. Date and time of data collection
  4. Data creation method
  5. Data collection method
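A hypothetical provenance envelope carrying the five kinds of metadata listed above; all field names and values are illustrative, not a standard schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# One fact decorated with lineage/provenance metadata.
@dataclass
class ProvenancedFact:
    value: float              # the fact itself, e.g. a weight in KG
    primary_source: str       # 1. primary source of the data
    contributors: list        # 2. users or systems that contributed
    collected_at: datetime    # 3. date and time of data collection
    creation_method: str      # 4. data creation method
    collection_method: str    # 5. data collection method

weight = ProvenancedFact(
    value=81.5,
    primary_source="bathroom-scale-A17",
    contributors=["scale-firmware-2.3", "health-app-sync"],
    collected_at=datetime(2026, 1, 15, 7, 30, tzinfo=timezone.utc),
    creation_method="load-cell measurement",
    collection_method="bluetooth sync",
)
```

With this envelope, the question “is 81.5 KG accurate?” becomes answerable: check which scale produced it, when, and how.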

Value: Useful

If Data is a bunch of facts, how can it be valuable? The information generated by analyzing those facts is what carries the value. Data (facts) can either be useful in creating valuable information or useless and discarded. We associate a cost with a brick and a value with a house. Data is like the bricks used to build valuable information and knowledge.
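The brick-and-house analogy can be sketched in code: raw readings are the facts, and value appears only after analysis. The readings and the fever threshold below are invented for illustration.

```python
# Facts are cheap; the information derived from them carries the value.
readings = [36.6, 36.7, 38.9, 36.5, 39.1, 36.6]  # facts: raw temperatures (C)

fever_episodes = [t for t in readings if t >= 38.0]   # filtering
average = sum(readings) / len(readings)               # aggregation
information = {
    "fever_detected": len(fever_episodes) > 0,
    "mean_temp_c": round(average, 2),
}
print(information)  # the valuable "house" built from fact "bricks"
```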

Summary

I did not go into every V, but you get the drill. If an interviewer asks you about the 5V’s, give the standard marketecture answer for everyone’s sanity. The engineer’s vocabulary is not universal; technical journals publish articles in the sales/marketing vocabulary. As engineers and architects, we have to remember the fundamental descriptive properties of data so that the marketecture vocabulary does not fool us, while staying fluent in that vocabulary and internally consistent with engineering principles.

It’s not a surprise that Gartner invented the hype cycle.

Data about Data

As a Data Engineer, I want to be able to understand the data vocabulary, so that I can communicate about the data more meaningfully and find tools to deal with the data for computing.

Let’s start with this: Binary Data, Non-binary Data, Structured Data, Unstructured Data, Semi-structured Data, Panel Data, Image Data, Text Data, Audio Data, Categorical Data, Discrete Data, Continuous Data, Ordinal Data, Numerical Data, Nominal Data, Interval Data, Sequence Data, Time-series Data, Data Transformation, Data Extraction, Data Load, High Volume Data, High Velocity Data, Streaming Data, Batch Data, Data Variety, Data Veracity, Data Value, Data Trends, Data Seasonality, Data Correlation, Data Noise, Data Indexes, Data Schema, BIG Data, JSON Data, Document Data, Relational Data, Graph Data, Spatial Data, Multi-dimensional Data, BLOCK Data, Clean Data, Dirty Data, Data Augmentation, Data Imputation, Data Model, Object (Blob) Data, Key-value Data, Data Mapping, Data Filtering, Data Aggregation, Data Lake, Data Mart, Data Warehouse, Database, Data Lakehouse, Data Quality, Data Catalog, Data Source, Data Sink, Data Masking, Data Privacy

Now let’s go here: High volume time-series unstructured image data, High velocity semi-structured data with trends and seasonality without correlation, High volume Image data with Pexels Data source masked and stored in Data Lake as the Data Sink.

The vocabulary is daunting for a beginner. These 10 categories (ways of bucketizing) would be a good place to start:

  1. Data Representation for Computing: How is Data Represented in a Computer?
    • Binary Data, Non-binary Data
  2. Data Structure & Semantics: How well is the data structured?
    • Structured Data, Unstructured Data, Semi-structured Data
    • Sequence Data, Time-series Data
    • Panel Data
    • Image Data, Text Data, Audio Data
  3. Data Measurement Scale: How can data be reasoned with and measured?
    • Categorical Data, Nominal Data, Ordinal Data
    • Discrete Data, Interval Data, Numerical Data, Continuous Data
  4. Data Processing: How is the data processed?
    • Streaming Data, Batch Data
    • Data Filtering, Data Mapping, Data Aggregation
    • Clean Data, Dirty Data
    • Data Transformation, Data Extraction, Data Load
    • Data Augmentation, Data Imputation
  5. Data Attributes: How can data be broadly characterized?
    • Velocity, Volume, Veracity, Value, Variety
  6. Data Patterns: What are the patterns found in data?
    • Time-series Data Patterns: Trends, Seasonality, Correlation, Noise
  7. Data Relations: What are the relationships within data?
    • Relational Data, Graph Data, Document Data (Key-value Data, JSON Data)
    • Multi-dimensional Data, Spatial Data
  8. Data Storage Types:
    • Block Data, Object (Blob) Data
  9. Data Management Systems:
    • Filesystem, Database, Data Lake, Data Mart, Data Warehouse, Data Lakehouse
    • Data Indexes
  10. Data Governance, Security, Privacy:
    • Data Catalog, Data Quality, Data Schema, Data Model
    • Data Masking, Data Privacy
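As a taste of category 4 (Data Processing), here is a toy batch pipeline showing filtering, mapping, and aggregation over a few dirty records; the records themselves are invented.

```python
# A batch of sensor records; one is dirty (missing value).
raw_batch = [
    {"sensor": "s1", "temp_c": 21.5},
    {"sensor": "s2", "temp_c": None},   # dirty record
    {"sensor": "s1", "temp_c": 22.1},
    {"sensor": "s3", "temp_c": 19.8},
]

# Filtering: drop dirty records.
clean = [r for r in raw_batch if r["temp_c"] is not None]

# Mapping: derive a Fahrenheit field for each clean record.
fahrenheit = [{**r, "temp_f": r["temp_c"] * 9 / 5 + 32} for r in clean]

# Aggregation: summarize the batch into one number.
avg_c = sum(r["temp_c"] for r in clean) / len(clean)
print(round(avg_c, 2))  # 21.13
```

The same three operations reappear in every data stack, whether the batch is four dictionaries or four billion rows in a warehouse.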

More blogs will follow to deep-dive into each category and the challenges involved. Let’s peel this onion.