Wait, what is agentic AI?

Is “agentic AI” just a buzzword, or is it the sea change it seems?

Image credit: Alexandra Francis

Developers are hearing a lot about agentic AI lately, that’s for certain. What’s less certain is what the term actually means and whether it’s A Thing you need to learn about. Is “agentic AI” just a buzzword, or is it the sea change it seems?

There’s no denying that “agentic AI” is a buzz-worthy term. But there is a "there" there. In this post, we’ll give you a big-picture overview of what AI agents are, what they can do, and how to think about them. This is a good place to start if you’re new to the notion of autonomous AI agents.

What are AI agents?

When we talk about AI agents or agentic AI, we’re not all talking about the same thing. As a recent TechCrunch piece by Maxwell Zeff and Kyle Wiggers put it, “No one knows what the hell an AI agent is.” Or, at least, not everyone can agree on a definition for what, exactly, an AI agent is or does, even as the industry grows increasingly bullish on the concept. The TechCrunch piece surveys the way titans like OpenAI, Anthropic, Google, and Microsoft describe agentic AI differently, highlighting the term’s plasticity. Much like the term “AI” itself, agentic AI is still coming into focus.

Caveats aside, agentic AI refers to autonomous AI systems that make decisions to achieve specific goals with minimal need for human oversight and intervention. Simply put: Generative AI creates content; agentic AI solves problems on a user’s behalf.

“The agentic AI system understands what the goal or vision of the user is and the context to the problem they are trying to solve,” AI expert Enver Cetin told the Harvard Business Review. The HBR article teases out three key differences between agentic AI and the generative AI systems we’re by now familiar with:

  1. Agentic AI is focused on making decisions, not generating content.
  2. Rather than responding to human prompts like generative AI systems do, agentic AI systems are set to work toward specific goals like maximizing efficiency, boosting customer satisfaction, or increasing sales.
  3. AI agents can carry out complex sequences of tasks in furtherance of their goals.

To bring this concept down to earth, here’s a high-level example of agentic AI we heard from the stage at TDX last month:

Say you’re in a fender bender. Instead of calling roadside assistance to kick off the long, involved process of getting your car towed and repaired, you ping an AI agent that can call emergency services if necessary, contact a towing service, help you document damage to your car, surface the relevant insurance information (like the amount of your deductible and whether your policy covers a rental car), make a list of the best-reviewed body shops in your area and request estimates from them, and coordinate with the insurance company (or the insurance company’s autonomous agents!) throughout the whole process.

The difference between that set of interconnected actions, all geared toward a specific goal, and the content a generative AI system produces when prompted with a question about a car accident tells you a lot about the functionality gap between agentic and generative AI. You tell the agent where you want to go, and it gets you there. If generative AI gives you a map, agentic AI picks you up and takes you there.

Good advice from a generative AI system, but less helpful than actually doing some of these things.
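
To make that gap a little more concrete, here’s a minimal sketch of the loop at the heart of most agentic systems: the model chooses the next action, the system executes it, and the result feeds back into the next decision. Everything below is illustrative; the tool functions and the decide_next_step stub are hypothetical stand-ins for a real model call and real integrations, not any particular framework’s API.

    # A minimal agentic loop, sketched for the fender-bender scenario above.
    # The "tools" and the decision stub are hypothetical placeholders.

    def contact_towing_service(location: str) -> str:
        # A real agent would call a dispatch API here; this stub just reports back.
        return f"Tow truck dispatched to {location}."

    def get_policy_info(policy_id: str) -> str:
        return f"Policy {policy_id}: $500 deductible, rental car covered."

    def find_body_shops(location: str) -> str:
        return f"Top-reviewed shops near {location}: Shop A, Shop B, Shop C."

    TOOLS = {
        "contact_towing_service": contact_towing_service,
        "get_policy_info": get_policy_info,
        "find_body_shops": find_body_shops,
    }

    def decide_next_step(goal: str, history: list[str]) -> dict:
        # Placeholder for the model call. A real agent would send the goal plus
        # the observations so far to an LLM and parse its chosen action; this
        # stub walks a fixed plan so the sketch runs end to end.
        plan = [
            {"tool": "contact_towing_service", "arg": "5th and Main"},
            {"tool": "get_policy_info", "arg": "POL-1234"},
            {"tool": "find_body_shops", "arg": "5th and Main"},
            {"tool": None, "arg": None},  # None signals the goal is satisfied
        ]
        return plan[len(history)]

    def run_agent(goal: str) -> list[str]:
        history: list[str] = []
        while True:
            step = decide_next_step(goal, history)
            if step["tool"] is None:
                return history
            observation = TOOLS[step["tool"]](step["arg"])
            history.append(observation)  # feed the result into the next decision

    print(run_agent("Handle the aftermath of a fender bender"))

A generative AI system would stop after producing the checklist; the loop is what turns the checklist into actions, and it’s where the hard engineering questions (permissions, error handling, when to hand control back to a human) live.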

An exponential step forward

Because of their capacity to orchestrate complex sequences of events and apply cognitive reasoning in furtherance of their goals, agentic AI systems give users the opportunity to automate workflows they couldn’t automate before. The post-crash sequence outlined above is just one example.

Tools that make it easier for non-technical people to create software are nothing new, of course. Low-code and no-code tools have existed for decades. Generative AI-powered coding tools like Copilot have allowed non-programmers to write a little code that increases their efficiency by automating time-consuming and repetitive tasks. But agentic AI systems represent an exponential step forward in terms of how much they empower people without coding experience to orchestrate complicated, multi-step processes. And, crucially, they allow people with coding experience to do a whole lot more, too.

AI agents are generally better than generative AI models at organizing, surfacing, and evaluating data. In theory, this makes them less prone to hallucinations. From the HBR article: “The greater cognitive reasoning of agentic AI systems means that they are less likely to suffer from the so-called hallucinations (or invented information) common to generative AI systems. Agentic AI systems also have [a] significantly greater ability to sift and differentiate information sources for quality and reliability, increasing the degree of trust in their decisions.”

Despite widespread adoption, developers’ lack of trust in the output of generative AI systems persists. If AI agents are less prone to hallucinations, could they increase the degree of faith developers are willing to place in AI?

How are people using AI agents?

From conducting deep research into prospective customers in advance of sales calls, to making recommendations to improve process efficiency, to providing risk indicators across financial systems, AI agents unlock a world of potential use cases that are not just developer-centric. One company is leveraging agentic AI to help healthcare providers get paid for their work with less back-and-forth with insurance companies.

Another set of use cases for agentic AI centers on the rote-yet-crucial work developers do: testing and reviewing code, writing pull requests, error handling, helpdesk operations, threat scanning and security monitoring, and more. Agentic AI is poised to transform developer workflows by automating more of these processes, turning software development into a collaborative process in which the AI agent executes against the goal and constraints specified by the human user.
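
As a sketch of what that collaboration might look like, the hypothetical TaskSpec below captures the division of labor: the human specifies the goal and the guardrails, and a harness enforces them on whatever the agent proposes. The names and fields here are made up for illustration; real agent frameworks expose their own equivalents.

    # A hypothetical goal-and-constraints spec a developer might hand to an agent.
    from dataclasses import dataclass

    @dataclass
    class TaskSpec:
        goal: str                          # what the agent should accomplish
        allowed_paths: list[str]           # where it may make changes
        max_iterations: int = 5            # hard stop on the agent's loop
        require_human_review: bool = True  # gate merges behind a person

    def is_change_allowed(spec: TaskSpec, path: str) -> bool:
        # The agent proposes an edit; the harness enforces the human's constraints.
        return any(path.startswith(prefix) for prefix in spec.allowed_paths)

    spec = TaskSpec(
        goal="Raise unit test coverage of the billing module to 90%",
        allowed_paths=["tests/"],
    )

    print(is_change_allowed(spec, "tests/test_billing.py"))  # True: in scope
    print(is_change_allowed(spec, "src/billing.py"))         # False: out of scope

The constraint check is trivial here, but it illustrates the point: the agent gets autonomy inside boundaries the human sets up front.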

On the Stack Overflow podcast, we interviewed an engineer at Diffblue about how they’re using agentic AI to test complex code at scale. AWS told us how they saved 4,500 years of developer time upgrading Java across their humongous codebase (more than 30,000 packages). We talked with the cofounder and CEO of a startup building AI agents to review code and write pull requests. And an IT director and Salesforce architect we met at TDX is using AI agents to build an error handling system for his small org.

From our point of view, the AI agent doesn’t replace developers. It frees them to focus on what’s often referred to as “higher-order” tasks: creative, strategic, innovative, or architectural work. It allows them to zero in on the aspects of their work they’re most passionate about or where they can have the biggest impact. The best agentic AI solutions will give developers time and energy back while helping them learn new tools and technologies, just as their preferred generative AI tools have done.

Our POV: Take a deep breath

Stack Overflow has been a tried-and-true developer resource for more than 15 years, so we’ve seen technologies rise and fall, trends come and go. Agentic AI is a paradigm shift on the order of the emergence of LLMs or the shift to SaaS. That is to say, it’s a real thing, but we’re not close to understanding exactly how it will change the way we live and work just yet.

The adoption curve for agentic AI will have its challenges. There are questions wherever you look: How do you put AI agents into production? How do you test and validate code generated by autonomous agents? How do you deal with security and compliance? What are the ethical implications of relying on AI agents? As we all navigate the adoption curve, we’ll do our best to help our community answer these questions. (After all, answering questions is our whole thing.) Building agents might quickly become easier, but solving for these downstream impacts is still a work in progress.

Sometimes when a new approach or a new piece of tech breaks onto the scene, we become fixated on the hype around that tech instead of focusing on its potential to solve actual problems. Rather than rushing to adopt agentic AI ASAP, our recommendation is that developers (and their managers) take a deep breath and ask themselves: What problems can I solve for my workflows, team, or employer with AI agents that I can’t solve without them? A clear answer to that question is the starting point for deciding whether agent adoption can benefit you, and where it will take technology next.
