Building AI in an Agentic AI World
How do we keep up with the ever-changing AI landscape and what does this mean for product development in the near future?
We’re midway through 2025 and so much has changed since the end of 2024. AI has evolved beyond passive assistance to become proactive agents capable of making independent decisions, quantum computing is transitioning from theoretical research to practical applications, cybersecurity has become increasingly complex with the rise of AI-driven attacks, 5G-Advanced networks are rolling out, and robotics technology is advancing rapidly (e.g. China’s integration of robotics into daily life).
As a PM, it can sometimes feel hard to keep up with changes that are moving five steps ahead, and it’s easy to feel like we’re always one step behind.
Thus, I wanted to start spending some time writing again, to try to keep up with the ever-changing landscape of tech, and to learn something — together.
In today’s article, I wanted to dive deeper into AI, and what that means when we talk about the new reality of Agentic AI. What is Agentic AI and how does it tie to the general principles of AI? How will Agentic AI reshape product development? How should we as PMs evolve for this new reality?
Let’s dive deeper.
The AI Evolution
AI research and development is not a new concept; there are decades of work behind today’s systems.
Traditional AI: These systems excel at tasks requiring logical reasoning and pattern recognition with structured data, but are limited by their lack of autonomy and inability to adapt dynamically to new or unpredictable situations (source).
Machine Learning (ML): This was a crucial advancement, where algorithms automatically learn and identify patterns from data. ML is widely applied in areas like recommendation systems and predictive analytics. There are three primary approaches (source), illustrated in the short sketch after this overview:
Supervised learning (training on labeled data)
Unsupervised learning (discovering patterns in unlabeled data)
Reinforcement learning (learning through continuous interaction and feedback)
Deep Learning (DL): This is an evolution of ML that employs artificial neural networks with multiple layers, mimicking the human brain’s information processing. These networks process data through multiple hidden layers to detect intricate patterns, which is why they excel at complex tasks such as image recognition and natural language processing (NLP). (source)
Generative AI (Gen AI): This was all the rage in the past couple of years, focusing primarily on the creation of relevant content like text, images, or music by learning patterns from vast datasets. Models like GPT or DALL-E are adept at producing human-like outputs, making them ideal for creative tasks and conversational interfaces. However, their scope is largely reactive, responding to prompts without autonomous decision-making or goal-directed behaviour (source).
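If you’re curious what the ML approaches above look like in practice, here’s a minimal Python sketch (my own illustration, not from the sources above) contrasting supervised and unsupervised learning on scikit-learn’s toy iris dataset. Reinforcement learning needs an interactive environment, so it’s left out here.

```python
# A minimal illustration of supervised vs. unsupervised learning
# using scikit-learn's bundled iris dataset (for illustration only).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised learning: learn a mapping from features to known labels.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: discover structure (clusters) without labels.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Cluster sizes:", [int((kmeans.labels_ == i).sum()) for i in range(3)])
```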
What is Agentic AI?
As we look at the evolution of AI, we can see that AI systems have primarily served as assistants, executing predefined rules or generating content based on human prompts. Agentic AI represents a transition from reactive assistance to proactive, independent action. Agentic AI perceives environments, reasons through complex problems, formulates plans, and executes decisions autonomously to achieve specific goals (source).
Here are the key characteristics:
Autonomy: The ability to operate independently, initiating decisions and actions without constant human supervision. (source)
Goal-oriented: The ability to take broad business objectives and independently determine how to achieve them by analyzing constraints, evaluating trade-offs, and initiating corrective actions. (source)
Dynamic learning: They continuously learn from experiences, environmental feedback, and new data inputs, and adjust their behaviour in real time to optimize outcomes. (source)
Multistep task execution: They can identify a problem, gather inputs from other systems or agents, decide on a solution, and follow through to resolution. (source)
Collaborative orchestration: Agentic systems are designed to work together, with multiple specialized agents contributing to a shared outcome without direct human coordination. (source)
Contextual understanding: The ability to interpret information based on the surrounding context, user history, and even emotional cues, rather than just isolated words. (source)
Pretty cool, huh? Agentic AI seems almost human-like, with a trajectory similar to that of a young adult.
Agentic AI operates through a sophisticated cycle of perception, reasoning, planning, and execution, constantly refining its actions based on feedback.
Let’s first talk about what Large Language Models (LLMs) are, because this is a foundational concept of AI. LLMs often serve as the central “brain” or reasoning engine. They are a sophisticated subset of both ML and NLP that utilizes deep learning to understand and generate human-like text. Usually, they are trained on immense volumes of text data, which allows them to learn intricate language patterns, develop reasoning abilities, and understand complex content.
Now, in the context of Agentic AI, LLMs interpret instructions, identify key components, and orchestrate decision-making, breaking down complex tasks into smaller, manageable parts. They can:
Process and generate human-like text, enabling intuitive natural language interaction and generating responses based on nuanced, context-dependent understanding
Direct the behaviour of multiple specialized AI models or agents and orchestrate decision-making.
Be highly creative in tasks like content generation, code completion, and summarization.
(source)
However, since they are trained on vast datasets, they can sometimes “hallucinate” or generate false information. This is where Retrieval-Augmented Generation (RAG) comes in, allowing the AI system to access and retrieve relevant information from authoritative, proprietary data sources in real time before generating a response.
RAG grounds AI agents in factual, authoritative data sources at runtime. So when an AI agent needs to answer a question or make a decision, it first fetches relevant documents or facts from a knowledge base to support its output. By providing the LLM with specific, up-to-date information from a trusted source, RAG significantly improves the accuracy and reliability of the generated responses, reducing the likelihood of hallucinations. (source)
For example, in HR operations, an Agentic AI might use RAG to retrieve specific internal policies or procedures to answer an employee’s query while ensuring the response is accurate and contextually relevant. (source)
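To make that retrieval step a bit more concrete, here’s a minimal, hand-rolled sketch of the idea (the policy snippets and the `build_grounded_prompt` helper are purely hypothetical, not a real product’s API): fetch the most relevant internal documents first, then hand them to the LLM as context.

```python
# Minimal RAG-style sketch: retrieve relevant snippets, then ground the prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical internal knowledge base (e.g., HR policy snippets).
documents = [
    "Employees accrue 1.5 vacation days per month of service.",
    "Remote work requests must be approved by a direct manager.",
    "Expense reports are reimbursed within 10 business days.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (TF-IDF + cosine)."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents + [query])
    scores = cosine_similarity(doc_vectors[-1], doc_vectors[:-1]).ravel()
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def build_grounded_prompt(query: str) -> str:
    """Ground the LLM in retrieved facts before it generates an answer."""
    context = "\n".join(retrieve(query))
    return f"Answer using ONLY the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

print(build_grounded_prompt("How many vacation days do I earn each month?"))
# The resulting prompt would then be sent to whichever LLM you use.
```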
Based on its reasoning, the agent creates a multi-step action plan with subtasks. It utilizes predefined action schemas and goal-oriented planning, considering dependencies and constraints to ensure a logical and effective flow of actions.
The agent then takes direct action with minimal human involvement, interacting with connected applications and systems to carry out the planned steps. It actively monitors progress by observing success and failure signals, and adjusts its behaviour based on real-time conditions and outcomes.
Throughout its operation, the agent records every action, decision point, and outcome in structured logs. These logs are crucial for human review and enable continuous system improvements through supervised learning.
(source)
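If it helps to picture that plan, execute, monitor, and log cycle, here’s a very rough Python sketch. Everything in it (the `plan` and `execute_step` stubs, the retry logic) is an illustrative assumption on my part, not a real agent framework.

```python
import json
import time

def plan(goal: str) -> list[str]:
    """In a real system an LLM would decompose the goal; here it's stubbed."""
    return ["gather inputs", "decide on a solution", "apply the solution"]

def execute_step(step: str) -> bool:
    """Stub for calling a connected tool or system; returns a success signal."""
    return True

def run_agent(goal: str) -> list[dict]:
    log = []  # structured log of every action, decision point, and outcome
    for step in plan(goal):
        success = execute_step(step)
        log.append({"ts": time.time(), "step": step, "success": success})
        if not success:
            # Adjust behaviour based on the observed failure (e.g., retry once).
            log.append({"ts": time.time(), "step": f"retry: {step}",
                        "success": execute_step(step)})
    return log

print(json.dumps(run_agent("resolve a failed payment"), indent=2))
```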
The defining element of Agentic AI is its inherent agency.
Agentic AI possesses an internal drive and the capacity to proactively initiate tasks and decisions. Here’s a good analogy I learned from a coworker a year ago: if traditional AI is instructed to “get strawberries”, it will only perform that specific task. An Agentic AI, given the objective to “make a fruit salad”, might independently decide to use blueberries if strawberries are unavailable.
What does this mean for product development?
We’ll get to that in a bit. But in the interim, this means that product development should evolve from focusing on monolithic systems to conceptualizing ecosystems of interconnected agents and tools. The emphasis shifts to designing the orchestration layer, which defines how these agents interact and ensures seamless integration with existing enterprise systems.
The orchestration layer is responsible for managing the entire workflow of an Agentic AI system, so it needs to seamlessly integrate with external platforms, tools, and systems. These agents can interact with APIs, databases, software libraries, and user interfaces to perform tasks on a user’s behalf.
Within the orchestration layer, there needs to be specialized AI models/subagents. These are highly focused performers designed to excel at specific tasks. The LLM then orchestrates these subagents and delegates tasks to them based on their domain knowledge and capabilities. (source)
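Here’s a loose sketch of what that delegation could look like. The subagents and routing keywords below are purely hypothetical, and a production system would typically let the LLM pick the subagent rather than matching keywords.

```python
# Hypothetical orchestration sketch: a "router" delegates tasks to
# specialized subagents based on their declared capabilities.
from typing import Callable

def billing_agent(task: str) -> str:
    return f"[billing agent] handled: {task}"

def support_agent(task: str) -> str:
    return f"[support agent] handled: {task}"

# Registry of subagents keyed by the domain they specialize in.
SUBAGENTS: dict[str, Callable[[str], str]] = {
    "invoice": billing_agent,
    "refund": billing_agent,
    "password": support_agent,
}

def orchestrate(task: str) -> str:
    """In production an LLM would route the task; here we keyword-match."""
    for keyword, agent in SUBAGENTS.items():
        if keyword in task.lower():
            return agent(task)
    return "escalate to a human"

print(orchestrate("Customer requests a refund for invoice #1234"))
```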
Evolving into an AI-native PM.
Sounds like a tag-line, I know. But the rise of Agentic AI and AI-native companies demands a fundamental evolution in the role of the PM. This new reality demands that PMs step fully into their strategic responsibilities, moving beyond the tactical execution often associated with “product owner” or “project manager”, and instead embrace true product leadership and accountability for outcomes.
Feature-centric to outcome-centric accountability: PMs are now explicitly responsible for defining success in terms of desired business outcomes, moving beyond simply managing feature roadmaps or optimizing conversion funnels. This redefines success metrics, shifting from usage-based engagement to goal-driven automation, ensuring PMs are measured by the tangible value created by the AI rather than just the effort expended. (source)
AI agents as strategic collaborators, not task executors: PMs must now strategically orchestrate AI agents as new colleagues. Utilizing AI agents frees up valuable time for higher-level strategic planning, allowing PMs to focus on the overarching product vision and market differentiation. PMs become “editors”, spending most of their time curating and refining AI outputs and focusing on strategic analysis. (source)
Systems design thinking is crucial: There is now deeper accountability for API design, integration depth, and the product’s interoperability within a broader API ecosystem. There now needs to be true ownership of the product’s underlying architecture and its interactions. (source)
Data is now a strategic asset: AI systems fundamentally rely on high-quality, domain-specific data. PMs are now accountable for integrating data strategy into their day-to-day, and ensuring that their data is “AI-ready” (accurate, consistent, complete, timely, and contextual).
Beyond these necessary shifts, PMs need a foundational understanding of AI concepts, how AI models learn, and how to manage systems that do not always behave predictably. (P.S. that’s a big reason why I started writing again!)
To start, we need to begin identifying areas where Agentic AI can add value in repeatable tasks that consume significant time. For example, I’m using Gemini to sort through Sheets to quickly derive insights, using its Deep Research functionality to help me outline frameworks that I can assess and curate for strategic planning, or using it to brainstorm ways to structure a message.
Also, AI systems fundamentally rely on high-quality, domain-specific data. As a PM, it’s now time to integrate data strategy into our role by considering where the data is coming from, how it’s structured, its quality and freshness, and how it can contribute to a competitive advantage.
Lastly, leverage the concept of “Trust by Design”, where trust, ethics, and governance are not afterthoughts. Specifically, when using AI, we need to conduct regular bias audits of AI decision-making patterns and continuously monitor and refine based on real-world performance. (source)
Navigating this new reality is going to be both a challenge and a huge opportunity for us as Product Managers. The future lies in harnessing the power of AI to create more efficient, innovative, and valuable products, with clear outcomes in mind.
Before you go!
In the spirit of continuous improvement, your feedback would help me make each post better and more relevant. It only takes approximately a minute to complete this survey and it’ll be extremely helpful!
New to this series? Check out the other articles 👉 Mental Models & Product
Happy thinking!
🤘, Isabel



