Transformative Discovery: Integrating Coaching Principles for Project Success

The Human-Centered Approach to Discovery

At the core of effective discovery work lies the importance of coaching when gathering requirements. Over time, I’ve realized that meaningful insights rarely emerge from rigid templates or formal interviews; instead, they arise through genuine conversations where people feel supported enough to pause, think deeply, and express what they need.

Often, an initial request such as “We need a dashboard,” or “Can you shorten this workflow?” uncovers more fundamental issues like decision-making, team alignment, confidence, or communication barriers. By approaching discovery with a coaching mindset, we can reveal these underlying concerns rather than just addressing superficial symptoms. If you’ve ever experienced a discovery session that seemed more like coaching than interviewing, you’ll recognize the value of intentionally cultivating this dynamic.

Reflecting on my recent years of interviews, I’ve noticed a shift: they increasingly resemble coaching sessions. Initially, I thought I was merely “collecting requirements,” but over time, it became clear I was guiding people in clarifying their actual needs. Rather than just recording their requests, I was facilitating their thinking.

In early design meetings, users typically begin with basic asks: “We want a dashboard,” “Can you make this workflow shorter,” “Can we have a button that does X?” These are useful starting points, but they seldom tell the whole story. When I consciously adopt a coaching approach (slowing down, listening attentively, and posing thoughtful questions), the dialogue changes dramatically. At that moment, our focus shifts beyond the user interface into deeper topics: friction, decision-making processes, confidence, accountability, ambiguity, and the human elements hidden beneath feature requests.

Many professionals who have spent decades in their roles rarely get the chance to reflect on the patterns shaping their daily work. So, when I ask something as straightforward as, “What’s the hardest part about planning next season?” the answer often reveals gaps and bottlenecks behind the scenes, rather than issues with the software itself. These stories simply don’t surface during standard meetings.

Uncovering Deeper Insights through Curiosity and Coaching

Curiosity allows us to explore areas untouched by process charts and requirement documents. Prioritizing the individual over the process exposes context that’s invisible on paper, like emotional burden, workplace politics, quiet worries, workarounds, and shared tribal knowledge. Coaching fosters an environment where all these factors come to light, transforming them into valuable material for design decisions.

I used to think the better I got at systems, the less I’d need to do this. But it turned out the opposite is true. The better the system, the more human the conversations become. Coaching is almost like a bridge, helping people cross from “I think I need this feature” to “Here’s what I’m actually trying to solve.”

Active Listening and Guided Curiosity

Active listening forms the core of my approach, ensuring I deeply understand not just participants’ words but the meaning behind them. I reflect statements back — such as, “So it sounds like the challenge isn’t entering the data, it’s aligning on which data to trust, right?” — to confirm genuine understanding. This often transforms technical discussions into conversations about alignment, ownership, or governance.

A key tool is the “Five Whys” technique, which I use as a guide for curiosity rather than a rigid checklist. If someone requests better notifications, I’ll probe: “Why is that important?” and follow with questions like, “Why is it hard to notice things right now?” or, “What happens when you miss something?” By the fourth or fifth ‘why,’ the conversation surfaces underlying factors such as workload, confidence, or fear of missing out, revealing emotional and operational triggers beneath the initial request.

In workplaces, these deeper issues often connect to organizational culture. For example, a request for faster workflows sometimes indicates a real need for predictability or reduced chaos, rooted in communication or authority structures rather than the system itself. Recognizing these patterns enables more effective design decisions by addressing root causes instead of just symptoms.

Intentional silence is another valuable technique. After asking a question, I resist filling the pause, giving participants space to think and speak freely. This silence often prompts unfiltered insights, especially when someone is on the verge of articulating something new. Allowing this space helps participants trust and own their insights, leading to more meaningful outcomes.

Future-Focused Exploration and Empowering Language

I also employ future-anchoring questions like, “Imagine it’s six months after launch — what does success look like for you?” or, “If the system made your job easier in one specific way, what would that be?” These help participants shift from immediate concerns to aspirational thinking, revealing priorities such as autonomy or coordination that guide design principles.

Tone and language are critical for psychological safety. I aim to make discovery feel inviting, often assuring participants, “There’s no wrong answer here,” or encouraging them to think out loud. When people use absolutes — “We always have to redo this,” “No one ever gives us the right information” — it signals where they feel stuck. I gently challenge these constraints by asking, “What might need to change for that to be different?” This opens possibilities and helps distinguish between real and internalized limitations. Coaching-based discovery is key to uncovering and addressing these constraints for lasting change.

Reflections and Takeaways

Coaching Tools as Foundational Practice

Initially, I viewed coaching tools as separate from implementation work, more an optional soft skill than a crucial element. Over time, my outlook changed: I came to see these tools as fundamental to successful outcomes. I noticed that the best results happened when participants truly took ownership of the insights we discovered together. That sense of ownership was strongest when the understanding came from them, even with my guidance. Insights gained this way tend to last longer and have a greater impact.

My approach to discovery has evolved significantly over time. Initially, I viewed discovery as a process focused on extracting insights from users. More recently, it has transitioned into facilitating users’ own self-discovery, enabling them to articulate intuitions and knowledge that may have previously been unexpressed. This progression from a transactional checklist to a collaborative and transformative meaning-making practice has had a substantial impact on my design methodology.

Efficiency through Early Alignment and Clarity

Contrary to prevailing assumptions, coaching-based discovery does not impede project timelines. Although it demands a greater initial investment of time, the resulting enhanced alignment and mutual understanding often expedite progress. Early engagement in substantive discussions enables teams to minimize rework, clarify decision-making processes, and avoid misinterpretations, which can ultimately result in projects being completed ahead of schedule due to unified objectives.

Efficiency is driven by clarity. When users feel acknowledged and their perspectives are incorporated, their level of engagement and willingness to collaborate increases. The trust established during these interactions persists throughout testing, feedback, and rollout stages, mitigating many subsequent problems that typically occur when user requirements are not considered from the outset.

Strong Implementation Questions Are Strong Coaching Questions

At their core, effective implementation questions are essentially strong coaching questions. These are fuelled by curiosity, maintain a non-judgmental tone, and aim to empower others. Instead of guiding someone toward a set answer, such questions encourage individuals to uncover their own insights about the work.

Regardless of the type of discovery — be it design, implementation, or workflow — insight comes from those directly involved. Coaching goes beyond mere technique; it represents a mindset based on the belief that people already hold valuable wisdom. The coach’s job is to help draw out this knowledge, using thoughtful questions.

A key moment in coaching-based discovery happens when someone has a sudden realization, saying things like, “I’ve never thought about it that way,” or “Now I understand why this keeps happening.” These moments are where improvements in design and implementation begin.

Such realizations act as anchors throughout a project. When team members shift their understanding, these breakthroughs can be revisited during times of complexity or tough decisions, providing direction as a “north star” to keep teams aligned.

Coaching is not just a resource; it should be demonstrated in everyday interactions. As teams experience its benefits, they often adopt coaching practices with each other, leading to genuine transformation that extends past individual projects and influences wider workplace culture.

Ultimately, the real value of this work lies not just in the solutions themselves, but in the conversations that reshape how people engage with their work.

Atom Loss: A Bottleneck in Quantum Computing

It was believed that a reliable quantum computer running indefinitely was a decade or more away. With these new advancements in mitigating atom loss, quantum computers running indefinitely and producing reliable results are only a few years away.

Introduction

Until recently, quantum computers have faced a significant obstacle known as ‘atom loss’, which has limited their advancement and ability to operate for long durations. At the heart of these systems are quantum bits, or qubits, which represent information in a quantum state, allowing them to be in the state 0, 1, or both simultaneously, thanks to superposition. Qubits are realized in physical systems such as atoms, ions, and superconducting circuits, engineered through precise manipulation and measurement of their quantum mechanical properties.

Historically, this atom loss phenomenon restricted quantum computers to performing computations for only a few milliseconds. Even the most sophisticated machines struggled to operate beyond a few seconds. However, recent breakthroughs by Sandia National Laboratories and Harvard University researchers have changed this landscape dramatically. At Harvard, researchers have built a quantum computer that could sustain operations for over two hours [1], a substantial improvement over previous limitations. This advancement has led scientists to believe they are on the verge of enabling quantum computers to run continuously, potentially without time constraints.

What causes atom loss?

Atom loss presents a significant challenge in quantum computing, as it results in the loss of the fundamental unit of information – the qubit – along with any data it contains. During quantum computations, qubits may be lost from the system due to factors such as noise and temperature fluctuations. This phenomenon can lead to information degradation and eventual system failure. To maintain qubit stability and prevent atom loss, a stringent set of physical, environmental, and engineering conditions must be satisfied.

Environmental fluctuations

Maintaining the integrity of qubits in a quantum computing system is heavily dependent on shielding them from various environmental disturbances. Qubits are highly sensitive to noise, electromagnetic fields, and stray particles, any of which can interfere with their quantum coherence. Quantum coherence describes the ability of a qubit to remain in a stable superposition state over time; the duration of this coherence directly affects how long a qubit can function without experiencing errors.

One fundamental requirement for preserving quantum coherence is the maintenance of cryogenic environments. Qubits must be kept at temperatures near absolute zero, which is essential for eliminating thermal noise and fostering the quantum behaviour necessary for reliable operations. Even slight fluctuations in temperature or the presence of external electromagnetic influences can cause the delicate quantum state of a qubit to degrade or flip unpredictably, leading to information loss and system errors [2].

These stringent environmental controls are critical for ensuring that qubits remain stable and effective throughout quantum computations, highlighting the importance of addressing environmental fluctuations as a key challenge in quantum computing.

Trap imperfections

Neutral atom processors have become a prominent platform for achieving large-scale, fault-tolerant quantum computing [3]. This approach enables qubits to be encoded in states that possess exceptionally long coherence times, often extending up to tens of seconds. The extended coherence time is crucial for maintaining quantum information over prolonged computations, which is essential for complex and reliable quantum operations.

The operation of neutral atom processors relies on the use of optical tweezer arrays. These arrays are dynamically configured, allowing qubits to be trapped in arbitrary geometries and enabling the system to scale to tens of thousands of qubits. The flexibility in configuration and scalability makes neutral atom processors especially suited for advancing quantum computing technology beyond previous limitations.

Despite these advantages, neutral atom processors are not immune to challenges. Atom loss remains a significant issue, arising from several sources. Heating within the system can cause atoms to escape their traps, while collisions with background gas particles further contribute to atom loss. Additionally, during the excitation of an atom from one quantum state to another, such as the transition to a Rydberg state, anti-trapping can occur, leading to the loss of qubits from the processor array.

Readout errors

During the process of reading out quantum information, qubits may be displaced from their positions within the two-dimensional arrays. This readout operation, which involves imaging the qubits to determine their quantum state, can inadvertently lead to the loss of qubits from the processor array. Such atom loss poses a risk to the integrity and continuity of quantum computations.

To address this challenge, neutral atom processor arrays are typically designed with additional qubits that act as a buffer. These extra qubits ensure that, even when some atoms are lost during readout or other operations, enough qubits remain available for the system to continue performing calculations reliably.

Another approach to mitigating atom loss during readout is to slow down the imaging process. By reducing the speed of readout operations, the likelihood of displacing qubits can be minimized, thereby decreasing the rate at which atoms are lost from the array. However, this strategy comes with a trade-off: slowing down readout operations leads to reduced overall system efficiency, as calculations take longer to complete [4]. As a result, there is an inherent balance between maintaining qubit integrity and preserving the speed and efficiency of quantum computations.
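To make this trade-off concrete, here is a toy back-of-envelope model of the balance between readout speed and atom retention. The assumption that per-readout loss falls as imaging slows (roughly 0.02 / readout time, in milliseconds) is purely illustrative, not a measured figure:

```python
import math

# Toy model of the readout trade-off: assume gentler (slower) imaging
# displaces fewer atoms per readout, roughly p_loss ~ 0.02 / readout_ms.
# All numbers are illustrative assumptions, not published values.
for readout_ms in (0.5, 1.0, 2.0, 4.0):
    p_loss = 0.02 / readout_ms                 # per-atom loss per readout
    cycles_to_half = math.log(0.5) / math.log(1 - p_loss)
    uptime_s = cycles_to_half * readout_ms / 1000
    print(f"{readout_ms:>4.1f} ms readout: {cycles_to_half:8.0f} cycles "
          f"({uptime_s:6.2f} s) until half the array is lost; "
          f"throughput {1000 / readout_ms:5.0f} readouts/s")
```

Under these assumed rates, slower readout wins on retention (more usable cycles, longer uptime) but loses linearly on throughput, which is exactly the balance described above.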

Imperfect isolation

Maintaining perfect isolation of qubits from their environment is an immense challenge, primarily because it demands highly sophisticated and costly shielding methods. In practice, it is virtually impossible to completely shield quantum systems from external influences. As a result, stray electromagnetic signals, fluctuations in temperature, and mechanical vibrations can penetrate these defences and interact with quantum systems. Such interactions are detrimental, as they can disrupt the delicate balance required for quantum operations and ultimately lead to atom loss within the processor array [5]. These environmental disturbances compromise the stability and coherence of qubits, posing a significant obstacle to the reliability and scalability of quantum computers.

Recent solutions and research

Multiple research teams are developing ways to reduce atom loss by detecting and correcting missing atoms in quantum systems, improving calculation reliability.

Researchers at Sandia National Laboratories, in collaboration with the University of New Mexico, have published a study demonstrating, for the first time, that qubit leakage errors in neutral atom platforms can be detected without compromising or altering computational outcomes [6]. The team achieved this by utilising the alternating states of entanglement and disentanglement among atoms within the system. In experiments where the atoms were disentangled, results showed substantial deviations compared to those observed during entanglement. This approach enabled the detection of the presence of adjacent atoms without direct observation, thereby preserving the integrity of the information contained within each atom.

Ancilla qubits are essential in quantum error correction and algorithms [7]. These extra qubits help with measurement and gate implementation, yet they do not store information from the main quantum state. By weakly entangling ancilla qubits with the physical qubits, it becomes possible for them to identify errors without disturbing the actual quantum data. Thanks to non-demolition measurements, errors can be detected while keeping the physical qubit’s state intact.
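As a rough illustration of the idea (a minimal NumPy state-vector sketch, not the Sandia protocol itself), the following code shows how an ancilla can flag a bit-flip on two data qubits via a parity check, without ever measuring the data qubits directly. The qubit ordering and the injected error are assumptions chosen for the demo:

```python
import numpy as np

# Sketch: two data qubits plus one ancilla, basis index = |d0 d1 a>.
# Two CNOTs copy the parity (d0 XOR d1) onto the ancilla, so reading
# the ancilla reveals whether a single bit-flip occurred, while the
# data qubits themselves are never measured.

def apply_cnot(state, control, target, n=3):
    """Apply a CNOT by permuting computational-basis amplitudes."""
    out = np.zeros_like(state)
    for i in range(2 ** n):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = int("".join(map(str, bits)), 2)
        out[j] = state[i]
    return out

for label, index in (("no error", 0b000), ("flip on data qubit 1", 0b010)):
    state = np.zeros(8, dtype=complex)
    state[index] = 1.0
    state = apply_cnot(state, control=0, target=2)  # d0 -> ancilla
    state = apply_cnot(state, control=1, target=2)  # d1 -> ancilla
    p_flag = sum(abs(state[i]) ** 2 for i in range(8) if i & 1)
    print(f"{label}: P(ancilla reads 1) = {p_flag:.1f}")
# no error: 0.0; flip on data qubit 1: 1.0 -- the error is flagged
# while the data qubits' state is left undisturbed.
```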

A group of physicists from Harvard University have recently created the first quantum computer capable of continuous operation without needing to restart [1]. By inventing a technique to replenish qubits in optical tweezer arrays as they exit the system, the researchers managed to keep the computer running for more than two hours. Their setup contains 3,000 qubits and can inject up to 300,000 atoms each second into the array, compensating for any lost qubits. This approach enables the system to maintain quantum information, even as atoms are lost and replaced. According to the Harvard team, this innovation could pave the way for quantum systems that can function indefinitely.
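A quick back-of-envelope check on the reported figures shows why that injection capacity provides ample headroom; the loss rate below is an assumption for illustration, since the article’s sources do not give one:

```python
# Reported figures: 3,000 qubits in the array, up to 300,000 atoms/s
# injected. The 1%-per-second loss fraction is an illustrative
# assumption, not a published number.
array_size = 3_000
reload_rate = 300_000                   # atoms per second
assumed_loss_fraction = 0.01            # fraction of array lost each second

atoms_lost_per_s = array_size * assumed_loss_fraction   # 30 atoms/s
print(f"lost: {atoms_lost_per_s:.0f} atoms/s, "
      f"headroom: {reload_rate / atoms_lost_per_s:,.0f}x")
# Even a loss rate hundreds of times worse than assumed here would stay
# well below the reload capacity, which is what makes continuous
# operation feasible once replenishment is in place.
```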

Conclusion

It was previously believed that atom loss could seriously hinder the progress of quantum computing. Atom loss and qubit leakage were serious errors that could render calculations unreliable. With the advancements introduced by researchers at Sandia National Laboratories, the University of New Mexico and Harvard University, and a host of other teams around the world, the revolutionary advances quantum computers could bring to scientific research, medicine and finance are closer than ever. It was believed that a reliable quantum computer running indefinitely was a decade or more away. With these new advancements in mitigating atom loss, quantum computers running indefinitely and producing reliable results are only a few years away.

References

[1] Harvard Researchers Develop First Ever Continuously Operating Quantum Computer

[2] Quantum Chips: The Brains Behind Quantum Computing

[3] Quantum Error Correction resilient against Atom Loss

[4] Novel Solutions For Continuously Loading Large Atomic Arrays

[5] Quantum Decoherence: The Barrier to Quantum Computing

[6] A breakthrough in Quantum Error Correction

[7] Ancilla Qubit

Bringing Ideas to Life: My Journey as a Product Architect

My work is about helping clients and organizations bring their ideas to life, transforming understanding into development, and development into reality, with as little friction and as much functionality as possible.

Lately, I have been reflecting on what drew me, as a designer, to write about topics such as artificial intelligence and quantum computing. I have been fascinated with both topics and how they have transformed the way we view the world. Everything we see today in terms of advancements in AI and quantum computing started with an idea, brought to life through innovation and perseverance.

In AI, there was the idea that machine learning would transform the way we do business by leveraging large amounts of data to provide valuable insights, something that would not be easily attainable through human effort. In quantum computing, there was the idea that applying the way particles behave in the universe to computing would unlock a vast potential for computing capabilities and power, beyond what classical computers can achieve. So many other advancements and achievements in AI and quantum computing continue to be realized through the conception of ideas and the relentless pursuit of ways and methods to implement them.

Everything starts with an idea

Beyond AI and quantum computing, everything we see around us started with an idea, brought to life through continued and persistent effort to make it a reality. Every building we see, every product, every service and all material and immaterial things in our lives are the product of an idea.

As a designer and product architect, I also help make ideas a reality through persistent effort and the application of methodology that lays a roadmap for the implementation of those ideas. Similarly, AI and quantum computing are fields that are bringing novel and exciting concepts to life through the development and application of scientific methodology.

While thinking about all of this, I pondered how I would define my work and role as a designer. How would I describe my work, knowing that most of us use technology without thinking about the journey a product takes from idea to experience? What value do I bring to organizations that hire me to help them with their problems? In an age where products are incorporating ever more advanced and sophisticated technology, as is the case with AI and quantum computing, how does my work extend beyond simply developing designs and prototypes?

To answer these questions, I am drawn back to the fact that everything around us starts with an idea. As a designer, it is extremely rewarding to me to help make my clients’ ideas a reality while navigating the conceptual, technical and implementation challenges.

Making the invisible useful

I’ve been thinking a lot about the similarities between how we design physical spaces and how we design digital ones. Just as a building starts as an idea in an architect’s mind, so do the products that I work on and help a multitude of organizations bring to life. As a designer, I help lay the foundations for a product idea by thoroughly understanding the motivations and needs behind it, and what benefits and improvements implementing it would bring.

Buildings serve needs by providing housing for people or serving as places to work, and for businesses and organizations to operate. A well-designed building offers an effortless flow that draws people in and makes them want to stay. Similarly, great digital design allows for seamless navigation, creating an experience that feels natural and engaging. Before an architect devises plans and drawings for a building, they must first maintain a clear vision of the idea in their mind, understand the needs behind it and ensure that their designs and plans meet those needs.

From there, the idea and concept of the building in the architect’s mind are translated into plans and drawings. Those plans are drawn and shared with a builder, who in turn collaborates with the architect to bring them to life. Without the architect and their clear vision of the idea and concept behind the building, the building would not exist, at least not in the shape and form that the architect would have imagined. It would not properly serve the needs and bring about the benefits that accompanied the original idea.

Just like a building architect, as a product architect I must also understand the needs behind digital products to create experiences that truly serve the user. Through this process, I envision flows and interactions that will enable users to achieve their goals in the simplest and easiest way possible, reducing friction while also achieving the desired business value and benefit. Like an architect, I collaborate with members of technical teams so that the idea behind the product can be realized to its full potential through detailed roadmaps, designs and prototypes.

Figure 1. Architects are masters of the invisible made useful.

An architect must possess technical and creative skills that enable them to visualize the idea of a building. The same is true for me as a product architect. Without the ability to clearly articulate complex technical concepts through detailed designs and specifications while also applying a creative lens, product ideas would not be realized to their full potential.

In summary, how do I define my work? My work is about helping clients and organizations bring their ideas to life, transforming understanding into development, and development into reality, with as little friction and as much functionality as possible. I can help you and your organization achieve the same. Let me show you how.

The Principles of Quantum Computing Explained

Today, a variety of companies are producing mainstream quantum hardware and making tools available to developers, turning quantum computing technology that was theoretical a few decades ago into a reality.

Introduction

During one of his Messenger Lectures at Cornell University in 1964, the renowned Nobel laureate and theoretical physicist Richard Feynman famously said, “I think I can safely say that nobody understands quantum mechanics.” Feynman emphasized the counterintuitive nature of quantum mechanics, and encouraged listeners at his lecture to simply accept how atoms behave at the quantum level, rather than trying to apply a classical understanding to it [1].

At its core, quantum theory describes how light and matter behave at the subatomic level. Quantum theory explains how particles can appear in two different places at the same time, how light can behave both as a particle and a wave, and how electrical current can flow both clockwise and counter-clockwise in a wire. These ideas can seem strange to us, even bizarre, yet quantum mechanics gave rise to a new world of possibilities in science, technology and information processing.

What is a quantum computer?

While classical computers use bits that can be either 0 or 1, quantum computers use quantum bits (qubits) that can be 0, 1 or both at the same time, suspended in superposition. Qubits are created by manipulating and measuring systems that exhibit quantum mechanical behaviour. Because qubits can hold superposition and exhibit interference, they can solve problems differently than classical computers.
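To ground this in something concrete, here is a minimal NumPy sketch (plain linear algebra, no quantum SDK) of a qubit as a two-component state vector, with a Hadamard gate creating an equal superposition:

```python
import numpy as np

# A classical bit is exactly [1, 0] (the state 0) or [0, 1] (the state 1);
# a qubit may be any normalized complex combination of the two.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate puts |0> into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
qubit = H @ ket0

# Measurement probabilities are the squared magnitudes of the amplitudes.
print(np.abs(qubit) ** 2)  # [0.5 0.5]: equally likely to read 0 or 1
```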

Quantum computers perform quantum computations by manipulating the quantum states of qubits in a controlled way to execute algorithms [2]. A quantum computer can transform an arbitrary input quantum state into an arbitrary output quantum state. This enables quantum computers to accurately compute the behaviour of small particles that follow the laws of quantum mechanics, such as an electron in a hydrogen molecule. Quantum computers can also be used to efficiently run optimization and machine learning algorithms.

For example, a classical computer might apply a brute force method to solve a maze by trying every possible path and remembering the paths that don’t work. A quantum computer, on the other hand, may not need to test all paths in the maze to arrive at the solution. Instead, given a snapshot of the maze, a quantum computer relies on measuring the probability amplitudes of qubits to determine the outcome. Since the amplitudes behave like waves, the solution emerges where the waves reinforce one another rather than cancel out.

Principles of quantum computing

Quantum computing relies on four key principles:

Superposition – a qubit can exist in a weighted combination of the states 0 and 1, giving quantum computers a complex, multi-dimensional computational space in which problems can be represented in new ways. Measuring a quantum state collapses it from a superposition of possibilities into a definite binary state that can be registered as 0 or 1 [3].

Entanglement – the ability of qubits to correlate their states with one another. Entangled qubits are so closely connected that the state of one cannot be described independently of the others; measuring one qubit immediately yields information about the others in the system.

Interference – qubits placed in a state of collective superposition carry information structured like waves, with an amplitude associated with each possible outcome. These waves can reinforce one another or cancel each other out, amplifying the probability of a specific outcome or suppressing it. Both effects are forms of interference.

Decoherence – occurs when a system collapses from a quantum state to a non-quantum state. This can be triggered intentionally through measurement of the quantum system, or unintentionally by environmental factors. Quantum computers must avoid or minimize unintended decoherence.

Combining these principles can help explain how quantum computers work. By preparing a superposition of quantum states, a quantum circuit written by the user uses operations to entangle qubits and generate interference patterns, as governed by a quantum algorithm. Outcomes are either canceled out or amplified through interference, and the amplified outcomes serve as the solution to the computation.
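A short NumPy continuation of the earlier sketch illustrates these principles working together: a Hadamard creates superposition, a CNOT entangles two qubits into a Bell state, and applying the Hadamard twice shows destructive interference canceling the amplitude of the 1 outcome:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Superposition on qubit 0, then entangle it with qubit 1.
state = np.zeros(4, dtype=complex); state[0] = 1.0   # |00>
state = CNOT @ (np.kron(H, I2) @ state)
print(np.round(np.abs(state) ** 2, 3))  # [0.5 0. 0. 0.5]: Bell state --
# measuring one qubit immediately fixes the other (entanglement).

# Interference: applying H twice returns |0> exactly, because the two
# paths leading to the outcome 1 carry opposite signs and cancel.
qubit = H @ (H @ np.array([1, 0], dtype=complex))
print(np.round(np.abs(qubit) ** 2, 3))  # [1. 0.]
```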

Conclusion

Today, a variety of companies are producing mainstream quantum hardware and making tools available to developers, turning quantum computing technology that was theoretical a few decades ago into a reality. Superconducting quantum processors are being delivered at regular intervals, increasing quantum computing speed and capacity. Researchers are continuing to make quantum computers even more useful, while overcoming challenges related to scaling quantum hardware and software, quantum error correction and quantum algorithms.


Designing solutions that work for users is what fuels my work. I’d love to connect and talk through your design ideas or challenges; connect with me today on LinkedIn or contact me at Mimico Design House.


References

[1] Quantum Mechanics by Richard P. Feynman

[2] The basics of Quantum Computing

[3] What is quantum computing?

Retail Is Entering Its Agentic AI Era

The retail landscape is being rapidly transformed by agentic AI programs that are driving a competitive race to lead in autonomy, speed and personalized customer experiences. In 2025, retailers must move quickly and aggressively to implement agentic AI across business functions, or they risk being left behind, or worse, forced to exit.

Introduction

AI agents are redefining retail and are evolving into autonomous assistants that plan, recommend and take action. One of the most prominent examples of this shift is Walmart’s “Sparky”, a conversational AI shopping assistant in the mobile app that can understand customers’ shopping needs, suggest relevant products, answer questions and provide recommendations based on preferences [1]. Walmart is betting big on AI to drive its e-commerce growth and is aiming for online sales to account for 50% of its total sales [2].

Amazon, another retail giant, is using AI on a different scale by creating a harmonious ecosystem of AI and Machine Learning (ML) models across the different functional areas of the business. For example, demand forecasting is accomplished using models that leverage sales history, social media, economic trends and weather to predict demand more accurately. ML algorithms use data across the supply chain to optimize stock levels and replenishment strategies to ensure alignment with predicted demand. Amazon is also using AI to automate inventory management and AI-driven robots to manage the movement of goods within warehouses. Other AI models optimize delivery routes in real time using inputs like traffic conditions and weather, among other factors [3].

Retailers that make use of AI and ML will maintain a competitive edge; those that do not risk being left behind or forced to exit. Amazon’s approach of creating an ecosystem in which the output of one AI model feeds into another ensures that the business continues to add efficiencies and boost future profitability. Across the U.S., retailers are investing heavily in AI agents, with 83% of companies claiming AI is a top priority in their business plans [4].

These statistics bring about an interesting question: what if every customer and every employee had their own AI agent, helping find products and optimize their shopping experience, or helping with labor-intensive tasks? AI agents are evolving from pilot projects to front-line and business critical applications, enabling businesses to gain a competitive edge and attract customers with better online shopping experiences.

What Are AI Agents? 

In the context of AI, “agentic” refers to autonomous systems capable of making decisions and acting independently. AI agents are a more advanced form of AI that can make decisions and take actions with little or no human intervention. Agentic AI can combine multiple interconnected AI agents that are continuously learning, reasoning and acting proactively. Businesses can customize AI agents to meet their needs, given the flexibility and adaptability of AI agents for a wide range of industries and applications [5][6]. 

The key features of agentic AI include: 

  • Autonomy: the ability to work autonomously to analyze data and solve problems in real-time with little human intervention. 
  • Collaboration: the ability of multiple AI agents to work together leveraging Large Language Models (LLMs) and complex reasoning to solve complex business problems. 
  • Learning and adaptation: dynamically evolving by interacting with the environment and refining strategies based on feedback and real-time data. 
  • Reasoning and proactivity: identifying issues and forecasting trends to make decisions such as reordering inventory or resolving customer complaints (a minimal sketch of this sense-reason-act loop follows the list). 
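For a feel of how these features fit together, here is a deliberately simplified, hypothetical sketch of the sense-reason-act loop. Every class, function and SKU name is illustrative; a real agent would delegate reasoning to an LLM and route actions through integrated business systems and guardrails:

```python
from dataclasses import dataclass, field

@dataclass
class InventoryAgent:
    """Hypothetical agent: perceives stock data, reasons about risk,
    and acts autonomously, keeping a memory of outcomes to learn from."""
    reorder_threshold: int = 20
    memory: list = field(default_factory=list)

    def perceive(self, stock_levels: dict) -> list:
        # Reasoning/proactivity: flag SKUs at risk of stockout.
        return [sku for sku, qty in stock_levels.items()
                if qty < self.reorder_threshold]

    def act(self, at_risk: list) -> list:
        # Autonomy: raise reorders without waiting for a human.
        orders = [(sku, self.reorder_threshold * 2) for sku in at_risk]
        self.memory.append(orders)  # feedback for later adaptation
        return orders

agent = InventoryAgent()
stock = {"SKU-001": 8, "SKU-002": 55, "SKU-003": 14}  # made-up data
print(agent.act(agent.perceive(stock)))
# [('SKU-001', 40), ('SKU-003', 40)]
```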

The adoption of agentic AI in 2025 is gaining momentum as businesses aim to move from insight to action with greater speed and efficiency. Agentic AI addresses the scarcity of human resources available to deal with the growing volume, complexity and interdependence of data. By moving at the speed of machine computation, agentic AI allows businesses to be more agile in real time, act on business-critical insights more quickly, and scale more rapidly.

The competitive edge introduced by agentic AI is driving its rapid adoption, which comes down to the following factors [7][8][9]:

  • Speed: Businesses must react to customer needs, supply chain factors and market conditions at unprecedented speeds in 2025. It is no longer sufficient to use traditional AI that still relies on human intervention. Agentic AI can not only forecast problems and issues, but also act on them: resolving customer issues before they occur and heading off supply chain disruptions before they happen. 
  • Reduce reliance on humans, not replace them: Agentic AI does not aim to replace humans and take away jobs, but rather to augment them. It acts as a co-worker that enhances productivity by taking over the analysis of repetitive, data-intensive processes, creating forecasts that enable faster decision-making, and freeing employees to focus on business strategy and the creative, innovative decisions that allow the business to keep growing. Agentic AI allows businesses to increase performance while cutting costs, without the need for increased human intervention. 
  • Cost reduction and improved ROI: Agentic AI is also unlocking vast opportunities for cost reduction, through quick evaluation of data, testing strategies and adjusting operations in real-time. By automating repetitive and data-intensive processes, AI agents reduce the dependence on manual labor, minimize errors that translate to rework and add cost-effectiveness and efficiency that in turn result in higher ROI.  
  • Enhanced customer experience: AI agents are capable of contextual understanding, proactive assistance and continuous learning. This allows them to boost customer satisfaction and loyalty by offering instant, real-time assistance and answers to customer queries while reducing wait times and improving resolution rates. 
  • Businesses must adapt or die: Agentic AI allows businesses to remain at the forefront of their market by learning and adapting in real time. In 2025, customers expect instant and personalized service. It is becoming easier for businesses to integrate agentic AI into their various systems, especially with the introduction of the Model Context Protocol (MCP) integration framework, which enables intelligent agents to interact with external systems in a standardized, secure, and contextual way. User-friendly applications allow businesses to connect and deploy AI agents via a visual workflow builder without coding. Businesses have the opportunity to adapt by leveraging the technologies and capabilities available to them today to implement agentic AI. 

The following table illustrates how AI is being implemented across various areas within Retail. 

Customer experience 
Applications: 
  • Personalize services, answer questions, and process orders 
  • Offer product and project guidance 
  • Smart kiosks assist with product search, availability, and location 
  • AI delivers instant answers, recommendations, and smoother shopping 
Examples: 
  • Walmart’s “Sparky” suggests products and summarizes reviews 
  • Lowe’s AI assistant offers DIY and product support via its app 
  • H&M’s chatbot recommends outfits, boosting satisfaction by 30% [10] 

Inventory Management 
Applications: 
  • Streamline store operations and inventory management 
  • AI robots track stock and automate restocking 
  • Smart shelves auto-detect low stock and reorder 
  • Forecast demand using sales and market data 
  • AI schedules staff based on foot traffic forecasts 
  • Video analytics detect theft and safety issues 
Examples: 
  • Zara’s AI cut stockouts by 20% and excess by 15% by using data from sales, customer behavior and market trends to forecast demand 
  • Walmart uses robots for real-time shelf scanning 
  • Home Depot’s AI helps staff quickly access the data and information needed to help customers 

Supply Chain 
Applications: 
  • Adjust orders and routing using sales, weather, and trend data 
  • Track shipments, suppliers, and logistics for full supply chain visibility 
  • Improve forecasting to optimize supply chain operations 
  • Cut costs by aligning forecasts with supply chain efficiency 
Examples: 
  • Kroger’s AI forecasting cut food waste by 10% and improved inventory accuracy by 25% 
  • Unilever’s AI use reduced supply chain costs by 15% and improved delivery times by 10% 
  • Walmart also achieved major gains through AI-driven supply chain improvements 

Marketing 
Applications: 
  • Agentic AI manages end-to-end customer journeys across commerce, content, loyalty, and service [11] 
  • AI interprets real-time journey data to adapt marketing strategies 
  • Retailers use AI insights to keep campaigns fast, relevant, and effective 
  • AI analyzes feedback to spot improvements and cut manual tasks 
Examples: 
  • Nike uses AI to predict purchases and personalize marketing, boosting engagement by 20% and driving sales 
  • Coca-Cola uses predictive analytics to shift budget to high-performing channels, increasing Instagram spend by 20% and sales by 15% 

Table 1: Retail Examples Where AI Is Already Driving Impact

What Executives Should Do To Drive The Agentic AI Shift 

AI agents are changing how organizations can deliver value to their customers, improve customer experience and manage risks. Executives are becoming increasingly aware that agentic AI is not just an automation tool, but rather a new way to drive deep business innovation and, if harnessed correctly, a way to maintain a competitive advantage. 

Executives must lead the shift in the organization towards agentic AI by aligning governance and priorities to support the required IT and data investments. To facilitate this shift, the CEO must focus on [12][13]: 

  • Invest in labor and technical infrastructure: remove the barriers across the various systems in the organization so that AI agents can operate across functional areas. In addition, upskill and retrain the workforce to work with the new technologies introduced by agentic AI. 
  • Lead the organizational shift: establish the goals and intended value of using agentic AI in the organization, and how it is to be used as a partner in creating value. The goal should not simply be to optimize headcount and reduce costs; it is about leading the organization into the future of retail. 
  • Highlight key projects: by spearheading key, high-value projects in areas such as supply chain management, operations and customer service, the CEO can build momentum and rally resources. They can also demonstrate the value of agentic AI by tracking KPIs. 
  • Oversee risk, compliance, and ethics: it is essential for the CEO to oversee all regulatory, privacy, transparency and risk issues related to the adoption of agentic AI. This allows the organization to proceed with confidence in implementing the technical and IT infrastructure needed, and to realize the value and gains from agentic AI quickly and efficiently. 

It is important to note that organizations that can quickly adopt and adapt to agentic AI will gain the competitive edge. The value proposition for executives in adopting this technology can be summarized in the following key elements: 

  • Business transformation through automation and productivity: Agentic AI goes beyond the capabilities offered by Gen AI and can handle complex workflows through autonomous decision-making. This allows staff to work alongside AI agents and use their output while focusing on the strategic, high-value tasks that boost worker productivity and make efficient use of their time. 
  • Gaining a competitive edge: AI agents work continuously, adapting to real-time issues, learning and making decisions quickly. This enhances customer experience and boosts innovation and resilience against market changes. 
  • Boost ROI and increase revenues: Studies have shown that agentic AI contributes up to an 18% improvement in customer satisfaction, employee productivity, and market share, with $3.50 in return for every $1 invested, realized over a 14-month payback period [14]. This is driven primarily by redirecting human resources from repetitive, low-value tasks to more strategic, high-value ones. 

  • Enable rapid scaling and agility: AI agents can help lead the transformation of the organization to be more forward-looking and competitive, by driving business transformation, upskilling the workforce and enabling the rapid scaling and adaptation of business models. 

Implementation Priorities: How to Get Started 

The diagram below illustrates the interconnected functional areas and how they intersect with inventory management in an omnichannel retail environment. The data flowing between these functions feeds AI models, which generate the insights needed to optimize inventory, fulfillment, and customer responsiveness.

Figure 1: Inventory Management across Functional Areas in Retail

The table below outlines key functional areas, the associated data points, and how AI is applied to enhance operational efficiency. 

Factory / Seller* 
Key data points: 
  • Proforma Invoice 
  • Commercial Invoice 
  • Packing List 
AI usage: 
  • Predict lead times and invoice anomalies 
  • Detect supply risk patterns 

Shipper 
Key data points: 
  • Advanced Shipping Notice (ASN) 
AI usage: 
  • Predict shipment delays 
  • Optimize dock scheduling at the warehouse 

Warehouse 
Key data points: 
  • Putaway status 
  • Inventory quantity & location 
  • SKU detail 
  • Cycle count accuracy 
  • Labor handling time 
AI usage: 
  • Predict slotting needs 
  • Detect discrepancies 
  • Optimize workforce allocation 

Available Inventory 
Key data points: 
  • On-hand quantity 
  • Committed vs free inventory 
  • Safety stock levels 
AI usage: 
  • Dynamic Available-to-Promise (ATP) calculation 
  • Reallocation suggestions 
  • Overstock / stockout alerts 

Allocation 
Key data points: 
  • Demand forecasts 
  • Store sales velocity 
  • Promotion calendar 
AI usage: 
  • Optimize first allocation 
  • Recommend flow-through allocation 

Replenishment 
Key data points: 
  • Sell-through data 
  • Min/max thresholds 
  • Lead times 
AI usage: 
  • Auto-trigger replenishment 
  • Predict out-of-stock risk 
  • Dynamic reorder points 

Store Inventory 
Key data points: 
  • Store on-hand inventory 
  • Returns & damages 
  • Shelf vs backroom split 
AI usage: 
  • Optimize replenishment routing 
  • Detect phantom inventory 

Customer Order 
Key data points: 
  • SKU ordered 
  • Delivery preference 
  • Fulfillment location 
AI usage: 
  • Predict best node to fulfill (e.g., ship-from-store vs DC) 
  • Reduce split shipments 

Fulfillment / Distribution 
Key data points: 
  • Pick time 
  • Delivery time 
  • On-time % 
  • Exception logs 
AI usage: 
  • Route optimization 
  • Predict fulfillment delays 
  • Auto rerouting 

Reorder Loop 
Key data points: 
  • Real-time sales data 
  • Inventory velocity 
  • Reorder frequency 
AI usage: 
  • Adaptive reorder intervals 
  • Prevent overstocks / stockouts 

Table 2: How Data Enables AI to Improve Inventory Across the Supply Chain 
*Assumes FOB Incoterms
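As one concrete example of the “dynamic reorder points” entry above, the sketch below applies the standard reorder-point formula (expected demand over the lead time plus safety stock). The demand figures are invented, and in an agentic setup the inputs would be re-estimated continuously rather than treated as fixed thresholds:

```python
import math

def reorder_point(avg_daily_demand, demand_std, lead_time_days, z=1.65):
    """Reorder point = expected lead-time demand + safety stock.
    z = 1.65 targets roughly a 95% service level."""
    safety_stock = z * demand_std * math.sqrt(lead_time_days)
    return avg_daily_demand * lead_time_days + safety_stock

# Made-up inputs: an AI agent would refresh these from live sales and
# supplier data, then recompute the trigger instead of using a static
# min/max threshold.
print(round(reorder_point(avg_daily_demand=40, demand_std=12,
                          lead_time_days=5), 1))  # ~244.3 units
```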

Implementing Agentic AI follows a multi-phased approach that integrates technology, business and culture. This approach can be iterative and repeated as necessary depending on the complexity and scope of the processes being automated [15]. 

Readiness ➡ Design ➡ Pilot ➡ Scale 

Assessing readiness 

Assessing readiness involves evaluating and auditing workflows, data infrastructure and IT capabilities to ensure compatibility with agentic AI needs. This includes ensuring that AI model outputs will be compatible with the organization’s future audit needs and that IT infrastructure can support the AI models’ data requirements.

It is also important to evaluate the company’s culture and assess the adaptability and openness to automation. This is a good opportunity to address any resistance and skill gaps through education and training to ensure that teams see the value agentic AI will add to their work. 

The readiness phase is also a good opportunity to identify high-impact business use cases that can be used to pilot agentic AI processes, and to scale to the rest of the organization as these processes are further developed and defined.

Design 

The design phase is important for defining objectives and scope, securing leadership buy-in, and ensuring that data systems are properly integrated to meet the needs of agentic AI models.

  • Defining scope and objectives involves setting clear and measurable business goals and aligning AI initiatives with the overall company strategy. This is best achieved by identifying key business processes and applications that could provide the highest impact, show the best ROI and serve as the benchmark for future projects and applications. 
  • Securing leadership and cross-functional team buy-in is also critical in ensuring that AI models are fully adopted into the various business processes, and that communicated ROIs are realized to their fullest potential. This is achieved by securing sponsorship at the executive level and assembling multi-disciplinary teams from IT, data science and engineering, operations and compliance. It is essential that attainable and measurable ROI targets are clearly communicated so that teams work collectively towards the defined goals and objectives. 
  • Mapping data and systems integration ensures that agentic AI systems have easy, real-time access to data across various silos, including CRM, ERP and other cloud applications. This is essential in allowing agentic AI models to ingest all the data their algorithms require and produce accurate, timely outputs to guide decisions. Close attention must be paid to upgrading the security of all systems as they are integrated, to ensure that no vulnerabilities are introduced in the process. 

Pilot 

Deploy the AI models in a contained environment that allows collecting live data for training. This is a great opportunity to train, fine-tune and iterate on the agents to ensure they produce accurate output, ROIs are met and compliance is achieved. Correct errors in the models and the algorithms, monitor output and behavior, and document outcomes.  

Scale 

Scale the phased approach across additional business functions and processes while increasing integration across the various AI agents as they are scaled. Continue to retrain agents and monitor their performance and output, paying close attention to monitoring and updating the risks and adding controls as necessary. It is also essential to continue to train and upskill employees to enable them to collaborate productively with agents. 

Risks, Realities, and Responsible Scaling 

Agentic AI is projected to automate up to 15% of day-to-day enterprise decisions by 2028, and potentially resolve 80% of standard customer service issues [16]. However, this also introduces a large risk surface, especially for critical systems.  

  • Increased cyber-attack and security risks – agentic AI systems are designed to act autonomously across multiple systems, with access to various data silos across the organization. This creates a multitude of entry points and vulnerabilities for traditional cyber threats such as data leaks and hijacking. More novel, emergent threats can also arise, such as “agent hijacking”, in which malicious software takes control of agent behavior, directing it to perform unauthorized actions, access data, and potentially collaborate with other agents through interactions that are difficult to detect and monitor. 
  • Loss of control & unintended outcomes – by reducing human involvement and interactions, agentic AI increases the risk for agents to make incorrect, inappropriate or harmful decisions. This is especially true for LLMs that can misinterpret data and context and lead to unintended outcomes on a potentially massive scale.  
  • Compliance, privacy and operational risks – agentic AI consumes and acts upon large amounts of sensitive data. Without proper oversight this opens the organization to risks of breaching privacy laws. It can also be difficult for large organizations to trace agentic AI decision making, thus making it difficult to audit, correct errors and perform disaster recovery.     

In 2025, most enterprises are implementing and running agentic AI pilots, especially in functions like customer service and supply chain management. However, enterprises have yet to achieve true end-to-end adoption of agentic AI across their various business functions. To achieve this requires strong cross-functional alignment and adoption of agentic AI, something that is rare and hard to achieve.  

Agentic AI has also been able to deliver value and efficiencies in domain-specific areas such as customer service and logistics, but it has yet to reliably deliver the same value for mission-critical business functions. There are still reliability challenges to overcome for agentic AI in these domain-agnostic areas. 

As the market has become flooded with vendors and start-ups hoping to capitalize on the acceleration of AI technologies, the tools and frameworks offered for agentic AI have become fragmented and difficult to standardize. Demand for these tools continues to far outstrip the pace at which mature, standardized offerings appear.

What Kind of Retailer Will You Be? 

The retail landscape is being rapidly transformed by agentic AI programs that are driving a competitive race to lead in autonomy, speed and personalized customer experiences. In 2025, retailers must move quickly and aggressively to implement agentic AI across business functions, or they risk being left behind, or worse, forced to exit.

To be on track or ahead of the agentic AI trend in 2025, retailers must already be piloting or implementing it in one or more domains identified as having high ROI. Businesses can identify one or more functions, such as customer support, supply chain and inventory management, or marketing automation, where agentic AI can be strategically deployed to realize high ROI.

IT infrastructure and systems must also be revamped with APIs and data pipelines that allow seamless integration of AI agents across data silos spanning POS, supply chain and CRM platforms. While these actions are taking place, it is critical for retailers to ensure proper governance frameworks are put in place to manage agentic AI risks, ethics and compliance. This can be done by maintaining proper audit trails, monitoring AI agents’ output and decision-making in real time, and keeping clear disaster recovery plans.

It is also critical for retailers to ensure that employees are continuously educated, trained and upskilled in collaborating with and using AI agents. Maximizing ROI does not rely entirely on the performance of AI agents. It also requires that employees learn and understand how to use AI agents to gain strategic insights that allow them to focus on creative and impactful decisions.

Retailers can also establish agentic AI centers of excellence to ensure proper governance and compliance, manage risks and lead strategies for responsible scaling of agentic AI at the enterprise level. These efforts can be further strengthened through vendor partnerships with AI solution providers that enable rapid deployment and quicker realization of ROI. Retailers can also participate in industry consortiums to benchmark, share knowledge and establish standards and risk mitigation strategies.

References