The Evolution of Large Language Models: From Recurrence to Transformers

While LLMs and their revolutionary transformer technology continue to impress us with new milestones, their foundations are deeply rooted in decades of research conducted in neural networks at countless institutions, and through the work of countless researchers.

Introduction

Large Language Models (LLMs) have gained momentum over the past five years as their use proliferated in a variety of applications, from chat-based language processing to code generation. Thanks to the transformer architecture, these LLMs possess superior abilities to capture the relationships within a sequence of text input, regardless of where in the input those relationships exist.

Transformers were first introduced in a 2017 landmark paper titled “Attention is all you need” [1]. The paper introduced a new approach to language processing that applied the concept of self-attention to process entire input sequences in parallel. Prior to transformers, neural architectures handled data sequentially, maintaining awareness of the input through hidden states that were recurrently updated with each step passing its output as input into the next.

LLMs are an evolution of decades-old artificial intelligence technology that can be traced back to the mid-20th century. While the breakthroughs of the past five years in LLMs have been propelled by the introduction of transformers, their foundations were established and developed over decades of research in Artificial Intelligence.

The History of LLMs

The foundations of Large Language Models (LLMs) can be traced back to experiments with neural networks conducted in the 1950s and 1960s. In the 1950s, researchers at IBM and Georgetown University investigated ways to enable computers to perform natural language processing (NLP). The goal of this experiment was to create a system that allowed translation from Russian to English. The first example of a chatbot was conceived in the 1960s with “Eliza”, designed by MIT’s Joseph Weizenbaum, and it established the foundations for research into natural language processing.

Early NLP systems relied on simple models like the perceptron, a feed-forward network with no recurrence. First introduced by Frank Rosenblatt in 1958, the perceptron was a single-layer neural network that classified input into one of two possible categories, tweaking its predictions over millions of iterations to improve accuracy [3]. In the 1980s, the introduction of Recurrent Neural Networks (RNNs) improved on perceptrons by handling data sequentially while maintaining feedback loops at each step, further improving learning capabilities. RNNs could understand and generate sequences through memory and recurrence, something perceptrons could not do [4]. Modern LLMs improved further on RNNs by enabling parallel rather than sequential computation.

In 1997, Long Short-Term Memory (LSTM) networks introduced deeper and more complex neural networks that could handle greater amounts of data. Fast forward to 2018, when a team of researchers at Google introduced the Bidirectional Encoder Representations from Transformers (BERT) model. BERT’s innovation was its bidirectionality, which allowed the input and output to take each other’s context into account. This allowed the pre-trained BERT to be fine-tuned with just one additional output layer to create state-of-the-art models for a range of tasks [5].

From 2019 onwards, the size and capabilities of LLMs grew exponentially. By the time OpenAI released ChatGPT in November 2022, its GPT models had been growing at a staggering pace, eventually reaching an estimated 1.8 trillion parameters in GPT-4. These parameters are the learned model weights that control how input tokens are transformed layer by layer, as discussed later in this article. ChatGPT allows non-technical users to prompt the LLM and receive a response quickly. The more the user interacts with the model, the better the context it can build, allowing it to maintain a conversational type of interaction with the user.

The LLM race was on. All the key industry players began releasing their own versions of LLMs to compete with OpenAI. To respond to ChatGPT, Google released Bard, while Meta introduced LLaMA (Large Language Model Meta AI). Microsoft had partnered with OpenAI in 2019 and built a version of its Bing search engine powered by ChatGPT. Databricks also released its own open-source LLM, named “Dolly”.

Understanding Recurrent Neural Networks (RNNs)

The defining characteristic of Recurrent Neural Networks (RNNs) is their memory, or hidden state. RNNs process input sequentially, token by token, with each step combining the current input token and the current hidden state to calculate a new hidden state. The hidden state acts as a running summary of the information seen so far in the sequence, allowing the RNN to understand a sequence while processing it word by word [6].

The recurrent structure of RNNs means that they perform the same computation at each step, with their internal state changing based on the input sequence. Given an input sequence x = (x1, x2, …, xt), the RNN updates its hidden state ht at time step t using the current input xt and the previous hidden state ht-1. This can be formulated as:

ht = tanh(Whh · ht-1 + Wxh · xt + bh)

Where:

  • ht is the new hidden state at time step t
  • ht-1 is the hidden state from the previous time step
  •  xt is the input vector at time step t
  •  Whh and Wxh are the shared weight matrices across all time steps for hidden-to-hidden and input-to-hidden connections, respectively.
  •  bh is a bias vector
  • tanh is a common activation function (hyperbolic tangent), introducing non-linearity. 
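The update rule above can be sketched in a few lines of NumPy; the dimensions, weights and inputs here are made up purely for illustration, not taken from any real model:

```python
import numpy as np

hidden_size, input_size = 4, 3  # toy sizes, chosen arbitrarily

rng = np.random.default_rng(0)
W_hh = rng.normal(size=(hidden_size, hidden_size))  # hidden-to-hidden weights (Whh)
W_xh = rng.normal(size=(hidden_size, input_size))   # input-to-hidden weights (Wxh)
b_h = np.zeros(hidden_size)                         # bias vector (bh)

def rnn_step(x_t, h_prev):
    """One recurrence step: ht = tanh(Whh @ ht-1 + Wxh @ xt + bh)."""
    return np.tanh(W_hh @ h_prev + W_xh @ x_t + b_h)

# Process a short dummy sequence token by token,
# carrying the hidden state forward from each step to the next.
h = np.zeros(hidden_size)
for x_t in rng.normal(size=(5, input_size)):  # 5 made-up input vectors
    h = rnn_step(x_t, h)

print(h.shape)  # (4,)
```

Note that the same weight matrices are reused at every step; only the hidden state changes, which is exactly the shared-weights property described above.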

This process can be visualized by breaking out the RNN timeline to show a snapshot of the system at each step in time, and how the hidden state is updated and passed from one step to the next.

Figure 1. An RNN unrolled through time. At each step, the RNN cell combines the token input at time t with the hidden state from time t-1 to produce a new hidden state, which is passed on to the next step.

Limitations of RNNs

The sequential nature of RNNs limits their ability to process tasks that contain long sequences. This limitation was a key driver behind the development of the transformer architecture, which processes sequences much faster and in parallel. The following are some of the main limitations of RNNs.

Limitations in modeling long-range dependencies: RNNs are limited in capturing dependencies between elements within long sequences. This limitation is due primarily to the vanishing gradient problem. During training, gradients (error signals) computed at later time steps must flow backwards through the network to adjust the weights associated with earlier time steps. The longer the sequence, the fainter the signal becomes by the time it reaches those earlier steps, making it increasingly difficult for the network to learn the relationship between earlier inputs and later outputs.

Sequential processing: RNNs process sequences token by token, in order. Hidden states must also be processed sequentially such that to obtain the hidden state at time t, the RNN must use the hidden state from t-1 as input. Modern hardware like GPUs and TPUs are well equipped to work with parallel computation. RNNs are unable to make use of this hardware due to their sequential processing, which leads to longer training times compared to parallel architectures.

Fixed size of hidden states: In the sequence-to-sequence model, the encoder must process and compress the entire input sequence into a single fixed-size vector. This vector is then passed to the RNN decoder, which uses it to generate the output sequence. Compressing a potentially long and complex input sequence into a fixed-size vector is challenging: details of the input may be lost in the compression, causing the network to miss information that is important for producing the correct output.

How Transformers Replaced Recurrence

The limitations of RNNs in optimizing learning over large sequences and their sequential processing gave rise to the transformer architecture. Instead of sequential processing, transformers introduced self-attention, enabling the network to learn from any point in the sequence, regardless of distance.

Self-attention in transformers is analogous to the way humans process long sequences of text. When we attempt to translate text or process complex sequences, we do not read the entire text and then attempt to translate or understand it. Instead, we tend to go back and review parts of the text that we determine are most relevant to our understanding of it, so that we can generate the output we are trying to produce. In other words, we pay attention to the most relevant parts of the input that will help us generate the output. Transformers apply global self-attention that allows each token to refer to any other token, regardless of distance. Additionally, by taking advantage of parallelization, transformers introduce features such as scalability, language understanding, deep reasoning and fluent text generation to LLMs that were never possible with RNNs.

How Transformers Pay Attention

Self-attention enables the model to refer to any token in the input sequence and capture any complex relationships and dependencies within the data [7]. Self-attention is computed through the following steps.

1. Query, Key, Value (Q,K,V) matrices
  • Query (Q): Represents what current information is required or must be focused on. This can be described by asking the question “What information is the most relevant right now?”
  • Key (K): Keys act as identifiers for each input element. They are compared against the input sequence to determine relevance. This is analogous to asking the question “does this input token match the information I am looking for?”
  • Value (V): Values are also associated with each input token and represent the content or the meaning of that token. Values are weighted and summed to produce the context vector, which can be described as “this is the information I have”.

The model performs a lookup for each Query across all Keys. The degree to which a Query matches a Key determines the weight, or attention score, assigned to the corresponding Value; this score determines how much attention a token should receive when generating predictions. The attention scores are then used to compute a weighted sum of all the Value vectors from the input sequence, producing a single context vector.
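The lookup-and-weighted-sum described above can be sketched as scaled dot-product attention in NumPy; the matrices and sizes below are arbitrary toy values, not drawn from any real model:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q @ K.T / sqrt(d_k)) @ V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # how well each Query matches each Key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V                        # weighted sum of the Value vectors

rng = np.random.default_rng(1)
seq_len, d_k = 6, 8                           # toy sequence length and vector size
Q = rng.normal(size=(seq_len, d_k))
K = rng.normal(size=(seq_len, d_k))
V = rng.normal(size=(seq_len, d_k))

out = attention(Q, K, V)
print(out.shape)  # (6, 8): one context vector per input position
```

Each row of the output is a blend of every Value vector, weighted by how relevant the corresponding Key was to that position's Query; no position is privileged by its distance from any other.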

Figure 2. Calculating the weighted sum vector. Attention scores are multiplied by the Value (input) vectors which are then summed to produce the weighted sum vector.

We have already discussed how traditional RNNs struggle to retain information from distant inputs due to the sequential nature of their hidden state. Attention, on the other hand, allows the model to weight all inputs and sum them, so the resulting vector incorporates information from every input with the appropriate weight assigned to each. This gives the model context over the entire input while focusing on the most relevant information in the sequence, regardless of distance.

2. Multi-head attention

When we read a sentence, we do not process it one word at a time in isolation. Instead, we consider the role of each word, such as whether it is the subject or the object, along with the overall grammar, to make sense of the sentence and what it is trying to convey.

The same analogy applies when calculating attention. Instead of performing a single attention calculation for the Q, K, V vectors, multiple calculations are performed in parallel, one per attention head, with each head looking at a different pattern or relationship in the sequence. This is the concept of multi-head attention, which allows the parallel processing of the Q, K, V vectors while letting the model capture several kinds of relationships within the sequence at once.

3. Masked multi-head self-attention

Masking ensures that the head focuses only on the tokens received so far when generating output, without looking ahead into the input sequence to generate the next token.

  • Attention score: The dot product of the Q and K matrices determines the alignment of each Query with each Key, producing a square matrix reflecting the relationships between all input tokens.
  • Masking: A mask is applied to the positions of the resulting attention matrix that the model is not allowed to access, preventing it from ‘peeking’ at future tokens when predicting the next one.
  • Softmax: After masking, the attention scores are converted into probabilities using the Softmax function, so that each token’s attention weights over the positions it may access sum to one. The same function also appears at the model’s output layer, where it is applied to a vector of raw scores, called logits, whose size matches the model’s vocabulary. For example, if the model has a vocabulary of 50,000 words, the logits vector will have a dimension of 50,000, with each element scoring one specific token. Softmax takes the logits vector as input and outputs the model’s predicted probability distribution over the entire vocabulary for the current position in the output sequence.
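A minimal sketch of causal masking, using toy random scores: masked positions are set to negative infinity so that the softmax assigns them exactly zero weight.

```python
import numpy as np

seq_len = 4
# Made-up raw attention scores (what Q @ K.T would produce), purely illustrative.
scores = np.random.default_rng(2).normal(size=(seq_len, seq_len))

# Causal mask: position i may only attend to positions <= i,
# so everything strictly above the diagonal is forbidden.
mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
scores[mask] = -np.inf  # exp(-inf) = 0, so these positions get zero weight

# Softmax over each row turns the surviving scores into probabilities.
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

print(np.round(weights, 2))
# The upper triangle is all zeros: no position 'peeks' at future tokens.
```

Row 0 ends up attending only to itself, row 1 to the first two tokens, and so on, which is exactly the "tokens received so far" behavior described above.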

Rather than having every head recompute attention over the same original Q, K, V vectors, the model learns separate linear projections for each head. If we have h attention heads, then each head i learns the projection matrices WiQ, WiK and WiV, producing Qi = Q · WiQ, Ki = K · WiK and Vi = V · WiV.

Each head then performs the scaled dot-product attention calculation using its projected Qi, Ki, Vi:

headi = Attention(Qi, Ki, Vi) = softmax(Qi · KiT / √dk) · Vi

Where dk is the dimension of the Ki vectors within each head. Each projection of Qi, Ki, Vi allows a head to focus on and learn from a different representation of the original input. By running these calculations in parallel, the model can learn about different types of relationships within the sequence.

4. Output and concatenation

The final step is to concatenate the outputs from all attention heads and apply a linear projection, using a learned weight matrix, to the concatenated result. This brings the output back to a constant size matching the model dimension, after which it is fed into a feedforward layer and passed deeper into additional layers of the network [8].
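Putting the pieces together, the per-head projections, parallel attention calculations and final concatenation can be sketched as follows; the dimensions and random weights are toy values for illustration only:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(3)
seq_len, d_model, n_heads = 6, 16, 4   # toy sizes; real models are far larger
d_k = d_model // n_heads               # per-head dimension dk

X = rng.normal(size=(seq_len, d_model))  # token representations entering the block

heads = []
for _ in range(n_heads):
    # Each head learns its own projections WiQ, WiK, WiV (random stand-ins here).
    W_q, W_k, W_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    # Scaled dot-product attention within this head.
    heads.append(softmax(Q @ K.T / np.sqrt(d_k)) @ V)

# Concatenate all heads and apply the learned output projection.
W_o = rng.normal(size=(n_heads * d_k, d_model))
out = np.concatenate(heads, axis=-1) @ W_o

print(out.shape)  # (6, 16): back to the model dimension
```

In a trained model the loop over heads runs as one batched matrix operation and the projection matrices are learned rather than random, but the data flow is the same: project, attend per head, concatenate, project back.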

Conclusion

There is no doubt that transformers have revolutionized the way LLMs are deployed and applied in a variety of applications, including chatbots, content creation, agents and code completion. With their large and ever-growing number of parameters, and an architecture designed for scalability and parallel computing, we are only beginning to discover the breadth of applications transformers can have.

As the challenges facing LLMs continue to be overcome, such as the ethical and environmental concerns, we can expect them to continue to become more efficient, more powerful and ultimately more intelligent. While LLMs and their revolutionary transformer technology continue to impress us with new milestones, their foundations are deeply rooted in decades of research conducted in neural networks at countless institutions, and through the work of countless researchers.

References

[1] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin. “Attention Is All You Need.” In Proceedings of the 31st Conference on Neural Information Processing Systems (NeurIPS 2017). arXiv:1706.03762 cs.CL, 2017

[2] The history, timeline, and future of LLMs

[3] Cornell Chronicle

[4] Deep Neural Networks: The 3 Popular Types (MLP, CNN and RNN)

[5] Large language models: their history, capabilities and limitations

[6] RNN Recap for Sequence Modeling

[7] Transformer Explainer

[8] What is a transformer model?

Large Language Models: Principles, Examples, and Technical Foundations

Introduction

Large Language Models (LLMs) are Artificial Intelligence algorithms that use massive datasets to summarize, generate and reason about new content. LLMs use deep learning techniques capable of a broad range of Natural Language Processing (NLP) tasks, such as text analysis, question answering, translation, classification, and generation [1][2].

Put simply, LLMs are computer programs that can interpret human input and complex data extremely well given large enough datasets to train and learn from. The generative AI technologies that have been enabled by LLMs have transformed how organizations serve their customers, how workers perform their jobs and how users perform daily tasks when searching for information and leveraging intelligent systems.

The core principles of LLMs

LLMs are built on a set of neural networks based on the transformer architecture. Each transformer consists of encoders and decoders that can understand text and the relationships between the words and phrases in it. The transformer architecture relies on the next-word prediction principle, predicting the most probable next word based on a text prompt from the user. Transformers can process sequences in parallel, which enables them to learn and train much faster [2][3][4]. This is due to their self-attention mechanism, which enables transformers to process sequences and capture distant dependencies much more effectively than previous architectures.

The transformer architecture consists of three key components:

  • Embedding: To generate text using the transformer model, the input must be converted into a numerical format that can be understood by the model. This involves four steps: 1) tokenizing the input, breaking it down into smaller, more manageable pieces; 2) embedding the tokens in a matrix that allows the model to assign semantic meaning to each token; 3) encoding the position of each token in the input prompt; and 4) producing the final embedding by summing the token embeddings and positional encodings, capturing both the meaning and the position of each token in the input sequence.
  • Transformer block: comprises multi-head self-attention and a multi-layer perceptron layer. Most models stack these blocks sequentially, allowing the token representations to evolve through layers of blocks, which in turn allows the model to build an intricate understanding of each token.
  • Output probabilities: Once the input has been processed through the transformer blocks, it passes through a final layer that prepares it for token prediction. This step projects the final representation into a space where each token in the vocabulary is assigned a likelihood of being the next word. A probability distribution over these likelihoods determines the next token, which in turn enables text generation.
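The output-probability step can be illustrated with a toy five-word vocabulary and made-up logits; real models use vocabularies of tens of thousands of tokens:

```python
import numpy as np

# Hypothetical vocabulary and invented logits for the current position.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 0.5, 1.0, -1.0, 0.1])

# Softmax turns the raw scores into a probability distribution over the vocabulary.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Greedy decoding simply picks the most likely token; real systems often
# sample from the distribution instead to produce more varied text.
next_token = vocab[int(np.argmax(probs))]
print(next_token)  # "the"
```

The probabilities always sum to one, so the model's output at each position is a complete distribution over its vocabulary rather than a single hard choice.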

LLM applications

The transformer architecture allows LLMs to achieve a massive scale of billions of parameters. LLMs begin with pre-training on large datasets of text, grammar, facts and context. Once pre-trained, the models undergo fine-tuning where labeled datasets are used to adapt the models to specific tasks. The ability of LLMs to use billions of parameters, combined with their efficient attention mechanisms and their training capabilities, allows LLMs to power modern AI applications such as chatbots, content creation, code completion and translation.

Text generation: Chatbots and content creation

Text generation is one of the most prominent applications of LLMs where coherent and context-relevant text is automatically produced. This application of LLMs powers chatbots, like ChatGPT, that interact with users by answering questions, providing recommendations, generating images and conducting research [5].

GPT-4.5 and GPT-4o feature multimodal capabilities, allowing them to handle both text and images for versatile use in different applications. Both models can reportedly process inputs of up to 25,000 words, though the computational resources they require differ.

By leveraging their vast datasets, LLMs are also used for content creation such as social media posts, product descriptions and marketing. Tools like Copy.ai and Grammarly use LLMs to generate marketing copy, and assist with grammar and text editing. DeepL Translator uses LLMs trained on linguistic data for language translation.

Agents

Agentic LLMs refer to conversational programs such as chatbots and intelligent assistants that use transformer-based architectures and Recurrent Neural Networks (RNNs) to interpret user input, process sequential data such as text, and generate coherent, personalized responses [6]. Personalized responses to input text are achieved through context-awareness and analyzing conversations.

Agentic LLMs are also capable of managing complex workflows and can collaborate with other AI agents for better analysis. Vast datasets can be leveraged to support a variety of domains such as healthcare, finance and customer support.

Code completion

Code completion is a leading application of LLMs that uses the transformer-based architecture to generate and suggest code by predicting the next tokens, statements or entire code blocks. In this context, transformer models are trained using self-attention mechanisms to enable code understanding and completion predictions [7]. The encoder-decoder transformer model is used such that the input is the code surrounding the cursor (converted into tokens), and the output is a set of suggestions to complete the current line or multiple lines.

Challenges and future directions

Large Language Models are still facing challenges related to ethical and privacy concerns, maintaining accuracy, avoiding bias, and managing high resource consumption [8].

  • Ethical concerns: LLMs are trained on massive datasets. There are still open questions as to who can use these datasets, and how and when they can be used. These datasets can also be biased and lead to biased output from LLMs, which can lead to misinformation and hate speech.
  • Data privacy: The use of massive datasets containing large amounts of user data poses significant privacy concerns. Safeguards in the use of data are required to train a model without compromising user privacy. As the use of LLMs becomes more mainstream, and as the size of datasets used to train them increases, so do the privacy concerns around their use.
  • Output bias: Existing biases in the available training data can cause LLMs to amplify those biases, leading to inaccurate and misleading results. This is particularly important for areas that require objective data analysis and output, such as law, healthcare and economics.
  • Hallucinations: LLMs are prone to “hallucinations” where the model output may seem reasonable, yet the information provided is incorrect. Hallucinations can be addressed through better training and validation methodologies to enhance the reliability of generated content.
  • Environmental impact: Training and deploying LLMs requires an extensive amount of energy resources, leading to increased carbon emissions. There is a need to develop more efficient algorithms while also investing in renewable and efficient energy generation that will lower the carbon footprint of LLMs, especially as their use and application accelerate.

Addressing these and other challenges such as regulatory compliance, security and cyber attacks will ensure that LLMs continue to use the correct input datasets while producing the correct output in an ethical, fair and unbiased manner. The integration of domain-specific knowledge through specialized fine tuning will also enable LLMs to produce more accurate and context-aware information that will maximize their benefits.

Conclusion

LLMs power a variety of applications, from chatbots and content creation to code completion and domain-specific automation. Using the transformer architecture and vast datasets to train and learn, they have emerged as a transformative discipline of artificial intelligence. LLMs have proved their outstanding capabilities in understanding, generating, and reasoning with natural language. While there are challenges for LLMs to overcome, such as bias, accuracy, environmental impact, and domain specialization, it is expected that LLMs will become more efficient and trustworthy as algorithms improve and innovations are achieved through better fact-checking and human oversight.

References

[1] What are large language models (LLMs)

[2] What are large language models (LLMs)

[3] What is LLM (Large Language Model)?

[4] Transformer Explainer

[5] 10+ Large Language Model Examples & Benchmark 2025

[6] Chatbot Architecture: RNNs and Transformers

[7] ML-Enhanced Code Completion Improves Developer Productivity

[8] Raza, M., Jahangir, Z., Riaz, M.B. et al. Industrial applications of large language models. Sci Rep 15, 13755 (2025). https://doi.org/10.1038/s41598-025-98483-1

Retail Is Entering Its Agentic AI Era

The retail landscape is being quickly transformed by agentic AI programs that are driving a competitive race to lead in autonomy, speed and personalized customer experiences. In 2025, retailers must move quickly and aggressively to implement agentic AI across their business functions, or they risk being left behind, or worse, forced to exit.

Introduction

AI agents are redefining retail and are evolving into autonomous assistants that plan, recommend and take action. One of the most prominent examples of this shift is Walmart’s “Sparky”, a conversational AI shopping assistant in the mobile app that can understand customers’ shopping needs, suggest relevant products, answer questions and provide recommendations based on preferences [1]. Walmart is betting big on AI to drive its e-commerce growth and is aiming for online sales to account for 50% of its total sales [2].

Amazon, another retail giant, is using AI on a different scale by creating a harmonious ecosystem of AI and Machine Learning (ML) models across the different functional areas of the business. For example, demand forecasting is accomplished using models that leverage sales history, social media, economic trends and weather to predict demand more accurately. Machine learning (ML) algorithms use data across the supply chain to optimize stock levels and replenishment strategies to ensure alignment with predicted demand. Amazon is also using AI to automate inventory management, with AI-driven robots managing the movement of goods within warehouses. Other AI models optimize delivery routes in real-time using inputs like traffic conditions and weather, among other factors [3].

Retailers that make use of AI and ML will ensure they maintain a competitive edge, and those that do not, risk being left behind or forced to exit. Amazon’s example of creating an ecosystem that uses the output from one AI model as input into another ensures that the business continues to add efficiencies and boost future profitability.  Across the U.S., retailers are investing heavily in AI agents, with 83% of companies claiming AI is a top priority in business plans [4].  

These statistics bring about an interesting question: what if every customer and every employee had their own AI agent, helping find products and optimize their shopping experience, or helping with labor-intensive tasks? AI agents are evolving from pilot projects to front-line and business critical applications, enabling businesses to gain a competitive edge and attract customers with better online shopping experiences.

What Are AI Agents? 

In the context of AI, “agentic” refers to autonomous systems capable of making decisions and acting independently. AI agents are a more advanced form of AI that can make decisions and take actions with little or no human intervention. Agentic AI can combine multiple interconnected AI agents that are continuously learning, reasoning and acting proactively. Businesses can customize AI agents to meet their needs, given the flexibility and adaptability of AI agents for a wide range of industries and applications [5][6]. 

The key features of agentic AI include: 

  • Autonomy: the ability to work autonomously to analyze data and solve problems in real-time with little human intervention. 
  • Collaboration: the ability of multiple AI agents to work together leveraging Large Language Models (LLMs) and complex reasoning to solve complex business problems. 
  • Learning and adaptation: dynamically evolving by interacting with its environment, and refining strategies, based on feedback and real-time data. 
  • Reasoning and proactivity: identifying issues and forecasting trends to make decisions such as reordering inventory or resolving customer complaints.

The adoption of Agentic AI in 2025 is gaining momentum as businesses aim to move from insight to action at greater speed and efficiency. Agentic AI solves the problem of scarce human resources needed to deal with the growing volume, complexity and inter-dependence of data. By moving at the speed of machine computation, agentic AI allows businesses to be more agile in real-time, act on business-critical insights more quickly, and scale more rapidly.  

The competitive edge introduced by agentic AI is driving its rapid adoption, and it is due to the following factors [7][8][9]: 

  • Speed: In 2025, businesses must react to customer needs, supply chain factors and market conditions at unprecedented speed. It is no longer sufficient to use traditional AI that still relies on human intervention. Agentic AI can not only forecast problems and issues, but also act and execute on them: it can resolve customer issues before they even occur, and it can anticipate supply chain disruptions and respond before they happen.
  • Reduce reliance on humans, not replace them: Agentic AI does not aim to replace humans and take away jobs, but rather to augment them. It acts as a co-worker that enhances productivity by focusing on analysis of repetitive data-intensive processes, creating forecasts that enable faster decision-making, and enabling employees to focus on business strategy and the creative, innovative decisions that will allow the business to continue to grow. Agentic AI allows businesses to increase performance while cutting costs without the need for increased human intervention. 
  • Cost reduction and improved ROI: Agentic AI is also unlocking vast opportunities for cost reduction, through quick evaluation of data, testing strategies and adjusting operations in real-time. By automating repetitive and data-intensive processes, AI agents reduce the dependence on manual labor, minimize errors that translate to rework and add cost-effectiveness and efficiency that in turn result in higher ROI.  
  • Enhanced customer experience: AI agents are capable of contextual understanding, proactive assistance and continuous learning. This allows them to boost customer satisfaction and loyalty by offering instant, real-time assistance and answers to customers’ queries, while reducing wait times and improving resolution rates.
  • Businesses must adapt or die: Agentic AI allows businesses to remain at the forefront of their market by learning and adapting in real-time. In 2025, customers expect instant and personalized service. It is becoming easier for businesses to integrate agentic AI into their various systems, especially with the introduction of the Model Context Protocol (MCP) integration framework, which enables intelligent agents to interact with external systems in a standardized, secure, and contextual way. User-friendly applications allow businesses to easily connect and deploy AI agents via a visual workflow builder without coding. Businesses have the opportunity to adapt by leveraging the technologies and capabilities available to them today to implement agentic AI.

The following table illustrates how AI is being implemented across various areas within Retail. 

Customer Experience
Applications:
  • Personalize services, answer questions, and process orders 
  • Offer product and project guidance 
  • Smart kiosks assist with product search, availability, and location 
  • AI delivers instant answers, recommendations, and smoother shopping 
Examples:
  • Walmart’s “Sparky” suggests products and summarizes reviews 
  • Lowe’s AI assistant offers DIY and product support via its app 
  • H&M’s chatbot recommends outfits, boosting satisfaction by 30% [10] 

Inventory Management
Applications:
  • Streamline store operations and inventory management 
  • AI robots track stock and automate restocking 
  • Smart shelves auto-detect low stock and reorder 
  • Forecast demand using sales and market data 
  • AI schedules staff based on foot traffic forecasts 
  • Video analytics detect theft and safety issues 
Examples:
  • Zara’s AI cut stockouts by 20% and excess by 15% by forecasting demand from sales, customer behavior and market trends 
  • Walmart uses robots for real-time shelf scanning 
  • Home Depot’s AI helps staff quickly access the information needed to assist customers 

Supply Chain
Applications:
  • Adjust orders and routing using sales, weather, and trend data 
  • Track shipments, suppliers, and logistics for full supply chain visibility 
  • Improve forecasting to optimize supply chain operations 
  • Cut costs by aligning forecasts with supply chain efficiency 
Examples:
  • Kroger’s AI forecasting cut food waste by 10% and improved inventory accuracy by 25% 
  • Unilever’s AI use reduced supply chain costs by 15% and improved delivery times by 10% 
  • Walmart also achieved major gains through AI-driven supply chain improvements 

Marketing
Applications:
  • Agentic AI manages end-to-end customer journeys across commerce, content, loyalty, and service [11] 
  • AI interprets real-time journey data to adapt marketing strategies 
  • Retailers use AI insights to keep campaigns fast, relevant, and effective 
  • AI analyzes feedback to spot improvements and cut manual tasks 
Examples:
  • Nike uses AI to predict purchases and personalize marketing, boosting engagement by 20% and driving sales 
  • Coca-Cola uses predictive analytics to shift budget to high-performing channels, increasing Instagram spend by 20% and sales by 15% 

Table 1: Retail Examples Where AI Is Already Driving Impact

What Executives Should Do To Drive The Agentic AI Shift 

AI agents are changing how organizations can deliver value to their customers, improve customer experience and manage risks. Executives are becoming increasingly aware that agentic AI is not just an automation tool, but rather a new way to drive deep business innovation and, if harnessed correctly, a way to maintain a competitive advantage. 

Executives must lead the shift in the organization towards agentic AI by aligning governance and priorities to support the required IT and data investments. To facilitate this shift, the CEO must focus on [12][13]: 

  • Invest in labor and technical infrastructure: remove the barriers across the organization’s various systems so that AI agents can operate across functional areas. In addition, the workforce must be upskilled and retrained to work with the new technologies that agentic AI introduces. 
  • Lead the organizational shift: establish the goals and intended value of using agentic AI in the organization, and how it is to be used as a partner in creating value. The goal should not simply be to optimize headcount and reduce costs; it is about leading the organization into the future of retail. 
  • Highlight key projects: by spearheading key, high-value projects in areas such as supply chain management, operations and customer service, the CEO can build momentum and rally resources. They can also demonstrate the value of agentic AI by tracking key KPIs. 
  • Oversee risk, compliance, and ethics: it is essential for the CEO to oversee all regulatory, privacy, transparency and risk issues related to the adoption of agentic AI. This is crucial in allowing the organization to proceed with confidence in implementing the necessary technical and IT infrastructure, and to realize the value and gains from agentic AI quickly and efficiently.  

It is important to note that organizations that can quickly adopt and adapt to agentic AI will gain the competitive edge. The value proposition for executives in adopting this technology can be summarized in the following key elements: 

  • Business transformation through automation and productivity: Agentic AI goes beyond the capabilities offered by generative AI and can handle complex workflows through autonomous decision-making. Staff can work alongside AI agents and use their output while focusing on the strategic, high-value tasks that boost productivity and make efficient use of their time.  
  • Gaining a competitive edge: AI agents work continuously, adapting to real-time issues, learning, and making decisions quickly. This enhances customer experience and boosts innovation and resilience against market changes.  
  • Boost ROI and increase revenues: Studies have shown that agentic AI contributes up to 18% improvement in customer satisfaction, employee productivity, and market share, with $3.50 in return for every $1 invested realized over a 14-month payback period [14]. This is driven primarily by redirecting human resources from focusing on repetitive low-value tasks to more strategic and high-value ones.  

  • Enable rapid scaling and agility: AI agents can help lead the transformation of the organization to be more forward-looking and competitive, by driving business transformation, upskilling the workforce and enabling the rapid scaling and adaptation of business models. 
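The return figure cited above ($3.50 for every $1 invested over a roughly 14-month payback period) works out as simple arithmetic. The sketch below assumes a hypothetical $1M investment and treats the $3.50 as the gross return per dollar invested.

```python
# Worked arithmetic behind the cited figure of $3.50 returned per $1 invested
# over a ~14-month payback period [14]. The $1M investment is hypothetical,
# and we assume the $3.50 is the gross (not net) return per dollar.
investment = 1_000_000
gross_return = investment * 3.50              # $3.5M back over the period
net_gain = gross_return - investment          # $2.5M net
roi_pct = net_gain / investment * 100         # 250% ROI
monthly_net = net_gain / 14                   # averaged over the payback window
print(f"Net gain: ${net_gain:,.0f} ({roi_pct:.0f}% ROI, ~${monthly_net:,.0f}/month)")
```

If the $3.50 were instead interpreted as net gain per dollar, the ROI would be 350%; the payback framing in the source suggests the gross reading used here.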

Implementation Priorities: How to Get Started 

The diagram below illustrates the interconnected functional areas and how they intersect with inventory management in an omnichannel retail environment. The data flowing between these functions feeds the AI models that generate the insights needed to optimize inventory, fulfillment, and customer responsiveness. 

 Figure 1: Inventory Management across Functional areas in Retail

The table below outlines key functional areas, the associated data points, and how AI is applied to enhance operational efficiency. 

Factory / Seller* 
  Key data points: Proforma invoice, commercial invoice, packing list 
  AI usage: Predict lead times and invoice anomalies; detect supply risk patterns 

Shipper 
  Key data points: Advanced Shipping Notice (ASN) 
  AI usage: Predict shipment delays; optimize dock scheduling at the warehouse 

Warehouse 
  Key data points: Putaway status, inventory quantity and location, SKU detail, cycle count accuracy, labor handling time 
  AI usage: Predict slotting needs; detect discrepancies; optimize workforce allocation 

Available Inventory 
  Key data points: On-hand quantity, committed vs. free inventory, safety stock levels 
  AI usage: Dynamic Available to Pick (ATP) calculation; reallocation suggestions; overstock/stockout alerts 

Allocation 
  Key data points: Demand forecasts, store sales velocity, promotion calendar 
  AI usage: Optimize first allocation; recommend flow-through allocation 

Replenishment 
  Key data points: Sell-through data, min/max thresholds, lead times 
  AI usage: Auto-trigger replenishment; predict out-of-stock risk; dynamic reorder points 

Store Inventory 
  Key data points: Store on-hand inventory, returns and damages, shelf vs. backroom split 
  AI usage: Optimize replenishment routing; detect phantom inventory 

Customer Order 
  Key data points: SKU ordered, delivery preference, fulfillment location 
  AI usage: Predict the best node to fulfill from (e.g., ship-from-store vs. DC); reduce split shipments 

Fulfillment / Distribution 
  Key data points: Pick time, delivery time, on-time %, exception logs 
  AI usage: Route optimization; predict fulfillment delays; auto rerouting 

Reorder Loop 
  Key data points: Real-time sales data, inventory velocity, reorder frequency 
  AI usage: Adaptive reorder intervals; prevent overstock/stockouts 

Table 2: How Data Enables AI to Improve Inventory Across the Supply Chain
*Assumes FOB Incoterms  
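As a concrete illustration of the "dynamic reorder points" and "auto-trigger replenishment" entries in the table above, the sketch below applies the standard reorder-point formula (expected demand over the lead time plus safety stock). It is a minimal sketch: the sales figures, lead time, and service-level target are all hypothetical, and a real system would learn these parameters continuously from the data points listed.

```python
# Illustrative sketch of "auto-trigger replenishment" and "dynamic reorder
# points" using the textbook reorder-point formula. All figures hypothetical.
import math
import statistics


def reorder_point(daily_demand: list, lead_time_days: float, z: float = 1.65) -> float:
    """Expected demand over the lead time plus safety stock.

    z = 1.65 targets roughly a 95% service level, assuming normally
    distributed daily demand (a common, if simplified, assumption).
    """
    mean_d = statistics.mean(daily_demand)
    sd_d = statistics.stdev(daily_demand)
    safety_stock = z * sd_d * math.sqrt(lead_time_days)
    return mean_d * lead_time_days + safety_stock


def should_replenish(on_hand: int, committed: int,
                     daily_demand: list, lead_time_days: float) -> bool:
    """Auto-trigger: reorder when free stock falls to or below the reorder point."""
    free = on_hand - committed  # mirrors the "committed vs. free inventory" data point
    return free <= reorder_point(daily_demand, lead_time_days)


recent_sales = [18, 22, 20, 25, 19, 21, 24]  # units sold per day, hypothetical
print(should_replenish(on_hand=140, committed=30,
                       daily_demand=recent_sales, lead_time_days=5))  # True
```

A "dynamic" reorder point simply means recomputing this threshold continuously as new sales and lead-time data arrive, rather than fixing it once per season.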

Implementing Agentic AI follows a multi-phased approach that integrates technology, business and culture. This approach can be iterative and repeated as necessary depending on the complexity and scope of the processes being automated [15]. 

Readiness ➡ Design ➡ Pilot ➡ Scale 

Assessing readiness 

Assessing readiness involves evaluating and auditing workflows, data infrastructure and IT capabilities to ensure compatibility with agentic AI requirements. This includes ensuring that AI model outputs will be compatible with the organization’s future audit needs and that the IT infrastructure can support the AI models’ data requirements.  

It is also important to evaluate the company’s culture and assess the adaptability and openness to automation. This is a good opportunity to address any resistance and skill gaps through education and training to ensure that teams see the value agentic AI will add to their work. 

The readiness phase is also a good opportunity to identify high-impact business use cases that can pilot the implementation of agentic AI processes and then scale, as necessary, to the rest of the organization as these processes are further developed and defined.     

Design 

The design phase is important in defining objectives and scope, ensuring leadership buy-in and that data systems are properly integrated to meet the needs of the agentic AI models.  

  • Defining scope and objectives involves setting clear and measurable business goals and aligning AI initiatives with the overall company strategy. This is best achieved by identifying key business processes and applications that could provide the highest impact, show the best ROI and serve as the benchmark for future projects and applications. 
  • Securing leadership and cross-functional team buy-in is also critical in ensuring that AI models are fully adopted into the various business processes and that the communicated ROI is realized to its fullest potential. This is achieved by securing sponsorship at the executive level and assembling multi-disciplinary teams from IT, data science and engineering, operations and compliance. Attainable, measurable ROI targets must be clearly communicated to ensure that teams work collectively towards the defined goals and objectives.  
  • Mapping data and systems integration ensures that agentic AI systems have easy, real-time access to data across various silos, including CRM, ERP and other cloud applications. This is essential in allowing agentic AI models to ingest all the data their algorithms require and produce accurate, timely outputs to guide decisions. Close attention must be paid to upgrading the security of all systems as they are integrated, to ensure that no vulnerabilities are introduced in the process. 

Pilot 

Deploy the AI models in a contained environment that allows collecting live data for training. This is a great opportunity to train, fine-tune and iterate on the agents to ensure they produce accurate output, ROIs are met and compliance is achieved. Correct errors in the models and the algorithms, monitor output and behavior, and document outcomes.  

Scale 

Scale the phased approach across additional business functions and processes while increasing integration across the various AI agents as they are scaled. Continue to retrain agents and monitor their performance and output, paying close attention to monitoring and updating the risks and adding controls as necessary. It is also essential to continue to train and upskill employees to enable them to collaborate productively with agents. 

Risks, Realities, and Responsible Scaling 

Agentic AI is projected to automate up to 15% of day-to-day enterprise decisions by 2028, and potentially resolve 80% of standard customer service issues [16]. However, this also introduces a large risk surface, especially for critical systems.  

  • Increased cyber-attack and security risks – agentic AI systems are designed to act autonomously across multiple systems, with access to various data silos across the organization. This creates a multitude of entry points and vulnerabilities for traditional cyber threats such as data leaks and hijacking. More novel, emergent threats can also arise, such as “agent hijacking”, in which malicious software takes control of agent behavior, directing it to perform unauthorized actions, access data, and potentially collaborate with other agents through interactions that are difficult to detect and monitor.  
  • Loss of control & unintended outcomes – by reducing human involvement and oversight, agentic AI increases the risk that agents make incorrect, inappropriate or harmful decisions. This is especially true for LLMs, which can misinterpret data and context, leading to unintended outcomes on a potentially massive scale.  
  • Compliance, privacy and operational risks – agentic AI consumes and acts upon large amounts of sensitive data. Without proper oversight this opens the organization to risks of breaching privacy laws. It can also be difficult for large organizations to trace agentic AI decision making, thus making it difficult to audit, correct errors and perform disaster recovery.     

In 2025, most enterprises are implementing and running agentic AI pilots, especially in functions like customer service and supply chain management. However, enterprises have yet to achieve true end-to-end adoption of agentic AI across their various business functions. To achieve this requires strong cross-functional alignment and adoption of agentic AI, something that is rare and hard to achieve.  

Agentic AI has also delivered value and efficiencies in domain-specific areas such as customer service and logistics, but it has yet to reliably deliver the same value for mission-critical business functions. Reliability challenges remain to be overcome in these broader, mission-critical areas. 

As the market has become flooded with vendors and start-ups hoping to capitalize on the acceleration of AI technologies, the tools and frameworks offered for agentic AI have become fragmented and difficult to standardize. Demand for these tools continues to far outstrip the pace at which they are delivered. 

What Kind of Retailer Will You Be? 

The retail landscape is being rapidly transformed by agentic AI programs that are driving a competitive race to lead in autonomy, speed and personalized customer experiences. In 2025, retailers that fail to move quickly and aggressively in implementing agentic AI across their business functions risk being left behind, or worse, forced to exit.  

To be on track with, or ahead of, the agentic AI trend in 2025, retailers must already be piloting or implementing it in one or more domains identified as having high ROI, such as customer support, supply chain and inventory management, or marketing automation, where agentic AI can be strategically deployed to realize the highest returns.  

IT infrastructure and systems must also be revamped with APIs and data pipelines that allow seamless integration of AI agents across data silos spanning POS, supply chain and CRM platforms. While these actions are taking place, it is critical for retailers to put proper governance frameworks in place to manage agentic AI risks, ethics and compliance. This can be done by maintaining proper audit trails, monitoring AI agents’ output and decision-making in real time, and establishing clear disaster recovery plans.  

It is also critical for retailers to ensure that employees are continuously educated, trained and upskilled in collaborating with and using AI agents. Maximizing ROI does not rely entirely on the performance of AI agents; it also requires that employees learn how to use AI agents to gain strategic insights that allow them to focus on creative and impactful decisions.  

Retailers can also establish agentic AI centers of excellence to ensure proper governance and compliance, manage risks and lead strategies for responsibly scaling agentic AI at the enterprise level. These efforts can be further strengthened by forming partnerships with AI solution providers that offer rapid deployment capabilities and quicker realization of ROI. Retailers can also participate in industry consortia to benchmark, share knowledge and establish standards and risk-mitigation strategies. 

References

Learning to Lead in the Wilderness

There is value in nurturing and growing the qualities of leadership within all of us. As leaders, we improve our lives and the lives of others around us. We develop the skills and capabilities that allow us to rally others around common goals and to show them the path forward. In the process, we help advance humanity through peaceful collaboration and existence. 

Introduction

On an early winter morning in 1996, I heard a loud knock on my dorm room door. It was 2 a.m., and as I was waking up, I could gather some of the words spoken loudly by my wilderness instructor, Tom Lamberth, from behind the door.

“Good morning Baha, there is a missing skier, and we have been called to assist in the search. We leave in 40 minutes.”

That’s all Tom said as he rushed down the hallway to wake the rest of the team members in their rooms. Tom was the wilderness instructor during my time at the United World College of the American West (UWC USA) in New Mexico. The reason he was knocking on my door early that morning was that I was a team leader in the wilderness program.

Search and rescue was part of the community service requirements for the International Baccalaureate curriculum. I had spent the previous year training to lead a search and rescue team. I learned the expected survival and navigation skills, first aid and, most importantly, how to care for the members of my team. It was through this experience that I was introduced to what it means to be a leader and the responsibilities that it carries.

As I led my classmates through the snowy night, deep in a New Mexico mountainous forest, I understood the value of trust. I felt a sense of pride that we trusted each other, and they trusted me, to guide us through the difficult terrain.

My experience at the college, particularly as a team leader and the pride I felt when I had earned the trust of my team members, has shaped how I show up in my work. Over the last few weeks I have been reading about and contemplating what it means to be a leader and how I have shown up as one during my career. I have been considering the qualities that have defined my leadership style to date, alongside those I look to hone and practice in order to grow with intention and authenticity. What follows are those reflections.

The qualities of leadership

When a person embodies the true qualities of a leader, no matter their age, social context or qualifications, they can rally people around them towards a common purpose. Leadership qualities can be inward or outward facing, but regardless of how they are expressed, we are drawn to leaders for their ability to embody those qualities in a way that gives meaningful expression to our own ideas and causes.

Inward qualities of leadership

The inward qualities of a leader are not always immediately visible to us, yet they are powerful in creating the invisible pull we feel towards them. The following are key inward qualities of leadership.

Character – a leader’s character inspires confidence in people to rally around them. A leader’s character embodies their thoughts, feelings and behaviours and it drives the actions behind their words. Their character is based on strength and integrity, and it is something that they constantly build on and improve. People will not follow a leader if they do not have trust in the strength of their character.

Commitment – no one is drawn to an uncommitted leader. Commitment separates dreamers from doers, and nothing is more powerful than someone who is committed to their goal, their cause or their quest. We may not always be motivated to pursue our goals, or we may be in a state where we cannot quite find the way forward. A committed leader, especially one whose goals and dreams align with ours, can be the catalyst that allows us to rally around them towards achieving those goals and dreams.

Focus – a leader pairs clear priorities with the concentration needed to achieve them. They focus mainly on their strengths and on doing what they do best, but they also work to improve in new areas because they constantly seek change and improvement.

Passion – passion is the true driver of success. There is no doubt that most successful entrepreneurs and leaders achieve their success not through training and education, but through true love and passion for what they do. A strong desire to bring their ideas to life is what allows leaders to achieve their goals and dreams. Passion drives unshakable willpower and makes the seemingly impossible possible. This is why passionate leaders are so effective: they believe in the ideas, goals and dreams they envision, and they pursue them with unwavering commitment.

Other inward leadership qualities include, but are not limited to, competence, courage, initiative, responsibility, and security. They are all intertwined with the qualities highlighted above, strengthening them and reinforcing their foundations. 

Outward qualities of leadership

Outward leadership qualities are easily noticed and they capture our attention in a leader. When we see how a leader communicates, listens and cares for others, we easily connect with them, especially if their goals and ideas match ours. Below are the key outward qualities of a leader.

Communication – clarity, speaking their truth, recognizing others and seeking their thoughts are the cornerstones of communication. A leader speaks their message with absolute clarity and out of pure and uncompromising conviction. They have an unshakable belief in the truth behind what they say. They also actively solicit feedback from their audience to help them adjust and validate their message.

Charisma – charismatic leaders can attract others to them by focusing on the best in the people around them. They focus on making sure that every person they interact with receives their full attention, and that they are the most important person in the room, not the other way around. By listening to others and recognizing their ideas and needs, leaders cultivate a sense of a shared common cause that further strengthens their leadership.

Generosity – leaders do not gather just for themselves. They share their successes and wins with others. A leader’s generosity is embodied in their gratitude and appreciation for what they have, by putting others first, and most importantly by putting their resources and money to work in the service of others. Gratitude cultivates a habit of giving generously, and this tends to come back through a multitude of rewards both material and otherwise. When this attitude is inherent in a leader, there is no limit to the positive impact they can have on others.    

Listening – a leader earns trust and connection before they ask to be followed. The only way to build trust and connection is by truly listening first. For a leader, this means listening while wholeheartedly embracing the diversity of the voices and opinions they hear. A leader can learn nothing by simply speaking their own opinions. The only way they can learn is by listening to and learning from the thoughts and opinions of other people. 

Relationships – we do not tend to maintain relationships with people who do not care for us. This is even more true for people we want to follow. Leaders understand people, not in the general sense, instead they get to know every person they interact with. True leaders are centered in their purpose to help others.

Other outward qualities of a leader include vision, problem solving, and servanthood.

Final remarks

When I think back on my time at UWC USA, I realize how much Tom embodied the qualities of leadership I highlighted above, and how much I learned from him. I watched Tom and listened to him closely as he trained and prepared us for survival in the wilderness. That experience continues to define how I view my life and my work.

As a father, guiding my children towards their future means serving as an example to them by showing commitment and passion in my work, and by making sure I communicate, listen and always place them and their needs at the center of my attention. I have always applied the same practice to my work and how I interact with my colleagues. Without fail, I have found that this practice has allowed me to be a better father, a more supportive and productive colleague, and overall a better person.

There is value in nurturing and growing the qualities of leadership within all of us. As leaders, we improve our lives and the lives of others around us. We develop the skills and capabilities that allow us to rally others around common goals and to show them the path forward. In the process, we help advance humanity through peaceful collaboration and existence.


If this story resonates with you, or if you simply want to chat about your own experiences and how leadership has helped you or can help you in your work, I’d love to connect.

Feel free to reach out through LinkedIn or contact me here. Let’s explore how thoughtful, human-centered design can transform your next project.

The Power of Optimism in Design: Lessons from Sales and Personal Experience

When we are optimistic, we strive to bring out the best in ourselves and create our best work. An optimistic attitude allows us to be bold, even adventurous, with our ideas, and we start to create out of inspiration and confidence in the true potential of our work.

Introduction

In his book Learned Optimism, Martin Seligman describes a 1985 study of fifteen thousand insurance agent applicants to Met Life [1]. The study involved one thousand of the applicants who failed the standard industry test but took an additional test, the Attributional Style Questionnaire (ASQ), which determined whether they were optimists or pessimists. The higher the ASQ score, the more optimistic an agent was determined to be. The goal of the study was to hire these agents into Met Life’s workforce and compare the performance of the optimistic agents with that of the pessimistic ones.

The study showed that agents in the top half of ASQ scores sold 20% more insurance than the less optimistic agents in the bottom half, and agents in the top quarter sold 50% more than those in the bottom quarter. Optimism predicted which agents survived and sold the most insurance about as well as the industry test did. Seligman’s study clearly shows the impact of optimism on insurance agents’ success, and it led Met Life to change its hiring practices to consider not only whether agents passed the industry test, but also how optimistic they were.

How does Seligman’s study apply to design? Designers, like insurance agents, are salespeople: they sell the ideas, stories and concepts that shape products. While insurance is a discretionary product that consumers can choose not to buy, designers sell ideas that shape the way organizations operate, the way they serve their customers and, ultimately, the business bottom line. In a way, the challenge designers face in selling their design ideas can be just as difficult as, if not more difficult than, the one insurance agents face: designers must sell ideas for products that have to work and achieve the business and organizational success they set out to achieve. “Not buying” the product is not an option for the organization; the product, and the ideas behind it, must succeed.

Why optimism is essential for a designer

A designer’s role is not limited to only being able to tell compelling stories about the design solutions they propose. They need to effectively sell the ideas built into those stories to business stakeholders, who can often be at different levels of seniority and receptiveness. If we were to rely on the results of the study by Seligman as a guide, then optimism and a positive mindset play a key role in enabling a designer to achieve their objectives.

An optimistic mindset is essential to the designer’s success and the success of their team. Every organization has a different culture and level of receptiveness to change, especially when it comes to digital products that reimagine existing business processes. The designer may be working with a business team that is used to following a decades-old business process and is resistant to changing it. A successful designer can place themselves in any type of business environment, adapt to the existing culture and learn to work within its confines.

The key is to maintain a positive and optimistic attitude towards the outcomes of the project, the stakeholders and the team the designer works with. This attitude communicates the designer’s confidence in their ability to succeed by demonstrating to everyone around them that they believe in the value their work and design will bring.

How optimism helped me in my career

On a past project, where I led the design of an options trading application at a financial institution, I witnessed the importance of an optimistic mindset firsthand during a design review I was leading. The application was a highly specialized investing tool, and I was working with stakeholders who were deeply specialized in their line of business and had strong beliefs about how the application should behave. The stakeholders believed, and rightly so, that they understood their customers and how they tended to use the application. When I started work, I relied on user research, personas and journeys, as well as extensive competitive analysis and frequent peer reviews, to inform my designs. During one of the design reviews, as I was walking through a flow in my proposed design, one of the stakeholders in a corner of the room interjected:

“I am not sure why we should take his (referring to me) advice on how this should work. We can do a far better and more efficient job if we design and implement it ourselves.”

This comment could have demotivated me and discouraged me from continuing with my review. I could have viewed it as a failure rather than feedback. However, I remember not being fazed by it, because I was confident in the value of my design and needed to show that to the stakeholders. Instead of letting the comment affect me negatively, I acknowledged the feedback and repeated my explanation of the design, backing it up with the research, competitive analysis, best practices and peer reviews that had led me to it. Reframing the experience and seeing the positive in it fueled my confidence that I could get buy-in from all stakeholders, including the one who offered the feedback. By acknowledging their feedback and backing the design with evidence and research, I was able to show them its value.

There was another occasion in my career when I realized the value of an optimistic mindset. On that occasion, my design had been well received by the stakeholders after several design reviews and iterations. However, one day, as I was reviewing the design with my manager, I realized he did not think highly of it. This was not uncommon; a good designer should always operate from the perspective that no design is perfect and there is always room to improve. What was uncommon was the way the feedback was delivered:

“I understand this design has been approved by the stakeholders, but I don’t like it. It may be too late to change it now, but I will make sure to change it in the next product iteration.”

In other words, instead of being constructive and offering ideas on how to improve the design, the feedback was dismissive and offered only the option of him taking over the design and changing it himself. This could have been another instance where it would have been easy to feel a sense of failure after all the work and effort I had invested in obtaining the stakeholders’ buy-in. Instead, I worked with my manager to understand where he thought the gaps were, took his feedback, and incorporated it into the design. I took leadership and ownership of the situation, worked with the project manager to regroup with the stakeholders, and successfully obtained their buy-in on the updates.

These experiences, along with many others over my career, taught me the power of optimism and a positive mindset. On both occasions described above, I succeeded in moving the design toward production, with great feedback from end users. I did not give up; I maintained confidence in my abilities and my design ideas, and I approached my work from a perspective geared toward and focused on success. Everyone I worked with wanted to invest in my success, because it meant they were also investing in their own. When a designer adopts an attitude of optimism, they quickly notice that everyone they work with adopts the same attitude. There is no doubt that an optimistic attitude paves the way to overcoming the challenges and obstacles of the design process, even when that takes time and effort.

How a designer can adopt an optimistic mindset

The practice of cultivating optimism is grounded in psychology. When applied by a designer, the following principles can bring tremendous benefit to their work. Below are the most important principles for cultivating optimism:

Cognitive reframing – Mistakes happen. Important requirements may be missed or misunderstood. Other times, design decisions are made without close attention and must be changed much later in the design process. Instead of blaming themselves for mistakes, it is more useful for the designer to reframe the mistake, turn it into a learning experience, and find a way to pivot.

Gratitude practice – For a designer, this means noticing and focusing on the occasions when their work makes a positive impact. Not everyone will agree with their design direction, and they may often face criticism of how designs should look. Focusing on how their design makes a difference to stakeholders and end users, and appreciating that impact, can inspire the designer to take the feedback they receive and further improve their work.

Visualizing positive outcomes – Visualizing best case scenarios helps create a sense of possibility and cultivates an attitude that anything can be achieved. A designer should focus on the possibilities a design solution can create and operate from a mindset that their design solution will bring the value that is expected.

Surrounding oneself with optimism – Our environment reinforces how we behave and react. Surrounding ourselves with positivity makes positive outcomes more likely. This practice goes beyond the workplace and applies to everything we do in life. A designer who consistently seeks out a positive environment will find that it shows in their attitude and in how they come across to the people they work with, who in turn become more receptive to their ideas, more willing to help develop those ideas, and more invested in shared success.

Setting small, achievable goals – Progress fuels optimism, and small wins help us feel capable. When design problems seem difficult or consensus seems out of reach, it is important to set small and achievable milestones, and celebrate when those milestones are achieved.

Practicing self-compassion – We are always our own harshest critics. Speaking to ourselves with compassion and encouragement, not blame, helps us find the positive in our work and move forward with solutions that will allow us to achieve our goals.

Adopting an ‘experimental’ mindset – This is the most important practice in cultivating an optimistic mindset. If something does not work, treat it as feedback for improvement, not as failure. Designers often experience setbacks where a design does not meet expectations, or where business stakeholders or managers disagree with their approach. A designer might react by thinking, “I did not do a good job on this design” or “I failed to understand the requirements.” Reframing this as “I can propose different design alternatives” or “I can show how my design improves on the requirements” makes the setback feel less threatening.

Conclusion

Seligman’s study showed that optimism predicted success above and beyond the traditional criteria for hiring insurance agents. The results were so compelling that Met Life and other industry players changed their hiring practices to take on agents who scored high on the optimism test yet narrowly failed the industry test. As in insurance sales, a designer’s success hinges on optimism and a positive attitude. A designer can faithfully follow the best practices and theory of design, but without an optimistic and positive attitude they will struggle to achieve a successful outcome. When we are optimistic, we strive to bring out the best in ourselves and create our best work. An optimistic attitude allows us to be bold, even adventurous, with our ideas, and we begin to create out of inspiration and confidence in the true potential of our work. Optimism empowers a designer to focus on the success of their design ideas rather than on failures, to turn setbacks into opportunities, and to adopt a mindset in which everything is achievable, even in the face of complex problems.

References

[1] Seligman, M. E. P. (2006). Learned optimism: How to change your mind and your life (2nd ed.). Vintage Books.