The Future of the AI Economy: Automation, Employment, and Economic Sustainability

Finding an equilibrium where AI-driven innovation leads to prosperity, without eroding the foundation of economic stability, is crucial. This “ideal” world would require creative policy solutions, careful regulation, and a commitment to human-centred values to ensure that technological progress uplifts society.

Introduction

As an economist and a computer scientist, I am keenly interested in the impact of artificial intelligence on workforce dynamics, particularly considering recent industry transformations. I have been contemplating a prospective scenario in which AI not only meets but exceeds sector expectations, assuming responsibility for a substantial proportion of tasks previously performed by humans and significantly reducing the need for human labour. In this context, organizations may pursue maximal automation, leading to extensive workforce reductions. The prospect of an AI-driven enterprise, where AI systems execute 80–90% of functions formerly managed by people, raises critical questions regarding the long-term implications for labour forces and economies across the globe.

A key consideration against this backdrop is the historical significance of consumer spending in acting as a vital stabilizer during economic recessions, propping up economies and helping spur growth. If a large portion of the global human workforce becomes unemployed due to widespread automation, consumer spending would inevitably decline. This reduction would also lead to a significant drop in government tax revenues, as fewer individuals would be earning wages and contributing through income taxes.

High Unemployment and Fiscal Strain

Research shows that when unemployment rates rise above 6–7%, many advanced economies begin to see marked declines in both government tax receipts and GDP growth. For example, during the 2008–2009 global financial crisis, unemployment spikes above this threshold resulted in sharp contractions in economic activity and reduced fiscal capacity for governments.

Sustained unemployment rates at or above 10% can exacerbate these effects, leading to prolonged recessions, lower consumer confidence, and persistent deficits in government budgets. Therefore, even moderate increases in unemployment driven by automation have the potential to significantly disrupt economic stability and the ability of governments to fund essential services [1].
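
To make the fiscal mechanism concrete, here is a toy calculation in Python. All figures are illustrative assumptions, and the model is deliberately linear, ignoring consumption multipliers, payroll and sales taxes, and increased transfer spending:

```python
def income_tax_revenue(labour_force, unemployment_rate, avg_wage, tax_rate):
    """Income-tax receipts from wages under a given unemployment rate."""
    employed = labour_force * (1 - unemployment_rate)
    return employed * avg_wage * tax_rate

# Illustrative figures only: a 20M labour force, $60k average wage, 25% effective rate
base = income_tax_revenue(20_000_000, 0.05, 60_000, 0.25)
stressed = income_tax_revenue(20_000_000, 0.10, 60_000, 0.25)
drop_pct = 100 * (1 - stressed / base)
```

In this sketch, moving from 5% to 10% unemployment cuts income-tax receipts by roughly 5%; real-world losses would be larger still, since unemployment also depresses consumption taxes and triggers additional transfer spending.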

In an economy dominated by automation and artificial intelligence, there are several critical questions to consider: Who will purchase the goods and services produced if consumers lack sufficient financial resources? Who will contribute the tax revenues necessary to support government services? Will corporations or AI systems themselves assume most of the tax burden?

AI as a Productivity Booster, Not a Job Eliminator

From one perspective, artificial intelligence, including advances toward artificial general intelligence (AGI), does not necessarily mean the end for all jobs. Instead, AI can serve as a powerful tool to augment human productivity, streamline workflows, and empower employees to focus on higher-value tasks.

By automating repetitive or routine functions, AI allows workers to redirect their efforts towards creative, strategic, and interpersonal roles that are less susceptible to automation. This approach could also contribute to reducing government budget deficits, as increased efficiency and productivity may lead to greater economic output and higher tax revenues without the need for massive layoffs. Under this scenario, AI complements the workforce, enabling organizations to thrive while maintaining employment levels and supporting consumer spending.

The Case for Maximum Automation and Its Contradictions

If, on the other hand, the goal is to unlock AI’s full potential and achieve AGI, then the ideal scenario can be viewed as maximum automation, where intelligent machines perform virtually all tasks. This scenario promises unparalleled efficiency and cost savings as we try to maximize the potential of AI in all aspects of the economy. However, this scenario also raises serious concerns about economic stability, mass unemployment, and a sharp decline in consumer spending.

The argument that AI will only make jobs more productive and not replace them entirely assumes that limits will be imposed on AI’s adoption. Without such constraints, the drive for efficiency could lead to a fully automated economy, fundamentally changing the role of human labour and challenging the sustainability of consumer-driven markets and government revenues.

Ultimately, the debate centres on whether we should harness AI primarily to enhance productivity within a human-centred workforce or pursue the ideal of full automation, each path carrying profound implications for employment, economic prosperity, and societal well-being.

As organizations increasingly depend on AI to drive productivity while consumer purchasing power diminishes, generating revenue may become more and more challenging. As fewer consumers are able to spend their disposable incomes on goods and services, a ripple effect may spread throughout the broader economy. This scenario potentially undermines both traditional models of organizational revenue growth and the sustainability of government tax revenues.

Envisioning The Ideal AI-Driven Economy

In considering the future shaped by artificial intelligence, the real test will be how society chooses to balance the undeniable benefits of automation, such as increased productivity, efficiency, and the potential for economic growth, against the potentially far-reaching consequences for employment, consumer spending, and government revenues. While AI holds promise as a productivity booster, empowering workers and supporting economic output, it also presents the risk of mass unemployment and declining consumer activity, particularly if full automation becomes the norm.

Finding an equilibrium where AI-driven innovation leads to prosperity, without eroding the foundation of economic stability, is crucial. This “ideal” world would require creative policy solutions, careful regulation, and a commitment to human-centred values to ensure that technological progress uplifts society. Ultimately, the path forward will depend on how we harness AI, not just for efficiency and cost savings, but to foster inclusive growth, maintain robust employment, and sustain government revenues. The choices we make now will define whether AI becomes a tool for shared prosperity or a force that challenges the very pillars of our economy.

Sorry, But AI Deserves ‘Please’ Too!

As we increasingly lean on AI as a trusted ally in our professional and personal lives, we must ponder the implications of our reliance on its capacity to comprehend and craft natural language. What does this mean for our autonomy, creativity, and the very essence of human connection?

Introduction

Large language models (LLMs) and AI chatbots have become woven into the fabric of our workplaces and personal lives, inviting us to reflect on the profound shift in our interaction with technology. As we navigate this new landscape, we find ourselves reevaluating the role of artificial intelligence (AI) in our daily routines. These advancements have not merely changed how we access information, seek advice, and perform research; they have opened a door to an era where insights and solutions are unveiled with remarkable speed and efficiency. As we increasingly lean on AI as a trusted ally in our professional and personal lives, we must ponder the implications of our reliance on its capacity to comprehend and craft natural language. What does this mean for our autonomy, creativity, and the very essence of human connection?

As these AI systems evolve to simulate human-like interactions, an intriguing phenomenon has emerged: people often address AI with polite phrases like “please” and “thank you,” echoing the social etiquette typically reserved for human conversations. This shift reflects a deeper societal change, where individuals begin to attribute a sense of agency and respect to machines, blurring the lines between human and artificial interaction. Furthermore, as AI continues to improve, this trend may lead to even more sophisticated relationships, encouraging users to engage with AI in ways that foster collaboration and mutual understanding, ultimately enhancing productivity and satisfaction in both personal and professional interactions.

With AI entities now entrenched in collaborative environments, one must ask: how do we, as humans, truly treat these so-called conversational agents? Despite AI’s lack of real emotions and its indifference to our politeness, the patterns of user interaction reveal deep-seated beliefs about technology and the essence of human-AI relationships. LLMs are crafted to imitate human communication, creating an illusion of agency that drives users to apply familiar social norms. In collaborative contexts, politeness becomes not just a nicety, but a catalyst for cooperation, compelling users to extend the very same respectful behaviour to AI that they reserve for their human colleagues. [1]

Politeness Towards Machines and the CASA Paradigm

Politeness plays a vital role in shaping social interactions, particularly in environments where individuals must navigate complex power dynamics. It promotes harmony, reduces misunderstandings, and fosters cooperation among participants. Rather than being a rigid set of linguistic rules, politeness is a dynamic process involving the negotiation of social identities and power dynamics. These negotiations are influenced by participants’ backgrounds, their relationships with one another, and the specific context in which the interaction takes place [2].

Extending the concept of politeness to interactions with machines highlights the broader question of social engagement with technology. The Computers Are Social Actors (CASA) paradigm states that humans interact with computers in a fundamentally social manner, not because they consciously believe computers are human-like, nor due to ignorance or psychological dysfunction. Rather, this social orientation arises when people engage with computers, revealing that human-computer interactions are biased towards applying social norms similar to those used in human-to-human communication [3].

The CASA approach demonstrates that users unconsciously transfer rules and behaviours from human-to-human interactions, including politeness, to their engagements with AI. However, research examining young children’s interactions with virtual agents revealed contrasting patterns. Children often adopted a command-based style of communication with virtual agents, and this behaviour sometimes extended to their interactions with parents and educators in their personal lives [4].

Further studies into human-robot interaction have shown that the choice of wake-words can influence how users communicate with technology. For instance, using direct wake-words such as “Hey, Robot” may inadvertently encourage more abrupt or rude communication, especially among children, which could spill over into their interactions with other people. Conversely, adopting polite wake-words like “Excuse me, Robot” was found to foster more respectful and considerate exchanges with the technology [5].

Human-AI Interaction Dynamics

Research demonstrates that attributing agency to artificial intelligence is not necessarily the primary factor influencing politeness in user interactions. Instead, users who believe they are engaging with a person—regardless of whether the entity on the other end is human or computer—tend to exhibit behaviours typically associated with establishing interpersonal relationships, including politeness. Conversely, when users are aware that they are communicating with a computer, they are less likely to display such behaviours [6].

This pattern may help explain why users display politeness to large language models (LLMs) and generative AI agents. As these systems become more emotionally responsive and socially sophisticated, users increasingly attribute human-like qualities to them. This attribution encourages users to apply the same interpersonal communication mechanisms they use in interactions with other humans, thereby fostering polite exchanges.

Politeness in human-AI interactions often decreases as the interaction progresses. While users typically start out polite when engaging with AI, this politeness tends to diminish as their focus shifts to completing their tasks. Over time, users become more accustomed to interacting with AI and the complexity of their tasks may lessen, both of which contribute to a reduction in polite behaviour. For example, a user querying an LLM about a relatively low-risk scenario—such as running a snack bar—may quickly abandon polite language once the context becomes clear. In contrast, when faced with a higher-stakes task—such as understanding a legal concept—users may maintain politeness for longer, possibly due to increased cognitive demands or the seriousness of the task. In such scenarios, politeness may be perceived as facilitating better outcomes or advice, especially when uncertainty is involved.

Conclusion

Politeness in human-AI interactions is shaped by a complex interplay of social norms, individual user characteristics, and system design choices—such as the use of polite wake-words and emotionally responsive AI behaviours. While attributing agency to AI may not be the primary driver of politeness, users tend to display interpersonal behaviours like politeness when they perceive they are interacting with a person, regardless of whether the entity is human or computer.

As AI agents become more emotionally and socially sophisticated, users increasingly apply human-like communication strategies to these systems. However, politeness tends to wane as familiarity grows and task complexity diminishes, with higher-stakes scenarios sustaining polite engagement for longer. Recognizing these dynamics is crucial for designing AI systems that foster respectful and effective communication, ultimately supporting positive user experiences and outcomes.

Transforming Data into Actionable Insights through Design

Introduction

At the age of fifteen, I secured a summer position at a furniture factory. To get the job, I expressed my interest in technology and programming to the owner, specifically regarding their newly acquired CNC machine. To demonstrate my capability, I presented my academic record and was hired to support a senior operator with the machine.

That summer, I was struck by the ability to control complex machinery through programmed commands on its control board. The design and layout of the interface, as well as the tangible results yielded from my input, highlighted the intersection of technical expertise and thoughtful design. This experience sparked my curiosity about the origins and development of such systems and functionalities.

I have always maintained that design is fundamentally about clarity: how systems make sense and elicit meaningful responses. It involves translating intricate, technical concepts into experiences that are intuitive and accessible. This perspective has guided my approach throughout my career, whether developing an AI-powered dashboard for Air Canada, creating an inclusive quoting tool for TD Insurance, or designing online public services for Ontario.

The central challenge remains consistent: achieving transparency and trust in complex environments. Effective design bridges the gap between people and systems, supporting purposeful engagement.

My observational nature drives me to understand how systems operate, how decisions are reached, and how individuals navigate complexity. This curiosity informs my design methodology, which begins by analyzing the foundational elements (people, processes, data, and technology) that must integrate seamlessly to deliver a cohesive experience.

To me, design is not merely an aesthetic layer; it serves as the essential framework that provides structure, clarity, and empathy within multifaceted systems. Designing from this perspective, I prioritize not only usability but also alignment across stakeholders and components.

My core design strengths

Throughout my career, I have found that my most effective work comes from applying a set of foundational strengths to every project. These strengths consistently guide my approach and ensure each solution is thoughtful, impactful, and built for real-world complexity.

Systems Thinking: I make it a priority to look beyond surface-level interfaces. My approach involves examining how data, people, and technology interact and influence each other within a system. By doing so, I can design solutions that are not only visually appealing but also deeply integrated and sustainable across the entire ecosystem.

Human-Centred Design: Every design decision I make is grounded in observation and empathy. I focus on the user’s experience, prioritizing how it feels to engage with the product or service. My aim is to create solutions that resonate with individuals on a practical and emotional level.

Accessibility & Inclusion: Designing for everyone is a fundamental principle for me. I strive to ensure that the experiences I create are not just compliant with accessibility standards, but are genuinely usable and fair for all users. Inclusion is woven into the fabric of my process, shaping outcomes that reflect the diversity of people who will interact with them.

Storytelling & Visualization: I leverage visual storytelling to simplify and clarify complex ideas. Using visuals, I help teams and stakeholders see both what we are building and why it matters. This approach fosters understanding and alignment, making the design process transparent and purposeful.

Facilitation & Collaboration: I believe that the best insights and solutions emerge when diverse voices contribute to the process. By facilitating collaboration, I encourage open dialogue and collective problem-solving, ensuring that outcomes are shaped by a broad range of perspectives and expertise.

If I had to distill all these strengths into a single guiding principle, it would be this: “I design to understand, not just to create.”

My design approach: a cyclical process

Design, for me, is less of a straight line and more of a cycle, a continuous rhythm of curiosity, synthesis, and iteration. This process shapes how I approach every project, ensuring that each step builds upon the previous insights and discoveries.

1. Understand the System: I begin by mapping the entire ecosystem, considering all the people involved, their goals, the relevant data, and any constraints. This foundational understanding allows me to see how different elements interact and influence each other.

2. Observe the Experience: Next, I dedicate time to watch, listen, and learn how people actually engage with the system. Through observation and empathy, I uncover genuine behaviours and needs that may not be immediately apparent.

3. Synthesize & Prioritize: I then translate my findings into clear opportunities and actionable design principles. This synthesis helps to focus efforts on what matters most, guiding the team toward solutions that address real challenges.

4. Visualize the Future: Prototyping and iteration are central to my approach. I work to make complexity feel simple and trustworthy, refining concepts until the design communicates clarity and confidence.

5. Deliver & Educate: Finally, I collaborate with developers, stakeholders, and accessibility teams to bring the vision to life. I also focus on making the solution scalable, ensuring that the impact and understanding extend as the project grows.

Good design isn’t just creative, it’s disciplined, methodical, and deeply human.

Projects that demonstrate impact

Transforming operations at Air Canada

At Air Canada, I was responsible for designing AI dashboards that transformed predictive data into clear, actionable insights. These dashboards provided operations teams with the tools to act quickly and effectively, resulting in a 25% reduction in delay response time. This project highlighted the value of turning complex data into meaningful information that drives real-world improvements.

Advancing accessibility at TD Insurance

During my time at TD Insurance, I led an accessibility-first redesign of the Auto and Travel Quoter. My approach was centred on ensuring that the solution met the rigorous standards of WCAG 2.1 AA compliance. The redesign not only made the product fully accessible, but also drove an 18% increase in conversions. This experience reinforced the importance of designing for everyone and demonstrated how accessibility can be a catalyst for business growth.

Simplifying government services for Ontarians

With the Ontario Ministry of Transportation, I took on the challenge of redesigning a complex government service. My focus was on simplifying the process for citizens, making it easier and more intuitive to use. The result was a 40% reduction in form completion time, making government interactions smoother and more efficient for the people of Ontario.

Clarity as a catalyst

What stands out to me about these projects is that each one demonstrates a universal truth: clarity scales. When people have a clear understanding of what they are doing and why, efficiency, trust, and accessibility naturally follow. These outcomes prove that good design is not just about aesthetics, it’s about making information actionable and understandable, leading to measurable impact.

Reflection

The best design doesn’t add more, it removes confusion. It connects people, systems, and intent, turning complexity into clarity.

If your organization is wrestling with complexity, whether that’s data, accessibility, or AI, that’s exactly where design can make the biggest difference.

At Mimico Design House, we specialize in helping teams turn that complexity into clarity, mapping systems, simplifying experiences, and designing interfaces that people actually understand and trust.

Through a combination of human-centred design, systems thinking, and accessibility expertise, I work with organizations to bridge the gap between business strategy and user experience, transforming friction points into moments of understanding.

If your team is facing challenges with alignment, usability, or data-driven decision-making, I’d love to explore how we can help.

You can connect with me directly on LinkedIn or visit mimicodesignhouse.com to learn more about how we help organizations design systems people believe in.

Dashboards Drive Great User Experience

A dashboard must enable the user to gain the information and insights they need “at a glance”, while also enabling them to better perform their tasks, and enhance their user experience overall. 

Introduction 

Whenever I drive my car, I am reminded of how its dashboard allows me to maintain control and remain aware of all the actions I need to take, while also being able to pay attention to my driving. My car’s dashboard displays critical information such as speed, engine oil temperature, and fuel level. As the driver, it is essential for me to remain aware of these data points while I focus on the important task of driving, and the actions of other drivers around me.

Like many applications, a car’s dashboard provides insight into the car’s inner workings in a user-friendly and intuitive manner, allowing the user to see and act upon information without needing to understand the technical details or the engineering behind it. This is why designing an application around a dashboard, not the other way around, makes sense in ensuring that the application’s features all cater to the data and information needs of the user.  

It is possible to architect an entire application and its features by thinking about the various components that exist on the dashboard, what information they will convey, and how the user will interact with these components. When a dashboard is designed around the user’s needs, the various components of the application must be designed such that they enable the dashboard components to receive the input they need and output the data users expect.  

In the age of AI-focused applications that require the design and development of models to support business requirements and deliver valuable insights, designing an effective dashboard focuses AI teams’ efforts on building models that deliver impactful output, reflected on the dashboard.

Types of dashboards 

Dashboards can vary depending on user needs. Those needs depend on whether the dashboard must enable high-level or in-depth analysis, the frequency of data updates required, and the scope of data the dashboard must track. Based on this, dashboards fall into three categories [1]:

  • Strategic dashboards: Provide high-level metrics to support making strategic business decisions such as monitoring current business performance against benchmarks and goals. An example metric would be current sales revenue against targets and benchmarks set by the business. A strategic dashboard is mainly used by directors or high-level executives who rely on them to gain insights and make strategic business decisions.  
  • Operational dashboards: Provide real-time data and metrics to enable users to remain proactive and make operational decisions that affect business continuity. Operational dashboards must show data in a clear, easy-to-understand layout so that users can quickly see and act upon the information displayed. They must also provide the flexibility for users to customize notifications and alerts so that they do not miss taking any important actions. For example, airline flight operations planners may require the ability to monitor flight status and be alerted to potential delays. Some of the metrics a dashboard could show in this case are the status of gate, crew or maintenance operations.
  • Analytical dashboards: Analytical dashboards use data to visualize and provide insight into both historical and current trends. They are useful for providing business intelligence by consolidating and analyzing large datasets to produce clear, actionable insights, particularly in AI applications that use machine learning models to generate those insights. For example, in a sales application the dashboard can provide insight into the number of leads and a breakdown of whether they were generated through phone, social media, email or a corporate website.
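
The lead-source example above can be sketched as a simple aggregation, the kind of computation that would feed an analytical dashboard's breakdown chart. The lead records here are hypothetical:

```python
from collections import Counter

# Hypothetical lead records an analytical dashboard might consolidate
leads = ["phone", "email", "website", "social media",
         "email", "website", "website", "phone"]

counts = Counter(leads)
total = sum(counts.values())
# Percentage breakdown by channel, ready for a pie or bar chart
breakdown = {channel: round(100 * n / total, 1) for channel, n in counts.items()}
```

In a real application the records would come from a CRM or data warehouse rather than an in-memory list, but the dashboard component consuming `breakdown` would look the same.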

Design principles and best practices 

Much like a car dashboard, an application dashboard must abstract the complexities of the data it displays to enable the user to quickly and easily gain insights and make decisions. To achieve these objectives, the following design principles and best practices should be considered.  

  • Dashboard “architecture”: It is important to think about what the dashboard must achieve based on the dashboard types described above. Creating a dashboard with clarity, simplicity, and a clear hierarchy of data laid out for quick assessment ensures that the information presented on the dashboard does not compete for the user’s attention. A well architected dashboard does not overwhelm the user such that they are unable to make clear decisions. It acts as a co-pilot producing all the information the user needs, when they need it.
  • Visual elements: Choosing the correct visual elements to represent information on the dashboard ensures that the user can quickly and easily interpret the data presented. Close attention should be paid to: 
    • Using the right charts to represent information. For example, use a pie chart instead of a bar chart if there is a need to visualize data percentages. 
    • Designing tables with a minimal number of columns such that they are not overwhelming to the user, making it harder to interpret them. 
    • Paying attention to color coding ensures that charts can be easily scanned without the user straining to distinguish between the various elements the charts represent. It is also important to ensure that all colors chosen contrast properly with each other and that all text overlaid on top of the charts remains easy to read and accessible. 
    • Providing clear definitions for symbols and units ensures no ambiguity as to how to interpret the data presented on the dashboard. 
  • Customization and interactivity: Providing users with the flexibility to customize their dashboard allows them to create a layout that works best for their needs. This includes the ability to add or remove charts or tables, the ability to filter data, drill down and specify time ranges to display the data, where applicable.  
  • Real-time updates and performance: Ensuring that dashboard components and data update quickly and in real-time adds to the dashboard usability and value. This is best achieved by ensuring an efficient design to the dashboard components, such that they display only the information required unless the user decides to interact with them and perform additional filtering or customization. 
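
The last point, displaying only the information required until the user interacts, can be sketched as lazy, cached computation. The flight data below is hypothetical, echoing the airline operations example earlier:

```python
from functools import lru_cache

# Hypothetical flight records feeding an operational dashboard
FLIGHTS = (
    {"flight": "AC101", "status": "on_time"},
    {"flight": "AC202", "status": "delayed"},
    {"flight": "AC303", "status": "on_time"},
)

def summary():
    """Top-level metric shown by default: a count per status."""
    counts = {}
    for record in FLIGHTS:
        counts[record["status"]] = counts.get(record["status"], 0) + 1
    return counts

@lru_cache(maxsize=None)
def drill_down(status):
    """Detail computed only when the user interacts, then cached."""
    return tuple(r["flight"] for r in FLIGHTS if r["status"] == status)
```

The dashboard renders only `summary()` by default; the more expensive drill-down runs on demand and is memoized, so repeated interactions stay fast.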

When implementing dashboards, the Exploration, Preparation, Implementation and Sustainment (EPIS) framework provides a roadmap for designers and developers to create effective dashboards [2]. Combining human-centred methodology during the exploration and preparation phases of EPIS ensures that the dashboard meets users’ needs and expectations, while implementation science methods are especially important during the implementation and sustainment phases [3]. EPIS will be explored in more detail in a subsequent article.

Conclusion 

I always admire the design, layout, and clarity of the information presented to me on my car’s dashboard. The experience I receive when driving my car, through the clear and intuitive design of its dashboard components and instruments, makes every drive enjoyable. All the information I need is presented in real-time, laid out clearly and placed such that it allows me to focus on the task of driving while also paying attention to how my car is behaving. I can adjust, tune and customize the dashboard components in a way that further enhances my driving experience and adds to my sense of control of the car. 

The properties of a car dashboard reflect exactly how an application dashboard must behave. While the user of an application may be using the dashboard under a different context than driving a car, the principles of user experience, interaction design and overall usability still apply. A dashboard must enable the user to gain the information and insights they need “at a glance”, while also enabling them to better perform their tasks, and enhance their user experience overall.  


Designing solutions that work for users is what fuels my work. I’d love to connect and talk through your design ideas or challenges: connect with me today on LinkedIn or through Mimico Design House.

References 

[1] Dashboard Types Guide: Strategic, Operational, Tactical + More 

[2] Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38(1):4–23. 

[3] From glitter to gold: recommendations for effective dashboards from design through sustainment 

How To Build Large Language Models

Introduction 

Large Language Models (LLMs) are artificial intelligence algorithms that use massive datasets to summarize, generate and reason about content. LLMs are built on a set of neural networks based on the transformer architecture. Each transformer consists of encoders and decoders that can understand a text sequence and the relationships between the words and phrases in it.
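
The mechanism that lets a transformer relate words and phrases to each other is attention. A minimal pure-Python sketch of scaled dot-product attention (a single head, with no learned projection matrices) looks like this:

```python
import math

def softmax(scores):
    """Turn raw scores into weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Each output mixes the value vectors, weighted by how
    strongly its query matches each key (scaled dot product)."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs
```

In a real transformer the queries, keys and values are learned linear projections of token embeddings, and many such heads run in parallel, but the core weighted-mixing step is the one shown here.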

The generative AI technologies enabled by LLMs have transformed how organizations serve their customers, how workers perform their jobs and how users search for information and leverage intelligent systems in their daily tasks. To build an LLM, we first need to define the objective of the model: whether it will be a chatbot, a code generator or a summarizer. 

Building an LLM 

Building an LLM requires the curation of vast datasets that enable the model to gain a deep understanding of the language, vocabulary and context around the model’s objective. These datasets can span terabytes of data and can be grown even further depending on the model’s objectives [1].  

Data collection and processing 

Once the model objectives are defined, data can be gathered from sources on the internet, books and academic literature, social media and public and private databases. The data is then curated to remove any low-quality, duplicate or irrelevant content. It is also important to ensure that all ethics, copyright and bias issues are addressed since those areas can become of major concern as the model develops and begins to produce the results and predictions it is designed to do. 

Selecting the model architecture 

Model selection involves choosing the neural network design best suited to the LLM’s goals and objectives. The architecture to select depends on the tasks the LLM must support, whether generation, translation or summarization. 

  • Perceptrons and feed-forward networks: Perceptrons are the most basic neural networks consisting of only input and output layers, with no hidden layers. Perceptrons are most suitable for solving linear problems with binary output. Feed-forward networks may include one or more layers hidden between the input and output. They introduce non-linearity, allowing for more complex relationships in data [2]. 
  • Recurrent Neural Networks (RNNs): RNNs are neural networks that process information sequentially by maintaining a hidden state that is updated as each element in the sequence is processed. RNNs struggle to capture dependencies between elements in long sequences due to the vanishing gradient problem: the signal from distant inputs grows weaker as it propagates, making long-range relationships difficult to learn. 
  • Transformers: Transformers apply global self-attention that allows each token to refer to any other token, regardless of distance. Additionally, by taking advantage of parallelization, transformers introduce features such as scalability, language understanding, deep reasoning and fluent text generation to LLMs that were never possible with RNNs. It is recommended to start with a robust architecture such as transformers as this will maximize performance and training efficiency. 
Figure 1. Example of an LLM architecture [3].
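The difference between a perceptron and a feed-forward network can be seen in a few lines of NumPy: a single linear layer can only draw a linear boundary, while one hidden tanh layer can solve a non-linear problem like XOR. The weights below are hand-picked for illustration, not trained:

```python
import numpy as np

def perceptron(x, w, b):
    # Linear score followed by a hard threshold -> binary output
    return (x @ w + b > 0).astype(int)

def feed_forward(x, W1, b1, W2, b2):
    # One hidden layer with tanh non-linearity, then a linear readout
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

x = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])

# Hand-picked weights that solve XOR with a 2-unit hidden layer
W1 = np.array([[2.0, -2.0], [2.0, -2.0]])
b1 = np.array([-1.0, 3.0])
W2 = np.array([[1.0], [1.0]])
b2 = np.array([-1.5])

print(feed_forward(x, W1, b1, W2, b2).ravel() > 0)  # XOR pattern: F, T, T, F
```

No single-layer perceptron can reproduce this pattern, which is exactly the non-linearity the hidden layer buys.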

Implementing the model 

Implementing the model requires using a deep learning framework such as TensorFlow or PyTorch to design and assemble the model’s core architecture [4]. The key steps in implementing the model are: 

  • Defining the model architecture such as transformers and specifying the key parameters including the number of heads and layers.  
  • Implementing the model by building the encoder and decoder layers, the attention mechanisms, feed-forward networks and normalizing the layers.  
  • Designing input/output mechanisms that enable tokenized text input and output layers for predicted tokens. 
  • Using modular design and optimizing resource allocation to scale training for large datasets. 
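A minimal NumPy sketch of these steps, assuming a single attention head and illustrative layer sizes; a real implementation would use a framework such as PyTorch and stack many such blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, seq_len = 16, 64, 8   # illustrative sizes

def layer_norm(x, eps=1e-5):
    mu, var = x.mean(-1, keepdims=True), x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def attention(x, Wq, Wk, Wv):
    # Scaled dot-product self-attention over the whole sequence
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(d_model)
    return softmax(scores) @ V

def transformer_block(x, params):
    Wq, Wk, Wv, W1, W2 = params
    x = layer_norm(x + attention(x, Wq, Wk, Wv))   # attention + residual
    ff = np.maximum(0, x @ W1) @ W2                # ReLU feed-forward network
    return layer_norm(x + ff)                      # feed-forward + residual

params = (rng.normal(size=(d_model, d_model)) * 0.1,
          rng.normal(size=(d_model, d_model)) * 0.1,
          rng.normal(size=(d_model, d_model)) * 0.1,
          rng.normal(size=(d_model, d_ff)) * 0.1,
          rng.normal(size=(d_ff, d_model)) * 0.1)

x = rng.normal(size=(seq_len, d_model))   # stand-in for embedded tokens
y = transformer_block(x, params)
print(y.shape)  # (8, 16)
```

Each piece here corresponds to one of the steps listed above: attention mechanism, feed-forward network, and layer normalization, with residual connections tying them together.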

Training the model 

Model training is a multi-phase process requiring extensive data and computational resources. These phases include: 

  1. Self-supervised learning, where the model is fed massive amounts of text and learns language structure by predicting missing words in a sequence.  
  2. Supervised learning, where the model is trained to understand prompts and instructions, allowing it to generalize, interact and follow detailed requests.  
  3. Reinforcement Learning with Human Feedback (RLHF), where human input is used to ensure that output matches human expectations and desired behaviour. This also helps the model avoid bias and harmful responses and keeps its output helpful and accurate.  
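The self-supervised phase in step 1 reduces to next-token prediction, which can be sketched as a cross-entropy loss in NumPy (the vocabulary size and random logits are illustrative, not from a real model):

```python
import numpy as np

def next_token_loss(logits, tokens):
    # logits: (seq_len, vocab) scores; tokens: (seq_len,) token ids.
    # The prediction at position t is scored against the token at t+1.
    preds, targets = logits[:-1], tokens[1:]
    m = preds.max(-1, keepdims=True)
    log_z = m + np.log(np.exp(preds - m).sum(-1, keepdims=True))
    logp = preds - log_z                     # numerically stable log-softmax
    return -logp[np.arange(len(targets)), targets].mean()

rng = np.random.default_rng(0)
vocab, seq_len = 50, 6
logits = rng.normal(size=(seq_len, vocab))   # a model's raw output scores
tokens = rng.integers(0, vocab, size=seq_len)
print(next_token_loss(logits, tokens))       # near log(50) for random logits
```

Training lowers this loss by pushing probability mass onto the tokens that actually follow in the data.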

Fine tuning and customization 

Customization techniques include full model fine-tuning where all weights in the model are adjusted to focus on task-specific data. It is also possible to fine-tune parameters and engineer prompts to focus on smaller modules, saving resources and enabling easier deployment.  

Training a pre-trained model based on domain-specific datasets allows the model to specialize on target tasks. This is easier and less resource-intensive than training a model from scratch since it leverages the base knowledge already learned by the model. 

Model deployment 

Deploying the LLM makes it available for real-world use, enabling users to interact with it. The model is deployed on local servers or cloud platforms using APIs to allow other applications or systems to interface with it. The model is scaled across multiple GPUs to handle the growing usage and improve performance [5]. The model is continually monitored, updated and maintained to ensure it remains current and accurate.   

Ethical and legal considerations 

The ethical and legal considerations are important in the development and deployment of LLMs. It is important that the LLM is unbiased and that it avoids propagating unfair and discriminatory outputs. This extends to discriminatory and harmful content which can be mitigated through reinforcement learning with human feedback (RLHF).  

Training data may contain sensitive and private information, and the larger the datasets used to train the model the greater the privacy risks they involve. It is essential that privacy laws are adhered to and followed to ensure the models can continue to evolve and develop while preventing unintended memorization or leakage of private information. 

Copyright and intellectual property must also be protected by ensuring that the proper licenses are obtained. Regular risk and compliance assessments and proper governance and oversight over the model life cycle can help mitigate ethical and legal issues.  

Conclusion 

Developing and deploying an LLM in 2025 requires a combination of technical, analytical and soft skills. Strong programming skills in Python, R and Java are critical to AI development. A deep understanding of machine learning and LLM architectures, including the foundational mathematical concepts underlying them, is also critical. It is also important to understand hardware architectures including CPUs, GPUs, TPUs and NPUs, so that the tasks performed by the LLM run on the most suitable hardware for efficiency, scalability and cost-effectiveness. 

Other skills related to data management, problem-solving, critical thinking, communication and collaboration, and ethics and responsible AI are also essential in ensuring the models remain useful and sustainable. 

References 

[1] The Ultimate Guide to Building Large Language Models 

[2] Feedforward Neural Networks 

[3] The architecture of today’s LLM applications  

[4] How to Build a Large Language Model: A Comprehensive Guide 

[5] Large Language Model Training in 2025  

The Evolution of Large Language Models: From Recurrence to Transformers

While LLMs and their revolutionary transformer technology continue to impress us with new milestones, their foundations are deeply rooted in decades of research conducted in neural networks at countless institutions, and through the work of countless researchers.

Introduction

Large Language Models (LLMs) have gained momentum over the past five years as their use proliferated in a variety of applications, from chat-based language processing to code generation. Thanks to the transformer architecture, these LLMs possess superior abilities to capture the relationships within a sequence of text input, regardless of where in the input those relationships exist.

Transformers were first introduced in a 2017 landmark paper titled “Attention is all you need” [1]. The paper introduced a new approach to language processing that applied the concept of self-attention to process entire input sequences in parallel. Prior to transformers, neural architectures handled data sequentially, maintaining awareness of the input through hidden states that were recurrently updated with each step passing its output as input into the next.

LLMs are an evolution of decades-old artificial intelligence technology that can be traced back to the mid-20th century. While the breakthroughs of the past five years in LLMs have been propelled by the introduction of transformers, their foundations were established and developed over decades of research in Artificial Intelligence.

The History of LLMs

The foundations of Large Language Models (LLMs) can be traced back to experiments with neural networks conducted in the 1950s and 1960s. In the 1950s, researchers at IBM and Georgetown University investigated ways to enable computers to perform natural language processing (NLP). The goal of this experiment was to create a system that allowed translation from Russian to English. The first example of a chatbot was conceived in the 1960s with “Eliza”, designed by MIT’s Joseph Weizenbaum, and it established the foundations for research into natural language processing.

Early NLP systems relied on simple models like the Perceptron, a simple feed-forward network without any recurrence. Perceptrons were first introduced by Frank Rosenblatt in 1958. A perceptron was a single-layer neural network, based on an algorithm that classified input into two possible categories and tweaked its predictions over millions of iterations to improve accuracy [3]. In the 1980s, the introduction of Recurrent Neural Networks (RNNs) improved on perceptrons by handling data sequentially while maintaining feedback loops in each step, further improving learning capabilities. RNNs were better able to understand and generate sequences through memory and recurrence, something perceptrons could not do [4]. Modern LLMs improved further on RNNs by enabling parallel rather than sequential computing.

In 1997, Long Short-Term Memory (LSTM) networks mitigated the vanishing gradient problem, enabling deeper and more complex neural networks that could learn from longer sequences and handle greater amounts of data. Fast forward to 2018, when a team of researchers at Google introduced the Bidirectional Encoder Representations from Transformers (BERT) model. BERT’s innovation was its bidirectionality, which allowed each token’s representation to take both its left and right context into account. This allowed the pre-trained BERT to be fine-tuned with just one additional output layer to create state-of-the-art models for a range of tasks [5].

From 2019 onwards, the size and capabilities of LLMs grew exponentially. By the time OpenAI released ChatGPT in November 2022, its GPT models had been growing at a staggering rate, eventually reaching an estimated 1.8 trillion parameters in GPT-4. These parameters are learned model weights that control how input tokens are transformed layer by layer, as discussed later in this article. ChatGPT allows non-technical users to prompt the LLM and receive a response quickly. The more the user interacts with the model, the more context it can build, allowing it to maintain a conversational style of interaction with the user.

The LLM race was on. All the key industry players began releasing their own LLMs to compete with OpenAI. In response to ChatGPT, Google released Bard, while Meta introduced LLaMA (Large Language Model Meta AI). Microsoft, which had partnered with OpenAI in 2019, built a version of its Bing search engine powered by ChatGPT. Databricks also released its own open-source LLM, named “Dolly”.

Understanding Recurrent Neural Networks (RNNs)

The defining characteristic of Recurrent Neural Networks (RNNs) is their memory, or hidden state. RNNs process input sequentially, token by token; at each step, the current input token and the current hidden state are combined to calculate a new hidden state. The hidden state acts as a running summary of the information seen so far in the sequence [6].

The recurrent structure of RNNs means that they perform the same computation at each step, with their internal state changing based on the input sequence. Therefore, if we have an input sequence x = (x1, x2, …, xt), the RNN updates its hidden state ht at time step t using the current input xt and the previous hidden state ht-1. This can be formulated as:

ht = tanh(Whh ht-1 + Wxh xt + bh)

Where:

  • ht is the new hidden state at time step t
  • ht-1 is the hidden state from the previous time step
  •  xt is the input vector at time step t
  •  Whh and Wxh are the shared weight matrices across all time steps for hidden-to-hidden and input-to-hidden connections, respectively.
  •  bh is a bias vector
  • tanh is a common activation function (hyperbolic tangent), introducing non-linearity. 
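The update rule described above, ht = tanh(Whh ht-1 + Wxh xt + bh), can be implemented directly in NumPy; the hidden size, input size and random weights below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, inp = 4, 3
W_hh = rng.normal(size=(hidden, hidden)) * 0.1   # hidden-to-hidden weights
W_xh = rng.normal(size=(hidden, inp)) * 0.1      # input-to-hidden weights
b_h = np.zeros(hidden)                           # bias vector

def rnn_step(h_prev, x_t):
    # h_t = tanh(W_hh · h_{t-1} + W_xh · x_t + b_h)
    return np.tanh(W_hh @ h_prev + W_xh @ x_t + b_h)

# Process a sequence token by token, carrying the hidden state forward
h = np.zeros(hidden)
for x_t in rng.normal(size=(5, inp)):            # a 5-step input sequence
    h = rnn_step(h, x_t)
print(h.shape)  # (4,)
```

Note that the same weight matrices are reused at every step; only the hidden state changes as the sequence is consumed.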

This process can be visualized by breaking out the RNN timeline to show a snapshot of the system at each step in time, and how the hidden state is updated and passed from one step to the next.

Figure 1. An RNN unrolled through time. At each time step t, the RNN cell combines the input token with the hidden state from time t-1 to produce a new hidden state, which is passed as input to the next step.

Limitations of RNNs

The sequential nature of RNNs limits their ability to process tasks that contain long sequences. This may be one of the main limitations that gave rise to the development of the transformer architecture and the need to process sequences much faster and in parallel. The following are some of the main limitations of RNNs.

Limitations in modeling long-range dependencies: RNNs are limited in capturing dependencies between elements within long sequences, primarily due to the vanishing gradient problem. As gradients (error signals) flow backwards from later time steps, they shrink with each step; the longer the sequence, the fainter the signal becomes by the time it reaches the earliest steps. As a result, the network struggles to learn the relationships between early inputs and later outputs.

Sequential processing: RNNs process sequences token by token, in order. Hidden states must also be processed sequentially such that to obtain the hidden state at time t, the RNN must use the hidden state from t-1 as input. Modern hardware like GPUs and TPUs are well equipped to work with parallel computation. RNNs are unable to make use of this hardware due to their sequential processing, which leads to longer training times compared to parallel architectures.

Fixed size of hidden states: In the sequence-to-sequence model, the encoder must process and compress the entire input sequence into a single fixed-size vector. This vector is then passed to the RNN decoder, which generates the output. Compressing potentially long and complex input sequences into a fixed-size vector is challenging: the network cannot retain every detail of the input, so information that is important for producing the correct output may be lost.

How Transformers Replaced Recurrence

The limitations of RNNs in optimizing learning over large sequences and their sequential processing gave rise to the transformer architecture. Instead of sequential processing, transformers introduced self-attention, enabling the network to learn from any point in the sequence, regardless of distance.

Self-attention in transformers is analogous to the way humans process long sequences of text. When we attempt to translate text or process complex sequences, we do not read the entire text and then attempt to translate or understand it. Instead, we tend to go back and review parts of the text that we determine are most relevant to our understanding of it, so that we can generate the output we are trying to produce. In other words, we pay attention to the most relevant parts of the input that will help us generate the output. Transformers apply global self-attention that allows each token to refer to any other token, regardless of distance. Additionally, by taking advantage of parallelization, transformers introduce features such as scalability, language understanding, deep reasoning and fluent text generation to LLMs that were never possible with RNNs.

How Transformers Pay Attention

Self-attention enables the model to refer to any token in the input sequence and capture any complex relationships and dependencies within the data [7]. Self-attention is computed through the following steps.

1. Query, Key, Value (Q,K,V) matrices
  • Query (Q): Represents what current information is required or must be focused on. This can be described by asking the question “What information is the most relevant right now?”
  • Key (K): Keys act as identifiers for each input element. They are compared against the input sequence to determine relevance. This is analogous to asking the question “does this input token match the information I am looking for?”
  • Value (V): Values are also associated with each input token and represent the content or the meaning of that token. Values are weighted and summed to produce the context vector, which can be described as “this is the information I have”.

The model performs a lookup for each Query across all Keys. The degree to which a Query matches a Key determines the weight assigned to the corresponding Value. The model then calculates an attention score that determines how much attention each token should receive when generating predictions. The attention scores are used to compute a weighted sum of all the Value vectors from the input sequence, producing a single context vector.

Figure 2. Calculating the weighted sum vector. Attention scores are multiplied by the Value vectors, which are then summed to produce the weighted sum vector.

We have already discussed how traditional RNNs struggle to retain information from distant input due to the sequential nature of their hidden state. Attention, on the other hand, weights every input and sums the results, so the resulting vector incorporates information from all inputs with the proper weights assigned to them. This gives the model the context of the entire input while focusing on the most relevant information in the sequence, regardless of distance.
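The weighted-sum step shown in Figure 2 takes only a few lines of NumPy; the scores and Value vectors here are small illustrative numbers, not the output of a real model:

```python
import numpy as np

scores = np.array([2.0, 0.5, 0.1])        # attention scores for 3 tokens
V = np.array([[1.0, 0.0],                 # one Value vector per token
              [0.0, 1.0],
              [1.0, 1.0]])

weights = np.exp(scores) / np.exp(scores).sum()   # softmax: sums to 1
context = weights @ V                             # weighted sum of Values
print(weights.round(3), context.round(3))
```

The token with the highest score dominates the context vector, but every token still contributes something, which is how attention keeps the whole sequence in view.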

2. Multi-head attention

When we consider a sentence, we do not consider it one word at a time. Instead, we look at each specific word in the sentence and consider whether it is the subject or the object. We also consider the overall grammar to make sense of the sentence and what it is trying to convey.

The same analogy applies when calculating attention. Instead of performing a single attention calculation for the Q, K, V vectors, multiple calculations are performed each on a single attention head, such that each head looks at a different pattern or relationship in the sequence. This is the concept of multi-head attention which allows the parallel processing of the Q, K, V vectors. It allows the model to look at different patterns or relationships within the sequence.

3. Masked multi-head self-attention

Masking ensures that the head focuses only on the tokens received so far when generating output, without looking ahead into the input sequence to generate the next token.

  • Attention score: The dot product of the Q and K matrices is used to determine the alignment of each Query with each Key, producing a square matrix reflecting the relationship between all input tokens.
  • Masking: A mask is applied to the positions of the attention matrix that the model is not allowed to access, preventing it from ‘peeking’ when predicting the next token. 
  • Softmax: After masking, each row of attention scores is converted into a probability distribution using the Softmax function, so that each token’s attention weights over the positions it may attend to sum to 1. Softmax appears again at the model’s output layer. There, the final representation is projected into a vector of scores called logits, with one element per token in the model’s vocabulary; for example, if the model has a vocabulary of 50,000 words, the logits vector has a dimension of 50,000. The Softmax function takes the logits vector as input and outputs the model’s predicted probability distribution over the entire vocabulary for the current position in the output sequence. 
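The masking and Softmax steps above can be sketched in NumPy; the 4×4 score matrix is an illustrative stand-in for the Q·K dot products:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

scores = np.ones((4, 4))                     # raw attention scores
mask = np.triu(np.ones((4, 4)), k=1)         # 1s above the diagonal = future
masked = np.where(mask == 1, -np.inf, scores)  # forbid attending ahead
weights = softmax(masked)                    # -inf becomes zero weight
print(weights.round(2))
```

Row t ends up attending only to tokens 0 through t, and each row still sums to 1, which is exactly the ‘no peeking’ behaviour masking is meant to enforce.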

When it calculates attention for the Q, K, V vectors, the model does not recalculate attention for the same original Q, K, V vectors. Rather, it learns separate linear projections for each head. If we have h attention heads, then each head i learns the projection matrices

WiQ, WiK and WiV, and computes its own projected queries, keys and values: Qi = Q WiQ, Ki = K WiK, Vi = V WiV.

Each head performs the scaled Dot-Product Attention calculation using its projected Qi, Ki, Vi:

headi = Attention(Qi, Ki, Vi) = softmax(Qi KiT / √dk) Vi

Where dk is the dimension of the Ki vectors within each head. Each projection of Qi, Ki, Vi allows a head to focus on and learn from a different representation of the original input. By running these calculations in parallel, the model can learn about different types of relationships within the sequence.

4. Output and concatenation

The final step is to concatenate the outputs from all attention heads and apply a linear projection, using a learned weight matrix, to the concatenated output. The projected output then passes through a feed-forward layer and is normalized back to the model dimension before it is passed deeper into additional layers of the network [8].
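Putting the per-head projections, scaled dot-product attention, concatenation and output projection together, a minimal NumPy sketch (head count, dimensions and random weights are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_heads, seq_len = 8, 2, 5
d_k = d_model // n_heads                       # per-head dimension

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def head(x, Wq, Wk, Wv):
    Qi, Ki, Vi = x @ Wq, x @ Wk, x @ Wv        # per-head projections
    scores = Qi @ Ki.T / np.sqrt(d_k)          # scaled dot-product
    return softmax(scores) @ Vi

x = rng.normal(size=(seq_len, d_model))        # embedded input tokens
heads = [head(x,
              rng.normal(size=(d_model, d_k)),
              rng.normal(size=(d_model, d_k)),
              rng.normal(size=(d_model, d_k))) for _ in range(n_heads)]

W_o = rng.normal(size=(d_model, d_model))      # learned output projection
out = np.concatenate(heads, axis=-1) @ W_o     # concat back to d_model
print(out.shape)  # (5, 8)
```

Each head produces a (seq_len, d_k) slice; concatenation restores the full model dimension so the result can flow into the next layer.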

Conclusion

There is no doubt that transformers have revolutionized the way LLMs are deployed and applied in a variety of applications, including chatbots, content creation, agents and code completion. With ever-growing numbers of parameters and an architecture designed for scalability and parallel computing, we are only beginning to discover the breadth of applications transformers can have.

As the challenges facing LLMs continue to be overcome, such as the ethical and environmental concerns, we can expect them to continue to become more efficient, more powerful and ultimately more intelligent. While LLMs and their revolutionary transformer technology continue to impress us with new milestones, their foundations are deeply rooted in decades of research conducted in neural networks at countless institutions, and through the work of countless researchers.

References

[1] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin. “Attention Is All You Need.” In Proceedings of the 31st Conference on Neural Information Processing Systems (NeurIPS 2017). arXiv:1706.03762 cs.CL, 2017

[2] The history, timeline, and future of LLMs

[3] Cornell Chronicle

[4] Deep Neural Networks: The 3 Popular Types (MLP, CNN and RNN)

[5] Large language models: their history, capabilities and limitations

[6] RNN Recap for Sequence Modeling

[7] Transformer Explainer

[8] What is a transformer model?

Large Language Models: Principles, Examples, and Technical Foundations

Introduction

Large Language Models (LLMs) are Artificial Intelligence algorithms that use massive data sets to summarize, generate and reason about new content. LLMs use deep learning techniques capable of a broad range of Natural Language Processing (NLP) tasks, such as text analysis, question answering, translation, classification, and generation [1][2].

Put simply, LLMs are computer programs that can interpret human input and complex data extremely well given large enough datasets to train and learn from. The generative AI technologies that have been enabled by LLMs have transformed how organizations serve their customers, how workers perform their jobs and how users perform daily tasks when searching for information and leveraging intelligent systems.

The core principles of LLMs

LLMs are built on a set of neural networks based on the transformer architecture. Each transformer consists of encoders and decoders that can understand text and the relationships between words and phrases in it. The transformer architecture relies on the next-word prediction principle, predicting the most probable next word based on a text prompt from the user. Transformers can process sequences in parallel, which enables them to learn and train much faster [2][3][4]. This is due to their self-attention mechanism, which enables transformers to process sequences and capture distant dependencies much more effectively than previous architectures.

The transformer architecture consists of three key components:

  • Embedding: To generate text using the transformer model, the input must be converted into a numerical format that the model can understand. This process involves four steps: 1) tokenizing the input, breaking it down into smaller, more manageable pieces; 2) embedding the tokens in a matrix that allows the model to assign semantic meaning to each token; 3) encoding the position of each token in the input prompt; 4) producing the final embedding by summing the token embeddings and positional encodings, capturing both the meaning and position of the tokens in the input sequence.
  • Transformer block: comprises a multi-head self-attention layer and a multi-layer Perceptron layer. Most models stack these blocks sequentially, allowing the token representations to evolve through layers of blocks, which in turn allows the model to build an intricate understanding of each token.
  • Output probabilities: Once the input has been processed through the transformer blocks, it passes through a final layer that prepares it for token prediction. This step projects the final representation into a dimensional space where each token in the vocabulary is assigned a likelihood of being the next word. A probability distribution is applied to determine the next token based on its likelihood, which in turn enables text generation.

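The three components above can be sketched in miniature with NumPy; the vocabulary size, dimensions and random weights are illustrative assumptions, and a real model would apply many transformer blocks between the embedding and output steps:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d_model, seq_len = 10, 4, 3

E = rng.normal(size=(vocab, d_model))          # token embedding matrix
P = rng.normal(size=(seq_len, d_model))        # positional encodings
W_out = rng.normal(size=(d_model, vocab))      # final projection to logits

tokens = np.array([3, 1, 7])                   # token ids for the prompt

# Embedding: look up token vectors and add positional encodings
x = E[tokens] + P

# (the stack of transformer blocks would transform x here)

# Output probabilities: project to logits, softmax over the vocabulary
logits = x[-1] @ W_out                         # last position predicts next
probs = np.exp(logits - logits.max())
probs /= probs.sum()
next_token = int(probs.argmax())
print(next_token, probs.round(3))
```

Sampling from `probs` instead of taking the argmax is what makes generated text vary from run to run.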
LLM applications

The transformer architecture allows LLMs to achieve a massive scale of billions of parameters. LLMs begin with pre-training on large datasets of text, grammar, facts and context. Once pre-trained, the models undergo fine-tuning where labeled datasets are used to adapt the models to specific tasks. The ability of LLMs to use billions of parameters, combined with their efficient attention mechanisms and their training capabilities, allows LLMs to power modern AI applications such as chatbots, content creation, code completion and translation.

Text generation: Chatbots and content creation

Text generation is one of the most prominent applications of LLMs where coherent and context-relevant text is automatically produced. This application of LLMs powers chatbots, like ChatGPT, that interact with users by answering questions, providing recommendations, generating images and conducting research [5].

GPT-4.5 and GPT-4o feature multimodal capabilities, allowing them to handle text and images for versatile use across applications. Both can process roughly 25,000 words of text, though the computational resources they require differ.

By leveraging their vast datasets, LLMs are also used for content creation such as social media posts, product descriptions and marketing. Tools like Copy.ai and Grammarly use LLMs to generate marketing copy, and assist with grammar and text editing. DeepL Translator uses LLMs trained on linguistic data for language translation.

Agents

Agentic LLMs refer to conversational programs such as chatbots and intelligent assistants that use transformer-based architectures and Recurrent Neural Networks (RNNs) to interpret user input, process sequential data such as text, and generate coherent, personalized responses [6]. Personalized responses to input text are achieved through context-awareness and analyzing conversations.

Agentic LLMs are also capable of managing complex workflows and can collaborate with other AI agents for better analysis. Vast datasets can be leveraged to support a variety of domains such as healthcare, finance and customer support.

Code completion

Code completion is a leading application of LLMs that uses the transformer architecture to generate and suggest code by predicting the next tokens, statements or entire code blocks. In this context, transformer models are trained using self-attention mechanisms to enable code understanding and completion predictions [7]. An encoder-decoder transformer model is used such that the input is the code surrounding the cursor (converted into tokens), and the output is a set of suggestions to complete the current line or multiple lines.

Challenges and future directions

Large Language Models are still facing challenges related to ethical and privacy concerns, maintaining accuracy, avoiding bias, and managing high resource consumption [8].

  • Ethical concerns: LLMs are trained on massive datasets. There are still open questions as to who can use these datasets, and how and when they can be used. These datasets can also be biased and lead to biased output from LLMs, which can lead to misinformation and hate speech.
  • Data privacy: The use of massive datasets containing large amounts of user data poses significant privacy concerns. Safeguards in the use of data are required to train a model without compromising user privacy. As the use of LLMs becomes more mainstream, and as the size of datasets used to train them increases, so do the privacy concerns around their use.
  • Output bias: Existing biases in the available training data can cause LLMs to amplify those biases, leading to inaccurate and misleading results. This is particularly important for areas that require objective data analysis and output, such as law, healthcare and economics.
  • Hallucinations: LLMs are prone to “hallucinations” where the model output may seem reasonable, yet the information provided is incorrect. Hallucinations can be addressed through better training and validation methodologies to enhance the reliability of generated content.
  • Environmental impact: Training and deploying LLMs requires an extensive amount of energy resources, leading to increased carbon emissions. There is a need to develop more efficient algorithms while also investing in renewable and efficient energy generation that will lower the carbon footprint of LLMs, especially as their use and application accelerate.

Addressing these and other challenges such as regulatory compliance, security and cyber attacks will ensure that LLMs continue to use the correct input datasets while producing the correct output in an ethical, fair and unbiased manner. The integration of domain-specific knowledge through specialized fine tuning will also enable LLMs to produce more accurate and context-aware information that will maximize their benefits.

Conclusion

LLMs power a variety of applications, from chatbots and content creation to code completion and domain-specific automation. Using the transformer architecture and vast datasets to train and learn, they have emerged as a transformative discipline of artificial intelligence. LLMs have proven their outstanding capabilities in understanding, generating, and reasoning with natural language. While there are challenges to overcome, such as bias, accuracy, environmental impact, and domain specialization, LLMs are expected to become more efficient and trustworthy as algorithms improve and innovations are achieved through better fact-checking and human oversight.

References

[1] What are large language models (LLMs)

[2] What are large language models (LLMs)

[3] What is LLM (Large Language Model)?

[4] Transformer Explainer

[5] 10+ Large Language Model Examples & Benchmark 2025

[6] Chatbot Architecture: RNNs and Transformers

[7] ML-Enhanced Code Completion Improves Developer Productivity

[8] Raza, M., Jahangir, Z., Riaz, M.B. et al. Industrial applications of large language models. Sci Rep 15, 13755 (2025). https://doi.org/10.1038/s41598-025-98483-1

Retail Is Entering Its Agentic AI Era

The retail landscape is being rapidly transformed by agentic AI programs that are driving a competitive race to lead in autonomy, speed and personalized customer experiences. In 2025, retailers that do not move quickly and aggressively to implement agentic AI across their business functions risk being left behind, or worse, forced to exit.

Introduction

AI agents are redefining retail, evolving into autonomous assistants that plan, recommend and take action. One of the most prominent examples of this shift is Walmart’s “Sparky”, a conversational AI shopping assistant in the mobile app that can understand customers’ shopping needs, suggest relevant products, answer questions and provide recommendations based on preferences [1]. Walmart is betting big on AI to drive its e-commerce growth and is aiming for online sales to account for 50% of its total sales [2].

Amazon, another retail giant, is using AI on a different scale by creating a harmonious ecosystem of AI and machine learning (ML) models across the different functional areas of the business. For example, demand forecasting is accomplished using models that leverage sales history, social media, economic trends and weather to predict demand more accurately. ML algorithms use data from across the supply chain to optimize stock levels and replenishment strategies, ensuring alignment with predicted demand. Amazon is also using AI to automate inventory management, deploying AI-driven robots to manage the movement of goods within warehouses. Other AI models optimize delivery routes in real time using inputs such as traffic conditions and weather [3].
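
The chained-models pattern described above can be sketched in a few lines: one model's demand forecast becomes the input to a replenishment decision. This is a toy illustration, not Amazon's actual system; all function names, feature weights and quantities are invented for the example.

```python
# Sketch: a demand forecast whose output feeds a replenishment decision,
# mirroring the "one model's output is another model's input" pattern.
# The linear model and all numbers are illustrative.

def forecast_demand(history_avg, trend_factor, weather_boost, social_buzz):
    """Toy linear demand model combining the signal types mentioned above."""
    return history_avg * trend_factor + weather_boost + social_buzz

def replenishment_order(forecast, on_hand, safety_stock):
    """Order enough to cover forecast demand plus a safety buffer."""
    return max(0, round(forecast + safety_stock - on_hand))

demand = forecast_demand(history_avg=120, trend_factor=1.25,
                         weather_boost=8, social_buzz=4)
order = replenishment_order(demand, on_hand=60, safety_stock=25)
print(demand, order)  # 162.0 127
```

In a production ecosystem each of these steps would be a separately trained model, but the data flow between them is the same.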

Retailers that make use of AI and ML will maintain a competitive edge; those that do not risk being left behind or forced to exit. Amazon’s approach of creating an ecosystem in which the output from one AI model feeds into another ensures that the business continues to add efficiencies and boost future profitability. Across the U.S., retailers are investing heavily in AI agents, with 83% of companies claiming AI is a top priority in their business plans [4].

These statistics raise an interesting question: what if every customer and every employee had their own AI agent, helping find products and optimize the shopping experience, or helping with labor-intensive tasks? AI agents are evolving from pilot projects to front-line, business-critical applications, enabling businesses to gain a competitive edge and attract customers with better online shopping experiences.

What Are AI Agents? 

In the context of AI, “agentic” refers to autonomous systems capable of making decisions and acting independently. AI agents are a more advanced form of AI that can make decisions and take actions with little or no human intervention. Agentic AI can combine multiple interconnected AI agents that are continuously learning, reasoning and acting proactively. Businesses can customize AI agents to meet their needs, given the flexibility and adaptability of AI agents for a wide range of industries and applications [5][6]. 

The key features of agentic AI include: 

  • Autonomy: the ability to work autonomously to analyze data and solve problems in real-time with little human intervention. 
  • Collaboration: the ability of multiple AI agents to work together leveraging Large Language Models (LLMs) and complex reasoning to solve complex business problems. 
  • Learning and adaptation: dynamically evolving by interacting with its environment, and refining strategies, based on feedback and real-time data. 
  • Reasoning and proactivity: identifying issues and forecasting trends to make decisions such as reordering inventory or resolving customer complaints.  
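
The features above reduce to a simple control loop: an agent senses its environment, reasons over what it observes, and acts without waiting for a human. The sketch below is a minimal illustration of that loop; the inventory data and the reorder rule are hypothetical.

```python
# Minimal sketch of an agent's sense -> reason -> act loop.
# The state, threshold and action schema are invented for illustration.

def sense(state):
    """Observe the environment (here, a stock level for one SKU)."""
    return {"sku": state["sku"], "on_hand": state["on_hand"]}

def reason(observation, reorder_point=50):
    """Proactive rule: flag a reorder before stock actually runs out."""
    if observation["on_hand"] < reorder_point:
        return {"action": "reorder", "sku": observation["sku"], "qty": 100}
    return {"action": "wait"}

def act(decision, log):
    """Execute the decision and record it for later audit."""
    log.append(decision)
    return decision

log = []
state = {"sku": "A-17", "on_hand": 32}
decision = act(reason(sense(state)), log)
print(decision["action"])  # reorder
```

Real agentic systems add learning (updating the rule from feedback) and collaboration (multiple such loops exchanging messages), but the skeleton is the same.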

The adoption of Agentic AI in 2025 is gaining momentum as businesses aim to move from insight to action at greater speed and efficiency. Agentic AI solves the problem of scarce human resources needed to deal with the growing volume, complexity and inter-dependence of data. By moving at the speed of machine computation, agentic AI allows businesses to be more agile in real-time, act on business-critical insights more quickly, and scale more rapidly.  

The competitive edge introduced by agentic AI is driving its rapid adoption, which can be attributed to the following factors [7][8][9]: 

  • Speed: Businesses must react to customer needs, supply chain factors and market conditions at unprecedented speeds in 2025. It is no longer sufficient to use traditional AI that still relies on human intervention. Agentic AI can not only forecast problems, such as customer issues or supply chain disruptions, but also act to resolve them before they occur. 
  • Reduce reliance on humans, not replace them: Agentic AI does not aim to replace humans and take away jobs, but rather to augment them. It acts as a co-worker that enhances productivity by focusing on analysis of repetitive data-intensive processes, creating forecasts that enable faster decision-making, and enabling employees to focus on business strategy and the creative, innovative decisions that will allow the business to continue to grow. Agentic AI allows businesses to increase performance while cutting costs without the need for increased human intervention. 
  • Cost reduction and improved ROI: Agentic AI is also unlocking vast opportunities for cost reduction, through quick evaluation of data, testing strategies and adjusting operations in real-time. By automating repetitive and data-intensive processes, AI agents reduce the dependence on manual labor, minimize errors that translate to rework and add cost-effectiveness and efficiency that in turn result in higher ROI.  
  • Enhanced customer experience: AI agents are capable of contextual understanding, proactive assistance and continuous learning. This allows them to boost customer satisfaction and loyalty by offering instant, real-time assistance and answers to customers’ queries while reducing wait times and improving resolution rates.  
  • Businesses must adapt or die: Agentic AI allows businesses to remain at the forefront of their market by learning and adapting in real time. In 2025, customers expect instant and personalized service. It is becoming easier for businesses to integrate agentic AI into their various systems, especially with the introduction of the Model Context Protocol (MCP), an integration framework that enables intelligent agents to interact with external systems in a standardized, secure, and contextual way. User-friendly applications allow businesses to connect and deploy AI agents via a visual workflow builder without coding. Businesses have the opportunity to adapt by leveraging the technologies and capabilities available to them today to implement agentic AI.   
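
The core idea behind standardized agent-system integration, whatever protocol implements it, is that every external system is exposed to the agent through a uniform, described, auditable interface. The toy registry below illustrates that idea only; it is not the actual MCP API, and the tool names are invented.

```python
# Sketch of standardized tool exposure for an agent: every external
# action goes through one described, uniform entry point.
# This is a conceptual illustration, not a real MCP implementation.

tools = {}

def register_tool(name, description, fn):
    """Describe an external capability so an agent can discover it."""
    tools[name] = {"description": description, "fn": fn}

def call_tool(name, **kwargs):
    """A single, auditable entry point for every external action."""
    return tools[name]["fn"](**kwargs)

# Hypothetical inventory lookup exposed as a tool.
register_tool("check_stock", "Return on-hand quantity for a SKU",
              lambda sku: {"sku": sku, "on_hand": 42})

result = call_tool("check_stock", sku="A-17")
print(result["on_hand"])  # 42
```

Funneling every action through one entry point is also what makes the governance and audit requirements discussed later in this article tractable.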

The following table illustrates how AI is being implemented across various areas within Retail. 

Functional Area | Applications | Examples
Customer experience | Personalize services, answer questions, and process orders; offer product and project guidance; smart kiosks assist with product search, availability, and location; AI delivers instant answers, recommendations, and smoother shopping | Walmart’s “Sparky” suggests products and summarizes reviews; Lowe’s AI assistant offers DIY and product support via its app; H&M’s chatbot recommends outfits, boosting satisfaction by 30% [10]
Inventory Management | Streamline store operations and inventory management; AI robots track stock and automate restocking; smart shelves auto-detect low stock and reorder; forecast demand using sales and market data; AI schedules staff based on foot-traffic forecasts; video analytics detect theft and safety issues | Zara’s AI cut stockouts by 20% and excess inventory by 15% by forecasting demand from sales, customer behavior and market trends; Walmart uses robots for real-time shelf scanning; Home Depot’s AI helps staff quickly access the information needed to assist customers
Supply Chain | Adjust orders and routing using sales, weather, and trend data; track shipments, suppliers, and logistics for full supply chain visibility; improve forecasting to optimize supply chain operations; cut costs by aligning forecasts with supply chain efficiency | Kroger’s AI forecasting cut food waste by 10% and improved inventory accuracy by 25%; Unilever’s AI use reduced supply chain costs by 15% and improved delivery times by 10%; Walmart also achieved major gains through AI-driven supply chain improvements
Marketing | Agentic AI manages end-to-end customer journeys across commerce, content, loyalty, and service [11]; AI interprets real-time journey data to adapt marketing strategies; retailers use AI insights to keep campaigns fast, relevant, and effective; AI analyzes feedback to spot improvements and cut manual tasks | Nike uses AI to predict purchases and personalize marketing, boosting engagement by 20% and driving sales; Coca-Cola uses predictive analytics to shift budget to high-performing channels, increasing Instagram spend by 20% and sales by 15%
Table 1: Retail Examples Where AI Is Already Driving Impact

What Executives Should Do To Drive The Agentic AI Shift 

AI agents are changing how organizations can deliver value to their customers, improve customer experience and manage risks. Executives are becoming increasingly aware that agentic AI is not just an automation tool, but rather a new way to drive deep business innovation and, if harnessed correctly, a way to maintain a competitive advantage. 

Executives must lead the organization’s shift towards agentic AI by aligning governance and priorities to support the required IT and data investments. To facilitate this shift, the CEO must focus on [12][13]: 

  • Invest in labor and technical infrastructure: remove the barriers across the organization’s various systems so that AI agents can operate across functional areas, and upskill and retrain the workforce to work with the new technologies that agentic AI introduces. 
  • Lead the organizational shift: establish the goals and intended value of using agentic AI in the organization, and how it is to be used as a partner in creating value. The goal should not simply be optimizing headcount and reducing costs; it is about leading the organization into the future of retail. 
  • Highlight key projects: by spearheading high-value projects in areas such as supply chain management, operations and customer service, the CEO can build momentum and rally resources, and can demonstrate the value of agentic AI by tracking key KPIs. 
  • Oversee risk, compliance, and ethics: it is essential for the CEO to oversee all regulatory, privacy, transparency and risk issues related to the adoption of agentic AI. This allows the organization to proceed with confidence in implementing the necessary technical and IT infrastructure, and to realize the value and gains from agentic AI quickly and efficiently.  

It is important to note that organizations that can quickly adopt and adapt to agentic AI will gain the competitive edge. The value proposition for executives in adopting this technology can be summarized in the following key elements: 

  • Business transformation through automation and productivity: Agentic AI goes beyond the capabilities of generative AI and can handle complex workflows through autonomous decision-making. Staff can work alongside AI agents and use their output while focusing on strategic, high-value tasks, boosting workers’ productivity and making efficient use of their time.  
  • Gaining a competitive edge: AI agents work continuously adapting to real-time issues, learning and making decisions quickly. This enhances customer experience, boosts innovation and resilience against market changes.  
  • Boost ROI and increase revenues: Studies have shown that agentic AI contributes up to 18% improvement in customer satisfaction, employee productivity, and market share, with $3.50 in return for every $1 invested realized over a 14-month payback period [14]. This is driven primarily by redirecting human resources from focusing on repetitive low-value tasks to more strategic and high-value ones.  

  • Enable rapid scaling and agility: AI agents can help lead the transformation of the organization to be more forward-looking and competitive, by driving business transformation, upskilling the workforce and enabling the rapid scaling and adaptation of business models. 

Implementation Priorities: How to Get Started 

The diagram below illustrates the interconnected functional areas and how they intersect with inventory management in an omnichannel retail environment. The data flowing between these functions feeds AI models, which generate the insights needed to optimize inventory, fulfillment, and customer responsiveness. 

 Figure 1: Inventory Management across Functional areas in Retail

The table below outlines key functional areas, the associated data points, and how AI is applied to enhance operational efficiency. 

Inventory Layer | Key Data Points | AI Usage to Improve Efficiency
Factory / Seller* | Proforma invoice; commercial invoice; packing list | Predict lead times and invoice anomalies; detect supply risk patterns
Shipper | Advanced Shipping Notice (ASN) | Predict shipment delays; optimize dock scheduling at the warehouse
Warehouse | Putaway status; inventory quantity & location; SKU detail; cycle count accuracy; labor handling time | Predict slotting needs; detect discrepancies; optimize workforce allocation
Available Inventory | On-hand quantity; committed vs. free inventory; safety stock levels | Dynamic available-to-promise (ATP) calculation; reallocation suggestions; overstock/stockout alerts
Allocation | Demand forecasts; store sales velocity; promotion calendar | Optimize first allocation; recommend flow-through allocation
Replenishment | Sell-through data; min/max thresholds; lead times | Auto-trigger replenishment; predict out-of-stock risk; dynamic reorder points
Store Inventory | Store on-hand inventory; returns & damages; shelf vs. backroom split | Optimize replenishment routing; detect phantom inventory
Customer Order | SKU ordered; delivery preference; fulfillment location | Predict best fulfillment node (e.g., ship-from-store vs. DC); reduce split shipments
Fulfillment / Distribution | Pick time; delivery time; on-time %; exception logs | Route optimization; predict fulfillment delays; auto rerouting
Reorder Loop | Real-time sales data; inventory velocity; reorder frequency | Adaptive reorder intervals; prevent overstock/stockouts
Table 2: How Data Enables AI to Improve Inventory Across the Supply Chain
*Assumes FOB Incoterms  
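
Two of the calculations named in the table, dynamic reorder points and the ATP figure, are standard inventory theory and can be sketched directly. The formulas below are the textbook versions; the demand numbers, lead times and the z = 1.65 service-level factor are illustrative, not drawn from any retailer's system.

```python
import math

# Sketch of two table entries: a reorder point with safety stock,
# and a simple available-to-promise (ATP) quantity.
# Textbook formulas; all input numbers are illustrative.

def reorder_point(daily_demand, lead_time_days, demand_std, z=1.65):
    # Safety stock covers demand variability over the lead time;
    # z = 1.65 targets roughly a 95% service level.
    safety_stock = z * demand_std * math.sqrt(lead_time_days)
    return daily_demand * lead_time_days + safety_stock

def available_to_promise(on_hand, committed, scheduled_receipts=0):
    # Stock that can still be promised to new orders.
    return on_hand - committed + scheduled_receipts

rop = reorder_point(daily_demand=40, lead_time_days=4, demand_std=10)
atp = available_to_promise(on_hand=500, committed=320, scheduled_receipts=100)
print(round(rop), atp)  # 193 280
```

A "dynamic" reorder point simply means an AI model re-estimates `daily_demand` and `demand_std` continuously from live sales data instead of using fixed planning values.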

Implementing Agentic AI follows a multi-phased approach that integrates technology, business and culture. This approach can be iterative and repeated as necessary depending on the complexity and scope of the processes being automated [15]. 

Readiness ➡ Design ➡ Pilot ➡ Scale 

Assessing readiness 

Assessing readiness involves evaluating and auditing workflows, data infrastructure and IT capabilities to ensure compatibility with agentic AI requirements. This includes ensuring that AI model outputs will be compatible with the organization’s future audit needs and that the IT infrastructure can support the models’ data requirements.  

It is also important to evaluate the company’s culture and assess the adaptability and openness to automation. This is a good opportunity to address any resistance and skill gaps through education and training to ensure that teams see the value agentic AI will add to their work. 

The readiness phase is also a good opportunity to identify high-impact business use cases that can be used to pilot agentic AI processes and, as these processes are further developed and defined, to scale them to the rest of the organization.     

Design 

The design phase defines objectives and scope, secures leadership buy-in, and ensures that data systems are properly integrated to meet the needs of the agentic AI models.  

  • Defining scope and objectives involves setting clear and measurable business goals and aligning AI initiatives with the overall company strategy. This is best achieved by identifying key business processes and applications that could provide the highest impact, show the best ROI and serve as the benchmark for future projects and applications. 
  • Securing leadership and cross-functional team buy-in is also critical in ensuring that AI models are fully adopted into the various business processes, and that communicated ROIs are realized to their fullest potential. This is achieved by securing sponsorship at the executive level and assembling multi-disciplinary teams from IT, data science and engineering, operations and compliance. It is essential that clear, attainable and measurable ROIs are communicated so that teams work collectively towards the defined goals and objectives.  
  • Mapping data and systems integration ensures that agentic AI systems have easy, real-time access to data across various silos, including CRM, ERP and other cloud applications. This is essential in allowing agentic AI models to ingest all the data their algorithms require and produce accurate, timely outputs to guide decisions. Close attention must be paid to upgrading the security of all systems as they are integrated, to ensure that no vulnerabilities are introduced in the process. 

Pilot 

Deploy the AI models in a contained environment where live data can be collected for training. This is the opportunity to train, fine-tune and iterate on the agents to ensure they produce accurate output, meet ROI targets and achieve compliance. Correct errors in the models and algorithms, monitor output and behavior, and document outcomes.  
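
A pilot needs a concrete pass/fail signal. One common approach, sketched below under assumed numbers, is to compare agent forecasts against actuals using mean absolute percentage error (MAPE) and flag the model for retraining when error exceeds a threshold; the 15% threshold here is an arbitrary example, not a standard.

```python
# Sketch of a pilot-phase check: compare agent forecasts with actuals
# and flag the model for retraining when error exceeds a threshold.
# The data and the 15% MAPE threshold are illustrative.

def mape(actuals, forecasts):
    """Mean absolute percentage error (actuals must be nonzero)."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def evaluate_pilot(actuals, forecasts, threshold=0.15):
    error = mape(actuals, forecasts)
    return {"mape": round(error, 3), "retrain": error > threshold}

report = evaluate_pilot(actuals=[100, 80, 120], forecasts=[90, 88, 110])
print(report)  # {'mape': 0.094, 'retrain': False}
```

Logging each such report over the pilot period produces exactly the documented outcomes the scale phase needs to justify wider rollout.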

Scale 

Scale the phased approach across additional business functions and processes while increasing integration across the various AI agents as they are scaled. Continue to retrain agents and monitor their performance and output, paying close attention to monitoring and updating the risks and adding controls as necessary. It is also essential to continue to train and upskill employees to enable them to collaborate productively with agents. 

Risks, Realities, and Responsible Scaling 

Agentic AI is projected to automate up to 15% of day-to-day enterprise decisions by 2028, and potentially resolve 80% of standard customer service issues [16]. However, this also introduces a large risk surface, especially for critical systems.  

  • Increased cyber-attack and security risks – agentic AI systems are designed to act autonomously across multiple systems, with access to various data silos across the organization. This creates a multitude of entry points and vulnerabilities for traditional cyber threats such as data leaks and hijacking. More novel, emergent threats can also be introduced, such as “agent hijacking”, in which malicious software takes control of agent behavior, directing it to perform unauthorized actions, access data, and potentially collaborate with other agents through interactions that are difficult to detect and monitor.  
  • Loss of control & unintended outcomes – by reducing human involvement, agentic AI increases the risk that agents make incorrect, inappropriate or harmful decisions. This is especially true for LLMs, which can misinterpret data and context, leading to unintended outcomes on a potentially massive scale.  
  • Compliance, privacy and operational risks – agentic AI consumes and acts upon large amounts of sensitive data. Without proper oversight, this exposes the organization to the risk of breaching privacy laws. It can also be difficult for large organizations to trace agentic AI decision-making, making it hard to audit, correct errors and perform disaster recovery. 
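
A common mitigation for the risks above is to wrap every agent action in a guardrail that blocks high-impact actions pending human approval and writes every decision to an audit trail. The sketch below shows the shape of such a wrapper; the action names and the approval threshold are invented examples.

```python
import datetime

# Sketch of a guardrail around agent actions: high-impact actions
# require human approval, and every decision is appended to an audit
# trail so it can later be traced and corrected. Thresholds are examples.

AUDIT_LOG = []

def execute_action(action, amount, approved_by_human=False, limit=10_000):
    entry = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "amount": amount,
    }
    if amount > limit and not approved_by_human:
        entry["status"] = "blocked: needs human approval"
    else:
        entry["status"] = "executed"
    AUDIT_LOG.append(entry)  # immutable-by-convention trail for auditors
    return entry["status"]

print(execute_action("issue_refund", 50))       # executed
print(execute_action("bulk_reorder", 250_000))  # blocked: needs human approval
```

The audit trail is what makes agent decisions traceable after the fact, addressing the operational-risk point directly.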

In 2025, most enterprises are implementing and running agentic AI pilots, especially in functions like customer service and supply chain management. However, enterprises have yet to achieve true end-to-end adoption of agentic AI across their various business functions. To achieve this requires strong cross-functional alignment and adoption of agentic AI, something that is rare and hard to achieve.  

Agentic AI has also delivered value and efficiencies in domain-specific areas such as customer service and logistics, but it has yet to reliably deliver the same value for mission-critical business functions, where significant reliability challenges remain. 

As the market has become flooded with vendors and start-ups hoping to capitalize on the acceleration of AI technologies, the tools and frameworks offered for agentic AI have become fragmented and difficult to standardize, and demand for these tools continues to far outstrip supply. 

What Kind of Retailer Will You Be? 

Agentic AI is rapidly transforming retail, driving a competitive race to lead in autonomy, speed and personalized customer experiences. In 2025, the question for retailers is no longer whether to adopt it, but how quickly and how well; those that delay risk being left behind, or worse, forced to exit.  

To stay on track or ahead of the agentic AI trend in 2025, retailers must already be piloting or implementing it in one or more high-ROI domains, such as customer support, supply chain and inventory management, or marketing automation, where agentic AI can be strategically deployed.  

IT infrastructure and systems must also be revamped through APIs and data pipelines that allow seamless integration of AI agents across data silos spanning POS, supply chain and CRM platforms. While these actions are taking place, it is critical for retailers to put proper governance frameworks in place to manage agentic AI risks, ethics and compliance. This can be done by maintaining proper audit trails, monitoring agents’ output and decision-making in real time, and preparing clear disaster recovery plans.  

It is also critical for retailers to ensure that employees are continuously educated, trained and upskilled in collaborating with and using AI agents. Maximizing ROI does not rely entirely on the performance of AI agents; it also requires that employees learn how to use AI agents to gain strategic insights that allow them to focus on creative and impactful decisions.  

Retailers can also establish agentic AI centers of excellence to ensure proper governance and compliance, manage risks and lead strategies for responsible scaling of agentic AI at the enterprise level. These efforts can be further strengthened by forming vendor partnerships with AI solution providers that enable rapid deployment and quicker realization of ROI, and by participating in industry consortia to benchmark, share knowledge and establish standards and risk-mitigation strategies. 

References

How My Human-Computer Interaction (HCI) Research Shaped My Design Career

“Research in HCI continues to be the primary contributor of the methodologies, technologies and tools we use to support modern application design, and it continues to remind us that the origins of design as a discipline have always been deeply rooted in how humans interact with computers.”

Introduction

The methodology and best practices behind design are constantly evolving, yet they have always been deeply rooted in Human-Computer Interaction (HCI). I think about how my career progressed in context with the rapidly changing nature and landscape of design and usability, especially when it comes to the lightning speed with which AI technologies have evolved and the ubiquity of user interfaces and technologies supporting them.

I have always viewed my time at Queen’s University and my research as a graduate student as the foundation of my career. I did not set out to pursue a career in design or usability when I started my graduate studies. In fact, I thought that my career would revolve around software development or solutions architecture. I had a good theoretical foundation from my undergraduate studies, yet the idea of choosing a research topic that had yet to be explored seemed daunting to me at first. Among all the specialized fields of study in Computer Science, such as Data Mining, Machine Learning, and Parallel Computing, I knew that Software Engineering was the topic I was most interested in exploring further.

The research I embarked on with the help of my advisor, Prof. Nick Graham, involved researching and programming user interface libraries that developers would use to write applications [1]. It would take me years after completing this work to realize the significance of its contribution to HCI. That’s because I was initially focused on the execution of the ideas in my research, and designing and writing code to implement user interface libraries. However, the two years I spent doing this work would prove to be transformative in my understanding of application design, and in how my research would shape my thinking and work as a user experience designer, a product designer and an interaction designer.

The Origins of User Experience in HCI

To understand the origins of user experience design and how it evolved, it is important to shed light on how deeply rooted it is in HCI.

The term “User Experience” (UX) was originally coined by Don Norman while working at Apple in the early 1990s. On why he coined the term Norman writes [2]:

“I invented the term because I thought Human Interface and Usability were too narrow. I wanted to cover all aspects of the person’s experience with the system.”

Long before Norman proposed the term “User Experience”, Human-Computer Interaction emerged as a formal research field in Computer Science in the 1970s and 1980s.

In 1982, the Special Interest Group on Computer-Human Interaction (SIGCHI) was established under the Association for Computing Machinery (ACM) as a global body focused on the emergence of Human-Computer Interaction as a major field of Computer Science research, and on the rapid shift in computing from command-line interfaces to graphical user interfaces (GUIs). This shift highlighted the importance of human factors, cognitive psychology and ergonomics as key elements in the design of interactive systems.

Since its establishment, SIGCHI’s annual CHI conference has become the most prominent international venue where top HCI researchers and design practitioners present new theories, models and technologies that have shaped the fields of usability and user experience design. SIGCHI cemented the role of HCI as a discipline of Computer Science and established the core theories, principles and methodologies behind user experience design practice as we know it today, including usability testing, interaction design and service design.

In The Psychology of Human-Computer Interaction, Card, Moran and Newell [3] framed the user as an information processor. Good systems design must therefore focus on understanding human perception, memory and problem solving rather than hardware and programming. Furthermore, since human attention, memory and perception are limited and predictable, systems must be designed with these constraints in mind.

Card, Moran and Newell introduced key models that helped establish the foundations of UX research and usability as practiced today, most notably the Model Human Processor (MHP). The MHP models the human mind as comprising three main subsystems:

  • Perceptual – responsible for sensory, visual and auditory input and output.
  • Cognitive – responsible for thinking, reasoning and short-term memory.
  • Motor – responsible for all motor skills required for a user to interact with a system, such as typing, mouse movement and pointing, and eye tracking.
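
Card, Moran and Newell also introduced the Keystroke-Level Model (KLM), which turns these subsystems into a practical estimate: expert task time is the sum of standard operator times. The constants below are the commonly cited average values in seconds; treat them as approximations, and the menu-selection sequence as an illustrative example.

```python
# Sketch of the Keystroke-Level Model (KLM) from Card, Moran and Newell:
# estimate expert task time by summing standard operator times.
# Constants are the commonly cited averages, in seconds.

OPERATORS = {
    "K": 0.2,   # keystroke or click (average typist)
    "P": 1.1,   # point at a target with a mouse
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(sequence):
    """Estimate task time for a string of operator codes, e.g. 'MHPK'."""
    return sum(OPERATORS[op] for op in sequence)

# Select a menu item: think (M), move hand to mouse (H), point (P), click (K)
print(round(klm_estimate("MHPK"), 2))  # 3.05
```

Simple as it is, this kind of back-of-the-envelope prediction is an early example of designing around the perceptual, cognitive and motor limits the MHP describes.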

Jakob Nielsen helped further shift the focus of application development onto the user when he formalized the role of usability in software engineering in his book Usability Engineering [4]. Nielsen argued that usability must be an integral part of the software design and development cycle through rapid, iterative, and low-cost methods. Nielsen also defined the five key components of usability as learnability, efficiency, memorability, errors and satisfaction. These components remain the cornerstones of user experience design and its role in the software development lifecycle today.

My Research in HCI Shaped My Mindset As a Designer

My research focused on the topic of User Interface Plasticity, and it turned into a published article in an HCI journal [1]. I explored how simple user interface widgets such as a menu and a scrollbar could behave on a desktop computer and a digital whiteboard. I wrote libraries that allowed developers to write an application that automatically rendered scrollbar and menu widgets and adapted them to the device they were running on. In other words, the menu and scrollbar widgets had plastic properties: they could be ‘molded’ to match the device they were deployed on. In designing the widget libraries, I needed to shift my focus from implementation to thinking about how users would use these widgets, in the context of the device at hand. This relates back to the need to consider the perceptual, cognitive and motor subsystems discussed by Card, Moran and Newell in the MHP model.
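
The plasticity idea can be illustrated with a small sketch: one abstract widget that adapts its presentation to the device it is rendered on. The device profiles, layout names and target sizes below are invented for illustration and are not the libraries from the original research.

```python
# Sketch of UI plasticity: the same abstract menu 'molds' itself
# to the device it is rendered on. Profiles and sizes are invented.

class PlasticMenu:
    def __init__(self, items):
        self.items = items

    def render(self, device):
        if device == "whiteboard":
            # Large targets for arm's-length pen or finger interaction.
            return {"layout": "pie", "target_px": 80, "items": self.items}
        # Compact layout for precise mouse pointing on a desktop.
        return {"layout": "dropdown", "target_px": 20, "items": self.items}

menu = PlasticMenu(["Open", "Save", "Close"])
print(menu.render("desktop")["layout"])     # dropdown
print(menu.render("whiteboard")["layout"])  # pie
```

The key design choice is that the application author writes against the abstract menu once; the device-specific rendering decision lives inside the widget library, which is exactly what lets the same code run on both devices.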

The launch of the iPhone (2007) a few years after my research was published, and of the iPad a few years after that, sparked a rapid pace of development in UI frameworks for mobile devices and tablets. This pace was propelled by the growing adoption of the web, coupled with a user base that became increasingly sophisticated, with clear expectations of how applications should behave on the devices they were using. These launches helped me appreciate the significance of my research in defining how user interfaces behave on different devices.

More importantly, my research shaped my understanding of how applications should behave on different devices, and how application design overall is governed by the principles of HCI. Until that point, I had been trained to write command-line programs on Linux in C; as long as a program behaved as expected on the command line, providing the right prompts, receiving the required inputs and producing the correct output, it was considered successful.

Conclusion

The rise of Human-Computer Interaction in the 1970s and 1980s came out of a growing need to enable software applications to better serve users. It was no longer sufficient to expect that command line interfaces would be able to satisfy the needs of all users. As devices and the web evolved, so did users’ expectations of how applications should behave on the variety of devices available.

HCI was still growing as a field of research in Computer Science when I embarked on my research at Queen’s University with Prof. Graham, yet it profoundly shaped my mindset as a designer and how I approached design problems across industries throughout my career. My research helped lay conceptual foundations for device-independent UI frameworks, which fed into ubiquitous computing, multi-platform design frameworks, adaptive UIs for smart devices and early thinking about context awareness. Through this work I was able to apply concepts that were novel at the time, such as the MHP model [3] and the usability engineering principles introduced by Nielsen [4].

All of this work focused me on solving design problems and building applications with a clear emphasis on user interaction. This is why I believe that, as designers, we must maintain a thorough understanding of the theory and research the HCI field offers. The core foundations of design have always been deeply rooted in Human-Computer Interaction, Computer Science and Psychology, and HCI research continues to be the primary source of the methodologies, technologies and tools that support modern application design, reminding us that design as a discipline originates in how humans interact with computers.

If this story resonates with you — or if you’re tackling challenges at the intersection of UX design, usability, and emerging technologies like AI — I’d love to connect.

Whether you’re working on adaptive interfaces, modernizing legacy systems, or simply want to apply HCI principles more deeply in your product design, I help teams bridge research, strategy, and practical execution.

Feel free to reach out through LinkedIn. Let’s explore how thoughtful, human-centered design can transform your next project.

References

[1] Jabarin, B., & Graham, T. C. N. (2003). Architectures for widget-level plasticity. In Proceedings of the 10th International Workshop on Design, Specification, and Verification of Interactive Systems.

[2] Norman, D. A. (n.d.). The Definition of User Experience (UX). Nielsen Norman Group. Retrieved July 9, 2025, from https://www.nngroup.com/articles/definition-user-experience/

[3] Card, S. K., Moran, T. P., & Newell, A. (1983). The Psychology of Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.

[4] Nielsen, J. (1993). Usability Engineering. Boston, MA: Academic Press.

Why AI Won’t Replace Designers: The Human-Centered Core of Design

Introduction

Artificial intelligence (AI) is introducing new capabilities across many professions, including design. As AI continues to evolve, it will increasingly be able to execute tasks that professionals have spent years developing, mastering and specializing in. In design, AI is transforming established methodology through its ability to generate design drafts significantly faster than human designers traditionally can.

There is concern that AI could eventually replace designers because it can produce designs whose quality matches, or even surpasses, those created by experienced professionals. Nonetheless, it remains hard to envision a world in which AI perfects what is fundamentally a human-centered discipline. Design, as a profession centered on human interaction, is particularly well positioned to challenge AI’s encroachment on our daily lives and professional practices, and this argument extends to other professions beyond design.

In this article, I discuss why design is an excellent example of a profession in which AI can assist practitioners, allowing designers to produce superior designs with greater efficiency, rather than supplanting the essential skills that proficient designers contribute to their field. I show how AI can make the future of design more exciting and promising as the technology continues to evolve, enabling designers to focus on the design skills that matter most: those anchored deeply in design thinking, empathy and user research.

Design is rooted in Human Factors

Design, as a discipline, is rooted in the ability to comprehend, empathize with and relate to user behaviour and mental models. This is achieved through the designer’s ability to identify and resolve problems by connecting with users, building trust and cultivating empathy. This is how a meaningful co-creative environment is established, where design is genuinely focused on addressing user needs and providing solutions that positively influence human agency and foster social, organizational and demographic constructs.

Effective design empathizes with users by first diving deep into their requirements while taking into account the thoughts, feelings and emotions they experience through their interaction with an application. This human-centered approach is at the core of why artificial intelligence may not be able to fully supplant designers.

User behaviour is unpredictable

User behaviour does not always follow predictable patterns that are documented and defined through data. Each design problem has unique requirements based on the users, their context and environment, and how they navigate their surroundings. Real world user behaviour is fluid and not always predictable. Physical and social contexts profoundly influence how users think, understand, and act. The concept of situated action is essential to understanding why AI, which relies on predefined and existing models, fails to capture the complexities of human-centered design [1].

In describing embodied interaction, Paul Dourish [2] emphasizes the importance of considering the connection between mind and body when addressing design problems, rather than solely focusing on immediate issues. This approach necessitates observing and engaging with users, acknowledging that the intricacies of their daily lives can influence their actions, thus requiring design solutions that go beyond the linear and well-defined models characteristic of AI.

Designers anticipate complexity

A designer can pose intuitive questions to foresee potential challenges users might encounter, especially in unique or ambiguous situations. Don Norman’s example of “Norman Doors” [3] effectively demonstrates the importance of human-centered design in conveying functionality through affordance and feedback. It is therefore challenging for AI to anticipate and predict complexities in design and effectively address user problems.

Artificial intelligence can only identify issues when provided with comprehensive datasets that encompass as many patterns and probabilities of human behavior as possible, along with how those patterns and probabilities map to solutions for various design challenges. This task is further complicated by the uniqueness of individual users’ thinking and behavior. Designers, on the other hand, endeavor to discern overarching patterns in user behavior and common themes, and to pinpoint opportunities for design enhancement through usability testing and user research.

A small percentage of users will always manifest unique needs, perspectives, and methods of interacting with the user interface, requiring designers to make strategic and deliberate decisions about how best to accommodate these users while balancing the overall goals of the application. The key takeaway is that user behavior patterns are constantly evolving and can be unique to different users and user groups, making it impractical to accurately encapsulate these patterns in data for use by AI models.

AI cannot co-create meaningfully with users

For design to be usable and effective, it must be executed with users rather than for them. This concept is inspired by the political and social context of Scandinavian trade unions in the 1970s and 1980s, which advocated for greater participation in the design of the IT systems used in their workplaces [4]. It underscores the notion that design is inherently collaborative, focused not only on creating tools that provide solutions but also on developing tools that navigate human agency and organizational structures. Designers create for ever-evolving user groups with diverse ages, socio-economic backgrounds, geographical locations and professional contexts, and the key to successful design lies in co-creating solutions that serve the needs of these diverse groups.

Designers often lead this co-creation process by building trust and fostering principles of shared goals and collaboration. This approach helps deliver meaningful products that genuinely assist users in achieving their objectives and addressing their needs. AI cannot replace the invaluable ability of designers to navigate power dynamics, facilitate feedback, and ensure inclusive design.

What AI Can and Cannot Do

AI presents valuable opportunities for designers by serving as a collaborative partner. It can greatly enhance the designer’s output in several ways, such as:

  • Rapidly generating visual mockups tailored to the designer’s specifications.
  • Searching through extensive datasets of design patterns.
  • Automating tasks like adding content and creating simple flows.
  • Generating functional prototypes and interactive user interfaces from designs or prompts.

However, as previously discussed, AI has limitations when it comes to essential design tasks. Specifically, it is unable to:

  • Navigate power dynamics and feedback loops during stakeholder presentations and design reviews.
  • Perceive users’ feelings and emotions with sensitivity during user research sessions.
  • Establish deeply meaningful trust and authentic co-creation relationships with stakeholders and end users.
  • Comprehend users’ needs thoroughly and understand how complex contextual factors can influence their behavior.

Real-World Design Challenges Require Human Judgment

Design must not only ensure that user needs are addressed in an application but also meet the requirements of the stakeholders and the business behind it. Otherwise, poor design can lead to monetary losses in financial applications, exclusion of user populations, such as those with accessibility needs, in government applications, and potential harm in healthcare applications. In addition to human factors, design demands deep empathy, accountability, and the ability to anticipate future risks. These characteristics are intrinsic to the human-led design process and cannot be easily automated or replaced by AI.

Therefore, AI should be considered a designer’s creative partner rather than a replacement, providing powerful tools to produce designs more efficiently and to create highly interactive, code-ready prototypes. Designers who learn to leverage AI in their work will shape its role in design and can spearhead the movement towards more informed, user-centered design. AI will not independently shape the future of design; rather, designers will drive AI’s integration into the design process while maintaining core principles such as empathy, design thinking and user research, principles that AI cannot easily or reliably adopt.

Conclusion

Our apprehensions regarding AI may be justified if it were capable of independent thinking and applying human-centered design principles to individual design problems, addressing user pain points, needs, and goals. Such concerns might also be warranted if AI could effectively communicate with end users and stakeholders, understand their requirements, and lead an ongoing process of refinement and interaction to achieve outstanding design outcomes. Despite continuous advancements in AI, the inherently human-centered nature of design ensures that it remains focused on understanding people rather than merely producing data-driven results. AI will continue to serve as a tool that enhances the designer’s mindset and skill set, which are profoundly rooted in humanity.

References

[1] Suchman, L. A. (1987). Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press.

[2] Dourish, P. (2001). Where the Action Is: The Foundations of Embodied Interaction. MIT Press.

[3] Norman, D. A. (1988). The Design of Everyday Things. Basic Books.

[4] Bødker, S., Ehn, P., Sjögren, D., & Sundblad, Y. (2000). Co-operative Design: Perspectives on 20 Years with the Scandinavian IT Design Model. In Proceedings of DIS 2000.