Large Language Models: Principles, Examples, and Technical Foundations

Introduction

Large Language Models (LLMs) are Artificial Intelligence algorithms that use massive datasets to summarize, generate and reason about new content. LLMs use deep learning techniques to perform a broad range of Natural Language Processing (NLP) tasks, including text analysis, question answering, translation, classification, and generation [1][2].

Put simply, LLMs are computer programs that can interpret human input and complex data extremely well given large enough datasets to train and learn from. The generative AI technologies that have been enabled by LLMs have transformed how organizations serve their customers, how workers perform their jobs and how users perform daily tasks when searching for information and leveraging intelligent systems.

The core principles of LLMs

LLMs are built on neural networks based on the transformer architecture. A transformer consists of encoders and decoders that capture the relationships between the words and phrases in a text. The architecture relies on the next-word prediction principle: given a text prompt from the user, the model predicts the most probable next word. Transformers can process sequences in parallel, which enables them to train much faster [2][3][4]. This is due to their self-attention mechanism, which lets transformers process entire sequences at once and capture distant dependencies much more effectively than previous architectures.
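To make the self-attention idea concrete, here is a minimal NumPy sketch of scaled dot-product self-attention for a single head. The matrices, dimensions and random inputs are purely illustrative assumptions, not values from any real model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence.

    X: (seq_len, d_model) matrix of token representations.
    Wq, Wk, Wv: learned projection matrices (random here, for illustration).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Every token scores its relevance to every other token, near or distant,
    # in one matrix multiplication -- this is what enables parallel processing.
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)   # one attention distribution per token
    return weights @ V                   # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 5, 16
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (5, 16)
```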

The transformer architecture consists of three key components:

  • Embedding: To generate text using the transformer model, the input must first be converted into a numerical format the model can work with. This process involves four steps: 1) tokenizing the input, breaking it down into smaller, more manageable pieces; 2) embedding the tokens as vectors that allow the model to assign semantic meaning to each token; 3) encoding the position of each token in the input prompt; and 4) producing the final embedding by summing the token embeddings and positional encodings, so that the representation captures both the meaning and the position of each token in the input sequence.
  • Transformer block: comprises a multi-head self-attention layer and a multi-layer perceptron layer. Most models stack these blocks sequentially, allowing the token representations to evolve through layers of blocks, which in turn allows the model to build an intricate understanding of each token.
  • Output probabilities: Once the input has been processed through the transformer blocks, it passes through a final layer that prepares it for token prediction. This step projects the final representation into a space with one dimension per vocabulary token, assigning each token a score for being the next word. A softmax function converts these scores into a probability distribution, and the next token is selected based on its likelihood, which in turn enables text generation. The sketch after this list ties these three components together.
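The following PyTorch sketch combines the three components in a toy decoder-style forward pass: token and positional embeddings are summed, passed through a stack of transformer blocks under a causal mask, and projected to next-token probabilities. All sizes, layer counts and names here are illustrative assumptions, not the configuration of any real model.

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, n_layers=2, n_heads=4, max_len=128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)      # token -> vector
        self.pos_emb = nn.Embedding(max_len, d_model)         # position -> vector
        block = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(block, n_layers)  # stacked transformer blocks
        self.lm_head = nn.Linear(d_model, vocab_size)         # project to the vocabulary

    def forward(self, ids):
        pos = torch.arange(ids.size(1), device=ids.device)
        x = self.tok_emb(ids) + self.pos_emb(pos)             # final embedding = token + position
        mask = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
        x = self.blocks(x, mask=mask)                         # causal mask: attend only to the past
        return self.lm_head(x)                                # one score (logit) per vocabulary token

model = TinyLM()
ids = torch.randint(0, 1000, (1, 10))            # a batch with 10 stand-in token ids
probs = model(ids)[:, -1].softmax(-1)            # probability distribution over the next token
print(probs.shape)                               # torch.Size([1, 1000])
```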

LLM applications

The transformer architecture allows LLMs to achieve a massive scale of billions of parameters. LLMs begin with pre-training on large datasets of text, from which they absorb grammar, facts and context. Once pre-trained, the models undergo fine-tuning, where labeled datasets are used to adapt them to specific tasks. This scale of billions of parameters, combined with efficient attention mechanisms and the two-stage training process, allows LLMs to power modern AI applications such as chatbots, content creation, code completion and translation.
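Mechanically, pre-training boils down to repeating a next-token prediction step like the one sketched below, which continues from the hypothetical TinyLM sketch above (the random batch stands in for real tokenized text). Fine-tuning runs essentially the same loop over a smaller, task-specific labeled dataset; the optimizer and learning rate here are just one plausible choice.

```python
import torch
import torch.nn.functional as F

model = TinyLM()                                   # the toy model sketched earlier
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

batch = torch.randint(0, 1000, (8, 33))            # stand-in for a batch of tokenized text
inputs, targets = batch[:, :-1], batch[:, 1:]      # next-token prediction: shift by one

logits = model(inputs)                             # (batch, seq, vocab)
loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),  # compare each position's
                       targets.reshape(-1))                  # prediction to the true next token
loss.backward()                                    # compute gradients
optimizer.step()                                   # update the billions of parameters (64k here)
optimizer.zero_grad()
print(f"loss: {loss.item():.3f}")
```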

Text generation: Chatbots and content creation

Text generation is one of the most prominent applications of LLMs: automatically producing coherent, contextually relevant text. It powers chatbots, like ChatGPT, that interact with users by answering questions, providing recommendations, generating images and conducting research [5].

GPT-4.5 and GPT-4o feature multimodal capabilities, allowing them to handle text and images for versatile use across applications; both can process inputs on the order of 25,000 words of text, though the computational resources the two models require differ.

By leveraging their vast datasets, LLMs are also used for content creation such as social media posts, product descriptions and marketing materials. Tools like Copy.ai and Grammarly use LLMs to generate marketing copy and assist with grammar and text editing. DeepL Translator uses LLMs trained on linguistic data for language translation.

Agents

Agentic LLMs refer to conversational programs, such as chatbots and intelligent assistants, that use transformer-based architectures and, in earlier systems, Recurrent Neural Networks (RNNs) to interpret user input, process sequential data such as text, and generate coherent, personalized responses [6]. Personalized responses are achieved through context awareness and analysis of the ongoing conversation.

Agentic LLMs are also capable of managing complex workflows and can collaborate with other AI agents for better analysis. Vast datasets can be leveraged to support a variety of domains such as healthcare, finance and customer support.

Code completion

Code completion is a leading application of LLMs that uses the transformer architecture to generate and suggest code by predicting the next tokens, statements or entire code blocks. In this context, transformer models are trained using self-attention mechanisms to enable code understanding and completion predictions [7]. An encoder-decoder transformer model can be used such that the input is the code surrounding the cursor (converted into tokens), and the output is a set of suggestions for completing the current line or multiple lines.
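As one concrete illustration (not the pipeline of any particular IDE), the snippet below uses CodeT5, a publicly available encoder-decoder transformer trained on code, to fill in a masked span; an editor integration would similarly tokenize the code around the cursor and decode one or more suggestions. The generation settings are an assumption, just one plausible configuration.

```python
# pip install transformers torch
from transformers import RobertaTokenizer, T5ForConditionalGeneration

tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

# Code surrounding the "cursor"; <extra_id_0> marks the span to complete.
code = "def greet(user): print(f'hello <extra_id_0>!')"
input_ids = tokenizer(code, return_tensors="pt").input_ids

# Decode several candidate completions for the masked span via beam search.
outputs = model.generate(input_ids, max_length=10,
                         num_beams=5, num_return_sequences=3)
for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True))
```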

Challenges and future directions

Large Language Models are still facing challenges related to ethical and privacy concerns, maintaining accuracy, avoiding bias, and managing high resource consumption [8].

  • Ethical concerns: LLMs are trained on massive datasets, and there are still open questions about who may use these datasets, and how and when they may be used. These datasets can also be biased, producing biased output from LLMs that can fuel misinformation and hate speech.
  • Data privacy: The use of massive datasets containing large amounts of user data poses significant privacy concerns. Safeguards in the use of data are required to train a model without compromising user privacy. As the use of LLMs becomes more mainstream, and as the size of datasets used to train them increases, so do the privacy concerns around their use.
  • Output bias: Existing biases in the available training data can cause LLMs to amplify those biases, leading to inaccurate and misleading results. This is particularly important for areas that require objective data analysis and output, such as law, healthcare and economics.
  • Hallucinations: LLMs are prone to “hallucinations” where the model output may seem reasonable, yet the information provided is incorrect. Hallucinations can be addressed through better training and validation methodologies to enhance the reliability of generated content.
  • Environmental impact: Training and deploying LLMs requires an extensive amount of energy resources, leading to increased carbon emissions. There is a need to develop more efficient algorithms while also investing in renewable and efficient energy generation that will lower the carbon footprint of LLMs, especially as their use and application accelerate.

Addressing these and other challenges, such as regulatory compliance, security and cyber attacks, will help ensure that LLMs are trained on appropriate data and produce accurate output in an ethical, fair and unbiased manner. The integration of domain-specific knowledge through specialized fine-tuning will also enable LLMs to produce more accurate, context-aware information that maximizes their benefits.

Conclusion

LLMs power a variety of applications, ranging from chatbots and content creation to code completion and domain-specific automation. Built on the transformer architecture and trained on vast datasets, they have emerged as a transformative branch of artificial intelligence. LLMs have proven their outstanding capabilities in understanding, generating, and reasoning with natural language. While challenges remain, such as bias, accuracy, environmental impact, and domain specialization, LLMs are expected to become more efficient and trustworthy as algorithms improve and as innovations such as better fact-checking and human oversight mature.

References

[1] What are large language models (LLMs)

[2] What are large language models (LLMs)

[3] What is LLM (Large Language Model)?

[4] Transformer Explainer

[5] 10+ Large Language Model Examples & Benchmark 2025

[6] Chatbot Architecture: RNNs and Transformers

[7] ML-Enhanced Code Completion Improves Developer Productivity

[8] Raza, M., Jahangir, Z., Riaz, M.B. et al. Industrial applications of large language models. Sci Rep 15, 13755 (2025). https://doi.org/10.1038/s41598-025-98483-1

How My Human-Computer Interaction (HCI) Research Shaped My Design Career

“Research in HCI continues to be the primary contributor of the methodologies, technologies and tools we use to support modern application design, and it continues to remind us that the origins of design as a discipline have always been deeply rooted in how humans interact with computers.”

Introduction

The methodology and best practices behind design are constantly evolving, yet they have always been deeply rooted in Human-Computer Interaction (HCI). I think about how my career progressed in the context of the rapidly changing landscape of design and usability, especially given the lightning speed with which AI technologies have evolved and the ubiquity of user interfaces and the technologies supporting them.

I have always viewed my time at Queen’s University and my research as a graduate student as the foundation of my career. I did not set out to pursue a career in design or usability when I started my graduate studies. In fact, I thought that my career would revolve around software development or solutions architecture. I had a good theoretical foundation during my undergraduate studies, yet the idea of choosing a research topic that was yet to be explored seemed daunting to me at first. Among all the specialized fields of study in Computer Science, such as Data Mining, Machine Learning, and Parallel Computing, I knew that Software Engineering was the topic I was most interested in exploring further.

The research I embarked on with the help of my advisor, Prof. Nick Graham, involved researching and programming user interface libraries that developers would use to write applications [1]. It would take me years after completing this work to realize the significance of its contribution to HCI, because I was initially focused on executing the ideas in my research and on designing and writing the code to implement the user interface libraries. However, the two years I spent doing this work would prove transformative in my understanding of application design, and in how my research would shape my thinking and work as a user experience designer, product designer and interaction designer.

The Origins of User Experience in HCI

To understand the origins of user experience design and how it evolved, it is important to shed light on how deeply rooted it is in HCI.

The term “User Experience” (UX) was originally coined by Don Norman while working at Apple in the early 1990s. On why he coined the term, Norman writes [2]:

“I invented the term because I thought Human Interface and Usability were too narrow. I wanted to cover all aspects of the person’s experience with the system.”

Long before Norman proposed the term “User Experience”, Human-Computer Interaction had emerged as a formal research field in Computer Science in the 1970s and early 1980s.

In 1982, the Special Interest Group on Computer-Human Interaction (SIGCHI) was established under the Association for Computing Machinery (ACM) as a global body focused on the emergence of Human-Computer Interaction as a major field of Computer Science research during the 1970s and 1980s, and on the rapid shift in computing from command-line interfaces to graphical user interfaces (GUIs). This shift highlighted the importance of human factors, cognitive psychology and ergonomics as key elements in the design of interactive systems.

Since its establishment, SIGCHI’s annual CHI conference has become the most prominent international venue where top HCI researchers and design practitioners present new theories, models and technologies that have helped shape the field of usability and user experience design. SIGCHI cemented the role of HCI as a discipline of Computer Science and established the core theories, principles and methodologies behind the user experience design practice as we know it today, including usability testing, interaction design and service design.

In The Psychology of Human-Computer Interaction, Card, Moran and Newell [3] highlighted the user as the key information processor: good systems design must focus on understanding human perception, memory and problem solving rather than on hardware and programming. Furthermore, since human attention, memory and perception are limited and predictable, systems must be designed with these constraints in mind.

Card, Moran and Newell introduced key models that helped establish the foundations of UX research and usability today, most notably the Model Human Processor (MHP). The MHP model describes the human mind as comprising three main subsystems:

  • Perceptual – responsible for processing sensory input, such as visual and auditory stimuli.
  • Cognitive – responsible for thinking, reasoning and short-term memory.
  • Motor – responsible for all motor actions required for a user to interact with a system, such as typing, mouse movement and pointing, and eye movements.

Jakob Nielsen helped further shift the focus of application development onto the user when he formalized the role of usability in software engineering in his book Usability Engineering [4]. Nielsen argued that usability must be an integral part of the software design and development cycle, pursued through rapid, iterative, and low-cost methods. He also defined the five key components of usability: learnability, efficiency, memorability, errors and satisfaction. These components remain the cornerstones of user experience design and its role in the software development lifecycle today.

My Research in HCI Shaped My Mindset As a Designer

My research focused on the topic of User Interface Plasticity, and it turned into a published article [1]. I explored how simple user interface widgets, such as a menu and a scrollbar, could behave on a desktop computer and on a digital whiteboard. I wrote libraries that allowed developers to write an application that automatically rendered scrollbar and menu widgets and adapted them to the device they were running on. In other words, the menu and scrollbar widgets retained plastic properties, meaning they could be ‘molded’ to match the device they were deployed on. In designing the menu and scrollbar widget libraries, I needed to shift my focus from implementing the libraries themselves to thinking about how these widgets would be used by the users, along with their context and device. This relates back to the need to consider the perceptual, cognitive and motor subsystems discussed by Card, Moran and Newell in the MHP model.
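The original libraries long predate the code below; this is a hypothetical Python sketch of the same idea, illustrating how widget-level plasticity can be structured by separating a widget’s behaviour from its presentation and selecting a device-appropriate renderer at runtime. All class and device names are invented for illustration.

```python
from abc import ABC, abstractmethod

class MenuRenderer(ABC):
    """Device-specific presentation, kept separate from menu behaviour."""
    @abstractmethod
    def render(self, items: list[str]) -> None: ...

class DesktopMenuRenderer(MenuRenderer):
    def render(self, items):
        # Compact dropdown suited to precise mouse pointing on a desktop display.
        print("dropdown:", " | ".join(items))

class WhiteboardMenuRenderer(MenuRenderer):
    def render(self, items):
        # Large targets suited to stylus or finger input on a digital whiteboard.
        for item in items:
            print(f"[ {item.upper()} ]")

class PlasticMenu:
    """A 'plastic' menu: one behaviour, molded to the device at runtime."""
    RENDERERS = {"desktop": DesktopMenuRenderer, "whiteboard": WhiteboardMenuRenderer}

    def __init__(self, items, device):
        self.items = items
        self.renderer = self.RENDERERS[device]()   # adapt to the deployment device

    def show(self):
        self.renderer.render(self.items)

PlasticMenu(["Open", "Save", "Close"], device="whiteboard").show()
```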

The launch of the iPhone (2007) a few years after my research was published, and the iPad a few years after that, sparked a rapid pace of development in UI frameworks for mobile devices and tablets. This pace of development was propelled by the growing adoption of the web coupled with a user base that became increasingly sophisticated, with clear expectations on how applications should behave depending on the devices they were using. The launch of the iPhone and iPad allowed me to understand the importance of my research in defining how user interfaces behaved on different devices.

More importantly, my research shaped my understanding of how applications should behave on different devices, and of how application design overall is governed by the principles of HCI. Until that point, I had been trained to write command-line programs on Linux using C, and as long as a program behaved as expected on the command line, providing the right prompts to the user, receiving the required inputs and producing the correct output, it was considered successful.

Conclusion

The rise of Human-Computer Interaction in the 1970s and 1980s came out of a growing need to enable software applications to better serve users. It was no longer sufficient to expect that command line interfaces would be able to satisfy the needs of all users. As devices and the web evolved, so did users’ expectations of how applications should behave on the variety of devices available.

HCI was still growing as a field of research in Computer Science when I embarked on my research at Queen’s University with Prof. Graham, yet it profoundly shaped my mindset as a designer and how I approached design problems in various industries throughout my career. My research helped lay conceptual foundations for device-independent UI frameworks that fed into ubiquitous computing, multi-platform design frameworks, adaptive UIs for smart devices and early thinking about context awareness. Through this work I was able to apply concepts that were novel at the time, such as the MHP model and the usability engineering principles introduced by Nielsen.

All of this work helped focus me on solving design problems and designing applications with a clear focus on user interaction. This is why I believe that as designers we must ensure that we always maintain a thorough understanding of the theory and research the HCI field offers. The core foundations of design have always been deeply rooted in Human-Computer Interaction, in Computer Science and in Psychology. Research in HCI continues to be the primary contributor of the methodologies, technologies and tools we use to support modern application design, and it continues to remind us that the origins of design as a discipline have always been deeply rooted in how humans interact with computers.

If this story resonates with you — or if you’re tackling challenges at the intersection of UX design, usability, and emerging technologies like AI — I’d love to connect.

Whether you’re working on adaptive interfaces, modernizing legacy systems, or simply want to apply HCI principles more deeply in your product design, I help teams bridge research, strategy, and practical execution.

Feel free to reach out through LinkedIn. Let’s explore how thoughtful, human-centered design can transform your next project.

References

[1] Jabarin, B., & Graham, T. C. N. (2003). Architectures for widget-level plasticity. In Proceedings of the 10th International Workshop on Design, Specification, and Verification of Interactive Systems.

[2] Norman, D. A. (n.d.). The Definition of User Experience (UX). Nielsen Norman Group. Retrieved July 9, 2025, from https://www.nngroup.com/articles/definition-user-experience/

[3] Card, S. K., Moran, T. P., & Newell, A. (1983). The Psychology of Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.

[4] Nielsen, J. (1993). Usability Engineering. Boston, MA: Academic Press.

Why AI Won’t Replace Designers: The Human-Centered Core of Design

Introduction

Artificial intelligence (AI) is introducing new capabilities across various professions, including design. As AI continues to evolve, it will increasingly be able to execute tasks that professionals have spent years developing, mastering and specializing in. In design, AI is transforming established design methodology through its ability to generate design drafts significantly faster than what human designers can traditionally achieve.

There is concern that AI could eventually replace designers because it can produce designs whose quality matches or even surpasses those created by experienced professionals. Nonetheless, it remains difficult to envision a world where AI perfects what is fundamentally a human-centered discipline. Design, as a profession fundamentally centered on human interaction, is particularly well-positioned to withstand AI’s encroachment on our daily lives and professional practices, and this argument can be extended to other professions beyond design.

In this article, I discuss why design serves as an excellent example of a profession in which AI assists practitioners, allowing them to produce superior designs with greater efficiency rather than supplanting the essential skills that proficient designers contribute to their field. I show how AI can make the future of design more exciting and promising as the technology continues to evolve and enables designers to do more. In the process, AI will free designers to focus on developing the design skills that matter most, namely those anchored deeply in design thinking, empathy and user research.

Design is rooted in Human Factors

Design, as a discipline, is rooted in the ability to comprehend, empathize with and relate to user behaviour and mental models. This is achieved through the designer’s ability to identify and resolve problems by connecting with users, building trust and cultivating empathy. This is how a meaningful co-creative environment is established, where design is genuinely focused on addressing user needs and providing solutions that positively influence human agency and foster social, organizational and demographic constructs.

Effective design empathizes with users by first diving deep into their requirements while taking into account the thoughts, feelings and emotions they experience through their interaction with an application. This human-centered approach to design is at the core of why artificial intelligence may not be able to fully supplant designers.

User behaviour is unpredictable

User behaviour does not always follow predictable patterns that are documented and defined through data. Each design problem has unique requirements based on the users, their context and environment, and how they navigate their surroundings. Real world user behaviour is fluid and not always predictable. Physical and social contexts profoundly influence how users think, understand, and act. The concept of situated action is essential to understanding why AI, which relies on predefined and existing models, fails to capture the complexities of human-centered design [1].

In describing embodied interaction, Paul Dourish [2] emphasizes the importance of considering the connection between mind and body when addressing design problems, rather than solely focusing on immediate issues. This approach necessitates observing and engaging with users, acknowledging that the intricacies of their daily lives can influence their actions, thus requiring design solutions that go beyond the linear and well-defined models characteristic of AI.

Designers anticipate complexity

A designer can pose intuitive questions to foresee potential challenges users might encounter, especially in unique or ambiguous situations. Don Norman’s example of “Norman Doors” [3] effectively demonstrates the importance of human-centered design in conveying functionality through affordance and feedback. It is therefore challenging for AI to anticipate and predict complexities in design and effectively address user problems.

Artificial intelligence can only identify issues when provided with comprehensive datasets that encompass as many patterns and probabilities of human behavior as possible, along with how all these patterns and probabilities apply to solutions for various design challenges. This task is further complicated by the uniqueness of individual users’ thinking and behavior. Designers, on the other hand, endeavor to discern overarching patterns in user behavior and common themes, and pinpoint opportunities for design enhancement through usability testing and user research.

A small percentage of users will always manifest unique needs, perspectives, and methods of interacting with the user interface, requiring the designers to make strategic and deliberate design decisions on how best to accommodate these users while balancing the overall goals of the application. The key takeaway is that user behavior patterns are constantly evolving and can be unique to different users and user groups, making it impractical to accurately encapsulate these behavior patterns through data to be used by AI models.

AI cannot co-create meaningfully with users

For design to be usable and effective for users, it must be executed with them rather than for them. This concept is inspired by the political and social context of Scandinavian trade unions in the 1970s and 1980s, which advocated for greater participation in the design of the IT systems utilized in their workplaces [4]. It underscores the notion that design is inherently collaborative, focusing not only on creating tools that provide solutions but also on developing tools that navigate human agency and organizational structures. Designers create for ever-evolving user groups with diverse ages, socio-economic backgrounds, geographical locations, and professional contexts, and the key to successful design lies in co-creating solutions that serve the needs of these diverse user groups.

Designers often lead this co-creation process by building trust and fostering principles of shared goals and collaboration. This approach helps deliver meaningful products that genuinely assist users in achieving their objectives and addressing their needs. AI cannot replace the invaluable ability of designers to navigate power dynamics, facilitate feedback, and ensure inclusive design.

What AI Can and Cannot Do

AI presents valuable opportunities for designers by serving as a collaborative partner. It can greatly enhance the designer’s output in several ways, such as:

  • Rapidly generating visual mockups tailored to the designer’s specifications.
  • Searching through extensive datasets of design patterns.
  • Automating tasks like adding content and creating simple flows.
  • Generating functional prototypes and interactive user interfaces from designs or prompts.

However, as previously discussed, AI has limitations when it comes to essential design tasks. Specifically, it is unable to:

  • Navigate power dynamics and feedback loops during stakeholder presentations and design reviews.
  • Perceive users’ feelings and emotions with sensitivity during user research sessions.
  • Establish deeply meaningful trust and authentic co-creation relationships with stakeholders and end users.
  • Comprehend users’ needs thoroughly and understand how complex contextual factors can influence their behavior.

Real-World Design Challenges Require Human Judgment

Design must not only ensure that user needs are addressed in an application but also meet the requirements of the stakeholders and the business behind it. Poorly designed applications can otherwise lead to monetary losses in financial applications, the exclusion of user populations, such as those with accessibility needs, in government applications, and potential harm in healthcare applications. In addition to human factors, design demands deep empathy, accountability, and the ability to anticipate future risks. These characteristics are intrinsic to the human-led design process and cannot be easily automated or replaced by AI.

Therefore, AI should be considered a designer’s creative partner rather than their replacement, providing powerful tools to produce designs more efficiently and create highly interactive, code-ready prototypes. Designers who learn to leverage AI in their work will shape its role in design and can spearhead the movement toward more informed, user-centered design using this innovative technology. AI will not independently shape the future of design; instead, designers will drive AI’s integration into the design process while maintaining the core principles of design, such as empathy, design thinking, and user research, principles that AI cannot easily and reliably adopt.

Conclusion

Our apprehensions regarding AI may be justified if it were capable of independent thinking and applying human-centered design principles to individual design problems, addressing user pain points, needs, and goals. Such concerns might also be warranted if AI could effectively communicate with end users and stakeholders, understand their requirements, and lead an ongoing process of refinement and interaction to achieve outstanding design outcomes. Despite continuous advancements in AI, the inherently human-centered nature of design ensures that it remains focused on understanding people rather than merely producing data-driven results. AI will continue to serve as a tool that enhances the designer’s mindset and skill set, which are profoundly rooted in humanity.

References

[1] Suchman, L. A. (1987). Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press.

[2] Dourish, P. (2001). Where the Action Is: The Foundations of Embodied Interaction. MIT Press.

[3] Norman, D. A. (1988). The Design of Everyday Things. Basic Books.

[4] Bødker, S., Ehn, P., Sjögren, D., & Sundblad, Y. (2000). Co-operative Design – Perspectives on 20 Years with the Scandinavian IT Design Model. In Proceedings of DIS 2000.

Case Study: Designing an AI-Driven Product with Strategic Ownership

“In complex AI systems, clarity is not a bonus—it’s the core feature. Product Designers must lead the charge, not just in how things look, but in how they think and work.”

Project Overview

In this case study, we examine a team that has recognized the potential of Machine Learning (ML) and Artificial Intelligence (AI) to refine and enhance a longstanding methodology for forecasting product order volumes. By leveraging AI and ML, the team can achieve more precise ordering based on those forecasts, while also gaining the capability to monitor market prices and receive insights on how to adjust orders to minimize costs. Historically, this team has relied on Excel for manual and meticulous user input, maintaining continuous communication among members and adhering to a process honed over several decades. While this approach has been effective, they have now realized that integrating AI and ML can significantly enhance their workflow by handling larger datasets at a faster pace and generating profound insights aimed at maximizing efficiency, reducing costs, and driving business growth.  

Role of the Product Designer

The Product Designer in this role helped initiate a significant project aimed at transforming a long-standing, Excel-based process into a web-based application. This endeavor required not only a user-centered design approach but also an understanding of how AI and ML could be applied effectively to realize the project goals and meet the users’ needs.

To achieve this, the Product Designer addressed the following considerations:

  • What processes were being followed?
  • How did users use Excel for data entry and forecasting?
  • Which identified business processes needed to be maintained, which could be enhanced with AI, and which could be replaced entirely?
  • Where could AI and ML introduce efficiencies and savings, and provide valuable insights?
  • Were the identified AI efficiencies and insights aligned with user expectations?
  • How would the design help users provide inputs easily, and act upon AI-generated outputs and insights?
  • How would the design help users enhance their work efficiency and enable them to focus on more strategic and human-centric tasks?

At this stage, and before any of the AI models and algorithms were developed, the Product Designer assumed a strategic role in defining the foundational framework for the data the AI models would use and the outputs and insights they would generate for user consumption and action.

By adopting a product owner’s mindset and taking on a strategic role in determining the user requirements as they pertained to the AI models, the Product Designer shaped the product through the following actions:

  • Identifying key stakeholders through close collaboration with the project manager
  • Gaining an understanding of the current business process and the value the project proposes to achieve
  • Developing interview scripts designed to deepen understanding of:
    • The current business process and how Excel is used to generate forecasts
    • The ideal product vision and how it aligns with the user needs and expectations, particularly as it applies to the use of AI and ML
    • How the product could add value to users’ day-to-day work
  • Scheduling and facilitating interviews with stakeholders.
  • Maintaining an openness to any additional insights gained during the interviews such as including additional stakeholders, and exploring other areas of the business as necessary.
  • Collecting and synthesizing feedback from the stakeholder interviews.
  • Establishing a framework based on the feedback analysis for user personas, user journeys and user flows.

These activities enabled the Product Designer to acquire deep insight into the current business process, the stakeholders and users and their roles, and most importantly, into the users’ needs regarding AI and ML and the outputs they would expect.

Research & Discovery

Using the insights gained from stakeholder interviews, the Product Designer was able to:

  • Identify user personas based on the roles and users discussed during the interviews.
  • Conduct additional workshops to further refine the identified user personas.
  • Discover new opportunities for additional user personas not previously identified and develop them further.
  • Develop user journeys for key user personas or those with the most critical needs.
  • Create user flows that outline the application’s essential features, associated screens, inputs, and outputs.
  • Refine user flows, enhance identified features, and ascertain any missing components and data.
  • Develop an information architecture (IA) that prioritized productivity and easy, intuitive navigation between the various sections of the application.

The outcome of this process was a well-defined product roadmap and vision that provided a clear framework for technical teams. Data scientists, engineers, back-end and front-end developers could use this roadmap, along with the user needs and technical requirements identified, to begin designing and developing AI models. This structured approach ensured that the AI models were not only functional but also optimized to enhance the user experience.

Designing with Clarity and Logic

The research and discovery phase provided essential insights, enabling the Product Designer to conceptualize how the application would meet users’ needs and allow the AI models to generate the necessary outcomes. The user flow diagrams established an information architecture that formed the basis for the features the application offered and the overall user experience. With the information architecture now established, wireframes and mockups were created to facilitate discussions with users regarding the design direction.

  • Wireframes and mockups enabled stakeholders and end users to understand how input is provided into the AI models and what the output would look like.
  • This stage was crucial in the product design process as it helped establish the foundation for robust AI models that would directly meet the users’ needs.
  • The wireframes and mockups were refined based on feedback from users and stakeholders through recurring reviews and workshops.

Wireframes, mockups, and user flows helped the Product Designer build a prototype that:

  • Assisted data scientists, engineers, front-end, and back-end developers in visualizing user input and the generated output in the application.
  • Assisted data scientists and engineers in understanding the requirements for data input and output processing, and design algorithms to meet these requirements.
  • Illustrated the detailed interactions necessary to enable users to calibrate their inputs into AI models.
  • Facilitated collaboration between back-end and front-end developers with data scientists and engineers to design and build APIs that support the flow of data and insights from AI models.
  • Allowed running usability testing sessions with end users to validate the design and iterate based on the feedback received.

The development of a prototype represented a significant milestone and underscored the strategic role the Product Designer played in guiding stakeholders and users through discovery sessions, workshops and design reviews. The Product Designer’s efforts in understanding data input and output requirements, and in visualizing them clearly as part of a prototype, enabled technical teams to design AI models and algorithms that met those requirements. This hybrid mindset adopted by the Product Designer, functioning as an intermediary between business strategy and technical execution, was pivotal in fostering collaboration among product, engineering, and data teams, ensuring a clear understanding of the product vision and roadmap.

To drive the success of this AI-enabled product, the Product Designer delivered a strategic and structured design process that included:

  • User personas, journeys, flows, and an information architecture that defined core behaviours and ensured the experience aligned with user needs.
  • Interactive prototypes, refined through multiple rounds of usability testing and stakeholder input.
  • Detailed interactions clearly outlining user inputs and the data required to support model performance.
  • Insight-driven visualizations, which shaped how AI outputs were presented and guided model design.
  • An end-to-end product roadmap, mapping the full product vision while enabling the extraction of an MVP and a plan for iterative, future releases.

Lessons Learned

In this case study, the product designer started their work by identifying users’ needs and pain points related to an existing business process. Stakeholders and users wanted to explore how AI and ML could introduce savings and efficiencies into their business.

The designer’s responsibilities included identifying and analyzing the problem, and integrating ML and AI solutions through extensive collaboration with stakeholders, product managers, engineers, and data teams. This collaboration was crucial for the designer to articulate the product vision using a user-centered approach while also providing comprehensive insights into the data engineering efforts needed to optimize AI model outcomes.

Designing and developing AI and ML models for a product is a time- and resource-intensive process. Therefore, it is essential for organizations to ensure these resources and efforts are invested in a manner that maximizes benefits and potential gains. In this case study, the Product Designer’s role was vital in establishing the product vision and roadmap, and in helping the various project teams understand and achieve this vision.


Beyond Design: Why Top Product Designers Think Like Owners and Analysts

The rapid integration of Artificial Intelligence (AI) and Machine Learning (ML) capabilities into products and applications necessitates a shift in the role of Product Designers.

AI offers significant opportunities for organizations to address complex business problems. Users are now able to provide input into detailed data models that can process extensive datasets and generate insights through what-if scenarios and simulations.

AI-driven scenarios and simulations often rely on substantial input data and calibration tailored to user needs. As output complexity increases, it becomes essential for Product Designers to be at the forefront, understanding these sophisticated models and shaping designs that present the results in an intuitive, user-friendly way.

Consequently, Product Designers must move beyond translating requirements into mock-ups; instead, they must lead the vision for how human-interactive design can refine data input, guide calibration, and surface outputs that make AI models more actionable and aligned with business objectives. In this article, I’ll explore the following key areas and what’s needed to succeed:

  • Adopt a Product Owner’s Mindset
  • Design with Clarity and Logic
  • Lead with a Hybrid Mindset
  • Growing into the Role

Adopt a Product Owner’s Mindset

The Product Designer’s role has evolved to extend far beyond interface design, especially in AI product development. By adopting a Product Owner’s mindset, designers become key contributors to defining and delivering value. They shape business-aligned product strategies that build user trust in AI outputs, accelerate decision-making, and guide teams in aligning technical execution with business goals.

This mindset grounds Product Designers in structured, outcome-oriented thinking. It enables them to ask the right questions and lead cross-functional collaboration with clarity:

  • Who are the core and secondary user groups, and what are their needs?
  • What real problems does this product need to solve?
  • How do users currently generate insights, and where can AI improve the process?
  • What does the ideal product vision look like, and how can it be prototyped?
  • What defines the MVP, and how can it evolve into the ideal-state solution?
  • How will we validate both versions with end users before development?

With this approach, the Product Designer leads the development of a comprehensive product framework that informs decisions across the lifecycle, from early discovery through MVP delivery to long-term iteration. They help align teams around a shared vision, providing structure, clarity, and strategic direction that ensures product decisions are rooted in business value and user impact.

Designing With Clarity and Logic

The role of the Product Designer has become a critical part of AI product development, ensuring the product aligns with both business and user objectives while delivering the intended outcomes. Increasingly, data scientists, engineers, and developers rely on insights and outcomes from discovery work led by Product Designers in collaboration with stakeholders and end users.

By deeply understanding user personas, journeys, and workflows, the Product Designer brings clarity to complex systems. Their work translates business intent and user behavior into logical structures that guide product development and model behavior. This structured approach lays the foundation for AI-powered experiences by turning vision and research into clear, actionable design requirements.

Through this work, the Product Designer can:

  • Provide detailed, logically structured definitions of features and requirements.
  • Deliver a comprehensive understanding of the product vision from end to end.
  • Clearly map how AI models should present results and insights.
  • Clarify business rules and data relationships within a unified design logic.
  • Address stakeholder needs with precise, high-fidelity prototypes.
  • Supply robust data validation requirements based on business-aligned designs.

By designing with clarity and logic, the Product Designer empowers cross-functional teams to move forward with confidence, ensuring that every design decision is grounded in purpose, informed by data, and aligned with user expectations.

Lead with a Hybrid Mindset

As AI and data-driven products reshape the landscape, the Product Designer’s role is evolving. Today’s most effective designers lead with a hybrid mindset, one that combines user empathy, business strategy, and technical fluency.

This mindset is not about owning a product backlog, but about thinking and communicating like a Product Owner or Business Analyst. It’s about reducing ambiguity for technical teams, earning stakeholder trust, and helping cross-functional collaborators understand how design decisions tie to business outcomes.

When Product Designers operate with this hybrid mindset, they:

  • Translate complex user journeys into actionable design decisions.
  • Align design efforts with broader business objectives.
  • Help define clear roadmaps for MVPs and ideal-state experiences.
  • Communicate stakeholder needs clearly through prototypes and interactions.
  • Build team confidence in the product’s value and impact.
  • Serve as strategic partners, connecting vision with execution.

By integrating the language of business and technology into the design process, Product Designers become trusted leaders, not just creative contributors. They provide the connective tissue that links user needs, stakeholder priorities, and technical realities into cohesive, AI-powered product experiences.

Growing Into the Role

A Product Designer can cultivate the following competencies to attain a high level of strategic thinking and leadership:

  • Focus on core business problems and user needs.
  • Dive deep with data scientists and data engineers, collaborating on the design of AI models that meet user needs.
  • Create value from AI models by visualizing insights clearly and providing well-defined data calibration.
  • Turn insights into fast, testable prototypes and iterate.
  • Collaborate across teams to shape the product framework.
  • Define the ideal state and guide releases from MVP to full launch.

It is essential that Product Designers work closely with Business Analysts and Product Owners to shape clear, roadmap-aligned backlogs that reflect both user intent and business priorities. These collaborative skills are essential for defining intuitive user interactions within complex, AI-enabled applications. Mastering this intersection of design, strategy, and systems thinking elevates the Product Designer from contributor to strategic leader, capable of influencing both product direction and delivery.

Final Thoughts

Design is no longer just about aesthetics or interaction; it’s about enabling users to extract clarity and insight from complexity, particularly in AI-driven environments. The most effective Product Designers operate as strategists, analysts, and owners of the product experience, guiding teams through ambiguity to unlock value. Whether defining a user flow or shaping a new feature, ask: does this design simply function, or does it help solve a deeper problem? When it does the latter, you’re not just designing; you’re leading. And in today’s AI-powered applications, that leadership is what shapes truly impactful products.