Understanding Quantum Cryptography: The Key to Digital Trust

In facing the challenges posed by quantum cryptography, the importance of human-centred design and thoughtful product development cannot be overstated. Ultimately, the value of quantum-secure systems lies not in the complexity of their mathematics, but in the confidence they inspire in users.

Introduction

Quantum computing is advancing rapidly, transitioning from theoretical frameworks to an integral component of technology strategy. No longer confined to research laboratories, quantum computers are influencing global policy, enabling innovative startup ecosystems, and prompting critical discussions concerning digital trust. A future where quantum computers possess the power to compromise contemporary encryption algorithms is becoming increasingly plausible, making cybersecurity a focal point for emerging risks.

Traditional digital security has depended on computationally hard mathematics; algorithms such as RSA and Elliptic Curve Cryptography (ECC) have protected emails, financial accounts, and government information by leveraging the inability of classical computers to solve certain mathematical problems efficiently. The development of cryptographically relevant quantum computers (CRQC) would enable the decryption of data protected by these longstanding mechanisms.

The term “Y2Q,” meaning “years to quantum,” draws inspiration from the Y2K event. Unlike Y2K, which had a defined deadline and well-understood failure modes, Y2Q’s arrival and impact remain uncertain. The event known as Q-day, sometimes called the “Quantum Apocalypse,” may arrive without warning. Consequently, data being collected and archived today could be decrypted once these advanced capabilities become available [1].

The Quantum Threat

Quantum machines leverage principles such as superposition, entanglement, and interference to evaluate numerous computational pathways simultaneously. As a result, certain problems that are infeasible for classical computers may become solvable using quantum algorithms.

RSA and ECC rest on mathematical problems, integer factorization and the elliptic-curve discrete logarithm, that Shor’s algorithm can solve efficiently on a sufficiently powerful quantum computer. Shor’s algorithm factors large numbers exponentially faster than the best-known classical techniques, threatening the security of most existing public-key cryptography once cryptographically relevant quantum hardware becomes available.
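To make the threat concrete, here is a deliberately tiny, illustrative RSA example in Python: once an attacker can factor the public modulus, the private key falls out immediately. At this toy scale a brute-force loop stands in for Shor’s algorithm; real deployments use 2048-bit or larger moduli, which only a cryptographically relevant quantum computer could factor efficiently.

```python
# Toy RSA with tiny textbook primes, for illustration only. Real RSA uses
# moduli of 2048 bits or more; at this scale a brute-force loop stands in
# for Shor's algorithm, which would do the same job efficiently at full size.

def toy_rsa_keypair(p=61, q=53, e=17):
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)              # private exponent (needs Python 3.8+)
    return (n, e), d                 # public key, private key

def factor(n):
    # The "quantum" step: recover the secret primes from the public modulus.
    for f in range(2, int(n ** 0.5) + 1):
        if n % f == 0:
            return f, n // f

(n, e), d = toy_rsa_keypair()
ciphertext = pow(42, e, n)           # encrypt the message 42

p, q = factor(n)                     # attacker factors the modulus...
d_recovered = pow(e, -1, (p - 1) * (q - 1))   # ...and rebuilds the private key
print(pow(ciphertext, d_recovered, n))        # → 42
```

The entire security of the scheme collapses the moment `factor(n)` becomes cheap, which is precisely what Shor’s algorithm promises at cryptographic scale.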

While government entities responsible for managing classified information have led initiatives in post-quantum standardization, sectors such as banking, financial services, healthcare, and intellectual property development are increasingly adopting post-quantum cryptography standards to safeguard their customers and proprietary assets.

Fig 1. Shor’s algorithm, implemented on a quantum computer, threatens RSA and ECC by efficiently solving the mathematical problems that underpin them.

A primary motivator for the adoption of quantum cryptography standards among both governmental and private organizations is the increasing risk of “harvest now, decrypt later” (HNDL) attacks. In these scenarios, adversaries actively accumulate encrypted data with the intention of decrypting it in the future, once cryptographically relevant quantum computers (CRQC) become available. The potential consequences of such attacks are severe, as they threaten the confidentiality of critical financial and governmental information.

Despite substantial and accelerating investment in quantum computing, the timeline for the emergence of CRQC remains uncertain. Most organizations have yet to establish comprehensive transition strategies for migrating to post-quantum cryptography, making the duration and complexity of this shift equally unclear. The same uncertainty gives early developers of CRQC an incentive to keep their progress secret, given the far-reaching implications of such an advance [2].

Post-Quantum Cryptography: The Counter-Move

The U.S. National Institute of Standards and Technology (NIST) has played a pivotal role in strengthening the future of digital security by advancing the Post-Quantum Cryptography (PQC) project since its inception in 2016. This initiative was launched in response to the mounting threat posed by quantum computers to many current cryptographic techniques.

Over several years, NIST has coordinated global collaboration among cryptographers, researchers, and industry experts to rigorously analyze and benchmark candidate algorithms for their resilience against quantum attacks as well as their practicality for real-world deployment. In August 2024, NIST reached a significant milestone by releasing its principal PQC standards. These new standards define protocols for both key establishment (securely exchanging encryption keys) and digital signatures (ensuring authenticity and integrity), which are integral for secure communications and transactions in a quantum future.

Central to these standards are robust lattice-based schemes: a key encapsulation mechanism derived from CRYSTALS‑Kyber (standardized as ML-KEM) and a digital signature algorithm derived from CRYSTALS‑Dilithium (standardized as ML-DSA). These schemes were selected for their strong security foundations and operational efficiency, having undergone extensive cryptanalysis and practical evaluation throughout the standardization process. In addition, NIST has approved a hash-based signature standard, SLH-DSA (derived from SPHINCS+), which offers an alternative approach to digital signatures and demonstrates resilience to quantum attack vectors [3].

Following the publication of these landmark PQC standards, NIST has strongly recommended that organizations across all sectors proactively initiate the migration to quantum-resistant cryptography. This migration is not simply a technical update but a strategic imperative to safeguard sensitive data and critical infrastructure against future quantum-enabled threats. The process should begin with a comprehensive assessment of existing cybersecurity products, services, and communication protocols to identify where vulnerable, quantum-insecure algorithms—such as RSA or ECC—are currently deployed. Once identified, organizations must develop detailed transition plans to replace or upgrade these algorithms with NIST-approved quantum-safe alternatives.
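The assessment step described above can be pictured as a simple triage over a cryptographic inventory. The sketch below is a minimal, hypothetical illustration in Python: the asset names are invented, and the algorithm labels follow NIST’s standardized PQC names (ML-KEM, ML-DSA, SLH-DSA); a real migration plan would work from a far richer inventory.

```python
# Hypothetical inventory triage, a minimal sketch of the assessment step.
# Asset names are invented; algorithm labels use NIST's PQC standard names.
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "ECDH-P256"}
PQC_APPROVED = {"ML-KEM-768", "ML-DSA-65", "SLH-DSA-128s"}

def triage(inventory):
    """Split an {asset: algorithm} inventory into migration buckets."""
    needs_migration, already_safe, needs_review = {}, {}, {}
    for asset, alg in inventory.items():
        if alg in QUANTUM_VULNERABLE:
            needs_migration[asset] = alg
        elif alg in PQC_APPROVED:
            already_safe[asset] = alg
        else:
            needs_review[asset] = alg
    return needs_migration, already_safe, needs_review

inventory = {
    "vpn-gateway": "RSA-2048",
    "code-signing": "ECDSA-P256",
    "new-messaging": "ML-KEM-768",
}
migrate, safe, review = triage(inventory)
print(sorted(migrate))  # → ['code-signing', 'vpn-gateway']
```

The point of such a pass is simply to make the quantum-insecure surface area visible before any replacement work is planned.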

In parallel, NIST continues its work by monitoring the performance and security of newly developed algorithms and by supporting research into additional candidates for future standardization. This ongoing evaluation helps ensure that emerging threats can be addressed promptly, and that the cryptographic landscape remains robust and adaptive as quantum computing technology advances. Ultimately, widespread adoption of PQC is essential for preserving the confidentiality, integrity, and authenticity of digital information in a post-quantum era.

PQC vs. Quantum-Native Security

It’s important to recognize the difference between post-quantum cryptography (PQC) and security methods that are inherently quantum. PQC relies on new classical algorithms designed to withstand quantum attacks, whereas quantum-native strategies leverage physical principles. A prime example is Quantum Key Distribution (QKD), which utilizes quantum physics for the secure exchange of symmetric encryption keys.

QKD operates by transmitting photons—tiny light particles—over optical fibers. In quantum mechanics, simply observing a quantum particle changes its state. When a digital bit’s value is encoded onto a single quantum particle, any attempt at eavesdropping becomes an observation, causing the system’s state to collapse. This leads to detectable errors in the bit sequence shared by the sender and receiver. If these errors are observed, the participants know that someone may have tried to access their key [4].
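This detection mechanism can be illustrated with a toy simulation loosely modeled on the BB84 protocol. The sketch assumes an idealized, noiseless channel and considers only the “sifted” bits where sender and receiver happened to use the same basis; an intercept-and-resend eavesdropper then corrupts roughly a quarter of those bits, which is exactly the statistical signature the participants look for.

```python
import random

random.seed(1)

def measure(bit, prep_basis, meas_basis):
    # Measuring in the wrong basis yields a uniformly random outcome.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def sifted_error_rate(n, eve_present):
    """Error rate on bits where the receiver picked the sender's basis."""
    errors = 0
    for _ in range(n):
        bit, basis = random.randint(0, 1), random.choice("+x")
        if eve_present:
            eve_basis = random.choice("+x")
            bit_on_wire = measure(bit, basis, eve_basis)  # Eve measures...
            wire_basis = eve_basis                        # ...and resends in her basis
        else:
            bit_on_wire, wire_basis = bit, basis
        if measure(bit_on_wire, wire_basis, basis) != bit:
            errors += 1
    return errors / n

print(sifted_error_rate(10_000, eve_present=False))  # → 0.0
print(sifted_error_rate(10_000, eve_present=True))   # roughly 0.25
```

With no eavesdropper the sifted bits always agree; with one present, Eve guesses the wrong basis half the time, and each wrong guess randomizes the receiver’s result, giving the expected 25% error rate.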

China has established a quantum communication network spanning thousands of kilometers and connecting cities like Beijing and Shanghai, providing QKD-based security to banks, grid operators, and government facilities. In Europe, all 27 EU member states are supporting EuroQCI, a continental initiative to build quantum-secure communications infrastructure. The United States, meanwhile, has prioritized PQC, expressing skepticism about QKD’s practicality for most applications and favouring robust PQC algorithms instead [5].

Designing Trust in a Quantum Future

In facing the challenges posed by quantum cryptography, the importance of human-centred design and thoughtful product development cannot be overstated. Ultimately, the value of quantum-secure systems lies not in the complexity of their mathematics, but in the confidence they inspire in users. Trust is built through intuitive experiences—when people feel secure, they are more likely to embrace and rely on new technologies, regardless of their understanding of the underlying cryptographic principles.

As organizations transition to post-quantum cryptography, design must take a leading role in making these advanced protections seamless and reassuring. Thoughtful interface and product design will be essential, not only in ensuring that end users remain unaware of the underlying complexity, but also in clearly communicating the benefits and necessity of this evolution to all stakeholders.

By embedding trust and clarity into every touchpoint, design provides the roadmap for integrating quantum security into real-world applications, ensuring that the promise of quantum-safe technology is realized in ways that serve and protect everyone.

Sorry, But AI Deserves ‘Please’ Too!

As we increasingly lean on AI as a trusted ally in our professional and personal lives, we must ponder the implications of our reliance on its capacity to comprehend and craft natural language. What does this mean for our autonomy, creativity, and the very essence of human connection?

Introduction

Large language models (LLMs) and AI chatbots have become woven into the fabric of our workplaces and personal lives, inviting us to reflect on the profound shift in our interaction with technology. As we navigate this new landscape, we find ourselves reevaluating the role of artificial intelligence (AI) in our daily routines. These advancements have not merely changed how we access information, seek advice, and perform research; they have opened a door to an era where insights and solutions are unveiled with remarkable speed and efficiency. As we increasingly lean on AI as a trusted ally in our professional and personal lives, we must ponder the implications of our reliance on its capacity to comprehend and craft natural language. What does this mean for our autonomy, creativity, and the very essence of human connection?

As these AI systems evolve to simulate human-like interactions, an intriguing phenomenon has emerged: people often address AI with polite phrases like “please” and “thank you,” echoing the social etiquette typically reserved for human conversations. This shift reflects a deeper societal change, where individuals begin to attribute a sense of agency and respect to machines, blurring the lines between human and artificial interaction. Furthermore, as AI continues to improve, this trend may lead to even more sophisticated relationships, encouraging users to engage with AI in ways that foster collaboration and mutual understanding, ultimately enhancing productivity and satisfaction in both personal and professional interactions.

With AI entities now entrenched in collaborative environments, one must ask: how do we, as humans, actually treat these conversational agents? Despite AI’s lack of real emotions and its indifference to our politeness, the patterns of user interaction reveal deep-seated beliefs about technology and the nature of human-AI relationships. LLMs are crafted to imitate human communication, creating an illusion of agency that drives users to apply familiar social norms. In collaborative contexts, politeness becomes not just a nicety but a catalyst for cooperation, compelling users to extend to AI the same respectful behaviour they reserve for their human colleagues [1].

Politeness Towards Machines and the CASA Paradigm

Politeness plays a vital role in shaping social interactions, particularly in environments where individuals must navigate complex power dynamics. It promotes harmony, reduces misunderstandings, and fosters cooperation among participants. Rather than being a rigid set of linguistic rules, politeness is a dynamic process involving the negotiation of social identities and power dynamics. These negotiations are influenced by participants’ backgrounds, their relationships with one another, and the specific context in which the interaction takes place [2].

Extending the concept of politeness to interactions with machines highlights the broader question of social engagement with technology. The Computers Are Social Actors (CASA) paradigm states that humans interact with computers in a fundamentally social manner, not because they consciously believe computers are human-like, nor due to ignorance or psychological dysfunction. Rather, this social orientation arises when people engage with computers, revealing that human-computer interactions are biased towards applying social norms similar to those used in human-to-human communication [3].

The CASA approach demonstrates that users unconsciously transfer rules and behaviours from human-to-human interactions, including politeness, to their engagements with AI. However, research examining young children’s interactions with virtual agents revealed contrasting patterns. Children often adopted a command-based style of communication with virtual agents, and this behaviour sometimes extended to their interactions with parents and educators in their personal lives [4].

Further studies into human-robot interaction have shown that the choice of wake-words can influence how users communicate with technology. For instance, using direct wake-words such as “Hey, Robot” may inadvertently encourage more abrupt or rude communication, especially among children, which could spill over into their interactions with other people. Conversely, adopting polite wake-words like “Excuse me, Robot” was found to foster more respectful and considerate exchanges with the technology [5].

Human-AI Interaction Dynamics

Research demonstrates that attributing agency to artificial intelligence is not necessarily the primary factor influencing politeness in user interactions. Instead, users who believe they are engaging with a person—regardless of whether the entity on the other end is human or computer—tend to exhibit behaviours typically associated with establishing interpersonal relationships, including politeness. Conversely, when users are aware that they are communicating with a computer, they are less likely to display such behaviours [6].

This pattern may help explain why users display politeness to large language models (LLMs) and generative AI agents. As these systems become more emotionally responsive and socially sophisticated, users increasingly attribute human-like qualities to them. This attribution encourages users to apply the same interpersonal communication mechanisms they use in interactions with other humans, thereby fostering polite exchanges.

Politeness in human-AI interactions often decreases as the interaction progresses. While users typically start out polite when engaging with AI, this politeness tends to diminish as their focus shifts to completing their tasks. Over time, users become more accustomed to interacting with AI and the complexity of their tasks may lessen, both of which contribute to a reduction in polite behaviour. For example, a user querying an LLM about a relatively low-risk scenario—such as running a snack bar—may quickly abandon polite language once the context becomes clear. In contrast, when faced with a higher-stakes task—such as understanding a legal concept—users may maintain politeness for longer, possibly due to increased cognitive demands or the seriousness of the task. In such scenarios, politeness may be perceived as facilitating better outcomes or advice, especially when uncertainty is involved.

Conclusion

Politeness in human-AI interactions is shaped by a complex interplay of social norms, individual user characteristics, and system design choices—such as the use of polite wake-words and emotionally responsive AI behaviours. While attributing agency to AI may not be the primary driver of politeness, users tend to display interpersonal behaviours like politeness when they perceive they are interacting with a person, regardless of whether the entity is human or computer.

As AI agents become more emotionally and socially sophisticated, users increasingly apply human-like communication strategies to these systems. However, politeness tends to wane as familiarity grows and task complexity diminishes, with higher-stakes scenarios sustaining polite engagement for longer. Recognizing these dynamics is crucial for designing AI systems that foster respectful and effective communication, ultimately supporting positive user experiences and outcomes.

Exploring Quantum User Experience: Future of Human-Computer Interaction

Introduction

In today’s rapidly evolving digital world, our interactions with technology are no longer limited to simple interfaces and straightforward feedback. The growth of cloud infrastructure, the explosion of big data, and the integration of advanced Artificial Intelligence (AI) have transformed the very foundation of how humans connect with systems. As applications become more intelligent, pervasive, and context-aware—operating seamlessly across devices in real-time—users expect more fluid, responsive, and personalized experiences than ever before. These expectations challenge the boundaries of traditional Human-Computer Interaction (HCI) and User Experience (UX) models, which were not designed for such complexity or dynamism.

Legacy frameworks like Human-Centred Design (HCD), User-Centred Design (UCD), the Double Diamond model, and Design Thinking have provided structure for decades, yet their sequential stages and lengthy feedback loops can’t keep pace with the demands of today’s interconnected, AI-driven world. To design experiences that truly resonate, we must rethink our approach—moving beyond rigid methodologies and embracing new paradigms that account for the unpredictable, adaptive nature of modern technology. The future of HCI calls for innovative, human-centred AI design strategies that acknowledge the unique capabilities and limitations of intelligent systems from the very start [1].

Quantum User Experience (QUX): An Alternative Perspective for HCI

When examining ways to enhance Human-Computer Interaction (HCI) methodologies, it becomes apparent that these approaches share similarities with the fundamental phenomena underlying nature and life itself. Drawing inspiration from Quantum Mechanics, we can establish a theoretical analogy to HCI, which gives rise to the concept of Quantum User Experience (QUX). QUX offers a distinct departure from traditional User Experience (UX) models in several ways [2]:

  • Fluidity: QUX is not rigid, thus enabling experiences to be more dynamic and adaptable.
  • Adaptability: QUX is not linear, allowing experiences to evolve in response to users’ changing needs.
  • Mathematical Foundation: QUX is not random; it leverages mathematical data derived from measurements and research on user behaviour. This approach integrates feedback from previous processes and their associated data into new experiences.
  • Temporal Definition: QUX is not infinite; experiences are distinctly defined in time, avoiding endless cycles of testing and iteration.

To reinforce this theoretical framework, the digital cosmos is conceptualized as the global network—such as the internet—functioning on two key scales: the macroscopic scale, which encompasses phenomena that are detectable, analyzable, and able to be influenced at a broad level across the network; and the microscopic scale, which refers to aspects that are perceived and felt, yet not directly observed or measured.

In the digital cosmos, three core entities serve as the foundation for its structure: Agents, Interactions, and Objects. Agents encompass not only users, but also any entity capable of engaging with others, such as bots, engines, or platforms. Interactions refer to the exchanges or communications that occur between these agents, facilitating connections and the flow of information. Objects represent the wide range of content found within the global network, including text, audio, and visual elements, all of which contribute to the richness and diversity of the digital environment.

Within the Quantum User Experience (QUX) framework, experiences are categorized as either macro- or micro-experiences: macro-experiences encompass collective behavioural patterns that involve multiple entities and significantly influence individuals or groups, while micro-experiences represent individual responses to notable stimuli or events that hold personal significance and shape one’s overall perception or feelings. Together, these experiences form the cumulative fabric of QUX, contributing to the flow of information across the vast structure of the digital cosmos.

Quantum-Scale Behaviour of Agents, Interactions, and Objects in QUX

Within the QUX framework, the behaviours of Agents, Interactions, and Objects can be interpreted through a quantum lens, revealing how microscopic traits and probabilistic states shape the digital cosmos.

Agents are entities that exhibit microscopic characteristics—such as cognitive, social, or psychological attributes—which manifest as observable macroscopic behaviours. In the QUX perspective, Agents are conceptualized as “strings,” drawing a parallel to String Theory. This analogy suggests that all observable phenomena within the digital cosmos, at a macroscopic level, originate from the various vibrational states of these strings.

Interactions encompass the diverse behaviours that occur between agents (as vibrating strings) and objects, influenced by users’ perceptions and experiences over time. These interactions function as probabilistic quantum systems in superposition, described by their intensity (the strength of the interaction) and frequency (the variance of the interaction over time). Upon measurement, these probabilistic states collapse into observable Quantum Experience (QXE) states, allowing for a flexible and probabilistic approach to modelling user engagement.

Objects are defined as probabilistic Experience Elements (XL), which serve as the fundamental building blocks—such as buttons—of a system, application, or service. These objects possess a spectrum of possible values governed by probabilities and are characterized by discrete Experience Quanta (XQ) energy units resulting from user interactions. This framework supports real-time adaptability and multivariate evaluation of experiences, surpassing the limitations of traditional, rigid A/B testing methods.
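One loose, illustrative reading of this idea is a UI element whose variants exist in a weighted “superposition” that collapses to a single variant on each interaction, with weights reinforced by observed engagement. Everything in the sketch below, the class name, the update rule, the per-variant engagement rates, is an assumption made for illustration, not part of the published QUX framework.

```python
import random

random.seed(0)

class ExperienceElement:
    """A UI element whose variants sit in a weighted 'superposition'."""

    def __init__(self, variants):
        self.weights = {v: 1.0 for v in variants}   # uniform prior

    def collapse(self):
        """Sample one variant from the current weighted superposition."""
        variants = list(self.weights)
        probs = [self.weights[v] for v in variants]
        return random.choices(variants, weights=probs)[0]

    def record(self, variant, engaged):
        """Reinforce a variant that earned engagement (an 'experience quantum')."""
        if engaged:
            self.weights[variant] += 1.0

button = ExperienceElement(["blue", "green", "red"])
for _ in range(1000):
    shown = button.collapse()
    # Hypothetical per-variant engagement rates, for illustration only.
    engaged = random.random() < {"blue": 0.5, "green": 0.2, "red": 0.1}[shown]
    button.record(shown, engaged)

best = max(button.weights, key=button.weights.get)
print(best, round(button.weights[best], 1))  # the variant the element converged toward
```

Unlike a fixed A/B split, the element keeps all variants in play with evolving probabilities, which is the adaptive, multivariate behaviour the framework gestures at.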

Fig 1. QUX phase breakdown. Micro-experiences combine to form a superposition of states, which collapses into the highest-probability quantum user experiences.

Conclusion: Quantum Mechanics as Inspiration for QUX

The QUX framework draws profound inspiration from quantum mechanics, reshaping our understanding of human-computer interaction by adopting concepts like superposition, probabilistic states, and collapse. By viewing agents as vibrating strings—much like those in String Theory—QUX reimagines the digital cosmos as a domain where microscopic traits and behaviours coalesce into observable, macroscopic phenomena. Interactions function as quantum systems, existing in probabilistic superpositions until measured, at which point they collapse into tangible Quantum Experience (QXE) states. Objects, conceptualized as probabilistic Experience Elements, embody the quantum notion of possible values and energy units that adapt in real-time to user input. This quantum-inspired perspective enables flexible modelling of engagement, surpassing the limitations of classical, deterministic approaches.

As cloud computing, artificial intelligence, and quantum technologies advance, QUX motivates a paradigm shift in human-centred design and methodology. It champions adaptability, multivariate evaluation, and responsiveness to evolving user needs—mirroring the uncertainty and dynamism at the heart of quantum mechanics. In this way, QUX not only offers a novel theoretical foundation, but also empowers designers to meet the demands of a rapidly evolving technological landscape, ensuring that applications and systems remain attuned to the nuanced and changing nature of user experience.

Rethinking Reality: The Unwritten Story of Time

The universe is not a closed book, but rather a narrative in progress—its future pages unwritten, its history ever-expanding. The digits that define time, instead of being predetermined, seem to emerge in concert with our unfolding experience, hinting at a reality that is both participatory and creative.

Introduction

Time: we all experience its steady march, feel its passing in our bodies, and witness its effects as trees stretch skyward, animals age, and objects wear down. Our everyday understanding of time is one of motion—a ceaseless flow from past, to present, into an open future. Yet, what if the very nature of time is not what it seems? Physics offers a perspective that is at odds with our intuition, challenging us to rethink everything we believe about reality.

Albert Einstein’s revolutionary theory of relativity upended this familiar notion, proposing that time is not merely a backdrop to events, but a fourth dimension intricately woven into the fabric of the universe. In his “block universe” concept, the past, present, and future exist together in a four-dimensional space-time continuum, and every moment—from the birth of the cosmos to its distant future—is already etched into reality. In this cosmic tapestry, the initial conditions of the universe determine all that follows, leaving little room for the unfolding uncertainty we sense in our lives [1].

Contrasting Views: Einstein, Quantum Mechanics, and the Nature of Time

Most physicists today accept Einstein’s pre-determined view of reality, in which all events—past, present, and future—are fixed within the space-time continuum. However, some physicists who explore the concept of time more deeply find themselves troubled by the implications of this theory, particularly when the quantum mechanical perspective is considered. At the quantum scale, particles act in a probabilistic manner, existing in multiple states at once until measured; it is only through measurement that a particle assumes a single, definite state.

While each measurement of a particle is random and unpredictable, the overall results tend to conform to predictable statistical patterns. The behaviour of quantum particles is described by the evolution of their wave function over time. Quantum wave functions require a fixed spacetime, whereas relativity treats spacetime as dynamic and observer-dependent. This fundamental difference complicates efforts to develop a theory of quantum gravity capable of quantizing spacetime—a major challenge in modern physics [2].

Relativity, in contrast, insists that time and space be treated on an equal footing, which suggests placing time on the same level as the position coordinates. In quantum mechanics, a system of many particles is described with as many position variables as there are particles, but only a single time variable; time and space coordinates are not treated equally, which many regard as a flaw in the theory. To overcome this, scientists developed the many-time formalism, in which a system of N particles is described by N distinct time variables, one per particle, restoring the equal treatment of space and time [3].
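The asymmetry, and its many-time resolution, can be written out explicitly. In ordinary quantum mechanics an N-particle wave function carries N position variables but a single time variable; the many-time formalism gives each particle its own time coordinate (the notation below is a standard schematic form):

```latex
% Single-time description: N position variables, one shared time
\psi = \psi(\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_N;\, t)

% Many-time formalism: one time variable per particle
\psi = \psi(\mathbf{x}_1, t_1;\; \mathbf{x}_2, t_2;\; \dots;\; \mathbf{x}_N, t_N)
```

The ordinary description is recovered on the "equal-time" slice $t_1 = t_2 = \dots = t_N = t$.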

If physicists are to solve the mystery of time, they must weigh not only Einstein’s space-time continuum, but also the fact that the universe is fundamentally quantum, governed by probability and uncertainty. Quantum theory treats time very differently from Einstein’s theory: in quantum mechanics, time is a rigid background parameter, not intertwined with the dimensions of space as it is in relativity.

Gisin’s Intuitionist Approach and Indeterminacy

Swiss physicist Nicolas Gisin has published papers aiming to clarify the uncertainty surrounding time in physics. Gisin argues that time—both generally and as we experience it in the present—can be expressed in intuitionist mathematics, a century-old framework that rejects numbers with infinitely many digits.

Using intuitionist mathematics to describe the evolution of physical systems suggests that time progresses in only one direction, creating new information as it goes. This contrasts with the determinism implied by Einstein’s equations while echoing the unpredictability inherent in quantum mechanics. If numbers are finite and limited in precision, then nature itself is imprecise and inherently unpredictable.

Gisin’s approach can be likened to weather forecasting: precise predictions are impossible because the initial conditions of every atom on Earth cannot be known with infinite accuracy. In intuitionist mathematics, the digits specifying the weather’s state and future evolution are revealed in real time as the future unfolds. Thus, reality is indeterministic and the future remains open, with time not simply unfolding as a sequence of predetermined events. Instead, the digits that define time are continuously created as time passes—a process of creative unfolding.
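The weather analogy can be made tangible with a toy chaotic system. The sketch below is a loose illustration, not Gisin’s formalism: it iterates the logistic map from two initial conditions that agree to six decimal digits and shows that the digits beyond that finite precision eventually dominate the outcome, so a finite-precision present cannot pin down the far future.

```python
# A loose illustration of Gisin's weather analogy (not his formalism):
# in a chaotic system, the digits beyond any finite precision eventually
# dominate, so a finite-precision present cannot determine the far future.

def logistic_orbit(x0, steps, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

x0 = 0.123456789
truncated = round(x0, 6)             # the same state, known to six digits

a = logistic_orbit(x0, 50)
b = logistic_orbit(truncated, 50)
divergence = [abs(p - q) for p, q in zip(a, b)]

print(divergence[1])     # still tiny: the two histories begin together
print(max(divergence))   # order one: the hidden digits have taken over
```

In the intuitionist picture, those extra digits are not merely unknown; they do not yet exist, and are created only as time unfolds.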

Gisin’s ideas attempt to establish a common indeterministic language for both classical and quantum physics. Standard quantum mechanics holds that information can be shuffled or moved around, but never destroyed. If, however, the digits defining the state of the universe grow with time as Gisin proposes, then new information is continually being created. According to Gisin, then, information is not conserved in the universe: the very passage of time generates it.

The Evolving Nature of Time

As we survey the landscape of contemporary physics, it becomes apparent that our classical conception of time is far from settled. Instead, it stands at the crossroads of discovery—a concept perpetually reshaped by new theories and deeper reflection. Einstein’s vision of a pre-determined reality, where all moments are frozen within the space-time continuum, offers comfort in its order and predictability. Yet, this view is challenged by the quantum world, where uncertainty reigns, and events transpire in a haze of probability until measurement brings them into sharp relief.

The friction between the determinism of relativity and the indeterminacy of quantum mechanics compels us to look beyond conventional frameworks. Quantum mechanics treats time as an inflexible backdrop, severed from the intricacies of space, whereas relativity insists on weaving time and space together, equal and dynamic. Gisin’s intuitionist approach further invites us to reflect on the very bedrock of reality—questioning whether information is static or endlessly generated as the universe unfolds.

This ongoing dialogue between classical physics and emerging quantum perspectives not only exposes the limitations of our current understanding but also sparks a profound sense of curiosity. If, as Gisin suggests, information is continuously created, then the universe is not a closed book, but rather a narrative in progress—its future pages unwritten, its history ever-expanding. The digits that define time, instead of being predetermined, seem to emerge in concert with our unfolding experience, hinting at a reality that is both participatory and creative.

Exploring Quantum Computing and Wormholes: A New Frontier

As we continue to unlock the secrets of quantum gravity and teleportation, each discovery invites us to ponder just how much more there is to unveil, a testament to the infinite possibilities that lie hidden within the quantum tapestry of our universe. The next revelation may be just around the corner, waiting to astonish us all over again, bringing us closer to understanding our universe, and our place within it.

Introduction

Imagine voyaging across the galaxy at warp speed, like in Star Trek or Star Wars, where starships zip through cosmic shortcuts called wormholes. While these cinematic adventures may seem far-fetched, the wildest twist is that wormholes aren’t just a figment of Hollywood’s imagination—quantum physics hints they might truly exist, emerging from the very fabric of quantum entanglement. This remarkable idea flips our understanding of the universe: space and time could actually spring from invisible quantum connections, reshaping what we know about black holes and the universe itself.

This revolutionary perspective burst onto the scene in 2013, thanks to Juan Maldacena and Leonard Susskind, who suggested that whenever two systems are maximally entangled, a wormhole connects them, anchoring each system at opposite ends [1]. Building on the pioneering work of Einstein, Podolsky, and Rosen (EPR) on quantum entanglement and the Einstein-Rosen (ER) description of wormholes, Maldacena and Susskind daringly bridged quantum physics with general relativity, inviting us to think of our universe as far stranger, and far more interconnected, than we ever imagined [2].

Einstein-Rosen Bridges and the Origins of Wormholes

In their seminal paper, Einstein and Rosen arrived at the concept of wormholes while seeking to describe space-time and the subatomic particles suspended within it. Their investigation centred on disruptions in the fabric of space-time first revealed by German physicist Karl Schwarzschild in 1916, just months after Einstein published his general theory of relativity.

Schwarzschild demonstrated that mass can become so strongly self-attractive due to gravity that it concentrates infinitely, causing a sharp curvature in space-time. At these points, the variables in Einstein’s equations escalate to infinity, leading the equations themselves to break down. Such regions of concentrated mass, known as singularities, are found throughout the universe and are concealed within the centres of black holes. This hidden nature means that singularities cannot be directly described or observed, underscoring the necessity for quantum theory to be applied to gravity.

Einstein and Rosen utilized Schwarzschild’s mathematical framework to incorporate particles into general relativity. To resolve the mathematical challenges posed by singularities, they extracted these singular points from Schwarzschild’s equations and introduced new variables. These variables replaced singularities with an extra-dimensional tube, which connects to another region of space-time. They posited that these “bridges,” or wormholes, could represent particles themselves.

Interestingly, while attempting to unite particles and wormholes, Einstein and Rosen did not account for a peculiar particle phenomenon they had identified months earlier with Podolsky in the EPR paper: quantum entanglement. Decades later, that same phenomenon would lead quantum gravity researchers to fixate on entanglement as a way to explain the space-time hologram.

Space-Time as a Hologram

The concept of space-time holography emerged in the 1980s, when black hole theorist John Wheeler proposed that space-time, along with everything contained within it, could arise from fundamental information. Building on this idea, Dutch physicist Gerard ‘t Hooft and others speculated that the emergence of space-time might be similar to the way a hologram projects a three-dimensional image from a two-dimensional surface. This notion was further developed in 1994 by Leonard Susskind in his influential paper “The World as a Hologram,” wherein he argued that the curved space-time described by general relativity is mathematically equivalent to a quantum system defined on the boundary of that space.

A major breakthrough came a few years later when Juan Maldacena demonstrated that anti-de Sitter (AdS) space—a theoretical universe with negative energy and a hyperbolic geometry—acts as a true hologram. In this framework, objects become infinitesimally small as they move toward the boundary, and the properties of space-time and gravity inside the AdS universe precisely correspond with those of a quantum system known as conformal field theory (CFT) defined on its boundary. This discovery established a profound connection between the geometry of space-time and the information encoded in quantum systems, suggesting that the universe itself may operate as a vast holographic projection.

ER = EPR

Recent advances in theoretical and experimental physics have leveraged the SYK (Sachdev-Ye-Kitaev) model to explore the practical realization of wormholes, particularly in relation to quantum entanglement and teleportation. Building on Maldacena and Susskind’s 2013 insight that suggested a deep connection between quantum entanglement (EPR pairs) and wormhole bridges (ER bridges)—summarized by the equation ER = EPR—researchers have used the SYK model to make these ideas more tangible. The SYK model, which describes a system of randomly interacting particles, provides a mathematically tractable framework that mirrors the chaotic behaviour of black holes and the properties of quantum gravity.

In 2017, Daniel Jafferis, Ping Gao, and Aaron Wall extended the ER = EPR conjecture to the realm of traversable wormholes, using the SYK model to design scenarios where negative energy can keep a wormhole open long enough for information to pass through. They demonstrated that this gravitational picture of a traversable wormhole directly corresponds to the quantum teleportation protocol, in which quantum information is transferred between two entangled systems. The SYK model enabled researchers to simulate the complex dynamics of these wormholes, making the abstract concept of quantum gravity more accessible for experimental testing.

Fig 1. How a quantum computer simulated a wormhole

In 2022, Jafferis and Gao, in collaboration with others, successfully implemented wormhole teleportation using the SYK model as a blueprint for their experiments on Google’s Sycamore quantum processor. They encoded information in a qubit and observed its transfer from one quantum system to another, effectively simulating the passage of information through a traversable wormhole as predicted by the SYK-based framework. This experiment marked a significant step forward in the study of quantum gravity, as it provided the first laboratory evidence for the dynamics of traversable wormholes, all made possible by the powerful insights offered by the SYK model.

Conclusion

Much like the mind-bending scenarios depicted in Hollywood blockbusters such as Star Trek and Star Wars, where spaceships traverse wormholes and quantum teleportation moves characters across galaxies, the real universe now seems to be catching up with fiction.

The remarkable journey from abstract mathematical conjectures to tangible laboratory experiments has revealed a universe far stranger, and more interconnected, than we could have ever imagined. The idea that information can traverse cosmic distances through the fabric of space-time, guided by the ghostly threads of quantum entanglement and the mysterious passageways of wormholes, blurs the line between science fiction and reality.

As we continue to unlock the secrets of quantum gravity and teleportation, each discovery invites us to ponder just how much more there is to unveil, a testament to the infinite possibilities that lie hidden within the quantum tapestry of our universe. The next revelation may be just around the corner, waiting to astonish us all over again, bringing us closer to understanding our universe, and our place within it.

Beyond Barriers: How Quantum Tunneling Powers Our Digital and Cosmic World

From memory devices to the heart of stars

Consider flash memory, such as an SSD or USB drive, which retains data even when powered off; the immense energy output of the sun and stars; or research suggesting that quantum effects underlie enzyme catalysis and DNA mutation [1]. These diverse phenomena are unified by a single quantum mechanical effect: quantum tunneling.

Quantum tunneling refers to the capacity of particles to penetrate energy barriers despite lacking the requisite energy to surpass these obstacles according to classical mechanics. This effect arises from superposition, which imparts wave-like characteristics to quantum-scale particles and permits probabilistic presence across multiple locations. The transmission coefficient, which quantifies the likelihood of tunneling, is determined by the barrier’s height and width, in addition to the particle’s mass and energy [2].

Application of the time-independent Schrödinger equation allows the decomposition of the particle’s wave function into components situated within and outside the barrier. By ensuring continuity of the wave functions at the boundaries, the transmission coefficient can be derived. This theoretical framework has been effectively utilized in various fields, including the development of scanning tunneling microscopes and quantum dots.
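The boundary-matching procedure described above has a well-known closed form for a one-dimensional rectangular barrier when the particle's energy lies below the barrier height. The sketch below evaluates it numerically; the electron energy, barrier height, and widths are illustrative values chosen only to show the scale of the effect:

```python
import math

HBAR = 1.054_571_8e-34   # reduced Planck constant, J*s
M_E  = 9.109_383_7e-31   # electron mass, kg
EV   = 1.602_176_6e-19   # 1 eV in joules

def transmission(E_eV, V0_eV, width_m, mass=M_E):
    """Exact transmission coefficient for a rectangular barrier (0 < E < V0),
    obtained by matching the wave function at both barrier boundaries."""
    E, V0 = E_eV * EV, V0_eV * EV
    if not 0 < E < V0:
        raise ValueError("formula assumes 0 < E < V0")
    kappa = math.sqrt(2 * mass * (V0 - E)) / HBAR   # decay constant inside barrier
    s = math.sinh(kappa * width_m)
    return 1.0 / (1.0 + (V0 ** 2 * s ** 2) / (4 * E * (V0 - E)))

# A 1 eV electron meeting a 2 eV, 0.5 nm barrier still gets through about 2%
# of the time, despite classical mechanics forbidding it entirely.
for w_nm in (0.3, 0.5, 1.0):
    print(f"width {w_nm} nm -> T = {transmission(1.0, 2.0, w_nm * 1e-9):.2e}")
```

The steep drop in transmission as the barrier widens is the exponential sensitivity that makes tunneling both exploitable and troublesome in devices.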

Running your digital world

Modern electronics exist in a delicate balance with quantum tunneling. At the heart of today’s microprocessors are advanced transistors, which depend on the quantum ability of electrons to traverse ultra-thin insulating barriers. This tunneling enables transistors to switch on and off at remarkable speeds while using minimal energy, supporting the drive for faster, more energy-efficient devices. As technology advances and the insulating layers within transistors are made thinner to fit more components onto a single chip, the probability of electrons tunneling through these barriers inevitably increases. This leads to unwanted leakage currents, which can generate excess heat and disrupt circuit performance. Such leakage is a major challenge, setting hard physical boundaries on how much further Moore’s law—the trend of doubling transistor density—can be extended.
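The exponential sensitivity behind this leakage problem shows up even in a crude WKB-style estimate, T ≈ exp(−2κd). The numbers below are purely illustrative (a ~3 eV barrier is assumed as a rough stand-in for a gate insulator; real device modelling is far more involved), but they convey why halving a layer’s thickness is so consequential:

```python
import math

HBAR = 1.055e-34  # reduced Planck constant, J*s
M_E = 9.109e-31   # electron mass, kg
EV = 1.602e-19    # 1 eV in joules

def wkb_tunneling(barrier_eV, thickness_nm):
    """Order-of-magnitude WKB estimate T ~ exp(-2*kappa*d) for an electron
    facing a flat barrier of the given height and thickness."""
    kappa = math.sqrt(2 * M_E * barrier_eV * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * thickness_nm * 1e-9)

# Thinning an insulating layer from 2 nm to 1 nm multiplies the tunneling
# probability by roughly eight orders of magnitude.
ratio = wkb_tunneling(3.0, 1.0) / wkb_tunneling(3.0, 2.0)
print(f"T(1 nm) / T(2 nm) ≈ {ratio:.0e}")
```

This is why each generation of thinner gate insulators brings a disproportionate jump in leakage current, not a gradual one.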

Yet, the same quantum effect that poses challenges in mainstream electronics is ingeniously exploited in specialized components. Tunnel diodes, for example, are engineered with extremely thin junctions that encourage electrons to quantum tunnel from one side to the other. This property allows tunnel diodes to switch at incredibly high speeds, making them invaluable for high-frequency circuits and telecommunications technologies where rapid response times are essential.

Quantum tunneling is also fundamental to how data is stored in non-volatile memory devices such as flash drives and solid-state drives (SSDs). In these devices, information is retained by manipulating electrons onto or off a “floating gate,” separated from the rest of the circuit by a thin oxide barrier. When writing or erasing data, electrons tunnel through this barrier, and once in place, they remain trapped, even if the device is disconnected from power. This is why your photos, documents, and other files remain safely stored on a USB stick or SSD long after you unplug them.

In summary, quantum tunneling is both a challenge and a tool in modern electronics. Engineers must constantly innovate to suppress unwanted tunneling in ever-smaller transistors, while simultaneously designing components that rely on controlled tunneling for speed, efficiency, and reliable data storage. This duality underscores how quantum mechanics is not merely an abstract scientific theory, but a practical force shaping the infrastructure of everyday digital life.

Powering stars, chips, and qubits

On a cosmic scale, quantum tunneling is fundamental to the process by which stars, including the Sun, emit light. It facilitates the fusion of protons within stellar cores by enabling them to overcome their mutual electrostatic repulsion, thus allowing nuclear fusion to occur at temperatures lower than those required in a strictly classical context. The existence of life on Earth relies on this mechanism, as it powers the energy output of stars that sustain our planet. Insights into tunneling continue to inform research efforts aimed at developing fusion reactors, where analogous physical principles must be managed under controlled conditions rather than governed by stellar gravity.
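The standard way to quantify this is the Gamow factor from WKB theory, which estimates the probability of two nuclei tunneling through their mutual Coulomb barrier. The sketch below applies it to proton-proton collisions at a temperature typical of the solar core (the ~1.3 keV thermal energy is an illustrative round number):

```python
import math

ALPHA = 1 / 137.036    # fine-structure constant (dimensionless)
MP_C2_KEV = 938_272.0  # proton rest energy, keV

def gamow_probability(E_keV, Z1=1, Z2=1, reduced_mc2_keV=MP_C2_KEV / 2):
    """WKB (Gamow) estimate of the Coulomb-barrier tunneling probability
    for two nuclei approaching with centre-of-mass energy E."""
    # Gamow energy E_G = 2 * (reduced mass energy) * (pi * alpha * Z1 * Z2)^2
    E_G = 2 * reduced_mc2_keV * (math.pi * ALPHA * Z1 * Z2) ** 2
    return math.exp(-math.sqrt(E_G / E_keV))

# At the solar core (~1.5e7 K) the typical thermal energy is only ~1.3 keV,
# far below the Coulomb barrier, yet tunneling leaves a small nonzero chance:
print(f"P(tunnel) at 1.3 keV ≈ {gamow_probability(1.3):.1e}")
```

A probability of order 10⁻⁹ per close encounter sounds hopeless, but multiplied across the enormous number of proton collisions in a stellar core it sustains fusion for billions of years.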

In superconducting circuits, which comprise materials capable of conducting electric current without resistance, pairs of electrons known as Cooper pairs tunnel through thin insulating barriers called Josephson junctions. When cooled to near absolute zero, these systems enable billions of paired electrons to behave collectively as a single quantum entity. This phenomenon has resulted in devices with exceptional sensitivity for measuring voltage and magnetic fields. Additionally, Josephson junctions play a central role in the architecture of superconducting qubits, where precision control of tunneling between quantum states enables reliable quantum logic operations.
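The measurement precision these devices offer stems from the Josephson relations themselves: a DC voltage V across a junction makes the superconducting phase difference wind at rate dφ/dt = 2eV/ħ, so the supercurrent oscillates at exactly f = 2eV/h. A minimal sketch of that frequency-voltage conversion:

```python
# AC Josephson effect: a junction held at DC voltage V oscillates at f = 2eV/h.
# Because e and h are exact in the SI, this relation ties voltage directly to
# frequency, which underpins Josephson voltage standards and SQUID sensitivity.
E_CHARGE = 1.602_176_634e-19  # elementary charge, C (exact by definition)
H_PLANCK = 6.626_070_15e-34   # Planck constant, J*s (exact by definition)

def josephson_frequency_hz(voltage_v):
    """Oscillation frequency of the supercurrent for a given DC voltage."""
    return 2 * E_CHARGE * voltage_v / H_PLANCK

# One microvolt across the junction corresponds to roughly 483.6 MHz.
print(f"{josephson_frequency_hz(1e-6) / 1e6:.1f} MHz")
```

Inverting the relation is how a frequency measurement (which can be made extraordinarily precise) becomes a voltage measurement.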

The Nobel Prize in Physics 2025 was awarded to John Clarke, Michael H. Devoret, and John M. Martinis for their pioneering work in designing a macroscopic system utilizing a Josephson junction. The system was composed of two superconductors separated by an ultra-thin oxide layer, only a few nanometers thick. This layer permitted electron tunneling, and the observed discrete energy levels were in complete conformity with quantum mechanical predictions, a notable accomplishment from both experimental and theoretical standpoints [3].

A feature, a bug, and a design principle

Imagine a world where the chemical foundations of life and technology remain a mystery. Without quantum mechanics, our understanding of chemical bonds would be impossibly incomplete, the very medicines that save lives daily could never be designed, and the machines and electronics we rely on would not be possible.

Quantum tunneling stands as a striking testament that quantum phenomena are not mere scientific oddities; they are the bedrock of modern innovation. The same quantum effect that challenges engineers by causing troublesome current leaks in ever-smaller transistors is deliberately harnessed for breakthroughs: non-volatile memory, lightning-fast diodes, atomic-resolution microscopes, and the frontier of quantum computing all depend on it.

Every second, billions of electrons tunnel invisibly within the technology that surrounds you, their quantum behaviour silently orchestrating our digital universe. Far from being an abstract theory, quantum mechanics is the invisible engine driving your phone, your computer, your lasers, and LEDs—the essential infrastructure of twenty-first century life. Our entire technological existence pivots on the strange but real phenomena of the quantum world, challenging us to see science not as distant or esoteric, but as the very substance of our everyday reality.

Cultivating Optimism: A Skill for Success

Optimism is a remarkable and transformative belief that invites us to view our circumstances through a lens of potential and hope. It asserts that improvement is not just a distant dream but a reality within our grasp, even in the face of complex challenges and significant constraints. Instead of waiting for absolute certainty to take action, we are encouraged to trust that clarity and solutions will emerge through our engagement with the world around us.

This perspective extends far beyond a mere positive attitude; it fundamentally shapes our approach to life’s obstacles. By allowing us to let go of fear and embrace new possibilities, optimism empowers us to move forward, even when the path ahead seems uncertain. It prevents us from being immobilized by the risks and potential failures that often overshadow our ambitions.

Where fear may narrow our focus and amplify doubts, optimism broadens our horizon, revealing opportunities alongside risks. It fosters creativity, bolsters innovative thinking, and instills the confidence needed to confront difficulties. Setbacks transform from mere failures into invaluable learning experiences that guide our next steps. By adopting this optimistic mindset, we pave the way for decisive action amid uncertainty, driven by the belief that our efforts will ultimately yield positive outcomes. Over time, this approach diminishes the hold of fear and fortifies our confidence in the pursuit of success.

The Art of Navigating Uncertainty

In professional environments, optimism shows up when we share ideas, ask for feedback, and see pushback as helpful instead of negative. In our personal lives, it means making progress even when we don’t know the outcome, trusting that moving forward will make things clearer.

People who expect success and move forward with confidence stay engaged longer. Instead of viewing setbacks as reasons to give up, they see them as signs to change their approach. This attitude promotes ongoing improvement in our ideas. Rather than asking, “What if this fails?” optimism leads us to think, “What if this helps me grow and teaches me something important?”

Research shows that optimism is more than just feeling good; it can lead to real success. A well-known study found that optimistic workers performed better than those who weren’t as hopeful, not just because they worked hard, but because they believed success was possible, even when challenges appeared [1]. By focusing on our ideas’ potential for success rather than fears of failure, we can turn setbacks into chances to learn and adopt a mindset where anything seems possible, even with tough problems.

Stories of Growth in Design

When faced with criticism or obstacles, an optimistic designer doesn’t back down – instead, they listen carefully, rethink their approach, and make improvements. Take, for example, a designer presenting ideas to stakeholders who seem doubtful; rather than viewing skepticism as rejection, optimism encourages open dialogue and teamwork, which can lead to better outcomes. This collaborative atmosphere can foster creative solutions that might not have been considered otherwise.

Additionally, the influence of optimism on problem solving is profound. An optimistic designer approaches challenges with a solution-oriented mindset, exploring multiple angles and possibilities. They become adept at adjusting their strategies in real-time, allowing them to pivot when necessary. This flexibility not only enhances the design process but also builds resilience, enabling designers to bounce back from setbacks with renewed vigor.

This positive mindset helps designers stay focused on what works, learn from setbacks, and see every challenge as something that can be overcome, even when the problems are tough. It nurtures a culture of innovation, where experimentation and risk-taking are encouraged. This can lead to groundbreaking ideas that elevate design projects. When designers embrace an optimistic view, they are more likely to inspire those around them, fostering an environment where creativity thrives.

Furthermore, optimism extends beyond design into everyday life and long-term goals. It shapes how we interact with colleagues, clients, and even family, promoting stronger relationships built on trust and mutual respect. Instead of worrying about being judged or failing, optimism encourages us to engage with the world in meaningful ways. This perspective allows us to approach uncertainty with confidence, viewing our efforts, big or small, as essential steps in our journey toward success. It empowers individuals to take initiative, volunteer for new projects, and seek out opportunities for growth, ultimately contributing to both personal and professional development.

Embracing the Power of Optimism

Optimism is not a magical gift; it’s a skill you can nurture through purposeful habits:

  • Shift your self-talk: See setbacks as valuable lessons rather than proof of shortcomings, without being consumed by how the setback made you feel. Self-talk drives action: start by recognizing negative patterns and consciously replacing them with affirming statements. Reinforced over time, these positive narratives build a healthier mindset that shows up in our actions and outcomes, at work and in everyday life, and strengthens our relationships and overall well-being.
  • Acknowledge progress: Small achievements build confidence, making bigger goals seem within reach and more attainable. Achieving what might seem like the smallest task can significantly impress our subconscious mind, fostering a positive feedback loop that encourages us to continue pushing ourselves to achieve even more every day. This recognition not only propels our motivation but also reinforces our belief in our capabilities, allowing us to set and pursue increasingly ambitious objectives with a sense of purpose and enthusiasm.
  • Connect with supportive people: Our choices in company and environment shape our mindset and actions. Surrounding ourselves with individuals who uplift, encourage, and inspire can significantly influence our personal growth and overall well-being. Seeking out a community that shares similar values, aspirations, and interests can foster a sense of belonging and motivation, enhancing our ability to navigate life’s challenges and achieve our goals.
  • Express gratitude: Pay attention to what you have and what matters now, instead of dwelling on what’s absent. Practicing gratitude can have a profound impact on your overall well-being and happiness. It helps you appreciate the positive aspects of your life, leading to a clearer perspective. Gratitude changes our mindset to one of opportunity and abundance rather than one of lack. This shift in focus is essential, as it trains our subconscious mind to find the opportunities that surround us, ultimately fostering a more fulfilling and enriched life rather than concentrating solely on what’s lacking. By regularly acknowledging and valuing the good in your life, you can create a positive feedback loop that enhances your emotional resilience and overall life satisfaction.

Conclusion

Building optimism enriches both our personal and professional lives, enabling us to approach uncertainty with curiosity rather than apprehension. When we examine design, careers, relationships, and ambitions through an optimistic lens, we empower ourselves as a collective: fear no longer constrains our choices or actions. Instead, we learn to identify new possibilities, viewing setbacks as stepping stones toward success because we trust in our collective capacity to take constructive action.

By replacing fear with optimism, we unlock a wealth of opportunities that the universe has to offer. An optimistic perspective allows us to see the abundance that surrounds us, revealing resources and connections that we might have previously overlooked. As we cultivate an optimistic outlook together, we align ourselves with the flow of life, inviting growth and prosperity into our shared experiences.

In essence, optimism acts as a magnifying glass that amplifies our awareness of possibilities, encouraging us to reach for what is available and achievable. As we embrace this powerful mindset collectively, we not only enhance our potential as individuals but also inspire those around us to recognize the vast opportunities that lie ahead. This shift towards optimism nurtures a culture of collaboration and support, where everyone is empowered to explore their unique paths while contributing to the abundance available in the universe.

[1] Seligman, M. E. P. (2006). Learned optimism: How to change your mind and your life (2nd ed.). Vintage Books.

Quantum Revolution: How Max Planck Tapped Into the Universe’s Zero-Point Mysteries

Unveiling the Ever-Vibrant Fabric of Reality

Introduction

At the dawn of the twentieth century, Max Planck embarked on a quest to unravel how energy is absorbed and emitted by the filaments within light bulbs, aiming to maximize their efficiency and illuminate more while consuming less power. In doing so, Planck not only resolved practical engineering challenges, but also ignited a scientific revolution that fundamentally reshaped our comprehension of physics and the universe itself.

Planck’s investigations shattered the classical notion that energy flows in a seamless, continuous stream. Instead, he revealed that energy is exchanged in tiny, indivisible packets known as quanta. This radical insight gave birth to quantum theory, a new framework that challenged long-held assumptions and transformed our understanding of the physical world, from the behaviour of the smallest particles to the structure of the cosmos.

The significance of Planck’s discovery extends far beyond theoretical physics. By demonstrating that energy exchanges are quantized, he opened the door to a wave of scientific breakthroughs, paving the way for technologies such as semiconductors, lasers, and quantum computing. Moreover, subsequent research based on Planck’s work uncovered the existence of zero-point energy: even in the coldest conceivable state, where classical theory predicted absolute stillness, quantum systems retain a subtle but unceasing vibrancy. This revelation overturned the classical thermodynamic belief that all motion ceases at absolute zero, unveiling a universe in perpetual motion at its most fundamental level.

Planck’s legacy is profound: not only did he lay the foundations for quantum mechanics, but his insights continue to inspire new discoveries that help us probe the mysteries of existence. By deepening our grasp of reality’s underlying fabric, Planck’s work has transformed how we see our place in the universe, inviting us to explore how the strange and wonderful quantum world shapes everything from the nature of matter to the emergence of life itself.

The Black Body Problem and Ultraviolet Catastrophe

As the nineteenth century turned, new technologies such as the light bulb drove increased interest in the interaction between materials and radiation. Efficient engineering of light bulbs demanded a deeper understanding of how materials absorb and emit energy, especially the filaments inside the bulbs. In the early 1890s, the German Bureau of Standards commissioned Planck to optimize light bulb efficiency by identifying the temperature at which bulbs would radiate mainly in the visible spectrum while minimizing energy loss in the ultraviolet and infrared regions [1].

Prior attempts to explain the behaviour of heated materials, notably the Rayleigh-Jeans law, predicted infinite energy emission at short wavelengths – the so-called ultraviolet catastrophe. These models often relied on the concept of an ideal material that perfectly absorbs all wavelengths, termed a black body. The ultraviolet catastrophe led directly to the “black body problem”: experimental results plainly contradicted the prediction that materials like lightbulb filaments should emit ever more energy at ever shorter wavelengths.

Planck addressed this issue by modelling electrically charged oscillators in cavities filled with black body radiation. He found that an oscillator could change its energy only in discrete increments proportional to the frequency of the electromagnetic wave, with the constant of proportionality later named h (Planck’s constant). Energy, in other words, is exchanged in discrete quantities, or quanta. This finding gave rise to quantum theory and revealed a deeper truth: energy remains with the oscillator (or the atoms in the material) even at absolute zero temperature.

Zero-Point Energy and Its Implications

In resolving the ultraviolet catastrophe through his black body radiation law, Planck opened the door to zero-point energy (ZPE). Unlike the catastrophe, which was a failed classical prediction, zero-point energy was verified experimentally, overturning classical thermodynamics’ expectation that all molecular motion would cease at absolute zero.

Zero-point energy accounts for phenomena such as vacuum-state fluctuations, where even an electromagnetic field with no photons is not truly empty but exhibits constant fluctuations due to ZPE. One of the most fascinating examples is the gecko – a lizard capable of traversing walls and ceilings on nearly any material. The gecko exploits quantum vacuum fluctuations present in the zero-point energy of the electromagnetic field. Its feet are covered with millions of microscopic hairs that interact with the quantum vacuum fluctuations of any nearby surface, producing an attractive force known as the van der Waals force, a microscopic relative of the Casimir effect. Through this process, the gecko draws on the vacuum field, illustrating nature’s ability to exploit zero-point energy.

Experimental Advances in Harnessing Zero-Point Energy

Research teams from Purdue University and the University of Colorado Boulder have shown that energy from the vacuum state can be accessed through the Casimir force, which acts on micro-sized plates in experimental setups. Although the effect is small and produces limited energy, more efficient methods may be possible using quantum vacuum density and spin. The impact of spin is visible in fluid systems like hurricanes and tornadoes. By inducing high angular momentum vortices with plasma coupled to the quantum vacuum, researchers can create energy gradients much larger than those observed with simple non-conductive plates in the Casimir effect.
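For a sense of scale, the idealized Casimir pressure between two perfectly conducting parallel plates separated by a gap d is P = π²ħc/(240 d⁴), which grows steeply as the gap shrinks. A quick evaluation (the plate separations are chosen only for illustration, and real experiments must correct for finite conductivity and geometry):

```python
import math

HBAR = 1.055e-34  # reduced Planck constant, J*s
C = 2.998e8       # speed of light, m/s

def casimir_pressure_pa(gap_m):
    """Idealized attractive Casimir pressure between perfectly conducting
    parallel plates: P = pi^2 * hbar * c / (240 * d^4)."""
    return math.pi ** 2 * HBAR * C / (240 * gap_m ** 4)

# Negligible at everyday scales, but substantial below a micron.
for d_nm in (1000, 100, 10):
    print(f"gap {d_nm:>5} nm -> pressure {casimir_pressure_pa(d_nm * 1e-9):.3e} Pa")
```

The d⁻⁴ scaling explains why the effect only became measurable with micro-fabricated apparatus: at a 100 nm gap the pressure is around 13 Pa, while at 10 nm it already exceeds atmospheric pressure.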

These pioneering investigations illuminate how quantum phenomena, once confined to abstract theory, are now being harnessed in the laboratory to extract measurable effects from the very fabric of space. While the practical application of zero-point energy remains in its infancy, the ongoing refinement of experimental techniques – such as manipulating spin and plasma interactions – offers glimpses of a future where the subtle energy fields underlying all matter could become a resource for technological innovation. Each advance deepens our appreciation for the intricate interplay between quantum mechanics and the observable world, suggesting that the restless energy pervading the vacuum is not merely a curiosity, but a potential wellspring of discovery and transformation that may one day reshape our understanding of both energy and existence.

Conclusion

Max Planck’s pursuit to optimize the humble light bulb did far more than revolutionize technology: it opened a window into the deepest workings of the universe. By questioning how filaments absorb and emit energy, Planck uncovered the quantum nature of reality, revealing that energy is exchanged in discrete packets, or quanta, rather than in a continuous flow. This insight not only solved the black body problem and the ultraviolet catastrophe but also led to the discovery of zero-point energy, the realization that even at absolute zero, particles never truly rest, and the universe itself is in perpetual motion.

Zero-point energy shows us that nothing in the cosmos is permanent. Particles continuously move, shift, and even appear and disappear, embodying a universe that is dynamic and ever-changing. As humans, we are inseparable from this cosmic dance. Our bodies, thoughts, and lives are woven from the same quantum fabric, always in flux, always evolving. Planck’s work reminds us that change is not just inevitable, it is fundamental to existence itself. In understanding zero-point energy, we come to see that reality is not a static backdrop, but a vibrant, restless sea of possibility, where both matter and meaning are constantly being created and transformed.

Transformative Discovery: Integrating Coaching Principles for Project Success

The Human-Centered Approach to Discovery

At the core of effective discovery work lies the importance of coaching when gathering requirements. Over time, I’ve realized that meaningful insights rarely emerge from rigid templates or formal interviews; instead, they arise through genuine conversations where people feel supported enough to pause, think deeply, and express what they need.

Often, an initial request such as “We need a dashboard,” or “Can you shorten this workflow?” uncovers more fundamental issues like decision-making, team alignment, confidence, or communication barriers. By approaching discovery with a coaching mindset, we can reveal these underlying concerns rather than just addressing superficial symptoms. If you’ve ever experienced a discovery session that seemed more like coaching than interviewing, you’ll recognize the value of intentionally cultivating this dynamic.

Reflecting on my recent years of interviews, I’ve noticed a shift: they increasingly resemble coaching sessions. Initially, I thought I was merely “collecting requirements,” but over time, it became clear I was guiding people in clarifying their actual needs. Rather than just recording their requests, I was facilitating their thinking.

In early design meetings, users typically begin with basic asks: “We want a dashboard,” “Can you make this workflow shorter?” or “Can we have a button that does X?” These are useful starting points, but they seldom tell the whole story. When I consciously adopt a coaching approach (slowing down, listening attentively, and posing thoughtful questions), the dialogue changes dramatically. From there, the focus shifts beyond the user interface into deeper topics: friction, decision-making processes, confidence, accountability, ambiguity, and the human elements hidden beneath feature requests.

Many professionals who have spent decades in their roles rarely get the chance to reflect on the patterns shaping their daily work. So, when I ask something as straightforward as, “What’s the hardest part about planning next season?” the answer often reveals gaps and bottlenecks behind the scenes, rather than issues with the software itself. These stories simply don’t surface during standard meetings.

Uncovering Deeper Insights through Curiosity and Coaching

Curiosity allows us to explore areas untouched by process charts and requirement documents. Prioritizing the individual over the process exposes context that’s invisible on paper, like emotional burden, workplace politics, quiet worries, workarounds, and shared tribal knowledge. Coaching fosters an environment where all these factors come to light, transforming them into valuable material for design decisions.

I used to think the better I got at systems, the less I’d need to do this. But it turned out the opposite is true. The better the system, the more human the conversations become. Coaching is almost like a bridge, helping people cross from “I think I need this feature” to “Here’s what I’m actually trying to solve.”

Active Listening and Guided Curiosity

Active listening forms the core of my approach, ensuring I deeply understand not just participants’ words but the meaning behind them. I reflect statements back — such as, “So it sounds like the challenge isn’t entering the data, it’s aligning on which data to trust, right?” — to confirm genuine understanding. This often transforms technical discussions into conversations about alignment, ownership, or governance.

A key tool is the “Five Whys” technique, which I use as a guide for curiosity rather than a rigid checklist. If someone requests better notifications, I’ll probe: “Why is that important?” and follow with questions like, “Why is it hard to notice things right now?” or, “What happens when you miss something?” By the fourth or fifth ‘why,’ the conversation surfaces underlying factors such as workload, confidence, or fear of missing out, revealing emotional and operational triggers beneath the initial request.

In workplaces, these deeper issues often connect to organizational culture. For example, a request for faster workflows sometimes indicates a real need for predictability or reduced chaos, rooted in communication or authority structures rather than the system itself. Recognizing these patterns enables more effective design decisions by addressing root causes instead of just symptoms.

Intentional silence is another valuable technique. After asking a question, I resist filling the pause, giving participants space to think and speak freely. This silence often prompts unfiltered insights, especially when someone is on the verge of articulating something new. Allowing this space helps participants trust and own their insights, leading to more meaningful outcomes.

Future-Focused Exploration and Empowering Language

I also employ future-anchoring questions like, “Imagine it’s six months after launch — what does success look like for you?” or, “If the system made your job easier in one specific way, what would that be?” These help participants shift from immediate concerns to aspirational thinking, revealing priorities such as autonomy or coordination that guide design principles.

Tone and language are critical for psychological safety. I aim to make discovery feel inviting, often assuring participants, “There’s no wrong answer here,” or encouraging them to think out loud. When people use absolutes — “We always have to redo this,” “No one ever gives us the right information” — it signals where they feel stuck. I gently challenge these constraints by asking, “What might need to change for that to be different?” This opens possibilities and helps distinguish between real and internalized limitations. Coaching-based discovery is key to uncovering and addressing these constraints for lasting change.

Reflections and Takeaways

Coaching Tools as Foundational Practice

Initially, I viewed coaching tools as separate from implementation work, and more of an optional soft skill than a crucial element. Over time, my outlook changed: I saw these tools as fundamental to successful outcomes. I noticed that the best results happened when participants truly took ownership of the insights we discovered together. That sense of ownership was strongest when the understanding came from them, even with my guidance. Insights gained this way tend to last longer and have a greater impact.

My approach to discovery has evolved significantly over time. Initially, I viewed discovery as a process focused on extracting insights from users. More recently, it has transitioned into facilitating users’ own self-discovery, enabling them to articulate intuitions and knowledge that may have previously been unexpressed. This progression from a transactional checklist to a collaborative and transformative meaning-making practice has had a substantial impact on my design methodology.

Efficiency through Early Alignment and Clarity

Contrary to prevailing assumptions, coaching-based discovery does not impede project timelines. Although it demands greater initial investment of time, the resulting enhanced alignment and mutual understanding often expedite progress. Early engagement in substantive discussions enables teams to minimize rework, clarify decision-making processes, and avoid misinterpretations, which can ultimately result in projects being completed ahead of schedule due to unified objectives.

Efficiency is driven by clarity. When users feel acknowledged and their perspectives are incorporated, their level of engagement and willingness to collaborate increases. The trust established during these interactions persists throughout testing, feedback, and rollout stages, mitigating many subsequent problems that typically occur when user requirements are not considered from the outset.

Strong Implementation Questions Are Strong Coaching Questions

At their core, effective implementation questions are essentially strong coaching questions. These are fuelled by curiosity, maintain a non-judgmental tone, and aim to empower others. Instead of guiding someone toward a set answer, such questions encourage individuals to uncover their own insights about the work.

Regardless of the type of discovery — be it design, implementation, or workflow — insight comes from those directly involved. Coaching goes beyond mere technique; it represents a mindset based on the belief that people already hold valuable wisdom. The coach’s job is to help draw out this knowledge, using thoughtful questions.

A key moment in coaching-based discovery happens when someone has a sudden realization, saying things like, “I’ve never thought about it that way,” or “Now I understand why this keeps happening.” These moments are where improvements in design and implementation begin.

Such realizations act as anchors throughout a project. When team members shift their understanding, these breakthroughs can be revisited during times of complexity or tough decisions, providing direction as a “north star” to keep teams aligned.

Coaching is not just a resource; it should be modeled in everyday interactions. As teams experience its benefits, they often adopt coaching practices with each other, leading to genuine transformation that extends past individual projects and influences wider workplace culture.

Ultimately, the real value of this work lies not just in the solutions themselves, but in the conversations that reshape how people engage with their work.

Understanding Agentic AI: Key Insights for Retail Leaders

Introduction

The term “Agentic AI” is now commonly used in industry conversations, yet its meaning often ranges from simple automation tools to advanced digital workers. Retail leaders typically envision Agentic AI as a capable junior employee able to understand goals, reason, take action across platforms, and learn, setting high expectations for implementation.

This broad perception is close to the research-based definition: systems that pursue goals, understand context, plan, act, and collaborate with other agents. In practice, however, many solutions labeled as agentic simply combine automation, machine learning, language models, and APIs.

In this discussion:

  • Agentic AI means sophisticated, enterprise-level autonomous systems focused on defined objectives.
  • Autonomous Workflow Orchestration (AWO) reflects current retail tools: smart workflows still guided by human priorities.

Key questions covered:

  • What systems are in use today?
  • Which technologies are mislabeled as agentic?
  • What advancements are needed in tech, data, and processes to move from AWO to true agentic AI?

What People Think “Agentic AI” Is (And Why That Matters)

Many view an “agent” as more than a rule-based system. They expect it to handle complex tasks, strategize, and act independently. Technically, such agents should:

  • Understand goals rather than just react to inputs.
  • Make multi-step plans involving various systems.
  • Select and sequence tools or APIs appropriately.
  • Adapt when things go off course.

This distinction affects leadership expectations: if leaders think they’re getting fully capable agents, they may incorrectly assign responsibility. Confusing automation with autonomy can lead to inadequate oversight and accountability gaps. Accurate descriptions of “agentic AI” are crucial, as mislabeling advanced workflow automation may cause governance failures when organizations rely on abilities these systems don’t possess.

What AWO Really Is: Architectural Reality, Not Just Buzz

AWO is an integrated stack supporting autonomous workflows:

  • The Workflow/RPA layer manages tasks between systems.
  • Machine learning models assess risk, sort tickets, predict demand, and spot patterns.
  • LLMs process unstructured text, summarize, draft, and converse.
  • The integration fabric links retail and supply chain apps with APIs and queues.
  • Rules and policies set boundaries, manage thresholds, and handle approvals.

Compared to traditional automation, AWO uses machine learning to trigger workflows based on data, rather than fixed rules. LLMs interpret complex inputs, enabling routing by predictions or classifications instead of basic logic. While adaptable, these systems don’t independently pursue high-level goals; they follow designed workflows.

In retail, AWO can validate return requests, resolve delivery issues, and spot shelf gaps from images. Problems occur when model assumptions fail, rules conflict, or policies change. Because workflows drive actions, solutions often require process redesign, underscoring the gap to fully goal-driven, agentic systems.
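The routing behaviour described above can be sketched in a few lines. This is a hypothetical illustration, not a real product's API: the classifier is a stand-in for an ML/LLM model, and the category labels and queue names are invented for the example. The point is the shape of AWO — a model's classification selects the path through a human-designed workflow, rather than hard-coded rules.

```python
from dataclasses import dataclass

@dataclass
class ReturnRequest:
    order_id: str
    reason_text: str

def classify_reason(text: str) -> str:
    """Stand-in for an ML/LLM classifier that maps free-text
    return reasons to a known category (labels are illustrative)."""
    text = text.lower()
    if "broken" in text or "damaged" in text:
        return "damaged"
    if "late" in text or "never arrived" in text:
        return "delivery_issue"
    return "other"

# Workflow routing driven by the model's output, not fixed keyword rules
# in the workflow layer itself; the queues are hypothetical.
ROUTES = {
    "damaged": "quality_team_queue",
    "delivery_issue": "carrier_claims_queue",
    "other": "manual_review_queue",
}

def route(request: ReturnRequest) -> str:
    category = classify_reason(request.reason_text)
    return ROUTES[category]

print(route(ReturnRequest("A-1001", "Item arrived damaged")))
```

Note that the set of routes is still designed by humans: the model chooses among them but cannot invent a new path, which is exactly the gap to goal-driven agency.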

The Spectrum of Automation and Agentic Behaviour in Retail

The spectrum of automation and agentic behaviour provides leaders with a framework to benchmark their current capabilities and chart a path for future development. Retail organizations typically progress through four distinct stages, each with its own strengths, weaknesses, and operational implications.

The spectrum: Automation → AWO → Narrow Agents → Agentic Ecosystems

Stage 1: Rules Automation

At this stage, automation is driven by macros, scripts, and Robotic Process Automation (RPA) bots. The primary advantage of this approach is its predictability and controllability. However, these systems are inherently brittle; any change in user interface or data format can cause the automation to break, leading to disruption in operations.

Stage 2: Autonomous Workflow Orchestration (AWO)

AWO systems can adapt within established workflows but lack the ability to modify the workflow structure itself. These systems remain workflow-centric but incorporate machine learning (ML) and large language models (LLMs) to make smarter decisions within the flow. The strength of AWO lies in its ability to handle greater variation and reduce manual handoffs. The limitation, however, is that goals are externally defined and the workflow logic is still hard-coded, constraining the system’s ability to respond to new or unexpected challenges.

Stage 3: Narrow Agents

Narrow agents introduce the capacity to make decisions based on trade-offs, not just rigid rules. These domain-specific agents can reason within a tightly defined scope. For example, a pricing agent can select among pre-approved strategies within established guardrails, while a disruption-management agent may propose and sometimes execute remediation steps. At this stage, the distinction between a “smart workflow” and an “agent” begins to blur, as the system starts to optimize rather than merely execute scripted actions.

Stage 4: Agentic Ecosystems

In this most advanced stage, agents operate under high-level goals and possess autonomy in selecting methods. Multiple agents with different roles and perspectives collaborate, sharing goals or negotiating trade-offs such as margin, service level, and inventory risk. These agents are empowered to choose their tools and may even propose new process variants, reflecting a dynamic and adaptive approach to retail operations.

Current State and Key Takeaway

Most retailers today find themselves between Stages 2 and 3, with Autonomous Workflow Orchestration present in several workflows and a few narrow agent-like pilots underway. Despite these advancements, governance, data foundations, and integration patterns remain rooted in traditional workflow-centric models, rather than in structures that support agents capable of initiating or reshaping work.

Importantly, progression through these stages cannot be achieved in a single leap. Each stage introduces new potential failure modes, ranging from simple bot breakdowns to workflows making poor decisions, to agents optimizing for objectives that may not align with organizational goals. Leaders must be deliberate and explicit about which stage they are designing for, ensuring that systems and processes are properly aligned with their intended capabilities.

Practical Examples: Where Automation Excels and Where It Falls Short

Automated Refunds and Returns: The Limits of Autonomy

Automated refund and return processes demonstrate how advanced orchestration systems streamline routine workflows. The standard – or “happy path” – scenario is handled efficiently: the system classifies the return reason, checks applicable policies, processes the refund, and notifies the customer. However, the process becomes more complex when exceptions arise. Critical questions include: Who is responsible for resolving edge cases such as suspected fraud, chronic returners, or policy conflicts? Is the automated system empowered to weigh cost against customer goodwill, or does that authority remain with humans?

Typically, automation is permitted only within a defined risk band. For instance: if the risk score is below a certain threshold (X), the system approves the refund automatically; if the score falls between X and Y, the case is escalated; if above Y, the refund is blocked. This illustrates classic Autonomous Workflow Orchestration (AWO) – the system applies the business’s predetermined risk appetite at scale but does not set or adjust that appetite itself.
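A minimal sketch of that risk band makes the division of authority concrete. The threshold values here are illustrative assumptions; in practice they would be set by a risk or finance team, not by the system.

```python
# Hypothetical thresholds expressing the business's predetermined risk
# appetite; the system applies them, it does not set or adjust them.
AUTO_APPROVE_BELOW = 0.30   # X: below this score, refund automatically
BLOCK_ABOVE = 0.80          # Y: above this score, block and flag

def decide_refund(risk_score: float) -> str:
    """Classic AWO decision logic: an ML model supplies risk_score,
    but the bands themselves are fixed by humans."""
    if risk_score < AUTO_APPROVE_BELOW:
        return "approve"
    if risk_score <= BLOCK_ABOVE:
        return "escalate_to_human"
    return "block"

print(decide_refund(0.12))  # approve
print(decide_refund(0.55))  # escalate_to_human
print(decide_refund(0.91))  # block
```

A truly agentic system would additionally be able to argue for moving X or Y — say, after observing that escalations are overwhelmingly approved — which is precisely the authority this design withholds.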

Computer Vision in Planogram Checks: From Task Generation to Strategic Action

In another example, computer-vision-powered systems conduct planogram checks, detecting gaps on shelves and prompting the workflow to generate corrective tasks. The deeper, strategic questions are: Can the system reprioritize these tasks based on factors such as sales impact or labour constraints? Is it able to propose alternative merchandising layouts in response to local store behaviour?

At present, the answer is generally no. The system continues to follow a linear process: detect an issue, then raise a task. True agentic behaviour would involve the system analyzing a store’s unique traffic patterns and sales profile, proposing a new display layout, simulating the impact, and rolling out the change as a test.

The Analytical Gap in Current Automation

A common pattern emerges across these scenarios. The “sense” and “act” phases of automation are becoming more intelligent and hands-off. Yet, determining the broader objectives – deciding what trade-offs are acceptable and which “game” to play – remains mostly a human-driven and static process.

This highlights a key analytical gap. While much is said about “autonomous AI,” closer examination reveals that most autonomy is local and tactical, not global and strategic. As a result, Autonomous Workflow Orchestration delivers strong return on investment (ROI) but does not fundamentally transform the underlying operating model.

A More Rigorous Look at Future Agentic Scenarios

Let’s revisit the future supply chain scenario in a more structured way. When an agent spots a disruption, it goes through several processes: monitoring data continuously, maintaining contextual awareness of business-critical variables, and communicating efficiently with other agents to coordinate responses.

The replenishment agent, in turn, considers constraints like supplier lead times and contractual limits, understands service levels and margin goals, and prioritizes options that best fit business objectives.

As more agents are added, covering margins, stores, and customer interactions, the challenges shift from simply integrating systems to ensuring all agents share accurate information, resolve conflicts, and know when to involve humans.

These issues mean automation is not just about upgrading technology. Key concerns include who defines agent goals, how often they’re reviewed, and what oversight exists for agent decisions. As a result, agentic pilots tend to focus on narrow tasks, such as dynamic pricing or local optimization, rather than handling entire supply chains. The primary hurdles relate to governance, data quality, and accountability, not just technical sophistication.

The Leadership Imperative: Why the AWO vs. Agentic AI Distinction Matters

Mischaracterizing Autonomous Workflow Orchestration (AWO) as fully agentic artificial intelligence can lead to notable repercussions for leadership and organizational effectiveness. When this distinction is not explicitly acknowledged, three primary challenges frequently emerge: architecture drift, risk blind spots, and talent misalignment.

1. Architecture Drift

Integrating agents into a workflow-centric environment without comprehensive planning often results in their function being limited to advanced decision points rather than serving as fundamental system components. Such an approach neglects critical design considerations including shared memory, a unified goal repository, and event-driven architecture, each essential for enabling agents to operate as integral contributors within the broader ecosystem.

2. Risk Blind Spots

The presumption that “the agent knows what it’s doing” may result in inadequate investment in vital safety and governance controls. These include:

  • Observability: Mechanisms enabling tracing and explanation of agent decisions.
  • Kill Switches: Capabilities to quickly intervene and suspend agent actions when necessary.
  • Sandboxes: Controlled environments for safely testing new agent behaviours prior to deployment.
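The three controls above can be sketched as a thin wrapper around an agent's actions. Everything here is a hypothetical illustration: the class, its method names, and the dry-run flag standing in for a sandbox are invented for the example, not drawn from any real framework.

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrails")

class GuardedAgent:
    """Minimal sketch: a decision log (observability), a kill switch,
    and a dry-run flag standing in for a sandboxed environment."""

    def __init__(self, act: Callable[[str], str], dry_run: bool = True):
        self._act = act
        self.dry_run = dry_run   # sandbox: propose actions, don't execute
        self.killed = False      # kill switch state

    def kill(self) -> None:
        """Suspend all agent actions immediately."""
        self.killed = True

    def perform(self, action: str, rationale: str) -> str:
        # Observability: every decision is logged with its rationale
        # so it can be traced and explained after the fact.
        log.info("action=%s rationale=%s", action, rationale)
        if self.killed:
            return "suspended"
        if self.dry_run:
            return f"proposed: {action}"
        return self._act(action)

agent = GuardedAgent(act=lambda a: f"executed: {a}")
print(agent.perform("reprice SKU-42", "margin below target"))  # proposed: reprice SKU-42
agent.kill()
print(agent.perform("reprice SKU-43", "margin below target"))  # suspended
```

The design choice worth noting is that the kill switch is checked before anything else: once thrown, no path through the code can execute an action.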

3. Talent Misalignment

Prioritizing recruitment of only prompt engineers overlooks the comprehensive skills required for effective agentic AI implementation. Beyond technical expertise, organizations benefit from engaging:

  • Professionals skilled in designing robust machine–human workflows.
  • Individuals capable of defining agent objectives, constraints, and developing meaningful evaluation frameworks.

Retail-Specific Sequencing Challenges

Within the retail sector, misconstruing “buying agents” may result in omitting foundational activities such as:

  • Data cleansing and standardization for products, locations, and customers.
  • Streamlining process variants to minimize operational complexity.
  • Establishing standardized integrations across Order Management Systems (OMS), Warehouse Management Systems (WMS), Enterprise Resource Planning (ERP), and e-commerce platforms.

Neglecting these prerequisites often causes agentic initiatives to stagnate or devolve into isolated, non-scalable solutions. This may foster the erroneous belief that agents are inadequate, when in fact, the organization was insufficiently prepared for adoption.

Importance of Distinguishing AWO from Agentic Ecosystems

Differentiating between AWO and agentic ecosystems is imperative, as it significantly influences leadership approaches and talent requirements. While workflow enhancements primarily necessitate expertise in workflow engineering and machine learning/large language models (ML/LLM), transitioning to agentic systems demands reimagining organizational decision-making structures and recruiting individuals adept at architecting resilient socio-technical systems.

Practical Steps for Leaders: Navigating Agentic AI in Retail

If you are a CIO, COO, or Head of Digital responding to board-level questions about “agentic AI,” the following structured approach outlines what you should focus on over the next 12 to 18 months.

1. Maximize the Value of Autonomous Workflow Orchestration (AWO)
  • Identify five to ten high-volume, rules-based processes. Typical examples include returns management, handling order exceptions, vendor queries, and store-level tasks.
  • Redesign these processes explicitly as AWO, ensuring each has defined inputs, outputs, and key performance indicators (KPIs). Carefully consider where machine learning or large language models (ML/LLMs) can add measurable value.
  • Implement instrumentation for these flows to track and measure improvements such as reduced cycle times, lower error rates, and customer impact.
2. Develop Targeted Agent Pilot Projects
  • Deliberately design one or two narrow agent pilot initiatives. Select domains with clear objectives and manageable risks, such as dynamic pricing within set ranges, markdown optimization, or tuning localized assortments.
  • Allow agents to propose actions within predetermined guardrails. Initially, keep humans in the approval loop, gradually shifting to exception-only review as confidence in the system grows.
  • Treat these pilots as experiments in operational autonomy, not just as new digital tools. Document and analyze any challenges encountered, including data quality issues, policy conflicts, or trust barriers.
3. Lay the Foundation for “Agent Readiness”
  • Data: Clearly define what data agents will need to operate cross-functionally across the organization.
  • Events: Transition from nightly data batches to real-time event streams for key operational signals.
  • Governance: Establish an “autonomy matrix” to clarify which decisions can be fully automated, which require human review, and which should remain exclusively human-driven for the time being.
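An autonomy matrix can start life as something as simple as a lookup table. The decision types and autonomy levels below are illustrative assumptions, not a prescribed taxonomy; the useful property is the conservative default for anything not yet classified.

```python
# A minimal "autonomy matrix" sketch: decision types mapped to an
# oversight level. Names and levels are hypothetical examples.
AUTONOMY_MATRIX = {
    "refund_under_threshold":   "automate",       # fully automated
    "dynamic_price_change":     "human_review",   # agent proposes, human approves
    "supplier_contract_change": "human_only",     # remains human-driven for now
}

def required_oversight(decision_type: str) -> str:
    # Anything unlisted defaults to the most conservative level,
    # so new decision types are never silently automated.
    return AUTONOMY_MATRIX.get(decision_type, "human_only")

print(required_oversight("refund_under_threshold"))  # automate
print(required_oversight("new_market_entry"))        # human_only
```

In a real deployment this table would live in governed configuration with an owner and a review cadence, not in code — the sketch only shows the shape of the classification.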

By systematically following these three steps, you will be building the necessary infrastructure and capabilities to progress from today’s orchestrated copilot models to tomorrow’s more autonomous agentic ecosystems, without exposing your organization to undue risk or succumbing to industry buzzwords.

Reframing “Progress” in Retail AI

The core message is not that “Agentic AI is years away, so wait,” but rather: “Retail is currently experiencing an AWO phase that offers notable value, and the approach taken to AWO will either position businesses for agentic ecosystems in the future or pose significant challenges later.”

If AWO implementations are opaque, rigid, and confined to individual applications, they limit long-term progress. Conversely, instrumented, integrated, and well-governed AWO implementations serve as foundational platforms for developing agent-based systems. While the underlying technologies may be similar, the resulting strategic trajectories differ substantially.

For organizational leaders, the critical consideration is not simply whether agents have been adopted, but whether today’s automation strategies are being designed to enable greater autonomy in the future, should that become desirable. Answering yes to that question means current capabilities are being used to prepare, deliberately and strategically, for a transition toward autonomous retail operations.