Sorry, But AI Deserves ‘Please’ Too!

As we increasingly lean on AI as a trusted ally in our professional and personal lives, we must ponder the implications of our reliance on its capacity to comprehend and craft natural language. What does this mean for our autonomy, creativity, and the very essence of human connection?

Introduction

Large language models (LLMs) and AI chatbots have become woven into the fabric of our workplaces and personal lives, inviting us to reflect on the profound shift in our interaction with technology. As we navigate this new landscape, we find ourselves reevaluating the role of artificial intelligence (AI) in our daily routines. These advancements have not merely changed how we access information, seek advice, and perform research; they have opened a door to an era where insights and solutions are unveiled with remarkable speed and efficiency. As we increasingly lean on AI as a trusted ally in our professional and personal lives, we must ponder the implications of our reliance on its capacity to comprehend and craft natural language. What does this mean for our autonomy, creativity, and the very essence of human connection?

As these AI systems evolve to simulate human-like interactions, an intriguing phenomenon has emerged: people often address AI with polite phrases like “please” and “thank you,” echoing the social etiquette typically reserved for human conversations. This shift reflects a deeper societal change, where individuals begin to attribute a sense of agency and respect to machines, blurring the lines between human and artificial interaction. Furthermore, as AI continues to improve, this trend may lead to even more sophisticated relationships, encouraging users to engage with AI in ways that foster collaboration and mutual understanding, ultimately enhancing productivity and satisfaction in both personal and professional interactions.

With AI entities now entrenched in collaborative environments, one must ask: how do we, as humans, truly treat these conversational agents? Despite AI’s lack of real emotions and its indifference to our politeness, the patterns of user interaction reveal deep-seated beliefs about technology and the essence of human-AI relationships. LLMs are crafted to imitate human communication, creating an illusion of agency that drives users to apply familiar social norms. In collaborative contexts, politeness becomes not just a nicety, but a catalyst for cooperation, compelling users to extend the very same respectful behaviour to AI that they reserve for their human colleagues. [1]

Politeness Towards Machines and the CASA Paradigm

Politeness plays a vital role in shaping social interactions, particularly in environments where individuals must navigate complex power dynamics. It promotes harmony, reduces misunderstandings, and fosters cooperation among participants. Rather than being a rigid set of linguistic rules, politeness is a dynamic process involving the negotiation of social identities and power dynamics. These negotiations are influenced by participants’ backgrounds, their relationships with one another, and the specific context in which the interaction takes place [2].

Extending the concept of politeness to interactions with machines highlights the broader question of social engagement with technology. The Computers Are Social Actors (CASA) paradigm states that humans interact with computers in a fundamentally social manner, not because they consciously believe computers are human-like, nor due to ignorance or psychological dysfunction. Rather, this social orientation arises when people engage with computers, revealing that human-computer interactions are biased towards applying social norms similar to those used in human-to-human communication [3].

The CASA approach demonstrates that users unconsciously transfer rules and behaviours from human-to-human interactions, including politeness, to their engagements with AI. However, research examining young children’s interactions with virtual agents revealed contrasting patterns. Children often adopted a command-based style of communication with virtual agents, and this behaviour sometimes extended to their interactions with parents and educators in their personal lives [4].

Further studies into human-robot interaction have shown that the choice of wake-words can influence how users communicate with technology. For instance, using direct wake-words such as “Hey, Robot” may inadvertently encourage more abrupt or rude communication, especially among children, which could spill over into their interactions with other people. Conversely, adopting polite wake-words like “Excuse me, Robot” was found to foster more respectful and considerate exchanges with the technology [5].

Human-AI Interaction Dynamics

Research demonstrates that attributing agency to artificial intelligence is not necessarily the primary factor influencing politeness in user interactions. Instead, users who believe they are engaging with a person—regardless of whether the entity on the other end is human or computer—tend to exhibit behaviours typically associated with establishing interpersonal relationships, including politeness. Conversely, when users are aware that they are communicating with a computer, they are less likely to display such behaviours [6].

This pattern may help explain why users display politeness to large language models (LLMs) and generative AI agents. As these systems become more emotionally responsive and socially sophisticated, users increasingly attribute human-like qualities to them. This attribution encourages users to apply the same interpersonal communication mechanisms they use in interactions with other humans, thereby fostering polite exchanges.

Politeness in human-AI interactions often decreases as the interaction progresses. While users typically start out polite when engaging with AI, this politeness tends to diminish as their focus shifts to completing their tasks. Over time, users become more accustomed to interacting with AI and the complexity of their tasks may lessen, both of which contribute to a reduction in polite behaviour. For example, a user querying an LLM about a relatively low-risk scenario—such as running a snack bar—may quickly abandon polite language once the context becomes clear. In contrast, when faced with a higher-stakes task—such as understanding a legal concept—users may maintain politeness for longer, possibly due to increased cognitive demands or the seriousness of the task. In such scenarios, politeness may be perceived as facilitating better outcomes or advice, especially when uncertainty is involved.

Conclusion

Politeness in human-AI interactions is shaped by a complex interplay of social norms, individual user characteristics, and system design choices—such as the use of polite wake-words and emotionally responsive AI behaviours. While attributing agency to AI may not be the primary driver of politeness, users tend to display interpersonal behaviours like politeness when they perceive they are interacting with a person, regardless of whether the entity is human or computer.

As AI agents become more emotionally and socially sophisticated, users increasingly apply human-like communication strategies to these systems. However, politeness tends to wane as familiarity grows and task complexity diminishes, with higher-stakes scenarios sustaining polite engagement for longer. Recognizing these dynamics is crucial for designing AI systems that foster respectful and effective communication, ultimately supporting positive user experiences and outcomes.

Exploring Quantum User Experience: Future of Human-Computer Interaction

Introduction

In today’s rapidly evolving digital world, our interactions with technology are no longer limited to simple interfaces and straightforward feedback. The growth of cloud infrastructure, the explosion of big data, and the integration of advanced Artificial Intelligence (AI) have transformed the very foundation of how humans connect with systems. As applications become more intelligent, pervasive, and context-aware—operating seamlessly across devices in real-time—users expect more fluid, responsive, and personalized experiences than ever before. These expectations challenge the boundaries of traditional Human-Computer Interaction (HCI) and User Experience (UX) models, which were not designed for such complexity or dynamism.

Legacy frameworks like Human-Centred Design (HCD), User-Centred Design (UCD), the Double Diamond model, and Design Thinking have provided structure for decades, yet their sequential stages and lengthy feedback loops can’t keep pace with the demands of today’s interconnected, AI-driven world. To design experiences that truly resonate, we must rethink our approach—moving beyond rigid methodologies and embracing new paradigms that account for the unpredictable, adaptive nature of modern technology. The future of HCI calls for innovative, human-centred AI design strategies that acknowledge the unique capabilities and limitations of intelligent systems from the very start [1].

Quantum User Experience (QUX): An Alternative Perspective for HCI

When examining ways to enhance Human-Computer Interaction (HCI) methodologies, it becomes apparent that these approaches share similarities with the fundamental phenomena underlying nature and life itself. Drawing inspiration from Quantum Mechanics, we can establish a theoretical analogy to HCI, which gives rise to the concept of Quantum User Experience (QUX). QUX offers a distinct departure from traditional User Experience (UX) models in several ways [2]:

  • Fluidity: QUX is not rigid, thus enabling experiences to be more dynamic and adaptable.
  • Adaptability: QUX is not linear, allowing experiences to evolve in response to users’ changing needs.
  • Mathematical Foundation: QUX is not random; it leverages mathematical data derived from measurements and research on user behaviour. This approach integrates feedback from previous processes and their associated data into new experiences.
  • Temporal Definition: QUX is not infinite; experiences are distinctly defined in time, avoiding endless cycles of testing and iteration.

To reinforce this theoretical framework, the digital cosmos is conceptualized as the global network—such as the internet—functioning on two key scales: the macroscopic scale, which encompasses phenomena that are detectable, analyzable, and able to be influenced at a broad level across the network; and the microscopic scale, which refers to aspects that are perceived and felt, yet not directly observed or measured.

In the digital cosmos, three core entities serve as the foundation for its structure: Agents, Interactions, and Objects. Agents encompass not only users, but also any entity capable of engaging with others, such as bots, engines, or platforms. Interactions refer to the exchanges or communications that occur between these agents, facilitating connections and the flow of information. Objects represent the wide range of content found within the global network, including text, audio, and visual elements, all of which contribute to the richness and diversity of the digital environment.

Within the Quantum User Experience (QUX) framework, experiences are categorized as either macro- or micro-experiences: macro-experiences encompass collective behavioural patterns that involve multiple entities and significantly influence individuals or groups, while micro-experiences represent individual responses to notable stimuli or events that hold personal significance and shape one’s overall perception or feelings. Together, these experiences form the cumulative fabric of QUX, contributing to the flow of information across the vast structure of the digital cosmos.

Quantum-Scale Behaviour of Agents, Interactions, and Objects in QUX

Within the QUX framework, the behaviours of Agents, Interactions, and Objects can be interpreted through a quantum lens, revealing how microscopic traits and probabilistic states shape the digital cosmos.

Agents are entities that exhibit microscopic characteristics—such as cognitive, social, or psychological attributes—which manifest as observable macroscopic behaviours. In the QUX perspective, Agents are conceptualized as “strings,” drawing a parallel to String Theory. This analogy suggests that all observable phenomena within the digital cosmos, at a macroscopic level, originate from the various vibrational states of these strings.

Interactions encompass the diverse behaviours that occur between agents (as vibrating strings) and objects, influenced by users’ perceptions and experiences over time. These interactions function as probabilistic quantum systems in superposition, described by their intensity (the strength of the interaction) and frequency (the variance of the interaction over time). Upon measurement, these probabilistic states collapse into observable Quantum Experience (QXE) states, allowing for a flexible and probabilistic approach to modelling user engagement.

Objects are defined as probabilistic Experience Elements (XL), which serve as the fundamental building blocks—such as buttons—of a system, application, or service. These objects possess a spectrum of possible values governed by probabilities and are characterized by discrete Experience Quanta (XQ) energy units resulting from user interactions. This framework supports real-time adaptability and multivariate evaluation of experiences, surpassing the limitations of traditional, rigid A/B testing methods.
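
The cited framework is presented conceptually, so the following is a minimal toy sketch, not part of QUX itself: the ExperienceElement class, its method names, and all numeric weights below are illustrative assumptions. It treats an Experience Element (XL) as a weighted superposition of design variants that collapses to a single observed variant on each measurement, with each Experience Quantum (XQ) of engagement feeding back into the weights:

```python
import random

class ExperienceElement:
    """Toy Experience Element (XL): variants held in a weighted 'superposition'.

    Purely illustrative; the QUX framework does not prescribe this code."""

    def __init__(self, name, variants):
        self.name = name
        self.weights = dict(variants)  # variant label -> non-negative weight

    def probabilities(self):
        total = sum(self.weights.values())
        return {v: w / total for v, w in self.weights.items()}

    def measure(self):
        """Collapse the superposition: the user observes exactly one variant."""
        probs = self.probabilities()
        return random.choices(list(probs), weights=list(probs.values()))[0]

    def deposit_quantum(self, variant, xq=1.0):
        """Record one Experience Quantum (XQ) of engagement, nudging
        future probabilities toward the rewarded variant."""
        self.weights[variant] += xq

# A checkout button with three candidate designs in superposition.
button = ExperienceElement("checkout", {"green": 1.0, "blue": 1.0, "text-only": 1.0})
for _ in range(1000):
    shown = button.measure()          # one 'measurement' per user session
    if shown == "green":              # pretend the green variant converts best
        button.deposit_quantum("green", xq=0.05)
print(button.probabilities())         # the distribution drifts toward 'green'
```

In this reading, every interaction is a measurement that updates the element’s probability distribution, which is the sense in which QUX claims to support real-time, multivariate evaluation beyond one-shot A/B tests.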

Fig 1. QUX phase breakdown. Micro-experiences combine to form a superposition of states, collapsing into the highest-probability quantum user experiences.

Conclusion: Quantum Mechanics as Inspiration for QUX

The QUX framework draws profound inspiration from quantum mechanics, reshaping our understanding of human-computer interaction by adopting concepts like superposition, probabilistic states, and collapse. By viewing agents as vibrating strings—much like those in String Theory—QUX reimagines the digital cosmos as a domain where microscopic traits and behaviours coalesce into observable, macroscopic phenomena. Interactions function as quantum systems, existing in probabilistic superpositions until measured, at which point they collapse into tangible Quantum Experience (QXE) states. Objects, conceptualized as probabilistic Experience Elements, embody the quantum notion of possible values and energy units that adapt in real-time to user input. This quantum-inspired perspective enables flexible modelling of engagement, surpassing the limitations of classical, deterministic approaches.

As cloud computing, artificial intelligence, and quantum technologies advance, QUX motivates a paradigm shift in human-centred design and methodology. It champions adaptability, multivariate evaluation, and responsiveness to evolving user needs—mirroring the uncertainty and dynamism at the heart of quantum mechanics. In this way, QUX not only offers a novel theoretical foundation, but also empowers designers to meet the demands of a rapidly evolving technological landscape, ensuring that applications and systems remain attuned to the nuanced and changing nature of user experience.

Rethinking Reality: The Unwritten Story of Time

The universe is not a closed book, but rather a narrative in progress—its future pages unwritten, its history ever-expanding. The digits that define time, instead of being predetermined, seem to emerge in concert with our unfolding experience, hinting at a reality that is both participatory and creative.

Introduction

Time: we all experience its steady march, feel its passing in our bodies, and witness its effects as trees stretch skyward, animals age, and objects wear down. Our everyday understanding of time is one of motion—a ceaseless flow from past, to present, into an open future. Yet, what if the very nature of time is not what it seems? Physics offers a perspective that is at odds with our intuition, challenging us to rethink everything we believe about reality.

Albert Einstein’s revolutionary theory of relativity upended this familiar notion, proposing that time is not merely a backdrop to events, but a fourth dimension intricately woven into the fabric of the universe. In his “block universe” concept, the past, present, and future exist together in a four-dimensional space-time continuum, and every moment—from the birth of the cosmos to its distant future—is already etched into reality. In this cosmic tapestry, the initial conditions of the universe determine all that follows, leaving little room for the unfolding uncertainty we sense in our lives [1].

Contrasting Views: Einstein, Quantum Mechanics, and the Nature of Time

Most physicists today accept Einstein’s pre-determined view of reality, in which all events—past, present, and future—are fixed within the space-time continuum. However, some physicists who explore the concept of time more deeply find themselves troubled by the implications of this theory, particularly when the quantum mechanical perspective is considered. At the quantum scale, particles act in a probabilistic manner, existing in multiple states at once until measured; it is only through measurement that a particle assumes a single, definite state.

While each measurement of a particle is random and unpredictable, the overall results tend to conform to predictable statistical patterns. The behaviour of quantum particles is described by the evolution of their wave function over time. Quantum wave functions require a fixed spacetime, whereas relativity treats spacetime as dynamic and observer-dependent. This fundamental difference complicates efforts to develop a theory of quantum gravity capable of quantizing spacetime—a major challenge in modern physics [2].
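
For reference, the evolution referred to here is governed by the time-dependent Schrödinger equation, where the fixed, external time parameter t is explicit:

$$ i\hbar\,\frac{\partial}{\partial t}\,\Psi(\mathbf{x}, t) = \hat{H}\,\Psi(\mathbf{x}, t) $$

Time enters only as a classical label on the state, which is precisely the rigid background that a dynamic, observer-dependent spacetime refuses to supply.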

In quantum mechanics, time and space coordinates are not treated equally: a system of N particles is described by N position variables, yet only a single time variable, an asymmetry widely regarded as a flaw in the theory. Relativity, in contrast, insists that time and space be treated on an equal footing, which would require placing time on the same level as the position coordinates. To overcome this, scientists developed the many-time formalism, in which a system of N particles is described by N distinct time variables alongside its N space variables, ensuring equal treatment of space and time [3].
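
As a sketch of that formalism (with $\hat{H}_k$ denoting the part of the Hamiltonian associated with particle k), the single wave function acquires one time argument per particle and obeys one evolution equation for each:

$$ \Psi = \Psi(\mathbf{x}_1, t_1;\, \dots;\, \mathbf{x}_N, t_N), \qquad i\hbar\,\frac{\partial \Psi}{\partial t_k} = \hat{H}_k\,\Psi, \quad k = 1, \dots, N $$

Setting all the clocks equal, $t_1 = \dots = t_N = t$, recovers the familiar single-time Schrödinger equation.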

If physicists are to solve the mystery of time, they must weigh not only Einstein’s space-time continuum, but also the fact that the universe is fundamentally quantum, governed by probability and uncertainty. Quantum theory treats time very differently from Einstein’s theory: in quantum mechanics, time is rigid, not intertwined with the dimensions of space as it is in relativity.

Gisin’s Intuitionist Approach and Indeterminacy

Swiss physicist Nicolas Gisin has published papers aiming to clarify the uncertainty surrounding time in physics. Gisin argues that time—both generally and as we experience it in the present—can be expressed in intuitionist mathematics, a century-old framework that rejects numbers with infinitely many digits.

Using intuitionist mathematics to describe the evolution of physical systems reveals that time progresses in only one direction, resulting in the creation of new information. This stands in stark contrast to the determinism implied by Einstein’s equations, while echoing the unpredictability inherent in quantum mechanics. If numbers are finite and limited in precision, then nature itself is imprecise and inherently unpredictable.

Gisin’s approach can be likened to weather forecasting: precise predictions are impossible because the initial conditions of every atom on Earth cannot be known with infinite accuracy. In intuitionist mathematics, the digits specifying the weather’s state and future evolution are revealed in real time as the future unfolds. Thus, reality is indeterministic and the future remains open, with time not simply unfolding as a sequence of predetermined events. Instead, the digits that define time are continuously created as time passes—a process of creative unfolding.
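
The forecasting analogy can be made concrete with a chaotic toy system. The sketch below illustrates finite-precision sensitivity, not Gisin’s intuitionist formalism itself: it iterates the logistic map from two initial conditions that agree to ten decimal places and watches them diverge completely.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n); fully chaotic at r = 4.
r = 4.0
x_exact = 0.123456789123456     # the 'true' initial condition
x_rounded = round(x_exact, 10)  # the same state, known to only ten decimal digits

for n in range(1, 61):
    x_exact = r * x_exact * (1 - x_exact)
    x_rounded = r * x_rounded * (1 - x_rounded)
    if n % 10 == 0:
        print(f"step {n:2d}: exact = {x_exact:.6f}   truncated = {x_rounded:.6f}")
# Around step 40 the trajectories decorrelate completely: the digits beyond
# the tenth decimal place decide the long-run future of the system.
```

If those unspecified digits do not yet exist, as intuitionist mathematics holds, then the long-run future is not merely unknown but genuinely undetermined.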

Gisin’s ideas attempt to establish a common indeterministic language for both classical and quantum physics. Quantum mechanics holds that information can be shuffled or moved around, but never destroyed. However, if the digits defining the state of the universe grow with time, as Gisin proposes, then new information is also being created. Thus, according to Gisin, information is not preserved in the universe: new information is continually being created by the mere process of measurement.

The Evolving Nature of Time

As we survey the landscape of contemporary physics, it becomes apparent that our classical conception of time is far from settled. Instead, it stands at the crossroads of discovery—a concept perpetually reshaped by new theories and deeper reflection. Einstein’s vision of a pre-determined reality, where all moments are frozen within the space-time continuum, offers comfort in its order and predictability. Yet, this view is challenged by the quantum world, where uncertainty reigns, and events transpire in a haze of probability until measurement brings them into sharp relief.

The friction between the determinism of relativity and the indeterminacy of quantum mechanics compels us to look beyond conventional frameworks. Quantum mechanics treats time as an inflexible backdrop, severed from the intricacies of space, whereas relativity insists on weaving time and space together, equal and dynamic. Gisin’s intuitionist approach further invites us to reflect on the very bedrock of reality—questioning whether information is static or endlessly generated as the universe unfolds.

This ongoing dialogue between classical physics and emerging quantum perspectives not only exposes the limitations of our current understanding but also sparks a profound sense of curiosity. If, as Gisin suggests, information is continuously created, then the universe is not a closed book, but rather a narrative in progress—its future pages unwritten, its history ever-expanding. The digits that define time, instead of being predetermined, seem to emerge in concert with our unfolding experience, hinting at a reality that is both participatory and creative.

Exploring Quantum Computing and Wormholes: A New Frontier

As we continue to unlock the secrets of quantum gravity and teleportation, each discovery invites us to ponder just how much more there is to unveil, a testament to the infinite possibilities that lie hidden within the quantum tapestry of our universe. The next revelation may be just around the corner, waiting to astonish us all over again, bringing us closer to understanding our universe, and our place within it.

Introduction

Imagine voyaging across the galaxy at warp speed, like in Star Trek or Star Wars, where starships zip through cosmic shortcuts called wormholes. While these cinematic adventures may seem far-fetched, the wildest twist is that wormholes aren’t just a figment of Hollywood’s imagination—quantum physics hints they might truly exist, emerging from the very fabric of quantum entanglement. This remarkable idea flips our understanding of the universe: space and time could actually spring from invisible quantum connections, reshaping what we know about black holes and the universe itself.

This revolutionary perspective burst onto the scene in 2013, thanks to Juan Maldacena and Leonard Susskind, who suggested that whenever two systems are maximally entangled, a wormhole connects them, anchoring each system at opposite ends [1]. Building on the pioneering work of Einstein, Podolsky, and Rosen (EPR) on quantum entanglement and the Einstein-Rosen (ER) description of wormholes, Maldacena and Susskind daringly bridged quantum physics with general relativity, inviting us to think of our universe as far stranger, and far more interconnected, than we ever imagined [2].

Einstein-Rosen Bridges and the Origins of Wormholes

In their seminal paper, Einstein and Rosen encountered the concept of wormholes while seeking to describe space-time and the subatomic particles suspended within it. Their investigation centred on disruptions in the fabric of space-time, originally revealed by German physicist Karl Schwarzschild in 1916, just months after Einstein published his general theory of relativity.

Schwarzschild demonstrated that mass can become so strongly self-attractive due to gravity that it concentrates infinitely, causing a sharp curvature in space-time. At these points, the variables in Einstein’s equations escalate to infinity, leading the equations themselves to break down. Such regions of concentrated mass, known as singularities, are found throughout the universe and are concealed within the centres of black holes. This hidden nature means that singularities cannot be directly described or observed, underscoring the necessity for quantum theory to be applied to gravity.
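
The critical scale in Schwarzschild’s solution is what is now called the Schwarzschild radius; for a body of mass M,

$$ r_s = \frac{2GM}{c^2} $$

Compress the mass inside this radius and the Schwarzschild coordinates break down at $r = r_s$, with a genuine curvature singularity at the centre, $r = 0$; for the Sun, $r_s$ is roughly 3 kilometres.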

Einstein and Rosen utilized Schwarzschild’s mathematical framework to incorporate particles into general relativity. To resolve the mathematical challenges posed by singularities, they extracted these singular points from Schwarzschild’s equations and introduced new variables. These variables replaced singularities with an extra-dimensional tube, which connects to another region of space-time. They posited that these “bridges,” or wormholes, could represent particles themselves.

Interestingly, while attempting to unite particles and wormholes, Einstein and Rosen did not account for a peculiar particle phenomenon they had identified months earlier with Podolsky in the EPR paper: quantum entanglement. Decades later, it was precisely this entanglement that quantum gravity researchers fixated on as a way to explain the space-time hologram.

Space-Time as a Hologram

The concept of space-time holography emerged in the 1980s, when black hole theorist John Wheeler proposed that space-time, along with everything contained within it, could arise from fundamental information. Building on this idea, Dutch physicist Gerard ‘t Hooft and others speculated that the emergence of space-time might be similar to the way a hologram projects a three-dimensional image from a two-dimensional surface. This notion was further developed in 1994 by Leonard Susskind in his influential paper “The World as a Hologram,” wherein he argued that the curved space-time described by general relativity is mathematically equivalent to a quantum system defined on the boundary of that space.

A major breakthrough came a few years later when Juan Maldacena demonstrated that anti-de Sitter (AdS) space—a theoretical universe with negative energy and a hyperbolic geometry—acts as a true hologram. In this framework, objects become infinitesimally small as they move toward the boundary, and the properties of space-time and gravity inside the AdS universe precisely correspond with those of a quantum system known as conformal field theory (CFT) defined on its boundary. This discovery established a profound connection between the geometry of space-time and the information encoded in quantum systems, suggesting that the universe itself may operate as a vast holographic projection.

ER = EPR

Recent advances in theoretical and experimental physics have leveraged the Sachdev-Ye-Kitaev (SYK) model to explore the practical realization of wormholes, particularly in relation to quantum entanglement and teleportation. Building on Maldacena and Susskind’s 2013 insight that suggested a deep connection between quantum entanglement (EPR pairs) and wormhole bridges (ER bridges)—summarized by the equation ER = EPR—researchers have used the SYK model to make these ideas more tangible. The SYK model, which describes a system of randomly interacting particles, provides a mathematically tractable framework that mirrors the chaotic behaviour of black holes and the properties of quantum gravity.
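
Concretely, the SYK model consists of N Majorana fermions $\chi_i$ coupled four at a time with random strengths, with Hamiltonian

$$ H = \sum_{1 \le i < j < k < l \le N} J_{ijkl}\,\chi_i \chi_j \chi_k \chi_l $$

where the couplings $J_{ijkl}$ are drawn from a Gaussian distribution. At low temperatures this simple system reproduces key features of black holes in a two-dimensional gravity theory, which is why two coupled copies of it can stand in for the two mouths of a wormhole.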

In 2017, Daniel Jafferis, Ping Gao, and Aaron Wall extended the ER = EPR conjecture to the realm of traversable wormholes, using the SYK model to design scenarios where negative energy can keep a wormhole open long enough for information to pass through. They demonstrated that this gravitational picture of a traversable wormhole directly corresponds to the quantum teleportation protocol, in which quantum information is transferred between two entangled systems. The SYK model enabled researchers to simulate the complex dynamics of these wormholes, making the abstract concept of quantum gravity more accessible for experimental testing.

Fig 1. How a quantum computer simulated a wormhole

In 2022, Jafferis and Gao, in collaboration with others, successfully implemented wormhole teleportation using the SYK model as a blueprint for their experiments on Google’s Sycamore quantum processor. They encoded information in a qubit and observed its transfer from one quantum system to another, effectively simulating the passage of information through a traversable wormhole as predicted by the SYK-based framework. This experiment marked a significant step forward in the study of quantum gravity, as it provided the first laboratory evidence for the dynamics of traversable wormholes, all made possible by the powerful insights offered by the SYK model.

Conclusion

Much like the mind-bending scenarios depicted in Hollywood blockbusters such as Star Trek and Star Wars, where spaceships traverse wormholes and quantum teleportation moves characters across galaxies, the real universe now seems to be catching up with fiction.

The remarkable journey from abstract mathematical conjectures to tangible laboratory experiments has revealed a universe far stranger, and more interconnected, than we could have ever imagined. The idea that information can traverse cosmic distances through the fabric of space-time, guided by the ghostly threads of quantum entanglement and the mysterious passageways of wormholes, blurs the line between science fiction and reality.

As we continue to unlock the secrets of quantum gravity and teleportation, each discovery invites us to ponder just how much more there is to unveil, a testament to the infinite possibilities that lie hidden within the quantum tapestry of our universe. The next revelation may be just around the corner, waiting to astonish us all over again, bringing us closer to understanding our universe, and our place within it.

Beyond Barriers: How Quantum Tunneling Powers Our Digital and Cosmic World

From memory devices to the heart of stars

Consider the operation of a flash memory card, such as an SSD or USB drive, which retains data even when powered off; the immense energy output of the Sun and stars; or research indicating that enzyme catalysis and DNA mutation can proceed along classically forbidden routes [1]. These diverse phenomena are unified by the quantum mechanical effect known as quantum tunneling.

Quantum tunneling refers to the capacity of particles to penetrate energy barriers despite lacking the requisite energy to surpass these obstacles according to classical mechanics. This effect arises from superposition, which imparts wave-like characteristics to quantum-scale particles and permits probabilistic presence across multiple locations. The transmission coefficient, which quantifies the likelihood of tunneling, is determined by the barrier’s height and width, in addition to the particle’s mass and energy [2].

Application of the time-independent Schrödinger equation allows the decomposition of the particle’s wave function into components situated within and outside the barrier. By ensuring continuity of the wave functions at the boundaries, the transmission coefficient can be derived. This theoretical framework has been effectively utilized in various fields, including the development of scanning tunneling microscopes and quantum dots.
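
For the textbook case of a rectangular barrier of height $V_0$ and width L, the matching procedure described above yields a closed-form transmission coefficient for a particle of mass m and energy $E < V_0$:

$$ T = \left[\,1 + \frac{V_0^2\,\sinh^2(\kappa L)}{4E\,(V_0 - E)}\,\right]^{-1}, \qquad \kappa = \frac{\sqrt{2m\,(V_0 - E)}}{\hbar} $$

For a wide or tall barrier ($\kappa L \gg 1$) this reduces to $T \approx 16\,(E/V_0)(1 - E/V_0)\,e^{-2\kappa L}$, so the tunneling probability falls off exponentially with the barrier’s width and with the square root of its effective height, exactly the dependence noted above.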

Running your digital world

Modern electronics exist in a delicate balance with quantum tunneling. At the heart of today’s microprocessors are advanced transistors, which depend on the quantum ability of electrons to traverse ultra-thin insulating barriers. This tunneling enables transistors to switch on and off at remarkable speeds while using minimal energy, supporting the drive for faster, more energy-efficient devices. As technology advances and the insulating layers within transistors are made thinner to fit more components onto a single chip, the probability of electrons tunneling through these barriers inevitably increases. This leads to unwanted leakage currents, which can generate excess heat and disrupt circuit performance. Such leakage is a major challenge, setting hard physical boundaries on how much further Moore’s law—the trend of doubling transistor density—can be extended.
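
That exponential sensitivity is easy to quantify with the $T \sim e^{-2\kappa L}$ estimate from the previous section. The sketch below uses an illustrative 3 eV oxide barrier (a representative round number, not a specific process node) to show what thinning the insulator does to the leakage probability:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def tunneling_estimate(barrier_ev, thickness_nm, energy_ev=0.0):
    """Rough T ~ exp(-2*kappa*L) for E < V0 (order-one prefactor dropped)."""
    kappa = math.sqrt(2 * M_E * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * thickness_nm * 1e-9)

# Illustrative 3 eV oxide barrier at gate-oxide-like thicknesses.
for t in (3.0, 2.0, 1.0):
    print(f"{t:.0f} nm oxide: T ~ {tunneling_estimate(3.0, t):.2e}")
```

In this toy estimate, thinning the oxide from 3 nm to 1 nm raises the tunneling probability by roughly sixteen orders of magnitude, which is why gate leakage became a first-order design constraint as chips shrank.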

Yet, the same quantum effect that poses challenges in mainstream electronics is ingeniously exploited in specialized components. Tunnel diodes, for example, are engineered with extremely thin junctions that encourage electrons to quantum tunnel from one side to the other. This property allows tunnel diodes to switch at incredibly high speeds, making them invaluable for high-frequency circuits and telecommunications technologies where rapid response times are essential.

Quantum tunneling is also fundamental to how data is stored in non-volatile memory devices such as flash drives and solid-state drives (SSDs). In these devices, information is retained by moving electrons onto or off a “floating gate,” separated from the rest of the circuit by a thin oxide barrier. When writing or erasing data, electrons tunnel through this barrier, and once in place, they remain trapped, even if the device is disconnected from power. This is why your photos, documents, and other files remain safely stored on a USB stick or SSD long after you unplug them.

In summary, quantum tunneling is both a challenge and a tool in modern electronics. Engineers must constantly innovate to suppress unwanted tunneling in ever-smaller transistors, while simultaneously designing components that rely on controlled tunneling for speed, efficiency, and reliable data storage. This duality underscores how quantum mechanics is not merely an abstract scientific theory, but a practical force shaping the infrastructure of everyday digital life.

Powering stars, chips, and qubits

On a cosmic scale, quantum tunneling is fundamental to the process by which stars, including the Sun, emit light. It facilitates the fusion of protons within stellar cores by enabling them to overcome their mutual electrostatic repulsion, thus allowing nuclear fusion to occur at temperatures lower than those required in a strictly classical context. The existence of life on Earth relies on this mechanism, as it powers the energy output of stars that sustain our planet. Insights into tunneling continue to inform research efforts aimed at developing fusion reactors, where analogous physical principles must be managed under controlled conditions rather than governed by stellar gravity.
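
The stellar case is quantified by the Gamow factor: two nuclei of charge $Z_1 e$ and $Z_2 e$ approaching with relative velocity v tunnel through their Coulomb barrier with a probability suppressed as

$$ P \sim e^{-2\pi\eta}, \qquad \eta = \frac{Z_1 Z_2\, e^2}{4\pi\epsilon_0\,\hbar v} $$

At the roughly 15 million kelvin of the Sun’s core this probability is tiny but non-zero, so fusion proceeds slowly and steadily rather than not at all, giving stars their long, stable lifetimes.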

In superconducting circuits, which comprise materials capable of conducting electric current without resistance, pairs of electrons known as Cooper pairs tunnel through thin insulating barriers called Josephson junctions. When cooled to near absolute zero, these systems enable billions of paired electrons to behave collectively as a single quantum entity. This phenomenon has resulted in devices with exceptional sensitivity for measuring voltage and magnetic fields. Additionally, Josephson junctions play a central role in the architecture of superconducting qubits, where precision control of tunneling between quantum states enables reliable quantum logic operations.
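
The junction’s behaviour is captured by the two Josephson relations, which link the supercurrent I and the voltage V across the junction to the phase difference $\varphi$ between the two superconductors’ collective wave functions:

$$ I = I_c\,\sin\varphi, \qquad \frac{d\varphi}{dt} = \frac{2e}{\hbar}\,V $$

The charge 2e, rather than e, is the fingerprint of the tunneling Cooper pairs, and the critical current $I_c$ sets the junction’s scale; these relations underpin both the sensitive magnetometers and the superconducting qubits described above.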

The Nobel Prize in Physics 2025 was awarded to John Clarke, Michael H. Devoret, and John M. Martinis for their pioneering work in designing a macroscopic system utilizing a Josephson junction. The system was composed of two superconductors separated by an ultra-thin oxide layer, only a few nanometers thick. This layer permitted electron tunneling, and the observed discrete energy levels were in complete conformity with quantum mechanical predictions, a notable accomplishment from both experimental and theoretical standpoints [3].

A feature, a bug, and a design principle

Imagine a world where the chemical foundations of life and technology remain a mystery. Without quantum mechanics, our understanding of chemical bonds would be hopelessly incomplete, the medicines that save lives daily could never be designed, and the machines and electronics we now take for granted would not exist.

Quantum tunneling stands as a striking testament that quantum phenomena are not mere scientific oddities; they are the bedrock of modern innovation. The same quantum effect that challenges engineers by causing troublesome current leaks in ever-smaller transistors is deliberately harnessed for breakthroughs: non-volatile memory, lightning-fast diodes, atomic-resolution microscopes, and the frontier of quantum computing all depend on it.

Every second, billions of electrons tunnel invisibly within the technology that surrounds you, their quantum behaviour silently orchestrating our digital universe. Far from being an abstract theory, quantum mechanics is the invisible engine driving your phone, your computer, your lasers, and LEDs—the essential infrastructure of twenty-first century life. Our entire technological existence pivots on the strange but real phenomena of the quantum world, challenging us to see science not as distant or esoteric, but as the very substance of our everyday reality.