Sorry, But AI Deserves ‘Please’ Too!

As we increasingly lean on AI as a trusted ally in our professional and personal lives, we must ponder the implications of our reliance on their capacity to comprehend and craft natural language. What does this mean for our autonomy, creativity, and the very essence of human connection?

Introduction

Large language models (LLMs) and AI chatbots have become woven into the fabric of our workplaces and personal lives, inviting us to reflect on the profound shift in our interaction with technology. As we navigate this new landscape, we find ourselves reevaluating the role of artificial intelligence (AI) in our daily routines. These advancements have not merely changed how we access information, seek advice, and perform research; they have opened a door to an era where insights and solutions are unveiled with remarkable speed and efficiency. As we increasingly lean on AI as a trusted ally in our professional and personal lives, we must ponder the implications of our reliance on their capacity to comprehend and craft natural language. What does this mean for our autonomy, creativity, and the very essence of human connection?

As these AI systems evolve to simulate human-like interactions, an intriguing phenomenon has emerged: people often address AI with polite phrases like “please” and “thank you,” echoing the social etiquette typically reserved for human conversations. This shift reflects a deeper societal change, where individuals begin to attribute a sense of agency and respect to machines, blurring the lines between human and artificial interaction. Furthermore, as AI continues to improve, this trend may lead to even more sophisticated relationships, encouraging users to engage with AI in ways that foster collaboration and mutual understanding, ultimately enhancing productivity and satisfaction in both personal and professional interactions.

With AI entities now entrenched in collaborative environments, one must ask: how do we, as humans, truly treat these so-called conversational agents? Despite AI’s lack of real emotions and its indifference to our so-called politeness, the patterns of user interaction reveal deep-seated beliefs about technology and the essence of human-AI relationships. LLMs are crafted to imitate human communication, creating an illusion of agency that drives users to apply familiar social norms. In collaborative contexts, politeness becomes not just a nicety, but a catalyst for cooperation, compelling users to extend the very same respectful behaviour to AI that they reserve for their human colleagues. [1]

Politeness Towards Machines and the CASA Paradigm

Politeness plays a vital role in shaping social interactions, particularly in environments where individuals must navigate complex power dynamics. It promotes harmony, reduces misunderstandings, and fosters cooperation among participants. Rather than being a rigid set of linguistic rules, politeness is a dynamic process involving the negotiation of social identities and power dynamics. These negotiations are influenced by participants’ backgrounds, their relationships with one another, and the specific context in which the interaction takes place [2].

Extending the concept of politeness to interactions with machines highlights the broader question of social engagement with technology. The Computers Are Social Actors (CASA) paradigm states that humans interact with computers in a fundamentally social manner, not because they consciously believe computers are human-like, nor due to ignorance or psychological dysfunction. Rather, this social orientation arises when people engage with computers, revealing that human-computer interactions are biased towards applying social norms similar to those used in human-to-human communication [3].

The CASA approach demonstrates that users unconsciously transfer rules and behaviours from human-to-human interactions, including politeness, to their engagements with AI. However, research examining young children’s interactions with virtual agents revealed contrasting patterns. Children often adopted a command-based style of communication with virtual agents, and this behaviour sometimes extended to their interactions with parents and educators in their personal lives [4].

Further studies into human-robot interaction have shown that the choice of wake-words can influence how users communicate with technology. For instance, using direct wake-words such as “Hey, Robot” may inadvertently encourage more abrupt or rude communication, especially among children, which could spill over into their interactions with other people. Conversely, adopting polite wake-words like “Excuse me, Robot” was found to foster more respectful and considerate exchanges with the technology [5].

Human-AI Interaction Dynamics

Research demonstrates that attributing agency to artificial intelligence is not necessarily the primary factor influencing politeness in user interactions. Instead, users who believe they are engaging with a person—regardless of whether the entity on the other end is human or computer—tend to exhibit behaviours typically associated with establishing interpersonal relationships, including politeness. Conversely, when users are aware that they are communicating with a computer, they are less likely to display such behaviours [6].

This pattern may help explain why users display politeness to large language models (LLMs) and generative AI agents. As these systems become more emotionally responsive and socially sophisticated, users increasingly attribute human-like qualities to them. This attribution encourages users to apply the same interpersonal communication mechanisms they use in interactions with other humans, thereby fostering polite exchanges.

Politeness in human-AI interactions often decreases as the interaction progresses. While users typically start out polite when engaging with AI, this politeness tends to diminish as their focus shifts to completing their tasks. Over time, users become more accustomed to interacting with AI and the complexity of their tasks may lessen, both of which contribute to a reduction in polite behaviour. For example, a user querying an LLM about a relatively low-risk scenario—such as running a snack bar—may quickly abandon polite language once the context becomes clear. In contrast, when faced with a higher-stakes task—such as understanding a legal concept—users may maintain politeness for longer, possibly due to increased cognitive demands or the seriousness of the task. In such scenarios, politeness may be perceived as facilitating better outcomes or advice, especially when uncertainty is involved.

Conclusion

Politeness in human-AI interactions is shaped by a complex interplay of social norms, individual user characteristics, and system design choices—such as the use of polite wake-words and emotionally responsive AI behaviours. While attributing agency to AI may not be the primary driver of politeness, users tend to display interpersonal behaviours like politeness when they perceive they are interacting with a person, regardless of whether the entity is human or computer.

As AI agents become more emotionally and socially sophisticated, users increasingly apply human-like communication strategies to these systems. However, politeness tends to wane as familiarity grows and task complexity diminishes, with higher-stakes scenarios sustaining polite engagement for longer. Recognizing these dynamics is crucial for designing AI systems that foster respectful and effective communication, ultimately supporting positive user experiences and outcomes.

Exploring Quantum Computing and Wormholes: A New Frontier

 As we continue to unlock the secrets of quantum gravity and teleportation, each discovery invites us to ponder just how much more there is to unveil, a testament to the infinite possibilities that lie hidden within the quantum tapestry of our universe. The next revelation may be just around the corner, waiting to astonish us all over again, bringing us closer to understanding our universe, and our place within it.

Introduction

Imagine voyaging across the galaxy at warp speed, like in Star Trek or Star Wars, where starships zip through cosmic shortcuts called wormholes. While these cinematic adventures may seem far-fetched, the wildest twist is that wormholes aren’t just a figment of Hollywood’s imagination—quantum physics hints they might truly exist, emerging from the very fabric of quantum entanglement. This remarkable idea flips our understanding of the universe: space and time could actually spring from invisible quantum connections, reshaping what we know about black holes and the universe itself.

This revolutionary perspective burst onto the scene in 2013, thanks to Juan Maldacena and Leonard Susskind, who suggested that whenever two systems are maximally entangled, a wormhole connects them, anchoring each system at opposite ends [1]. Building on the pioneering work of Einstein, Podolsky, and Rosen (EPR) on quantum entanglement and the Einstein-Rosen (ER) description of wormholes, Maldacena and Susskind daringly bridged quantum physics with general relativity, inviting us to think of our universe as far stranger, and far more interconnected, than we ever imagined [2].

Einstein-Rosen Bridges and the Origins of Wormholes

In their seminal 1935 paper, Einstein and Rosen encountered the concept of wormholes while seeking to describe space-time and the subatomic particles suspended within it. Their investigation centred on disruptions in the fabric of space-time, originally revealed by German physicist Karl Schwarzschild in 1916, just months after Einstein published his general theory of relativity.

Schwarzschild demonstrated that mass can become so strongly self-attractive due to gravity that it concentrates infinitely, causing a sharp curvature in space-time. At these points, the variables in Einstein’s equations escalate to infinity, leading the equations themselves to break down. Such regions of concentrated mass, known as singularities, are found throughout the universe and are concealed within the centres of black holes. This hidden nature means that singularities cannot be directly described or observed, underscoring the necessity for quantum theory to be applied to gravity.

Einstein and Rosen utilized Schwarzschild’s mathematical framework to incorporate particles into general relativity. To resolve the mathematical challenges posed by singularities, they extracted these singular points from Schwarzschild’s equations and introduced new variables. These variables replaced singularities with an extra-dimensional tube, which connects to another region of space-time. They posited that these “bridges,” or wormholes, could represent particles themselves.

Interestingly, while attempting to unite particles and wormholes, Einstein and Rosen did not account for a peculiar particle phenomenon they had identified only months earlier with Podolsky in the EPR paper: quantum entanglement. Decades later, it was this very phenomenon that led quantum gravity researchers to fixate on entanglement as a way to explain the space-time hologram.

Space-Time as a Hologram

The concept of space-time holography emerged in the 1980s, when black hole theorist John Wheeler proposed that space-time, along with everything contained within it, could arise from fundamental information. Building on this idea, Dutch physicist Gerard ’t Hooft and others speculated that the emergence of space-time might be similar to the way a hologram projects a three-dimensional image from a two-dimensional surface. This notion was further developed in 1994 by Leonard Susskind in his influential paper “The World as a Hologram,” wherein he argued that the curved space-time described by general relativity is mathematically equivalent to a quantum system defined on the boundary of that space.

A major breakthrough came a few years later when Juan Maldacena demonstrated that anti-de Sitter (AdS) space—a theoretical universe with negative energy and a hyperbolic geometry—acts as a true hologram. In this framework, objects become infinitesimally small as they move toward the boundary, and the properties of space-time and gravity inside the AdS universe precisely correspond with those of a quantum system known as conformal field theory (CFT) defined on its boundary. This discovery established a profound connection between the geometry of space-time and the information encoded in quantum systems, suggesting that the universe itself may operate as a vast holographic projection.

ER = EPR

Recent advances in theoretical and experimental physics have leveraged the SYK (Sachdev-Ye-Kitaev) model to explore the practical realization of wormholes, particularly in relation to quantum entanglement and teleportation. Building on Maldacena and Susskind’s 2013 insight that suggested a deep connection between quantum entanglement (EPR pairs) and wormhole bridges (ER bridges)—summarized by the equation ER = EPR—researchers have used the SYK model to make these ideas more tangible. The SYK model, which describes a system of randomly interacting particles, provides a mathematically tractable framework that mirrors the chaotic behaviour of black holes and the properties of quantum gravity.

In 2017, Daniel Jafferis, Ping Gao, and Aaron Wall extended the ER = EPR conjecture to the realm of traversable wormholes, using the SYK model to design scenarios where negative energy can keep a wormhole open long enough for information to pass through. They demonstrated that this gravitational picture of a traversable wormhole directly corresponds to the quantum teleportation protocol, in which quantum information is transferred between two entangled systems. The SYK model enabled researchers to simulate the complex dynamics of these wormholes, making the abstract concept of quantum gravity more accessible for experimental testing.

Fig 1. How a quantum computer simulated a wormhole

In 2022, Jafferis and Gao, in collaboration with others, successfully implemented wormhole teleportation using the SYK model as a blueprint for their experiments on Google’s Sycamore quantum processor. They encoded information in a qubit and observed its transfer from one quantum system to another, effectively simulating the passage of information through a traversable wormhole as predicted by the SYK-based framework. This experiment marked a significant step forward in the study of quantum gravity, as it provided the first laboratory evidence for the dynamics of traversable wormholes, all made possible by the powerful insights offered by the SYK model.

Conclusion

Much like the mind-bending scenarios depicted in Hollywood blockbusters such as Star Trek and Star Wars, where spaceships traverse wormholes and quantum teleportation moves characters across galaxies, the real universe now seems to be catching up with fiction.

The remarkable journey from abstract mathematical conjectures to tangible laboratory experiments has revealed a universe far stranger, and more interconnected, than we could have ever imagined. The idea that information can traverse cosmic distances through the fabric of space-time, guided by the ghostly threads of quantum entanglement and the mysterious passageways of wormholes, blurs the line between science fiction and reality.

 As we continue to unlock the secrets of quantum gravity and teleportation, each discovery invites us to ponder just how much more there is to unveil, a testament to the infinite possibilities that lie hidden within the quantum tapestry of our universe. The next revelation may be just around the corner, waiting to astonish us all over again, bringing us closer to understanding our universe, and our place within it.

Beyond Barriers: How Quantum Tunneling Powers Our Digital and Cosmic World

From memory devices to the heart of stars

Consider the operation of a flash memory device, such as an SSD or USB drive, which retains data even when powered off; the immense energy output from the sun and stars; or research suggesting that quantum effects play a role in enzyme catalysis and DNA mutation [1]. These diverse examples are unified by a single quantum mechanical phenomenon: quantum tunneling.

Quantum tunneling refers to the capacity of particles to penetrate energy barriers despite lacking the requisite energy to surpass these obstacles according to classical mechanics. This effect arises from the wave-like character of quantum particles: a particle’s wave function does not stop abruptly at a barrier but decays inside it, leaving a non-zero probability of finding the particle on the far side. The transmission coefficient, which quantifies the likelihood of tunneling, is determined by the barrier’s height and width, in addition to the particle’s mass and energy [2].

Application of the time-independent Schrödinger equation allows the decomposition of the particle’s wave function into components inside and outside the barrier. By requiring continuity of the wave function and its derivative at the barrier’s boundaries, the transmission coefficient can be derived. This theoretical framework has been effectively utilized in various fields, including the development of scanning tunneling microscopes and quantum dots.
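To make this concrete, here is a minimal sketch of the standard rectangular-barrier result described above, written in Python. The barrier height, particle energy, and widths are illustrative values chosen for this example, not figures taken from the article or its references.

```python
import numpy as np

# Physical constants in SI units
HBAR = 1.054571817e-34   # reduced Planck constant (J*s)
M_E = 9.1093837015e-31   # electron mass (kg)
EV = 1.602176634e-19     # one electron-volt (J)

def transmission(E_eV, V0_eV, width_nm, mass=M_E):
    """Transmission coefficient for a rectangular barrier with E < V0:
    T = 1 / (1 + V0^2 * sinh^2(kappa * L) / (4 * E * (V0 - E))),
    where kappa = sqrt(2 * m * (V0 - E)) / hbar."""
    E, V0, L = E_eV * EV, V0_eV * EV, width_nm * 1e-9
    kappa = np.sqrt(2.0 * mass * (V0 - E)) / HBAR
    return 1.0 / (1.0 + (V0**2 * np.sinh(kappa * L)**2) / (4.0 * E * (V0 - E)))

# Illustrative case: a 1 eV electron meeting a 2 eV barrier of two different widths.
for width_nm in (1.0, 0.5):
    print(f"barrier width {width_nm} nm -> T = {transmission(1.0, 2.0, width_nm):.2e}")
```

In this toy calculation, halving the barrier width raises the transmission probability by roughly two orders of magnitude; that exponential sensitivity to barrier thickness is the same effect behind the transistor leakage discussed in the next section.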

Running your digital world

Modern electronics exist in a delicate balance with quantum tunneling. At the heart of today’s microprocessors are advanced transistors, which depend on the quantum ability of electrons to traverse ultra-thin insulating barriers. This tunneling enables transistors to switch on and off at remarkable speeds while using minimal energy, supporting the drive for faster, more energy-efficient devices. As technology advances and the insulating layers within transistors are made thinner to fit more components onto a single chip, the probability of electrons tunneling through these barriers inevitably increases. This leads to unwanted leakage currents, which can generate excess heat and disrupt circuit performance. Such leakage is a major challenge, setting hard physical limits on how much further Moore’s law—the trend of doubling transistor density—can be extended.

Yet, the same quantum effect that poses challenges in mainstream electronics is ingeniously exploited in specialized components. Tunnel diodes, for example, are engineered with extremely thin junctions that encourage electrons to quantum tunnel from one side to the other. This property allows tunnel diodes to switch at incredibly high speeds, making them invaluable for high-frequency circuits and telecommunications technologies where rapid response times are essential.

Quantum tunneling is also fundamental to how data is stored in non-volatile memory devices such as flash drives and solid-state drives (SSDs). In these devices, information is retained by manipulating electrons onto or off a “floating gate,” separated from the rest of the circuit by a thin oxide barrier. When writing or erasing data, electrons tunnel through this barrier, and once in place, they remain trapped, even if the device is disconnected from power. This is why your photos, documents, and other files remain safely stored on a USB stick or SSD long after you unplug them.

In summary, quantum tunneling is both a challenge and a tool in modern electronics. Engineers must constantly innovate to suppress unwanted tunneling in ever-smaller transistors, while simultaneously designing components that rely on controlled tunneling for speed, efficiency, and reliable data storage. This duality underscores how quantum mechanics is not merely an abstract scientific theory, but a practical force shaping the infrastructure of everyday digital life.

Powering stars, chips, and qubits

On a cosmic scale, quantum tunneling is fundamental to the process by which stars, including the Sun, emit light. It facilitates the fusion of protons within stellar cores by enabling them to overcome their mutual electrostatic repulsion, thus allowing nuclear fusion to occur at temperatures lower than those required in a strictly classical context. The existence of life on Earth relies on this mechanism, as it powers the energy output of stars that sustain our planet. Insights into tunneling continue to inform research efforts aimed at developing fusion reactors, where analogous physical principles must be managed under controlled conditions rather than governed by stellar gravity.

In superconducting circuits, which comprise materials capable of conducting electric current without resistance, pairs of electrons known as Cooper pairs tunnel through thin insulating barriers called Josephson junctions. When cooled to near absolute zero, these systems enable billions of paired electrons to behave collectively as a single quantum entity. This phenomenon has resulted in devices with exceptional sensitivity for measuring voltage and magnetic fields. Additionally, Josephson junctions play a central role in the architecture of superconducting qubits, where precision control of tunneling between quantum states enables reliable quantum logic operations.

The Nobel Prize in Physics 2025 was awarded to John Clarke, Michael H. Devoret, and John M. Martinis for their pioneering work in designing a macroscopic system utilizing a Josephson junction. The system was composed of two superconductors separated by an ultra-thin oxide layer, only a few nanometers thick. This layer permitted electron tunneling, and the observed discrete energy levels were in complete conformity with quantum mechanical predictions, a notable accomplishment from both experimental and theoretical standpoints [3].

A feature, a bug, and a design principle

Imagine a world where the chemical foundations of life and technology remain a mystery. Without quantum mechanics, our understanding of chemical bonds would be impossibly incomplete, the very medicines that save lives daily could never be designed, and the machines and electronics we rely on in our daily lives would not be possible.

Quantum tunneling stands as a striking testament that quantum phenomena are not mere scientific oddities; they are the bedrock of modern innovation. The same quantum effect that challenges engineers by causing troublesome current leaks in ever-smaller transistors is deliberately harnessed for breakthroughs: non-volatile memory, lightning-fast diodes, atomic-resolution microscopes, and the frontier of quantum computing all depend on it.

Every second, billions of electrons tunnel invisibly within the technology that surrounds you, their quantum behaviour silently orchestrating our digital universe. Far from being an abstract theory, quantum mechanics is the invisible engine driving your phone, your computer, your lasers, and LEDs—the essential infrastructure of twenty-first century life. Our entire technological existence pivots on the strange but real phenomena of the quantum world, challenging us to see science not as distant or esoteric, but as the very substance of our everyday reality.

Transforming Data into Actionable Insights through Design

Introduction

At the age of fifteen, I secured a summer position at a furniture factory. To get the job, I expressed my interest in technology and programming to the owner, specifically regarding their newly acquired CNC machine. To demonstrate my capability, I presented my academic record and was hired to support a senior operator with the machine.

That summer, I was struck by the ability to control complex machinery through programmed commands on its control board. The design and layout of the interface, as well as the tangible results yielded from my input, highlighted the intersection of technical expertise and thoughtful design. This experience sparked my curiosity about the origins and development of such systems and functionalities.

I have always maintained that design is fundamentally about clarity: how systems make sense and elicit meaningful responses. It involves translating intricate, technical concepts into experiences that are intuitive and accessible. This perspective has guided my approach throughout my career, whether developing an AI-powered dashboard for Air Canada, creating an inclusive quoting tool for TD Insurance, or designing online public services for Ontario.

The central challenge remains consistent: achieving transparency and trust in complex environments. Effective design bridges the gap between people and systems, supporting purposeful engagement.

My observational nature drives me to understand how systems operate, how decisions are reached, and how individuals navigate complexity. This curiosity informs my design methodology, which begins by analyzing the foundational elements (people, processes, data, and technology) that must integrate seamlessly to deliver a cohesive experience.

To me, design is not merely an aesthetic layer; it serves as the essential framework that provides structure, clarity, and empathy within multifaceted systems. Designing from this perspective, I prioritize not only usability but also alignment across stakeholders and components.

My core design strengths

Throughout my career, I have found that my most effective work comes from applying a set of foundational strengths to every project. These strengths consistently guide my approach and ensure each solution is thoughtful, impactful, and built for real-world complexity.

Systems Thinking: I make it a priority to look beyond surface-level interfaces. My approach involves examining how data, people, and technology interact and influence each other within a system. By doing so, I can design solutions that are not only visually appealing but also deeply integrated and sustainable across the entire ecosystem.

Human-Centred Design: Every design decision I make is grounded in observation and empathy. I focus on the user’s experience, prioritizing how it feels to engage with the product or service. My aim is to create solutions that resonate with individuals on a practical and emotional level.

Accessibility & Inclusion: Designing for everyone is a fundamental principle for me. I strive to ensure that the experiences I create are not just compliant with accessibility standards, but are genuinely usable and fair for all users. Inclusion is woven into the fabric of my process, shaping outcomes that reflect the diversity of people who will interact with them.

Storytelling & Visualization: I leverage visual storytelling to simplify and clarify complex ideas. Using visuals, I help teams and stakeholders see both what we are building and why it matters. This approach fosters understanding and alignment, making the design process transparent and purposeful.

Facilitation & Collaboration: I believe that the best insights and solutions emerge when diverse voices contribute to the process. By facilitating collaboration, I encourage open dialogue and collective problem-solving, ensuring that outcomes are shaped by a broad range of perspectives and expertise.

If I had to distill all these strengths into a single guiding principle, it would be this: “I design to understand, not just to create.”

My design approach: a cyclical process

Design, for me, is less of a straight line and more of a cycle, a continuous rhythm of curiosity, synthesis, and iteration. This process shapes how I approach every project, ensuring that each step builds upon the previous insights and discoveries.

1. Understand the System: I begin by mapping the entire ecosystem, considering all the people involved, their goals, the relevant data, and any constraints. This foundational understanding allows me to see how different elements interact and influence each other.

2. Observe the Experience: Next, I dedicate time to watch, listen, and learn how people actually engage with the system. Through observation and empathy, I uncover genuine behaviours and needs that may not be immediately apparent.

3. Synthesize & Prioritize: I then translate my findings into clear opportunities and actionable design principles. This synthesis helps to focus efforts on what matters most, guiding the team toward solutions that address real challenges.

4. Visualize the Future: Prototyping and iteration are central to my approach. I work to make complexity feel simple and trustworthy, refining concepts until the design communicates clarity and confidence.

5. Deliver & Educate: Finally, I collaborate with developers, stakeholders, and accessibility teams to bring the vision to life. I also focus on making the solution scalable, ensuring that the impact and understanding extend as the project grows.

Good design isn’t just creative; it’s disciplined, methodical, and deeply human.

Projects that demonstrate impact

Transforming operations at Air Canada

At Air Canada, I was responsible for designing AI dashboards that transformed predictive data into clear, actionable insights. These dashboards provided operations teams with the tools to act quickly and effectively, resulting in a 25% reduction in delay response time. This project highlighted the value of turning complex data into meaningful information that drives real-world improvements.

Advancing accessibility at TD Insurance

During my time at TD Insurance, I led an accessibility-first redesign of the Auto and Travel Quoter. My approach was centred on ensuring that the solution met the rigorous standards of WCAG 2.1 AA compliance. The redesign not only made the product fully accessible, but also drove an 18% increase in conversions. This experience reinforced the importance of designing for everyone and demonstrated how accessibility can be a catalyst for business growth.

Simplifying government services for Ontarians

With the Ontario Ministry of Transportation, I took on the challenge of redesigning a complex government service. My focus was on simplifying the process for citizens, making it easier and more intuitive to use. The result was a 40% reduction in form completion time, making government interactions smoother and more efficient for the people of Ontario.

Clarity as a catalyst

What stands out to me about these projects is that each one demonstrates a universal truth: clarity scales. When people have a clear understanding of what they are doing and why, efficiency, trust, and accessibility naturally follow. These outcomes prove that good design is not just about aesthetics; it’s about making information actionable and understandable, leading to measurable impact.

Reflection

The best design doesn’t add more; it removes confusion. It connects people, systems, and intent, turning complexity into clarity.

If your organization is wrestling with complexity, whether that’s data, accessibility, or AI, that’s exactly where design can make the biggest difference.

At Mimico Design House, we specialize in helping teams turn that complexity into clarity, mapping systems, simplifying experiences, and designing interfaces that people actually understand and trust.

Through a combination of human-centered design, systems thinking, and accessibility expertise, I work with organizations to bridge the gap between business strategy and user experience, transforming friction points into moments of understanding.

If your team is facing challenges with alignment, usability, or data-driven decision-making, I’d love to explore how we can help.

You can connect with me directly on LinkedIn or visit mimicodesignhouse.com to learn more about how we help organizations design systems people believe in.

Atom Loss: A Bottleneck in Quantum Computing

It was believed that a reliable quantum computer running indefinitely was a decade or more away. With these new advancements in mitigating atom loss, quantum computers running indefinitely and producing reliable results are only a few years away.

Introduction

Until recently, quantum computers have faced a significant obstacle known as ‘atom loss’, which has limited their advancement and ability to operate for long durations. At the heart of these systems are quantum bits, or qubits, which represent information in a quantum state, allowing them to be in the state 0, 1, or both simultaneously, thanks to superposition. Qubits are realized in physical systems such as atoms, ions, and superconducting circuits, and are engineered through precise manipulation and measurement of quantum mechanical properties.
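As a brief formal aside (standard textbook notation, not something spelled out in the article), a qubit’s state can be written as a weighted superposition of the two basis states:

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^{2} + |\beta|^{2} = 1,
\]

where a measurement returns 0 with probability |α|² and 1 with probability |β|². Losing the atom that carries this state removes both amplitudes at once, which is precisely the loss of information described in the next section.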

Historically, this atom loss phenomenon restricted quantum computers to performing computations for only a few milliseconds. Even the most sophisticated machines struggled to operate beyond a few seconds. However, recent breakthroughs by researchers at Sandia National Laboratories and Harvard University have changed this landscape dramatically. At Harvard, researchers built a quantum computer that sustained operations for over two hours [1], a substantial improvement over previous limitations. This advancement has led scientists to believe they are on the verge of enabling quantum computers to run continuously, potentially without time constraints.

What causes atom loss?

Atom loss presents a significant challenge in quantum computing, as it results in the loss of the fundamental unit of information – the qubit – along with any data it contains. During quantum computations, qubits may be lost from the system due to factors such as noise and temperature fluctuations. This phenomenon can lead to information degradation and eventual system failure. To maintain qubit stability and prevent atom loss, a stringent set of physical, environmental, and engineering conditions must be satisfied.

Environmental fluctuations

Maintaining the integrity of qubits in a quantum computing system is heavily dependent on shielding them from various environmental disturbances. Qubits are highly sensitive to noise, electromagnetic fields, and stray particles, any of which can interfere with their quantum coherence. Quantum coherence describes the ability of a qubit to remain in a stable superposition state over time; the duration of this coherence directly affects how long a qubit can function without experiencing errors.

One fundamental requirement for preserving quantum coherence is the maintenance of cryogenic environments. Qubits must be kept at temperatures near absolute zero, which is essential for eliminating thermal noise and fostering the quantum behaviour necessary for reliable operations. Even slight fluctuations in temperature or the presence of external electromagnetic influences can cause the delicate quantum state of a qubit to degrade or flip unpredictably, leading to information loss and system errors [2].

These stringent environmental controls are critical for ensuring that qubits remain stable and effective throughout quantum computations, highlighting the importance of addressing environmental fluctuations as a key challenge in quantum computing.

Trap imperfections

Neutral atom processors have become a prominent platform for achieving large-scale, fault-tolerant quantum computing [3]. This approach enables qubits to be encoded in states that possess exceptionally long coherence times, often extending up to tens of seconds. The extended coherence time is crucial for maintaining quantum information over prolonged computations, which is essential for complex and reliable quantum operations.

The operation of neutral atom processors relies on the use of optical tweezer arrays. These arrays are dynamically configured, allowing qubits to be trapped in arbitrary geometries and enabling the system to scale to tens of thousands of qubits. The flexibility in configuration and scalability makes neutral atom processors especially suited for advancing quantum computing technology beyond previous limitations.

Despite these advantages, neutral atom processors are not immune to challenges. Atom loss remains a significant issue, arising from several sources. Heating within the system can cause atoms to escape their traps, while collisions with background gas particles further contribute to atom loss. Additionally, during the excitation of an atom from one quantum state to another, such as the transition to a Rydberg state, anti-trapping can occur, leading to the loss of qubits from the processor array.

Readout errors

During the process of reading out quantum information, qubits may be displaced from their positions within the two-dimensional arrays. This readout operation, which involves imaging the qubits to determine their quantum state, can inadvertently lead to the loss of qubits from the processor array. Such atom loss poses a risk to the integrity and continuity of quantum computations.

To address this challenge, neutral atom processor arrays are typically designed with additional qubits that act as a buffer. These extra qubits ensure that, even when some atoms are lost during readout or other operations, enough qubits remain available for the system to continue performing calculations reliably.

Another approach to mitigating atom loss during readout is to slow down the imaging process. By reducing the speed of readout operations, the likelihood of displacing qubits can be minimized, thereby decreasing the rate at which atoms are lost from the array. However, this strategy comes with a trade-off: slowing down readout operations leads to reduced overall system efficiency, as calculations take longer to complete [4]. As a result, there is an inherent balance between maintaining qubit integrity and preserving the speed and efficiency of quantum computations.
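To see the shape of this trade-off, consider the deliberately simplified toy model below (my own illustration with made-up numbers, not figures from the cited work): every readout cycle independently removes each atom with some small probability, and the computation fails once the surviving population drops below the number of qubits it needs.

```python
import random

def cycles_until_failure(n_total, n_required, loss_prob, rng):
    """Count readout cycles performed until fewer than n_required atoms survive.
    Each cycle, every surviving atom is lost independently with probability loss_prob."""
    alive, cycles = n_total, 0
    while alive >= n_required:
        alive = sum(1 for _ in range(alive) if rng.random() > loss_prob)
        cycles += 1
    return cycles

rng = random.Random(42)
trials = 2000
# Made-up numbers: 120 atoms trapped, 100 needed for the computation to continue.
for loss_prob in (0.02, 0.01):  # faster (lossier) readout vs. slower (gentler) readout
    avg = sum(cycles_until_failure(120, 100, loss_prob, rng) for _ in range(trials)) / trials
    print(f"loss per readout = {loss_prob:.0%} -> ~{avg:.1f} cycles before failure")
```

In this model, halving the per-readout loss roughly doubles the number of cycles the buffer qubits can absorb, but in practice that gentler readout also means slower imaging, which is exactly the efficiency cost described above.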

Imperfect isolation

Maintaining perfect isolation of qubits from their environment is an immense challenge, primarily because it demands highly sophisticated and costly shielding methods. In practice, it is virtually impossible to completely shield quantum systems from external influences. As a result, stray electromagnetic signals, fluctuations in temperature, and mechanical vibrations can penetrate these defences and interact with quantum systems. Such interactions are detrimental, as they can disrupt the delicate balance required for quantum operations and ultimately lead to atom loss within the processor array [5]. These environmental disturbances compromise the stability and coherence of qubits, posing a significant obstacle to the reliability and scalability of quantum computers.

Recent solutions and research

Multiple research teams are developing ways to reduce atom loss by detecting and correcting missing atoms in quantum systems, improving calculation reliability.

Researchers at Sandia National Laboratories, in collaboration with the University of New Mexico, have published a study demonstrating, for the first time, that qubit leakage errors in neutral atom platforms can be detected without compromising or altering computational outcomes [6]. The team achieved this by utilising the alternating states of entanglement and disentanglement among atoms within the system. In experiments where the atoms were disentangled, results showed substantial deviations compared to those observed during entanglement. This approach enabled the detection of the presence of adjacent atoms without direct observation, thereby preserving the integrity of the information contained within each atom.

Ancilla qubits are essential in quantum error correction and algorithms [7]. These extra qubits help with measurement and gate implementation, yet they do not store information from the main quantum state. By weakly entangling ancilla qubits with the physical qubits, it becomes possible for them to identify errors without disturbing the actual quantum data. Thanks to non-demolition measurements, errors can be detected while keeping the physical qubit’s state intact.
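To illustrate the general principle, the sketch below is a generic, textbook-style parity check written with plain NumPy; it is not the specific protocol used in the cited work. Two CNOT gates copy the joint parity of two data qubits onto an ancilla, so measuring only the ancilla flags an injected bit-flip error while the data qubits’ own superposition is left untouched.

```python
import numpy as np

def cnot(n_qubits, control, target):
    """Permutation matrix for a CNOT acting on an n-qubit register.
    Convention: qubit 0 is the most significant bit of the basis-state index."""
    dim = 2 ** n_qubits
    gate = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n_qubits - 1 - q)) & 1 for q in range(n_qubits)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n_qubits - 1 - q) for q, b in enumerate(bits))
        gate[j, i] = 1.0
    return gate

def x_on(n_qubits, target):
    """Pauli-X (bit flip) on a single qubit of the register."""
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    I = np.eye(2)
    op = np.array([[1.0]])
    for q in range(n_qubits):
        op = np.kron(op, X if q == target else I)
    return op

n = 3  # qubits 0 and 1 hold data, qubit 2 is the ancilla
state = np.zeros(2 ** n)
state[0b000] = state[0b110] = 1 / np.sqrt(2)  # data in (|00> + |11>)/sqrt(2), ancilla in |0>

state = x_on(n, 0) @ state     # hypothetical fault: a stray bit flip on data qubit 0

state = cnot(n, 0, 2) @ state  # copy the data qubits' parity onto the ancilla...
state = cnot(n, 1, 2) @ state  # ...without reading the data qubits themselves

# Probability that the ancilla reads 1, i.e. that odd parity (an error) is flagged.
p_flag = sum(abs(a) ** 2 for i, a in enumerate(state) if i & 1)
print(f"P(error flagged) = {p_flag:.2f}")  # 1.00 with the injected flip, 0.00 without it
```

Because only the ancilla is measured, the data qubits’ joint state survives the check, which is the essence of the non-demolition measurement idea described above.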

A group of physicists from Harvard University have recently created the first quantum computer capable of continuous operation without needing to restart [1]. By inventing a technique to replenish qubits in optical tweezer arrays as they exit the system, the researchers managed to keep the computer running for more than two hours. Their setup contains 3,000 qubits and can inject up to 300,000 atoms each second into the array, compensating for any lost qubits. This approach enables the system to maintain quantum information, even as atoms are lost and replaced. According to the Harvard team, this innovation could pave the way for quantum systems that can function indefinitely.

Conclusion

It was previously believed that atom loss could seriously hinder the progress of quantum computing: atom loss and qubit leakage were errors severe enough to render calculations unreliable. With the advances introduced by researchers at Sandia National Laboratories, the University of New Mexico and Harvard University, and a host of other teams around the world, the revolutionary breakthroughs quantum computers could bring to scientific research, medicine and finance are closer than ever. It was believed that a reliable quantum computer running indefinitely was a decade or more away. With these new advancements in mitigating atom loss, quantum computers running indefinitely and producing reliable results are only a few years away.

[1] Harvard Researchers Develop First Ever Continuously Operating Quantum Computer

[2] Quantum Chips: The Brains Behind Quantum Computing

[3] Quantum Error Correction resilient against Atom Loss

[4] Novel Solutions For Continuously Loading Large Atomic Arrays

[5] Quantum Decoherence: The Barrier to Quantum Computing

[6] A breakthrough in Quantum Error Correction

[7] Ancilla Qubit