Exploring the Implications of Quantum Collapse on Computing

The measurement problem isn’t just theoretical; it directly affects the development of effective quantum computing … Ultimately, reducing errors and increasing algorithm success in quantum computing relies on a solid grasp of what happens during measurement.

Introduction

In quantum mechanics, superposition refers to a unique and intriguing phenomenon where quantum particles can exist in several states simultaneously. Without observation, a quantum system remains in superposition and continues to evolve following Schrödinger’s equation. However, when we measure the system, it collapses into a single, definite state.

This concept challenges our everyday experience with classical objects, which always appear to have specific, identifiable states. Numerous experiments have confirmed that atoms can occupy a superposition of two or more distinct energy levels at once [1]. If undisturbed, an atom stays in superposition until measurement forces its quantum state to collapse and settle into one outcome.

But what does it mean to measure or observe a quantum system? Why should a system capable of existing in countless simultaneous states reduce to just one when observed? These fundamental questions form the core of the “measurement problem” in quantum mechanics, a puzzle that has intrigued scientists since the field was first developed roughly a century ago.

The measurement problem

The concept of “measurement”, and what it does to the wave function, has long raised critical questions regarding both the scientific and philosophical underpinnings of quantum mechanics, with significant implications for our comprehension of reality. Numerous interpretations exist to explain the measurement problem, which continues to challenge efforts to establish a coherent and reliable account of the nature of reality. Despite over a century of advancement in quantum mechanics, definitive consensus remains elusive concerning its most fundamental phenomena, including superposition and entanglement.

Quantum mechanics dictates that a quantum state evolves according to two distinct processes: if undisturbed, it follows Schrödinger’s equation; when subjected to measurement, the system yields a classical outcome, with probabilities determined by the Born rule. Measurement refers to any interaction extracting classical information from a quantum system probabilistically, without facilitating communication between remote systems [2]. This framework allows the measurement problem to be categorized into three principal issues:

  • Preferred basis problem – during measurement, outcomes consistently manifest within a particular set of states, although quantum states can, in theory, be described by infinitely many mathematical representations.
  • Non-observability of interference problem – observable interference effects arising from coherent superpositions are limited to microscopic scales.
  • Outcomes problem – measurements invariably produce a single, definitive result rather than a superposition of possibilities. The mechanism behind this selection and its implications for observing superposed outcomes remain unclear.

Addressing any one of these challenges does not fully resolve the others, thereby perpetuating the complexities inherent in the measurement problem.
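
To make the two processes concrete, here is a minimal NumPy sketch (my own illustration, not drawn from the article or its references): a unitary gate plays the role of Schrödinger evolution, and measurement samples one classical outcome with Born-rule probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Process 1: undisturbed evolution is unitary (Schrodinger's equation).
state = np.array([1.0, 0.0], dtype=complex)                  # qubit in |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
state = H @ state                                            # equal superposition

# Process 2: measurement yields one classical outcome; the Born rule
# gives each outcome's probability as the squared magnitude of its amplitude.
probs = np.abs(state) ** 2             # [0.5, 0.5]
outcome = rng.choice([0, 1], p=probs)  # a single, definite result
print(probs, outcome)
```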

Wave function collapse

The superposition of an atom across all possible states is characterized by a wave function, which assigns a probability amplitude, and hence a probability, to every quantum state [3]. This function describes how an electron within an atomic cloud may occupy various positions with corresponding probabilities, and similarly how a qubit in a quantum computer can be in both states 0 and 1 simultaneously.

In the absence of observation, the system evolves continuously, maintaining the full spectrum of probabilities. Measurement, however, results in a distinct outcome; the act of measurement compels the selection of a single result from myriad possibilities, and the alternative possibilities vanish. As formalized by John von Neumann in 1932, quantum theory reliably predicts the statistical distribution of results over repeated trials, though it remains impossible to forecast the precise outcome of any individual measurement.
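
Von Neumann’s point is easy to demonstrate in simulation. The toy NumPy sketch below (my illustration, with an arbitrarily chosen 70/30 superposition) shows that no individual outcome can be forecast, yet the frequencies over many repeated trials match the predicted statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

# An uneven superposition: amplitudes chosen so P(0) = 0.7 and P(1) = 0.3.
psi = np.array([np.sqrt(0.7), np.sqrt(0.3)], dtype=complex)
probs = np.abs(psi) ** 2

# Each trial is unpredictable, but the distribution over trials is not.
samples = rng.choice([0, 1], p=probs, size=10_000)
print(samples[:10])    # an irregular run of 0s and 1s
print(samples.mean())  # close to 0.3, as quantum theory predicts
```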

Wave function collapse underscores the inherent randomness in the determination of outcomes, akin to nature employing chance. Albert Einstein famously critiqued this perspective, suggesting it implied that “God is playing dice” with the universe. Despite its counterintuitive nature, collapse is essential for translating the stochasticity of superposition into the observed singular outcome, with the result determined by the probabilities encoded within the wave function.

Conclusion

Wave function collapse plays a key role in quantum mechanics, linking the quantum and classical worlds. This phenomenon lets us measure things like an electron’s position and read out qubits in quantum computers, provided coherence is preserved until the moment of measurement. Building dependable quantum computers largely depends on managing wave function collapse, aiming to prevent early collapses and errors while encouraging collapses that yield useful data.

The measurement problem isn’t just theoretical; it directly affects the development of effective quantum computing. Well-designed quantum algorithms steer a superposition of computational paths so that measurement collapses them into the desired outcomes. Wave function collapse determines whether qubits are measured as intended or accidentally disrupted by outside influences (decoherence). Ultimately, reducing errors and increasing algorithm success in quantum computing relies on a solid grasp of what happens during measurement.

The Quantum Realm: Our Connection to the Universe

At the quantum scale, the universe manifests as a field of infinite possibilities, where the electrons within our atoms move in clouds of probability, always shifting. Consequently, we, as humans composed of countless atoms, are an inseparable part of the universe’s ever-changing nature, and our problems, at the quantum level, do not really exist.

Introduction 

When we close our eyes and place our hand on our forehead, we perceive the firmness of our hand and the gentle warmth of our skin. This physical sensation, the apparent solidity and presence of our body, seems tangible and reassuring. However, at the most fundamental level, our bodies are composed almost entirely of empty space. Beneath the surface of our bones, tissues, and cells, we find that our physical form is constructed from atoms, which themselves are predominantly made up of empty space, held together by the invisible forces of electromagnetism. The idea that we are, in essence, built from empty space can feel unsettling, yet it is central to our understanding of quantum mechanics.   

If we imagine an atom, and picture a single proton as a grain of sand placed at the centre of a football stadium, the nearest electron would be found somewhere in the outer bleachers, approximately 90 metres away. The vast expanse between the proton and the electron is filled with nothing but empty space [1]. The electrons themselves do not orbit the nucleus like tiny marbles following a fixed path. Instead, they ripple through space in a cloud-like manner, appearing in one location at one moment, and in another the next. Their movement is not governed by certainty, but by the probability clouds that define their position and momentum.    

The Universe Is Impermanent

Everything in the universe is in a state of constant motion. Objects such as chairs and tables may appear completely motionless to our eyes, yet at the quantum level, this sense of stillness is an illusion. Even as we sleep and perceive ourselves to be at rest, the atoms that make up our bodies are ceaselessly moving and vibrating. This underlying activity is dictated by the principles of quantum mechanics, which reveal an intricate and dynamic world beneath the surface of everyday experience.

Werner Heisenberg’s uncertainty principle states that it is impossible to simultaneously know both the precise position and the exact momentum of any object [2]. The more accurately we measure one, the less certain we become of the other. This fundamental limit means that no object can ever be fixed in a single, definite spot while remaining absolutely still. To do so would violate the laws of quantum physics, which require all matter to retain a degree of movement and uncertainty.

Consider a ball placed in a bowl and cooled until it appears perfectly still at the bottom. According to the uncertainty principle, the ball can never truly be at rest. It will always exhibit a subtle vibration, as restricting its position too precisely leads to uncertainty in its momentum. The energy of this perpetual motion is known as the ball’s zero-point energy.
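
For readers who want the formulas behind this example, the standard textbook statements (assumed here, not spelled out in the text above) are Heisenberg’s relation and the zero-point energy of a harmonic oscillator, the idealized model of the ball in the bowl:

```latex
% Heisenberg's uncertainty relation: the position uncertainty \Delta x and
% the momentum uncertainty \Delta p cannot both be made arbitrarily small.
\Delta x \, \Delta p \ge \frac{\hbar}{2}

% Zero-point energy of a harmonic oscillator with angular frequency \omega:
% even in its lowest possible state, the "ball in the bowl" keeps this energy.
E_0 = \frac{1}{2} \hbar \omega
```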

A universe where everything is perfectly still would not permit life as we know it. Nothing in the cosmos is permanent; particles continuously move, shift, and even appear and disappear. Remarkably, quantum theory predicts that even the vacuum of space is not empty but is filled with modes of vibration possessing zero-point energy [3]. This means that space itself is permeated by an endless and restless sea of energy, where particles are constantly popping in and out of existence, reflecting the ever-changing nature of reality.  

Quantum Mechanics and the Foundations of Consciousness 

At the quantum level, the behaviour of particles is defined by several extraordinary phenomena, including superposition, entanglement, coherence, and the observer effect. In the phenomenon known as superposition, particles can exist in multiple states at the same time. These particles remain in superposition until an act of observation occurs, causing their wave functions to collapse into a single, definite outcome. When two particles interact and become entangled, their properties, such as spin, polarization, and momentum, become fundamentally inseparable. Measurement of one entangled particle instantly determines the state of its partner, regardless of the distance separating them. 
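
As a minimal sketch of these ideas (my own NumPy illustration of an idealized Bell pair, not a description of any particular experiment), the following shows that measuring an entangled pair only ever yields perfectly correlated results:

```python
import numpy as np

rng = np.random.default_rng(2)

# A Bell pair over the two-qubit basis |00>, |01>, |10>, |11>.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Born rule: only 00 and 11 can ever be observed, each with probability 1/2.
probs = np.abs(bell) ** 2
outcome = rng.choice(["00", "01", "10", "11"], p=probs)
print(outcome)  # always "00" or "11": one result fixes the other
```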

Humans are deeply entangled with the inner workings of the universe. Our thoughts, memories, and emotions are rooted in the quantum behaviours of the atoms that compose our bodies. Consciousness, on this view, is shaped and defined through quantum operations. It has even been hypothesized that the billions of neurons firing simultaneously in the human brain function through quantum entanglement, collectively giving rise to our conscious experience [4].

Stuart Hameroff and Roger Penrose, in their 1996 paper, argued that consciousness depends on coherent quantum processes within collections of microtubules found in brain neurons. At the lowest neurophysiological level, the cytoskeleton of neurons in the human brain is composed of protein networks, specifically neurofilaments and microtubules. These structures are essential for various transport processes within neurons [5] [6]. According to Hameroff and Penrose’s theoretical framework, tubulins in microtubules serve as the substrate for quantum processes.

Through their Orchestrated Objective Reduction (Orch OR) theory, Hameroff and Penrose proposed that the brain’s microtubules act as quantum computers, maintaining coherent quantum states that collapse in a process tied to the geometry of space-time and influenced by quantum gravity. In this framework, consciousness operates as a quantum wave function passing through the brain’s microtubules, with these collapses corresponding to the observer’s elementary acts of consciousness and embedding them directly into the fabric of the universe.

Conclusion 

Contemplating the foundations of our bodies and consciousness, it becomes apparent that quantum mechanics may govern much more than just the biological processes within us. While the Orch OR theory proposed by Hameroff and Penrose remains a topic of debate, it opens the door to the possibility that consciousness arises not solely from biological functions but also from quantum phenomena.

In quantum computing, the act of observation is inherently influential, collapsing a particle’s wave function into a single definite state. This raises a profound question: could quantum mechanics provide an explanation for our ability to perceive and realize different realities within our consciousness? Furthermore, could our observation of quantum states, which shape our consciousness, be the very mechanism that connects us to the universe in a holistic manner?

For me, the most meaningful way to think about this is that uncertainty and constant motion are central to how the universe operates at the quantum level. If our bodies and consciousness are subject to the laws of quantum physics, then our experiences of periods of darkness and despair, feelings of being stuck or hopeless, are never truly fixed states. Motion persists within our atoms and within our consciousness, regardless of our perceptions. The pressure we experience, the everyday stresses, and our emotions are all shaped by how we observe and interpret events. At the quantum level, nothing remains permanent; everything is in flux.

This perspective is not meant to diminish our existence as human beings. Rather, it highlights our intrinsic connection to the fabric of the universe. The universe does not operate with absolute certainty or permanence; it is defined by uncertainty, continual change, and movement. At the quantum scale, the universe manifests as a field of infinite possibilities, where the electrons within our atoms move in clouds of probability, always shifting. Consequently, we, as humans composed of countless atoms, are an inseparable part of the universe’s ever-changing nature, and our problems, at the quantum level, do not really exist.


Designing solutions that effectively meet user needs is the driving force behind my work. I also share practical insights on computing and human-centered design each week. I’d love to connect and discuss your design ideas or challenges; feel free to reach out to me today on LinkedIn or contact me at Mimico Design House.


Atom Loss: A Bottleneck in Quantum Computing

It was believed that a reliable quantum computer running indefinitely was a decade or more away. With these new advancements in mitigating atom loss, quantum computers running indefinitely and producing reliable results are only a few years away.

Introduction

Until recently, quantum computers faced a significant obstacle known as ‘atom loss’, which limited their advancement and ability to operate for long durations. At the heart of these systems are quantum bits, or qubits, which represent information in a quantum state, allowing them to be in the state 0, 1, or both simultaneously, thanks to superposition. Qubits are realized in carefully engineered quantum systems, such as atoms or subatomic particles, through precise manipulation and measurement of quantum mechanical properties.

Historically, this atom loss phenomenon restricted quantum computers to performing computations for only a few milliseconds. Even the most sophisticated machines struggled to operate beyond a few seconds. However, recent breakthroughs by Sandia National Laboratories and Harvard University researchers have changed this landscape dramatically. At Harvard, researchers have built a quantum computer that sustained operations for over two hours [1], a substantial improvement over previous limitations. This advancement has led scientists to believe they are on the verge of enabling quantum computers to run continuously, potentially without time constraints.

What causes atom loss?

Atom loss presents a significant challenge in quantum computing, as it results in the loss of the fundamental unit of information – the qubit – along with any data it contains. During quantum computations, qubits may be lost from the system due to factors such as noise and temperature fluctuations. This phenomenon can lead to information degradation and eventual system failure. To maintain qubit stability and prevent atom loss, a stringent set of physical, environmental, and engineering conditions must be satisfied.

Environmental fluctuations

Maintaining the integrity of qubits in a quantum computing system is heavily dependent on shielding them from various environmental disturbances. Qubits are highly sensitive to noise, electromagnetic fields, and stray particles, any of which can interfere with their quantum coherence. Quantum coherence describes the ability of a qubit to remain in a stable superposition state over time; the duration of this coherence directly affects how long a qubit can function without experiencing errors.

One fundamental requirement for preserving quantum coherence is the maintenance of cryogenic environments. Qubits must be kept at temperatures near absolute zero, which is essential for eliminating thermal noise and fostering the quantum behaviour necessary for reliable operations. Even slight fluctuations in temperature or the presence of external electromagnetic influences can cause the delicate quantum state of a qubit to degrade or flip unpredictably, leading to information loss and system errors [2].

These stringent environmental controls are critical for ensuring that qubits remain stable and effective throughout quantum computations, highlighting the importance of addressing environmental fluctuations as a key challenge in quantum computing.

Trap imperfections

Neutral atom processors have become a prominent platform for achieving large-scale, fault-tolerant quantum computing [3]. This approach enables qubits to be encoded in states that possess exceptionally long coherence times, often extending up to tens of seconds. The extended coherence time is crucial for maintaining quantum information over prolonged computations, which is essential for complex and reliable quantum operations.

The operation of neutral atom processors relies on the use of optical tweezer arrays. These arrays are dynamically configured, allowing qubits to be trapped in arbitrary geometries and enabling the system to scale to tens of thousands of qubits. The flexibility in configuration and scalability makes neutral atom processors especially suited for advancing quantum computing technology beyond previous limitations.

Despite these advantages, neutral atom processors are not immune to challenges. Atom loss remains a significant issue, arising from several sources. Heating within the system can cause atoms to escape their traps, while collisions with background gas particles further contribute to atom loss. Additionally, during the excitation of an atom from one quantum state to another, such as the transition to a Rydberg state, anti-trapping can occur, leading to the loss of qubits from the processor array.

Readout errors

During the process of reading out quantum information, qubits may be displaced from their positions within the two-dimensional arrays. This readout operation, which involves imaging the qubits to determine their quantum state, can inadvertently lead to the loss of qubits from the processor array. Such atom loss poses a risk to the integrity and continuity of quantum computations.

To address this challenge, neutral atom processor arrays are typically designed with additional qubits that act as a buffer. These extra qubits ensure that, even when some atoms are lost during readout or other operations, enough qubits remain available for the system to continue performing calculations reliably.

Another approach to mitigating atom loss during readout is to slow down the imaging process. By reducing the speed of readout operations, the likelihood of displacing qubits can be minimized, thereby decreasing the rate at which atoms are lost from the array. However, this strategy comes with a trade-off: slowing down readout operations leads to reduced overall system efficiency, as calculations take longer to complete [4]. As a result, there is an inherent balance between maintaining qubit integrity and preserving the speed and efficiency of quantum computations.

Imperfect isolation

Maintaining perfect isolation of qubits from their environment is an immense challenge, primarily because it demands highly sophisticated and costly shielding methods. In practice, it is virtually impossible to completely shield quantum systems from external influences. As a result, stray electromagnetic signals, fluctuations in temperature, and mechanical vibrations can penetrate these defences and interact with quantum systems. Such interactions are detrimental, as they can disrupt the delicate balance required for quantum operations and ultimately lead to atom loss within the processor array [5]. These environmental disturbances compromise the stability and coherence of qubits, posing a significant obstacle to the reliability and scalability of quantum computers.

Recent solutions and research

Multiple research teams are developing ways to reduce atom loss by detecting and correcting missing atoms in quantum systems, improving calculation reliability.

Researchers at Sandia National Laboratories, in collaboration with the University of New Mexico, have published a study demonstrating, for the first time, that qubit leakage errors in neutral atom platforms can be detected without compromising or altering computational outcomes [6]. The team achieved this by utilising the alternating states of entanglement and disentanglement among atoms within the system. In experiments where the atoms were disentangled, results showed substantial deviations compared to those observed during entanglement. This approach enabled the detection of the presence of adjacent atoms without direct observation, thereby preserving the integrity of the information contained within each atom.

Ancilla qubits are essential in quantum error correction and algorithms [7]. These extra qubits help with measurement and gate implementation, yet they do not store information from the main quantum state. By weakly entangling ancilla qubits with the physical qubits, it becomes possible for them to identify errors without disturbing the actual quantum data. Thanks to non-demolition measurements, errors can be detected while keeping the physical qubit’s state intact.
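
To make the ancilla idea concrete, here is a generic, textbook-style NumPy sketch of my own (not the Sandia team’s actual protocol): two CNOT gates copy the joint parity of two data qubits onto an ancilla, so measuring the ancilla reveals whether a bit-flip error occurred while leaving the data superposition untouched.

```python
import numpy as np

def cnot(control, target, n):
    """Matrix for a CNOT gate on an n-qubit register (qubit 0 is leftmost)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1       # flip the target when the control is 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1.0
    return U

# Two data qubits in the superposition 0.6|00> + 0.8|11>; ancilla starts in |0>.
data = np.array([0.6, 0.0, 0.0, 0.8])        # amplitudes over |00>..|11>
state = np.kron(data, np.array([1.0, 0.0]))  # append the ancilla: 3 qubits

# Copy each data qubit's value onto the ancilla (qubit index 2).
state = cnot(1, 2, 3) @ cnot(0, 2, 3) @ state

# The ancilla now carries the data parity: reading 1 would flag a bit-flip
# error. Here it reads 0 with certainty, and the data state is undisturbed.
p_error = sum(abs(state[i]) ** 2 for i in range(8) if i & 1)
print(p_error)  # 0.0
```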

A group of physicists from Harvard University have recently created the first quantum computer capable of continuous operation without needing to restart [1]. By inventing a technique to replenish qubits in optical tweezer arrays as they exit the system, the researchers managed to keep the computer running for more than two hours. Their setup contains 3,000 qubits and can inject up to 300,000 atoms each second into the array, compensating for any lost qubits. This approach enables the system to maintain quantum information, even as atoms are lost and replaced. According to the Harvard team, this innovation could pave the way for quantum systems that can function indefinitely.

Conclusion

It was previously believed that atom loss could seriously hinder the progress of quantum computing. Atom loss and qubit leakage were serious errors that could render calculations unreliable. With the advancements introduced by the researchers at Sandia National Laboratories, the University of New Mexico and Harvard University, and a host of other teams around the world, the revolutionary advancements quantum computers could bring to scientific research, medicine and finance are closer than ever. It was believed that a reliable quantum computer running indefinitely was a decade or more away. With these new advancements in mitigating atom loss, quantum computers running indefinitely and producing reliable results are only a few years away.

References

[1] Harvard Researchers Develop First Ever Continuously Operating Quantum Computer

[2] Quantum Chips: The Brains Behind Quantum Computing

[3] Quantum Error Correction resilient against Atom Loss

[4] Novel Solutions For Continuously Loading Large Atomic Arrays

[5] Quantum Decoherence: The Barrier to Quantum Computing

[6] A breakthrough in Quantum Error Correction

[7] Ancilla Qubit

Bringing Ideas to Life: My Journey as a Product Architect

My work is about helping clients and organizations bring their ideas to life, transforming understanding into development, and development into reality, with as little friction and as much functionality as possible.

Lately, I have been reflecting on what drew me, as a designer, to write about topics such as artificial intelligence and quantum computing. I have been fascinated by both topics and how they have transformed the way we view the world. Everything we see today in terms of advancements in AI and quantum computing started with an idea, brought to life through innovation and perseverance.

In AI, there was the idea that machine learning would transform the way we do business by leveraging large amounts of data to provide valuable insights, something that would not be easily attainable through human effort. In quantum computing, there was the idea that applying the way particles behave in the universe to computing would unlock a vast potential for computing capabilities and power, beyond what classical computers can achieve. So many other advancements and achievements in AI and quantum computing continue to be realized through the conception of ideas and the relentless pursuit of ways and methods to implement them.

Everything starts with an idea

Beyond AI and quantum computing, everything we see around us started with an idea, brought to life through continued and persistent effort to make it a reality. Every building we see, every product, every service and all material and immaterial things in our lives are the product of an idea.

As a designer and product architect, I also help make ideas a reality through persistent effort and the application of methodology that lays a roadmap for the implementation of those ideas. Similarly, AI and quantum computing are fields that are bringing novel and exciting concepts to life through the development and application of scientific methodology.

While thinking about all of this, I pondered how I would define my work and role as a designer. How would I describe my work, knowing that most of us use technology without thinking about the journey a product takes from idea to experience? What value do I bring to organizations that hire me to help them with their problems? In an age where products are incorporating ever more advanced and sophisticated technology, as is the case with AI and quantum computing, how does my work extend beyond simply developing designs and prototypes?

To answer these questions, I am drawn back to the fact that everything around us starts with an idea. As a designer, it is extremely rewarding to help make my clients’ ideas a reality while navigating the conceptual, technical and implementation challenges.

Making the invisible useful

I’ve been thinking a lot about the similarities between how we design physical spaces and how we design digital ones. Just as a building starts as an idea in an architect’s mind, so do the products that I work on and help a multitude of organizations bring to life. As a designer, I help lay the foundations for a product idea by thoroughly understanding the motivations and needs behind it, and what benefits and improvements implementing it would bring.

Buildings serve needs by providing housing for people or serving as places to work, and for businesses and organizations to operate. A well-designed building offers an effortless flow that draws people in and makes them want to stay. Similarly, great digital design allows for seamless navigation, creating an experience that feels natural and engaging. Before an architect devises plans and drawings for a building, they must first maintain a clear vision of the idea in their mind, understand the needs behind it and ensure that their designs and plans meet those needs.

From there, the idea and concept of the building in the architect’s mind are translated into plans and drawings. Those plans are drawn and shared with a builder, who in turn collaborates with the architect to bring them to life. Without the architect and their clear vision of the idea and concept behind the building, the building would not exist, at least not in the shape and form that the architect would have imagined. It would not properly serve the needs and bring about the benefits that accompanied the original idea.

Just like a building architect, as a product architect I must also understand the needs behind digital products to create experiences that truly serve the user. Through this process, I envision flows and interactions that will enable users to achieve their goals in the simplest and easiest way possible, reducing friction while also achieving the desired business value and benefit. Like an architect, I collaborate with members of technical teams so that the idea behind the product can be realized to its full potential through detailed roadmaps, designs and prototypes.

Figure 1. Architects are masters of the invisible made useful.

An architect must possess technical and creative skills that enable them to visualize the idea of a building. The same is true for me as a product architect. Without the ability to clearly articulate complex technical concepts through detailed designs and specifications while also applying a creative lens, product ideas would not be realized to their full potential.

In summary, how do I define my work? My work is about helping clients and organizations bring their ideas to life, transforming understanding into development, and development into reality, with as little friction and as much functionality as possible. I can help you and your organization achieve the same. Let me show you how.

Quantum Computing: Revolutionizing Industry and Science

We can imagine a world where quantum computers will be able to design powerful new drugs by simulating the behaviour of individual molecules, and optimize complex supply chains to help companies source the parts they need and assemble products in the most efficient way possible.

Introduction

Quantum computing is an entirely new dimension of computing that leverages the laws of quantum mechanics. Quantum computers apply superposition and entanglement at the universe’s smallest scales and coldest temperatures. They also adopt a multidisciplinary approach comprising computer science, physics and mathematics to enable scientists to solve complex problems.

While today’s quantum computers remain rudimentary and error-prone, they have the potential to provide significant performance gains, and dramatically increased computation speeds to perform complex computational tasks that can take classical computers years to complete. Numerous governments, universities and vendors around the world are investing heavily in harnessing quantum computing technology to achieve fault-tolerant and reliable systems.

In this article, I provide a detailed examination of the key concepts underlying quantum computers, and how they promise to open the potential for massive advancements in a variety of scientific and industrial applications.

Superposition and entanglement

Qubits are the most basic units of processing in quantum computers. Qubits rely on the use of particles such as electrons and photons, which can be placed in the state 0, the state 1, or any superposition in between. This ability of qubits to be in more than one state at a time is what gives quantum computers their processing power. However, it is the application of superposition and entanglement through interference to qubits that allows quantum computers to produce reliable outcomes.

To better understand superposition, we refer to the famous thought experiment involving a cat, as imagined by the physicist Erwin Schrödinger. Schrödinger’s experiment imagined a cat sealed in a box with a poison trap that can be triggered by a decaying radioactive atom. Since the decay of the radioactive atom is uncertain, at any given moment the cat can be considered to be in a superposition of states, both dead and alive [1]. It is only when someone opens the box and observes the cat that its state becomes definite, or “collapses” to being either dead or alive.

Superposition is difficult to explain through analogies; however, one can also imagine a coin tossed and spinning fast in the air. As long as the coin continues spinning, its state can be considered both heads and tails. It is only when the coin is stopped that one observes its state as either heads or tails.

Quantum theory also implies that particles can be linked with each other, such that when the state of one particle changes it will instantly impact the state of the other, regardless of the distance between them. This is what is referred to as entanglement, and it is what allows qubits to correlate their states with each other and thus scale their processing power exponentially.

In Schrödinger’s cat experiment, entanglement can be described as having several cats in the box that are entangled in a superposition of states such that all cats in the box are either dead or alive. When someone opens the box, their state collapses such that the cats are all observed to be dead, or all observed to be alive. Entanglement means that the particles are no longer independent of each other; a single shared state describes them all. This is how nature works at the atomic level.
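
The “several cats” picture corresponds to what physicists call a GHZ state. A short NumPy sketch (my illustration, not taken from the cited source) makes the perfect correlation visible:

```python
import numpy as np

rng = np.random.default_rng(3)

# Three "cats" as the GHZ state (|000> + |111>) / sqrt(2).
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

probs = np.abs(ghz) ** 2
outcome = rng.choice(8, p=probs)
print(format(outcome, "03b"))  # always "000" or "111": the cats always agree
```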

Interference

Quantum interference refers to a phenomenon where the probability amplitudes of quantum states combine, either constructively or destructively, to influence the likelihood of an outcome. In classical interference, physical waves such as sound or water can overlap such that they amplify or cancel each other out. Quantum interference is different in that it is based on the wave-like behaviour of particles such as electrons, photons and atoms [2].

In quantum theory, particles are described via wavefunctions, which contain complex-valued probability amplitudes. We can think of a particle travelling through two indistinguishable paths, such as the two slits in a barrier in the famous two-slit experiment. In this experiment, particles such as photons or electrons are fired one at a time at a wall with two narrow slits and a screen placed behind it. Each particle must pass through slit A, slit B or, in some sense, both. The expectation would be that particles would pass through one slit or the other, and that the screen would show two bright spots as the particles pass through.

Instead of observing two spots on the screen, a series of bright and dark fringes are observed – an interference pattern. The fringe pattern is characteristic of the behaviour of waves rather than particles, where the bright areas indicate wave amplitudes that amplified each other, while the dark ones are waves that canceled out. This behaviour can be described as [3]:

  • Constructive interference, where the wave amplitudes add up, thus increasing the probability of a particular outcome.
  • Destructive interference, where the amplitudes cancel each other out, thus reducing or eliminating the chance of an outcome.

What is fascinating about the fringe pattern observed is that it can appear even when particles are sent one at a time. Therefore, instead of interfering with each other, the particles are interfering with themselves, taking both paths simultaneously in superposition.
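
Self-interference also has a compact computational analogue. In the standard textbook sketch below (my NumPy illustration, not from the article), applying a Hadamard gate twice opens two paths to each outcome; the two paths to 1 carry opposite signs and cancel, while the paths to 0 add:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

state = np.array([1, 0], dtype=complex)  # start in |0>
state = H @ state  # open both paths: equal superposition of |0> and |1>
state = H @ state  # recombine the paths and let them interfere

print(np.round(np.abs(state) ** 2, 12))  # [1. 0.]: |1> cancelled, |0> certain
```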

Interference is what gives quantum computers their superiority over classical computers. It allows quantum systems to guide computations by enhancing the probability of correct answers while suppressing wrong ones. Once qubits are transformed and entangled, their probability amplitudes evolve through interference. All possible computations are performed simultaneously and are allowed to interact through entanglement.

A critical condition of interference is that the paths followed by qubits are indistinguishable, such that it is not possible to determine which path a qubit takes, even in principle. Any form of measurement collapses the wavefunction, thus destroying the superposition and possibility of interference. Interference underlies the power of quantum computing, and it remains a key component in unlocking the full potential of quantum technology.

Measurement

In the final stage of quantum computing, states collapse into classical outcomes upon measurement. These outcomes are still probabilistic, but their probabilities are fundamentally determined by whether the computational paths leading to them have interfered constructively or destructively.

A state whose computational paths have interfered destructively will have a probability close to 0. Conversely, a state whose computational paths have interfered constructively will have a significantly amplified likelihood.

Instead of measuring outcomes sequentially, quantum computers exploit the wave-like nature of qubits to allow all possible computational paths to co-exist and interfere. This creates a probabilistic landscape where the correct answers become the most likely outcomes.
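
A toy example of this landscape is a single Grover iteration on two qubits (my own sketch of the standard algorithm, not an example given in the article): the oracle marks one answer with a sign flip, and interference then funnels the entire probability onto it.

```python
import numpy as np

n = 4                                       # 2 qubits -> 4 computational paths
s = np.full(n, 1 / np.sqrt(n))              # uniform superposition (after H gates)

oracle = np.diag([1.0, 1.0, 1.0, -1.0])     # flips the sign of the marked |11>
diffusion = 2 * np.outer(s, s) - np.eye(n)  # "inversion about the mean"

state = diffusion @ oracle @ s
print(np.round(np.abs(state) ** 2, 12))     # [0. 0. 0. 1.]: |11> is certain
```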

Quantum bits (Qubits)

Classical computers process information using bits, which store information as 0s and 1s. Bits can be represented using physical objects such as bar magnets or switches placed in either a state of up or down. Bits can maintain their state for a long time, thus allowing them to represent stored information in a stable and long-lasting fashion. However, bits are limited in their ability to store information when compared to qubits. While bits can exist in either a state of 0 or 1, qubits can exist in a superposition of multiple states of 0, 1 or any state in between.

The superposition of qubits is what makes them superior to classical bits. It is possible to think of a qubit as an electron spinning in a magnetic field. The electron could be spinning with the field, known as the spin-up state, or against the field, known as the spin-down state. Suppose it is possible to change the direction of the electron’s spin using a pulse of energy such as a laser. If only half a pulse of laser energy is used and all external influences are isolated, then we can imagine the electron in superposition, where it is in all possible states at once [4].

Superposition increases the computational power of qubits exponentially with the number of qubits in a quantum computer. Whereas two classical bits can hold only one of four values (00, 01, 10 or 11) at any moment, two qubits can store a superposition of all four combinations of 0 and 1 simultaneously, three qubits can store eight combinations, and so on. Therefore, a quantum computer can work with 2^N amplitudes at once, where N is the number of qubits.
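
The scaling is easy to verify directly; this small NumPy sketch (my illustration) builds a three-qubit register out of tensor products and counts its amplitudes.

```python
import numpy as np

qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)  # one superposed qubit

state = qubit
for _ in range(2):                  # tensor in two more qubits
    state = np.kron(state, qubit)

print(state.size)                   # 8 = 2**3 amplitudes for just 3 qubits
```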

Conclusion

Through exponential scaling, unique algorithms and the continued evolution of quantum hardware, quantum computing has the potential to revolutionize industries like cryptography, material science, pharmaceuticals and logistics. We can imagine a world where quantum computers will be able to design powerful new drugs by simulating the behaviour of individual molecules, and optimize complex supply chains to help companies source the parts they need and assemble products in the most efficient way possible. Other, more disruptive, applications could include computers that break the encryption that safeguards our private information on the internet.

Governments, companies and research labs are working tirelessly to harness the potential of this emerging technology. Quantum computing, combined with the capabilities and advancements of AI, may even accelerate progress toward artificial general intelligence (AGI). By enabling rapid data processing, parallel computation and work on extensive datasets, quantum computers could support the improved learning capabilities that AGI would demand. As quantum computers continue to rapidly evolve, it is essential for us to harness their potential in ways that further advance humanity’s future and well-being.

References

[1] Quantum Computing Explained

[2] What is quantum interference and how does it work?

[3] Quantum interference in Quantum Computing: 2025 Full Guide

[4] What is quantum computing? How it works and examples