Transforming Data into Actionable Insights through Design

Introduction

At the age of fifteen, I secured a summer position at a furniture factory. To get the job, I expressed my interest in technology and programming to the owner, specifically regarding their newly acquired CNC machine. To demonstrate my capability, I presented my academic record and was hired to support a senior operator with the machine.

That summer, I was struck by the ability to control complex machinery through programmed commands on its control board. The design and layout of the interface, as well as the tangible results yielded from my input, highlighted the intersection of technical expertise and thoughtful design. This experience sparked my curiosity about the origins and development of such systems and functionalities.

I have always maintained that design is fundamentally about clarity: how systems make sense and elicit meaningful responses. It involves translating intricate, technical concepts into experiences that are intuitive and accessible. This perspective has guided my approach throughout my career, whether developing an AI-powered dashboard for Air Canada, creating an inclusive quoting tool for TD Insurance, or designing online public services for Ontario.

The central challenge remains consistent: achieving transparency and trust in complex environments. Effective design bridges the gap between people and systems, supporting purposeful engagement.

My observational nature drives me to understand how systems operate, how decisions are reached, and how individuals navigate complexity. This curiosity informs my design methodology, which begins by analyzing the foundational elements (people, processes, data, and technology) that must integrate seamlessly to deliver a cohesive experience.

To me, design is not merely an aesthetic layer; it serves as the essential framework that provides structure, clarity, and empathy within multifaceted systems. Designing from this perspective, I prioritize not only usability but also alignment across stakeholders and components.

My core design strengths

Throughout my career, I have found that my most effective work comes from applying a set of foundational strengths to every project. These strengths consistently guide my approach and ensure each solution is thoughtful, impactful, and built for real-world complexity.

Systems Thinking: I make it a priority to look beyond surface-level interfaces. My approach involves examining how data, people, and technology interact and influence each other within a system. By doing so, I can design solutions that are not only visually appealing but also deeply integrated and sustainable across the entire ecosystem.

Human-Centred Design: Every design decision I make is grounded in observation and empathy. I focus on the user’s experience, prioritizing how it feels to engage with the product or service. My aim is to create solutions that resonate with individuals on a practical and emotional level.

Accessibility & Inclusion: Designing for everyone is a fundamental principle for me. I strive to ensure that the experiences I create are not just compliant with accessibility standards, but are genuinely usable and fair for all users. Inclusion is woven into the fabric of my process, shaping outcomes that reflect the diversity of people who will interact with them.

Storytelling & Visualization: I leverage visual storytelling to simplify and clarify complex ideas. Using visuals, I help teams and stakeholders see both what we are building and why it matters. This approach fosters understanding and alignment, making the design process transparent and purposeful.

Facilitation & Collaboration: I believe that the best insights and solutions emerge when diverse voices contribute to the process. By facilitating collaboration, I encourage open dialogue and collective problem-solving, ensuring that outcomes are shaped by a broad range of perspectives and expertise.

If I had to distill all these strengths into a single guiding principle, it would be this: “I design to understand, not just to create.”

My design approach: a cyclical process

Design, for me, is less of a straight line and more of a cycle, a continuous rhythm of curiosity, synthesis, and iteration. This process shapes how I approach every project, ensuring that each step builds upon the previous insights and discoveries.

1. Understand the System: I begin by mapping the entire ecosystem, considering all the people involved, their goals, the relevant data, and any constraints. This foundational understanding allows me to see how different elements interact and influence each other.

2. Observe the Experience: Next, I dedicate time to watch, listen, and learn how people actually engage with the system. Through observation and empathy, I uncover genuine behaviours and needs that may not be immediately apparent.

3. Synthesize & Prioritize: I then translate my findings into clear opportunities and actionable design principles. This synthesis helps to focus efforts on what matters most, guiding the team toward solutions that address real challenges.

4. Visualize the Future: Prototyping and iteration are central to my approach. I work to make complexity feel simple and trustworthy, refining concepts until the design communicates clarity and confidence.

5. Deliver & Educate: Finally, I collaborate with developers, stakeholders, and accessibility teams to bring the vision to life. I also focus on making the solution scalable, ensuring that the impact and understanding extend as the project grows.

Good design isn’t just creative, it’s disciplined, methodical, and deeply human.

Projects that demonstrate impact

Transforming operations at Air Canada

At Air Canada, I was responsible for designing AI dashboards that transformed predictive data into clear, actionable insights. These dashboards provided operations teams with the tools to act quickly and effectively, which resulted in a 25% reduction in delay response time. This project highlighted the value of turning complex data into meaningful information that drives real-world improvements.

Advancing accessibility at TD Insurance

During my time at TD Insurance, I led an accessibility-first redesign of the Auto and Travel Quoter. My approach was centred on ensuring that the solution met the rigorous standards of WCAG 2.1 AA compliance. The redesign not only made the product fully accessible, but also drove an 18% increase in conversions. This experience reinforced the importance of designing for everyone and demonstrated how accessibility can be a catalyst for business growth.

Simplifying government services for Ontarians

With the Ontario Ministry of Transportation, I took on the challenge of redesigning a complex government service. My focus was on simplifying the process for citizens, making it easier and more intuitive to use. The result was a 40% reduction in form completion time, making government interactions smoother and more efficient for the people of Ontario.

Clarity as a catalyst

What stands out to me about these projects is that each one demonstrates a universal truth: clarity scales. When people have a clear understanding of what they are doing and why, efficiency, trust, and accessibility naturally follow. These outcomes prove that good design is not just about aesthetics, it’s about making information actionable and understandable, leading to measurable impact.

Reflection

The best design doesn’t add more, it removes confusion. It connects people, systems, and intent, turning complexity into clarity.

If your organization is wrestling with complexity, whether that’s data, accessibility, or AI, that’s exactly where design can make the biggest difference.

At Mimico Design House, we specialize in helping teams turn that complexity into clarity, mapping systems, simplifying experiences, and designing interfaces that people actually understand and trust.

Through a combination of human-centred design, systems thinking, and accessibility expertise, I work with organizations to bridge the gap between business strategy and user experience, transforming friction points into moments of understanding.

If your team is facing challenges with alignment, usability, or data-driven decision-making, I’d love to explore how we can help.

You can connect with me directly on LinkedIn or visit mimicodesignhouse.com to learn more about how we help organizations design systems people believe in.

Atom Loss: A Bottleneck in Quantum Computing

It was believed that a reliable quantum computer running indefinitely was a decade or more away. With these new advancements in mitigating atom loss, quantum computers running indefinitely and producing reliable results are only a few years away.

Introduction

Until recently, quantum computers faced a significant obstacle known as ‘atom loss’, which limited their advancement and ability to operate for long durations. At the heart of these systems are quantum bits, or qubits, which represent information in a quantum state, allowing them to be in the state 0, 1, or both simultaneously, thanks to superposition. Qubits are formed from subatomic particles and engineered through precise manipulation and measurement of quantum mechanical properties.

Historically, this atom loss phenomenon restricted quantum computers to performing computations for only a few milliseconds. Even the most sophisticated machines struggled to operate beyond a few seconds. However, recent breakthroughs by Sandia National Laboratories and Harvard University researchers have changed this landscape dramatically. At Harvard, researchers have built a quantum computer that could sustain operations for over two hours [1], a substantial improvement over previous limitations. This advancement has led scientists to believe they are on the verge of enabling quantum computers to run continuously, potentially without time constraints.

What causes atom loss?

Atom loss presents a significant challenge in quantum computing, as it results in the loss of the fundamental unit of information – the qubit – along with any data it contains. During quantum computations, qubits may be lost from the system due to factors such as noise and temperature fluctuations. This phenomenon can lead to information degradation and eventual system failure. To maintain qubit stability and prevent atom loss, a stringent set of physical, environmental, and engineering conditions must be satisfied.

Environmental fluctuations

Maintaining the integrity of qubits in a quantum computing system is heavily dependent on shielding them from various environmental disturbances. Qubits are highly sensitive to noise, electromagnetic fields, and stray particles, any of which can interfere with their quantum coherence. Quantum coherence describes the ability of a qubit to remain in a stable superposition state over time; the duration of this coherence directly affects how long a qubit can function without experiencing errors.

One fundamental requirement for preserving quantum coherence is the maintenance of cryogenic environments. Qubits must be kept at temperatures near absolute zero, which is essential for eliminating thermal noise and fostering the quantum behaviour necessary for reliable operations. Even slight fluctuations in temperature or the presence of external electromagnetic influences can cause the delicate quantum state of a qubit to degrade or flip unpredictably, leading to information loss and system errors [2].

These stringent environmental controls are critical for ensuring that qubits remain stable and effective throughout quantum computations, highlighting the importance of addressing environmental fluctuations as a key challenge in quantum computing.

Trap imperfections

Neutral atom processors have become a prominent platform for achieving large-scale, fault-tolerant quantum computing [3]. This approach enables qubits to be encoded in states that possess exceptionally long coherence times, often extending up to tens of seconds. The extended coherence time is crucial for maintaining quantum information over prolonged computations, which is essential for complex and reliable quantum operations.

The operation of neutral atom processors relies on the use of optical tweezer arrays. These arrays are dynamically configured, allowing qubits to be trapped in arbitrary geometries and enabling the system to scale to tens of thousands of qubits. The flexibility in configuration and scalability makes neutral atom processors especially suited for advancing quantum computing technology beyond previous limitations.

Despite these advantages, neutral atom processors are not immune to challenges. Atom loss remains a significant issue, arising from several sources. Heating within the system can cause atoms to escape their traps, while collisions with background gas particles further contribute to atom loss. Additionally, during the excitation of an atom from one quantum state to another, such as the transition to a Rydberg state, anti-trapping can occur, leading to the loss of qubits from the processor array.

Readout errors

During the process of reading out quantum information, qubits may be displaced from their positions within the two-dimensional arrays. This readout operation, which involves imaging the qubits to determine their quantum state, can inadvertently lead to the loss of qubits from the processor array. Such atom loss poses a risk to the integrity and continuity of quantum computations.

To address this challenge, neutral atom processor arrays are typically designed with additional qubits that act as a buffer. These extra qubits ensure that, even when some atoms are lost during readout or other operations, enough qubits remain available for the system to continue performing calculations reliably.

Another approach to mitigating atom loss during readout is to slow down the imaging process. By reducing the speed of readout operations, the likelihood of displacing qubits can be minimized, thereby decreasing the rate at which atoms are lost from the array. However, this strategy comes with a trade-off: slowing down readout operations leads to reduced overall system efficiency, as calculations take longer to complete [4]. As a result, there is an inherent balance between maintaining qubit integrity and preserving the speed and efficiency of quantum computations.

Imperfect isolation

Maintaining perfect isolation of qubits from their environment is an immense challenge, primarily because it demands highly sophisticated and costly shielding methods. In practice, it is virtually impossible to completely shield quantum systems from external influences. As a result, stray electromagnetic signals, fluctuations in temperature, and mechanical vibrations can penetrate these defences and interact with quantum systems. Such interactions are detrimental, as they can disrupt the delicate balance required for quantum operations and ultimately lead to atom loss within the processor array [5]. These environmental disturbances compromise the stability and coherence of qubits, posing a significant obstacle to the reliability and scalability of quantum computers.

Recent solutions and research

Multiple research teams are developing ways to mitigate atom loss by detecting and compensating for missing atoms in quantum systems, improving the reliability of calculations.

Researchers at Sandia National Laboratories, in collaboration with the University of New Mexico, have published a study demonstrating, for the first time, that qubit leakage errors in neutral atom platforms can be detected without compromising or altering computational outcomes [6]. The team achieved this by utilising the alternating states of entanglement and disentanglement among atoms within the system. In experiments where the atoms were disentangled, results showed substantial deviations compared to those observed during entanglement. This approach enabled the detection of the presence of adjacent atoms without direct observation, thereby preserving the integrity of the information contained within each atom.

Ancilla qubits are essential in quantum error correction and algorithms [7]. These extra qubits help with measurement and gate implementation, yet they do not store information from the main quantum state. By weakly entangling ancilla qubits with the physical qubits, it becomes possible for them to identify errors without disturbing the actual quantum data. Thanks to non-demolition measurements, errors can be detected while keeping the physical qubit’s state intact.
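To make the idea concrete, here is a minimal Python sketch of ancilla-based error detection (my own illustration, not the specific scheme used in [6] or [7]): two data qubits hold a logical state a|00> + b|11>, and one ancilla qubit, entangled with them through CNOT gates, reveals whether a bit-flip error occurred without exposing the amplitudes a and b.

import numpy as np

# Illustrative sketch only: an ancilla qubit flags a bit-flip error on two
# data qubits holding a logical state a|00> + b|11>.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit-flip gate / error
P0 = np.diag([1, 0]).astype(complex)            # projector onto |0>
P1 = np.diag([0, 1]).astype(complex)            # projector onto |1>

def kron_all(ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def cnot(n, control, target):
    # CNOT acting on an n-qubit register (qubit 0 is the left-most factor).
    ops0 = [I] * n
    ops0[control] = P0
    ops1 = [I] * n
    ops1[control] = P1
    ops1[target] = X
    return kron_all(ops0) + kron_all(ops1)

a, b = 0.6, 0.8                                  # arbitrary logical amplitudes
logical = a * np.kron(zero, zero) + b * np.kron(one, one)   # a|00> + b|11>
state = np.kron(logical, zero)                   # append the ancilla in |0>

introduce_error = True                           # toggle a bit-flip on data qubit 0
if introduce_error:
    state = kron_all([X, I, I]) @ state

# Entangle the ancilla with both data qubits (a parity check), then read it out.
state = cnot(3, 0, 2) @ state
state = cnot(3, 1, 2) @ state

# Probability that the ancilla (last qubit) reads 1: sum |amplitude|^2 over the
# basis states whose last bit is 1.
p_error_flag = sum(abs(state[i]) ** 2 for i in range(8) if i % 2 == 1)
print(round(p_error_flag, 3))                    # 1.0 if an error occurred, 0.0 otherwise

Because the CNOTs copy only the parity of the data qubits onto the ancilla, the amplitudes a and b are left untouched, which is the spirit of a non-demolition measurement.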

A group of physicists from Harvard University have recently created the first quantum computer capable of continuous operation without needing to restart [1]. By inventing a technique to replenish qubits in optical tweezer arrays as they exit the system, the researchers managed to keep the computer running for more than two hours. Their setup contains 3,000 qubits and can inject up to 300,000 atoms each second into the array, compensating for any lost qubits. This approach enables the system to maintain quantum information, even as atoms are lost and replaced. According to the Harvard team, this innovation could pave the way for quantum systems that can function indefinitely.
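A rough back-of-envelope calculation (mine; the loss fraction is a made-up illustration, not a figure from [1]) shows why the refill rate matters: even if a sizeable share of the 3,000-qubit array were lost every second, a 300,000-atom-per-second reload capacity can comfortably keep the array full.

# Back-of-envelope check using the figures reported in [1]; the loss fraction
# below is hypothetical and chosen only to illustrate the headroom.
array_size = 3_000              # qubits in the Harvard array [1]
reload_rate = 300_000           # atoms that can be injected per second [1]
assumed_loss_per_second = 0.10  # hypothetical: 10% of the array lost each second

atoms_lost_per_second = array_size * assumed_loss_per_second   # 300 atoms
print(reload_rate / atoms_lost_per_second)                     # ~1000x headroom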

Conclusion

It was previously believed that atom loss could seriously hinder the progress of quantum computing. Atom loss and qubit leakage were serious errors that could render calculations unreliable. With the advancements introduced by the researchers at Sandia National Laboratories, the University of New Mexico and Harvard University, and a host of other teams around the world, the revolutionary advancements quantum computers could introduce in scientific research, medicine and finance are closer than ever. It was believed that a reliable quantum computer running indefinitely was a decade or more away. With these new advancements in mitigating atom loss, quantum computers running indefinitely and producing reliable results are only a few years away.

References

[1] Harvard Researchers Develop First Ever Continuously Operating Quantum Computer

[2] Quantum Chips: The Brains Behind Quantum Computing

[3] Quantum Error Correction resilient against Atom Loss

[4] Novel Solutions For Continuously Loading Large Atomic Arrays

[5] Quantum Decoherence: The Barrier to Quantum Computing

[6] A breakthrough in Quantum Error Correction

[7] Ancilla Qubit

Bringing Ideas to Life: My Journey as a Product Architect

My work is about helping clients and organizations bring their ideas to life, transforming understanding into development, and development into reality, with as little friction and as much functionality as possible.

Lately, I have been reflecting on what drew me, as a designer, to write about topics such as artificial intelligence and quantum computing. I have been fascinated with both topics and how they have transformed the way we view the world. Everything we see today in terms of advancements in AI and quantum computing started with an idea, brought to life through innovation and perseverance.

In AI, there was the idea that machine learning would transform the way we do business by leveraging large amounts of data to provide valuable insights, something that would not be easily attainable through human effort. In quantum computing, there was the idea that applying the way particles behave in the universe to computing would unlock a vast potential for computing capabilities and power, beyond what classical computers can achieve. So many other advancements and achievements in AI and quantum computing continue to be realized through the conception of ideas and the relentless pursuit of ways and methods to implement them.

Everything starts with an idea

Beyond AI and quantum computing, everything we see around us started with an idea, brought to life through continued and persistent effort to make it a reality. Every building we see, every product, every service and all material and immaterial things in our lives are the product of an idea.

As a designer and product architect, I also help make ideas a reality through persistent effort and the application of methodology that lays a roadmap for the implementation of those ideas. Similarly, AI and quantum computing are fields that are bringing novel and exciting concepts to life through the development and application of scientific methodology.

While thinking about all of this, I pondered how I would define my work and role as a designer. How would I describe my work, knowing that most of us use technology without thinking about the journey a product takes from idea to experience? What value do I bring to organizations that hire me to help them with their problems? In an age where products are incorporating ever more advanced and sophisticated technology, as is the case with AI and quantum computing, how does my work extend beyond simply developing designs and prototypes?

To answer these questions, I am drawn back to the fact that everything around us starts with an idea. As a designer, it is extremely rewarding to me to help make ideas for my clients a reality while navigating the conceptual, technical and implementation challenges.

Making the invisible useful

I’ve been thinking a lot about the similarities between how we design physical spaces and how we design digital ones. Just as a building starts as an idea in an architect’s mind, so do the products that I work on and help a multitude of organizations bring to life. As a designer, I help lay the foundations for a product idea by thoroughly understanding the motivations and needs behind it, and what benefits and improvements implementing it would bring.

Buildings serve needs by providing housing for people or serving as places to work, and for businesses and organizations to operate. A well-designed building offers an effortless flow that draws people in and makes them want to stay. Similarly, great digital design allows for seamless navigation, creating an experience that feels natural and engaging. Before an architect devises plans and drawings for a building, they must first maintain a clear vision of the idea in their mind, understand the needs behind it and ensure that their designs and plans meet those needs.

From there, the idea and concept of the building in the architect’s mind are translated into plans and drawings. Those plans are drawn and shared with a builder, who in turn collaborates with the architect to bring them to life. Without the architect and their clear vision of the idea and concept behind the building, the building would not exist, at least not in the shape and form that the architect would have imagined. It would not properly serve the needs and bring about the benefits that accompanied the original idea.

Just like a building architect, as a product architect I must also understand the needs behind digital products to create experiences that truly serve the user. Through this process, I envision flows and interactions that will enable users to achieve their goals in the simplest and easiest way possible, reducing friction while also achieving the desired business value and benefit. Like an architect, I collaborate with members of technical teams so that the idea behind the product can be realized to its full potential through detailed roadmaps, designs and prototypes.

Figure 1. Architects are masters of the invisible made useful.

An architect must possess technical and creative skills that enable them to visualize the idea of a building. The same is true for me as a product architect. Without the ability to clearly articulate complex technical concepts through detailed designs and specifications while also applying a creative lens, product ideas would not be realized to their full potential.

In summary, how do I define my work? My work is about helping clients and organizations bring their ideas to life, transforming understanding into development, and development into reality, with as little friction and as much functionality as possible. I can help you and your organization achieve the same. Let me show you how.

Quantum Computing: Revolutionizing Industry and Science

We can imagine a world where quantum computers will be able to design powerful new drugs by simulating the behaviour of individual molecules, and optimize complex supply chains to help companies source the parts they need and assemble products in the most efficient way possible.

Introduction

Quantum computing is an entirely new dimension of computing that leverages the laws of quantum mechanics. Quantum computers apply superposition and entanglement at the universe’s smallest scales and coldest temperatures. They also adopt a multidisciplinary approach comprising computer science, physics and mathematics to enable scientists to solve complex problems.

While today’s quantum computers remain rudimentary and error-prone, they have the potential to provide significant performance gains, and dramatically increased computation speeds to perform complex computational tasks that can take classical computers years to complete. Numerous governments, universities and vendors around the world are investing heavily in harnessing quantum computing technology to achieve fault-tolerant and reliable systems.

In this article, I provide a detailed examination of the key concepts underlying quantum computers, and how they promise to open the potential for massive advancements in a variety of scientific and industrial applications.

Superposition and entanglement

Qubits are the most basic units of processing in quantum computers. Qubits rely on the use of particles such as electrons and photons, which can be suspended in states of 0, 1 or any state in between. This ability of qubits to be in more than one state at a time is what gives quantum computers their processing power. However, it is the application of superposition and entanglement to qubits, through interference, that allows quantum computers to produce reliable outcomes.

To better understand superposition, we refer to the famous thought experiment involving a cat imagined by the physicist Erwin Schrödinger. Schrödinger’s experiment imagined a cat sealed in a box with a poison trap that can be triggered by a decaying radioactive atom. Since the decay of the radioactive atom is uncertain, at any given moment the cat can be considered to be in a superposition of states, both dead and alive at once [1]. It is only when someone opens the box and observes the cat that its state becomes definite, or “collapses”, to being either dead or alive.

Superposition is difficult to explain through analogies; however, it is also possible to imagine a coin tossed and spinning fast in the air. As long as the coin continues spinning, its state can be considered both heads and tails. It is only when the coin is stopped that one observes its state as either heads or tails.
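In more formal terms, the spinning coin corresponds to a qubit whose state is a weighted combination of 0 and 1. The short sketch below (my own illustration) shows how those weights translate into the odds of each measurement result.

import numpy as np

# A qubit is described by two complex amplitudes (alpha, beta); the chance of
# reading 0 or 1 is the squared magnitude of each amplitude (the Born rule).
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)    # the "spinning coin": equal weights
state = np.array([alpha, beta], dtype=complex)

probabilities = np.abs(state) ** 2
print(probabilities)                            # [0.5 0.5] -- a fair coin until observed

# "Stopping the coin" is a measurement: the state collapses to 0 or 1 at random.
outcome = np.random.choice([0, 1], p=probabilities)
print(outcome)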

Quantum theory also implies that particles can be linked with each other, such that when the state of one particle changes it will instantly impact the state of the other, regardless of the distance between them. This is what is referred to as entanglement, and it is what allows qubits to correlate their states with each other and thus scale their processing power exponentially.

In Schrödinger’s cat experiment, entanglement can be described as having several cats in the box that are entangled in a superposition of states, such that their fates are linked. When someone opens the box, their state collapses such that the cats are all observed to be dead or all observed to be alive. Entanglement means that two particles are always connected and never independent of each other. This is how nature works at the atomic level.
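A tiny numerical version of the “cats whose fates are linked” picture (my own sketch): two qubits prepared in an entangled Bell state only ever produce matching results when measured.

import numpy as np

# Bell state (|00> + |11>) / sqrt(2): only the "both 0" and "both 1" outcomes
# carry any probability, so the two qubits are perfectly correlated.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)              # amplitudes for |00> and |11>

probabilities = np.abs(bell) ** 2
for label, p in zip(["00", "01", "10", "11"], probabilities):
    print(label, round(p, 2))                   # 00 and 11 each 0.5; 01 and 10 never occur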

Interference

Quantum interference refers to a phenomenon where the probability amplitudes of quantum states combine, either constructively or destructively, to influence the likelihood of an outcome. In classical interference, physical waves such as sound or water can overlap such that they amplify or cancel each other out. Quantum interference is different in that it is based on the wave-like behaviour of particles such as electrons, photons and atoms [2].

In quantum theory, particles are described via wavefunctions, which contain complex-valued probability amplitudes. We can think of a particle going through two indistinguishable paths, such as two slits in a barrier, as in the famous two-slit experiment. In this experiment, particles such as photons or electrons are fired one at a time at a wall with two narrow slits and a screen placed behind it. Each particle must pass through slit A, slit B or a combination of both. The expectation would be that particles would pass through one slit or the other, and that the screen would show two bright spots as the particles pass through.

Instead of observing two spots on the screen, a series of bright and dark fringes are observed – an interference pattern. The fringe pattern is characteristic of the behaviour of waves rather than particles, where the bright areas indicate wave amplitudes that amplified each other, while the dark ones are waves that canceled out. This behaviour can be described as [3]:

  • Constructive interference, where the wave amplitudes add up, thus increasing the probability of a particular outcome.
  • Destructive interference, where the amplitudes cancel each other out, thus reducing or eliminating the chance of an outcome.

What is fascinating about the fringe pattern observed is that it can appear even when particles are sent one at a time. Therefore, instead of interfering with each other, particles are interfering with themselves, taking both paths simultaneously in superposition.
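A quick numerical version of the constructive and destructive cases above (my own illustration, with simplified path amplitudes for a single point on the screen): the likelihood of a particle landing there comes from squaring the sum of the path amplitudes, not from summing their individual probabilities.

amp_a = 0.5    # illustrative amplitude for the path through slit A
amp_b = 0.5    # illustrative amplitude for the path through slit B

bright_fringe = abs(amp_a + amp_b) ** 2               # in phase: 1.0
dark_fringe = abs(amp_a - amp_b) ** 2                 # out of phase: 0.0
no_interference = abs(amp_a) ** 2 + abs(amp_b) ** 2   # adding probabilities: 0.5

print(bright_fringe, dark_fringe, no_interference)

The bright fringe is twice as likely as the no-interference prediction, and the dark fringe vanishes entirely, which is exactly the pattern the screen records.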

Interference is what gives quantum computers their superiority over classical computers. It allows quantum systems to guide computations by enhancing the probability of correct answers while suppressing wrong ones. Once qubits are transformed and entangled, their probability amplitudes evolve through interference. All possible computations are performed simultaneously and are allowed to interact through entanglement.

A critical condition of interference is that the paths followed by qubits are indistinguishable, such that it is not possible to determine which path a qubit takes, even in principle. Any form of measurement collapses the wavefunction, thus destroying the superposition and possibility of interference. Interference underlies the power of quantum computing, and it remains a key component in unlocking the full potential of quantum technology.

Measurement

In the final stage of a quantum computation, states collapse into classical outcomes upon measurement. These outcomes are not arbitrary: their likelihood is fundamentally determined by whether the computational paths leading to them have interfered constructively or destructively.

A state where computational paths leading to it have interfered destructively will have a probability close to 0. Similarly, a state where computational paths leading to it have interfered constructively will have a significantly amplified likelihood.

Instead of measuring outcomes sequentially, quantum computers exploit the wave-like nature of qubits to allow all possible computational paths to co-exist and interfere. This creates a probabilistic landscape where the correct answers become the most likely outcomes.
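As a small sketch of what this looks like from the outside (my own example, with made-up final amplitudes), the machine is simply sampled at the end, and the outcome whose paths interfered constructively dominates the tally.

import numpy as np

# Hypothetical final amplitudes for a 2-qubit register, after interference has
# suppressed three outcomes and amplified one.
final_amps = np.array([0.05, 0.02, 0.98, 0.05], dtype=complex)
final_amps /= np.linalg.norm(final_amps)                 # renormalize

probabilities = np.abs(final_amps) ** 2
samples = np.random.choice(["00", "01", "10", "11"], size=1000, p=probabilities)
print({s: int((samples == s).sum()) for s in ["00", "01", "10", "11"]})
# The "10" outcome shows up roughly 99% of the time.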

Quantum bits (Qubits)

Classical computers process information using bits, which store information as 0s and 1s. Bits can be represented using physical objects such as bar magnets or switches placed in either an up or a down state. Bits can maintain their state for a long time, allowing them to represent stored information in a stable and long-lasting fashion. However, bits are limited in their ability to store information when compared to qubits. While a bit can exist in either a state of 0 or 1, a qubit can exist in a superposition of 0, 1 or any state in between.

The superposition of qubits is what makes them superior to classical bits. It is possible to think of a qubit as an electron spinning in a magnetic field. The electron could be spinning with the field, known as the spin-up state, or against the field, known as the spin-down state. Suppose it is possible to change the direction of the electron’s spin using a pulse of energy such as a laser. If only half a pulse of laser energy is used and all external influences are isolated, then we can imagine the electron in superposition, where it is in all possible states at once [4].

Superposition increases the computational power of qubits exponentially with the number of qubits in a quantum computer. Whereas two classical bits can hold only one of their four possible combinations (00, 01, 10 or 11) at any given time, two qubits can store a superposition of all four combinations simultaneously, three qubits can store eight combinations, and so on. Therefore, a quantum computer can perform 2^N computations, where N is the number of qubits.
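The exponential growth is easy to see with a little arithmetic (my own illustration): an N-qubit register is described by 2^N amplitudes, so every additional qubit doubles the computational space.

for n_qubits in (1, 2, 3, 10, 50):
    print(n_qubits, 2 ** n_qubits)
# 1 -> 2, 2 -> 4, 3 -> 8, 10 -> 1,024, 50 -> 1,125,899,906,842,624 (about 10^15)

This is also why simulating even a few dozen qubits on a classical machine becomes impractical: the full state vector of a 50-qubit register already has roughly a quadrillion entries.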

Conclusion

Through exponential scaling, unique algorithms and the continued evolution of quantum hardware, quantum computing has the potential to revolutionize industries like cryptography, material science, pharmaceuticals and logistics. We can imagine a world where quantum computers will be able to design powerful new drugs by simulating the behaviour of individual molecules, and optimize complex supply chains to help companies source the parts they need and assemble products in the most efficient way possible. Other, more disruptive applications could include breaking the encryption that safeguards our private information on the internet.

Governments, companies and research labs are working tirelessly to harness the potential of this emerging technology. Quantum computing, combined with the capabilities and advancements of AI, has the potential to help achieve artificial general intelligence (AGI). By enabling rapid data processing, parallel computation and the ability to work through extensive datasets, quantum computers could provide the improved learning capabilities needed for AGI. As quantum computers continue to rapidly evolve, it is essential for us to harness their potential in ways that further advance humanity’s future and well-being.

References

[1] Quantum Computing Explained

[2] What is quantum interference and how does it work?

[3] Quantum interference in Quantum Computing: 2025 Full Guide

[4] What is quantum computing? How it works and examples

The Principles of Quantum Computing Explained

Today, a variety of companies are producing mainstream quantum hardware and making tools available to developers, turning quantum computing technology that was theoretical a few decades ago into a reality.

Introduction

During one of his Messenger Lectures at Cornell University in 1964, the renowned Nobel laureate and theoretical physicist Richard Feynman remarked, “I think I can safely say that nobody understands quantum mechanics.” Feynman emphasized the counter-intuitive nature of quantum mechanics and encouraged his listeners to simply accept how atoms behave at the quantum level, rather than trying to impose a classical understanding on it [1].

At its core, quantum theory describes how light and matter behave at the subatomic level. Quantum theory explains how particles can appear in two different places at the same time, how light can behave both as a particle and a wave, and how electrical current can flow both clockwise and counter-clockwise in a wire. These ideas can seem strange to us, even bizarre, yet quantum mechanics gave rise to a new world of possibilities in science, technology and information processing.

What is a quantum computer?

While classical computers use bits that can be either 0 or 1, quantum computers use quantum bits (qubits) that can be 0, 1 or both at the same time, suspended in superposition. Qubits are created by manipulating and measuring systems that exhibit quantum mechanical behaviour. Because qubits can hold superposition and exhibit interference, they can solve problems differently than classical computers.

Quantum computers perform quantum computations by manipulating the quantum states of qubits in a controlled way to perform algorithms [2]. Quantum computers can transform an arbitrary input quantum state into an arbitrary output quantum state. This enables quantum computers to accurately compute the behaviour of small particles that follow the laws of quantum mechanics, such as the behaviour of an electron in a hydrogen molecule. Quantum computers can also be used to efficiently run optimization and machine learning algorithms.

For example, a classical computer might apply a brute force method to solve a maze by trying every possible path and remembering the paths that don’t work. A quantum computer, on the other hand, may not need to test all paths in the maze to arrive at the solution. Instead, given a snapshot of the maze, a quantum computer relies on the probability amplitudes of its qubits to determine the outcome. Since the amplitudes behave like waves, the solution emerges where the waves reinforce one another.
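The article does not name a specific algorithm, but one well-known technique in this spirit is Grover-style amplitude amplification, sketched below as my own toy example: the amplitude of the “correct path” is repeatedly boosted while the others are suppressed, so a final measurement almost certainly returns the answer.

import numpy as np

# Toy amplitude-amplification search over 8 "paths" (3 qubits' worth of states).
n_paths = 8
marked = 5                                   # hypothetical index of the correct path

amps = np.full(n_paths, 1 / np.sqrt(n_paths))               # uniform superposition

iterations = int(np.floor(np.pi / 4 * np.sqrt(n_paths)))    # 2 iterations for 8 paths
for _ in range(iterations):
    amps[marked] *= -1                       # oracle: flip the sign of the marked path
    amps = 2 * amps.mean() - amps            # reflect every amplitude about the average

print(np.round(np.abs(amps) ** 2, 3))        # the marked path now carries ~94% probability

Two iterations are enough here, rather than checking paths one by one; for larger mazes the advantage grows with the square root of the number of paths.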

Principles of quantum computing

Quantum computing relies on four key principles:

Superposition – a qubit in superposition represents all of its possible states at once, forming a complex, multi-dimensional computational space. Superposition allows complex problems to be represented in new ways within this space. Measuring the quantum state collapses it from a superposition of possibilities into a definite binary state that can be registered as 0 or 1 [3].

Entanglement – the ability of qubits to correlate their state with other qubits. Entanglement implies close connections among qubits in a quantum system, such that measuring one qubit immediately reveals information about the other qubits in the system.

Interference – qubits placed in a state of collective superposition structure information in a way that behaves like waves, with an amplitude associated with each wave. These waves can either reinforce one another or cancel each other out, thus amplifying or canceling out the probability of a specific outcome. Both amplification and cancellation are forms of interference.

Decoherence – occurs when a system collapses from a quantum state to a non-quantum state. This can be triggered intentionally, through measurement of the quantum system, or unintentionally, by environmental factors. Quantum computers must avoid or minimize decoherence.

Combining these principles helps explain how quantum computers work. A quantum circuit written by the user, and governed by a quantum algorithm, prepares a superposition of quantum states and then applies operations that entangle qubits and generate interference patterns. Outcomes are either canceled out or amplified through interference, and the amplified outcomes serve as the solution to the computation.
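Here is a minimal end-to-end sketch of that recipe (my own illustration, not tied to any particular quantum toolkit): a qubit is placed in superposition, a “hidden” phase acts on it, and a second interference step amplifies one outcome while cancelling the other, so the measurement reveals the phase deterministically.

import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # creates superposition / interference
Z = np.diag([1, -1]).astype(complex)                          # a "hidden" phase flip
zero = np.array([1, 0], dtype=complex)

def run(hidden_phase_present):
    state = H @ zero                     # step 1: prepare a superposition
    if hidden_phase_present:
        state = Z @ state                # step 2: the phase the algorithm must detect
    state = H @ state                    # step 3: interfere the two paths
    return np.abs(state) ** 2            # measurement probabilities for outcomes 0 and 1

print(run(False))   # [1. 0.] -- outcome 0 is amplified, outcome 1 is cancelled
print(run(True))    # [0. 1.] -- the hidden phase flips which outcome survives

Entanglement enters once more qubits are involved, but the cancel-or-amplify mechanism is the same.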

Conclusion

Today, a variety of companies are producing mainstream quantum hardware and making tools available to developers, turning quantum computing technology that was theoretical a few decades ago into a reality. Superconducting quantum processors are being delivered at regular intervals, increasing quantum computing speed and capacity. Researchers are continuing to make quantum computers even more useful, while overcoming challenges related to scaling quantum hardware and software, quantum error correction and quantum algorithms.


Designing solutions that work for users is what fuels my work. I’d love to connect and talk through your design ideas or challenges, connect with me today on LinkedIn or contact me at Mimico Design House.


References

[1] Quantum Mechanics by Richard P. Feynman

[2] The basics of Quantum Computing

[3] What is quantum computing?