Quantum Entanglement: ‘Spooky Action at a Distance’

The atoms that comprise all matter – including those composing our bodies – originated from distant stars and galaxies, emphasizing our intrinsic connection to the universe at fundamental scales. It is perhaps an inescapable conclusion that our reality is defined by how we observe and view our universe, and everything within it.

Introduction

In 1935, Albert Einstein, Boris Podolsky, and Nathan Rosen published a paper addressing the conceptual challenges posed by quantum entanglement [1]. These physicists argued that quantum entanglement appeared to conflict with established physical laws and suggested that existing explanations were incomplete without the inclusion of undiscovered properties, referred to as hidden variables. This argument, later termed the EPR argument, underscored perceived gaps in quantum mechanics.

Quantum entanglement represents a significant and intriguing phenomenon within quantum mechanics. It describes a situation wherein the characteristics of one particle within an entangled pair are dependent on those of its partner, regardless of the spatial separation between them. The particles involved may be electrons or photons, and the correlated property may be, for example, the direction of spin. Fundamentally, entanglement is based on quantum superposition: particles occupy multiple potential states until observation forces the system into a definite state. This state collapse occurs instantaneously for both particles.
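A concrete way to picture this is the simplest entangled state of two qubits, written here in standard Dirac notation (a textbook example, not tied to any particular experiment):

$$|\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big)$$

Neither particle has a definite value on its own; the moment one is measured and found in state 0, the other is found in state 0 as well, and likewise for 1.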

Measuring one particle’s property immediately determines the corresponding property of the other – even across vast cosmic distances – which seemed to imply the transmission of information at speeds exceeding that of light. This notion appeared to contradict foundational principles of physics as understood by Einstein, who referred to quantum entanglement as “spooky action at a distance” and advocated for a more satisfactory theoretical explanation.

Modern understanding of entanglement

The EPR argument rested on the conventional concept of reality: objects possess definite physical properties that measurement merely reveals. Einstein’s theory of relativity is built on this perspective, asserting that reality must be local and that no influence can propagate faster than the speed of light [2]. The EPR analysis demonstrated that quantum mechanics does not align with these principles of local reality, suggesting that a more comprehensive theory might be required to fully describe physical phenomena.

It was not until the 1960s that advances in technology and clearer definitions of measurement permitted physicists to investigate whether hidden variables were necessary to complete quantum theory. In 1964, Northern Irish physicist John S. Bell formulated an inequality, now known as Bell’s inequality, that any local hidden-variable theory must satisfy but that quantum mechanics predicts can be violated. If real-world experiments violated Bell’s inequality, local hidden variables could be excluded as an explanation for quantum entanglement.
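In its most commonly tested form (the CHSH variant), the inequality bounds a combination of correlations E measured along detector settings a, a′ and b, b′; any local hidden-variable theory must satisfy

$$S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,$$

whereas quantum mechanics predicts values of |S| up to 2√2 ≈ 2.83 for suitably chosen settings. An experiment that reliably measures |S| > 2 therefore rules out local hidden variables.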

In 2022, the Nobel Prize in Physics honored Alain Aspect, John Clauser, and Anton Zeilinger for their pioneering experiments utilizing Bell’s inequality, which significantly advanced our understanding of quantum entanglement. Unlike earlier thought experiments involving pairs of electrons and positrons, their work employed entangled photons. Their findings definitively ruled out local hidden variables and confirmed that particles can exhibit correlations across vast distances, challenging pre-quantum interpretations of physics.

Furthermore, these experiments demonstrated that quantum mechanics is compatible with special relativity. The collapse of the states of two entangled particles upon measurement does not entail information transfer exceeding the speed of light; rather, it reveals a correlation between entangled particle states governed by randomness and probability, such that measuring one immediately determines the state of the other.

Conclusion

When he called it “spooky action at a distance”, Einstein sought to understand entanglement within the framework of local reality. The EPR argument, and the experiments it ultimately inspired, instead revealed the non-local character of quantum correlations. Although the states of entangled particles exhibit instantaneous correlations, those correlations cannot be used to transmit information faster than the speed of light, keeping quantum entanglement fully consistent with causality and relativity.

Quantum entanglement underscores the indeterminate nature of reality prior to observation. Rather than existing as predetermined outcomes, reality according to quantum systems resides within vast fields of probability that are defined upon measurement. Additionally, the atoms that comprise all matter – including those composing our bodies – originated from distant stars and galaxies, emphasizing our intrinsic connection to the universe at fundamental scales. It is perhaps an inescapable conclusion that our reality is defined by how we observe and view our universe, and everything within it.

Transforming Data into Actionable Insights through Design

Introduction

At the age of fifteen, I secured a summer position at a furniture factory. To get the job, I expressed my interest in technology and programming to the owner, specifically regarding their newly acquired CNC machine. To demonstrate my capability, I presented my academic record and was hired to support a senior operator with the machine.

That summer, I was struck by the ability to control complex machinery through programmed commands on its control board. The design and layout of the interface, as well as the tangible results yielded from my input, highlighted the intersection of technical expertise and thoughtful design. This experience sparked my curiosity about the origins and development of such systems and functionalities.

I have always maintained that design is fundamentally about clarity: how systems make sense and elicit meaningful responses. It involves translating intricate, technical concepts into experiences that are intuitive and accessible. This perspective has guided my approach throughout my career, whether developing an AI-powered dashboard for Air Canada, creating an inclusive quoting tool for TD Insurance, or designing online public services for Ontario.

The central challenge remains consistent: achieving transparency and trust in complex environments. Effective design bridges the gap between people and systems, supporting purposeful engagement.

My observational nature drives me to understand how systems operate, how decisions are reached, and how individuals navigate complexity. This curiosity informs my design methodology, which begins by analyzing the foundational elements (people, processes, data, and technology) that must integrate seamlessly to deliver a cohesive experience.

To me, design is not merely an aesthetic layer; it serves as the essential framework that provides structure, clarity, and empathy within multifaceted systems. Designing from this perspective, I prioritize not only usability but also alignment across stakeholders and components.

My core design strengths

Throughout my career, I have found that my most effective work comes from applying a set of foundational strengths to every project. These strengths consistently guide my approach and ensure each solution is thoughtful, impactful, and built for real-world complexity.

Systems Thinking: I make it a priority to look beyond surface-level interfaces. My approach involves examining how data, people, and technology interact and influence each other within a system. By doing so, I can design solutions that are not only visually appealing but also deeply integrated and sustainable across the entire ecosystem.

Human-Centred Design: Every design decision I make is grounded in observation and empathy. I focus on the user’s experience, prioritizing how it feels to engage with the product or service. My aim is to create solutions that resonate with individuals on a practical and emotional level.

Accessibility & Inclusion: Designing for everyone is a fundamental principle for me. I strive to ensure that the experiences I create are not just compliant with accessibility standards, but are genuinely usable and fair for all users. Inclusion is woven into the fabric of my process, shaping outcomes that reflect the diversity of people who will interact with them.

Storytelling & Visualization: I leverage visual storytelling to simplify and clarify complex ideas. Using visuals, I help teams and stakeholders see both what we are building and why it matters. This approach fosters understanding and alignment, making the design process transparent and purposeful.

Facilitation & Collaboration: I believe that the best insights and solutions emerge when diverse voices contribute to the process. By facilitating collaboration, I encourage open dialogue and collective problem-solving, ensuring that outcomes are shaped by a broad range of perspectives and expertise.

If I had to distill all these strengths into a single guiding principle, it would be this: “I design to understand, not just to create.”

My design approach: a cyclical process

Design, for me, is less of a straight line and more of a cycle, a continuous rhythm of curiosity, synthesis, and iteration. This process shapes how I approach every project, ensuring that each step builds upon the previous insights and discoveries.

1. Understand the System: I begin by mapping the entire ecosystem, considering all the people involved, their goals, the relevant data, and any constraints. This foundational understanding allows me to see how different elements interact and influence each other.

2. Observe the Experience: Next, I dedicate time to watch, listen, and learn how people actually engage with the system. Through observation and empathy, I uncover genuine behaviours and needs that may not be immediately apparent.

3. Synthesize & Prioritize: I then translate my findings into clear opportunities and actionable design principles. This synthesis helps to focus efforts on what matters most, guiding the team toward solutions that address real challenges.

4. Visualize the Future: Prototyping and iteration are central to my approach. I work to make complexity feel simple and trustworthy, refining concepts until the design communicates clarity and confidence.

5. Deliver & Educate: Finally, I collaborate with developers, stakeholders, and accessibility teams to bring the vision to life. I also focus on making the solution scalable, ensuring that the impact and understanding extend as the project grows.

Good design isn’t just creative; it’s disciplined, methodical, and deeply human.

Projects that demonstrate impact

Transforming operations at Air Canada

At Air Canada, I was responsible for designing AI dashboards that transformed predictive data into clear, actionable insights. These dashboards provided operations teams with the tools to act quickly and effectively, reducing delay response time by 25%. This project highlighted the value of turning complex data into meaningful information that drives real-world improvements.

Advancing accessibility at TD Insurance

During my time at TD Insurance, I led an accessibility-first redesign of the Auto and Travel Quoter. My approach was centred on ensuring that the solution met the rigorous standards of WCAG 2.1 AA compliance. The redesign not only made the product fully accessible, but also drove an 18% increase in conversions. This experience reinforced the importance of designing for everyone and demonstrated how accessibility can be a catalyst for business growth.

Simplifying government services for Ontarians

With the Ontario Ministry of Transportation, I took on the challenge of redesigning a complex government service. My focus was on simplifying the process for citizens, making it easier and more intuitive to use. The result was a 40% reduction in form completion time, making government interactions smoother and more efficient for the people of Ontario.

Clarity as a catalyst

What stands out to me about these projects is that each one demonstrates a universal truth: clarity scales. When people have a clear understanding of what they are doing and why, efficiency, trust, and accessibility naturally follow. These outcomes prove that good design is not just about aesthetics; it’s about making information actionable and understandable, leading to measurable impact.

Reflection

The best design doesn’t add more; it removes confusion. It connects people, systems, and intent, turning complexity into clarity.

If your organization is wrestling with complexity, whether that’s data, accessibility, or AI, that’s exactly where design can make the biggest difference.

At Mimico Design House, we specialize in helping teams turn that complexity into clarity, mapping systems, simplifying experiences, and designing interfaces that people actually understand and trust.

Through a combination of human-centered design, systems thinking, and accessibility expertise, I work with organizations to bridge the gap between business strategy and user experience, transforming friction points into moments of understanding.

If your team is facing challenges with alignment, usability, or data-driven decision-making, I’d love to explore how we can help.

You can connect with me directly on LinkedIn or visit mimicodesignhouse.com to learn more about how we help organizations design systems people believe in.

The Quantum Realm: Our Connection to the Universe

At the quantum scale, the universe manifests as a field of infinite possibilities, where the electrons within our atoms move in clouds of probability, always shifting. Consequently, we, as humans composed of countless atoms, are an inseparable part of the universe’s ever-changing nature, and our problems, at the quantum level, do not really exist.

Introduction 

When we close our eyes and place our hand on our forehead, we perceive the firmness of our hand and the gentle warmth of our skin. This physical sensation, the apparent solidity and presence of our body, seems tangible and reassuring. However, at the most fundamental level, our bodies are composed almost entirely of empty space. Beneath the surface of our bones, tissues, and cells, we find that our physical form is constructed from atoms, which themselves are predominantly made up of empty space, held together by the invisible forces of electromagnetism. The idea that we are, in essence, built from empty space can feel unsettling, yet it is central to our understanding of quantum mechanics.   

If we imagine an atom and picture a single proton as a grain of sand placed at the centre of a football stadium, the nearest electron would be found somewhere in the outer bleachers, approximately 90 metres away. The vast expanse between the proton and the electron is filled with nothing but empty space [1]. The electrons themselves do not orbit the nucleus like tiny marbles following a fixed path. Instead, they ripple through space in a cloud-like manner, appearing in one location at one moment, and in another the next. Their movement is not governed by certainty, but by the probability clouds that define their position and momentum.
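A rough scale check supports the analogy. The Bohr radius of hydrogen is about 5.3 × 10⁻¹¹ m and the proton’s charge radius about 8.4 × 10⁻¹⁶ m, so

$$\frac{a_0}{r_p} \approx \frac{5.3\times10^{-11}\,\text{m}}{8.4\times10^{-16}\,\text{m}} \approx 6\times10^{4}.$$

Scaling a grain of sand roughly a millimetre across by the same factor puts the electron’s typical distance at several tens of metres, comfortably out in the bleachers.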

The Universe Is Impermanent

Everything in the universe is in a state of constant motion. Objects such as chairs and tables may appear completely motionless to our eyes, yet at the quantum level, this sense of stillness is an illusion. Even as we sleep and perceive ourselves to be at rest, the atoms that make up our bodies are ceaselessly moving and vibrating. This underlying activity is dictated by the principles of quantum mechanics, which reveal an intricate and dynamic world beneath the surface of everyday experience.

Werner Heisenberg’s uncertainty principle states that it is impossible to simultaneously know both the precise position and the exact momentum of any object [2]. The more accurately we measure one, the less certain we become of the other. This fundamental limit means that no object can ever be fixed in a single, definite spot while remaining absolutely still. To do so would violate the laws of quantum physics, which require all matter to retain a degree of movement and uncertainty.
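Stated quantitatively, the product of the uncertainty in position, Δx, and the uncertainty in momentum, Δp, can never fall below a fixed quantum limit:

$$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}$$

where ħ is the reduced Planck constant. Squeezing Δx toward zero forces Δp to grow without bound, and vice versa.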

Consider a ball placed in a bowl and cooled until it appears perfectly still at the bottom. According to the uncertainty principle, the ball can never truly be at rest. It will always exhibit a subtle vibration, as restricting its position too precisely leads to uncertainty in its momentum. This perpetual motion is known as the ball’s zero-point energy.  
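For the idealized version of this setup, a quantum harmonic oscillator vibrating at angular frequency ω, the lowest energy the system can have is not zero but

$$E_0 = \tfrac{1}{2}\hbar\omega,$$

a direct consequence of the uncertainty principle above: perfect stillness at a perfectly known position is forbidden.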

A universe where everything is perfectly still would not permit life as we know it. Nothing in the cosmos is permanent; particles continuously move, shift, and even appear and disappear. Remarkably, quantum theory predicts that even the vacuum of space is not empty but is filled with modes of vibration possessing zero-point energy [3]. This means that space itself is permeated by an endless and restless sea of energy, where particles are constantly popping in and out of existence, reflecting the ever-changing nature of reality.  

Quantum Mechanics and the Foundations of Consciousness 

At the quantum level, the behaviour of particles is defined by several extraordinary phenomena, including superposition, entanglement, coherence, and the observer effect. In the phenomenon known as superposition, particles can exist in multiple states at the same time. These particles remain in superposition until an act of observation occurs, causing their wave functions to collapse into a single, definite outcome. When two particles interact and become entangled, their properties, such as spin, polarization, and momentum, become fundamentally inseparable. Measurement of one entangled particle instantly determines the state of its partner, regardless of the distance separating them. 
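In standard notation, a single two-state particle in superposition is written as a weighted sum of basis states, and measurement yields each outcome with probability equal to the squared magnitude of its amplitude (the Born rule):

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad P(0)=|\alpha|^{2},\quad P(1)=|\beta|^{2},\quad |\alpha|^{2}+|\beta|^{2}=1.$$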

Humans are deeply entangled with the inner workings of the universe. Our thoughts, memories, and emotions are rooted in the quantum behaviours of the atoms that compose our bodies. Consciousness, in this context, is shaped and defined through quantum operations. According to one line of research, the billions of neurons firing simultaneously in the human brain function through quantum entanglement, collectively giving rise to our conscious experience [4].

Stuart Hameroff and Roger Penrose, in their 1996 paper, argued that consciousness depends on coherent quantum processes within collections of microtubules found in brain neurons. At the lowest neurophysiological level, the cytoskeleton of neurons in the human brain is composed of protein networks, specifically neurofilaments and microtubules. These structures are essential for various transport processes within neurons [5][6]. According to Hameroff and Penrose’s theoretical framework, tubulins in microtubules serve as the substrate for quantum processes.

Through their Orchestrated Objective Reduction (Orch OR) theory, Hameroff and Penrose proposed that the brain’s microtubules act as quantum computers, maintaining coherent quantum states that collapse in a process tied to the geometry of space-time and influenced by quantum gravity. In this framework, consciousness operates as a quantum wave function passing through the brain’s microtubules, with these collapses corresponding to the observer’s elementary acts of consciousness and embedding them directly into the fabric of the universe.

Conclusion 

Contemplating the foundations of our bodies and consciousness, it becomes apparent that quantum mechanics may govern much more than just the biological processes within us. While the Orch OR theory proposed by Hameroff and Penrose remains a topic of debate, it opens the door to the possibility that consciousness arises not solely from biological functions but also from quantum phenomena.

In quantum mechanics, the act of observation is inherently influential, determining the state to which a particle’s wave function collapses. This raises a profound question: could quantum mechanics provide an explanation for our ability to perceive and realize different realities within our consciousness? Furthermore, could our observation of quantum states, which shape our consciousness, be the very mechanism that connects us to the universe in a holistic manner?

For me, the most meaningful way to think about this is that uncertainty and constant motion are central to how the universe operates at the quantum level. If our bodies and consciousness are subject to the laws of quantum physics, then our experiences of periods of darkness and despair, feelings of being stuck or hopeless, are never truly fixed states. Motion persists within our atoms and within our consciousness, regardless of our perceptions. The pressure we experience, the everyday stresses, and our emotions are all shaped by how we observe and interpret events. At the quantum level, nothing remains permanent; everything is in flux.

This perspective is not meant to diminish our existence as human beings. Rather, it highlights our intrinsic connection to the fabric of the universe. The universe does not operate with absolute certainty or permanence; it is defined by uncertainty, continual change, and movement. At the quantum scale, the universe manifests as a field of infinite possibilities, where the electrons within our atoms move in clouds of probability, always shifting. Consequently, we, as humans composed of countless atoms, are an inseparable part of the universe’s ever-changing nature, and our problems, at the quantum level, do not really exist.


Designing solutions that effectively meet user needs is the driving force behind my work. I also share practical insights on computing and human-centered design each week. I’d love to connect and discuss your design ideas or challenges; feel free to reach out to me today on LinkedIn or contact me at Mimico Design House.


Atom Loss: A Bottleneck in Quantum Computing

It was believed that a reliable quantum computer running indefinitely was a decade or more away. With these new advancements in mitigating atom loss, quantum computers running indefinitely and producing reliable results are only a few years away.

Introduction

Until recently, quantum computers faced a significant obstacle known as ‘atom loss’, which limited their advancement and ability to operate for long durations. At the heart of these systems are quantum bits, or qubits, which represent information in a quantum state, allowing them to be in the state 0, 1, or both simultaneously, thanks to superposition. Qubits can be realized in physical systems such as atoms, ions, and photons, engineered through precise manipulation and measurement of quantum mechanical properties.
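As a rough intuition aid, the short sketch below models a qubit classically as a normalized pair of complex amplitudes and samples measurement outcomes with the corresponding probabilities. It is a toy simulation for illustration, not how a physical quantum computer is actually programmed:

```python
import numpy as np

# A qubit modeled as a pair of complex amplitudes (alpha, beta) for the
# state alpha|0> + beta|1>, normalized so the probabilities sum to 1.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)  # an equal superposition
assert np.isclose(abs(alpha) ** 2 + abs(beta) ** 2, 1.0)

# Measurement yields 0 or 1 with probabilities |alpha|^2 and |beta|^2.
rng = np.random.default_rng(seed=1)
outcomes = rng.choice([0, 1], size=10_000, p=[abs(alpha) ** 2, abs(beta) ** 2])
print(f"fraction of 0s: {np.mean(outcomes == 0):.3f}")  # ~0.500
```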

Historically, this atom loss phenomenon restricted quantum computers to performing computations for only a few milliseconds. Even the most sophisticated machines struggled to operate beyond a few seconds. However, recent breakthroughs by Sandia National Laboratories and Harvard University researchers have changed this landscape dramatically. At Harvard, researchers have built a quantum computer that sustained operations for over two hours [1], a substantial improvement over previous limitations. This advancement has led scientists to believe they are on the verge of enabling quantum computers to run continuously, potentially without time constraints.

What causes atom loss?

Atom loss presents a significant challenge in quantum computing, as it results in the loss of the fundamental unit of information – the qubit – along with any data it contains. During quantum computations, qubits may be lost from the system due to factors such as noise and temperature fluctuations. This phenomenon can lead to information degradation and eventual system failure. To maintain qubit stability and prevent atom loss, a stringent set of physical, environmental, and engineering conditions must be satisfied.

Environmental fluctuations

Maintaining the integrity of qubits in a quantum computing system is heavily dependent on shielding them from various environmental disturbances. Qubits are highly sensitive to noise, electromagnetic fields, and stray particles, any of which can interfere with their quantum coherence. Quantum coherence describes the ability of a qubit to remain in a stable superposition state over time; the duration of this coherence directly affects how long a qubit can function without experiencing errors.
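To a first approximation, this loss of coherence is often modeled as an exponential decay with a characteristic coherence time T₂: the part of the qubit’s state that encodes superposition is suppressed roughly as

$$C(t) \approx C(0)\, e^{-t/T_2},$$

so, all else being equal, a longer T₂ directly translates into more operations completed before errors dominate.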

One fundamental requirement for preserving quantum coherence is the maintenance of cryogenic environments. Qubits must be kept at temperatures near absolute zero, which is essential for suppressing thermal noise and preserving the quantum behaviour necessary for reliable operations. Even slight fluctuations in temperature or the presence of external electromagnetic influences can cause the delicate quantum state of a qubit to degrade or flip unpredictably, leading to information loss and system errors [2].

These stringent environmental controls are critical for ensuring that qubits remain stable and effective throughout quantum computations, highlighting the importance of addressing environmental fluctuations as a key challenge in quantum computing.

Trap imperfections

Neutral atom processors have become a prominent platform for achieving large-scale, fault-tolerant quantum computing [3]. This approach enables qubits to be encoded in states that possess exceptionally long coherence times, often extending up to tens of seconds. The extended coherence time is crucial for maintaining quantum information over prolonged computations, which is essential for complex and reliable quantum operations.

The operation of neutral atom processors relies on the use of optical tweezer arrays. These arrays are dynamically configured, allowing qubits to be trapped in arbitrary geometries and enabling the system to scale to tens of thousands of qubits. The flexibility in configuration and scalability makes neutral atom processors especially suited for advancing quantum computing technology beyond previous limitations.

Despite these advantages, neutral atom processors are not immune to challenges. Atom loss remains a significant issue, arising from several sources. Heating within the system can cause atoms to escape their traps, while collisions with background gas particles further contribute to atom loss. Additionally, during the excitation of an atom from one quantum state to another, such as the transition to a Rydberg state, anti-trapping can occur, leading to the loss of qubits from the processor array.

Readout errors

During the process of reading out quantum information, qubits may be displaced from their positions within the two-dimensional arrays. This readout operation, which involves imaging the qubits to determine their quantum state, can inadvertently lead to the loss of qubits from the processor array. Such atom loss poses a risk to the integrity and continuity of quantum computations.

To address this challenge, neutral atom processor arrays are typically designed with additional qubits that act as a buffer. These extra qubits ensure that, even when some atoms are lost during readout or other operations, enough qubits remain available for the system to continue performing calculations reliably.
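A toy calculation illustrates why even a modest buffer helps. Assuming, purely for illustration, that each atom survives a readout cycle independently with a fixed probability, the chance that enough qubits remain after many cycles is a binomial tail:

```python
import math

def survival_probability(n_total, n_needed, p_loss, n_cycles):
    """Probability that at least n_needed of n_total atoms remain after
    n_cycles, with each atom lost independently with probability p_loss
    per cycle. An illustrative toy model, not a real device spec."""
    p = (1 - p_loss) ** n_cycles  # per-atom survival probability
    return sum(
        math.comb(n_total, k) * p**k * (1 - p)**(n_total - k)
        for k in range(n_needed, n_total + 1)
    )

# Hypothetical numbers: 0.1% loss per readout cycle, 50 cycles,
# 100 qubits required for the computation to continue.
print(survival_probability(100, 100, 0.001, 50))  # no spares:  ~0.007
print(survival_probability(120, 100, 0.001, 50))  # 20% buffer: ~1.000
```

With no spares, even a 0.1% per-cycle loss rate makes it very unlikely that all 100 qubits survive 50 cycles, while a 20% buffer makes success nearly certain. Real systems are far more complicated, but the qualitative lesson carries over.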

Another approach to mitigating atom loss during readout is to slow down the imaging process. By reducing the speed of readout operations, the likelihood of displacing qubits can be minimized, thereby decreasing the rate at which atoms are lost from the array. However, this strategy comes with a trade-off: slowing down readout operations leads to reduced overall system efficiency, as calculations take longer to complete [4]. As a result, there is an inherent balance between maintaining qubit integrity and preserving the speed and efficiency of quantum computations.

Imperfect isolation

Maintaining perfect isolation of qubits from their environment is an immense challenge, primarily because it demands highly sophisticated and costly shielding methods. In practice, it is virtually impossible to completely shield quantum systems from external influences. As a result, stray electromagnetic signals, fluctuations in temperature, and mechanical vibrations can penetrate these defences and interact with quantum systems. Such interactions are detrimental, as they can disrupt the delicate balance required for quantum operations and ultimately lead to atom loss within the processor array [5]. These environmental disturbances compromise the stability and coherence of qubits, posing a significant obstacle to the reliability and scalability of quantum computers.

Recent solutions and research

Multiple research teams are developing ways to reduce atom loss by detecting and correcting missing atoms in quantum systems, improving calculation reliability.

Researchers at Sandia National Laboratories, in collaboration with the University of New Mexico, have published a study demonstrating, for the first time, that qubit leakage errors in neutral atom platforms can be detected without compromising or altering computational outcomes [6]. The team achieved this by utilising the alternating states of entanglement and disentanglement among atoms within the system. In experiments where the atoms were disentangled, results showed substantial deviations compared to those observed during entanglement. This approach enabled the detection of the presence of adjacent atoms without direct observation, thereby preserving the integrity of the information contained within each atom.

Ancilla qubits are essential in quantum error correction and algorithms [7]. These extra qubits help with measurement and gate implementation, yet they do not store information from the main quantum state. By weakly entangling ancilla qubits with the physical qubits, it becomes possible for them to identify errors without disturbing the actual quantum data. Thanks to non-demolition measurements, errors can be detected while keeping the physical qubit’s state intact.
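A small simulation makes the idea concrete. The sketch below, a toy model in plain numpy rather than any particular lab’s software, entangles an ancilla with two data qubits through CNOT gates so that measuring the ancilla reveals their joint parity while leaving the data state untouched:

```python
import numpy as np

def cnot(n_qubits, control, target):
    """Dense CNOT on an n-qubit register; qubit 0 is the most
    significant bit of the state index."""
    dim = 2 ** n_qubits
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n_qubits - 1 - q)) & 1 for q in range(n_qubits)]
        if bits[control] == 1:
            bits[target] ^= 1
        j = sum(b << (n_qubits - 1 - q) for q, b in enumerate(bits))
        U[j, i] = 1.0
    return U

# Register order: (data0, data1, ancilla). The data qubits start in the
# entangled state (|00> + |11>)/sqrt(2); the ancilla starts in |0>.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(bell, np.array([1, 0]))

# A ZZ-parity check: CNOT from each data qubit onto the ancilla.
state = cnot(3, 0, 2) @ state
state = cnot(3, 1, 2) @ state

# The ancilla (least significant bit) reads 0 with certainty because the
# data state has even parity -- and the data state itself is undisturbed.
p0 = sum(abs(a) ** 2 for i, a in enumerate(state) if i % 2 == 0)
print(f"P(ancilla = 0) = {p0:.3f}")  # 1.000
```

Had an error flipped one data qubit beforehand, the ancilla would read 1 instead, flagging the fault without the quantum data ever being observed directly.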

A group of physicists from Harvard University have recently created the first quantum computer capable of continuous operation without needing to restart [1]. By inventing a technique to replenish qubits in optical tweezer arrays as they exit the system, the researchers managed to keep the computer running for more than two hours. Their setup contains 3,000 qubits and can inject up to 300,000 atoms each second into the array, compensating for any lost qubits. This approach enables the system to maintain quantum information, even as atoms are lost and replaced. According to the Harvard team, this innovation could pave the way for quantum systems that can function indefinitely.

Conclusion

It was previously believed that atom loss could seriously hinder the progress of quantum computing. Atom loss and qubit leakage were serious errors that could render calculations unreliable. With the advancements introduced by the researchers at Sandia National Laboratories, the University of New Mexico, and Harvard University, and a host of other teams around the world, the revolutionary improvements quantum computers could bring to scientific research, medicine, and finance are closer than ever. It was believed that a reliable quantum computer running indefinitely was a decade or more away. With these new advancements in mitigating atom loss, quantum computers running indefinitely and producing reliable results are only a few years away.

[1] Harvard Researchers Develop First Ever Continuously Operating Quantum Computer

[2] Quantum Chips: The Brains Behind Quantum Computing

[3] Quantum Error Correction resilient against Atom Loss

[4] Novel Solutions For Continuously Loading Large Atomic Arrays

[5] Quantum Decoherence: The Barrier to Quantum Computing

[6] A breakthrough in Quantum Error Correction

[7] Ancilla Qubit

Bringing Ideas to Life: My Journey as a Product Architect

My work is about helping clients and organizations bring their ideas to life, transforming understanding into development, and development into reality, with as little friction and as much functionality as possible.

Lately, I have been reflecting on what drew me, as a designer, to write about topics such as artificial intelligence and quantum computing. I have been fascinated with both topics and how they have transformed the way we view the world. Everything we see today in terms of advancements in AI and quantum computing started with an idea, brought to life through innovation and perseverance.

In AI, there was the idea that machine learning would transform the way we do business by leveraging large amounts of data to provide valuable insights, something that would not be easily attainable through human effort. In quantum computing, there was the idea that applying the way particles behave in the universe to computing would unlock a vast potential for computing capabilities and power, beyond what classical computers can achieve. So many other advancements and achievements in AI and quantum computing continue to be realized through the conception of ideas and the relentless pursuit of methods to implement them.

Everything starts with an idea

Beyond AI and quantum computing, everything we see around us started with an idea, brought to life through continued and persistent effort to make it a reality. Every building we see, every product, every service and all material and immaterial things in our lives are the product of an idea.

As a designer and product architect, I also help make ideas a reality through persistent effort and the application of methodology that lays a roadmap for the implementation of those ideas. Similarly, AI and quantum computing are fields that are bringing novel and exciting concepts to life through the development and application of scientific methodology.

While thinking about all of this, I pondered how I would define my work and role as a designer. How would I describe my work, knowing that most of us use technology without thinking about the journey a product takes from idea to experience? What value do I bring to organizations that hire me to help them with their problems? In an age where products are incorporating ever more advanced and sophisticated technology, as is the case with AI and quantum computing, how does my work extend beyond simply developing designs and prototypes?

To answer these questions, I am drawn back to the fact that everything around us starts with an idea. As a designer, it is extremely rewarding to me to help make ideas for my clients a reality while navigating the conceptual, technical and implementation challenges.

Making the invisible useful

I’ve been thinking a lot about the similarities between how we design physical spaces and how we design digital ones. Just as a building starts as an idea in an architect’s mind, so do the products that I work on and help a multitude of organizations bring to life. As a designer, I help lay the foundations for a product idea by thoroughly understanding the motivations and needs behind it, and what benefits and improvements implementing it would bring.

Buildings serve needs by providing housing for people or serving as places to work, and for businesses and organizations to operate. A well-designed building offers an effortless flow that draws people in and makes them want to stay. Similarly, great digital design allows for seamless navigation, creating an experience that feels natural and engaging. Before an architect devises plans and drawings for a building, they must first maintain a clear vision of the idea in their mind, understand the needs behind it and ensure that their designs and plans meet those needs.

From there, the idea and concept of the building in the architect’s mind are translated into plans and drawings. Those plans are drawn and shared with a builder, who in turn collaborates with the architect to bring them to life. Without the architect and their clear vision of the idea and concept behind the building, the building would not exist, at least not in the shape and form that the architect would have imagined. It would not properly serve the needs and bring about the benefits that accompanied the original idea.

Just like a building architect, as a product architect I must also understand the needs behind digital products to create experiences that truly serve the user. Through this process, I envision flows and interactions that will enable users to achieve their goals in the simplest and easiest way possible, reducing friction while also achieving the desired business value and benefit. Like an architect, I collaborate with members of technical teams so that the idea behind the product can be realized to its full potential through detailed roadmaps, designs and prototypes.

Figure 1. Architects are masters of the invisible made useful.

An architect must possess technical and creative skills that enable them to visualize the idea of a building. The same is true for me as a product architect. Without the ability to clearly articulate complex technical concepts through detailed designs and specifications while also applying a creative lens, product ideas would not be realized to their full potential.

In summary, how do I define my work? My work is about helping clients and organizations bring their ideas to life, transforming understanding into development, and development into reality, with as little friction and as much functionality as possible. I can help you and your organization achieve the same. Let me show you how.