DeepPractise

Fidelity and Distance Metrics

Track: Noise & Errors · Difficulty: Intermediate · Est: 14 min

Overview

When someone says “this device has 99.5% fidelity,” what does that actually mean?

Noise is multi-dimensional:

  • states can drift
  • gates can mis-rotate
  • measurements can misclassify

So we need metrics that answer specific questions. This page introduces two families:

  • fidelity: “how close are we?”
  • distance metrics: “how far are we?”

The goal is interpretation: what a reported number does and does not guarantee.

Intuition

Why multiple metrics exist

There is no single perfect notion of “closeness” for everything:

  • Sometimes you care about a state (did we prepare |+\rangle correctly?).
  • Sometimes you care about a process (did the gate behave like the intended rotation?).
  • Sometimes you care about worst-case guarantees (is there any input state that gets badly damaged?).

A metric is like a measuring tape. Different tapes are useful for different tasks.

Fidelity as “overlap”

At an intuitive level, fidelity answers:

  • “If I expected one thing, how much does reality resemble it?”

If two states are identical, fidelity is 1. If they are very different, fidelity is closer to 0.

Why distance metrics matter

Fidelity can look great while hiding a problem you care about. Distance-style metrics emphasize:

  • how distinguishable two behaviors are
  • how large an error could be in the worst case

So distance metrics are often used when you need “safety margins.”

Formal Description

We keep definitions lightweight and focus on meaning.

State fidelity (conceptual)

For pure states, a standard notion is the squared overlap:

F(|\psi\rangle, |\phi\rangle) = |\langle \psi | \phi \rangle|^2.

Interpretation:

  • F = 1 means the states match exactly.
  • F = 0 means the states are orthogonal (maximally different for measurement).

For mixed states, fidelity generalizes, but the key idea remains “how much the states align.”
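The squared-overlap formula above can be sketched directly with NumPy. This is a minimal illustration for pure state vectors only; the function name `state_fidelity` and the example states are chosen here for clarity, not taken from any particular library.

```python
import numpy as np

def state_fidelity(psi, phi):
    """Squared overlap |<psi|phi>|^2 between two pure state vectors."""
    psi = np.asarray(psi, dtype=complex)
    phi = np.asarray(phi, dtype=complex)
    # np.vdot conjugates the first argument, giving the inner product <psi|phi>
    return abs(np.vdot(psi, phi)) ** 2

zero = np.array([1, 0])                # |0>
one = np.array([0, 1])                 # |1>
plus = (zero + one) / np.sqrt(2)       # |+>

print(state_fidelity(plus, plus))  # identical states -> 1.0
print(state_fidelity(zero, one))   # orthogonal states -> 0.0
print(state_fidelity(plus, zero))  # |<+|0>|^2 -> 0.5
```

Note how the fidelity between |+\rangle and |0\rangle is exactly 0.5: the states overlap partially, so a measurement can sometimes, but not always, tell them apart.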

Process fidelity (conceptual)

When you care about a gate or channel, you want to compare:

  • the intended process (ideal gate)
  • the actual implemented process

Process fidelity refers to a family of metrics that quantify how close the implemented operation is to the ideal one as a whole, not just for one output state.

Conceptually:

  • state fidelity: compares one output state to another
  • process fidelity: compares how two processes act across many possible inputs
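One way to make the "many possible inputs" idea concrete is a Monte-Carlo sketch: apply the ideal and actual gates to many random input states and average the resulting output-state fidelities. This is an illustrative estimate, not a formal definition of process fidelity; the slight over-rotation of 0.02 rad is a hypothetical coherent error chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pure_state(rng):
    """Draw a Haar-like random single-qubit pure state."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

def rx(theta):
    """Rotation about the X axis by angle theta."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

ideal = rx(np.pi / 2)          # intended gate
actual = rx(np.pi / 2 + 0.02)  # hypothetical over-rotated implementation

# Average output-state fidelity over random inputs
fids = []
for _ in range(2000):
    psi = random_pure_state(rng)
    fids.append(abs(np.vdot(ideal @ psi, actual @ psi)) ** 2)
avg = np.mean(fids)
print(avg)  # close to, but below, 1.0
```

The point of the sketch: a single well-chosen input could hide the error entirely, while averaging over many inputs reveals how the process as a whole deviates from the ideal.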

Why distance metrics are complementary

Distance metrics try to capture “how wrong could things be?”

Without going deep into formalism, you can think of distance metrics as:

  • 0 means identical behavior
  • larger values mean the two behaviors are easier to tell apart by experiment

This is valuable because some applications need guarantees across many inputs, not just one.
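As one concrete example of a distance-style metric, the trace distance between two density matrices is 0 for identical states and 1 for perfectly distinguishable (orthogonal) ones. The sketch below computes it from the eigenvalues of the difference matrix; the helper names are chosen here for illustration.

```python
import numpy as np

def trace_distance(rho, sigma):
    """Trace distance 0.5 * ||rho - sigma||_1 between density matrices."""
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigs))

def dm(psi):
    """Density matrix |psi><psi| of a pure state vector."""
    psi = np.asarray(psi, dtype=complex)
    return np.outer(psi, psi.conj())

zero, one = np.array([1, 0]), np.array([0, 1])
plus = (zero + one) / np.sqrt(2)

print(trace_distance(dm(zero), dm(zero)))  # identical -> 0.0
print(trace_distance(dm(zero), dm(one)))   # orthogonal -> 1.0
print(trace_distance(dm(zero), dm(plus)))  # partial overlap -> ~0.707
```

For pure states the two families are linked: trace distance equals sqrt(1 - F), so the |0\rangle vs |+\rangle pair with fidelity 0.5 gives distance sqrt(0.5) ≈ 0.707.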

Interpreting reported fidelity numbers

A reported “gate fidelity” number is usually:

  • an average-type summary

So you should ask:

  • Average over what distribution of inputs?
  • Does it include measurement errors?
  • Is it per-gate, per-circuit, or per-layer?
  • Is it measuring coherent error, stochastic error, or both?

A single number can be meaningful—but only with context.

Worked Example

Suppose someone reports:

  • “Single-qubit gate fidelity: 99.9%”

A useful interpretation:

  • A single application of a typical 1-qubit gate is usually very close to ideal.

But what about a circuit with 100 such gates? Even if each gate is “only” 0.1% off on average, the overall circuit can degrade noticeably.
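The degradation over 100 gates can be estimated with a naive independent-error model. This is an assumption for illustration: it treats each gate's error as an independent coin flip, which is optimistic if errors are coherent, since coherent errors can add in amplitude and compound faster.

```python
# Naive independent-error model (an assumption, not a guarantee):
# if each gate "succeeds" with probability 0.999, a 100-gate circuit's
# rough success estimate is the product of the per-gate probabilities.
per_gate = 0.999
depth = 100
estimate = per_gate ** depth
print(estimate)  # ~0.905: roughly a 10% drop despite "99.9%" gates
```

So a headline per-gate number of 99.9% translates, under even this simple model, to only about 90% for a depth-100 circuit.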

So the right follow-up questions include:

  • How does fidelity change with depth?
  • Are there a few gates that are much worse than average?
  • Is the error mostly coherent (systematic) or random?

This example illustrates why “fidelity per gate” is not the same as “success probability of a full algorithm.”

Turtle Tip

When you see a fidelity number, immediately ask: “Fidelity of what—state, gate, or full circuit—and averaged over what?”

Common Pitfalls
  • Treating one fidelity number as a universal device score. Different experiments measure different things.
  • Assuming high state fidelity implies high gate fidelity (or vice versa). You can prepare one state well but still have poor general gate performance.
  • Ignoring worst-case behavior. An average can hide a few very bad cases.

Quick Check
  1. What is the conceptual difference between state fidelity and process fidelity?
  2. Why can a high per-gate fidelity still lead to poor deep-circuit performance?
  3. Name one reason distance metrics can be useful alongside fidelity.

What’s Next

Fidelity numbers are summaries. Next we introduce Randomized Benchmarking (RB), a common way to estimate average gate performance while reducing sensitivity to specific state-preparation and measurement imperfections.