DeepPractise

Measurement Error Mitigation

Track: Noise & Errors · Difficulty: Intermediate · Est: 12 min

Overview

Measurement error mitigation addresses a simple but crucial problem:

  • “The device may report the wrong bitstring even if the quantum state before measurement was correct.”

Readout error is often one of the largest error sources in near-term experiments. Mitigating it can significantly improve reported probabilities and expectation values.

This page explains:

  • the conceptual idea of inverting confusion matrices
  • why mitigation helps but does not remove underlying noise
  • practical constraints and failure modes

Intuition

Readout as a noisy channel

Think of measurement as a noisy mapping:

  • true outcome → reported outcome

If you know the mapping well (via calibration), you can approximately reverse it.

Why mitigation is possible here

Readout is classical at the end:

  • after measurement, you have classical counts of bitstrings

So you can apply classical post-processing. This makes readout mitigation one of the most accessible mitigation techniques.

Why it doesn’t remove noise

Mitigation does not change what happened in the quantum device. It only changes your best estimate of the underlying distribution.

Also, readout mitigation only fixes measurement misclassification. It does not fix:

  • gate errors
  • decoherence
  • leakage

Formal Description

We describe the method as “calibrate, then invert.”

Readout confusion matrices (conceptual)

For one qubit, the confusion matrix summarizes:

  • P(report 0 | true 0)
  • P(report 1 | true 0)
  • P(report 0 | true 1)
  • P(report 1 | true 1)

For multiple qubits, the concept generalizes to bitstrings.
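The single-qubit matrix, and its generalization to bitstrings under an independence assumption, can be sketched in NumPy. The error rates here are illustrative values, not measured ones:

```python
import numpy as np

# Hypothetical per-qubit readout error rates (assumed for illustration).
p10 = 0.03  # P(report 1 | true 0)
p01 = 0.07  # P(report 0 | true 1)

# Columns index the true state, rows the reported state,
# so each column sums to 1.
M1 = np.array([
    [1 - p10, p01],      # P(report 0 | true 0), P(report 0 | true 1)
    [p10,     1 - p01],  # P(report 1 | true 0), P(report 1 | true 1)
])

# If readout errors are independent across qubits, the n-qubit confusion
# matrix is the Kronecker product of the single-qubit ones.
M2 = np.kron(M1, M1)  # 4x4 matrix over bitstrings 00, 01, 10, 11

print(M1)
print(M2.shape)
```

Note the Kronecker-product shortcut is exactly what breaks down when readout errors are correlated across qubits, a point revisited under "Practical constraints."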

Mitigation by inversion (conceptual)

  1. Run calibration circuits that prepare known basis states.
  2. Measure them to estimate the confusion probabilities.
  3. Treat your observed counts as “true distribution passed through confusion.”
  4. Solve (invert) to estimate the true distribution.

This is a linear-algebra-flavored idea, but you don’t need the algebra to understand the logic:

  • if you know how outcomes get scrambled, you can unscramble them approximately
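The four steps above can be sketched end to end. The "calibration" here is simulated with assumed error rates (3% and 7%) instead of real hardware runs, and the observed counts are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
shots = 10_000

# Steps 1-2: prepare |0> and |1>, measure, and estimate the confusion
# probabilities from the counts. We stand in for the device by sampling
# with assumed error rates P(1|0) = 0.03 and P(1|1) = 0.93.
reported_1_from_0 = rng.random(shots) < 0.03
reported_1_from_1 = rng.random(shots) < 0.93

p10_est = reported_1_from_0.mean()  # estimate of P(1|0)
p11_est = reported_1_from_1.mean()  # estimate of P(1|1)

M_est = np.array([
    [1 - p10_est, 1 - p11_est],
    [p10_est,     p11_est],
])

# Steps 3-4: treat observed frequencies as M @ p_true and solve for p_true.
observed = np.array([600, 400]) / 1000
p_true = np.linalg.solve(M_est, observed)
print(p_true)
```

Because each column of the estimated matrix sums to 1, the solved distribution still sums to 1, though nothing yet prevents individual entries from going slightly negative.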

Practical constraints

Mitigation can become difficult when:

  • the matrix is large (many qubits)
  • readout errors are correlated across qubits
  • the confusion matrix changes over time
  • you have limited shots (statistical noise)

Also, inversion can amplify shot noise, especially when error rates are high.
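A small sketch of the amplification problem, assuming a 20% symmetric readout error: the inverse matrix has entries larger than 1, so an ordinary statistical fluctuation in the counts can push the "corrected" probabilities outside [0, 1]:

```python
import numpy as np

# Assumed high, symmetric readout error for illustration.
p_err = 0.20
M = np.array([[1 - p_err, p_err],
              [p_err, 1 - p_err]])

# Suppose the true state is |0>, so ideal observed frequencies are
# [0.8, 0.2], but a finite-shot run happened to give a more extreme sample:
observed = np.array([0.86, 0.14])

raw = np.linalg.solve(M, observed)
print(raw)  # first entry exceeds 1, second is negative

# A common pragmatic fix: clip to [0, 1] and renormalize.
fixed = np.clip(raw, 0, 1)
fixed /= fixed.sum()
print(fixed)
```

Clip-and-renormalize is only one option; constrained least-squares fits over valid probability distributions are another common choice.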

Worked Example

Suppose your 1-qubit calibration gives:

  • P(1 | 0) = 0.03
  • P(0 | 1) = 0.07

You run a circuit and observe:

  • 600 zeros, 400 ones

Naively, you’d report P(1) = 0.40. But because some true 1s are misread as 0 (7%), the true P(1) is likely a bit higher than 0.40.

Mitigation uses the calibration numbers to estimate the underlying probabilities. Even without crunching the exact inversion, the direction is clear:

  • correcting for 1→0 misreads tends to increase the estimated probability of 1
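Carrying out the inversion for this example (a NumPy sketch) confirms the direction and gives the size of the shift:

```python
import numpy as np

# Confusion matrix from the calibration numbers above:
# P(1|0) = 0.03 and P(0|1) = 0.07.
M = np.array([[0.97, 0.07],
              [0.03, 0.93]])

observed = np.array([0.60, 0.40])      # 600 zeros, 400 ones
p_true = np.linalg.solve(M, observed)  # invert the confusion

print(p_true)
```

The mitigated estimate comes out to roughly P(1) ≈ 0.411, a bit above the naive 0.40, matching the qualitative argument above.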

Turtle Tip

Readout mitigation is often the easiest improvement you can make: calibrate confusion, then correct counts. But it only fixes measurement misclassification—not gate noise.

Common Pitfalls

  • Applying an old calibration matrix after the device has drifted.
  • Treating corrected probabilities as “ground truth.” Inversion can amplify statistical noise.
  • Ignoring correlated readout errors. Independent-per-qubit correction can break down if errors are correlated.

Quick Check

  1. What physical error source does measurement mitigation target?
  2. What does the confusion matrix represent conceptually?
  3. Why can inversion-based mitigation amplify shot noise?

What’s Next

Next we look at post-selection: discarding outcomes that violate known constraints or symmetries. Post-selection can improve accuracy, but it also costs samples and can introduce bias if used carelessly.