DeepPractise

Post-Selection Techniques

Track: Noise & Errors · Difficulty: Intermediate · Est: 12 min

Overview

Post-selection mitigation answers the question:

  • “Can we reject outcomes that we know must be wrong?”

Many experiments have built-in constraints:

  • a conserved quantity
  • a known parity
  • a promise about valid outputs

If noise causes an outcome that violates these constraints, you can discard that shot. This can improve the quality of the remaining data.

But post-selection has a cost:

  • you throw away data
  • you may need many more shots
  • if used incorrectly, it can bias results

Intuition

Rejecting invalid outcomes

In classical data cleaning, you might remove measurements that are physically impossible. Post-selection is the quantum analogue:

  • if you can identify outcomes that should never happen ideally, those outcomes are evidence of error

Using known symmetries

If your ideal computation must satisfy a symmetry, then noise can be detected by “symmetry breaking.” Examples at a conceptual level:

  • parity constraints
  • conserved particle number (in certain models)

We avoid domain-specific details here. The point is: symmetry gives you a filter.

Tradeoff: accuracy vs sample efficiency

If you discard 30% of your shots, your effective sample size drops to 70% of what you ran. That increases statistical uncertainty unless you run more shots.

So post-selection is a trade:

  • fewer wrong shots
  • but also fewer total shots
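As a quick numeric sketch of this trade (pure arithmetic, no quantum libraries assumed): the standard error of an estimate scales as 1/√N, so discarding 30% of shots inflates the uncertainty by √(1/0.7) ≈ 1.2.

```python
import math

# Standard error of a mean scales as 1 / sqrt(N), so discarding
# shots inflates the uncertainty of whatever you estimate.
shots = 1000
keep_fraction = 0.7            # post-selection discards 30% of shots
effective_shots = shots * keep_fraction

inflation = math.sqrt(shots / effective_shots)
print(f"effective shots: {effective_shots:.0f}")            # → 700
print(f"std. error inflation: {inflation:.3f}x")            # → 1.195x

# Total shots needed so that ~1000 survive the filter
print(f"shots for original confidence: {shots / keep_fraction:.0f}")  # → 1429
```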

Formal Description

We describe post-selection as a three-step recipe.

Post-selection recipe

  1. Identify a validity rule that should hold for ideal outcomes.
  2. For each measured shot, test the rule on the observed bitstring.
  3. Discard any shot that fails the test.

The rule must be justified. If the rule is wrong or only approximately true, post-selection can create bias.
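The recipe above can be sketched in a few lines of Python. This is a minimal illustration assuming the validity rule is even parity over each measured bitstring; both function names are illustrative, not from any library.

```python
def has_even_parity(bitstring: str) -> bool:
    """Step 1's validity rule: the number of 1s must be even."""
    return bitstring.count("1") % 2 == 0

def post_select(shots: list[str]) -> list[str]:
    """Steps 2-3: test each shot, discard any that fail."""
    return [s for s in shots if has_even_parity(s)]

shots = ["00", "11", "01", "10", "00"]
print(post_select(shots))   # → ['00', '11', '00']
```

The odd-parity shots "01" and "10" violate the rule, so they are treated as evidence of error and dropped.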

Bias risks

Post-selection is not neutral. It changes the distribution by conditioning on passing the test.

This can be good (removing error-dominated outcomes), but it can also:

  • distort expectation values
  • hide certain failure modes

So the “validity rule” should be something you are confident is true in the ideal setting.
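A toy illustration of both effects, using hypothetical shot counts: "11" has even parity, so a parity-preserving error can hide behind the filter, and conditioning on the filter shifts the estimated mean of the first bit.

```python
from collections import Counter

# Hypothetical shot counts. "11" passes the even-parity filter,
# so an error that preserves parity is invisible to post-selection.
counts = Counter({"00": 500, "01": 200, "10": 200, "11": 100})

def mean_first_bit(c: Counter) -> float:
    """Estimate <bit 0> as the fraction of shots whose first bit is 1."""
    total = sum(c.values())
    return sum(n for s, n in c.items() if s[0] == "1") / total

before = mean_first_bit(counts)
kept = Counter({s: n for s, n in counts.items() if s.count("1") % 2 == 0})
after = mean_first_bit(kept)
print(f"<bit0> before: {before:.3f}, after: {after:.3f}")  # → 0.300, 0.167
```

The filtered estimate differs from the raw one, and the erroneous "11" shots are silently retained: conditioning is not neutral.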

Worked Example

Suppose your experiment has a rule:

  • valid outputs must have even parity

You run 1000 shots and observe:

  • 750 even-parity outcomes
  • 250 odd-parity outcomes

Post-selection keeps the 750 even shots and discards the 250 odd shots.

Effect:

  • the retained dataset is more consistent with the ideal constraint
  • but your effective sample size is now 750, not 1000

If you need the same statistical confidence as before, you must increase the total number of shots to compensate.
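The arithmetic above can be checked directly. This sketch uses only the numbers from the example: with a 75% retention rate, roughly 1334 total shots are needed for about 1000 to survive the filter.

```python
import math

total_shots = 1000
kept = 750                          # even-parity shots retained
kept_fraction = kept / total_shots  # 0.75

# Effective sample size after post-selection
print(f"effective sample size: {kept}")

# Total shots needed so that ~1000 survive the filter,
# restoring the original statistical confidence
required = math.ceil(total_shots / kept_fraction)
print(f"total shots required: {required}")   # → 1334
```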

Turtle Tip

Post-selection is powerful when you have a strong, trustworthy validity rule. If the rule is weak or approximate, you risk “correcting” your data into a biased answer.

Common Pitfalls

  • Using a validity rule that is not truly guaranteed by the ideal circuit.
  • Forgetting the sampling cost: discarding shots increases variance unless you run more shots.
  • Post-selecting on the final result in a way that artificially inflates success (confirmation bias).

Quick Check

  1. What is post-selection in one sentence?
  2. What is the main tradeoff when using post-selection?
  3. Why can post-selection introduce bias?

What’s Next

Post-selection and other mitigation techniques can help substantially in small or moderate settings. Next we discuss the honest bottom line: the limits of mitigation, why it doesn’t scale indefinitely, and when you eventually need stronger approaches.