DeepPractise

Observables & Expectation Values (Intro)

Track: Foundations · Difficulty: Beginner · Est: 13 min

Overview

Up to now we have discussed measurement as producing labeled outcomes like 0/1 or +/−. In physics and computation, measurements often correspond to numbers (energy, a spin component, a yes/no test coded as \pm 1, etc.).

This page introduces two ideas:

  • an observable: the thing being measured (a rule assigning possible outcomes),
  • an expectation value: the average result you would get if you repeated the measurement many times.

We keep the operator viewpoint conceptual—no eigen-decomposition math.

Intuition

If you flip a fair coin and record heads as +1 and tails as −1, a single flip is random but the average over many flips tends toward 0.

Quantum measurements behave similarly:

  • Each run produces one outcome.
  • Repeating produces a distribution of outcomes.
  • The expectation value is the weighted average of those outcomes.

Expectation values are useful because they summarize “what you typically see” while still respecting probabilistic outcomes.

Formal Description

Observable as “outcomes + probabilities”

Suppose a measurement has outcomes labeled by numbers a_1, a_2, \dots.

If the probability of outcome a_k is P(a_k), then the expectation value is

\mathbb{E}[A] = \sum_k a_k\,P(a_k).

Here:

  • A names the observable (the measurement).
  • a_k are the numerical outcomes.
  • P(a_k) are the probabilities.

This formula is just the usual “average = sum of value times probability.”
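A minimal numeric sketch of this formula (the helper name `expectation` is an illustrative choice, not standard API):

```python
# Expectation value as "sum of value times probability".
def expectation(outcomes, probs):
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(a * p for a, p in zip(outcomes, probs))

# Fair ±1 coin from the intuition section: the average tends toward 0.
print(expectation([+1, -1], [0.5, 0.5]))  # 0.0
```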

Connection to earlier Born-rule language

For a projective measurement in a basis \{|b_k\rangle\} on a pure state |\psi\rangle, Born’s rule gives

P(b_k)=|\langle b_k|\psi\rangle|^2.

If you associate a numeric outcome a_k with basis outcome |b_k\rangle, the expectation value becomes

\mathbb{E}[A] = \sum_k a_k\,|\langle b_k|\psi\rangle|^2.

This shows expectation values are not a new rule; they are a repackaging of probabilities into a single number.
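The repackaging can be sketched numerically (assuming NumPy; the helper name `born_expectation` is a hypothetical choice):

```python
import numpy as np

def born_expectation(psi, basis, outcomes):
    # E[A] = sum_k a_k |<b_k|psi>|^2: Born-rule probabilities times outcomes.
    probs = [abs(np.vdot(b, psi)) ** 2 for b in basis]
    return float(np.dot(outcomes, probs))

# Equal superposition, computational basis, outcomes recorded as ±1.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(born_expectation(psi, basis, [+1, -1]))  # ≈ 0.0
```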

Operator view (kept minimal)

In the density-matrix language, expectation values can be written compactly as

\mathbb{E}[A] = \mathrm{Tr}(\rho A).

You do not need to treat A abstractly yet. The main point is: expectation values are linear in \rho, which is one reason density matrices are convenient.
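For a pure state, \rho = |\psi\rangle\langle\psi|, and the trace formula reproduces the Born-rule sum. A small NumPy check (a sketch; the diagonal matrix simply encodes the ±1 outcome assignment, with no operator theory needed):

```python
import numpy as np

psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())   # pure-state density matrix |psi><psi|
A = np.diag([+1.0, -1.0])         # observable: +1 on |0>, -1 on |1>

# Tr(rho A) matches sum_k a_k |<b_k|psi>|^2 for this state.
print(np.trace(rho @ A).real)
```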

Worked Example

Consider measuring in the computational basis, but record the outcomes as numbers:

  • record |0\rangle as +1
  • record |1\rangle as -1

Let the state be

|\psi\rangle = \tfrac{3}{5}|0\rangle + \tfrac{4}{5}|1\rangle.

Then

P(0)=\left|\tfrac{3}{5}\right|^2=\tfrac{9}{25},\quad P(1)=\left|\tfrac{4}{5}\right|^2=\tfrac{16}{25}.

So the expectation value is

\mathbb{E}[A] = (+1)\cdot\tfrac{9}{25} + (-1)\cdot\tfrac{16}{25} = -\tfrac{7}{25}.

Interpretation: you will not get “−7/25” in any single run. Instead, if you repeat the measurement many times and average the recorded values (+1 or −1), the average tends toward −7/25.
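This averaging interpretation can be checked by simulated sampling (a sketch assuming NumPy; the seed and shot count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, arbitrary choice
probs = [9/25, 16/25]           # P(0), P(1) from the Born rule
values = [+1, -1]               # recorded value for each outcome

shots = rng.choice(values, size=100_000, p=probs)  # one outcome per run
print(shots.mean())             # tends toward -7/25 = -0.28
```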

Turtle Tip

Expectation value is an average-over-many-runs concept. If you ever find yourself expecting to “measure the expectation value” in one shot, pause: a single shot returns one outcome, not the average.

Common Pitfalls

Don’t confuse “expectation value” with “most likely outcome.” A distribution can have a most likely outcome of +1 while still having a negative expectation value (or vice versa).

Also, don’t treat outcomes as inherently numeric. Assigning numbers is part of defining what you mean by the observable.

Quick Check

  1. If outcomes are a_1=0 and a_2=1 with probabilities 0.9 and 0.1, what is the expectation value?
  2. Why might two different observables have the same outcomes but represent different measurements?

What’s Next

To talk precisely about observables and measurement structure, we need a more formal view of measurement itself. Next we’ll describe projective measurements using projectors, orthogonality, and completeness—without heavy operator algebra.