SPA Edition Two

TL;DR: Second in a series of implementation proposals - Bayesian derivation and mathematical validation of the Surprisingly Popular Algorithm.

This edition continues to build the understanding needed for a coherent adoption of SPA, as outlined in the supplement [1].

Derivation

It is essential to validate a mathematical model against its formal derivation. This section builds on Edition One and shows how the theoretical result is carried through to a calculated result set.

The supplementary worksheet (available on request) provides a tabular view of the process and captures the key elements used in the Theorem 2 analysis. The key Bayesian concepts it relies on are outlined below.

Bayesian Foundation

The premise is that a hypothesis (typically informed by a prior belief) is updated in light of evidence. In this context, consider the following notation:

  • P(H) = the prior probability that the hypothesis H is true
  • P(E) = the marginal probability of observing the evidence (or signal) E
  • P(H \mid E) = the posterior probability of the hypothesis given the evidence
  • P(E \mid H) = the likelihood of the evidence given that the hypothesis is true

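For reference, these four quantities are tied together by Bayes' theorem, the update rule the derivation relies on; the marginal P(E) is obtained by summing over the competing hypotheses:

P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}, \quad \text{where } P(E) = \sum_{H'} P(E \mid H')\,P(H')
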
This Bayesian framework forms the theoretical basis for understanding how the SPA algorithm determines when the “surprisingly popular” answer differs from the majority vote.
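
To make that connection concrete, here is a minimal sketch of the SPA decision rule (not the authors' reference implementation; the function and variable names are my own): collect both the votes and the respondents' predictions of the vote split, then select the answer whose actual support most exceeds its predicted support.

```python
from typing import Dict


def surprisingly_popular(vote_share: Dict[str, float],
                         predicted_share: Dict[str, float]) -> str:
    """Return the answer whose actual vote share exceeds its predicted
    vote share by the largest margin (the 'surprisingly popular' answer).

    vote_share:      answer -> fraction of respondents who chose it
    predicted_share: answer -> average fraction respondents predicted would choose it
    """
    return max(vote_share,
               key=lambda a: vote_share[a] - predicted_share.get(a, 0.0))


# Illustrative numbers only, in the spirit of the 'Is Philadelphia the capital
# of Pennsylvania?' example from Prelec et al.: most respondents answer "yes",
# but they predict "yes" to be even more popular than it actually is,
# so "no" is the surprisingly popular (and correct) answer.
votes = {"yes": 0.65, "no": 0.35}
predictions = {"yes": 0.75, "no": 0.25}
print(surprisingly_popular(votes, predictions))  # -> "no"
```

When the majority answer is also the one respondents over-predict, the rule overrides the majority vote; otherwise the two coincide.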

References

[1] D. Prelec, H. S. Seung, and J. McCoy, "A solution to the single-question crowd wisdom problem": Supplementary information (readcube.com)