The concept of **conditional probability** is not taught properly in schools. The Bayes formula is mentioned, but the table technique is rarely discussed. It is important to point out that the conditional probability table is an implementation of the Bayes formula.

**There is no clear definition of “probability” in the real world.** What is traditionally meant by the term “probability” is the “**long term relative frequency**.” We repeat the test/experiment numerous times and tabulate the outcomes. From the collected data we come up with a number that represents the relative frequency of the desired outcome. The frequency interpretation of probability is backward-looking: it is not predictive. The relative frequency of a desired outcome computed from past outcomes may have no relevance for the future.

There are other interpretations of “probability,” the most important of which is known as the Bayesian interpretation which is based on the “likelihood” concept as opposed to the “relative frequency.” Some claim that the Bayesian probability is more predictive.

Another interpretation of “probability” is found in Quantum Mechanics (QM). Quantum Mechanics is designed to maximize the predictive power of “probability.” There are lessons to be learned from QM for anyone who cares about prediction.

Bayesian and QM interpretations of “probability” are beyond the scope of this article. Here, I will focus on the frequentist method of computing the “conditional probability”.

**Table Approach**

**Example 1**

**Question**: Suppose 1 in 100 people has a certain disease. A test for the disease is 90% accurate, meaning it gives the correct result for 90% of the people tested, whether they are sick or healthy. What is the probability that someone who has tested positive actually has the disease?

**Answer**: The first thing you should do is to set up the table.

| 1000 people tested | total | test positive | test negative |
| --- | --- | --- | --- |
| has the disease | 10 | 9 | 1 |
| does not have the disease | 990 | 99 | 891 |
| Total | 1000 | 108 | 892 |

So, how did we set up the table?

Let’s assume 1000 people were tested. Since 1% of the people have this disease, 10 people will actually have it, but only 9 of them will test positive because the test is 90% accurate.

990 people do not have the disease, but because of the 10% error margin, 99 of these 990 healthy people will test positive.

In a given row the numbers in the “test positive” and “test negative” columns have to add up to the number in the “total” column. So, we determine the numbers in the “test negative” column by subtracting the number in the “test positive” column from the number in the “total” column.

The second row and the “Total” row (bottom row) follow logically.
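The table-building steps above can be sketched in code. This is a minimal illustration, assuming the 90% accuracy applies equally to sick and healthy people; all variable names are my own.

```python
# Build the 1000-person table from the stated prevalence and accuracy.
n = 1000
prevalence = 0.01   # 1 in 100 people has the disease
accuracy = 0.90     # the test is right 90% of the time

diseased = round(n * prevalence)            # 10 people actually have the disease
healthy = n - diseased                      # 990 do not

true_positives = round(diseased * accuracy) # 9 sick people test positive
false_negatives = diseased - true_positives # 1 sick person tests negative

false_positives = round(healthy * (1 - accuracy))  # 99 healthy people test positive
true_negatives = healthy - false_positives         # 891 healthy people test negative

total_positive = true_positives + false_positives  # 108
total_negative = false_negatives + true_negatives  # 892
```

Note that each “test negative” entry is obtained by subtraction from the row total, exactly as described above.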

The conditional probability that someone who has tested positive actually has the disease

= 9/108 = 1/12 ≈ 0.0833

The conditional probability that someone who has tested negative actually has the disease

= 1/892 ≈ 0.0011

The conditional probability that someone who has tested positive does NOT have the disease

= 99/108 = 11/12 ≈ 0.9167

You may find these results very counter-intuitive, especially the last one. But remember, we are talking about the conditional probability here. If we did not know that 1% of the people have this disease, and someone only told us that the test is 90% accurate, then the probability that someone who has tested positive does NOT have the disease would have been reported as 0.10. **Once we know the fact that only 1% of the people actually have this disease, the (conditional) probability changes.**
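The 1/12 figure can also be checked empirically, in the spirit of the frequentist interpretation. Here is a hedged Monte Carlo sketch (the seed and trial count are arbitrary choices of mine):

```python
import random

# Simulate many tested people and compute the long-term relative frequency
# of "actually diseased" among those who test positive.
random.seed(0)
trials = 1_000_000
positive = 0
positive_and_diseased = 0

for _ in range(trials):
    diseased = random.random() < 0.01      # 1% prevalence
    correct = random.random() < 0.90       # test is right 90% of the time
    tests_positive = correct if diseased else not correct
    if tests_positive:
        positive += 1
        if diseased:
            positive_and_diseased += 1

p = positive_and_diseased / positive  # should hover around 1/12 ≈ 0.0833
```

With a million trials the estimate settles close to 1/12, matching the table.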

**Bayes Formula**

**Example 2**

**Question**: A coin is flipped twice. Knowing that the first flip lands on heads, what is the probability that the second flip lands on heads as well?

**Answer**: In problems like these, where the universe of possibilities is countable, you may want to apply the Bayes formula for conditional probability directly.

(H: head, T: tail)

universe of possibilities: {HH, HT, TH, TT}

first flip lands on heads: {HH, HT}

second flip lands on heads: {HH, TH}

first and second flips both land on heads: {HH}

**probability of an event** (relative frequency interpretation) =

(number of desired possibilities)/(total number of possibilities)

Probability that the first flip lands on heads = #{HH,HT}/#{HH,HT,TH,TT} = 2/4 = 1/2

Probability that the second flip lands on heads = #{HH,TH}/#{HH,HT,TH,TT} = 2/4 = 1/2

Probability that both flips land on heads = #{HH}/#{HH,HT,TH,TT} = 1/4

The Bayes formula for conditional probability is

Probability of B given that A has occurred = Probability of (A and B) / Probability of A

Using the A and B terminology:

Probability of A = Probability that the first flip lands on heads = 1/2

Probability of B = Probability that the second flip lands on heads = 1/2

Probability of (A and B) = Probability that both flips land on heads = 1/4

Probability of B given A = (1/4) / (1/2) = 1/2

Are you surprised by this result? Did you expect to find 1/4? Read on.
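The enumeration above is mechanical enough to do in code. A small sketch, using string outcomes of my own choosing, that applies the Bayes formula exactly as stated:

```python
from itertools import product

# Enumerate the four equally likely outcomes of two coin flips,
# then compute P(B | A) = P(A and B) / P(A) by counting.
universe = [''.join(p) for p in product('HT', repeat=2)]  # HH, HT, TH, TT

A = [w for w in universe if w[0] == 'H']        # first flip lands on heads
B = [w for w in universe if w[1] == 'H']        # second flip lands on heads
A_and_B = [w for w in A if w in B]              # both flips land on heads: HH

p_A = len(A) / len(universe)                    # 1/2
p_A_and_B = len(A_and_B) / len(universe)        # 1/4
p_B_given_A = p_A_and_B / p_A                   # (1/4)/(1/2) = 1/2
```

The counting confirms that knowing the first flip tells us nothing about the second: the flips are independent.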

**Example 3**

**If the question in Example 2 were stated slightly differently:**

**Question**: Toss the coin twice. Knowing that at least one head occurs what is the probability that the second toss results in a tail?

**Answer**:

universe of possibilities: {HH, HT, TH, TT}

A: at least one head occurs: {HH, HT, TH}

B: second toss results in a tail: {HT, TT}

A and B: {HT}

Probability of A = #{HH,HT,TH}/#{HH,HT,TH,TT} = 3/4

Probability of B = #{HT,TT}/#{HH,HT,TH,TT} = 1/2

Probability of (A and B) = #{HT}/#{HH,HT,TH,TT} = 1/4

Probability of B given A = (1/4)/(3/4) = 1/3
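The same enumeration sketch adapts to this wording; only the event definitions change. Note that conditioning on “at least one head” (rather than “the first flip is a head”) is what moves the answer from 1/2 to 1/3:

```python
from itertools import product

# Example 3: condition on "at least one head occurs".
universe = [''.join(p) for p in product('HT', repeat=2)]  # HH, HT, TH, TT

A = [w for w in universe if 'H' in w]           # at least one head: HH, HT, TH
B = [w for w in universe if w[1] == 'T']        # second toss is a tail: HT, TT
A_and_B = [w for w in A if w in B]              # just HT

p_B_given_A = len(A_and_B) / len(A)             # (1/4)/(3/4) = 1/3
```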

**Example 4**

**Question**: A family is expecting their second child. Their first child is a boy. What is the probability that the second child will be a boy as well?

**Answer**: The wording is different, but the problem is exactly the same as in Example 2: we condition on the first outcome being known. By the same calculation, the answer is 1/2.