What is Compound Probability?
Compound probability is the study of how the probabilities of individual events combine when we consider multiple events together. While simple probability asks "what is the chance of this one thing happening," compound probability asks "what is the chance of this combination of things happening?" It is the mathematical framework for answering questions like: what is the probability that it rains and my flight is delayed, or what is the probability that at least one of three machines fails during a shift?
Every compound probability problem reduces to three fundamental questions about two events A and B: what is the chance both happen, what is the chance at least one happens, and what is the chance neither happens? These three values, P(A and B), P(A or B), and P(neither A nor B), describe every possible outcome of a two-event scenario; in particular, P(A or B) and P(neither A nor B) are complements, so together they always account for the entire probability space.
Understanding compound probability is essential in fields ranging from insurance underwriting and quality control to medical diagnostics and gambling, anywhere that decisions depend on the combined likelihood of multiple uncertain outcomes.
The Compound Probability Formulas
For two independent events A and B with individual probabilities P(A) and P(B), the three compound probabilities are:
Probability of both events occurring (intersection):
$$P(A \cap B) = P(A) \times P(B)$$
Probability of at least one event occurring (union):
$$P(A \cup B) = P(A) + P(B) - P(A) \times P(B)$$
Probability of neither event occurring (complement of the union):
$$P(\text{neither}) = (1 - P(A)) \times (1 - P(B))$$
These formulas assume that events A and B are independent, meaning the occurrence of one event does not change the probability of the other. When events are dependent, the formulas must be modified to include conditional probabilities, which is a more complex calculation.
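The three formulas above can be sketched as a small helper function; the function name and return format are illustrative, not part of any standard library.

```python
def compound_probabilities(p_a: float, p_b: float) -> dict:
    """Compound probabilities for two independent events A and B."""
    both = p_a * p_b                 # multiplication rule: P(A and B)
    either = p_a + p_b - both        # addition rule: P(A or B)
    neither = (1 - p_a) * (1 - p_b)  # complement rule: P(neither)
    return {"both": both, "either": either, "neither": neither}

# heads on a fair coin and a six on a fair die
print(compound_probabilities(0.5, 1 / 6))
```

Note that "either" and "neither" always sum to 1, since an outcome either lands inside the union or outside it.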
Calculation Example
Suppose you flip a fair coin and roll a fair six-sided die. What are the compound probabilities where Event A is getting heads (probability 0.5) and Event B is rolling a six (probability approximately 0.1667)?
P(A and B): both heads and a six
$$P(A \cap B) = 0.5 \times 0.1667 = 0.0833$$
P(A or B): heads or a six or both
$$P(A \cup B) = 0.5 + 0.1667 - 0.0833 = 0.5833$$
P(neither): tails and not a six
$$P(\text{neither}) = (1 - 0.5) \times (1 - 0.1667) = 0.5 \times 0.8333 = 0.4167$$
Summary Table
| Compound Probability | Formula | Value |
|---|---|---|
| P(A and B) | P(A) times P(B) | 0.0833 |
| P(A or B) | P(A) + P(B) - P(A)P(B) | 0.5833 |
| P(neither A nor B) | (1 - P(A))(1 - P(B)) | 0.4167 |
Notice that the three mutually exclusive outcomes, both events, exactly one event, and neither event, partition the probability space and must sum to 1.0. Here: 0.0833 (both) + (0.5833 - 0.0833) (exactly one) + 0.4167 (neither) = 0.0833 + 0.5000 + 0.4167 = 1.0000.
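The partition check for the coin-and-die example can be reproduced numerically; this is a quick sketch of the arithmetic, not calculator code.

```python
p_a, p_b = 0.5, 1 / 6          # heads, and rolling a six

both = p_a * p_b                # 1/12
union = p_a + p_b - both        # 7/12
neither = (1 - p_a) * (1 - p_b) # 5/12
exactly_one = union - both      # 1/2

total = both + exactly_one + neither
print(round(total, 10))  # 1.0
```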
The Multiplication Rule
The multiplication rule states that the probability of two independent events both occurring equals the product of their individual probabilities. This rule is intuitive when you think about it as a narrowing process: starting with all possible outcomes, Event A eliminates some fraction, and then Event B eliminates a further fraction from what remains.
For example, if 30 out of 100 days are rainy (P = 0.30) and 10 out of 100 flights are delayed (P = 0.10), and these events are independent, then 3 out of 100 possible scenarios involve both rain and a delayed flight (0.30 times 0.10 = 0.03).
The multiplication rule extends naturally to more than two events. The probability that three independent events all occur is P(A) times P(B) times P(C). Each additional event multiplies the combined probability by a number between 0 and 1, making the joint probability progressively smaller. This is why compound events with many required conditions are often extremely unlikely, even when each individual condition is fairly probable. Five events each with a 90 percent probability result in a joint probability of only 59 percent.
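The shrinking effect of chained multiplication is easy to confirm; the figures below use the five-event, 90 percent example from the paragraph above.

```python
from math import prod

# five independent events, each with probability 0.9
probs = [0.9] * 5
joint = prod(probs)          # equivalent to 0.9 ** 5
print(round(joint, 4))       # 0.5905
```

Even though each event is very likely on its own, requiring all five to occur drops the joint probability to roughly 59 percent.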
The Addition Rule
The addition rule calculates the probability that at least one of two events occurs. The naive approach of simply adding the two probabilities overcounts the scenarios where both events happen, so the formula subtracts the intersection to correct for this double-counting:
$$P(A \cup B) = P(A) + P(B) - P(A \cap B)$$
For independent events, since the intersection equals P(A) times P(B), this becomes P(A) + P(B) - P(A) times P(B).
The addition rule is particularly important in risk assessment. If a factory has two machines, each with a 5 percent chance of failing on any given day, the probability that at least one fails is not 10 percent but 9.75 percent (0.05 + 0.05 - 0.0025). The difference seems small with two events, but it compounds significantly with more events. With 20 machines each at 5 percent failure probability, the naive sum gives 100 percent, while the correct calculation gives approximately 64 percent.
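Both machine-failure figures from the paragraph above can be checked directly; this is a sketch of the arithmetic, with the complement shortcut used for the twenty-machine case.

```python
p = 0.05  # daily failure probability of each machine

# two machines: addition rule with the double-counting correction
two = p + p - p * p
print(round(two, 4))      # 0.0975

# twenty machines: the naive sum would give 1.0; the complement gives the truth
twenty = 1 - (1 - p) ** 20
print(round(twenty, 4))   # 0.6415
```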
The Complement Rule
The complement rule provides the probability that none of the events occur. For independent events, this equals the product of each event's complement (the probability of that event not occurring):
$$P(\text{neither}) = (1 - P(A)) \times (1 - P(B))$$
The complement rule is often the easiest path to answering "what is the probability that at least one event occurs" for multiple events. Instead of applying the addition rule with increasingly complex inclusion-exclusion corrections, calculate the probability that none of the events occur and subtract from 1:
$$P(\text{at least one}) = 1 - P(\text{none})$$
For n independent events each with the same probability p, this simplifies to:
$$P(\text{at least one}) = 1 - (1 - p)^{n}$$
This formula appears constantly in reliability engineering, where it calculates the probability of at least one component failure in a system of n identical components.
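The complement shortcut generalizes to events with unequal probabilities; the helper function below is an illustrative sketch, with hypothetical component failure probabilities.

```python
from math import prod

def p_at_least_one(probs):
    """1 - P(none): probability that at least one independent event occurs."""
    return 1 - prod(1 - p for p in probs)

# three components with different (assumed) failure probabilities
print(round(p_at_least_one([0.05, 0.02, 0.01]), 5))  # 0.07831
```

With equal probabilities the helper reduces to the closed form 1 - (1 - p)^n shown above.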
Independent vs. Dependent Events
The formulas in this calculator assume independence, but many real-world events are dependent, meaning the occurrence of one event changes the probability of another.
Independent events include flipping two coins, rolling two dice, drawing a card then replacing it before drawing again, and the weather on two consecutive days in different cities. For these events, knowing the outcome of one provides no information about the other.
Dependent events include drawing two cards without replacement, the probability of a second machine failing given the first machine's failure (if they share a power supply), and the probability of a student passing a second exam given they passed the first (since passing the first suggests preparedness).
For dependent events, the multiplication rule becomes P(A and B) = P(A) times P(B given A), where P(B given A) is the conditional probability of B occurring given that A has occurred. This conditional probability can be higher or lower than P(B) alone, depending on whether the events are positively or negatively correlated.
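The card-drawing case makes the conditional multiplication concrete; this sketch computes the chance of drawing two aces without replacement.

```python
# Dependent events: two aces drawn without replacement from a 52-card deck
p_first_ace = 4 / 52
p_second_given_first = 3 / 51   # conditional probability P(B | A)

p_both = p_first_ace * p_second_given_first  # general multiplication rule
print(round(p_both, 5))  # 0.00452, i.e. 1 in 221
```

Note that P(B | A) = 3/51 is lower than the unconditional 4/52, so these events are negatively correlated.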
Applications in Risk Assessment
Insurance companies use compound probability daily to price policies and assess exposure. If the annual probability of a house fire is 0.003 and the annual probability of a flood is 0.01, and these events are independent (which is a simplification), the probability of both occurring in the same year is 0.00003, the probability of at least one occurring is 0.01297, and the probability of neither occurring is 0.98703.
Quality control engineers apply compound probability to manufacturing processes. If a production line has three independent inspection stages, each catching 95 percent of defects, the probability that a defective item passes all three inspections is 0.05 cubed, or 0.000125. This means only 1 in 8,000 defective items escapes all three checks, an impressive figure achieved through the compounding effect of independent probabilities.
Medical diagnostics combine the probabilities of multiple test results to estimate the likelihood of a condition. If two independent screening tests each have a 90 percent sensitivity, administering both tests and requiring a positive result on at least one gives an overall sensitivity of 99 percent (1 minus 0.1 times 0.1), dramatically reducing the chance of a false negative.
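The combined-sensitivity figure follows from the complement rule; this sketch assumes the two tests are independent, as the paragraph does.

```python
sensitivity = 0.90                # each test detects 90% of true cases

miss_both = (1 - sensitivity) ** 2   # both independent tests miss the condition
combined_sensitivity = 1 - miss_both
print(round(combined_sensitivity, 4))  # 0.99
```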
Visualizing with Venn Diagrams
A Venn diagram is the standard tool for visualizing compound probabilities. Two overlapping circles represent events A and B, and the areas within the diagram represent the four mutually exclusive outcomes:
| Region | Probability | Description |
|---|---|---|
| A only | P(A) - P(A and B) | Event A occurs but Event B does not |
| B only | P(B) - P(A and B) | Event B occurs but Event A does not |
| A and B (overlap) | P(A) times P(B) | Both events occur |
| Outside both circles | (1 - P(A)) times (1 - P(B)) | Neither event occurs |
The four regions must sum to 1.0, accounting for every possible outcome. The overlap region is precisely the quantity subtracted in the addition rule to avoid double-counting. For independent events, the size of the overlap is proportional to the product of the individual probabilities, while for dependent events, the overlap can be larger or smaller than this product.
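The four-region decomposition can be verified numerically; the probabilities below reuse the rain and flight-delay figures from earlier as an arbitrary independent pair.

```python
p_a, p_b = 0.30, 0.10   # two independent events

overlap = p_a * p_b                  # A and B
a_only = p_a - overlap               # A but not B
b_only = p_b - overlap               # B but not A
outside = (1 - p_a) * (1 - p_b)      # neither

total = a_only + b_only + overlap + outside
print(round(total, 10))  # 1.0
```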
Extending to Multiple Events
While this calculator handles two events, the principles extend to any number of independent events. For n independent events with probabilities p1 through pn:
All events occur:
$$P(\text{all}) = p_{1} \times p_{2} \times \ldots \times p_{n}$$
At least one event occurs:
$$P(\text{at least one}) = 1 - (1 - p_{1})(1 - p_{2}) \ldots (1 - p_{n})$$
The "at least one" calculation is especially powerful. The famous birthday problem asks: in a group of 23 people, what is the probability that at least two share a birthday? Using the complement approach, the answer is approximately 50.7 percent, a result that surprises most people because it demonstrates how quickly compound probabilities accumulate even when individual probabilities seem small.
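The birthday problem yields to the same complement approach; this sketch multiplies the probabilities that each successive person misses all earlier birthdays (ignoring leap years).

```python
# P(at least two of 23 people share a birthday), via the complement
p_all_distinct = 1.0
for k in range(23):
    p_all_distinct *= (365 - k) / 365   # person k avoids the k birthdays taken so far

p_shared = 1 - p_all_distinct
print(round(p_shared, 3))  # 0.507
```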
Understanding compound probability transforms the way you think about risk, uncertainty, and decision-making. It reveals why redundant systems are so effective, why rare events happen more often than intuition suggests, and why the interaction of multiple uncertain factors creates outcomes that are often far from the simple sum or product of their parts.
Common Misconceptions About Compound Probability
Several persistent errors in probabilistic reasoning trap both students and professionals. Recognizing these misconceptions strengthens your ability to apply compound probability correctly.
The gambler's fallacy is the belief that past outcomes influence future independent events. After a coin lands heads five times in a row, many people feel tails is "due." But the probability of heads on the sixth flip remains exactly 0.5. Each flip is independent, and the coin has no memory. The compound probability of six consecutive heads is 0.5 to the sixth power, which equals roughly 0.0156, but this describes the probability of the entire sequence before it begins, not the probability of the next flip given the previous five.
Neglect of base rates occurs when people assess compound probabilities without anchoring to the underlying frequency of each event. A medical test with 99 percent accuracy sounds nearly perfect, but if the disease it detects occurs in only 1 out of every 10,000 people, the majority of positive results are false positives. The compound probability of testing positive and actually having the disease depends critically on the base rate, and ignoring it leads to dramatically wrong conclusions.
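The base-rate effect can be quantified with Bayes' theorem; this sketch assumes "99 percent accuracy" means 99 percent sensitivity and 99 percent specificity, which is a simplifying reading of the example.

```python
prevalence = 1 / 10_000   # base rate of the disease
sensitivity = 0.99        # P(positive | disease)
specificity = 0.99        # P(negative | no disease)

# total probability of a positive result: true positives plus false positives
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# P(disease | positive): the positive predictive value
ppv = sensitivity * prevalence / p_positive
print(round(ppv, 4))  # well under 0.01
```

Even with an accurate test, fewer than 1 percent of positive results reflect actual disease at this prevalence.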
Confusion between "and" and "or" is surprisingly common. People often overestimate the probability of a specific compound outcome (both events happening) while underestimating the probability that at least one of several risks materializes. A project manager who estimates a 10 percent chance of delay from each of five independent risk factors might assume the overall risk is low. In reality, the probability that at least one delay occurs is 1 minus 0.9 to the fifth power, which equals approximately 41 percent, far higher than any individual risk suggests.
From Compound to Conditional Probability
This calculator handles independent events, but real-world analysis often requires the next step: conditional probability. When events are not independent, the probability of one event changes depending on whether the other has occurred. The general multiplication rule accounts for this:
$$P(A \cap B) = P(A) \times P(B \mid A)$$
Here, P(B | A) is the conditional probability of B given that A has occurred. For independent events, P(B | A) equals P(B) and the formula collapses to the simple multiplication rule used by this calculator.
Conditional probability appears everywhere in practice. The probability of a second component failing in a power system often increases after the first failure because the remaining components absorb additional load. The probability of a patient responding to a second-line therapy depends on whether they responded to the first. In fraud detection, the probability that a transaction is fraudulent given that it was flagged by an algorithm depends on both the algorithm's accuracy and the base rate of fraud.
Bayes' theorem extends conditional probability to allow updating beliefs as new evidence arrives. Starting from a prior probability and incorporating observed data, Bayes' theorem calculates a posterior probability that reflects both the initial estimate and the new information. This framework underpins modern spam filters, diagnostic algorithms, and machine learning classifiers, all of which are built on the same probabilistic foundations that begin with the compound probability rules presented here.
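A single Bayesian update can be sketched in a few lines; the spam-filter numbers below are hypothetical, chosen only to illustrate the prior-to-posterior step.

```python
def bayes_update(prior: float, likelihood_true: float, likelihood_false: float) -> float:
    """Posterior P(H | E) from a prior P(H) and the two likelihoods P(E | H), P(E | not H)."""
    numerator = likelihood_true * prior
    return numerator / (numerator + likelihood_false * (1 - prior))

# assumed figures: 20% of mail is spam; a trigger word appears in 60% of spam
# and 5% of legitimate mail
posterior = bayes_update(prior=0.20, likelihood_true=0.60, likelihood_false=0.05)
print(round(posterior, 3))  # 0.75
```

Observing the word raises the spam probability from 20 percent to 75 percent; feeding the posterior back in as the next prior is how sequential evidence accumulates.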