\(U^c = \varnothing\)
\(\varnothing^c = U\)
\((A^c)^c = A\)
\(A \cup A^c = U\)
\(A \cap A^c = \varnothing\)
\(A \cup B = B \cup A\)
\(A \cap B = B \cap A\)
\(A \cup (B \cup C) = (A \cup B) \cup C\)
\(A \cap (B \cap C) = (A \cap B) \cap C\)
\(A \cap (B \cup C) = (A \cap B) \cup (A \cap C)\)
\(A \cup (B \cap C) = (A \cup B) \cap (A \cup C)\)
\(A \cup \varnothing = A\)
\(A \cap U = A\)
\(A \cup U = U\)
\(A \cap \varnothing = \varnothing\)
\(A \cup A = A\)
\(A \cap A = A\)
\(A \cup (A \cap B) = A\)
\(A \cap (A \cup B) = A\)
If A and B are disjoint sets, then \(n(A\cup B) = n(A) + n(B)\).
If A and B are finite sets, then \(n(A\cup B) = n(A) + n(B) - n(A \cap B)\).
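These laws and the inclusion-exclusion count are easy to sanity-check with Python's built-in `set` type; the universal set `U` and the sets `A`, `B`, `C` below are arbitrary choices for illustration.

```python
# Illustrative check of the set laws above using Python's built-in sets.
U = set(range(10))          # an arbitrary universal set for this sketch
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
C = {4, 6, 8}

def complement(S):
    return U - S

assert complement(complement(A)) == A            # (A^c)^c = A
assert A | complement(A) == U                    # A ∪ A^c = U
assert A & (B | C) == (A & B) | (A & C)          # distributive law
assert A | (B & C) == (A | B) & (A | C)          # distributive law
assert A | (A & B) == A                          # absorption

# Inclusion-exclusion: n(A ∪ B) = n(A) + n(B) - n(A ∩ B)
assert len(A | B) == len(A) + len(B) - len(A & B)
```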
The output of one function can become the input to another function.
If \(y = f(x)\) and \(z = g(y)\), then \(z = g(f(x))\).
If \(y = f(x)\), then its inverse function is \(f^{-1}(y)\).
If \(g(x) = f^{-1}(x)\) and \(f(x) = g^{-1}(x)\), then \(f(x)\) and \(g(x)\) are inverses of each other.
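As a quick numerical sketch, composition and the inverse relationship can be checked directly; `f` and `g` below are made-up examples with \(g = f^{-1}\).

```python
# Sketch: composing functions and verifying an inverse pair numerically.
def f(x):
    return 2 * x + 3        # f(x) = 2x + 3

def g(y):
    return (y - 3) / 2      # g(y) = (y - 3) / 2, the inverse of f

def compose(outer, inner):
    return lambda x: outer(inner(x))   # z = g(f(x))

h = compose(g, f)
assert h(5) == 5            # g(f(x)) = x, so f and g are inverses
assert f(g(7)) == 7         # f(g(y)) = y as well
```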
Applying the Pythagorean theorem, the distance between \((x_1,y_1)\) and \((x_2,y_2)\) is \(d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}\).
Slope-intercept form: \(y = mx + b\)
General form: \(Ax + By + C = 0\)
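For instance, the distance formula and the slope-intercept form can be evaluated directly; the two points below are arbitrary.

```python
import math

# Distance between two arbitrary points via the Pythagorean theorem.
x1, y1 = 1, 2
x2, y2 = 4, 6
d = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
assert d == 5.0             # a 3-4-5 right triangle

# Slope-intercept form y = mx + b through the same two points.
m = (y2 - y1) / (x2 - x1)   # slope
b = y1 - m * x1             # intercept
assert math.isclose(y2, m * x2 + b)
```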
\[ \begin{aligned} nb + \left(\sum x_i\right)m &= \sum y_i \\ \left(\sum x_i\right)b + \left(\sum x_i^2\right)m &= \sum x_i y_i \end{aligned} \]
\[ m = \dfrac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left(\sum x_i\right)^2}\\ b = \dfrac{\sum y_i - m\sum x_i}{n} = \bar{y} - m\bar{x} \]
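A minimal sketch of these closed-form least-squares formulas in plain Python; the sample data are made up.

```python
# Least-squares line y = mx + b from the closed-form formulas above.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]          # made-up data, roughly y = 2x
n = len(xs)

sum_x  = sum(xs)
sum_y  = sum(ys)
sum_xy = sum(x * y for x, y in zip(xs, ys))
sum_x2 = sum(x * x for x in xs)

m = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
b = (sum_y - m * sum_x) / n             # equivalently b = ȳ - m·x̄
print(f"y = {m:.3f}x + {b:.3f}")        # close to y = 2x
```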
\(I_nA = A\) for every \(n \times p\) matrix A.
\(BI_n = B\) for every \(m \times n\) matrix B.
\(I_nA = AI_n = A\) for every \(n \times n\) matrix A.
\(A^{-1}\) or \(A'\) is the inverse of matrix \(A_{n \times n}\) if \(A^{-1}A = AA^{-1} = I_n\).
An \(n \times m\) matrix \(A\) may have an \(m \times n\) left inverse \(A^{-1}\) with \(A^{-1}A = I_m\).
An \(n \times m\) matrix \(A\) may have an \(m \times n\) right inverse \(A^{-1}\) with \(AA^{-1} = I_n\).
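Assuming NumPy is available, the identity and inverse properties can be verified numerically; the matrix `A` below is an arbitrary invertible \(2 \times 2\) example.

```python
import numpy as np

# Check I·A = A·I = A and A⁻¹A = AA⁻¹ = I for an arbitrary 2×2 matrix.
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
I = np.eye(2)

assert np.allclose(I @ A, A) and np.allclose(A @ I, A)

A_inv = np.linalg.inv(A)
assert np.allclose(A_inv @ A, I)
assert np.allclose(A @ A_inv, I)
```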
\(P(n,r) = _nP_r = P_r^n = P_{n,r} = \dfrac{n!}{(n-r)!}\)
Permutations of \(n\) objects that are not all distinct: \(\dfrac{n!}{n_1!n_2! \ldots n_m!}\)
- where \(n_1 + n_2 + \ldots + n_m = n\)
Circular permutations of \(n\) distinct objects: \(P(n) = (n-1)!\)
\(C(n,r) = _nC_r = C_r^n = C_{n,r} = \dfrac{n!}{r!(n-r)!}\)
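Python's `math` module (3.8+) provides these counts directly via `math.perm`, `math.comb`, and `math.factorial`; the values below are small worked examples.

```python
import math

# P(n, r) = n! / (n - r)!: ordered selections of r from n
assert math.perm(5, 2) == 20

# Permutations with repeated objects: n! / (n1!·n2!·…)
# e.g. arrangements of the letters in "LEVEL": 5! / (2!·2!·1!) = 30
assert math.factorial(5) // (math.factorial(2) * math.factorial(2)) == 30

# Circular permutations: P(n) = (n - 1)!
assert math.factorial(5 - 1) == 24

# C(n, r) = n! / (r!·(n - r)!): unordered selections of r from n
assert math.comb(5, 2) == 10
```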
\(0 \leq P(E) \leq 1\)
\(P(E) = 0\) means the event is impossible.
\(P(E) = 1\) means the event is certain.
If E and F are any two events of an experiment, then \(P(E \cup F) = P(E) + P(F) - P(E \cap F)\).
If E and F are mutually exclusive, then \(P(E \cup F) = P(E) + P(F)\)
If \(S = \{e_1,e_2,e_3,...,e_n\}\), then \(P(e_1) + P(e_2) + P(e_3) + \ldots + P(e_n) = P(S) = 1\)
If \(E = \{s_1,s_2,s_3,...,s_n\}\), where \(\{s_1\}\), \(\{s_2\}\), \(\{s_3\}\), …, \(\{s_n\}\) are simple events, then \(P(E) = P(s_1) + P(s_2) + P(s_3) + \ldots + P(s_n)\)
\(P(E^c) = 1 - P(E)\)
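These rules can be checked by brute force on a small equiprobable sample space; the single-die experiment below is an illustrative choice, not from the source.

```python
from fractions import Fraction

# Uniform sample space: one roll of a fair die.
S = {1, 2, 3, 4, 5, 6}

def P(event):
    return Fraction(len(event & S), len(S))

E = {2, 4, 6}               # even
F = {4, 5, 6}               # greater than 3

# Addition rule: P(E ∪ F) = P(E) + P(F) - P(E ∩ F)
assert P(E | F) == P(E) + P(F) - P(E & F)

# Complement rule: P(E^c) = 1 - P(E)
assert P(S - E) == 1 - P(E)
```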
\(P(B|A) = \dfrac{P(B \cap A)}{P(A)}\)
\(P(B \cap A) = P(A) \cdot P(B|A)\)
If A and B are independent events, then \(P(B|A) = P(B)\) and \(P(A|B) = P(A)\). In other words, \(P(A \cap B) = P(A) \cdot P(B)\).
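Continuing the brute-force approach, conditional probability and independence can be checked on a two-dice sample space (again an illustrative setup).

```python
from fractions import Fraction
from itertools import product

# Sample space: ordered rolls of two fair dice.
S = set(product(range(1, 7), repeat=2))

def P(event):
    return Fraction(len(event & S), len(S))

A = {(d1, d2) for (d1, d2) in S if d1 == 6}        # first die shows 6
B = {(d1, d2) for (d1, d2) in S if d2 == 6}        # second die shows 6

# Product rule: P(B ∩ A) = P(A) · P(B | A)
P_B_given_A = P(B & A) / P(A)
assert P(B & A) == P(A) * P_B_given_A

# Independence: P(A ∩ B) = P(A) · P(B) and P(B | A) = P(B)
assert P(A & B) == P(A) * P(B)
assert P_B_given_A == P(B)
```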
\[ Var(x) = \sigma^2 = \dfrac{\sum(x - \mu)^2}{N} \]
\[ \sigma = \sqrt{Var(x)} = \sqrt{\dfrac{\sum(x - \mu)^2}{N}} \]
\[ s = \sqrt{\dfrac{\sum(x - \bar{x})^2}{n-1}} \]
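Python's `statistics` module implements both the population form (divide by \(N\)) and the sample form (divide by \(n-1\)) above; the data set below is arbitrary.

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]      # arbitrary sample, mean = 5

# Population variance and standard deviation (divide by N).
assert statistics.pvariance(data) == 4
assert statistics.pstdev(data) == 2

# Sample standard deviation (divide by n - 1).
s = statistics.stdev(data)
print(round(s, 4))                   # ≈ 2.1381
```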