Probability Distribution: A mathematical function that describes the likelihood of obtaining the possible outcomes of a random variable.
Discrete Probability Distribution: Probability distribution for discrete random variables, where the random variable can take only a countable set of distinct values. Examples include the binomial, Poisson, and geometric distributions.
Continuous Probability Distribution: Probability distribution for continuous random variables, where the random variable can take any value within a given range. Examples include the normal (Gaussian), uniform, and exponential distributions.
Binomial Distribution: Describes the number of successes in a fixed number of independent Bernoulli trials, with a constant probability of success for each trial.
Poisson Distribution: Models the number of events occurring in a fixed interval of time or space, assuming events occur independently at a constant average rate.
Normal Distribution: Also known as the Gaussian distribution, it is a continuous probability distribution that is symmetric and bell-shaped. Many natural phenomena approximately follow this distribution due to the Central Limit Theorem.
These are just a few of the key probability distributions commonly used in statistics and probability theory, each with its own characteristics and applications in fields such as management science, finance, engineering, biology, and more.
The binomial distribution is a fundamental probability distribution in statistics. It describes the number of successes in a fixed number of independent trials, where each trial has only two possible outcomes: success or failure.
Parameters
The binomial distribution is characterized by two parameters: \(n\), the number of trials, and \(p\), the probability of success in each trial.
Probability Mass Function (PMF)
The probability mass function of the binomial distribution gives the probability of observing exactly \(r\) successes in \(n\) trials, and is given by the formula:
\[ P(X = r) = \binom{n}{r} \, p^r \, (1 - p)^{n-r} \]
where \(X\) is the random variable representing the number of successes, \(r\) is the number of successes, and \(\binom{n}{r}\) is the binomial coefficient.
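As a quick check of this formula (a minimal sketch using only base R; the values n = 5, p = 0.5, and r = 2 match the worked example further below), the binomial coefficient can be evaluated with choose() and the result compared with dbinom():
# Evaluate P(X = 2) for n = 5 trials and p = 0.5 directly from the PMF formula
n <- 5; p <- 0.5; r <- 2
choose(n, r) * p^r * (1 - p)^(n - r)   # binomial coefficient times p^r (1 - p)^(n - r); gives 0.3125
dbinom(r, size = n, prob = p)          # built-in function gives the same value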
Mean and Variance
The mean (expected value) of a binomial distribution is \(\mu = n \times p\), and the variance is \(\sigma^2 = n \times p \times (1 - p)\).
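These formulas can be verified empirically with a small simulation; the sketch below assumes illustrative values n = 10 and p = 0.3 (not taken from the text) and uses rbinom():
# Simulate many binomial observations and compare sample moments with n*p and n*p*(1 - p)
set.seed(123)   # for reproducibility
x <- rbinom(100000, size = 10, prob = 0.3)
mean(x)   # should be close to 10 * 0.3 = 3
var(x)    # should be close to 10 * 0.3 * 0.7 = 2.1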
Applications
The binomial distribution has various applications across different fields:
The binomial distribution can model the probability of achieving certain performance criteria over a series of independent tasks or assignments. Example: Assessing a sales team member’s success rate in meeting monthly sales targets, with each month considered a trial.
Utilizing the binomial distribution to estimate the likelihood of project success by modeling success or failure in meeting project milestones or phases. Example: Estimating the probability of completing critical project phases on time and within budget for a construction project.
Applying the binomial distribution to predict the number of defective items in a production batch, helping to ensure product quality. Example: Modeling the likelihood of defects in a manufacturing process based on historical defect rates and production volume.
Predicting the number of successful investment decisions or portfolio outcomes using the binomial distribution. Example: Estimating the probability of achieving desired returns or meeting investment goals based on historical market data and risk analysis.
Human Resource Management and Recruitment: Employing the binomial distribution to analyze survey data or polling results related to job satisfaction, employee engagement, or candidate preferences. Example: Assessing the probability of recruiting a certain number of qualified candidates from a pool of applicants based on selection criteria and recruitment strategies.
These applications demonstrate how the binomial distribution serves as a versatile tool in administration and management science, enabling decision-makers to assess risk, optimize resource allocation, and make informed decisions to enhance organizational performance.
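As an illustration of the sales-target application above, the minimal sketch below assumes a hypothetical success probability of 0.7 per month and 12 independent monthly trials (both figures are assumptions for illustration only):
# Hypothetical example: 12 monthly trials, assumed probability 0.7 of meeting the target each month
p_meet <- 0.7
dbinom(9, size = 12, prob = p_meet)                       # P(meeting the target in exactly 9 months)
pbinom(8, size = 12, prob = p_meet, lower.tail = FALSE)   # P(meeting the target in at least 9 months)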
The binomial distribution provides a powerful tool for analyzing situations involving a fixed number of trials with two possible outcomes.
# to find out the probability using R: P(X = 2) when n = 5 and p = 0.5
dbinom(x=2,size=5,p=0.5)
## [1] 0.3125
# to find out the cumulative probability using R: P(X ≤ 2) when n = 5 and p = 0.5
pbinom(q=2,size=5,p=0.5)
## [1] 0.5
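For upper-tail probabilities such as P(X > 2), you can either subtract the cumulative probability from 1 or pass lower.tail = FALSE; a short sketch:
# P(X > 2) when n = 5 and p = 0.5, computed two equivalent ways
1 - pbinom(q = 2, size = 5, prob = 0.5)                   # 1 - 0.5 = 0.5
pbinom(q = 2, size = 5, prob = 0.5, lower.tail = FALSE)   # 0.5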
bd1 <- function(n, p) {
  probabilities <- numeric(n + 1)   # vector of length n + 1 to store P(X = 0), ..., P(X = n)
  for (k in 0:n) {
    probabilities[k + 1] <- dbinom(k, size = n, prob = p)   # probability of exactly k successes
  }
  return(probabilities)
}
# Calculate probability distribution
bd1(6,0.5)
## [1] 0.015625 0.093750 0.234375 0.312500 0.234375 0.093750 0.015625
cbd1 <- function(n, p) {
  probabilities <- numeric(n + 1)   # vector of length n + 1 to store P(X ≤ 0), ..., P(X ≤ n)
  for (k in 0:n) {
    probabilities[k + 1] <- pbinom(k, size = n, prob = p)   # cumulative probability up to k successes
  }
  return(probabilities)
}
# Calculate cumulative probability distribution
cbd1(6,0.5)
## [1] 0.015625 0.109375 0.343750 0.656250 0.890625 0.984375 1.000000
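To see the distribution returned by bd1() at a glance, a simple base-R barplot sketch such as the following can be used (axis labels are illustrative):
# Plot the binomial probability distribution for n = 6, p = 0.5
probs <- bd1(6, 0.5)
barplot(probs, names.arg = 0:6,
        xlab = "Number of successes (r)", ylab = "P(X = r)",
        main = "Binomial distribution, n = 6, p = 0.5")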
The Poisson distribution is a discrete probability distribution that describes the number of events occurring in a fixed interval of time or space, assuming a constant average rate of occurrence and independence between events.
Characteristics and Properties
Parameter: The Poisson distribution is characterized by a single parameter, λ (lambda), representing the average rate of event occurrences per unit interval.
Probability Mass Function (PMF): The PMF of the Poisson distribution is given by: \[ P(X = k) = \frac{e^{-\lambda} \lambda^k}{k!} \] where \(X\) is the number of events, \(k\) is the number of events observed, and \(e\) is Euler’s number.
Mean and Variance: Both the mean and variance of a Poisson distribution are equal to λ.
Shape: The Poisson distribution is right-skewed for small λ; its shape becomes more symmetric (approximately normal) as λ increases.
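The properties above can be checked directly in R; the minimal sketch below uses only base R, with λ = 4 for the PMF and simulation checks (matching the example further down) and illustrative values λ = 1 and λ = 10 for the shape comparison:
# Check the PMF formula against dpois() for lambda = 4 and k = 2
lambda <- 4; k <- 2
exp(-lambda) * lambda^k / factorial(k)   # about 0.1465, from the formula
dpois(k, lambda)                         # same value from the built-in function
# Simulated mean and variance should both be close to lambda
set.seed(123)
y <- rpois(100000, lambda = 4)
mean(y)   # close to 4
var(y)    # close to 4
# Shape: right-skewed for small lambda, nearly symmetric for larger lambda
par(mfrow = c(1, 2))
barplot(dpois(0:10, lambda = 1), names.arg = 0:10, main = "lambda = 1")
barplot(dpois(0:25, lambda = 10), names.arg = 0:25, main = "lambda = 10")
par(mfrow = c(1, 1))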
Applications
The Poisson distribution provides a versatile tool for modeling event occurrences in various management and administrative contexts, helping organizations make informed decisions and optimize resource allocation.
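For instance, if customer arrivals at a service desk are assumed to average 3 per hour (a hypothetical figure chosen for illustration), a short sketch gives the chance of exactly 5 and of more than 5 arrivals in an hour:
# Hypothetical example: arrivals averaging 3 per hour
dpois(5, lambda = 3)                       # P(exactly 5 arrivals)
ppois(5, lambda = 3, lower.tail = FALSE)   # P(more than 5 arrivals)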
# to find out the probability using R: P(X = 2) when λ = 4
dpois(x=2, lambda=4)
## [1] 0.1465251
# to find out the cumulative probability using R: P(X ≤ 2) when λ = 4
ppois(q=2, lambda=4)
## [1] 0.2381033
pd1 <- function(lambda, max_events) {
  probabilities <- numeric(max_events + 1)   # vector to store P(X = 0), ..., P(X = max_events)
  for (k in 0:max_events) {
    probabilities[k + 1] <- dpois(k, lambda)   # probability of exactly k events
  }
  return(probabilities)
}
# Calculate the probability distribution for lambda = 3 over k = 0 to 10
pd1(3,10)
## [1] 0.0497870684 0.1493612051 0.2240418077 0.2240418077 0.1680313557
## [6] 0.1008188134 0.0504094067 0.0216040315 0.0081015118 0.0027005039
## [11] 0.0008101512
cpd2 <- function(lambda, max_events) {
  probabilities <- numeric(max_events + 1)   # vector to store P(X ≤ 0), ..., P(X ≤ max_events)
  for (k in 0:max_events) {
    probabilities[k + 1] <- ppois(k, lambda)   # cumulative probability up to k events
  }
  return(probabilities)
}
# Calculate the cumulative probability distribution for lambda = 3 over k = 0 to 10
cpd2(3,10)
## [1] 0.04978707 0.19914827 0.42319008 0.64723189 0.81526324 0.91608206
## [7] 0.96649146 0.98809550 0.99619701 0.99889751 0.99970766
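As a consistency check, the cumulative probabilities returned by cpd2() are simply the running sums of the individual probabilities from pd1(); the one-line sketch below confirms this:
# The cumulative distribution equals the running sum of the probability distribution
all.equal(cumsum(pd1(3, 10)), cpd2(3, 10))   # should return TRUE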
The normal distribution, also known as the Gaussian distribution, is a continuous probability distribution that is symmetric about its mean. It is one of the most widely used probability distributions in statistics, modeling many natural phenomena.
Characteristics and Properties
Parameters: The normal distribution is characterized by two parameters: the mean \(\mu\), which locates the center of the distribution, and the standard deviation \(\sigma\), which controls its spread.
Probability Density Function (PDF): The probability density function of the normal distribution is given by: \[ f(x|\mu,\sigma) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}} \] where \(x\) represents the random variable, \(\mu\) is the mean, \(\sigma\) is the standard deviation, and \(e\) is Euler’s number.
Symmetry: The normal distribution is symmetric about its mean, with approximately 68% of the data falling within one standard deviation of the mean, about 95% within two standard deviations, and about 99.7% within three standard deviations (the empirical rule).
Standard Normal Distribution: A special case of the normal distribution with mean \(\mu = 0\) and standard deviation \(\sigma = 1\) is known as the standard normal distribution.
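The properties above can be verified numerically; the minimal sketch below uses only base R, with illustrative values μ = 50 and σ = 10 for the standardization check. It confirms that the density at the mean equals \(1/\sqrt{2\pi\sigma^2}\), that the 68%/95%/99.7% figures follow from pnorm(), and that any normal probability reduces to a standard normal one via \(z = (x - \mu)/\sigma\):
# Density at the mean of the standard normal, from the formula and from dnorm()
1 / sqrt(2 * pi)       # about 0.3989
dnorm(0)               # same value
# Probability within 1, 2, and 3 standard deviations of the mean
pnorm(1) - pnorm(-1)   # about 0.6827
pnorm(2) - pnorm(-2)   # about 0.9545
pnorm(3) - pnorm(-3)   # about 0.9973
# Standardization: P(X ≤ 65) for X ~ N(50, 10^2) equals P(Z ≤ 1.5)
pnorm(65, mean = 50, sd = 10)
pnorm((65 - 50) / 10)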
Applications
The normal distribution finds applications in a wide range of fields.
Computing Probabilities and Quantiles
In R, you can compute probabilities and quantiles of the normal distribution using functions like pnorm() for cumulative probabilities, qnorm() for quantiles, and dnorm() for probability density values.
Example:
# Compute the cumulative probability of observing x ≤ 1 in a standard normal distribution
pnorm(1)
## [1] 0.8413447
# Compute the 95th percentile (quantile) of a standard normal distribution
qnorm(0.95)
## [1] 1.644854
# Compute the probability density at x = 0 in a standard normal distribution
dnorm(0)
## [1] 0.3989423
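The same functions accept mean and sd arguments for non-standard normal distributions. The sketch below uses hypothetical exam scores with mean 70 and standard deviation 10 (illustrative values only) to find the share of scores above 85 and the cutoff for the top 5%:
# Hypothetical exam scores following N(70, 10^2)
pnorm(85, mean = 70, sd = 10, lower.tail = FALSE)   # proportion scoring above 85, about 0.0668
qnorm(0.95, mean = 70, sd = 10)                     # cutoff for the top 5%, about 86.45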