A slab is defined as: \[ \{x \in \mathbb{R}^n \mid \alpha \leq a^T x \leq \beta \} \]
Convexity:
Yes, this set is convex. A slab is the intersection of two half-spaces,
each defined by a linear inequality:
The first half-space is \(H_1 = \{ x \mid a^T x \leq \beta \}\), which contains all points where the linear function \(a^T x\) is at most \(\beta\).
The second half-space is \(H_2 = \{ x \mid a^T x \geq \alpha \}\), which contains all points where \(a^T x\) is at least \(\alpha\). This can be rewritten as \(-a^T x \leq -\alpha\), which is a linear inequality.
Each half-space is convex because it is defined by a linear inequality.
The intersection of two convex sets (in this case, \(H_1\) and \(H_2\)) remains convex.
Therefore, the slab is convex.
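A direct check makes this concrete: for any \(x_1, x_2\) in the slab and any \(\theta \in [0, 1]\),
\[
a^T\left(\theta x_1 + (1-\theta) x_2\right) = \theta\, a^T x_1 + (1-\theta)\, a^T x_2 \in [\alpha, \beta],
\]
since a convex combination of two numbers lying in \([\alpha, \beta]\) stays in \([\alpha, \beta]\), so the combined point remains in the slab.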
A rectangle is defined as: \[ \{ x \in \mathbb{R}^n \mid \alpha_i \leq x_i \leq \beta_i, \forall i = 1, \dots, n \} \]
Convexity:
Yes, a rectangle is convex. Each constraint \(\alpha_i \leq x_i \leq \beta_i\) represents
the intersection of two half-spaces: \(x_i
\leq \beta_i\) and \(x_i \geq
\alpha_i\). Since half-spaces are convex and the intersection of
convex sets remains convex, the rectangle is convex.
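Equivalently, writing \(e_i\) for the \(i\)-th standard basis vector, the rectangle is an intersection of \(2n\) half-spaces:
\[
\{ x \mid \alpha_i \leq x_i \leq \beta_i,\ i = 1, \dots, n \}
= \bigcap_{i=1}^{n} \left( \{ x \mid e_i^T x \leq \beta_i \} \cap \{ x \mid -e_i^T x \leq -\alpha_i \} \right).
\]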
A wedge is defined as: \[ \{ x \in \mathbb{R}^n \mid a_1^T x \leq b_1, a_2^T x \leq b_2 \} \]
Convexity:
Yes, this set is convex. Each inequality defines a half-space:
The first half-space is \(H_1 = \{ x \mid a_1^T x \leq b_1 \}\).
The second half-space is \(H_2 = \{ x \mid a_2^T x \leq b_2 \}\).
Since half-spaces are convex and the intersection of convex sets remains convex, the wedge is convex.
\[ \{ x \mid \| x - x_0 \|^2 \leq \| x - y \|^2, \forall y \in S \} \]
Convexity:
Yes, this set is convex. Although the condition \(\| x - x_0 \|^2 \leq \| x - y \|^2\) looks quadratic, for each fixed \(y \in S\) the \(\|x\|^2\) terms on the two sides cancel, so the condition is in fact linear in \(x\).
For each fixed \(y \in S\), the condition therefore defines a half-space in \(x\).
The given set is the intersection of these half-spaces over all \(y \in S\), and an arbitrary intersection of convex sets is convex.
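Expanding both sides shows the cancellation explicitly: for a fixed \(y \in S\),
\[
\| x - x_0 \|^2 \leq \| x - y \|^2
\;\Longleftrightarrow\;
\|x\|^2 - 2 x_0^T x + \|x_0\|^2 \leq \|x\|^2 - 2 y^T x + \|y\|^2
\;\Longleftrightarrow\;
2 (y - x_0)^T x \leq \|y\|^2 - \|x_0\|^2,
\]
which is a linear inequality in \(x\), i.e., a half-space.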
Thus, the set is convex.
\[ \{ x \mid dist(x, S) \leq dist(x, T) \} \]
Convexity:
This set is not necessarily convex. The distance function \(dist(x, S)\) is convex only when \(S\) is a convex set (in general, an infimum of convex functions need not be convex), and even when both \(dist(x, S)\) and \(dist(x, T)\) are convex, the difference \(dist(x, S) - dist(x, T)\) need not be convex, so the 0-sublevel set defining this set cannot be assumed convex.
A simple counterexample on the real line: with \(S = \{-1, 1\}\) and \(T = \{0\}\), the set is \(\{ x \mid |x| \geq 1/2 \} = (-\infty, -1/2] \cup [1/2, \infty)\), which is not convex.
Thus, the set is not convex in general.
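A quick numerical check in R, using the counterexample above (this snippet is independent of the wine data used later), confirms that two points in the set can have a midpoint outside it:
dist_to_set <- function(x, set) min(abs(x - set)) # distance from a scalar x to a finite set
S <- c(-1, 1)
T_set <- c(0)
in_set <- function(x) dist_to_set(x, S) <= dist_to_set(x, T_set)
in_set(0.5)  # TRUE
in_set(-0.5) # TRUE
in_set(0)    # FALSE: the midpoint of 0.5 and -0.5 lies outside the set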
\(\alpha \leq E f(x) \leq \beta, \quad E f(x) = \sum_{i=1}^{n} p_i f(a_i)\) Convexity: Yes, this set is convex.
The expectation is an affine function in \(\mathbf{p}\), as it is a linear combination of the probability values \(p_i\) with coefficients \(f(a_i)\).
The condition \(\alpha \leq E f(x) \leq \beta\) is therefore a pair of linear inequalities in \(\mathbf{p}\), i.e., a slab (an intersection of two half-spaces), which is convex.
\(prob(x \leq \alpha) = \sum_{a_i \leq \alpha} p_i \leq \beta\) Convexity: Yes, this set is convex.
The summation of probability weights corresponding to \(x \leq \alpha\) is an affine function in \(\mathbf{p}\), as it is a linear sum of a subset of probabilities.
A sublevel set of an affine function is a half-space, hence convex, so this condition defines a convex set.
\(E|x|^3 = \sum_{i=1}^{n} p_i |a_i|^3 \leq \alpha E|x| = \alpha \sum_{i=1}^{n} p_i |a_i|\) Convexity: Yes, this set is convex.
Both \(E|x|^3\) and \(E|x|\) are linear functions in \(\mathbf{p}\).
The inequality \(E|x|^3 \leq \alpha E|x|\), i.e., \(g(\mathbf{p}) = E|x|^3 - \alpha E|x| \leq 0\), is therefore a linear inequality in \(\mathbf{p}\) (recall that \(\alpha\) is a fixed constant), so it defines a half-space.
Thus, the set is convex.
\(E x^2 = \sum_{i=1}^{n} p_i a_i^2 \leq \alpha\) Convexity: Yes, this set is convex.
The expectation remains an affine function in \(\mathbf{p}\), as it is a weighted sum of \(a_i^2\) with weights \(p_i\).
Since the inequality defines a sublevel set of the affine function \(g(\mathbf{p}) = E x^2\) (a half-space), the set is convex.
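Since CVXR is used for the blending problem later in this document, the same machinery can express all four of these conditions on the probability vector \(\mathbf{p}\). Below is a minimal sketch; the support points a, the values f_a, and the bounds alpha, beta, beta_prob, alpha_mom, and alpha_sq are all made-up illustrative values, not part of the problem statement. The point is simply that every constraint is affine in p.
library(CVXR)
# Hypothetical illustrative data: support points a_i, values f(a_i), and bounds
a         <- c(-2, -1, 0, 1, 2)
f_a       <- a^2          # example choice of f
alpha     <- 0.5          # lower bound on E f(x)
beta      <- 2.0          # upper bound on E f(x)
beta_prob <- 0.6          # upper bound on prob(x <= 0)
alpha_mom <- 1.5          # E|x|^3 <= alpha_mom * E|x|
alpha_sq  <- 2.0          # E x^2 <= alpha_sq
p <- Variable(length(a))  # the probability vector we optimize over
constraints <- list(
  sum(p) == 1, p >= 0,                                  # probability simplex
  t(f_a) %*% p >= alpha, t(f_a) %*% p <= beta,          # alpha <= E f(x) <= beta
  sum(p[which(a <= 0)]) <= beta_prob,                   # prob(x <= 0) <= beta_prob
  t(abs(a)^3) %*% p <= alpha_mom * (t(abs(a)) %*% p),   # E|x|^3 <= alpha_mom * E|x|
  t(a^2) %*% p <= alpha_sq                              # E x^2 <= alpha_sq
)
# Every constraint above is affine in p, so the feasible set is convex and any
# convex objective gives a convex problem; here we just pick the "most uniform"
# feasible distribution.
problem <- Problem(Minimize(sum_squares(p)), constraints)
result  <- solve(problem)
round(result$getValue(p), 4)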
We explore a least-squares optimization problem with inequality constraints to find an optimal blend of wines that matches a target wine’s chemical composition. In this problem, decision variables must be non-negative and satisfy additional constraints.
We have a dataset containing the chemical composition of six different wines, with each wine described by 11 chemical characteristics such as:
- Alcohol
- Residual Sugar
- Chlorides
- etc.
Additionally, we have the chemical composition of a target wine. Our goal is to determine the optimal mixture of the six wines so that the blended wine best matches the target composition.
Let:
- \(\mathbf{p}\) be the vector of weights representing the proportion of each wine in the blend,
- \(C\) be the matrix whose entry \(C_{ij}\) gives the concentration of chemical \(i\) in wine \(j\),
- \(\mathbf{c}_{\text{blend}}\) be the concentration vector of the blended wine,
- \(\mathbf{c}\) be the concentration vector of the target wine.
The concentration in the blended wine is given by: \[ c_{\text{blend}, i} = \sum_{j=1}^{6} C_{ij} p_j \] which in matrix form is written as: \[ \mathbf{c}_{\text{blend}} = C \mathbf{p} \]
where \(\mathbf{p}\) is subject to the constraints: \[ \mathbf{p} \geq 0, \quad \mathbf{1}^T \mathbf{p} = 1 \] which ensures non-negativity and that the blend proportions sum to 1.
We aim to minimize a weighted least-squares error: \[ \min_{\mathbf{p}} \sum_{i=1}^{11} w_i \left( c_i - c_{\text{blend}, i} \right)^2, \qquad w_i = \frac{1}{c_i}, \] where the weights are the inverses of the target concentrations \(c_i\). This keeps chemical properties measured on very different scales comparable in the objective.
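Putting the objective and the constraints together, the full problem is
\[
\begin{array}{ll}
\underset{\mathbf{p}}{\text{minimize}} & \displaystyle \sum_{i=1}^{11} w_i \left( c_i - (C\mathbf{p})_i \right)^2 \\
\text{subject to} & \mathbf{1}^T \mathbf{p} = 1, \quad \mathbf{p} \geq 0,
\end{array}
\]
a convex quadratic objective with affine constraints, which is exactly the class of problems CVXR is designed to solve.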
To solve this constrained least-squares problem, we use the CVXR package in R. Below is the implementation step by step:
library(CVXR)
library(readr)
# Load datasets
wine_data <- read_csv("wine_data.csv")
target_data <- read_csv("target.csv")
# Record the wine identifiers before converting to a matrix
# (for a tibble produced by read_csv these are simply the row indices "1" through "6")
wine_labels <- rownames(wine_data)
# Convert to matrix
C <- as.matrix(wine_data)
# Transpose to get (11 × 6) structure
C <- t(C)
# After transposing, the columns correspond to wines; label them accordingly
colnames(C) <- wine_labels
# Define the optimization variable (proportions of each wine)
p <- Variable(ncol(C))
# Compute the blended concentration
c_blend <- C %*% p
c_target <- as.matrix(target_data)
# Ensure proper dimension alignment for weights
weights <- matrix(1 / (c_target + 1e-6), ncol = 1) # Weights w_i = 1/c_i; the 1e-6 guards against division by near-zero target values
c_target <- t(c_target)
# Define the weighted least squares objective function
objective <- Minimize(sum(weights * (c_target - c_blend)^2))
# Constraints: proportions must sum to 1 and be non-negative
constraints <- list(sum(p) == 1, p >= 1e-3) # Small positive lower bound, preventing the optimizer from putting all the weight on a single wine (which happened on a first attempt)
# Solve the optimization problem
problem <- Problem(objective, constraints)
result <- solve(problem)
# Extract the optimal wine blend proportions
optimal_proportions <- result$getValue(p)
# Assign correct wine names
optimal_blend <- data.frame(Wine = colnames(C), Optimal_Proportion = optimal_proportions)
# Display the results
print(optimal_blend)
## Wine Optimal_Proportion
## 1 1 0.5041829
## 2 2 0.0010000
## 3 3 0.2516521
## 4 4 0.0010000
## 5 5 0.0010000
## 6 6 0.2411649
After running the optimization, we analyze the results. The computed blend consists mainly of Wines 1 (50.4%), 3 (25.2%), and 6 (24.1%), while Wines 2, 4, and 5 sit at the 0.001 lower bound imposed in the constraints.
This distribution suggests that Wines 1, 3, and 6 have chemical compositions closest to the target.
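To check the quality of the fit quantitatively, the achieved blend composition can be compared with the target chemical by chemical. A minimal sketch, reusing the objects C, optimal_proportions, and c_target created in the script above (with the target file containing a single row of 11 values, as assumed there):
# Concentrations achieved by the optimal blend
achieved <- C %*% optimal_proportions
# Relative deviation from the target for each of the 11 characteristics
rel_dev <- as.vector((c_target - achieved) / c_target)
# Overview of the absolute relative errors
summary(abs(rel_dev))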
Overall, the optimization successfully identified a wine mixture that closely approximates the target composition while satisfying all constraints.