Sensitivity analysis
The optimum of an LP is the least interesting thing the solver computes. The interesting numbers are the duals and reduced costs — they tell you how your optimum moves if the world changes. grove exposes both by default.
Why you care
Consider the budget constraint from the scheduling example:
budget := prob.AddConstraint(
    "budget",
    grove.Expr{day: 1200, night: 1500},
    grove.LTE,
    18000,
)
After Solve returns Optimal, you know how many shifts to staff. But you don't yet know the answer to the question your CFO will ask next: "what happens if I give you another thousand dollars?" The dual value of budget answers that question exactly, in the objective's units, and costs you nothing to compute — grove has already computed it during the solve.
Shadow prices (duals)
The shadow price of a constraint is the rate of change of the optimal objective as you loosen the constraint's right-hand side by one unit. Formally, for a constraint c with RHS b and optimal objective z*:

dual(c) = ∂z* / ∂b
Read it back from the result:
fmt.Println(res.Dual(budget)) // e.g. 0.000833 (one extra dollar of budget → 0.000833 extra units of objective)
Two practical interpretations:
- A non-zero dual means the constraint is binding — it is exactly satisfied at the optimum, and you would do better if it were looser.
- A zero dual means the constraint has slack at the optimum — you have more of that resource than you’re using, so loosening it further changes nothing.
For a capacity planner, the dual is the answer to “where is the bottleneck, and what is the next dollar of capacity worth?” That single number per constraint is worth more than all the primal values combined.
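That ranking is mechanical once you have the duals. A minimal sketch, where the constraint names match the scheduling example but the dual values are illustrative stand-ins rather than output from a real solve:

```go
package main

import (
	"fmt"
	"sort"
)

// dualOf maps constraint names to duals a solve might return; the numbers
// are illustrative, not output from a real model.
var dualOf = map[string]float64{
	"budget":    0.000833, // binding: each extra dollar is worth 0.000833 objective units
	"night_min": 1.0,      // binding: relaxing the night minimum moves the objective 1 unit per shift
	"day_min":   0.0,      // slack: extra day capacity is currently worth nothing
}

// ranked returns the constraint names sorted by |dual|, biggest first:
// the head of the list is the bottleneck.
func ranked() []string {
	names := make([]string, 0, len(dualOf))
	for n := range dualOf {
		names = append(names, n)
	}
	sort.Slice(names, func(i, j int) bool {
		return abs(dualOf[names[i]]) > abs(dualOf[names[j]])
	})
	return names
}

func abs(x float64) float64 {
	if x < 0 {
		return -x
	}
	return x
}

func main() {
	for _, n := range ranked() {
		fmt.Printf("%-10s dual=%g\n", n, dualOf[n])
	}
}
```

With real data the map would be filled by looping over your constraints and calling res.Dual on each.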
Sense and sign
grove returns duals in the user-facing sense. For a Maximize problem, the dual of an LTE constraint is ≥ 0 at an optimum and the dual of a GTE is ≤ 0 (raising the RHS of an LTE can only help you; raising the RHS of a GTE can only hurt). An EQ dual can take either sign, depending on which direction of relaxing the equality would improve the objective. You don't have to remember any of this — just read the sign.
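If you do want the rules written down, they fit in one switch. The Sense type and readDual helper below are hypothetical illustrations for a Maximize problem, not part of grove's API:

```go
package main

import "fmt"

// Sense is a stand-in for grove's constraint sense; the constant names
// mirror grove.LTE and grove.GTE, but this type is hypothetical.
type Sense int

const (
	LTE Sense = iota
	GTE
)

// readDual spells out the sign convention for a Maximize problem:
// an LTE dual is non-negative, a GTE dual is non-positive, and zero
// means the constraint has slack.
func readDual(s Sense, dual float64) string {
	switch {
	case dual == 0:
		return "slack at the optimum: loosening changes nothing"
	case s == LTE && dual > 0:
		return "binding: each unit of extra RHS helps by that much"
	case s == GTE && dual < 0:
		return "binding: each unit of extra RHS hurts by that much"
	default:
		return "sign inconsistent with a Maximize optimum"
	}
}

func main() {
	fmt.Println(readDual(LTE, 0.000833))
	fmt.Println(readDual(GTE, -1.0))
	fmt.Println(readDual(LTE, 0))
}
```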
Reduced costs
The reduced cost of a variable answers the symmetric question: "how much would the objective change per unit if I forced this variable away from its bound?" For a variable currently at zero (a non-basic variable), the reduced cost tells you how much worse the objective gets per unit you force it upward:
fmt.Println(res.Reduced(overnight_shift))
// -12.5 → forcing overnight_shift = 1 costs 12.5 units of objective
A basic variable (one the simplex is actively using, typically at a positive value) always has reduced cost zero; that follows from how the basis is constructed.
A non-basic variable with a “good” sign of reduced cost (positive for max, negative for min) has the property that bringing it into the basis would improve the objective — if such a variable existed, the simplex would still be pivoting. At the optimum, every non-basic variable has a “bad” sign or zero.
Zero reduced cost on a non-basic variable is the simplex’s way of telling you there are alternative optima — a whole face of the polytope where the objective takes the same value.
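That signal is easy to scan for. A sketch with invented values (the weekend variable and the varInfo type are hypothetical; in grove the reduced costs would come from res.Reduced):

```go
package main

import (
	"fmt"
	"sort"
)

// varInfo pairs a variable's optimal value with its reduced cost.
// The data in main is invented for illustration.
type varInfo struct {
	value, reduced float64
}

// altOptimaHints lists non-basic variables (here: value zero) whose
// reduced cost is also zero, the signal that alternative optima exist.
func altOptimaHints(vars map[string]varInfo) []string {
	var hints []string
	for name, v := range vars {
		if v.value == 0 && v.reduced == 0 {
			hints = append(hints, name)
		}
	}
	sort.Strings(hints)
	return hints
}

func main() {
	vars := map[string]varInfo{
		"day":             {value: 12.5, reduced: 0},  // basic
		"night":           {value: 2, reduced: 0},     // basic
		"overnight_shift": {value: 0, reduced: -12.5}, // non-basic, genuinely suboptimal to use
		"weekend":         {value: 0, reduced: 0},     // non-basic, zero: alternative optimum
	}
	fmt.Println(altOptimaHints(vars)) // prints [weekend]
}
```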
The structured report
For a quick human-readable summary, grove ships a report printer:
rep := grove.SensitivityReport(prob, res)
fmt.Println(rep)
It prints two tables — one row per constraint, one row per variable — each with the primal values and the dual / reduced cost numbers side by side:
Constraints
name       lhs     sense  rhs    slack  dual      binding
day_min    12.5    GTE    4      8.5    0         false
night_min  2       GTE    2      0      1         true
budget     18000   LTE    18000  0      0.000833  true

Variables
name   value  reduced  bound-status
day    12.5   0        basic
night  2      0        basic

The numbers depend on the model; the shape is always the same.
If you want the data instead of the prose, SensitivityReport returns a *Sensitivity whose fields are flat slices:

type Sensitivity struct {
    Constraints []ConstraintSensitivity // one entry per constraint, with a Binding bool
    Variables   []VarSensitivity        // one entry per variable
}

The String method you see above is just a pretty printer over those slices — filter them yourself for the dashboard shape you want.
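For instance, a dashboard that only cares about bottlenecks can keep just the binding rows. The ConstraintSensitivity mirror below is a sketch: only the Binding field is confirmed by the docs, and the Name and Dual field names are assumptions:

```go
package main

import "fmt"

// ConstraintSensitivity mirrors the slice element described above.
// Only Binding is documented; Name and Dual are assumed field names.
type ConstraintSensitivity struct {
	Name    string
	Dual    float64
	Binding bool
}

// binding filters a report down to its binding rows, typically the only
// part a capacity dashboard needs.
func binding(cs []ConstraintSensitivity) []ConstraintSensitivity {
	var out []ConstraintSensitivity
	for _, c := range cs {
		if c.Binding {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	rows := []ConstraintSensitivity{
		{"day_min", 0, false},
		{"night_min", 1, true},
		{"budget", 0.000833, true},
	}
	for _, c := range binding(rows) {
		fmt.Printf("%s: %g\n", c.Name, c.Dual)
	}
}
```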
A worked example
Given the nurse model, say the CFO finds another $2,000 in the couch cushions. Before grove was in the picture, you would re-solve. You don’t have to:
extraBudget := 2000.0
deltaZ := res.Dual(budget) * extraBudget
fmt.Printf("With +$%v of budget, z* increases by %.3f\n", extraBudget, deltaZ)
The caveat: this linearisation is exact only within the range where the current basis stays optimal. Push the RHS far enough and a different constraint starts binding, the basis changes, and the dual jumps. v0.1 doesn’t report the basis-stability range; that ships in v0.2 (see the roadmap). For small perturbations the dual is correct.
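To see what that caveat means concretely, here is a toy optimal-value function with a single basis change; every number in it is invented for illustration. Inside the stability range the dual's estimate is exact, and past the kink it overshoots:

```go
package main

import "fmt"

// zStar is a toy optimal-value function of the budget RHS b: linear with
// slope 0.000833 until (in this invented model) a different constraint
// starts binding at b = 20000, after which the marginal dollar is worth less.
func zStar(b float64) float64 {
	const kink = 20000.0
	if b <= kink {
		return 10 + 0.000833*b
	}
	return zStar(kink) + 0.0002*(b-kink)
}

// compare returns the dual-based first-order estimate and the true change
// in z* for a perturbation delta from base.
func compare(base, dual, delta float64) (estimate, actual float64) {
	return dual * delta, zStar(base+delta) - zStar(base)
}

func main() {
	base, dual := 18000.0, 0.000833
	for _, delta := range []float64{1000, 2000, 5000} {
		est, act := compare(base, dual, delta)
		fmt.Printf("Δb=%5.0f  estimate=%.3f  actual=%.3f\n", delta, est, act)
	}
}
```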
What grove does not report (yet)
Three things are on the roadmap but not in v0.1:
- Per-coefficient ranges over which the current basis stays optimal (sensitivity of the objective to the objective coefficient of each variable).
- Per-RHS ranges over which each constraint’s dual stays constant.
- Parametric sensitivity of duals to simultaneous RHS changes.
If you need any of the above today, the WriteMPS exporter gives you a model you can pipe into a ranging-capable solver (GLPK’s glpsol --ranges, or HiGHS).