ripopt
A memory-safe interior point optimizer written in Rust, inspired by Ipopt.
What is ripopt?
ripopt solves nonlinear programming (NLP) problems of the form:
```text
min  f(x)
s.t. g_l <= g(x) <= g_u
     x_l <= x <= x_u
```
It implements a primal-dual interior point method (IPM) with a logarithmic barrier formulation. The solver is written entirely in Rust (~21,700 lines) with no external C/Fortran dependencies.
At a glance
| Property | Value |
|---|---|
| Algorithm | Primal-dual IPM with Mehrotra predictor-corrector |
| Linear solver | Dense Bunch-Kaufman LDL^T (small) / rmumps multifrontal (large) |
| HS benchmark | 118/120 (98.3%) — surpasses Ipopt's 116/120 |
| CUTEst benchmark | 562/727 (77.3%) — surpasses Ipopt's 561/727 |
| Speed vs Ipopt | 15.0x geo mean on HS suite, 9.9x on CUTEst |
| Language | Rust (no unsafe FFI) |
| Interfaces | Rust API, C API, Pyomo/AMPL, GAMS, Julia/JuMP |
Key features
- Primal-dual IPM with logarithmic barrier and fraction-to-boundary rule
- Mehrotra predictor-corrector with Gondzio centrality corrections (enabled by default)
- Filter line search with switching condition and second-order corrections
- Two-phase restoration: fast Gauss-Newton + full NLP restoration subproblem
- Multi-solver fallback: L-BFGS → Augmented Lagrangian → SQP → slack reformulation
- Dense condensed KKT (Schur complement) for tall-narrow problems: 100-800x speedup on m >> n
- Adaptive and monotone barrier strategies with automatic mode switching
- NE-to-LS reformulation for overdetermined nonlinear equation systems
- Parametric sensitivity analysis (sIPOPT-style)
- Preprocessing: fixed variable elimination, redundant constraint removal, bound tightening
- C API mirroring Ipopt's interface; Pyomo, GAMS, and Julia/JuMP wrappers
Quick example
```rust
use ripopt::{NlpProblem, SolveStatus, SolverOptions};

struct Hs071;

impl NlpProblem for Hs071 {
    // ... (see API Guide for full implementation)
}

fn main() {
    let result = ripopt::solve(&Hs071, &SolverOptions::default());
    assert_eq!(result.status, SolveStatus::Optimal);
    println!("f* = {:.6}", result.objective); // 17.014017
}
```
See the API Guide for a complete walkthrough.
Installation
Prerequisites: Rust and Cargo
ripopt is written in Rust. Install the Rust toolchain via rustup:
```sh
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"
```
Verify: rustc --version && cargo --version
Install ripopt
```sh
git clone https://github.com/jkitchin/ripopt.git
cd ripopt
make install
```
This builds the release binary and shared library, then installs:
- `ripopt` AMPL solver binary → `~/.cargo/bin/`
- `libripopt.dylib` / `libripopt.so` → `~/.local/lib/`
Verify: ripopt --version
Using ripopt as a Rust library
Add to your Cargo.toml:
```toml
[dependencies]
ripopt = { git = "https://github.com/jkitchin/ripopt" }
```
Using ripopt with Python/Pyomo
```sh
pip install ./pyomo-ripopt
```

```python
from pyomo.environ import *

solver = SolverFactory('ripopt')
result = solver.solve(model, tee=True)
```
Using ripopt with Julia/JuMP
```sh
cargo build --release
julia -e 'import Pkg; Pkg.develop(path="Ripopt.jl")'
```

```julia
ENV["RIPOPT_LIBRARY_PATH"] = "/path/to/ripopt/target/release"

using JuMP, Ripopt
model = Model(Ripopt.Optimizer)
@variable(model, 1 <= x[1:4] <= 5)
set_start_value.(x, [1.0, 5.0, 5.0, 1.0])
@NLobjective(model, Min, x[1]*x[4]*(x[1]+x[2]+x[3]) + x[3])
@NLconstraint(model, x[1]*x[2]*x[3]*x[4] >= 25)
@NLconstraint(model, x[1]^2 + x[2]^2 + x[3]^2 + x[4]^2 == 40)
optimize!(model)
println(objective_value(model))  # ≈ 17.014
```
Using ripopt with GAMS
```sh
cargo build --release
make -C gams && sudo make -C gams install
```

```gams
option nlp = ripopt;
Solve mymodel using nlp minimizing obj;
```

Options via ripopt.opt (same format as Ipopt):

```text
tol 1e-8
max_iter 1000
print_level 5
```
Using the C API
After make install, link against the shared library using ripopt.h:
```sh
cc my_program.c -I/path/to/ripopt -L"$HOME/.local/lib" -lripopt -lm
```
Uninstall
make uninstall
API Guide
Defining a problem
Implement the NlpProblem trait. Every method has a default that returns zeros or empty vectors, so you only need to override what your problem uses.
Full example: HS071
HS071 is the classic Hock-Schittkowski test problem #71:
```text
min   x₁·x₄·(x₁+x₂+x₃) + x₃
s.t.  x₁·x₂·x₃·x₄ ≥ 25
      x₁² + x₂² + x₃² + x₄² = 40
      1 ≤ xᵢ ≤ 5
```

x* ≈ (1.000, 4.743, 3.821, 1.379), f* ≈ 17.014
```rust
use ripopt::{NlpProblem, SolveStatus, SolverOptions};

struct Hs071;

impl NlpProblem for Hs071 {
    fn num_variables(&self) -> usize { 4 }
    fn num_constraints(&self) -> usize { 2 }

    fn bounds(&self, x_l: &mut [f64], x_u: &mut [f64]) {
        for i in 0..4 { x_l[i] = 1.0; x_u[i] = 5.0; }
    }

    fn constraint_bounds(&self, g_l: &mut [f64], g_u: &mut [f64]) {
        g_l[0] = 25.0; g_u[0] = f64::INFINITY; // inequality: product >= 25
        g_l[1] = 40.0; g_u[1] = 40.0;          // equality: sum of squares = 40
    }

    fn initial_point(&self, x0: &mut [f64]) {
        x0.copy_from_slice(&[1.0, 5.0, 5.0, 1.0]);
    }

    fn objective(&self, x: &[f64], _new_x: bool, obj: &mut f64) -> bool {
        *obj = x[0] * x[3] * (x[0] + x[1] + x[2]) + x[2];
        true
    }

    fn gradient(&self, x: &[f64], _new_x: bool, grad: &mut [f64]) -> bool {
        grad[0] = x[3] * (2.0 * x[0] + x[1] + x[2]);
        grad[1] = x[0] * x[3];
        grad[2] = x[0] * x[3] + 1.0;
        grad[3] = x[0] * (x[0] + x[1] + x[2]);
        true
    }

    fn constraints(&self, x: &[f64], _new_x: bool, g: &mut [f64]) -> bool {
        g[0] = x[0] * x[1] * x[2] * x[3];
        g[1] = x[0]*x[0] + x[1]*x[1] + x[2]*x[2] + x[3]*x[3];
        true
    }

    // Jacobian: list of (row, col) pairs for non-zero entries
    fn jacobian_structure(&self) -> (Vec<usize>, Vec<usize>) {
        (vec![0,0,0,0, 1,1,1,1], vec![0,1,2,3, 0,1,2,3])
    }

    fn jacobian_values(&self, x: &[f64], _new_x: bool, vals: &mut [f64]) -> bool {
        vals[0] = x[1]*x[2]*x[3];
        vals[1] = x[0]*x[2]*x[3];
        vals[2] = x[0]*x[1]*x[3];
        vals[3] = x[0]*x[1]*x[2];
        vals[4] = 2.0*x[0];
        vals[5] = 2.0*x[1];
        vals[6] = 2.0*x[2];
        vals[7] = 2.0*x[3];
        true
    }

    // Hessian of Lagrangian: lower triangle only. L = obj_factor*f + Σ λᵢ*gᵢ
    fn hessian_structure(&self) -> (Vec<usize>, Vec<usize>) {
        (vec![0,1,1,2,2,2,3,3,3,3], vec![0,0,1,0,1,2,0,1,2,3])
    }

    fn hessian_values(&self, x: &[f64], _new_x: bool, obj_factor: f64,
                      lambda: &[f64], vals: &mut [f64]) -> bool {
        vals[0] = obj_factor * 2.0*x[3] + lambda[1]*2.0;
        vals[1] = obj_factor * x[3] + lambda[0]*x[2]*x[3];
        vals[2] = lambda[1]*2.0;
        vals[3] = obj_factor * x[3] + lambda[0]*x[1]*x[3];
        vals[4] = lambda[0]*x[0]*x[3];
        vals[5] = lambda[1]*2.0;
        vals[6] = obj_factor*(2.0*x[0]+x[1]+x[2]) + lambda[0]*x[1]*x[2];
        vals[7] = obj_factor * x[0] + lambda[0]*x[0]*x[2];
        vals[8] = obj_factor * x[0] + lambda[0]*x[0]*x[1];
        vals[9] = lambda[1]*2.0;
        true
    }
}

fn main() {
    let result = ripopt::solve(&Hs071, &SolverOptions::default());
    assert_eq!(result.status, SolveStatus::Optimal);
    println!("f* = {:.6}", result.objective);  // 17.014017
    println!("x* = {:?}", result.x);           // [1.0, 4.743, 3.821, 1.379]
    println!("iters = {}", result.iterations); // ~8
}
```
Trait reference
| Method | Required | Description |
|---|---|---|
| `num_variables()` | Yes | Number of decision variables n |
| `num_constraints()` | Yes | Number of constraints m |
| `bounds(x_l, x_u)` | Yes | Variable bounds (use ±∞ for unbounded) |
| `constraint_bounds(g_l, g_u)` | Yes | Constraint bounds (g_l = g_u for equality) |
| `initial_point(x0)` | Yes | Starting point — must be strictly interior to bounds |
| `objective(x)` | Yes | Objective value f(x) |
| `gradient(x, grad)` | Yes | Gradient ∇f(x) |
| `constraints(x, g)` | if m > 0 | Constraint values g(x) |
| `jacobian_structure()` | if m > 0 | Sparsity pattern as (row, col) triplets |
| `jacobian_values(x, vals)` | if m > 0 | Jacobian values in same order as structure |
| `hessian_structure()` | Recommended | Lower triangle sparsity of ∇²L |
| `hessian_values(x, obj_factor, lambda, vals)` | Recommended | ∇²L = obj_factor·∇²f + Σ λᵢ·∇²gᵢ |
Note: If hessian_structure() / hessian_values() are not implemented, ripopt automatically falls back to L-BFGS Hessian approximation.
The Lagrangian sign convention
ripopt uses L = f + yᵀg (same as Ipopt). The Hessian requested is:
obj_factor · ∇²f(x) + Σᵢ λᵢ · ∇²gᵢ(x)
Only the lower triangle is needed: entries (i, j) with i ≥ j.
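As a sanity check on this convention, here is a self-contained sketch in plain Rust (not the ripopt API; the toy problem f(x) = x₀·x₁ with a single constraint g(x) = x₀² + x₁² is invented for illustration). It compares the analytic lower-triangle entries of ∇²L = σ·∇²f + λ·∇²g against central finite differences of ∇L:

```rust
// Toy check of the L = obj_factor*f + λ*g Hessian convention
// for f(x) = x0*x1 and g(x) = x0^2 + x1^2.

fn grad_lagrangian(x: [f64; 2], sigma: f64, lam: f64) -> [f64; 2] {
    // ∇L = σ·∇f + λ·∇g, with ∇f = (x1, x0) and ∇g = (2x0, 2x1)
    [sigma * x[1] + lam * 2.0 * x[0],
     sigma * x[0] + lam * 2.0 * x[1]]
}

/// Analytic lower-triangle entries of ∇²L at (0,0), (1,0), (1,1).
fn hess_lower(sigma: f64, lam: f64) -> [f64; 3] {
    [2.0 * lam, // (0,0): σ·0 + λ·2
     sigma,     // (1,0): σ·1 + λ·0
     2.0 * lam] // (1,1): σ·0 + λ·2
}

fn main() {
    let (x, sigma, lam) = ([0.7, -1.3], 2.0, 0.5);
    let h = 1e-6;
    for (k, &(i, j)) in [(0usize, 0usize), (1, 0), (1, 1)].iter().enumerate() {
        let mut xp = x; xp[j] += h;
        let mut xm = x; xm[j] -= h;
        // central difference of component i of ∇L along coordinate j
        let fd = (grad_lagrangian(xp, sigma, lam)[i]
                - grad_lagrangian(xm, sigma, lam)[i]) / (2.0 * h);
        assert!((fd - hess_lower(sigma, lam)[k]).abs() < 1e-6);
    }
    println!("Hessian convention verified");
}
```

The same check (finite differences of ∇L against your `hessian_values`) is a cheap way to debug a hand-coded Hessian before handing it to the solver.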
Reading the result
```rust
let result = ripopt::solve(&problem, &opts);
match result.status {
    SolveStatus::Optimal => {
        println!("x = {:?}", result.x);       // optimal point
        println!("f = {}", result.objective); // objective value
        println!("y = {:?}", result.y);       // constraint multipliers
        println!("z_l = {:?}", result.z_l);   // lower bound multipliers
        println!("z_u = {:?}", result.z_u);   // upper bound multipliers
    }
    SolveStatus::MaxIterations => { /* increase max_iter or adjust options */ }
    SolveStatus::NumericalError => { /* try fallbacks, different starting point */ }
    SolveStatus::LocalInfeasibility => { /* problem is locally infeasible */ }
    SolveStatus::RestorationFailed => { /* feasibility recovery failed */ }
}
```
Sensitivity analysis
After a successful solve, compute how the optimal solution changes under parameter perturbations:
```rust
// Parametric sensitivity: dx/dp at the optimum
let (result, sens) = ripopt::solve_with_sensitivity(&problem, &opts);
if let Some(s) = sens {
    println!("dx/dp = {:?}", s.dx_dp); // primal sensitivity
    println!("dy/dp = {:?}", s.dy_dp); // multiplier sensitivity
}
```
See src/sensitivity.rs for the sIPOPT-style implementation.
Customizing options
```rust
let opts = SolverOptions {
    tol: 1e-10,
    max_iter: 500,
    print_level: 3,
    hessian_approximation_lbfgs: true, // skip exact Hessian
    ..SolverOptions::default()
};
let result = ripopt::solve(&problem, &opts);
```
See Solver Options for the full reference.
Benchmarks
ripopt is benchmarked against Ipopt (native C++ with MUMPS linear solver) on three standard test suites and several domain-specific problem sets. All benchmarks run on the same hardware; both solvers receive identical problem data through the same Rust trait interface.
Hock-Schittkowski Suite (120 problems)
The HS suite is the classic test set for NLP solvers, covering small problems (n ≤ 15) with mixed equality/inequality constraints.
| Metric | ripopt | Ipopt (MUMPS) |
|---|---|---|
| Problems solved | 118/120 (98.3%) | 116/120 (96.7%) |
| Solved by ripopt only | 2 | — |
| Solved by Ipopt only | — | 0 |
On 116 commonly solved problems (strict Optimal status required):
| Metric | Value |
|---|---|
| Geometric mean speedup | 15.0x |
| Median speedup | 14.2x |
| ripopt faster | 113/116 (97%) |
| ripopt 10x+ faster | 84/116 (72%) |
| Matching objectives (rel diff < 1e-4) | 110/116 (95%) |
Run: make hs-run
CUTEst Benchmark Suite (727 problems)
CUTEst covers a wide range of problem types, sizes, and structures. Problems range from n=2 to n=10,000+.
| Metric | ripopt | Ipopt (MUMPS) |
|---|---|---|
| Total solved | 562/727 (77.3%) | 561/727 (77.2%) |
| Solved by ripopt only | 37 | — |
| Solved by Ipopt only | — | 36 |
On 525 commonly solved problems:
| Metric | Value |
|---|---|
| Geometric mean speedup | 9.9x |
| Median speedup | 18.9x |
| ripopt faster | 440/525 (84%) |
| ripopt 10x+ faster | 332/525 (63%) |
| Matching objectives (rel diff < 1e-4) | 436/525 (83.0%) |
Run: make benchmark (full suite, ~2 hours) or individual problems:
```sh
cargo run --bin cutest_suite --features cutest,ipopt-native --release -- PROBLEM1 PROBLEM2
```
Where ripopt is faster
- Small problems (n < 50). Stack allocation and cache-efficient dense BK factorization avoid sparse overhead. 2–5x per-iteration speedup over Ipopt.
- Tall-narrow problems (m >> n, n ≤ 100). Dense condensed KKT via Schur complement reduces an (n+m)×(n+m) sparse factorization to n×n dense. 100–800x speedup on EXPFITC (n=5, m=502), OET3 (n=4, m=1002).
- Better iteration counts. Mehrotra predictor-corrector with Gondzio centrality corrections (enabled by default) cuts iterations by 20–40% on many problems.
- Fallback cascade. NE-to-LS reformulation, two-phase restoration, and multi-solver fallback recover problems Ipopt cannot solve.
Where Ipopt is faster
- Large sparse problems (n+m > 5,000). Ipopt's Fortran MUMPS is 10–15x faster per factorization than rmumps on large systems.
- Some medium constrained problems. A handful of problems (CORE1, HAIFAM, NET1) have higher per-iteration cost in ripopt's line search or fallback cascade.
Large-Scale Benchmarks
Both solvers use the same Rust trait interface; ripopt uses rmumps, Ipopt uses Fortran MUMPS.
| Problem | n | m | ripopt | time | Ipopt | time | speedup |
|---|---|---|---|---|---|---|---|
| Rosenbrock 500 | 500 | 0 | Optimal | 0.003s | Optimal | 0.199s | 76.2x |
| Bratu 1K | 1,000 | 998 | Optimal | 0.002s | Optimal | 0.002s | 1.1x |
| SparseQP 1K | 500 | 500 | Optimal | 0.176s | Optimal | 0.004s | 0.02x |
| OptControl 2.5K | 2,499 | 1,250 | Optimal | 0.006s | Optimal | 0.002s | 0.4x |
Numbers above are fresh (v0.7.0). The larger problems (Poisson 2.5K,
Rosenbrock 5K, Bratu 10K, OptControl 20K, Poisson 50K, SparseQP 100K)
are not re-run for v0.7.0 — the stricter KKT backward-error probe
causes Poisson 2.5K to exhaust max_iter and the full sweep is gated
behind a separate investigation. Historical timings from v0.6.2 remain
in benchmarks/large_scale/large_scale_results.txt snapshots.
Run: make benchmark
Domain-Specific Benchmarks
| Suite | Problems | ripopt | Ipopt | Notes |
|---|---|---|---|---|
| Electrolyte thermodynamics | 13 | 13/13 (100%) | 12/13 (92.3%) | 17.5x geo mean; ripopt uniquely solves seawater speciation |
| Grid (AC Optimal Power Flow) | 4 | 3/4 (75%) | 4/4 (100%) | 2.8x geo mean on 3 commonly-solved; case30_ieee regression |
| CHO parameter estimation | 1 | 0/1 | 0/1 | n=21,672, m=21,660; both hit iteration limit |
| Gas pipeline NLPs | 4 | see suite README | see suite README | PDE-discretized Euler equations on pipe networks (gaslib11/40). Standalone — does not feed BENCHMARK_REPORT.md |
| Water distribution NLPs | 6 | see suite README | see suite README | MINLPLib water network design instances. Standalone — does not feed BENCHMARK_REPORT.md |
Algorithm
ripopt implements the primal-dual interior point method (IPM) described in Wächter & Biegler (2006), with several extensions for robustness and performance.
Problem form
```text
min  f(x)
s.t. g_l ≤ g(x) ≤ g_u
     x_l ≤ x ≤ x_u
```
Barrier formulation
Inequality constraints and variable bounds are handled via a logarithmic barrier:
φ(x, μ) = f(x) − μ Σ log(xᵢ − xₗᵢ) − μ Σ log(xᵤᵢ − xᵢ) − μ Σ log(gᵤⱼ − gⱼ) − μ Σ log(gⱼ − gₗⱼ)
As μ → 0, the solution of min φ(x, μ) converges to the solution of the original NLP.
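This limit is easy to see on a one-variable example. For min x subject to x ≥ 0, the barrier problem is φ(x, μ) = x − μ·ln(x), whose exact minimizer is x*(μ) = μ (set φ′ = 1 − μ/x = 0). The sketch below (plain Rust, not ripopt code) minimizes φ with Newton's method plus a fraction-to-boundary safeguard and shows x*(μ) → 0 as μ → 0:

```rust
// Minimize φ(x, μ) = x − μ·ln(x) for x > 0 by safeguarded Newton.
// The exact minimizer is x*(μ) = μ, so the result should track μ.

fn barrier_min(mu: f64) -> f64 {
    let mut x = 1.0;
    for _ in 0..50 {
        let g = 1.0 - mu / x;      // φ'(x)
        let h = mu / (x * x);      // φ''(x) > 0 for x > 0
        let mut step = -g / h;
        // fraction-to-boundary safeguard: keep x strictly positive
        if x + step <= 0.0 { step = -0.99 * x; }
        x += step;
    }
    x
}

fn main() {
    for mu in [1.0, 1e-2, 1e-4, 1e-6] {
        println!("mu = {:e}  x*(mu) = {:e}", mu, barrier_min(mu));
    }
}
```

The interior point method exploits exactly this: it solves a sequence of such barrier problems while shrinking μ, rather than tracking the central path to high accuracy at any single μ.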
Perturbed KKT system
Stationarity of the barrier problem gives the perturbed KKT conditions:
- Stationarity: ∇f(x) + J(x)ᵀy − z_l + z_u = 0
- Feasibility: g_l ≤ g(x) ≤ g_u
- Complementarity: z_lᵢ · (xᵢ − xₗᵢ) = μ and z_uᵢ · (xᵤᵢ − xᵢ) = μ
where y are constraint multipliers and z_l, z_u are bound multipliers.
Newton step
Each iteration solves the augmented KKT system:
```text
[ H + Σ + δ_w I    Jᵀ     ] [dx]   [r_d]
[ J               −δ_c I  ] [dy] = [r_p]
```
- H = ∇²_xx L (Lagrangian Hessian, lower triangle)
- Σ = diag(z_l/s_l + z_u/s_u) (barrier diagonal)
- δ_w, δ_c = inertia correction perturbations
- r_d = −(∇f + Jᵀy − z_l + z_u) (dual residual)
- r_p = −g(x) + slack (primal residual)
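To make the structure concrete, here is a self-contained sketch (plain Rust with a hand-rolled dense solver, standing in for ripopt's factorizations) that takes one Newton step on this system for the tiny equality QP min ½‖x‖² s.t. x₀ + x₁ = 1, with no bounds (Σ = 0) and no regularization (δ_w = δ_c = 0). Because the problem is a QP, a single step lands on the exact solution:

```rust
/// Solve a dense n×n system by Gaussian elimination with partial pivoting.
fn solve(mut a: Vec<Vec<f64>>, mut b: Vec<f64>) -> Vec<f64> {
    let n = b.len();
    for k in 0..n {
        // pick the largest remaining pivot in column k
        let p = (k..n)
            .max_by(|&i, &j| a[i][k].abs().partial_cmp(&a[j][k].abs()).unwrap())
            .unwrap();
        a.swap(k, p);
        b.swap(k, p);
        for i in k + 1..n {
            let f = a[i][k] / a[k][k];
            for j in k..n { a[i][j] -= f * a[k][j]; }
            b[i] -= f * b[k];
        }
    }
    let mut x = vec![0.0; n];
    for i in (0..n).rev() {
        let s: f64 = (i + 1..n).map(|j| a[i][j] * x[j]).sum();
        x[i] = (b[i] - s) / a[i][i];
    }
    x
}

fn main() {
    // At (x, y) = (0, 0): H = I, J = [1 1],
    // r_d = −(∇f + Jᵀy) = (0, 0), r_p = 1 − g(x) = 1.
    let kkt = vec![
        vec![1.0, 0.0, 1.0],
        vec![0.0, 1.0, 1.0],
        vec![1.0, 1.0, 0.0],
    ];
    let d = solve(kkt, vec![0.0, 0.0, 1.0]);
    // Exact step: dx = (0.5, 0.5), dy = −0.5, i.e. x* = (0.5, 0.5).
    println!("dx = ({}, {}), dy = {}", d[0], d[1], d[2]);
}
```

In the real solver the same system is factored once per iteration with a symmetric indefinite LDLᵀ (so the inertia can be read off), not with unsymmetric Gaussian elimination.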
Linear solvers
| Problem size | Solver | Notes |
|---|---|---|
| n + m < 110 | Dense Bunch-Kaufman LDL^T | Stack allocation, exact inertia |
| m ≥ 2n, n ≤ 100 | Dense condensed KKT (Schur complement) | 100-800x speedup for m >> n |
| n + m ≥ 110 | rmumps multifrontal LDL^T | SuiteSparse AMD reordering |
Inertia correction
After factorization, the inertia (sign counts of D in LDL^T) is checked. The correct inertia is (n positive, m negative, 0 zero). If wrong, δ_w and δ_c are increased and the matrix is re-factored.
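The correction loop can be sketched in a few lines of plain Rust. The example below (not ripopt code; unpivoted elimination is only adequate for this hand-picked 3×3) uses n = 2, m = 1 with H = diag(1, −1) and J = [1, 0]: H has negative curvature in the constraint null space, so the raw inertia is wrong and δ_w must grow until it is (2, 1, 0):

```rust
/// Sign counts (positive, negative, zero) of the pivots of a symmetric
/// 3×3 matrix under unpivoted elimination — by Sylvester's law these
/// give the inertia (real code uses Bunch-Kaufman pivoting).
fn pivot_signs(mut a: [[f64; 3]; 3]) -> (usize, usize, usize) {
    let (mut pos, mut neg, mut zero) = (0, 0, 0);
    for k in 0..3 {
        let d = a[k][k];
        if d > 0.0 { pos += 1; } else if d < 0.0 { neg += 1; } else { zero += 1; continue; }
        for i in k + 1..3 {
            let f = a[i][k] / d;
            for j in k..3 { a[i][j] -= f * a[k][j]; }
        }
    }
    (pos, neg, zero)
}

fn main() {
    let (n, m) = (2, 1);
    let mut delta_w = 0.0;
    loop {
        // KKT matrix [H + δ_w·I, Jᵀ; J, 0] for H = diag(1, −1), J = [1, 0]
        let k = [
            [1.0 + delta_w, 0.0, 1.0],
            [0.0, -1.0 + delta_w, 0.0],
            [1.0, 0.0, 0.0],
        ];
        if pivot_signs(k) == (n, m, 0) {
            println!("inertia correct at delta_w = {delta_w}");
            break;
        }
        // escalate: start small, then multiply
        delta_w = if delta_w == 0.0 { 1e-4 } else { delta_w * 10.0 };
    }
}
```

Each failed inertia check costs a re-factorization, which is why the perturbation is escalated geometrically rather than in small increments.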
Mehrotra predictor-corrector
Instead of a fixed centering parameter σ = 0.1, ripopt uses Mehrotra's adaptive rule:
- Predictor step (σ = 0): compute the affine-scaling direction dx_aff
- Adaptive centering: σ = (μ_aff/μ)³, where μ_aff is the complementarity after the affine step
- Corrector step: solve the same KKT system with a modified RHS using σ·μ and affine cross-terms
This reuses the same LDL^T factorization (one extra triangular solve) and typically reduces iteration counts by 20–40%. Gondzio centrality corrections (up to 3 per iteration) further drive outlier complementarity pairs back to the central path.
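The centering rule itself is a few lines of arithmetic. This sketch (plain Rust on an invented 3-pair complementarity vector, not solver internals) computes the fraction-to-boundary step length for the affine direction, measures μ_aff, and derives σ:

```rust
/// Largest α ∈ (0, 1] keeping s + α·ds and z + α·dz strictly positive,
/// damped by the fraction-to-boundary parameter τ.
fn ftb_alpha(s: &[f64], z: &[f64], ds: &[f64], dz: &[f64], tau: f64) -> f64 {
    let mut alpha: f64 = 1.0;
    for i in 0..s.len() {
        if ds[i] < 0.0 { alpha = alpha.min(-tau * s[i] / ds[i]); }
        if dz[i] < 0.0 { alpha = alpha.min(-tau * z[i] / dz[i]); }
    }
    alpha
}

fn main() {
    // toy slacks, multipliers, and their affine (σ = 0) directions
    let (s, z) = ([1.0, 0.5, 2.0], [0.8, 1.2, 0.1]);
    let (ds, dz) = ([-0.9, 0.2, -2.5], [0.1, -1.5, 0.05]);
    let n = s.len() as f64;

    let mu: f64 = s.iter().zip(&z).map(|(a, b)| a * b).sum::<f64>() / n;
    let alpha = ftb_alpha(&s, &z, &ds, &dz, 0.99);
    let mu_aff: f64 = (0..3)
        .map(|i| (s[i] + alpha * ds[i]) * (z[i] + alpha * dz[i]))
        .sum::<f64>() / n;

    // Mehrotra's heuristic: center hard only when the affine step
    // makes little progress on complementarity
    let sigma = (mu_aff / mu).powi(3);
    println!("alpha = {alpha:.3}, mu = {mu:.4}, mu_aff = {mu_aff:.4}, sigma = {sigma:.5}");
}
```

When the affine step already shrinks complementarity a lot (μ_aff ≪ μ), σ is driven toward 0 and the corrector barely centers; when the affine step is blocked, σ approaches 1 and the step aims back at the central path.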
Filter line search
The step size α is chosen by the filter method (Fletcher & Leyffer, 2002):
- A filter maintains pairs (θ, φ) where θ = constraint violation and φ = barrier objective
- A trial point is acceptable if it is not dominated by any filter entry: θ_trial < (1−γ_θ)θ or φ_trial < φ − γ_φ·θ
- Switching condition: when sufficiently feasible (θ < θ_min), switch from filter to Armijo criterion
- Second-order corrections (SOC): when a step is rejected, correct the constraint residual to account for nonlinearity (up to 4 SOC per iteration)
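The acceptance test reduces to a domination check over the stored pairs. A minimal sketch (plain Rust; the γ constants and filter entries are illustrative, not ripopt's values):

```rust
// Filter acceptance: a trial (θ, φ) is acceptable if, against every
// stored entry, it sufficiently reduces either the constraint
// violation θ or the barrier objective φ.

const GAMMA_THETA: f64 = 1e-5;
const GAMMA_PHI: f64 = 1e-5;

fn acceptable(filter: &[(f64, f64)], theta: f64, phi: f64) -> bool {
    filter.iter().all(|&(th_k, ph_k)| {
        theta <= (1.0 - GAMMA_THETA) * th_k || phi <= ph_k - GAMMA_PHI * th_k
    })
}

fn main() {
    let filter = vec![(1.0, 5.0), (0.1, 7.0)];
    // improves violation against both entries → accepted
    assert!(acceptable(&filter, 0.05, 8.0));
    // dominated by (0.1, 7.0): worse violation AND worse objective
    assert!(!acceptable(&filter, 0.2, 7.5));
    println!("filter checks passed");
}
```

Unlike a penalty function, no weighting between θ and φ has to be chosen: a step is rejected only if some previous iterate was better in both measures.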
Step size rules
- Fraction-to-boundary: α ≤ τ · max{α : x + α·dx > x_l, z + α·dz > 0} with τ = 0.99
- Separate primal and dual step sizes, with Ipopt's `alpha_y = alpha_d` convention
Barrier parameter update
Free mode (default): oracle-based μ selection from complementarity, with filter reset each iteration.
Fixed mode: monotone decrease μ_new = factor · μ until barrier subproblem converges to barrier_tol_factor · μ. Triggered automatically when the free-mode oracle stalls.
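The monotone schedule amounts to a simple geometric loop. A sketch with the documented defaults (factor 0.2, barrier_tol_factor 10; the inner barrier solve is elided as a comment since it is not the point here):

```rust
/// Fixed-mode barrier schedule: shrink μ by a constant factor until it
/// falls below the overall tolerance, solving each barrier subproblem
/// to a tolerance proportional to μ. Returns (outer iterations, final μ).
fn monotone_schedule(mu0: f64, factor: f64, tol: f64) -> (usize, f64) {
    let mut mu = mu0;
    let mut outer = 0;
    while mu > tol {
        let _subproblem_tol = 10.0 * mu; // barrier_tol_factor · μ
        // ... solve min φ(x, μ) to _subproblem_tol here ...
        mu *= factor; // μ_new = factor · μ
        outer += 1;
    }
    (outer, mu)
}

fn main() {
    let (outer, mu) = monotone_schedule(0.1, 0.2, 1e-8);
    println!("{outer} outer iterations, final mu = {mu:e}");
}
```

With μ₀ = 0.1 and factor 0.2, reaching 1e-8 takes 11 outer iterations; the adaptive (Free) mode exists precisely to cut this fixed outer loop short when the oracle can shrink μ faster.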
Restoration phase
When the filter line search fails (all backtracking steps rejected), restoration recovers feasibility:
- Gauss-Newton restoration (fast): minimize (1/2)||g(x)||² using GN steps. Quadratic convergence for nonlinear equalities; falls back to gradient descent when the Jacobian is rank-deficient.
- NLP restoration subproblem (robust, triggered after 5 GN failures): solve an auxiliary NLP

  min ρ·(Σpᵢ + Σnᵢ) + (η/2)·||D_R(x − x_r)||²  s.t.  g(x) − p + n = g_target

  which minimizes constraint violation with a proximity term to the reference point x_r.
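The fast phase is just Gauss-Newton on ½||g(x)||². On a toy single-constraint problem (invented for illustration, not ripopt code) the minimum-norm GN step dx = −Jᵀ(JJᵀ)⁻¹g converges quadratically to a feasible point:

```rust
/// Gauss-Newton restoration for one constraint g(x) = x0² + x1² − 1 = 0.
/// Returns the feasible point found and the iteration count.
fn restore(mut x: [f64; 2]) -> ([f64; 2], usize) {
    for it in 0..50 {
        let g = x[0] * x[0] + x[1] * x[1] - 1.0;
        if g.abs() < 1e-12 { return (x, it); }
        let j = [2.0 * x[0], 2.0 * x[1]]; // Jacobian row of g
        let jjt = j[0] * j[0] + j[1] * j[1];
        // minimum-norm Gauss-Newton step dx = −Jᵀ (J Jᵀ)⁻¹ g
        x[0] -= j[0] * g / jjt;
        x[1] -= j[1] * g / jjt;
    }
    (x, 50)
}

fn main() {
    let (x, iters) = restore([2.0, 0.0]);
    println!("feasible point {:?} after {} iterations", x, iters);
}
```

When `jjt` becomes tiny (rank-deficient Jacobian), this step blows up, which is exactly the situation where the solver described above falls back to gradient descent and, eventually, to the full NLP restoration subproblem.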
Fallback cascade
When the primary IPM fails, ripopt automatically tries:
- L-BFGS Hessian approximation: replaces exact Hessian with L-BFGS curvature pairs (no second derivatives needed)
- Augmented Lagrangian: for equality-only problems; L-BFGS inner solver with multiplier updates
- SQP: sequential quadratic programming for small constrained problems
- Slack reformulation: converts inequality constraints to equality + bounds on slacks (`g(x) − s = 0`)
Each fallback uses the same interface and inherits the time budget from the primary solve.
Convergence criteria
Three scaled KKT residuals must be below tolerance:
| Residual | Formula | Default tol |
|---|---|---|
| Primal infeasibility | maxᵢ \|gᵢ(x) − proj(gᵢ(x), [gₗᵢ, gᵤᵢ])\| | 1e-4 |
| Dual infeasibility | ‖∇f(x) + J(x)ᵀy − z_l + z_u‖∞ / s_d | 1.0 |
| Complementarity | maxᵢ \|zᵢ·sᵢ − μ\| / s_c (over all bound pairs) | 1e-4 |
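As a toy illustration of these residuals (unscaled, plain Rust, not solver code), take min ½x² subject to x ≥ 1: the solution is x* = 1 with bound multiplier z* = 1, since stationarity requires ∇f − z = x − z = 0 there. Near the solution all three residuals are tiny:

```rust
/// Unscaled KKT residuals for min ½x² s.t. x ≥ 1 at a point (x, z, μ):
/// (primal infeasibility, dual infeasibility, complementarity error).
fn kkt_residuals(x: f64, z: f64, mu: f64) -> (f64, f64, f64) {
    let primal = (1.0 - x).max(0.0);        // violation of the bound x ≥ 1
    let dual = (x - z).abs();               // |∇f(x) − z|
    let compl = (z * (x - 1.0) - mu).abs(); // |z·s − μ| with slack s = x − 1
    (primal, dual, compl)
}

fn main() {
    let (p, d, c) = kkt_residuals(1.0 + 1e-9, 1.0 - 1e-9, 1e-9);
    println!("primal = {p:e}, dual = {d:e}, compl = {c:e}");
}
```

Note the complementarity residual is measured against the current μ, not against zero: the iterate is judged converged for the barrier subproblem, and overall convergence additionally requires μ itself to be small.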
Key papers
- Wächter & Biegler (2006). On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming.
- Mehrotra (1992). On the implementation of a primal-dual interior point method. SIAM J. Optimization.
- Gondzio (1996). Multiple centrality corrections in a primal-dual method for linear programming. Computational Optimization and Applications.
- Fletcher & Leyffer (2002). Nonlinear programming without a penalty function. Mathematical Programming.
Solver Options
All options are set via SolverOptions. Defaults match Ipopt where applicable.
```rust
let opts = SolverOptions {
    tol: 1e-8,
    max_iter: 500,
    ..SolverOptions::default()
};
```
Convergence
| Option | Default | Description |
|---|---|---|
| `tol` | 1e-8 | KKT optimality tolerance (dual infeasibility + complementarity) |
| `constr_viol_tol` | 1e-4 | Constraint violation tolerance |
| `dual_inf_tol` | 1.0 | Dual infeasibility tolerance (scaled) |
| `compl_inf_tol` | 1e-4 | Complementarity tolerance |
| `max_iter` | 3000 | Maximum iterations |
| `max_wall_time` | 0.0 | Wall-clock limit in seconds (0 = unlimited) |
| `stall_iter_limit` | 30 | Max iters without 1% improvement before stall detection (0 = off) |
| `early_stall_timeout` | 120.0 | Max seconds for first 3 iterations (0 = off) |
Barrier parameter
| Option | Default | Description |
|---|---|---|
| `mu_init` | 0.1 | Initial barrier parameter |
| `mu_min` | 1e-11 | Minimum barrier parameter floor |
| `mu_strategy_adaptive` | true | true = oracle-based (Free mode); false = monotone |
| `kappa` | 10.0 | Barrier decrease divisor: μ_new = avg_compl / kappa |
| `mu_linear_decrease_factor` | 0.2 | Monotone mode: μ_new = factor * μ |
| `mu_superlinear_decrease_power` | 1.5 | Monotone mode superlinear exponent |
| `barrier_tol_factor` | 10.0 | Subproblem tol = barrier_tol_factor * μ |
| `mu_allow_increase` | true | Allow μ to increase after restoration/stall recovery |
| `adaptive_mu_monotone_init_factor` | 0.8 | Initial μ factor when entering Fixed (monotone) mode |
| `mu_oracle_quality_function` | false | Use quality function for μ selection |
Linear solver
| Option | Default | Description |
|---|---|---|
| `sparse_threshold` | 110 | Switch to sparse multifrontal solver when n+m ≥ threshold |
| `linear_solver` | Direct | Direct (MUMPS/BK), Iterative (MINRES), or Hybrid |
Mehrotra predictor-corrector
| Option | Default | Description |
|---|---|---|
| `mehrotra_pc` | true | Enable Mehrotra predictor-corrector (20–40% fewer iterations) |
| `gondzio_mcc_max` | 3 | Max Gondzio centrality corrections per iteration (0 = off) |
Line search and corrections
| Option | Default | Description |
|---|---|---|
| `max_soc` | 4 | Max second-order corrections per step |
| `tau_min` | 0.99 | Fraction-to-boundary parameter (τ in step size rule) |
| `constraint_slack_barrier` | true | Include constraint slack log-barriers in filter merit function |
| `watchdog_shortened_iter_trigger` | 10 | Consecutive short steps before watchdog |
| `watchdog_trial_iter_max` | 3 | Watchdog trial iterations |
Hessian strategy
| Option | Default | Description |
|---|---|---|
| `hessian_approximation_lbfgs` | false | Use L-BFGS instead of exact Hessian (no hessian_values needed) |
| `enable_lbfgs_hessian_fallback` | true | Auto-retry with L-BFGS if exact Hessian IPM fails |
Fallback cascade
| Option | Default | Description |
|---|---|---|
| `enable_slack_fallback` | true | Reformulate inequalities with explicit slacks on failure |
| `enable_al_fallback` | true | Try Augmented Lagrangian if IPM fails (equality problems) |
| `enable_sqp_fallback` | true | Try SQP if AL/slack also fail |
| `enable_lbfgs_fallback` | true | Try L-BFGS for unconstrained problems on IPM failure |
Restoration
| Option | Default | Description |
|---|---|---|
| `restoration_max_iter` | 200 | Max iterations for NLP restoration subproblem |
| `disable_nlp_restoration` | false | Disable NLP restoration (prevents recursion in inner solves) |
Warm start
| Option | Default | Description |
|---|---|---|
| `warm_start` | false | Initialize from a previous solution |
| `warm_start_bound_push` | 1e-3 | Bound push for warm-started variables |
| `warm_start_bound_frac` | 1e-3 | Bound fraction for warm-started variables |
| `warm_start_mult_bound_push` | 1e-3 | Multiplier floor for warm start |
Initial point
| Option | Default | Description |
|---|---|---|
| `bound_push` | 1e-2 | κ₁: push initial x away from bounds by max(κ₁, κ₂·(u−l)) |
| `bound_frac` | 1e-2 | κ₂: fraction of bound gap used for push |
| `slack_bound_push` | 1e-2 | Slack variable bound push |
| `slack_bound_frac` | 1e-2 | Slack variable bound fraction |
| `least_squares_mult_init` | true | Initialize y by least-squares on stationarity |
| `constr_mult_init_max` | 1000.0 | Cap on LS multiplier init magnitude |
Bound thresholds
| Option | Default | Description |
|---|---|---|
| `nlp_lower_bound_inf` | -1e20 | Treat variable/constraint bounds below this as -∞ |
| `nlp_upper_bound_inf` | 1e20 | Treat variable/constraint bounds above this as +∞ |
Preprocessing and detection
| Option | Default | Description |
|---|---|---|
| `enable_preprocessing` | true | Eliminate fixed variables and redundant constraints |
| `detect_linear_constraints` | true | Skip Hessian for constraints with zero second derivatives |
| `proactive_infeasibility_detection` | false | Early infeasibility detection during iterations |
Diagnostics
| Option | Default | Description |
|---|---|---|
| `print_level` | 5 | Verbosity: 0 = silent, 5 = full iteration table + diagnostics |
KKT matrix dump (instrumentation)
| Option | Default | Description |
|---|---|---|
| `kkt_dump_dir` | None | If set, write each KKT matrix to this directory after factorization |
| `kkt_dump_name` | "problem" | Problem name prefix for dump filenames |
When kkt_dump_dir is set, ripopt writes two files per iteration:
- `<name>_<iter:04>.mtx` — Matrix Market symmetric format, lower triangle
- `<name>_<iter:04>.json` — metadata: n, m, rhs, inertia, status
This is useful for collecting benchmark matrices for external sparse solvers.
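Since the dumps use the standard Matrix Market coordinate format, they are easy to load elsewhere. A minimal reader sketch (assumes the standard symmetric coordinate layout; `parse_mtx` is a hypothetical helper, not part of ripopt):

```rust
/// Parse a Matrix Market symmetric-coordinate file into (rows, cols,
/// COO triplets). Matrix Market indices are 1-based; we return 0-based.
fn parse_mtx(text: &str) -> (usize, usize, Vec<(usize, usize, f64)>) {
    // skip the banner and any '%' comment lines
    let mut lines = text.lines().filter(|l| !l.starts_with('%') && !l.trim().is_empty());
    // size line: "rows cols nnz" (nnz is implied by the entry count)
    let header: Vec<usize> = lines.next().unwrap()
        .split_whitespace().take(2).map(|t| t.parse().unwrap()).collect();
    let triplets = lines.map(|l| {
        let f: Vec<&str> = l.split_whitespace().collect();
        (f[0].parse::<usize>().unwrap() - 1,
         f[1].parse::<usize>().unwrap() - 1,
         f[2].parse::<f64>().unwrap())
    }).collect();
    (header[0], header[1], triplets)
}

fn main() {
    let text = "%%MatrixMarket matrix coordinate real symmetric\n\
                3 3 4\n\
                1 1 2.0\n\
                2 1 -1.0\n\
                2 2 2.0\n\
                3 3 1.0\n";
    let (n, _m, entries) = parse_mtx(text);
    println!("{n}x{n} matrix, {} stored lower-triangle entries", entries.len());
}
```

Remember that only the lower triangle is stored, so a consumer that needs the full matrix must mirror each off-diagonal triplet.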
Diagnostics
SolverDiagnostics captures structured data about solver behavior. It is available at result.diagnostics and printed to stderr when print_level >= 5.
Diagnostic block format
```text
--- ripopt diagnostics ---
status: Optimal
iterations: 8
wall_time: 0.001s
final_mu: 4.14e-9
final_primal_inf: 5.55e-17
final_dual_inf: 1.04e-12
final_compl: 7.75e-9
restoration_count: 0
nlp_restoration_count: 0
mu_mode_switches: 2
filter_rejects: 0
watchdog_activations: 0
soc_corrections: 0
--- end diagnostics ---
```
Fields
| Field | Meaning |
|---|---|
| `status` | Final solve status |
| `iterations` | Total IPM iterations |
| `wall_time_secs` | Total wall-clock time |
| `final_mu` | Barrier parameter at termination |
| `final_primal_inf` | Constraint violation at termination |
| `final_dual_inf` | Dual infeasibility (stationarity error) at termination |
| `final_compl` | Complementarity error at termination |
| `restoration_count` | Gauss-Newton restoration entries |
| `nlp_restoration_count` | Full NLP restoration entries (heavier) |
| `mu_mode_switches` | Barrier mode transitions (Free ↔ Fixed) |
| `filter_rejects` | Line search failures (backtracking exhausted) |
| `watchdog_activations` | Watchdog triggered by consecutive short steps |
| `soc_corrections` | Second-order corrections accepted |
| `fallback_used` | Which fallback succeeded, if any (lbfgs_hessian, augmented_lagrangian, sqp, slack) |
Interpreting the diagnostics
Healthy solve (HS071-like): 0 restorations, 0 filter rejects, 2–4 mu mode switches, final_mu near 1e-9, final_primal_inf and final_dual_inf both below tol.
Struggling solve: Many filter rejects, multiple restorations, final_mu stuck above 1e-4, or a fallback was used.
Pattern → cause → fix
| Pattern | Likely cause | Options to try |
|---|---|---|
| `filter_rejects > 5` | Line search fighting constraints | Increase `mu_init`, reduce `kappa` |
| `restoration_count > 3` | Repeated feasibility recovery | Set `enable_slack_fallback: true`, increase `mu_init` |
| `mu_mode_switches > 10` | Free/Fixed cycling | Set `mu_strategy_adaptive: false` |
| `final_mu` stuck > 1e-4 | Barrier parameter not decreasing | Increase `max_iter`, reduce `mu_linear_decrease_factor` |
| `fallback_used: Some(...)` | Primary IPM failed | Check which fallback; consider changing Hessian strategy |
| `soc_corrections > 0` | Nonlinear constraints causing step rejection | Normal; increase `max_soc` if filter rejects are also high |
| `watchdog_activations > 0` | Tiny steps detected | Try `hessian_approximation_lbfgs: true` |
Example: healthy solve (HS071)
```rust
let result = ripopt::solve(&Hs071, &SolverOptions::default());
let d = &result.diagnostics;
assert_eq!(d.filter_rejects, 0);
assert_eq!(d.restoration_count, 0);
assert!(d.final_mu < 1e-8);
// iterations ≈ 8
```
Example: struggling solve — reading and reacting
```rust
let r1 = ripopt::solve(&problem, &opts);
if r1.status != SolveStatus::Optimal {
    let d = &r1.diagnostics;
    let mut opts2 = opts.clone();
    if d.filter_rejects > 5    { opts2.mu_init = 1.0; opts2.kappa = 3.0; }
    if d.restoration_count > 3 { opts2.enable_slack_fallback = true; }
    if d.final_mu > 1e-4       { opts2.mu_strategy_adaptive = false; }
    if d.mu_mode_switches > 10 { opts2.hessian_approximation_lbfgs = true; }
    let r2 = ripopt::solve(&problem, &opts2);
}
```
Using diagnostics with Claude Code
Claude Code can read the --- ripopt diagnostics --- block from stderr and automatically adjust solver options:
```sh
claude -p "
Run: cargo run --example debug_tp374 2>&1
Parse the diagnostics block.
If not Optimal:
- High filter_rejects → increase mu_init, decrease kappa
- High restoration_count → try enable_slack_fallback
- mu stuck high → try mu_strategy_adaptive: false
- Large multipliers → try hessian_approximation_lbfgs: true
Adjust options, re-run, compare. Up to 3 attempts.
"
```
The Rust code is a pure reporter. All intelligence — pattern matching, strategy selection — lives in Claude's reasoning.
NLP Tutorial Series
A series of 15 Jupyter notebooks taking you from "beginner in nonlinear programming" to understanding every component of ripopt — written in Python (numpy, scipy, matplotlib), with each notebook connecting directly to ripopt's source code.
All notebooks are in tutorials/ in the repository.
Module 1: NLP Foundations
| # | Notebook | Key concepts |
|---|---|---|
| 01 | Introduction to Nonlinear Programming | Problem formulation, feasibility vs optimality, HS071 |
| 02 | Unconstrained Optimization | Gradient descent, Newton's method, Armijo/Wolfe line search, L-BFGS |
| 03 | KKT Conditions | Lagrangian, complementarity, geometric interpretation, LICQ |
Module 2: Classical Constrained Methods
| # | Notebook | Key concepts |
|---|---|---|
| 04 | Penalty Methods and Augmented Lagrangian | Quadratic penalty, ill-conditioning, Method of Multipliers |
| 05 | Sequential Quadratic Programming | QP subproblem, active set, BFGS update, L1 merit function |
Module 3: Barrier Methods
| # | Notebook | Key concepts |
|---|---|---|
| 06 | The Logarithmic Barrier Method | Log barrier, central path, barrier parameter sequence |
| 07 | Primal-Dual Interior Point Method | Perturbed KKT, Newton step derivation, fraction-to-boundary, complete IPM |
Module 4: Linear Algebra at the Core
| # | Notebook | Key concepts |
|---|---|---|
| 08 | The KKT (Saddle-Point) Matrix | KKT structure, Sylvester's Law of Inertia, condensed KKT |
| 09 | LDL^T and Bunch-Kaufman Pivoting | Symmetric Gaussian elimination, BK pivoting, inertia from D |
| 10 | Inertia Correction and Regularization | δ_w and δ_c perturbations, correction loop, iterative refinement |
Module 5: Line Search, Filter, and Convergence
| # | Notebook | Key concepts |
|---|---|---|
| 11 | Filter Line Search | Filter concept, acceptance criteria, switching condition, SOC |
| 12 | Convergence Criteria and Scaling | Three KKT residuals, s_d/s_c scaling, NLP scaling |
Module 6: Robustness
| # | Notebook | Key concepts |
|---|---|---|
| 13 | The Restoration Phase | Gauss-Newton restoration, NLP restoration subproblem, recovery strategies |
| 14 | Mehrotra Predictor-Corrector | Affine-scaling predictor, adaptive centering σ = (μ_aff/μ)³, Gondzio corrections |
Module 7: ripopt in Practice
| # | Notebook | Key concepts |
|---|---|---|
| 15 | ripopt in Practice | NlpProblem interface, fallback cascade, sensitivity analysis, benchmarking |
Running the notebooks
```sh
git clone https://github.com/jkitchin/ripopt.git
cd ripopt/tutorials
pip install numpy scipy matplotlib jupyter
jupyter lab
```
Each notebook is self-contained. No Rust installation required for notebooks 01–14. Notebook 15 describes the Rust interface with Python equivalents for all concepts.
ripopt source pointers
Each notebook ends with a "Connection to ripopt" section. The key mappings:
| Topic | Notebook | ripopt source |
|---|---|---|
| KKT assembly | 08 | src/kkt.rs — assemble_kkt() |
| Bunch-Kaufman LDL^T | 09 | src/linear_solver/dense.rs — DenseLdl |
| Inertia correction | 10 | src/kkt.rs — factor_with_inertia_correction() |
| Filter line search | 11 | src/filter.rs — check_acceptability() |
| Convergence | 12 | src/convergence.rs — check_convergence() |
| Restoration | 13 | src/restoration.rs, src/restoration_nlp.rs |
| Mehrotra PC | 14 | src/ipm.rs — main loop |
| Problem interface | 15 | src/problem.rs — NlpProblem trait |