ripopt

A memory-safe interior point optimizer written in Rust, inspired by Ipopt.

What is ripopt?

ripopt solves nonlinear programming (NLP) problems of the form:

min  f(x)
s.t. g_l <= g(x) <= g_u
     x_l <= x    <= x_u

It implements a primal-dual interior point method (IPM) with a logarithmic barrier formulation. The solver is written entirely in Rust (~21,700 lines) with no external C/Fortran dependencies.

At a glance

Property | Value
Algorithm | Primal-dual IPM with Mehrotra predictor-corrector
Linear solver | Dense Bunch-Kaufman LDL^T (small) / rmumps multifrontal (large)
HS benchmark | 118/120 (98.3%); surpasses Ipopt's 116/120
CUTEst benchmark | 562/727 (77.3%); surpasses Ipopt's 561/727
Speed vs Ipopt | 15.0x geo mean on HS suite, 9.9x on CUTEst
Language | Rust (no unsafe FFI)
Interfaces | Rust API, C API, Pyomo/AMPL, GAMS, Julia/JuMP

Key features

  • Primal-dual IPM with logarithmic barrier and fraction-to-boundary rule
  • Mehrotra predictor-corrector with Gondzio centrality corrections (enabled by default)
  • Filter line search with switching condition and second-order corrections
  • Two-phase restoration: fast Gauss-Newton + full NLP restoration subproblem
  • Multi-solver fallback: L-BFGS → Augmented Lagrangian → SQP → slack reformulation
  • Dense condensed KKT (Schur complement) for tall-narrow problems: 100-800x speedup on m >> n
  • Adaptive and monotone barrier strategies with automatic mode switching
  • NE-to-LS reformulation for overdetermined nonlinear equation systems
  • Parametric sensitivity analysis (sIPOPT-style)
  • Preprocessing: fixed variable elimination, redundant constraint removal, bound tightening
  • C API mirroring Ipopt's interface; Pyomo, GAMS, and Julia/JuMP wrappers

Quick example

use ripopt::{NlpProblem, SolveStatus, SolverOptions};

struct Hs071;

impl NlpProblem for Hs071 {
    // ... (see API Guide for full implementation)
}

fn main() {
    let result = ripopt::solve(&Hs071, &SolverOptions::default());
    assert_eq!(result.status, SolveStatus::Optimal);
    println!("f* = {:.6}", result.objective);  // 17.014017
}

See the API Guide for a complete walkthrough.

Installation

Prerequisites: Rust and Cargo

ripopt is written in Rust. Install the Rust toolchain via rustup:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"

Verify: rustc --version && cargo --version

Install ripopt

git clone https://github.com/jkitchin/ripopt.git
cd ripopt
make install

This builds the release binary and shared library, then installs:

  • ripopt AMPL solver binary → ~/.cargo/bin/
  • libripopt.dylib / libripopt.so → ~/.local/lib/

Verify: ripopt --version

Using ripopt as a Rust library

Add to your Cargo.toml:

[dependencies]
ripopt = { git = "https://github.com/jkitchin/ripopt" }

Using ripopt with Python/Pyomo

pip install ./pyomo-ripopt

from pyomo.environ import *
solver = SolverFactory('ripopt')
result = solver.solve(model, tee=True)

Using ripopt with Julia/JuMP

cargo build --release
julia -e 'import Pkg; Pkg.develop(path="Ripopt.jl")'

ENV["RIPOPT_LIBRARY_PATH"] = "/path/to/ripopt/target/release"
using JuMP, Ripopt

model = Model(Ripopt.Optimizer)
@variable(model, 1 <= x[1:4] <= 5)
set_start_value.(x, [1.0, 5.0, 5.0, 1.0])
@NLobjective(model, Min, x[1]*x[4]*(x[1]+x[2]+x[3]) + x[3])
@NLconstraint(model, x[1]*x[2]*x[3]*x[4] >= 25)
@NLconstraint(model, x[1]^2 + x[2]^2 + x[3]^2 + x[4]^2 == 40)
optimize!(model)
println(objective_value(model))  # ≈ 17.014

Using ripopt with GAMS

cargo build --release
make -C gams && sudo make -C gams install

option nlp = ripopt;
Solve mymodel using nlp minimizing obj;

Options via ripopt.opt (same format as Ipopt):

tol 1e-8
max_iter 1000
print_level 5

Using the C API

After make install, link against the shared library using ripopt.h:

cc my_program.c -I/path/to/ripopt -L"$HOME/.local/lib" -lripopt -lm

Uninstall

make uninstall

API Guide

Defining a problem

Implement the NlpProblem trait. Every method has a default that returns zeros or empty vectors, so you only need to override what your problem uses.

Full example: HS071

HS071 is the classic Hock-Schittkowski test problem #71:

min   x₁·x₄·(x₁+x₂+x₃) + x₃
s.t.  x₁·x₂·x₃·x₄ ≥ 25
      x₁²+x₂²+x₃²+x₄² = 40
      1 ≤ xᵢ ≤ 5
x* ≈ (1.000, 4.743, 3.821, 1.379),  f* ≈ 17.014

use ripopt::{NlpProblem, SolveStatus, SolverOptions};

struct Hs071;

impl NlpProblem for Hs071 {
    fn num_variables(&self) -> usize { 4 }
    fn num_constraints(&self) -> usize { 2 }

    fn bounds(&self, x_l: &mut [f64], x_u: &mut [f64]) {
        for i in 0..4 { x_l[i] = 1.0; x_u[i] = 5.0; }
    }

    fn constraint_bounds(&self, g_l: &mut [f64], g_u: &mut [f64]) {
        g_l[0] = 25.0; g_u[0] = f64::INFINITY;  // inequality: product >= 25
        g_l[1] = 40.0; g_u[1] = 40.0;            // equality: sum of squares = 40
    }

    fn initial_point(&self, x0: &mut [f64]) {
        x0.copy_from_slice(&[1.0, 5.0, 5.0, 1.0]);
    }

    fn objective(&self, x: &[f64], _new_x: bool, obj: &mut f64) -> bool {
        *obj = x[0] * x[3] * (x[0] + x[1] + x[2]) + x[2];
        true
    }

    fn gradient(&self, x: &[f64], _new_x: bool, grad: &mut [f64]) -> bool {
        grad[0] = x[3] * (2.0 * x[0] + x[1] + x[2]);
        grad[1] = x[0] * x[3];
        grad[2] = x[0] * x[3] + 1.0;
        grad[3] = x[0] * (x[0] + x[1] + x[2]);
        true
    }

    fn constraints(&self, x: &[f64], _new_x: bool, g: &mut [f64]) -> bool {
        g[0] = x[0] * x[1] * x[2] * x[3];
        g[1] = x[0]*x[0] + x[1]*x[1] + x[2]*x[2] + x[3]*x[3];
        true
    }

    // Jacobian: list of (row, col) pairs for non-zero entries
    fn jacobian_structure(&self) -> (Vec<usize>, Vec<usize>) {
        (vec![0,0,0,0, 1,1,1,1], vec![0,1,2,3, 0,1,2,3])
    }

    fn jacobian_values(&self, x: &[f64], _new_x: bool, vals: &mut [f64]) -> bool {
        vals[0] = x[1]*x[2]*x[3]; vals[1] = x[0]*x[2]*x[3];
        vals[2] = x[0]*x[1]*x[3]; vals[3] = x[0]*x[1]*x[2];
        vals[4] = 2.0*x[0]; vals[5] = 2.0*x[1];
        vals[6] = 2.0*x[2]; vals[7] = 2.0*x[3];
        true
    }

    // Hessian of Lagrangian: lower triangle only. L = obj_factor*f + Σ λᵢ*gᵢ
    fn hessian_structure(&self) -> (Vec<usize>, Vec<usize>) {
        (vec![0,1,1,2,2,2,3,3,3,3], vec![0,0,1,0,1,2,0,1,2,3])
    }

    fn hessian_values(&self, x: &[f64], _new_x: bool, obj_factor: f64, lambda: &[f64], vals: &mut [f64]) -> bool {
        vals[0] = obj_factor * 2.0*x[3]        + lambda[1]*2.0;
        vals[1] = obj_factor * x[3]             + lambda[0]*x[2]*x[3];
        vals[2] =                                 lambda[1]*2.0;
        vals[3] = obj_factor * x[3]             + lambda[0]*x[1]*x[3];
        vals[4] =                                 lambda[0]*x[0]*x[3];
        vals[5] =                                 lambda[1]*2.0;
        vals[6] = obj_factor*(2.0*x[0]+x[1]+x[2]) + lambda[0]*x[1]*x[2];
        vals[7] = obj_factor * x[0]             + lambda[0]*x[0]*x[2];
        vals[8] = obj_factor * x[0]             + lambda[0]*x[0]*x[1];
        vals[9] =                                 lambda[1]*2.0;
        true
    }
}

fn main() {
    let result = ripopt::solve(&Hs071, &SolverOptions::default());
    assert_eq!(result.status, SolveStatus::Optimal);
    println!("f* = {:.6}", result.objective);          // 17.014017
    println!("x* = {:?}", result.x);                   // [1.0, 4.743, 3.821, 1.379]
    println!("iters = {}", result.iterations);         // ~8
}

Trait reference

Method | Required | Description
num_variables() | Yes | Number of decision variables n
num_constraints() | Yes | Number of constraints m
bounds(x_l, x_u) | Yes | Variable bounds (use ±∞ for unbounded)
constraint_bounds(g_l, g_u) | Yes | Constraint bounds (g_l = g_u for equality)
initial_point(x0) | Yes | Starting point; must be strictly interior to bounds
objective(x) | Yes | Objective value f(x)
gradient(x, grad) | Yes | Gradient ∇f(x)
constraints(x, g) | if m > 0 | Constraint values g(x)
jacobian_structure() | if m > 0 | Sparsity pattern as (row, col) triplets
jacobian_values(x, vals) | if m > 0 | Jacobian values in same order as structure
hessian_structure() | Recommended | Lower triangle sparsity of ∇²L
hessian_values(x, obj_factor, lambda, vals) | Recommended | ∇²L = obj_factor·∇²f + Σ λᵢ·∇²gᵢ

Note: If hessian_structure() / hessian_values() are not implemented, ripopt automatically falls back to L-BFGS Hessian approximation.

The Lagrangian sign convention

ripopt uses L = f + yᵀg (same as Ipopt). The Hessian requested is:

obj_factor · ∇²f(x) + Σᵢ λᵢ · ∇²gᵢ(x)

Only the lower triangle is needed: entries (i, j) with i ≥ j.
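A quick way to validate a hand-coded Hessian against this convention is a central finite difference of the Lagrangian gradient. The sketch below uses a hypothetical two-variable toy problem (f = x₀²·x₁ with one constraint g₀ = x₀·x₁ — not from ripopt) and checks each lower-triangle entry:

```rust
// Toy problem (illustrative, not ripopt code): f = x0^2 * x1, g0 = x0 * x1.
// Check the analytic lower triangle of grad^2 L, L = obj_factor*f + lambda*g,
// against a central finite difference of grad L.

fn grad_l(x: &[f64; 2], obj_factor: f64, lambda: f64) -> [f64; 2] {
    [
        obj_factor * 2.0 * x[0] * x[1] + lambda * x[1], // dL/dx0
        obj_factor * x[0] * x[0] + lambda * x[0],       // dL/dx1
    ]
}

// Analytic lower triangle in (row, col) order (0,0), (1,0), (1,1).
fn hess_lower(x: &[f64; 2], obj_factor: f64, lambda: f64) -> [f64; 3] {
    [obj_factor * 2.0 * x[1], obj_factor * 2.0 * x[0] + lambda, 0.0]
}

fn fd_hess_entry(x: &[f64; 2], i: usize, j: usize, obj_factor: f64, lambda: f64) -> f64 {
    let h = 1e-6;
    let (mut xp, mut xm) = (*x, *x);
    xp[j] += h;
    xm[j] -= h;
    (grad_l(&xp, obj_factor, lambda)[i] - grad_l(&xm, obj_factor, lambda)[i]) / (2.0 * h)
}

fn max_hess_error(x: &[f64; 2], obj_factor: f64, lambda: f64) -> f64 {
    let pattern = [(0usize, 0usize), (1, 0), (1, 1)];
    let analytic = hess_lower(x, obj_factor, lambda);
    pattern
        .iter()
        .zip(analytic.iter())
        .map(|(&(i, j), &a)| (fd_hess_entry(x, i, j, obj_factor, lambda) - a).abs())
        .fold(0.0, f64::max)
}

fn main() {
    let err = max_hess_error(&[1.3, -0.7], 1.0, 2.5);
    println!("max |FD - analytic| = {err:.3e}");
    assert!(err < 1e-5);
}
```

The same check applied to your own `hessian_values` implementation (with `obj_factor` and `lambda` varied independently) catches most sign-convention mistakes.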

Reading the result

#![allow(unused)]
fn main() {
let result = ripopt::solve(&problem, &opts);

match result.status {
    SolveStatus::Optimal => {
        println!("x = {:?}", result.x);       // optimal point
        println!("f = {}", result.objective); // objective value
        println!("y = {:?}", result.y);       // constraint multipliers
        println!("z_l = {:?}", result.z_l);   // lower bound multipliers
        println!("z_u = {:?}", result.z_u);   // upper bound multipliers
    }
    SolveStatus::MaxIterations => { /* increase max_iter or adjust options */ }
    SolveStatus::NumericalError => { /* try fallbacks, different starting point */ }
    SolveStatus::LocalInfeasibility => { /* problem is locally infeasible */ }
    SolveStatus::RestorationFailed => { /* feasibility recovery failed */ }
}
}

Sensitivity analysis

After a successful solve, compute how the optimal solution changes under parameter perturbations:

#![allow(unused)]
fn main() {
// Parametric sensitivity: dx/dp at optimum
let (result, sens) = ripopt::solve_with_sensitivity(&problem, &opts);
if let Some(s) = sens {
    println!("dx/dp = {:?}", s.dx_dp);  // primal sensitivity
    println!("dy/dp = {:?}", s.dy_dp);  // multiplier sensitivity
}
}

See src/sensitivity.rs for the sIPOPT-style implementation.

Customizing options

#![allow(unused)]
fn main() {
let opts = SolverOptions {
    tol: 1e-10,
    max_iter: 500,
    print_level: 3,
    hessian_approximation_lbfgs: true,  // skip exact Hessian
    ..SolverOptions::default()
};
let result = ripopt::solve(&problem, &opts);
}

See Solver Options for the full reference.

Benchmarks

ripopt is benchmarked against Ipopt (native C++ with MUMPS linear solver) on three standard test suites and several domain-specific problem sets. All benchmarks run on the same hardware; both solvers receive identical problem data through the same Rust trait interface.

Hock-Schittkowski Suite (120 problems)

The HS suite is the classic test set for NLP solvers, covering small problems (n ≤ 15) with mixed equality/inequality constraints.

Metric | ripopt | Ipopt (MUMPS)
Problems solved | 118/120 (98.3%) | 116/120 (96.7%)
Solved by ripopt only | 2
Solved by Ipopt only | 0

On 116 commonly solved problems (strict Optimal status required):

Metric | Value
Geometric mean speedup | 15.0x
Median speedup | 14.2x
ripopt faster | 113/116 (97%)
ripopt 10x+ faster | 84/116 (72%)
Matching objectives (rel diff < 1e-4) | 110/116 (95%)

Run: make hs-run

CUTEst Benchmark Suite (727 problems)

CUTEst covers a wide range of problem types, sizes, and structures. Problems range from n=2 to n=10,000+.

Metric | ripopt | Ipopt (MUMPS)
Total solved | 562/727 (77.3%) | 561/727 (77.2%)
Solved by ripopt only | 37
Solved by Ipopt only | 36

On 525 commonly solved problems:

Metric | Value
Geometric mean speedup | 9.9x
Median speedup | 18.9x
ripopt faster | 440/525 (84%)
ripopt 10x+ faster | 332/525 (63%)
Matching objectives (rel diff < 1e-4) | 436/525 (83.0%)

Run: make benchmark (full suite, ~2 hours) or individual problems:

cargo run --bin cutest_suite --features cutest,ipopt-native --release -- PROBLEM1 PROBLEM2

Where ripopt is faster

  1. Small problems (n < 50). Stack allocation and cache-efficient dense BK factorization avoid sparse overhead. 2–5x per-iteration speedup over Ipopt.
  2. Tall-narrow problems (m >> n, n ≤ 100). Dense condensed KKT via Schur complement reduces an (n+m)×(n+m) sparse factorization to n×n dense. 100–800x speedup on EXPFITC (n=5, m=502), OET3 (n=4, m=1002).
  3. Better iteration counts. Mehrotra predictor-corrector with Gondzio centrality corrections (enabled by default) cuts iterations by 20–40% on many problems.
  4. Fallback cascade. NE-to-LS reformulation, two-phase restoration, and multi-solver fallback recover problems Ipopt cannot solve.

Where Ipopt is faster

  1. Large sparse problems (n+m > 5,000). Ipopt's Fortran MUMPS is 10–15x faster per factorization than rmumps on large systems.
  2. Some medium constrained problems. A handful of problems (CORE1, HAIFAM, NET1) have higher per-iteration cost in ripopt's line search or fallback cascade.

Large-Scale Benchmarks

Both solvers use the same Rust trait interface; ripopt uses rmumps, Ipopt uses Fortran MUMPS.

Problem | n | m | ripopt | Ipopt | speedup
Rosenbrock 500 | 500 | 0 | Optimal, 0.003s | Optimal, 0.199s | 76.2x
Bratu 1K | 1,000 | 998 | Optimal, 0.002s | Optimal, 0.002s | 1.1x
SparseQP 1K | 500 | 500 | Optimal, 0.176s | Optimal, 0.004s | 0.02x
OptControl 2.5K | 2,499 | 1,250 | Optimal, 0.006s | Optimal, 0.002s | 0.4x

Numbers above are fresh (v0.7.0). The larger problems (Poisson 2.5K, Rosenbrock 5K, Bratu 10K, OptControl 20K, Poisson 50K, SparseQP 100K) are not re-run for v0.7.0 — the stricter KKT backward-error probe causes Poisson 2.5K to exhaust max_iter and the full sweep is gated behind a separate investigation. Historical timings from v0.6.2 remain in benchmarks/large_scale/large_scale_results.txt snapshots.

Run: make benchmark

Domain-Specific Benchmarks

Suite | Problems | ripopt | Ipopt | Notes
Electrolyte thermodynamics | 13 | 13/13 (100%) | 12/13 (92.3%) | 17.5x geo mean; ripopt uniquely solves seawater speciation
Grid (AC Optimal Power Flow) | 4 | 3/4 (75%) | 4/4 (100%) | 2.8x geo mean on 3 commonly-solved; case30_ieee regression
CHO parameter estimation | 1 | 0/1 | 0/1 | n=21,672, m=21,660; both hit iteration limit
Gas pipeline NLPs | 4 | see suite README | see suite README | PDE-discretized Euler equations on pipe networks (gaslib11/40). Standalone; does not feed BENCHMARK_REPORT.md
Water distribution NLPs | 6 | see suite README | see suite README | MINLPLib water network design instances. Standalone; does not feed BENCHMARK_REPORT.md

Algorithm

ripopt implements the primal-dual interior point method (IPM) described in Wächter & Biegler (2006), with several extensions for robustness and performance.

Problem form

min   f(x)
s.t.  g_l ≤ g(x) ≤ g_u
      x_l ≤ x    ≤ x_u

Barrier formulation

Inequality constraints and variable bounds are handled via a logarithmic barrier:

φ(x, μ) = f(x) − μ Σ log(xᵢ − xₗᵢ) − μ Σ log(xᵤᵢ − xᵢ) − μ Σ log(gᵤⱼ − gⱼ) − μ Σ log(gⱼ − gₗⱼ)

As μ → 0, the solution of min φ(x, μ) converges to the solution of the original NLP.
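The μ → 0 behavior is easy to see in one dimension. The sketch below (illustrative, not ripopt code) minimizes x subject to x ≥ 1: the barrier minimizer is x = 1 + μ in closed form, and damped Newton with a fraction-to-boundary safeguard recovers it numerically for a decreasing μ sequence:

```rust
// 1-D barrier illustration: min x s.t. x >= 1.
// phi(x, mu) = x - mu*ln(x - 1), whose minimizer is x = 1 + mu -> 1 as mu -> 0.

/// Newton on phi'(x, mu) = 0, with a fraction-to-boundary damping so that
/// the iterate stays strictly interior (x - 1 > 0).
fn barrier_min(mu: f64) -> f64 {
    let mut x = 2.0; // strictly interior start
    for _ in 0..100 {
        let s = x - 1.0;            // slack to the bound
        let g = 1.0 - mu / s;       // phi'
        let h = mu / (s * s);       // phi'' (always > 0 in the interior)
        let dx = -g / h;
        // fraction-to-boundary: never consume more than 99% of the slack
        let alpha = if dx < 0.0 { (0.99 * s / -dx).min(1.0) } else { 1.0 };
        x += alpha * dx;
    }
    x
}

fn main() {
    for mu in [1.0, 1e-2, 1e-4, 1e-8] {
        let x = barrier_min(mu);
        println!("mu = {mu:>8.1e}  argmin phi = {x:.10}");
        assert!((x - (1.0 + mu)).abs() < 1e-9);
    }
}
```

Note that without the damping step the full Newton step from x = 2 would cross the boundary for small μ; this is exactly why the solver's fraction-to-boundary rule exists.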

Perturbed KKT system

Stationarity of the barrier problem gives the perturbed KKT conditions:

  • Stationarity: ∇f(x) + J(x)ᵀy − z_l + z_u = 0
  • Feasibility: g_l ≤ g(x) ≤ g_u
  • Complementarity: z_lᵢ · (xᵢ − xₗᵢ) = μ and z_uᵢ · (xᵤᵢ − xᵢ) = μ

where y are constraint multipliers and z_l, z_u are bound multipliers.

Newton step

Each iteration solves the augmented KKT system:

[H + Σ + δ_w I,  Jᵀ      ] [dx]   [r_d]
[J,              −δ_c I  ] [dy] = [r_p]

  • H = ∇²_xx L (Lagrangian Hessian, lower triangle)
  • Σ = diag(z_l/s_l + z_u/s_u) (barrier diagonal)
  • δ_w, δ_c = inertia correction perturbations
  • r_d = −(∇f + Jᵀy − z_l + z_u) (dual residual)
  • r_p = −g(x) + slack (primal residual)
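For intuition, here is the smallest possible instance of that system, solved by hand-coded elimination (a sketch, not the solver's linear algebra): the equality-constrained QP min ½x² s.t. x = 1 with n = m = 1, where one full Newton step is exact because the Hessian is constant and the constraint is linear:

```rust
// One Newton step on the 2x2 augmented KKT system for
//   min 0.5*x^2  s.t.  x = 1   (H = 1, J = 1, no perturbations needed).
// [ H  J^T ] [dx]   [ r_d ]
// [ J   0  ] [dy] = [ r_p ]

fn solve_kkt_2x2(h: f64, j: f64, r_d: f64, r_p: f64) -> (f64, f64) {
    // Second row gives j*dx = r_p; substitute into the first row for dy.
    let dx = r_p / j;
    let dy = (r_d - h * dx) / j;
    (dx, dy)
}

fn main() {
    let (x, y) = (0.0_f64, 0.0_f64);   // starting point and multiplier
    let (h, j) = (1.0, 1.0);           // Hessian of L, constraint Jacobian
    let r_d = -(x + j * y);            // dual residual: -(grad f + J^T y)
    let r_p = -(x - 1.0);              // primal residual: -(g(x) - 1)
    let (dx, dy) = solve_kkt_2x2(h, j, r_d, r_p);
    let (x1, y1) = (x + dx, y + dy);
    println!("x = {x1}, y = {y1}");
    // Stationarity holds exactly: grad f + J^T y = 1 + (-1) = 0.
    assert_eq!(x1, 1.0);
    assert_eq!(y1, -1.0);
}
```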

Linear solvers

Problem size | Solver | Notes
n + m < 110 | Dense Bunch-Kaufman LDL^T | Stack allocation, exact inertia
m ≥ 2n, n ≤ 100 | Dense condensed KKT (Schur complement) | 100-800x speedup for m >> n
n + m ≥ 110 | rmumps multifrontal LDL^T | SuiteSparse AMD reordering

Inertia correction

After factorization, the inertia (sign counts of D in LDL^T) is checked. The correct inertia is (n positive, m negative, 0 zero). If wrong, δ_w and δ_c are increased and the matrix is re-factored.
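The correction loop can be sketched on a 3x3 system (n = 2, m = 1) with an indefinite Hessian. This is illustrative code, not ripopt internals: it uses an unpivoted LDL^T that is adequate for this toy matrix, and escalates δ_w geometrically (factor 8 here; the real escalation schedule is an implementation detail):

```rust
// Inertia-correction loop on K = [[H + delta_w*I, J^T], [J, -delta_c]].
// Target inertia for n = 2 variables, m = 1 constraint is (2 pos, 1 neg).

/// Diagonal D of an unpivoted LDL^T of a symmetric 3x3 (lower triangle in `a`).
fn ldl_diag(a: [[f64; 3]; 3]) -> [f64; 3] {
    let mut m = a;
    let mut d = [0.0; 3];
    for k in 0..3 {
        d[k] = m[k][k];
        for i in (k + 1)..3 {
            let l = m[i][k] / d[k];
            for j in (k + 1)..=i {
                m[i][j] -= l * m[j][k]; // Schur-complement update, lower triangle
            }
        }
    }
    d
}

fn inertia(d: &[f64; 3]) -> (usize, usize) {
    let pos = d.iter().filter(|&&v| v > 0.0).count();
    let neg = d.iter().filter(|&&v| v < 0.0).count();
    (pos, neg)
}

/// Increase delta_w until the KKT matrix has the correct inertia (2, 1).
fn correct_inertia(h: [f64; 2], j: [f64; 2], delta_c: f64) -> f64 {
    let mut delta_w = 0.0;
    loop {
        let k = [
            [h[0] + delta_w, 0.0, 0.0],
            [0.0, h[1] + delta_w, 0.0],
            [j[0], j[1], -delta_c],
        ];
        if inertia(&ldl_diag(k)) == (2, 1) {
            return delta_w;
        }
        delta_w = if delta_w == 0.0 { 1e-4 } else { 8.0 * delta_w };
    }
}

fn main() {
    // Indefinite Hessian diag(1, -1) with Jacobian [1, 0]: wrong inertia at delta_w = 0.
    let delta_w = correct_inertia([1.0, -1.0], [1.0, 0.0], 1e-8);
    println!("accepted delta_w = {delta_w:e}");
    assert!(delta_w > 1.0); // must dominate the -1 curvature direction
}
```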

Mehrotra predictor-corrector

Instead of a fixed centering parameter σ = 0.1, ripopt uses Mehrotra's adaptive rule:

  1. Predictor step (σ = 0): compute affine-scaling direction dx_aff
  2. Adaptive centering: σ = (μ_aff/μ)³ where μ_aff is the complementarity after the affine step
  3. Corrector step: solve the same KKT system with a modified RHS using σ·μ and affine cross-terms

This reuses the same LDL^T factorization (one extra triangular solve) and typically reduces iteration counts by 20–40%. Gondzio centrality corrections (up to 3 per iteration) further drive outlier complementarity pairs back to the central path.
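The adaptive rule is simple arithmetic once the affine direction is known. The sketch below (toy complementarity data, not ripopt code) computes the largest feasible affine step, the predicted complementarity μ_aff, and σ = (μ_aff/μ)³ — when the affine step makes good progress, σ is driven toward zero:

```rust
// Mehrotra's adaptive centering on slack/multiplier pairs (s, z).

fn mu_of(s: &[f64], z: &[f64]) -> f64 {
    s.iter().zip(z).map(|(si, zi)| si * zi).sum::<f64>() / s.len() as f64
}

/// Largest alpha in (0, 1] with v + alpha*dv >= (1 - tau)*v componentwise.
fn max_alpha(v: &[f64], dv: &[f64], tau: f64) -> f64 {
    v.iter()
        .zip(dv)
        .filter(|(_, &d)| d < 0.0)
        .map(|(&vi, &d)| tau * vi / -d)
        .fold(1.0_f64, f64::min)
}

fn sigma_mehrotra(s: &[f64], z: &[f64], ds: &[f64], dz: &[f64]) -> f64 {
    let mu = mu_of(s, z);
    let a = max_alpha(s, ds, 1.0).min(max_alpha(z, dz, 1.0));
    let s_aff: Vec<f64> = s.iter().zip(ds).map(|(si, d)| si + a * d).collect();
    let z_aff: Vec<f64> = z.iter().zip(dz).map(|(zi, d)| zi + a * d).collect();
    let mu_aff = mu_of(&s_aff, &z_aff);
    (mu_aff / mu).powi(3) // sigma = (mu_aff / mu)^3
}

fn main() {
    // An affine step that cuts complementarity by 10x per pair => tiny sigma.
    let sigma = sigma_mehrotra(&[1.0, 1.0], &[1.0, 1.0], &[-0.9, -0.9], &[-0.9, -0.9]);
    println!("sigma = {sigma:.6}");
    assert!(sigma < 0.01); // little centering needed, aggressive mu reduction
}
```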

Filter line search

The step size α is chosen by the filter method (Fletcher & Leyffer, 2002):

  • A filter maintains pairs (θ, φ) where θ = constraint violation and φ = barrier objective
  • A trial point is acceptable if it is not dominated by any filter entry: θ_trial < (1−γ_θ)θ or φ_trial < φ − γ_φ·θ
  • Switching condition: when sufficiently feasible (θ < θ_min), switch from filter to Armijo criterion
  • Second-order corrections (SOC): when a step is rejected, correct the constraint residual to account for nonlinearity (up to 4 SOC per iteration)
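The acceptance test in the second bullet is a few lines of code. This sketch uses the standard (θ, φ) dominance check with illustrative margin parameters γ_θ and γ_φ (the names follow Fletcher & Leyffer; the values here are not ripopt's defaults):

```rust
// Filter acceptance: a trial point must improve on EVERY stored entry in
// either constraint violation (theta) or barrier objective (phi), by a margin.

struct Filter {
    entries: Vec<(f64, f64)>, // (theta, phi) pairs
    gamma_theta: f64,
    gamma_phi: f64,
}

impl Filter {
    fn acceptable(&self, theta: f64, phi: f64) -> bool {
        self.entries.iter().all(|&(t, p)| {
            theta < (1.0 - self.gamma_theta) * t || phi < p - self.gamma_phi * t
        })
    }
}

fn main() {
    let filter = Filter {
        entries: vec![(1.0, 10.0), (0.1, 12.0)],
        gamma_theta: 1e-5,
        gamma_phi: 1e-5,
    };
    assert!(filter.acceptable(0.05, 11.0));  // improves theta vs both entries
    assert!(!filter.acceptable(1.0, 10.0));  // dominated by the first entry
    println!("filter checks passed");
}
```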

Step size rules

  • Fraction-to-boundary: α ≤ τ · max{α : x + α·dx > x_l, z + α·dz > 0} with τ = 0.99
  • Separate primal and dual step sizes, with Ipopt's alpha_y = alpha_d convention

Barrier parameter update

Free mode (default): oracle-based μ selection from complementarity, with filter reset each iteration.

Fixed mode: monotone decrease μ_new = factor · μ until barrier subproblem converges to barrier_tol_factor · μ. Triggered automatically when the free-mode oracle stalls.
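The monotone schedule can be sketched with the option names from the Solver Options section. The update rule below (min of linear and superlinear decrease, floored at mu_min) is how Ipopt phrases its monotone mode; treat the exact ripopt rule as an assumption here:

```rust
// Monotone (Fixed-mode) barrier schedule sketch:
// mu_new = max(mu_min, min(factor * mu, mu^power)).
fn next_mu(mu: f64, factor: f64, power: f64, mu_min: f64) -> f64 {
    (factor * mu).min(mu.powf(power)).max(mu_min)
}

fn main() {
    // Defaults from the options table: factor = 0.2, power = 1.5, mu_min = 1e-11.
    let (mut mu, mut updates) = (0.1_f64, 0);
    while mu > 1e-9 {
        mu = next_mu(mu, 0.2, 1.5, 1e-11);
        updates += 1;
        println!("mu = {mu:.3e}");
    }
    // The superlinear term takes over once mu is small, so few updates suffice.
    assert!(updates < 20);
}
```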

Restoration phase

When the filter line search fails (all backtracking steps rejected), restoration recovers feasibility:

  1. Gauss-Newton restoration (fast): minimize (1/2)||g(x)||² using GN steps. Quadratic convergence for nonlinear equalities; falls back to gradient descent when Jacobian is rank-deficient.

  2. NLP restoration subproblem (robust, triggered after 5 GN failures): solve an auxiliary NLP:

    min ρ·(Σpᵢ + Σnᵢ) + (η/2)·||D_R(x − x_r)||²
    s.t. g(x) − p + n = g_target
    

    This minimizes constraint violation with a proximity term to the reference point x_r.
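The fast phase is plain Gauss-Newton on ½‖g(x)‖². For a single scalar residual this coincides with Newton's method on g, which the sketch below (illustrative, not ripopt code) applies to g(x) = x² − 2, recovering a feasible point x = √2:

```rust
// Gauss-Newton restoration on one equality: g(x) = x^2 - 2 = 0.
// Minimizing 0.5*g(x)^2 with GN steps dx = -g/g' converges quadratically.

fn restore(mut x: f64) -> f64 {
    for _ in 0..20 {
        let g = x * x - 2.0; // constraint residual
        let jac = 2.0 * x;   // dg/dx
        x -= g / jac;        // Gauss-Newton step
        if g.abs() < 1e-12 {
            break;
        }
    }
    x
}

fn main() {
    let x = restore(3.0);
    println!("restored x = {x:.12}");
    assert!((x * x - 2.0).abs() < 1e-12); // feasible to machine-level accuracy
}
```

When the Jacobian is (near) singular — here, near x = 0 — the step blows up, which is why the solver falls back to gradient descent for rank-deficient Jacobians.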

Fallback cascade

When the primary IPM fails, ripopt automatically tries:

  1. L-BFGS Hessian approximation: replaces exact Hessian with L-BFGS curvature pairs (no second derivatives needed)
  2. Augmented Lagrangian: for equality-only problems; L-BFGS inner solver with multiplier updates
  3. SQP: sequential quadratic programming for small constrained problems
  4. Slack reformulation: converts inequality constraints to equality + bounds on slacks (g(x) − s = 0)

Each fallback uses the same interface and inherits the time budget from the primary solve.

Convergence criteria

Three scaled KKT residuals must be below tolerance:

Residual | Formula | Default tol
Primal infeasibility | maxᵢ |gᵢ(x) − proj(gᵢ, [gₗᵢ, gᵤᵢ])| | constr_viol_tol (1e-4)
Dual infeasibility | ‖∇f(x) + J(x)ᵀy − z_l + z_u‖∞ / s_d | tol (1e-8)
Complementarity | maxᵢ |zₗᵢ·sₗᵢ − μ| / s_c | tol (1e-8)
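The primal measure is just the distance of each constraint value to its interval. A minimal sketch (not ripopt internals, and omitting any scaling factors) using HS071-style bounds:

```rust
// Primal infeasibility as a max-norm distance of g(x) to [g_l, g_u]:
// proj(g_i, [lo, hi]) is a clamp, so the violation is |g_i - clamp(g_i)|.
fn primal_infeasibility(g: &[f64], g_l: &[f64], g_u: &[f64]) -> f64 {
    g.iter()
        .zip(g_l.iter().zip(g_u))
        .map(|(&gi, (&lo, &hi))| (gi - gi.clamp(lo, hi)).abs())
        .fold(0.0_f64, f64::max)
}

fn main() {
    // HS071-style bounds: g0 >= 25 (one-sided), g1 = 40 (equality).
    let inf = primal_infeasibility(&[26.0, 40.5], &[25.0, 40.0], &[f64::INFINITY, 40.0]);
    println!("primal infeasibility = {inf}");
    assert_eq!(inf, 0.5); // g0 is feasible; g1 violates its equality by 0.5
}
```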

Key papers

  • Wächter & Biegler (2006). On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming.
  • Mehrotra (1992). On the implementation of a primal-dual interior point method. SIAM J. Optimization.
  • Gondzio (1996). Multiple centrality corrections in a primal-dual method for linear programming. Computational Optimization and Applications.
  • Fletcher & Leyffer (2002). Nonlinear programming without a penalty function. Mathematical Programming.

Solver Options

All options are set via SolverOptions. Defaults match Ipopt where applicable.

#![allow(unused)]
fn main() {
let opts = SolverOptions {
    tol: 1e-8,
    max_iter: 500,
    ..SolverOptions::default()
};
}

Convergence

Option | Default | Description
tol | 1e-8 | KKT optimality tolerance (dual infeasibility + complementarity)
constr_viol_tol | 1e-4 | Constraint violation tolerance
dual_inf_tol | 1.0 | Dual infeasibility tolerance (scaled)
compl_inf_tol | 1e-4 | Complementarity tolerance
max_iter | 3000 | Maximum iterations
max_wall_time | 0.0 | Wall-clock limit in seconds (0 = unlimited)
stall_iter_limit | 30 | Max iters without 1% improvement before stall detection (0 = off)
early_stall_timeout | 120.0 | Max seconds for first 3 iterations (0 = off)

Barrier parameter

Option | Default | Description
mu_init | 0.1 | Initial barrier parameter
mu_min | 1e-11 | Minimum barrier parameter floor
mu_strategy_adaptive | true | true = oracle-based (Free mode); false = monotone
kappa | 10.0 | Barrier decrease divisor: μ_new = avg_compl / kappa
mu_linear_decrease_factor | 0.2 | Monotone mode: μ_new = factor * μ
mu_superlinear_decrease_power | 1.5 | Monotone mode superlinear exponent
barrier_tol_factor | 10.0 | Subproblem tol = barrier_tol_factor * μ
mu_allow_increase | true | Allow μ to increase after restoration/stall recovery
adaptive_mu_monotone_init_factor | 0.8 | Initial μ factor when entering Fixed (monotone) mode
mu_oracle_quality_function | false | Use quality function for μ selection

Linear solver

Option | Default | Description
sparse_threshold | 110 | Switch to sparse multifrontal solver when n+m ≥ threshold
linear_solver | Direct | Direct (MUMPS/BK), Iterative (MINRES), or Hybrid

Mehrotra predictor-corrector

Option | Default | Description
mehrotra_pc | true | Enable Mehrotra predictor-corrector (20–40% fewer iterations)
gondzio_mcc_max | 3 | Max Gondzio centrality corrections per iteration (0 = off)

Line search and corrections

Option | Default | Description
max_soc | 4 | Max second-order corrections per step
tau_min | 0.99 | Fraction-to-boundary parameter (τ in step size rule)
constraint_slack_barrier | true | Include constraint slack log-barriers in filter merit function
watchdog_shortened_iter_trigger | 10 | Consecutive short steps before watchdog
watchdog_trial_iter_max | 3 | Watchdog trial iterations

Hessian strategy

Option | Default | Description
hessian_approximation_lbfgs | false | Use L-BFGS instead of exact Hessian (no hessian_values needed)
enable_lbfgs_hessian_fallback | true | Auto-retry with L-BFGS if exact-Hessian IPM fails

Fallback cascade

Option | Default | Description
enable_slack_fallback | true | Reformulate inequalities with explicit slacks on failure
enable_al_fallback | true | Try Augmented Lagrangian if IPM fails (equality problems)
enable_sqp_fallback | true | Try SQP if AL/slack also fail
enable_lbfgs_fallback | true | Try L-BFGS for unconstrained problems on IPM failure

Restoration

Option | Default | Description
restoration_max_iter | 200 | Max iterations for NLP restoration subproblem
disable_nlp_restoration | false | Disable NLP restoration (prevents recursion in inner solves)

Warm start

Option | Default | Description
warm_start | false | Initialize from a previous solution
warm_start_bound_push | 1e-3 | Bound push for warm-started variables
warm_start_bound_frac | 1e-3 | Bound fraction for warm-started variables
warm_start_mult_bound_push | 1e-3 | Multiplier floor for warm start

Initial point

Option | Default | Description
bound_push | 1e-2 | κ₁: push initial x away from bounds by max(κ₁, κ₂·(u−l))
bound_frac | 1e-2 | κ₂: fraction of bound gap used for push
slack_bound_push | 1e-2 | Slack variable bound push
slack_bound_frac | 1e-2 | Slack variable bound fraction
least_squares_mult_init | true | Initialize y by least-squares on stationarity
constr_mult_init_max | 1000.0 | Cap on LS multiplier init magnitude
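The bound-push rule above can be sketched directly from the table's formula. This is illustrative only — the exact ripopt rule may differ in details (e.g. scaling the push by |l| as Ipopt does):

```rust
// Initial-point push sketch: move x0 at least max(kappa1, kappa2*(u - l))
// away from each finite bound, per the bound_push/bound_frac options above.
fn push_interior(x0: f64, l: f64, u: f64, kappa1: f64, kappa2: f64) -> f64 {
    let p = kappa1.max(kappa2 * (u - l));
    x0.clamp(l + p, u - p)
}

fn main() {
    // HS071 starts x0 = 1.0 exactly on its lower bound [1, 5]; with the
    // defaults kappa1 = kappa2 = 1e-2 it gets pushed in by 0.01*(5-1) = 0.04.
    let x = push_interior(1.0, 1.0, 5.0, 1e-2, 1e-2);
    println!("pushed x0 = {x}");
    assert!((x - 1.04).abs() < 1e-12);
}
```

This is why the "strictly interior" requirement on `initial_point` is soft in practice: points on or outside the bounds are nudged into the interior before the first barrier evaluation.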

Bound thresholds

Option | Default | Description
nlp_lower_bound_inf | -1e20 | Treat variable/constraint bounds below this as −∞
nlp_upper_bound_inf | 1e20 | Treat variable/constraint bounds above this as +∞

Preprocessing and detection

Option | Default | Description
enable_preprocessing | true | Eliminate fixed variables and redundant constraints
detect_linear_constraints | true | Skip Hessian for constraints with zero second derivatives
proactive_infeasibility_detection | false | Early infeasibility detection during iterations

Diagnostics

Option | Default | Description
print_level | 5 | Verbosity: 0 = silent, 5 = full iteration table + diagnostics

KKT matrix dump (instrumentation)

Option | Default | Description
kkt_dump_dir | None | If set, write each KKT matrix to this directory after factorization
kkt_dump_name | "problem" | Problem name prefix for dump filenames

When kkt_dump_dir is set, ripopt writes two files per iteration:

  • <name>_<iter:04>.mtx — Matrix Market symmetric format, lower triangle
  • <name>_<iter:04>.json — Metadata: n, m, rhs, inertia, status

This is useful for collecting benchmark matrices for external sparse solvers.

Diagnostics

SolverDiagnostics captures structured data about solver behavior. It is available at result.diagnostics and printed to stderr when print_level >= 5.

Diagnostic block format

--- ripopt diagnostics ---
status: Optimal
iterations: 8
wall_time: 0.001s
final_mu: 4.14e-9
final_primal_inf: 5.55e-17
final_dual_inf: 1.04e-12
final_compl: 7.75e-9
restoration_count: 0
nlp_restoration_count: 0
mu_mode_switches: 2
filter_rejects: 0
watchdog_activations: 0
soc_corrections: 0
--- end diagnostics ---

Fields

Field | Meaning
status | Final solve status
iterations | Total IPM iterations
wall_time_secs | Total wall-clock time
final_mu | Barrier parameter at termination
final_primal_inf | Constraint violation at termination
final_dual_inf | Dual infeasibility (stationarity error) at termination
final_compl | Complementarity error at termination
restoration_count | Gauss-Newton restoration entries
nlp_restoration_count | Full NLP restoration entries (heavier)
mu_mode_switches | Barrier mode transitions (Free ↔ Fixed)
filter_rejects | Line search failures (backtracking exhausted)
watchdog_activations | Watchdog triggered by consecutive short steps
soc_corrections | Second-order corrections accepted
fallback_used | Which fallback succeeded, if any (lbfgs_hessian, augmented_lagrangian, sqp, slack)

Interpreting the diagnostics

Healthy solve (HS071-like): 0 restorations, 0 filter rejects, 2–4 mu mode switches, final_mu near 1e-9, final_primal_inf and final_dual_inf both below tol.

Struggling solve: Many filter rejects, multiple restorations, final_mu stuck above 1e-4, or a fallback was used.

Pattern → cause → fix

Pattern | Likely cause | Options to try
filter_rejects > 5 | Line search fighting constraints | Increase mu_init, reduce kappa
restoration_count > 3 | Repeated feasibility recovery | Set enable_slack_fallback: true, increase mu_init
mu_mode_switches > 10 | Free/Fixed cycling | Set mu_strategy_adaptive: false
final_mu stuck > 1e-4 | Barrier parameter not decreasing | Increase max_iter, reduce mu_linear_decrease_factor
fallback_used: Some(...) | Primary IPM failed | Check which fallback; consider changing Hessian strategy
soc_corrections > 0 | Nonlinear constraints causing step rejection | Normal; increase max_soc if filter rejects are also high
watchdog_activations > 0 | Tiny steps detected | Try hessian_approximation_lbfgs: true

Example: healthy solve (HS071)

#![allow(unused)]
fn main() {
let result = ripopt::solve(&Hs071, &SolverOptions::default());
let d = &result.diagnostics;
assert_eq!(d.filter_rejects, 0);
assert_eq!(d.restoration_count, 0);
assert!(d.final_mu < 1e-8);
// iterations ≈ 8
}

Example: struggling solve — reading and reacting

#![allow(unused)]
fn main() {
let r1 = ripopt::solve(&problem, &opts);

if r1.status != SolveStatus::Optimal {
    let d = &r1.diagnostics;
    let mut opts2 = opts.clone();

    if d.filter_rejects > 5 {
        opts2.mu_init = 1.0;
        opts2.kappa = 3.0;
    }
    if d.restoration_count > 3 {
        opts2.enable_slack_fallback = true;
    }
    if d.final_mu > 1e-4 {
        opts2.mu_strategy_adaptive = false;
    }
    if d.mu_mode_switches > 10 {
        opts2.hessian_approximation_lbfgs = true;
    }

    let r2 = ripopt::solve(&problem, &opts2);
}
}

Using diagnostics with Claude Code

Claude Code can read the --- ripopt diagnostics --- block from stderr and automatically adjust solver options:

claude -p "
  Run: cargo run --example debug_tp374 2>&1
  Parse the diagnostics block.
  If not Optimal:
    - High filter_rejects → increase mu_init, decrease kappa
    - High restoration_count → try enable_slack_fallback
    - mu stuck high → try mu_strategy_adaptive: false
    - Large multipliers → try hessian_approximation_lbfgs: true
  Adjust options, re-run, compare. Up to 3 attempts.
"

The Rust code is a pure reporter. All intelligence — pattern matching, strategy selection — lives in Claude's reasoning.

NLP Tutorial Series

A series of 15 Jupyter notebooks taking you from "beginner in nonlinear programming" to understanding every component of ripopt — written in Python (numpy, scipy, matplotlib), with each notebook connecting directly to ripopt's source code.

All notebooks are in tutorials/ in the repository.

Module 1: NLP Foundations

# | Notebook | Key concepts
01 | Introduction to Nonlinear Programming | Problem formulation, feasibility vs optimality, HS071
02 | Unconstrained Optimization | Gradient descent, Newton's method, Armijo/Wolfe line search, L-BFGS
03 | KKT Conditions | Lagrangian, complementarity, geometric interpretation, LICQ

Module 2: Classical Constrained Methods

# | Notebook | Key concepts
04 | Penalty Methods and Augmented Lagrangian | Quadratic penalty, ill-conditioning, Method of Multipliers
05 | Sequential Quadratic Programming | QP subproblem, active set, BFGS update, L1 merit function

Module 3: Barrier Methods

# | Notebook | Key concepts
06 | The Logarithmic Barrier Method | Log barrier, central path, barrier parameter sequence
07 | Primal-Dual Interior Point Method | Perturbed KKT, Newton step derivation, fraction-to-boundary, complete IPM

Module 4: Linear Algebra at the Core

# | Notebook | Key concepts
08 | The KKT (Saddle-Point) Matrix | KKT structure, Sylvester's Law of Inertia, condensed KKT
09 | LDL^T and Bunch-Kaufman Pivoting | Symmetric Gaussian elimination, BK pivoting, inertia from D
10 | Inertia Correction and Regularization | δ_w and δ_c perturbations, correction loop, iterative refinement

Module 5: Line Search, Filter, and Convergence

# | Notebook | Key concepts
11 | Filter Line Search | Filter concept, acceptance criteria, switching condition, SOC
12 | Convergence Criteria and Scaling | Three KKT residuals, s_d/s_c scaling, NLP scaling

Module 6: Robustness

# | Notebook | Key concepts
13 | The Restoration Phase | Gauss-Newton restoration, NLP restoration subproblem, recovery strategies
14 | Mehrotra Predictor-Corrector | Affine-scaling predictor, adaptive centering σ = (μ_aff/μ)³, Gondzio corrections

Module 7: ripopt in Practice

# | Notebook | Key concepts
15 | ripopt in Practice | NlpProblem interface, fallback cascade, sensitivity analysis, benchmarking

Running the notebooks

git clone https://github.com/jkitchin/ripopt.git
cd ripopt/tutorials
pip install numpy scipy matplotlib jupyter
jupyter lab

Each notebook is self-contained. No Rust installation required for notebooks 01–14. Notebook 15 describes the Rust interface with Python equivalents for all concepts.

ripopt source pointers

Each notebook ends with a "Connection to ripopt" section. The key mappings:

Topic | Notebook | ripopt source
KKT assembly | 08 | src/kkt.rs: assemble_kkt()
Bunch-Kaufman LDL^T | 09 | src/linear_solver/dense.rs: DenseLdl
Inertia correction | 10 | src/kkt.rs: factor_with_inertia_correction()
Filter line search | 11 | src/filter.rs: check_acceptability()
Convergence | 12 | src/convergence.rs: check_convergence()
Restoration | 13 | src/restoration.rs, src/restoration_nlp.rs
Mehrotra PC | 14 | src/ipm.rs: main loop
Problem interface | 15 | src/problem.rs: NlpProblem trait

API Documentation