## Nonconvex.jl

Toolbox for non-convex constrained optimization.
Author: mohamed82008
Stars: 36
Started: February 2021
Last updated: 1 year ago

# Nonconvex

This package implements and wraps a number of nonconvex constrained optimization algorithms and packages making use of automatic differentiation. The following algorithms are implemented:

• `MMA87`: the original method of moving asymptotes
• `MMA02`: the globally convergent method of moving asymptotes

The following packages are wrapped:

• `IpoptAlg`: a wrapper around Ipopt.jl
• `NLoptAlg`: a wrapper around NLopt.jl
• `AugLag`: a wrapper around Percival.jl which implements the augmented Lagrangian algorithm

The method of moving asymptotes algorithms were generalized to handle infinite variable bounds. In the augmented Lagrangian algorithm, a block constraint can be handled efficiently by defining a custom adjoint rule for the block constraint using `ChainRulesCore.jl`. This custom adjoint will be picked up by `Nonconvex.jl` when calculating the gradient of the augmented Lagrangian.
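As a sketch of what such a custom adjoint can look like, the block below writes out the two inequality constraints from the examples that follow as a single vector-valued function and attaches an `rrule` with a hand-derived closed-form Jacobian. The function name `block_g` and the Jacobian are illustrative, not part of the package; only the `rrule` mechanism is what `Nonconvex.jl` picks up as described above.

```julia
using ChainRulesCore

# Both inequality constraints g(x, 2, 0) and g(x, -1, 1) from the examples
# below, evaluated in one call as a block constraint.
block_g(x) = [8x[1]^3 - x[2], (1 - x[1])^3 - x[2]]

# Custom adjoint: the pullback is a single Jacobian-transpose-vector product,
# rather than differentiating each constraint component separately.
function ChainRulesCore.rrule(::typeof(block_g), x::AbstractVector)
    val = block_g(x)
    J = [24x[1]^2 -1.0; -3(1 - x[1])^2 -1.0]  # closed-form Jacobian of block_g
    val, Δ -> (NO_FIELDS, J' * Δ)
end
```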

# Examples

```julia
using Nonconvex
```

## Problem definition

```julia
f(x) = sqrt(x[2])
g(x, a, b) = (a*x[1] + b)^3 - x[2]

m = Model(f)
addvar!(m, [0.0, 0.0], [10.0, 10.0]) # two variables with box bounds
add_ineq_constraint!(m, x -> g(x, 2, 0))
add_ineq_constraint!(m, x -> g(x, -1, 1))
```

## Block constraints

```julia
m = Model(f)
addvar!(m, [0.0, 0.0], [10.0, 10.0])
block_constr = FunctionWrapper(x -> [g(x, 2, 0), g(x, -1, 1)], 2)
add_ineq_constraint!(m, block_constr)
```

## MMA

```julia
alg = MMA87() # or MMA02()
options = Nonconvex.MMAOptions(
    tol = Nonconvex.Tolerance(kkt = 1e-6, f = 0.0), s_init = 0.1,
)
convcriteria = KKTCriteria()
r = Nonconvex.optimize(
    m, alg, [1.234, 2.345], options = options,
    convcriteria = convcriteria,
)
r.minimum
r.minimizer
```

## NLopt

```julia
alg = NLoptAlg(:LD_MMA)
options = Nonconvex.NLoptOptions()
r = optimize(m, alg, [1.234, 2.345], options = options)
r.minimum
r.minimizer
```

## Augmented Lagrangian / Percival

```julia
alg = AugLag()
options = Nonconvex.AugLagOptions()
r = optimize(m, alg, [1.234, 2.345], options = options)
r.minimum
r.minimizer
```

## Mixed equality and inequality constraints

```julia
f(x) = sqrt(x[2])
g(x, a, b) = (a*x[1] + b)^3 - x[2]

m = Model(f)
addvar!(m, [0.0, 0.0], [10.0, 10.0])
add_eq_constraint!(m, x -> g(x, 2, 0))
add_ineq_constraint!(m, x -> g(x, -1, 1))
```

## Ipopt

```julia
alg = IpoptAlg()
options = Nonconvex.IpoptOptions()
r = optimize(m, alg, [1.234, 2.345], options = options)
r.minimum
r.minimizer
```

## Mixed integer optimization with Juniper and Ipopt

To do mixed integer optimization using Juniper and Ipopt, you can use:

```julia
alg = JuniperIpoptAlg()
options = Nonconvex.JuniperIpoptOptions()
r = optimize(m, alg, [1.234, 2.345], options = options, integers = [false, true])
r.minimum
r.minimizer # [0.3327, 1]
```

## Custom gradients

A custom gradient rule for a function should be defined using ChainRulesCore's `rrule`. For example, the following can be used for the function `f` defined above.

```julia
using ChainRulesCore

function ChainRulesCore.rrule(::typeof(f), x::AbstractVector)
    val = f(x)
    grad = [0.0, 1 / (2 * sqrt(x[2]))]
    val, Δ -> (NO_FIELDS, Δ * grad)
end
```

You can check that the rule is correct in your tests using ChainRulesTestUtils.jl.

```julia
using ChainRulesTestUtils
test_rrule(f, [1.2, 3.6])
```

For full details on `rrule`s and related concepts, see the ChainRules documentation.

## Hack to use other automatic differentiation backends

For specific functions, if you want to use `ForwardDiff` instead of `Zygote`, one way to do this is to define an `rrule` that uses `ForwardDiff` to compute the gradient or Jacobian, e.g.:

```julia
using ChainRulesCore, ForwardDiff

function ChainRulesCore.rrule(::typeof(f), x::AbstractVector)
    val = f(x)
    grad = ForwardDiff.gradient(f, x)
    val, Δ -> (NO_FIELDS, Δ * grad)
end
```