
Percival.jl - An augmented Lagrangian solver


Percival is an implementation of the augmented Lagrangian solver described in

S. Arreckx, A. Lambe, J. R. R. A. Martins, and D. Orban (2016).
A Matrix-Free Augmented Lagrangian Algorithm with Application to Large-Scale Structural Design Optimization.
Optimization and Engineering, 17, 359–384. doi:10.1007/s11081-015-9287-9

with the internal solver tron from JSOSolvers.jl. To use Percival, you must pass it an NLPModel.

How to Cite

If you use Percival.jl in your work, please cite using the format given in CITATION.bib.


Installation

Use ] to enter pkg> mode in the Julia REPL, then

pkg> add Percival
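Equivalently, the same installation can be done programmatically through the Pkg standard library:

```julia
using Pkg
Pkg.add("Percival")
```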


Example

Consider the following 2-dimensional optimization problem with an equality constraint:

$$ \begin{equation} \min_{(x_1,x_2)} \quad (x_1 - 1)^2 + 100 (x_2 - x_1^2)^2 \quad \text{subject to} \quad x_1^2 + x_2^2 = 1. \end{equation} $$

You can solve a JuMP model by using NLPModelsJuMP.jl to convert it to an NLPModel.

using JuMP, NLPModelsJuMP, Percival
model = Model()
@variable(model, x[i=1:2], start = [-1.2; 1.0][i])
@NLobjective(model, Min, (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2)
@NLconstraint(model, x[1]^2 + x[2]^2 == 1)
nlp = MathOptNLPModel(model) # thin wrapper converting the JuMP model to an NLPModel
output = percival(nlp, verbose = 1)
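Like other JuliaSmoothOptimizers solvers, percival also takes keyword arguments controlling tolerances and budgets. The specific keywords below (atol, rtol, max_time) follow the usual JSO solver interface; treat them as an assumption and check the Percival documentation for the authoritative list:

```julia
using JuMP, NLPModelsJuMP, Percival

model = Model()
@variable(model, x[i = 1:2], start = [-1.2; 1.0][i])
@NLobjective(model, Min, (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2)
@NLconstraint(model, x[1]^2 + x[2]^2 == 1)
nlp = MathOptNLPModel(model)

# atol/rtol: absolute/relative optimality tolerances; max_time: time budget
# in seconds. These names follow the common JSO convention, not a guarantee
# specific to percival.
output = percival(nlp, atol = 1e-8, rtol = 1e-8, max_time = 30.0)
```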

percival accepts as input any instance of AbstractNLPModel. For instance, you can use automatic differentiation via ADNLPModels.jl to solve the same problem.

using ADNLPModels, Percival
nlp = ADNLPModel(
  x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2,
  [-1.2; 1.0],
  x -> [x[1]^2 + x[2]^2],
  [1.0], # constraint lower bound
  [1.0], # constraint upper bound (equal bounds encode the equality)
)
output = percival(nlp, verbose = 1)
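The object returned by percival is a GenericExecutionStats from SolverCore.jl; the fields inspected below follow my reading of that interface, so double-check them against the SolverCore documentation:

```julia
using ADNLPModels, Percival

nlp = ADNLPModel(
  x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2,
  [-1.2; 1.0],
  x -> [x[1]^2 + x[2]^2],
  [1.0], # constraint lower bound
  [1.0], # constraint upper bound
)
output = percival(nlp)

# Field names per SolverCore.jl's GenericExecutionStats (assumed):
println(output.status)    # termination status, e.g. :first_order
println(output.solution)  # approximate minimizer
println(output.objective) # objective value at the solution
```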

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Before opening a pull request, please start an issue or a discussion on the topic.

If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.