# Tutorial

This tutorial will take you through a basic optimization using `aiida-optimize`. It assumes that you are already familiar with using AiiDA.

## Motivation

First of all, why do we need a special optimization framework for AiiDA workflows? Couldn’t we use an existing library like `scipy.optimize` to do that?

Imagine you have a complex function that you want to optimize. Evaluating that function involves several steps and calls to different codes. As such, it’s a perfect fit to be implemented in an AiiDA workflow. Now you could just pass this to an optimization function from `scipy.optimize`, executing the workflow with the `run` method. However, this creates a problem: because `run` is a blocking call, the Python interpreter which executes this function needs to stay alive during the entire time that the optimization is running. If there’s a problem anywhere in the process, the results are essentially lost.

It would be much nicer then to create the optimization process in such a way that it can be shut down at any point. In essence, we want to create a _new_ AiiDA workflow that simply wraps the one evaluating the function. As a consequence, the optimization logic cannot be written in the usual, procedural way. Instead, it needs to be encoded in a stateful “optimization engine” that can be stopped, persisted and restarted. Because doing this involves a lot of boilerplate code, `aiida-optimize` takes away some of that complexity and provides some built-in optimization engines.

## A simple bisection

Now, we will see how to perform an optimization with `aiida-optimize`. First, we need an AiiDA WorkChain or workfunction to optimize. As a simple example, we create a workfunction that evaluates the sine:

```
# -*- coding: utf-8 -*-
# © 2017-2019, ETH Zurich, Institut für Theoretische Physik
# Author: Dominik Gresch <greschd@gmx.ch>
import numpy as np

from aiida.engine import workfunction
from aiida.orm import Float


@workfunction
def sin(x):
    # This is a bit improper: The new value should be created in a calculation.
    return Float(np.sin(x.value)).store()
```

Equivalently, we could also write a workchain that does the same:

```
# -*- coding: utf-8 -*-
# © 2017-2019, ETH Zurich, Institut für Theoretische Physik
# Author: Dominik Gresch <greschd@gmx.ch>
import numpy as np

from aiida.engine import WorkChain
from aiida.orm.nodes.data.float import Float


class Sin(WorkChain):
    """
    A simple workchain which represents the function to be optimized.
    """
    @classmethod
    def define(cls, spec):
        super(Sin, cls).define(spec)
        spec.input('x', valid_type=Float)
        spec.output('result', valid_type=Float)
        spec.outline(cls.evaluate)

    def evaluate(self):
        # This is a bit improper: The new value should be created in a calculation.
        self.out('result', Float(np.sin(self.inputs.x.value)).store())
```

Now we can use `aiida-optimize` with the `Bisection` engine to find a nodal point. To do this, we run the `OptimizationWorkChain` with the following inputs:

- `engine` is the optimization engine that we use. In this case, we pass the `Bisection` class.
- `engine_kwargs` are parameters that will be passed to the optimization engine. In the case of bisection, we pass the upper and lower boundaries of the bisection interval, and the target tolerance. Also, we need to pass the `result_key`, which is the name of the output argument of the workfunction or workchain that we are optimizing. For workfunctions, this is always `result`.
- `evaluate_process` is the workchain or workfunction that we want to optimize. In our case, that’s the `sin` workfunction or `Sin` workchain.
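To build some intuition for what the `Bisection` engine does with these parameters, here is a plain-Python sketch of interval bisection (a simplification; the actual engine launches each evaluation as a separate AiiDA process and tracks it through the workchain):

```python
from math import sin


def bisect(f, lower, upper, tol):
    """Repeatedly halve the interval [lower, upper] until it is
    narrower than tol, keeping a sign change (and hence a root of f)
    inside it. Assumes f(lower) and f(upper) have opposite signs."""
    while abs(upper - lower) > tol:
        midpoint = (lower + upper) / 2
        if (f(midpoint) > 0) == (f(lower) > 0):
            lower = midpoint  # no sign change in the lower half
        else:
            upper = midpoint  # the sign change is in the lower half
    return (lower + upper) / 2


# The same parameters as the engine_kwargs used in the script below:
root = bisect(sin, lower=-1.0, upper=1.3, tol=1e-3)  # converges towards the node at 0
```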

```
#!/usr/bin/env runaiida
# -*- coding: utf-8 -*-
# © 2017-2019, ETH Zurich, Institut für Theoretische Physik
# Author: Dominik Gresch <greschd@gmx.ch>
import sys
from os.path import abspath, dirname

from aiida.engine.launch import run
from aiida.orm import Dict

sys.path.append(dirname(abspath(__file__)))
from sin_wc import Sin
from sin_wf import sin

from aiida_optimize import OptimizationWorkChain
from aiida_optimize.engines import Bisection

result_wf = run(
    OptimizationWorkChain,
    engine=Bisection,
    engine_kwargs=Dict(dict=dict(upper=1.3, lower=-1., tol=1e-3, result_key='result')),
    evaluate_process=sin
)
result_wc = run(
    OptimizationWorkChain,
    engine=Bisection,
    engine_kwargs=Dict(dict=dict(upper=1.3, lower=-1., tol=1e-3, result_key='result')),
    evaluate_process=Sin
)
print('\nResult with workfunction:', result_wf)
print('\nResult with workchain:', result_wc)
```

The `OptimizationWorkChain` returns two outputs: the optimized value of the function, and the UUID of the optimal function workchain. This can be used to retrieve the exact inputs and outputs of the best run of the evaluated function.

The other optimization engines included in `aiida-optimize` are described in the reference section.

## Developing an optimization engine

In this section, we give a rough description of how the optimization engines themselves are structured. If you wish to develop your own optimization engine, we also highly recommend looking at the code of the existing engines for inspiration.

The optimization engines are usually split into two parts: the implementation, and a small wrapper class. These classes have corresponding base classes, `OptimizationEngineImpl` and `OptimizationEngineWrapper`. While the implementation contains the logic of the optimization engine itself, the wrapper is a factory class which is exposed to the user and used only to instantiate the implementation.

The reason for this split is that the engine itself needs to be serializable into a “state” which can be stored between steps of the AiiDA workchain, and then re-created from that state. Since the state usually contains more parameters than what needs to be exposed when the engine is first instantiated, the wrapper is added to hide away these parameters from the end user.

The `OptimizationEngineImpl` describes the methods which need to be implemented by an optimization engine. In particular, methods for creating new inputs, updating the engine from evaluation outputs, and serializing it to its state need to be provided. The base class itself keeps track of which evaluations have been launched. This is done using the `ResultMapping` class, which contains a dictionary that maps a key to a `Result` containing the evaluation inputs and outputs. The `OptimizationWorkChain` uses these same keys to identify the corresponding processes.
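To make the pattern concrete, here is a self-contained, pure-Python sketch of this split. It is not the actual `aiida-optimize` base-class API (the class names, method names and state format here, such as `ToyBisectionImpl`, `create_inputs` and `update`, are all hypothetical), but it illustrates how keeping all state in serializable attributes lets an engine be persisted and re-created between workchain steps:

```python
from dataclasses import dataclass, asdict
from math import sin


@dataclass
class ToyBisectionImpl:
    """Hypothetical engine implementation: all state lives in plain
    attributes, so it can be dumped to a dict and re-created from it
    between workchain steps."""
    lower: float
    upper: float
    tol: float

    @property
    def state(self):
        # Everything needed to re-create the engine later.
        return asdict(self)

    @classmethod
    def from_state(cls, state):
        return cls(**state)

    @property
    def is_finished(self):
        return abs(self.upper - self.lower) <= self.tol

    def create_inputs(self):
        # Inputs for the next evaluation the workchain should launch.
        return {'x': (self.lower + self.upper) / 2}

    def update(self, f_mid, f_lower):
        # Narrow the interval using the evaluation outputs, keeping
        # the sign change (and hence the root) inside it.
        midpoint = (self.lower + self.upper) / 2
        if (f_mid > 0) == (f_lower > 0):
            self.lower = midpoint
        else:
            self.upper = midpoint


class ToyBisection:
    """Hypothetical wrapper: a factory exposing only the parameters
    the end user should see."""
    def __new__(cls, lower, upper, tol):
        return ToyBisectionImpl(lower=lower, upper=upper, tol=tol)


# Drive the engine while persisting and restoring it at every step,
# mimicking what happens between AiiDA workchain steps:
state = ToyBisection(lower=-1.0, upper=1.3, tol=1e-3).state
while True:
    engine = ToyBisectionImpl.from_state(state)  # restore from the stored state
    if engine.is_finished:
        break
    x = engine.create_inputs()['x']
    engine.update(sin(x), sin(engine.lower))
    state = engine.state  # persist before the next step
root = (state['lower'] + state['upper']) / 2
```

The essential design point is that nothing lives only in local variables of a running function: the engine can be reconstructed from `state` at any time, which is what allows the optimization to be stopped and restarted.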