# Population Annealing

Population Annealing is a sequential Monte Carlo method that aims to alleviate the susceptibility of the Metropolis algorithm to rough cost landscapes (that is, landscapes with many local minima) by simulating a population of Metropolis walkers. Akin to Simulated Annealing, the algorithm proceeds over a set of decreasing temperatures $T$ (or increasing inverse temperatures $\beta = 1/T$), but the population is resampled at each temperature step. During resampling, some walkers are removed and some are duplicated, with a bias toward retaining lower-cost walkers. This continuously consolidates the search effort around favorable states. In this sense, Population Annealing shares many features with evolutionary algorithms.

Intuitively, you can picture Population Annealing as a set of walkers spread across the configuration space of the problem. Each walker is free to explore its own neighborhood. Once a walker finds a better state, the rest of the population gravitates toward that state. In this way, the algorithm leverages the "collective knowledge" of the walkers to find its way toward the solution.

In the context of optimization problems, the algorithm starts with an initial population of random states at high temperature. In this regime, "bad" transitions are still accepted with high probability, and the bias toward lower-cost walkers in the resampling is weak. At lower temperatures, the Metropolis criterion strongly favors transitions that decrease the cost, and resampling progressively weeds out non-competitive states. The removal rate during resampling depends on the size of the individual temperature steps. The temperature schedule must therefore decrease slowly enough that only a modest fraction of walkers (about 10%-20%) is dropped at each step, while roughly the same number is duplicated to keep the population size stable.
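To make the resampling step concrete, here is a minimal sketch (not the Azure Quantum implementation): each walker is reweighted by $e^{-\Delta\beta \cdot E}$ for a temperature step $\Delta\beta$, so lower-cost walkers are more likely to be duplicated and higher-cost ones dropped, while the population size stays fixed. The function name and signature are illustrative only.

```python
import math
import random

def resample(population, energies, d_beta, rng=random):
    """Resample walkers with weights proportional to exp(-d_beta * E).

    Lower-energy walkers receive larger weights and are more likely to be
    duplicated; high-energy walkers tend to be dropped. Drawing the same
    number of walkers keeps the population size stable. Illustrative
    sketch only, not the Azure Quantum implementation.
    """
    # Shift energies by the minimum for numerical stability.
    e_min = min(energies)
    weights = [math.exp(-d_beta * (e - e_min)) for e in energies]
    # Draw a new population of the same size, biased by weight.
    return rng.choices(population, weights=weights, k=len(population))
```

A small temperature step `d_beta` gives nearly uniform weights (weak selection at high temperature); a large step concentrates the weights on the lowest-cost walkers, which is why overly aggressive schedules cull too much of the population at once.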

Eventually, all walkers are resampled into the same lowest-cost state discovered ("population collapse"). At this stage it is more efficient to restart the process with a new set of random walkers (`restarts`).
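The collapse-and-restart logic can be sketched as follows. Here `run_annealing` is a hypothetical placeholder standing in for one full annealing pass that returns a `(state, cost)` pair; it is not part of the SDK.

```python
import random

def collapsed(walkers):
    """True once every walker occupies the same state ("population collapse")."""
    return len(set(walkers)) == 1

def best_over_restarts(run_annealing, restarts, rng):
    """Rerun annealing from fresh random walkers and keep the lowest-cost
    result seen across all restarts (illustrative sketch only)."""
    best_state, best_cost = None, float("inf")
    for _ in range(restarts):
        state, cost = run_annealing(rng)  # one full anneal -> (state, cost)
        if cost < best_cost:
            best_state, best_cost = state, cost
    return best_state, best_cost
```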

## Features of Population Annealing on Azure Quantum

Population Annealing in Azure Quantum supports:

- Parameterized mode
- Ising and PUBO input formats
- CPU only

## When to use Population Annealing

Generally speaking, given enough resources, Population Annealing can solve any problem that Simulated Annealing is used for, and usually more efficiently. However, due to the memory footprint of maintaining multiple walkers, Population Annealing is most suitable for very hard, moderately sized problems. We recommend Population Annealing for both sparse and dense graphs. The algorithm might struggle with constrained problems that have large penalty terms.

> **Note**
>
> For further information on choosing which solver to use, please refer to this document.

## Parameter-Free Population Annealing

> **Note**
>
> Parameter-Free Population Annealing is currently available to users in the Azure Quantum Early Access program. It will be available to all users in the near future.

Parameter-free Population Annealing searches for "optimal" parameters of the Population Annealing solver at runtime, so that users do not need to set parameters like `beta`. The only parameter required to run the parameter-free Population Annealing solver is `timeout`, which represents the physical time in seconds that the solver is allowed to run.

| Parameter Name | Default Value | Description |
|---|---|---|
| `timeout` (required) | 5 | Number of seconds to allow the solver to run. |
| `seed` (optional) | time-based | Seed value, used for reproducing results. |

Note that the `timeout` parameter is required to trigger the parameter-free Population Annealing solver. If you do not specify the `timeout` parameter, a parameterized Population Annealing solver is created instead, using the default parameters shown in the next section (unless otherwise specified).

You can create the parameter-free solver by using code similar to the sample shown below:

```python
from azure.quantum.optimization import PopulationAnnealing

# Requires a workspace already created.
solver = PopulationAnnealing(workspace, timeout=10, seed=48)
```

## Parameterized Population Annealing

Suitable values for the `population` size and the annealing schedule `beta` depend entirely on the problem and the magnitude (cost difference) of its variable changes.

A good starting point for Population Annealing is to use the beta range from Simulated Annealing (`beta_start` and `beta_stop`) for the annealing schedule, while renaming the `restarts` parameter to `population`.
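Assuming, for illustration, that an earlier Simulated Annealing run used `beta_start=0.1`, `beta_stop=10.0`, and `restarts=72` (hypothetical values), the carry-over could look like this:

```python
# Hypothetical Simulated Annealing parameters to carry over.
sa_params = {"beta_start": 0.1, "beta_stop": 10.0, "restarts": 72}

# Map them onto Population Annealing: the beta range becomes the
# (linear) annealing schedule and `restarts` becomes `population`.
pa_params = {
    "beta": ("linear", sa_params["beta_start"], sa_params["beta_stop"]),
    "population": sa_params["restarts"],
}
# These values would then be passed to PopulationAnnealing, e.g.
# beta=RangeSchedule("linear", 0.1, 10.0), population=72.
```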

Population Annealing supports the following parameters:

| Parameter Name | Default Value | Description |
|---|---|---|
| `sweeps` | 10000 | Number of sweeps. More sweeps will usually improve the solution (unless it is already at the global minimum). |
| `beta` | linear `0`..`5` | Annealing schedule (`beta` must be increasing). |
| `population` | number of threads | The number of walkers in the population (must be positive). |
| `seed` (optional) | time-based | Seed value, used for reproducing results. |

To create a parameterized Population Annealing solver for the CPU using the SDK:

```python
from azure.quantum.optimization import PopulationAnnealing
from azure.quantum.target.solvers import RangeSchedule

# Requires a workspace already created.
solver = PopulationAnnealing(
    workspace,
    sweeps=200,
    beta=RangeSchedule("linear", 0, 5),
    population=128,
    seed=42,
)
```

Running the solver without parameters will apply the default parameters shown in the table above. These default values are subject to change and we strongly recommend setting the values based on your problem rather than using the defaults.
