Algorithms

Genetic algorithms are the core of the entire project. Very briefly, an algorithm can be described by the following steps:

- Initialize: create a population P of μ random valid solutions.
- Evaluate: compute fitness for each individual.
- Selection: pick parents biased to higher fitness (e.g., tournament or rank).
- Mating: recombine parents to make offspring (crossover).
- Mutation: randomly perturb offspring (small, bounded changes).
- Evaluate offspring: compute their fitness and constraints (if given).
- Survival: build the next generation of size μ
    - (μ+λ) elitist: keep the best from parents ∪ offspring
    - (μ,λ) generational: keep the best offspring only
- Stop: when max generations/time is reached or there is no improvement.

Genetic Algorithm

# Pseudo code of a genetic algorithm
P = init(μ); evaluate(P)
repeat:
    parents = select(P)
    children = mutate(crossover(parents))
    evaluate(children)
    P = survive(P, children, μ)   # (μ+λ) or (μ,λ)
until stop
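
To make the loop concrete, here is a minimal, self-contained sketch of the (μ+λ) scheme in plain Python/NumPy on a toy single-objective problem (minimizing the sphere function). Everything in it (function names, operator choices, parameter values) is illustrative only and is not part of the moors/pymoors API.

import numpy as np

rng = np.random.default_rng(42)

MU, LAMBDA, N_VARS, N_GENS = 20, 20, 2, 100


def evaluate(pop: np.ndarray) -> np.ndarray:
    """Toy fitness: sphere function, lower is better."""
    return np.sum(pop**2, axis=1)


def tournament_select(pop: np.ndarray, fit: np.ndarray, k: int = 2) -> np.ndarray:
    """Pick LAMBDA parents; each one wins a k-way tournament on fitness."""
    idx = rng.integers(0, len(pop), size=(LAMBDA, k))
    winners = idx[np.arange(LAMBDA), np.argmin(fit[idx], axis=1)]
    return pop[winners]


def crossover(parents: np.ndarray) -> np.ndarray:
    """Uniform crossover between consecutive pairs of parents (LAMBDA is even)."""
    a, b = parents[0::2], parents[1::2]
    mask = rng.random(a.shape) < 0.5
    return np.concatenate([np.where(mask, a, b), np.where(mask, b, a)])


def mutate(children: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Gaussian mutation: small, bounded perturbation of each gene."""
    return np.clip(children + rng.normal(0.0, sigma, children.shape), -1.0, 1.0)


# Initialize: mu random valid solutions, then evaluate
population = rng.uniform(-1.0, 1.0, size=(MU, N_VARS))
fitness = evaluate(population)

for _ in range(N_GENS):
    parents = tournament_select(population, fitness)
    children = mutate(crossover(parents))
    child_fitness = evaluate(children)

    # (mu + lambda) elitist survival: keep the best mu from parents union offspring
    merged = np.vstack([population, children])
    merged_fit = np.concatenate([fitness, child_fitness])
    best = np.argsort(merged_fit)[:MU]
    population, fitness = merged[best], merged_fit[best]

print("best solution:", population[0], "fitness:", fitness[0])

Replacing the survival step with the best μ offspring only (ignoring the parents) would turn this into the (μ,λ) generational variant.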

Mathematical Formulation of a Multi-Objective Optimization Problem with Constraints

Consider the following optimization problem:

\[ \begin{aligned} \min_{x_1, x_2} \quad & f_1(x_1,x_2) = x_1^2 + x_2^2 \\ \min_{x_1, x_2} \quad & f_2(x_1,x_2) = (x_1-1)^2 + x_2^2 \\ \text{subject to} \quad & x_1 + x_2 \leq 1, \\ & x_1 \geq 0,\quad x_2 \geq 0. \end{aligned} \]
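
This problem is small enough that its Pareto set can be worked out by hand, which gives a quick sanity check for the algorithms below: for any fixed x1, decreasing x2 towards zero decreases both objectives while staying feasible, so every Pareto-optimal point has x2 = 0, and along that edge f1 = x1^2 and f2 = (x1 - 1)^2 trade off against each other exactly for x1 in [0, 1]:

\[ \text{Pareto set} = \{ (x_1, 0) : 0 \le x_1 \le 1 \}, \qquad \text{Pareto front} = \{ (x_1^2,\, (x_1 - 1)^2) : 0 \le x_1 \le 1 \}. \]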

In moors, the algorithms are implemented as structs with a set of useful attributes. These attributes include the final population and the optimal or best set of individuals found during the optimization process.

For example, after running an algorithm like NSGA2, you can access:

- Final Population: the complete set of individuals from the last generation.
- Optimum Set: typically the best individuals (e.g., those with rank 0) that form the current approximation of the Pareto front.

use ndarray::{Axis, Array1, Array2, stack};
use moors::{
    algorithms::{MultiObjectiveAlgorithmError, Nsga2Builder},
    duplicates::CloseDuplicatesCleaner,
    operators::{RandomSamplingFloat, SimulatedBinaryCrossover, GaussianMutation}
};

/// Define the fitness function
fn fitness(population_genes: &Array2<f64>) -> Array2<f64> {
    let x1 = population_genes.column(0);
    let x2 = population_genes.column(1);

    // Objective 1: f1(x1, x2) = x1^2 + x2^2
    let f1 = &x1 * &x1 + &x2 * &x2;
    // Objective 2: f2(x1, x2) = (x1 - 1)^2 + x2^2
    let f2 = (&x1 - 1.0).mapv(|v| v * v) + &x2 * &x2;

    stack(Axis(1), &[f1.view(), f2.view()]).unwrap()
}

/// Define the constraints function
fn constraints_fn(population_genes: &Array2<f64>) -> Array1<f64> {
    let x1 = population_genes.column(0);
    let x2 = population_genes.column(1);
    // Constraint: x1 + x2 <= 1, expressed as g(x) = x1 + x2 - 1 <= 0
    &x1 + &x2 - 1.0
}

fn main() -> Result<(), MultiObjectiveAlgorithmError> {
    let sampler   = RandomSamplingFloat::new(0.0, 1.0);
    let crossover = SimulatedBinaryCrossover::new(5.0);
    let mutation  = GaussianMutation::new(0.1, 0.01);
    let cleaner   = CloseDuplicatesCleaner::new(1e-8);

    // Build NSGA-II with the operators and hyperparameters defined above
    let mut algo = Nsga2Builder::default()
        .fitness_fn(fitness)
        .constraints_fn(constraints_fn)
        .sampler(sampler)
        .crossover(crossover)
        .mutation(mutation)
        .duplicates_cleaner(cleaner)
        .num_vars(2)
        .num_objectives(2)
        .num_constraints(1)
        .population_size(200)
        .num_offsprings(200)
        .num_iterations(200)
        .mutation_rate(0.1)
        .crossover_rate(0.9)
        .keep_infeasible(false)
        .build()?;

    // Run
    algo.run()?;

    // Access the final population if needed
    let population_genes = algo.population()?;
    println!("Done. Final population size: {}", population_genes.len());

    Ok(())
}

In pymoors, the algorithms are implemented as classes that are exposed on the Python side with a set of useful attributes. These attributes include the final population and the optimal or best set of individuals found during the optimization process.

For example, after running an algorithm like NSGA2, you can access:

- Final Population: the complete set of individuals from the last generation.
- Optimum Set: typically the best individuals (e.g., those with rank 0) that form the current approximation of the Pareto front.

This design abstracts away the complexities of the underlying Rust implementation and provides an intuitive, Pythonic interface for setting up, executing, and analyzing multi-objective optimization problems.

import numpy as np
from pymoors import (
    Nsga2,
    RandomSamplingFloat,
    GaussianMutation,
    SimulatedBinaryCrossover,
    CloseDuplicatesCleaner,
    Constraints
)
from pymoors.typing import OneDArray, TwoDArray


# Define the fitness function
def fitness(population_genes: TwoDArray) -> TwoDArray:
    x1 = population_genes[:, 0]
    x2 = population_genes[:, 1]
    # Objective 1: f1(x1,x2) = x1^2 + x2^2
    f1 = x1**2 + x2**2
    # Objective 2: f2(x1,x2) = (x1-1)^2 + x2**2
    f2 = (x1 - 1) ** 2 + x2**2
    return np.column_stack([f1, f2])


# Define the constraints function
def constraints_fn(population_genes: TwoDArray) -> OneDArray:
    x1 = population_genes[:, 0]
    x2 = population_genes[:, 1]
    # Constraint 1: x1 + x2 <= 1
    g1 = x1 + x2 - 1
    # Return one constraint value per individual (1D array)
    return g1

# Wrap the constraints in the special class and pass lower/upper bounds
constraints = Constraints(ineq=[constraints_fn], lower_bound=0.0, upper_bound=1.0)


# Set up the NSGA2 algorithm with the above definitions
algorithm = Nsga2(
    sampler=RandomSamplingFloat(min=0, max=1),
    crossover=SimulatedBinaryCrossover(distribution_index=5),
    mutation=GaussianMutation(gene_mutation_rate=0.1, sigma=0.01),
    fitness=fitness,
    num_objectives=2,
    constraints_fn=constraints,
    num_constraints=1,
    duplicates_cleaner=CloseDuplicatesCleaner(epsilon=1e-8),
    num_vars=2,
    population_size=200,
    num_offsprings=200,
    num_iterations=200,
    mutation_rate=0.1,
    crossover_rate=0.9,
    keep_infeasible=False,
    lower_bound=0,
    verbose=False,
)

# Run the algorithm
algorithm.run()
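
Once run() has finished, the attributes described above can be inspected. The sketch below assumes the final population is exposed as algorithm.population with genes, fitness, rank, and best attributes; these names are assumptions here, so check them against the pymoors documentation.

# NOTE: the attribute names used below are assumptions, not a verified API.
pop = algorithm.population

# Final Population: genes and fitness of every individual in the last generation
print("genes shape:", pop.genes.shape)
print("fitness shape:", pop.fitness.shape)

# Optimum Set: the rank-0 (non-dominated) individuals approximating the Pareto front
best = pop.best
print("number of non-dominated individuals:", len(best))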