Multi-Objective Optimization

Bio-Inspired Computational Intelligence

Lecture 9: From Single to Multiple Objectives

Learning Objectives

  • Understand the fundamental concepts of multi-objective optimization
  • Learn about Pareto optimality and dominance
  • Explore key multi-objective evolutionary algorithms
  • Analyze performance metrics for comparing solutions
  • Apply concepts to real-world problems

Why Multiple Objectives?

Real-world problems rarely have a single objective

Single-objective thinking:

  • "Minimize cost"
  • "Maximize performance"

The Reality of Engineering Design

Aircraft Design

  • Minimize weight ↓
  • Maximize strength ↑
  • Minimize cost ↓
  • Maximize fuel efficiency ↑

Portfolio Selection

  • Maximize return ↑
  • Minimize risk ↓
  • Maximize liquidity ↑
  • Minimize transactions ↓

Real-World Examples

Automotive Engineering
  • Performance vs. Fuel Economy
  • Safety vs. Weight
  • Cost vs. Quality
Machine Learning
  • Accuracy vs. Model Complexity
  • Training Time vs. Performance
  • Interpretability vs. Accuracy

Multi-Objective Problem (MOP)

Mathematical Formulation:

Minimize: F(x) = [f₁(x), f₂(x), ..., fₘ(x)]

Subject to: x ∈ X (decision space)

Where:

  • x: Decision variable vector
  • fi: Individual objective functions
  • m: Number of objectives (typically m ≥ 2)
  • X: Feasible decision space

The Fundamental Challenge

There is typically no single solution that optimizes all objectives simultaneously

Why?

  • Objectives are often conflicting
  • Improving one objective may worsen another
  • Trade-offs are inevitable

Visualization: Trade-offs

[Figure: objective space plot of Cost (Objective 1) vs. Quality (Objective 2), distinguishing Pareto optimal solutions from dominated solutions]

Pareto Dominance

Definition: Solution x dominates solution y (denoted x ≺ y) if and only if:

  1. x is no worse than y in all objectives
  2. x is strictly better than y in at least one objective

Formally: For minimization problems,

x ≺ y ⟺ (∀i: fi(x) ≤ fi(y)) ∧ (∃j: fj(x) < fj(y))

Example: Comparing Solutions

Solution   Cost (f₁)   Time (f₂)   Status
A          100         50          Non-dominated
B          120         40          Non-dominated
C          110         60          Dominated by A and D
D          90          55          Non-dominated

Analysis: A dominates C because 100 < 110 AND 50 < 60
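
The dominance test can be written directly from the formal definition (a minimal sketch for minimization; the objective vectors follow the table above):

```python
def dominates(x, y):
    """True if x Pareto-dominates y (minimization): x is no worse in
    every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

# Objective vectors (cost, time) from the table above
A, B, C, D = (100, 50), (120, 40), (110, 60), (90, 55)

print(dominates(A, C))  # True: 100 <= 110 and 50 <= 60, both strict
print(dominates(A, B))  # False: A and B are mutually non-dominated
```

Note that dominance is only a partial order: A and B are incomparable, which is exactly why a whole set of trade-off solutions must be kept.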

Pareto Optimal Set

Pareto Optimal Solution: A solution that is not dominated by any other solution in the search space

Pareto Set (PS): The set of all Pareto optimal solutions in decision space

Pareto Front (PF): The set of objective vectors corresponding to the Pareto Set in objective space

Properties of Pareto Solutions

  • Non-uniqueness: Multiple Pareto optimal solutions typically exist
  • Trade-off representation: Each point represents a different balance between objectives
  • Equivalence: No Pareto optimal solution is objectively better than another
  • Decision support: Final choice depends on decision-maker preferences

Goals of Multi-Objective Optimization

Find a set of solutions that best approximates the true Pareto front

Three Critical Criteria

1. Convergence

  • Solutions should be close to the true Pareto front
  • Minimize distance to optimal trade-offs

2. Diversity

  • Solutions should be well-distributed
  • Cover the entire range of trade-offs

3. Coverage

  • Explore the full extent of the Pareto front
  • Avoid gaps in representation

Visual Comparison: Quality Metrics

[Figure: two approximation sets with similar convergence — left panel with clustered solutions (poor diversity), right panel with evenly spread solutions (good diversity)]

NSGA-II Algorithm

Non-dominated Sorting Genetic Algorithm II

Deb et al. (2002) - Most cited multi-objective optimization algorithm

Key Innovations:

  • Fast non-dominated sorting (O(MN²))
  • Crowding distance for diversity
  • Elitist selection mechanism

NSGA-II: Main Components

  1. Non-dominated Sorting
    • Rank solutions into fronts (Front 1, Front 2, ...)
  2. Crowding Distance Calculation
    • Measure spacing between solutions in objective space
  3. Selection Mechanism
    • Prefer: lower rank, then higher crowding distance
  4. Genetic Operators
    • Standard crossover and mutation

Non-Dominated Sorting

Concept: Organize solutions into hierarchical fronts

  • Front 1: Non-dominated solutions (best rank)
  • Front 2: Solutions dominated only by Front 1
  • Front 3: Solutions dominated only by Fronts 1 & 2
  • And so on...
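
The front construction above can be sketched as follows (a minimal sketch; `fast_nondominated_sort` is an illustrative name, and fronts are 0-indexed here):

```python
def dominates(x, y):
    """Pareto dominance for minimization."""
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

def fast_nondominated_sort(objs):
    """Partition objective vectors into fronts; front 0 is non-dominated."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]   # indices each solution dominates
    dom_count = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1        # peel off the current front
                if dom_count[j] == 0:
                    nxt.append(j)
        k += 1
        fronts.append(nxt)
    return fronts[:-1]                   # drop the trailing empty front

pts = [(100, 50), (120, 40), (110, 60), (90, 55)]
print(fast_nondominated_sort(pts))  # → [[0, 1, 3], [2]]
```

The double loop over all pairs gives the O(MN²) complexity quoted for NSGA-II.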

Crowding Distance

Purpose: Maintain diversity by favoring solutions in less crowded regions

Calculation: Sum of the normalized distances to the two neighboring solutions along each objective
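
The calculation can be sketched for a single front (a minimal sketch; `crowding_distance` is an illustrative name, and boundary points receive infinite distance so they are always preserved):

```python
def crowding_distance(objs):
    """Crowding distance for each point of one front (list of objective tuples)."""
    n, m = len(objs), len(objs[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: objs[i][k])
        lo, hi = objs[order[0]][k], objs[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")  # keep extremes
        if hi == lo:
            continue
        for pos in range(1, n - 1):
            i = order[pos]
            # normalized gap between the two neighbours along objective k
            dist[i] += (objs[order[pos + 1]][k] - objs[order[pos - 1]][k]) / (hi - lo)
    return dist

front = [(100, 50), (120, 40), (90, 55)]
print(crowding_distance(front))  # → [2.0, inf, inf]
```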

NSGA-II: Algorithm Flow

  1. Initialize population P₀ (size N)
  2. For each generation t:
    • Create offspring Qt from Pt (size N)
    • Combine: Rt = Pt ∪ Qt (size 2N)
    • Perform non-dominated sorting on Rt
    • Calculate crowding distance for each front
    • Select best N individuals for Pt+1
  3. Return final non-dominated front

Selection Strategy

Binary Tournament Selection

Comparison Rules:

  1. If solutions have different ranks → Select solution with better (lower) rank
  2. If solutions have same rank → Select solution with larger crowding distance

Why?

  • Rank promotes convergence to Pareto front
  • Crowding distance promotes diversity along front
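
The two comparison rules can be sketched as a crowded-comparison tournament (a minimal sketch; `crowded_tournament` and the example rank/crowding values are illustrative):

```python
import random

def crowded_tournament(a, b, rank, crowd):
    """NSGA-II binary tournament: lower rank wins; ties broken by larger
    crowding distance; remaining ties broken at random."""
    if rank[a] != rank[b]:
        return a if rank[a] < rank[b] else b
    if crowd[a] != crowd[b]:
        return a if crowd[a] > crowd[b] else b
    return random.choice((a, b))

rank  = {"A": 0, "B": 0, "C": 1}
crowd = {"A": 2.0, "B": float("inf"), "C": 0.5}
print(crowded_tournament("A", "C", rank, crowd))  # "A": better rank
print(crowded_tournament("A", "B", rank, crowd))  # "B": same rank, less crowded
```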

Alternative Approaches

Multiple algorithmic paradigms for multi-objective optimization

SPEA2: Strength Pareto EA

Strength Pareto Evolutionary Algorithm 2 (Zitzler et al., 2001)

Key Features:

  • External archive of non-dominated solutions
  • Fitness = Strength + Density
    • Strength: Number of solutions an individual dominates
    • Density: k-nearest neighbor distance
  • Archive truncation based on proximity

MOEA/D: Decomposition Approach

Multi-Objective Evolutionary Algorithm based on Decomposition (Zhang & Li, 2007)

Core Idea: Decompose MOP into multiple scalar optimization subproblems

Scalarization Functions:

Weighted Sum:   g(x|w) = Σᵢ wᵢ · fᵢ(x)
Tchebycheff:    g(x|w,z*) = maxᵢ { wᵢ · |fᵢ(x) − zᵢ*| }
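
Both scalarizations are one-liners (a minimal sketch; the objective values, weights, and ideal point z* below are illustrative):

```python
def weighted_sum(f, w):
    """Weighted-sum scalarization: sum_i w_i * f_i(x)."""
    return sum(wi * fi for wi, fi in zip(w, f))

def tchebycheff(f, w, z_star):
    """Tchebycheff scalarization: max_i w_i * |f_i(x) - z_i*|."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z_star))

f = (2.0, 4.0)   # objective values of one candidate
w = (0.5, 0.5)   # weight vector defining one subproblem
z = (0.0, 0.0)   # ideal point (best value seen per objective)
print(weighted_sum(f, w))    # 3.0
print(tchebycheff(f, w, z))  # 2.0
```

The Tchebycheff form is usually preferred in MOEA/D because, unlike the weighted sum, it can reach points on non-convex regions of the Pareto front.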

MOEA/D: Workflow

  1. Generate set of weight vectors (uniformly distributed)
  2. Each weight vector defines one subproblem
  3. Define neighborhood for each subproblem
  4. Optimize subproblems collaboratively:
    • Select parents from neighborhood
    • Generate offspring
    • Update neighboring solutions

Advantage: Effective for problems with regular Pareto fronts

Comparison of Approaches

Algorithm   Main Mechanism        Best For
NSGA-II     Ranking + Crowding    General-purpose, 2-3 objectives
SPEA2       Strength + Archive    Small populations, 2-3 objectives
MOEA/D      Decomposition         Regular fronts, continuous problems

Many-Objective Optimization

When m > 3: New challenges emerge

Challenges with Many Objectives

Problem 1: Loss of Selection Pressure

  • Most solutions become non-dominated
  • Harder to distinguish quality
  • Slower convergence

Problem 2: Exponential Growth

  • Pareto front size grows exponentially
  • Computational cost increases
  • Visualization becomes impossible

Solutions for Many Objectives

1. Reference Point Methods (NSGA-III)

  • Use predefined reference points on hyperplane
  • Associate solutions with reference points
  • Maintain diversity through reference point distribution
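
A common way to generate such uniformly spread reference points is the Das–Dennis simplex-lattice construction (a minimal stars-and-bars sketch; `reference_points` is an illustrative name, with m objectives and p divisions per objective):

```python
from itertools import combinations

def reference_points(m, p):
    """All m-vectors whose components are multiples of 1/p and sum to 1:
    place m-1 'bars' among p+m-1 positions, count 'stars' between them."""
    pts = []
    for bars in combinations(range(p + m - 1), m - 1):
        prev, point = -1, []
        for b in bars:
            point.append((b - prev - 1) / p)  # stars before this bar
            prev = b
        point.append((p + m - 2 - prev) / p)  # stars after the last bar
        pts.append(tuple(point))
    return pts

pts = reference_points(3, 4)  # C(6, 2) = 15 points on the unit simplex
print(len(pts), all(abs(sum(q) - 1) < 1e-9 for q in pts))  # 15 True
```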

2. Indicator-Based Selection

  • Use quality indicators (e.g., hypervolume) directly in selection
  • Stronger selection pressure
  • Examples: SMS-EMOA, IBEA

Evaluating Multi-Objective Algorithms

Challenge: How do we measure quality of a solution set?

Performance Indicators

1. Hypervolume (HV)

Volume of objective space dominated by solution set, bounded by reference point

  • ✓ Captures both convergence and diversity
  • ✓ Widely accepted as best single indicator
  • ✗ Computationally expensive for many objectives
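
For two minimized objectives, the hypervolume reduces to summing rectangular slices (a minimal sketch; `hypervolume_2d` is an illustrative name, and the front/reference point are example values):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a bi-objective minimization front w.r.t. a reference
    point: sort by f1 and sum the rectangle each point adds."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):            # ascending f1
        if f1 >= ref[0] or f2 >= prev_f2:
            continue                        # dominated point adds nothing
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # 4 + 6 + 1 = 11.0
```

Exact computation in m dimensions grows exponentially with m, which is why hypervolume-based selection (e.g. SMS-EMOA) becomes expensive for many objectives.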

Hypervolume Visualization

[Figure: shaded hypervolume region dominated by the solution set and bounded by the reference point, for two minimized objectives f₁ and f₂]

Other Key Indicators

2. Inverted Generational Distance (IGD)

Average distance from true Pareto front to obtained front

  • ✓ Measures both convergence and diversity
  • ✗ Requires knowledge of true Pareto front
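
IGD follows directly from its definition (a minimal sketch; the sampled true front and obtained set below are illustrative):

```python
import math

def igd(true_front, approx):
    """Inverted generational distance: mean Euclidean distance from each
    point of the (sampled) true Pareto front to its nearest obtained point."""
    return sum(min(math.dist(t, a) for a in approx)
               for t in true_front) / len(true_front)

true_pf = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
found   = [(0.0, 1.0), (1.0, 0.0)]
print(igd(true_pf, found))  # only the missed mid-point contributes
```

Because every true-front sample must be covered, a set that converges well but leaves gaps is penalized — which is what makes IGD sensitive to both convergence and diversity.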

3. Spacing (S)

Standard deviation of distances between consecutive solutions

  • ✓ Simple measure of uniformity
  • ✗ Doesn't capture convergence
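
One common variant (Schott's spacing) uses each point's nearest-neighbour distance rather than consecutive solutions along the front (a minimal sketch; `spacing` is an illustrative name):

```python
import math

def spacing(front):
    """Schott's spacing metric: standard deviation of each point's distance
    to its nearest neighbour (0 means perfectly uniform spacing)."""
    d = [min(math.dist(p, q) for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    mean = sum(d) / len(d)
    return math.sqrt(sum((x - mean) ** 2 for x in d) / (len(d) - 1))

uniform = [(0.0, 2.0), (1.0, 1.0), (2.0, 0.0)]
print(spacing(uniform))  # 0.0: every nearest-neighbour gap equals sqrt(2)
```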

Experimental Methodology

Proper evaluation requires:
  1. Multiple independent runs (typically 20-30)
  2. Statistical significance testing
  3. Multiple benchmark problems
  4. Multiple performance indicators
  5. Visual inspection of Pareto fronts

Real-World Applications

Multi-objective optimization in practice

Engineering Design: Aircraft Wing

Objectives:

  • Minimize: Drag coefficient
  • Maximize: Lift coefficient
  • Minimize: Structural weight
  • Minimize: Manufacturing cost

Variables: Wing shape parameters (20-50 dimensions)

Outcome: Set of wing designs representing different trade-offs for decision-makers to choose from

Neural Architecture Search

Objectives:

  • Maximize: Model accuracy
  • Minimize: Model size (parameters)
  • Minimize: Inference time
  • Minimize: Training cost

Why Multi-Objective?

  • Different deployment scenarios have different priorities
  • Mobile vs. cloud deployment
  • Real-time vs. batch processing

Sustainable Energy Systems

Objectives:

  • Minimize: Total cost
  • Minimize: CO₂ emissions
  • Maximize: Reliability/uptime
  • Minimize: Land use

Decision Variables:

  • Mix of energy sources (solar, wind, hydro, etc.)
  • Capacity of each source
  • Storage system configuration

Practical Guidelines

Making multi-objective optimization work

Choosing an Algorithm

Scenario                               Recommended Algorithm
2-3 objectives, general problem        NSGA-II
Complex, irregular Pareto front        NSGA-II or SPEA2
Regular front, continuous variables    MOEA/D
4+ objectives                          NSGA-III or MOEA/D

Parameter Settings

Typical Recommendations:

  • Population size: 100-200 for 2-3 objectives
  • Generations: Problem-dependent (often 200-500)
  • Crossover rate: 0.8-0.9
  • Mutation rate: 1/n (n = number of variables)

Rule of thumb: For m objectives, use population size ≥ 10m

Common Pitfalls to Avoid

  • ❌ Using single-objective thinking
    • ✓ Embrace the concept of trade-offs
  • ❌ Ignoring diversity maintenance
    • ✓ Monitor spread of solutions
  • ❌ Evaluating with single metric only
    • ✓ Use multiple indicators + visualization
  • ❌ Running single experiment
    • ✓ Multiple runs + statistical testing
  • ❌ Scaling issues with objectives
    • ✓ Normalize objective values

Decision Making from Pareto Set

After optimization: How to choose a single solution?

A Posteriori Methods

Approach: First find Pareto front, then let decision-maker choose

Advantages:

  • Decision-maker sees all trade-offs
  • No preference information needed a priori
  • Most flexible approach

Tools for Decision Support:

  • Visual Pareto front exploration
  • Interactive filtering
  • Parallel coordinates plots

Preference Articulation

A Priori (Before Optimization)

  • Specify objective weights
  • Define aspiration levels
  • → Guides search toward preferred region

Interactive (During Optimization)

  • Provide feedback during search
  • Progressively refine preferences
  • → Adaptive focus on interesting regions

Software Libraries

Implementing multi-objective optimization

Available Frameworks

Library   Language   Algorithms
pymoo     Python     NSGA-II, NSGA-III, MOEA/D, and many more
PlatEMO   MATLAB     100+ MOEAs with GUI
jMetal    Java       Comprehensive framework
DEAP      Python     Flexible EA framework

Key Takeaways

Fundamental Concepts

  1. Real-world problems often involve multiple conflicting objectives
  2. No single optimal solution exists → seek Pareto optimal set
  3. Pareto dominance defines solution quality
  4. Success requires balance of convergence and diversity

Algorithmic Insights

  1. NSGA-II is the standard starting point (2-3 objectives)
  2. Ranking + crowding distance = effective combination
  3. MOEA/D excels with regular Pareto fronts
  4. Many-objective problems (m > 3) require specialized methods

Practical Recommendations

For successful application:
  • Choose algorithm based on problem characteristics
  • Use multiple performance indicators
  • Always visualize results
  • Conduct proper statistical evaluation
  • Involve decision-makers early in process

Continuing Your Learning

Essential Reading

Foundational Papers

  • Deb et al. (2002): "A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II"
  • Zhang & Li (2007): "MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition"

Books

  • Deb (2001): "Multi-Objective Optimization Using Evolutionary Algorithms"
  • Coello et al. (2007): "Evolutionary Algorithms for Solving Multi-Objective Problems"

Hands-On Practice

Suggested Exercise

ZDT1 Benchmark Problem:

  1. Implement NSGA-II from scratch
  2. Test on ZDT1 (2 objectives, 30 variables)
  3. Visualize convergence to Pareto front
  4. Calculate hypervolume over generations
  5. Compare with library implementation
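
A starting point for step 2 is the benchmark function itself (a minimal sketch of the standard ZDT1 definition; the sample point is illustrative):

```python
import math

def zdt1(x):
    """ZDT1 test problem (n variables in [0, 1]); returns (f1, f2).
    On the true Pareto front x2..xn = 0, so g = 1 and f2 = 1 - sqrt(f1)."""
    f1 = x[0]
    g = 1 + 9 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1 - math.sqrt(f1 / g))
    return f1, f2

# A Pareto-optimal point: all tail variables at 0 gives g = 1
x = [0.25] + [0.0] * 29
print(zdt1(x))  # (0.25, 0.5) since f2 = 1 - sqrt(0.25)
```

Evaluating random points against this closed-form front makes it easy to visualize convergence in step 3.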

Next Topics

  • Constraint handling in multi-objective problems
  • Dynamic multi-objective optimization
  • Robust multi-objective optimization
  • Preference-based methods
  • Multi-level and multi-scale optimization

Thank You!

Questions?

Next Lecture: Constraint Handling in Evolutionary Algorithms