Particle Swarm Optimization
Bio-Inspired Algorithms | Swarm-Based Algorithms

Definition

Particle Swarm Optimization (PSO) is a population-based, bio-inspired algorithm that draws its inspiration from the collective behavior of bird flocks and fish schools. In PSO, you work with a swarm of simple agents called particles. Each particle represents a potential solution to the optimization problem and moves through the search space by updating its position and velocity.

Core Concepts

The movement of each particle is influenced by two main factors:

  • Personal Best: The best-known position that the particle itself has discovered so far;
  • Global Best: The best-known position found by the entire swarm at any point during the search.

This dynamic allows the swarm to explore the solution space efficiently, balancing exploration (searching new areas) and exploitation (refining known good areas) as particles communicate and learn from each other.

Key Concepts in PSO:

  • Particles: agents that represent possible solutions;
  • Position: the current location of a particle in the search space;
  • Velocity: the direction and speed of particle movement;
  • Personal best position: the best position found by each particle so far;
  • Global best position: the best position found by any particle in the swarm.

By iteratively updating positions and velocities based on these principles, PSO enables you to solve complex optimization problems in a flexible and efficient manner.
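To make these concepts concrete, here is a minimal sketch of one possible particle representation in Python. The Particle class, its field names, and the make_swarm helper are illustrative only; the implementations later in this chapter store the whole swarm in NumPy arrays instead of per-particle objects.

from dataclasses import dataclass
import random

@dataclass
class Particle:
    position: float                   # current location in the search space
    velocity: float = 0.0             # direction and speed of movement
    best_position: float = 0.0        # personal best position found so far
    best_score: float = float("inf")  # objective value at that personal best

def make_swarm(num_particles: int, low: float = -10.0, high: float = 10.0):
    """Create particles at random positions with zero initial velocity."""
    swarm = []
    for _ in range(num_particles):
        x = random.uniform(low, high)
        swarm.append(Particle(position=x, best_position=x))
    return swarm

swarm = make_swarm(20)
# The global best position is the best_position of the particle with the
# lowest best_score, once every particle has been evaluated at least once.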

Example: Basic PSO Implementation

import numpy as np

# Objective function: simple 1D quadratic (minimum at x=3)
def objective(x):
    return (x - 3) ** 2 + 2

# PSO parameters
num_particles = 20
num_iterations = 50
w = 0.5    # Inertia coefficient
c1 = 1.5   # Cognitive (personal) coefficient
c2 = 1.5   # Social (global) coefficient

# Initialize particle positions and velocities
positions = np.random.uniform(-10, 10, num_particles)
velocities = np.zeros(num_particles)
personal_best_positions = positions.copy()
personal_best_scores = objective(positions)
global_best_idx = np.argmin(personal_best_scores)
global_best_position = personal_best_positions[global_best_idx]

# PSO loop
for iteration in range(num_iterations):
    # Evaluate current positions
    scores = objective(positions)

    # Update personal bests
    better_mask = scores < personal_best_scores
    personal_best_scores[better_mask] = scores[better_mask]
    personal_best_positions[better_mask] = positions[better_mask]

    # Update global best
    global_best_idx = np.argmin(personal_best_scores)
    global_best_position = personal_best_positions[global_best_idx]

    # Update velocities and positions
    r1 = np.random.rand(num_particles)
    r2 = np.random.rand(num_particles)
    velocities = (
        w * velocities
        + c1 * r1 * (personal_best_positions - positions)
        + c2 * r2 * (global_best_position - positions)
    )
    positions += velocities

print(f"Best position found: {global_best_position:.3f}")
print(f"Objective value: {objective(global_best_position):.3f}")
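The example above optimizes a one-dimensional function, but the same loop generalizes to higher dimensions once positions and velocities are stored as vectors. The sketch below is a minimal adaptation under that assumption: it uses a hypothetical two-dimensional quadratic objective (minimum at [3, -1]), stores positions as a (num_particles, num_dims) array, and draws r1 and r2 per particle and per dimension.

import numpy as np

# Hypothetical 2D quadratic objective (minimum at [3, -1])
def objective(x):
    return np.sum((x - np.array([3.0, -1.0])) ** 2, axis=-1)

num_particles, num_dims = 20, 2
num_iterations = 50
w, c1, c2 = 0.5, 1.5, 1.5

# Positions and velocities are now (num_particles, num_dims) arrays
positions = np.random.uniform(-10, 10, (num_particles, num_dims))
velocities = np.zeros((num_particles, num_dims))
personal_best_positions = positions.copy()
personal_best_scores = objective(positions)
global_best_position = personal_best_positions[np.argmin(personal_best_scores)].copy()

for _ in range(num_iterations):
    scores = objective(positions)
    better_mask = scores < personal_best_scores
    personal_best_scores[better_mask] = scores[better_mask]
    personal_best_positions[better_mask] = positions[better_mask]
    global_best_position = personal_best_positions[np.argmin(personal_best_scores)].copy()

    # Random coefficients drawn per particle and per dimension
    r1 = np.random.rand(num_particles, num_dims)
    r2 = np.random.rand(num_particles, num_dims)
    velocities = (
        w * velocities
        + c1 * r1 * (personal_best_positions - positions)
        + c2 * r2 * (global_best_position - positions)
    )
    positions += velocities

print("Best position found:", np.round(global_best_position, 3))
print("Objective value:", round(float(objective(global_best_position)), 3))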

Step-by-Step: Position and Velocity Updates in PSO

  1. Evaluate Each Particle: For every particle, calculate the objective function at its current position;
  2. Update Personal Best: If the current position yields a better score than any previous position, store it as the new personal best;
  3. Update Global Best: Identify the best score among all personal bests and update the global best position accordingly;
  4. Update Velocity: For each particle, calculate the new velocity (the full update equations are given after this list) using:
    • The inertia term (w * velocity), which encourages the particle to keep moving in its current direction;
    • The cognitive term (c1 * r1 * (personal_best - position)), which pulls the particle towards its own best experience;
    • The social term (c2 * r2 * (global_best - position)), which pulls the particle towards the best found by the swarm;
  5. Update Position: Add the new velocity to the current position to obtain the particle's new location in the search space.

Particles repeat these steps for a set number of iterations or until convergence criteria are met.
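In standard notation, steps 4 and 5 correspond to the following update rules, where x_i and v_i are the position and velocity of particle i at iteration t, p_i is its personal best, g is the global best, and r_1, r_2 are random numbers drawn uniformly from [0, 1] at every update:

v_i^{(t+1)} = w \, v_i^{(t)} + c_1 r_1 \left( p_i - x_i^{(t)} \right) + c_2 r_2 \left( g - x_i^{(t)} \right)

x_i^{(t+1)} = x_i^{(t)} + v_i^{(t+1)}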

Example: Visualizing Particle Trajectories

import numpy as np
import matplotlib.pyplot as plt

# Objective function: simple 1D quadratic (minimum at x=3)
def objective(x):
    return (x - 3) ** 2 + 2

# PSO parameters
num_particles = 20
num_iterations = 50
w = 0.5    # Inertia coefficient
c1 = 1.5   # Cognitive (personal) coefficient
c2 = 1.5   # Social (global) coefficient

# Re-run the PSO loop, recording particle trajectories
positions = np.random.uniform(-10, 10, num_particles)
velocities = np.zeros(num_particles)
personal_best_positions = positions.copy()
personal_best_scores = objective(positions)
global_best_idx = np.argmin(personal_best_scores)
global_best_position = personal_best_positions[global_best_idx]

trajectories = [positions.copy()]

for iteration in range(num_iterations):
    scores = objective(positions)
    better_mask = scores < personal_best_scores
    personal_best_scores[better_mask] = scores[better_mask]
    personal_best_positions[better_mask] = positions[better_mask]

    global_best_idx = np.argmin(personal_best_scores)
    global_best_position = personal_best_positions[global_best_idx]

    r1 = np.random.rand(num_particles)
    r2 = np.random.rand(num_particles)
    velocities = (
        w * velocities
        + c1 * r1 * (personal_best_positions - positions)
        + c2 * r2 * (global_best_position - positions)
    )
    positions += velocities
    trajectories.append(positions.copy())

trajectories = np.array(trajectories)

# Plot trajectories
plt.figure(figsize=(10, 6))
for i in range(num_particles):
    plt.plot(trajectories[:, i], label=f"Particle {i+1}", alpha=0.5)
plt.axhline(3, color='red', linestyle='--', label='Optimum (x=3)')
plt.title('Particle Trajectories in PSO')
plt.xlabel('Iteration')
plt.ylabel('Position')
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', fontsize='small')
plt.tight_layout()
plt.show()

Parameter Analysis

The choice of PSO parameters has a significant impact on the algorithm's behavior and performance:

  • Inertia Coefficient (w): Controls how much of the previous velocity is retained;
    • Higher values encourage exploration and help particles escape local minima;
    • Lower values favor exploitation and accelerate convergence.
  • Cognitive Coefficient (c1): Determines how strongly particles are pulled toward their own best positions;
    • Raising this value encourages independent exploration and helps maintain swarm diversity.
  • Social Coefficient (c2): Governs the attraction to the global best position;
    • Increasing this value accelerates convergence by promoting collective learning;
    • Excessively high values can cause premature convergence, with the swarm stagnating around a suboptimal point.

Careful tuning of these coefficients is essential for achieving robust optimization performance in different problem domains. Adjust the parameters to balance exploration and exploitation, ensuring that particles search effectively without getting stuck or converging too quickly.
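A practical way to build intuition for these effects is to run the same loop with several coefficient settings and compare the best objective value each run reaches. The run_pso helper below is a hypothetical wrapper around the 1D example from earlier in this chapter; the function name, the fixed random seed, and the three parameter sets are illustrative choices, not prescribed values.

import numpy as np

def objective(x):
    return (x - 3) ** 2 + 2

def run_pso(w, c1, c2, num_particles=20, num_iterations=50, seed=0):
    """Run the 1D PSO loop from the earlier example and return the best objective value found."""
    rng = np.random.default_rng(seed)
    positions = rng.uniform(-10, 10, num_particles)
    velocities = np.zeros(num_particles)
    personal_best_positions = positions.copy()
    personal_best_scores = objective(positions)
    global_best_position = personal_best_positions[np.argmin(personal_best_scores)]

    for _ in range(num_iterations):
        scores = objective(positions)
        better_mask = scores < personal_best_scores
        personal_best_scores[better_mask] = scores[better_mask]
        personal_best_positions[better_mask] = positions[better_mask]
        global_best_position = personal_best_positions[np.argmin(personal_best_scores)]

        r1 = rng.random(num_particles)
        r2 = rng.random(num_particles)
        velocities = (
            w * velocities
            + c1 * r1 * (personal_best_positions - positions)
            + c2 * r2 * (global_best_position - positions)
        )
        positions += velocities

    return objective(global_best_position)

# Illustrative settings: exploration-heavy, balanced, and exploitation-heavy
for w, c1, c2 in [(0.9, 2.0, 0.5), (0.5, 1.5, 1.5), (0.2, 0.5, 2.5)]:
    print(f"w={w}, c1={c1}, c2={c2} -> best objective: {run_pso(w, c1, c2):.4f}")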
