# Example: Using the Benchmarking Feature
JuliaOS includes a comprehensive benchmarking suite for evaluating and comparing swarm optimization algorithms. This feature helps you select the most appropriate algorithm for your specific optimization problems.
```bash
# Start the CLI
./scripts/run-cli.sh   # or node packages/cli/interactive.cjs

# Select "🧬 Swarm Intelligence" from the main menu
# Choose "📊 Benchmark Algorithms"
# Select the algorithms to benchmark (e.g., DE, PSO, GWO, DEPSO)
# Choose the benchmark functions (e.g., Sphere, Rastrigin, Rosenbrock, Ackley, Griewank)
# Set the benchmark parameters (dimensions, runs, etc.)
# Run the benchmark and view the results
```
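For reference, the benchmark functions listed above are standard test functions from the optimization literature. Here is a minimal sketch of two of them in Python, using the textbook definitions rather than JuliaOS's internal implementations:

```python
import math

def sphere(x):
    # Sphere: f(x) = sum(x_i^2); global minimum 0 at x = 0.
    # Unimodal and convex, so it is the easiest of the set.
    return sum(xi ** 2 for xi in x)

def rastrigin(x):
    # Rastrigin: f(x) = 10n + sum(x_i^2 - 10*cos(2*pi*x_i));
    # global minimum 0 at x = 0. Highly multimodal, with many
    # regularly spaced local minima that trap greedy optimizers.
    n = len(x)
    return 10 * n + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)
```

The spread in difficulty is the point: an algorithm that converges quickly on Sphere but stalls on Rastrigin is exploiting gradients rather than exploring, which is exactly what the benchmark comparison is designed to expose.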
The benchmarking CLI provides an interactive interface for:

- Selecting algorithms to benchmark (DE, PSO, GWO, ACO, GA, WOA, DEPSO)
- Choosing benchmark functions with different difficulty levels
- Setting dimensions, runs, and evaluation limits
- Comparing algorithm performance across different metrics
- Generating comprehensive HTML reports with visualizations
- Ranking algorithms based on performance metrics
You can also use the Python wrapper to access the benchmarking functionality:
```python
import asyncio

from juliaos import JuliaOS


async def run_benchmark():
    # Initialize JuliaOS
    juliaos_client = JuliaOS(host="localhost", port=8052)
    await juliaos_client.connect()

    # Run benchmark
    result = await juliaos_client.swarms.run_benchmark(
        algorithms=["DE", "PSO", "GWO", "DEPSO"],
        functions=["sphere", "rastrigin", "rosenbrock", "ackley", "griewank"],
        dimensions=10,
        runs=10,
        max_iterations=1000,
        population_size=50
    )

    # Print results
    print("Benchmark Results:")
    for func_name, func_results in result.items():
        print(f"\nFunction: {func_name}")
        for algo, metrics in func_results.items():
            print(f"  Algorithm: {algo}")
            print(f"    Mean Best Fitness: {metrics['mean_best_fitness']}")
            print(f"    Success Rate: {metrics['success_rate']}")
            print(f"    Mean Iterations: {metrics['mean_iterations']}")
            print(f"    Mean Time: {metrics['mean_time']} seconds")

    # Generate visualization
    visualization = await juliaos_client.swarms.generate_benchmark_visualization(
        benchmark_results=result,
        visualization_type="convergence"
    )

    # Save visualization to file
    with open("benchmark_visualization.html", "w") as f:
        f.write(visualization)
    print("\nVisualization saved to benchmark_visualization.html")

    await juliaos_client.disconnect()

# Run the benchmark
asyncio.run(run_benchmark())
```
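Because the results are keyed first by function and then by algorithm, you can also derive a simple ranking yourself. A minimal sketch, assuming only the `{function: {algorithm: metrics}}` structure printed above (lower mean best fitness is better for these minimization problems):

```python
def rank_algorithms(result):
    # Rank algorithms per function by mean best fitness (lower is better),
    # using the {function: {algorithm: metrics}} structure shown above.
    for func_name, func_results in result.items():
        ranked = sorted(func_results.items(),
                        key=lambda item: item[1]["mean_best_fitness"])
        print(f"{func_name}: " + " > ".join(algo for algo, _ in ranked))
```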
The benchmarking feature provides:

- Comparison of multiple swarm algorithms on standard test functions
- Performance metrics including success rate, convergence speed, and solution quality
- Statistical analysis of algorithm performance across multiple runs
- Visualization of convergence behavior and performance comparisons
- Parameter sensitivity analysis to optimize algorithm settings
- Export of results in various formats (CSV, JSON, HTML); see the sketch below
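For JSON and CSV export in particular, the benchmark results can be flattened with the Python standard library alone. A minimal sketch, again assuming the `{function: {algorithm: metrics}}` structure from the example above; the file paths and helper name are illustrative:

```python
import csv
import json

def export_results(result, json_path="benchmark.json", csv_path="benchmark.csv"):
    # Dump the raw nested results as JSON.
    with open(json_path, "w") as f:
        json.dump(result, f, indent=2)

    # Flatten to one CSV row per (function, algorithm) pair.
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["function", "algorithm", "mean_best_fitness",
                         "success_rate", "mean_iterations", "mean_time"])
        for func_name, func_results in result.items():
            for algo, m in func_results.items():
                writer.writerow([func_name, algo, m["mean_best_fitness"],
                                 m["success_rate"], m["mean_iterations"],
                                 m["mean_time"]])
```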