What should you do when there are too many prior algorithms to compare against?
Option 1: If you are lucky and all of them use the same benchmark dataset, you can compare your proposed algorithm on that benchmark.
Option 2: Select representative algorithms as baselines. The baselines should include well-known, state-of-the-art, and top-performing methods. Importantly, the selected baselines should cover methodologically different styles or strategies. Then conduct extensive experiments with a variety of evaluation metrics.
Option 3: Determine the formal global optimum and compare your approach against it. If your approach reaches the optimum, there is no need to compare with local-optimum algorithms at all. This is a very strong and elegant strategy when applicable. Use the following metrics:
1. Gap to optimum
2. Time to reach optimum
3. Stability over multiple runs (for stochastic algorithms)
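The three metrics above can be computed with a few lines of code. This is a hedged sketch: `solve(seed)` is a placeholder for any stochastic solver that returns the objective value found in one run, and `f_opt` is the known global optimum value.

```python
import statistics
import time
import random

def optimality_metrics(solve, f_opt, n_runs=20):
    # Run the solver n_runs times and summarize the three metrics.
    gaps, times = [], []
    for seed in range(n_runs):
        t0 = time.perf_counter()
        f_found = solve(seed)
        times.append(time.perf_counter() - t0)    # time per run
        # Absolute gap to the known optimum (0.0 means optimal);
        # a relative gap is common when f_opt is nonzero.
        gaps.append(f_found - f_opt)
    return {
        "mean_gap": statistics.mean(gaps),        # metric 1: gap to optimum
        "mean_time": statistics.mean(times),      # metric 2: time
        "gap_stdev": statistics.stdev(gaps),      # metric 3: stability
    }

# Illustrative stochastic solver minimizing (x - 3)^2, whose optimum is 0:
def toy_solve(seed):
    rng = random.Random(seed)
    return min((rng.uniform(-10.0, 10.0) - 3.0) ** 2 for _ in range(500))
```

For example, `optimality_metrics(toy_solve, f_opt=0.0)` returns a small mean gap with a per-run standard deviation that quantifies stability.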
If your algorithm consistently reaches or nearly reaches the global optimum:
That is clear evidence that locally optimal algorithms (such as greedy heuristics, GA, or PSO) are unnecessary for comparison, and you can claim your algorithm is globally optimal or near-optimal in practice. You can skip comparing with heuristic/metaheuristic baselines if:
Your algorithm reaches the global optimum in all test cases, or
It comes within a very tight tolerance of the optimum (say, a relative gap of ≤1%) and is significantly faster.
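The two conditions above amount to a simple decision rule. The function below is illustrative: `gaps` holds the relative gap to the optimum per test case, `speedups` holds the per-case speedup of the proposed method over the fastest baseline, and treating a mean speedup above 1.0 as "faster" is an assumed threshold, not a standard.

```python
def can_skip_baselines(gaps, speedups, tol=0.01):
    # Condition 1: the global optimum is reached in every test case.
    all_optimal = all(g == 0.0 for g in gaps)
    # Condition 2: every case is within the tolerance (default 1%)
    # AND the method is faster on average (assumed threshold: mean speedup > 1).
    near_optimal_and_faster = (
        all(g <= tol for g in gaps)
        and sum(speedups) / len(speedups) > 1.0
    )
    return all_optimal or near_optimal_and_faster
```

So exact optimality on all cases qualifies regardless of speed, while near-optimality only qualifies together with a speed advantage.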
This not only saves space and time in your paper, but also strengthens your scientific rigor, since your results are grounded in a provable benchmark.