Talk:Metaheuristics
Dear Fred and Kenneth,
This is the first time that I have to review something non-anonymously. Hence I must be careful not to be too nasty ;o)
But this is not difficult. It is really a nice and extremely useful introduction to the area and an ideal starting point for new researchers. I have also asked my coworkers to give me comments; I will collect and forward these to you in case I get some. But I found it so interesting that I read it myself immediately. My personal comments are:
GENERAL OBSERVATIONS
While the general structure and writing style is really good, I have two main points:
1. What is the main guiding idea when selecting references? Clearly, the field is huge and one must make a careful selection since completeness is totally impossible. Hence for each topic one can choose (1) the chronologically first reference (to give them credit for this "invention"), (2) the most useful reference (e.g. a well written book where everything is explained very well), or (3) the most recent appropriate reference, book, or survey (providing many other recent references). In most cases, your references are not very new, but often they are also not the chronologically first ones. Maybe you could explicitly state which rule you chose and maybe also change some references.
2. When writing a survey or tutorial, it is most useful if some guidelines are provided, e.g. what seems to work very well, what does not work at all, what should be tried first when designing a metaheuristic, etc. Your article does very little in this respect, except for making clear that you do not like these metaphors (for good reason). When reporting other methods you take a very neutral point of view. This is good in the sense that nobody can be offended, but probably having more guidelines/judgements would be even more useful for newcomers.
DETAILED COMMENTS ON THE CHAPTERS
- Definition:
When reading "Notable examples of metaheuristics include genetic/evolutionary algorithms, tabu search, simulated annealing, and ant colony optimization, although many more exist" I was not so sure why exactly these were mentioned. I agree that GA, TS, SA must be mentioned, but probably VNS or ALNS are more popular or competitive than ACO (even though I like ACO and have worked in this area).
At the end of Definition, you mention EU/ME, and "community1" should probably refer to a missing footnote 1?! Ibrahim Osman would probably be unhappy if you do not mention the term "modern heuristics" that some people (mainly himself) use instead of metaheuristics and maybe also his newsletter/newsgroup. But you must decide…
- Local search metaheuristics:
In general - as there is sometimes confusion in the literature - you could define or distinguish between iterative improvement, local search (LS), and LS metaheuristics. Some use LS as a synonym for iterative improvement, others for LS metaheuristics.
In general, you abbreviate some methods (VNS, ILS) but not all; abbreviations such as LS, SA, TS, GA, etc. are also often used. Apparently you do not like these abbreviations?
I think that, in addition to "called the current solution", you could/should also mention the term "incumbent", which is often used.
"A common search strategies" -> "A common search strategy"
The sentence "Selecting a random improving solution is another commonly used move strategy." is not really clear to me.
When mentioning VNS, I believe it would be good to mention at least the two basic variants VND (where the neighborhoods are searched completely) and VNS, where just random "shaking steps" are performed in these neighborhoods (followed by a LS).
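For illustration, a minimal sketch of the two variants (my own, under the assumption that `neighborhoods` is a list of functions returning the k-th neighborhood of a solution; `local_search` and `f` are placeholders):

```python
import random

def vnd(x, neighborhoods, f):
    """Variable Neighborhood Descent: each neighborhood is searched completely."""
    k = 0
    while k < len(neighborhoods):
        best = min(neighborhoods[k](x), key=f)  # exhaustive search of N_k
        if f(best) < f(x):
            x, k = best, 0                      # improvement: restart from N_0
        else:
            k += 1                              # no improvement: next neighborhood
    return x

def vns(x, neighborhoods, f, local_search, n_iters=1000):
    """Basic VNS: a random 'shaking' step in N_k, followed by local search."""
    for _ in range(n_iters):
        k = 0
        while k < len(neighborhoods):
            x_shaken = random.choice(list(neighborhoods[k](x)))  # shaking step
            x_new = local_search(x_shaken, f)
            if f(x_new) < f(x):
                x, k = x_new, 0
            else:
                k += 1
    return x
```

The point being that VND searches each neighborhood exhaustively, while VNS only draws a random shaking point from it and relies on the subsequent LS.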
In this section "Local search metaheuristics" I was wondering about the length of the paragraphs. TS and SA are explained in great detail (but not the deterministic SA variants great deluge and threshold accepting), VNS only very briefly, and others, like the nowadays very popular and very competitive (A)LNS, are not even mentioned.
Then RINS is mentioned, also in detail, which I had not really heard of before. If mentioned, would it not fit better in "Metaheuristics and exact methods"? As far as I understand, the basic idea of RINS is similar to local branching, relax-and-fix, and fix-and-optimize, all being popular "matheuristics". So why not mention them instead of (or in addition to) RINS?
- Constructive metaheuristics:
First sentence: "…is also called a move". Is this true? I would call it an (iteration) step?!
ACO is again explained in detail (which I like). But it is not mentioned that, after the construction steps, each ant's solution should typically also undergo some LS in order to obtain a competitive method. Other methods like Estimation of Distribution Algorithms or Cross Entropy are not mentioned here. But indeed, they are not very popular for combinatorial problems, even though they have some nice theory behind them.
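Schematically, what I mean is the following (a hypothetical skeleton of my own, not the article's ACO; `construct`, `local_search`, and `update_pheromones` are placeholders):

```python
def aco_with_local_search(n_ants, n_iters, construct, local_search,
                          update_pheromones, f):
    """ACO skeleton where each ant's constructed solution is improved by
    local search *before* the pheromone update -- the step that, in my
    experience, is needed to obtain a competitive method."""
    best = None
    for _ in range(n_iters):
        solutions = []
        for _ in range(n_ants):
            s = construct()          # probabilistic construction from pheromones
            s = local_search(s, f)   # improve each ant's solution by LS
            solutions.append(s)
            if best is None or f(s) < f(best):
                best = s
        update_pheromones(solutions)
    return best
```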
- Population-based metaheuristics:
Here I do not have many comments. Maybe one could mention that
(1) the main difference between an EA and a GA is that in an EA we first have recombination and then selection, while in a GA (as you describe) there is first selection (w.r.t. fitness) and then recombination.
(2) mutation is typically a "small" change
(3) "population management" is very important, i.e., one must - without wasting too much time - make sure that the "best" solutions survive and at the same time that "diversity" is maintained.
(4) most successful GAs in fact use some kind of LS after recombination, for which it is now popular to use the term "education" rather than LS (see the sketch after this list).
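Putting points (1)-(4) together, a minimal steady-state GA skeleton (my own sketch; all operators are hypothetical placeholders):

```python
def genetic_algorithm(init_pop, select, recombine, mutate, educate, f,
                      pop_size=50, n_gens=500):
    """GA skeleton following points (1)-(4): fitness-based selection first
    (the GA order), then recombination, a *small* mutation, 'education'
    (LS) of the offspring, and population management."""
    pop = init_pop(pop_size)
    for _ in range(n_gens):
        p1, p2 = select(pop, f), select(pop, f)  # selection w.r.t. fitness
        child = recombine(p1, p2)
        child = mutate(child)                    # typically a small change
        child = educate(child, f)                # LS / 'education'
        pop.append(child)
        if len(pop) > pop_size:
            pop = manage_population(pop, f, pop_size)
    return min(pop, key=f)

def manage_population(pop, f, pop_size):
    """Naive population management: keep the best solutions but drop
    duplicates to preserve diversity (real implementations would use a
    proper distance measure)."""
    unique = list({repr(s): s for s in pop}.values())
    return sorted(unique, key=f)[:pop_size]
```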
- “Hybrid” metaheuristics:
I agree with "Algorithms belonging to the class of memetic algorithms (the only type of hybrid metaheuristic that has been given a specific name) (Moscato, 1989) combine recombination operators from the class of evolutionary algorithms with local search (meta)heuristics.", but also here one could have a different opinion:
(1) Pablo Moscato would strongly disagree with this characterization. He insists that MA is a much more general concept than GA with LS (even though I have never understood this difference).
(2) As mentioned above, most successful GAs in combinatorial optimization (e.g. the work of Vidal, which has kind of revolutionized the vehicle routing world) do use LS after recombination without using the term MA, calling it a GA (maybe a hybrid GA) instead. In fact, Vidal typically uses DP as the LS/education, which is why this could even be called a matheuristic.
- The metaphor controversy:
This is always fun to read! But you do not mention "galaxy-based search" ;o)
- Metaheuristics and exact methods:
I am not sure whether this should be done, but often “hybrid” metaheuristics and matheuristics are put next to each other or even in the same section, in the sense that a hybrid combines several approaches and a matheuristic is simply the special case of a hybrid in which one of these approaches is an exact method.
Recent literature mainly focuses on MIP-matheuristics, while in my opinion most of these approaches are mainly useful for rapid prototyping. Solving some sub-problem with an exact solver and letting a heuristic select the sub-problems is always very easy to implement but hardly competitive with a tailored metaheuristic. The really competitive matheuristics are those that avoid calling CPLEX/GUROBI etc. very often, e.g. by solving a set partitioning/covering problem from time to time within a metaheuristic. This always works remarkably well and is also very easy.
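As an illustration of this kind of occasional exact step, a minimal sketch using the PuLP modelling library (my own; `columns` would be, e.g., the pool of routes collected by the metaheuristic so far, each represented as a set of customer indices):

```python
import pulp

def set_partitioning_step(columns, costs, n_items):
    """Occasional matheuristic step: from the pool of columns collected by
    the metaheuristic, select a minimum-cost subset covering each item
    exactly once. The MIP solver is called only from time to time, so it
    is not the bottleneck."""
    prob = pulp.LpProblem("set_partitioning", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x_{j}", cat="Binary") for j in range(len(columns))]
    prob += pulp.lpSum(costs[j] * x[j] for j in range(len(columns)))
    for i in range(n_items):
        prob += pulp.lpSum(x[j] for j, col in enumerate(columns) if i in col) == 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [col for j, col in enumerate(columns) if x[j].value() > 0.5]
```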
By the way, the idea of LNS was originally developed by destroying part of the incumbent solution with a heuristic and then reconstructing it with an exact method (constraint programming). This was again a matheuristic in which an exact solver was called very often; this is why it was slow, did not work very well, and everybody ignored it. Only when Ropke and Pisinger replaced the exact CP with heuristics did it work extremely well. The additional adaptation aspect in ALNS is typically more or less useless.
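The resulting (heuristic-repair) LNS skeleton is then as simple as this (a sketch of my own; `destroy_ops` and `repair` are placeholders):

```python
import random

def lns(x, destroy_ops, repair, f, n_iters=10000):
    """Minimal LNS: destroy part of the incumbent, repair it heuristically
    (not exactly, per Ropke and Pisinger), and accept if better."""
    best = x
    for _ in range(n_iters):
        destroy = random.choice(destroy_ops)  # ALNS would adapt these weights
        candidate = repair(destroy(best))
        if f(candidate) < f(best):
            best = candidate
    return best
```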
- Continuous optimization:
Here it could be mentioned that this field has developed completely independently of the metaheuristics in combinatorial optimization. Typically, this field (when the problem is not convex) is called "global optimization". I have not followed this stream of literature lately, but when I looked into it some years ago, it seemed that the most successful approaches were something like ILS or VNS: there are intensification phases where some kind of LS is performed - often trying to approximate derivatives or curvatures to perform a fast steepest descent - and diversification steps similar to shaking in VNS.
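What I remember of these approaches looks roughly like the following ILS-style sketch (my own reconstruction, not a specific method from the literature; `local_descent` stands for, e.g., a quasi-Newton routine):

```python
import random

def continuous_ils(x0, local_descent, f, shake_radius=1.0, n_iters=100):
    """Intensification (gradient-style local descent) alternating with
    random 'shaking' diversification, as in VNS, for continuous problems."""
    best = local_descent(x0, f)
    for _ in range(n_iters):
        x = [xi + random.uniform(-shake_radius, shake_radius) for xi in best]  # shaking
        x = local_descent(x, f)                                                # intensification
        if f(x) < f(best):
            best = x
    return best
```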
- Multi-objective optimization:
Here, my coworkers will probably give some additional feedback, but I believe that (next to SPEA2) NSGA-II (which you do not mention) is by far the most popular and best-performing MO-GA. The others you mention are rather outdated.
- Stochastic optimization:
Also here, many things could be mentioned, e.g. conceptual things like
(1) Computing an a priori solution, which is kept even after the realization of the random variables, vs. re-optimizing every time some random variables are realized.
(2) A chance-constrained approach, where constraints should be satisfied with a certain probability, vs. "real" stochastic optimization, where the expected costs of the recourse actions are evaluated.
Talking about the options "closed-form expression" or "Monte Carlo simulation" is a more technical detail not really related to metaheuristics, and maybe less important than the basic concepts mentioned above.
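For completeness, the Monte Carlo option amounts to a simple sample-average estimate (a sketch; `scenario_sampler` and `recourse_cost` are problem-specific placeholders):

```python
def expected_cost(solution, scenario_sampler, recourse_cost, n_samples=1000):
    """Sample-average (Monte Carlo) estimate of the expected recourse cost
    of an a priori solution, used when no closed-form expression exists."""
    total = 0.0
    for _ in range(n_samples):
        scenario = scenario_sampler()             # realize the random variables
        total += recourse_cost(solution, scenario)
    return total / n_samples
```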
Maybe I get some additional feedback from my coworkers.
I hope you find these comments useful. They partly express very personal opinions. Please let me know if you disagree with some (hopefully not all) of them.