But all is not lost: just because we can't make Wget do everything doesn't mean we can't let it do anything. We'll just have to find something, somewhere between nothing and everything, that will meet a reasonable share of users' expectations.
However, it is the opinion of the current (as of the time of writing) Wget maintainer, MicahCowan, that such a feature should not be an official part of Wget. The reasons: it could be potentially dangerous (infinite loops may be difficult to handle, and malicious authors could more easily craft pages specifically to make Wget misbehave); it is pretty much guaranteed to severely impede Wget's performance even in the best of cases; and it will never be more than a hack, since it can never do all that could be expected of it, and may never even come close.
2. Levels of Support
2.1. Everything, but Everything
2.2. String-Literal URI recognition
Naturally, it would not cover anything that generates links programmatically, and so would be extremely limited. Still, it would require relatively little work to write, would cover a number of simple cases, and would be one of the few approaches simple enough not to significantly slow down operation, all of which makes it an attractive option.
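The idea can be sketched in a few lines (Python here for brevity, though Wget itself is written in C; the function and regex names are illustrative, not anything in Wget): scan the script text for quoted string literals, and keep those that, as a whole, look like URIs.

```python
import re

# Match single- or double-quoted JS string literals (backslash escapes
# handled crudely; a real scanner would follow the ECMAScript grammar).
STRING_LIT = re.compile(r"""(['"])((?:\\.|(?!\1).)*)\1""")

# Heuristic: a literal "looks like" a URI if it is absolute, or ends in a
# common web-document extension. Any real scanner would need a far wider
# (and tunable) heuristic than this.
URI_LIKE = re.compile(
    r"""^(?:https?://\S+|\S+\.(?:html?|php|css|js|png|jpe?g|gif))$""", re.I)

def literal_uris(script_text):
    """Return the string literals in script_text that look like URIs."""
    return [m.group(2) for m in STRING_LIT.finditer(script_text)
            if URI_LIKE.match(m.group(2))]
```

For example, `literal_uris('var next = "page2.html"; el.href = base + "/items/" + id + ".html";')` finds only `page2.html`; the link assembled by concatenation slips through entirely, which is exactly the limitation described above.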
This would be one of the more time- and memory-intensive options, but would provide very reasonable coverage. Even so, it wouldn't cover everything: many sites produce different results depending on the order in which things are clicked, and of course pages obtained via form submissions would not be found.
2.5. Scriptable Engine
3. Other Problems
Of course, with the exception of very simple implementations like the string-literal parser, it's not possible to honor --convert-links, as generated links can't be easily converted in such a way.
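To make the contrast concrete, here is a hedged sketch (again Python, names illustrative) of the one case where conversion *is* feasible: rewriting a string literal that exactly matches a downloaded URL. A link built up by concatenation never matches as a whole literal, so it is necessarily left untouched.

```python
import re

# Match single- or double-quoted JS string literals (simplified).
STRING_LIT = re.compile(r"""(['"])((?:\\.|(?!\1).)*)\1""")

def convert_links(script_text, mapping):
    """Rewrite string literals that exactly equal a downloaded URL.

    mapping: remote URL -> local relative path. Literals that only form a
    URL after runtime concatenation can't match as a whole, so they are
    passed through unchanged -- the failure mode described above.
    """
    def repl(m):
        quote, lit = m.group(1), m.group(2)
        return quote + mapping.get(lit, lit) + quote
    return STRING_LIT.sub(repl, script_text)
```

Given `'a.href = "http://example.com/p.html"; b.href = "http://" + host + "/q.html";'` and a mapping of `http://example.com/p.html` to `p.html`, only the first assignment is rewritten; the second is untouchable without actually evaluating the script.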