A word of advice: any time you see the phrase "Nyquist theorem" on the internet, it's best to disregard the whole thing. 99% of the time it's brought up, it's brought up to prove something it was never intended to provide evidence for. The whole point of the Nyquist theorem is to provide a general guideline for minimum engineering standards in analog-to-digital conversion. It was never intended to be proof of anything. It is, by its nature, deeply flawed. For example, the Nyquist theorem assumes a perfectly bandwidth-limited system, and these do not exist in nature. Therefore, under no circumstances can the Nyquist theorem be applied to any system and be relied upon to give accurate results.
About the only time the Nyquist theorem should be discussed is when you're designing or implementing an ADC system and you want a ballpark for the bare minimum sampling frequency you could get away with, to keep cost, processing, and/or storage to a minimum. Even then, the design should still be tested in a real-world scenario to ensure that the results achieved are in line with what was expected.
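To make that concrete, here is a minimal sketch of the kind of ballpark calculation I mean; the numbers are illustrative examples, not a spec for any particular system:

```python
# Ballpark minimum sampling rate for an ADC design (illustrative numbers only).
f_max = 20_000.0   # Hz: highest frequency component we care about
margin = 1.1       # headroom, because real anti-aliasing filters are not brick walls

fs_min = 2 * f_max * margin
print(fs_min)      # 44000.0 Hz, in the neighborhood of the familiar 44.1 kHz CD rate
```

The margin factor is exactly the part the theorem says nothing about; it comes from real-world filter rolloff, which is why the real-world test matters.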
So it's a handy theorem that has its uses. But more often than not, it's abused on the internet to "prove" some poorly thought-out concept concocted by a neophyte with an axe to grind.
You have vastly overstated the case against the Nyquist theorem. You say it is deeply flawed, but in fact it is a rigorous mathematical theorem; no one has ever found a flaw in it.
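For reference, here is what the theorem actually states. If a signal $x(t)$ is bandlimited so that its spectrum vanishes at and above half the sampling rate $f_s$, then $x(t)$ is recovered exactly from its samples by the Whittaker-Shannon interpolation formula:

$$
x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad T = \frac{1}{f_s},\quad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}.
$$

Every statement in it is exact; the "flaws" people run into come from its hypotheses not holding in practice.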
What can be flawed (and often is) is how the Nyquist theorem is used. For example, the theorem provides a limit on which signals can be reconstructed without error: any signal component above the Nyquist limit will show up at a lower frequency in the reconstructed signal, a phenomenon known as aliasing. Saying the theorem provides a limit is not saying that any given signalling system actually achieves that limit. You pointed out one reason it might not be achieved: no perfectly bandwidth-limited system exists in nature. But that is not a fault of the theorem; it is a limitation of the physical systems that handle the signal.
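To illustrate the aliasing point, here is a quick sketch in Python/NumPy; the frequencies are arbitrary example values. A 900 Hz tone sampled at 1000 Hz produces exactly the same samples as a 100 Hz tone:

```python
import numpy as np

fs = 1000.0                           # sampling rate; Nyquist limit is fs/2 = 500 Hz
n = np.arange(50)
t = n / fs                            # sample instants

x_900 = np.cos(2 * np.pi * 900 * t)   # 900 Hz tone: above the Nyquist limit
x_100 = np.cos(2 * np.pi * 100 * t)   # 100 Hz tone: its alias, |900 - fs| = 100 Hz

# Sample for sample, the two are identical: once sampled, the 900 Hz
# component is indistinguishable from a 100 Hz component.
print(np.allclose(x_900, x_100))      # True
```

Nothing in that behavior contradicts the theorem; the 900 Hz tone simply violates its bandwidth hypothesis.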
Another widespread misunderstanding: some people assume that, even if the signal were perfectly bandwidth limited, the sampled values themselves are a good representation of the original signal. That assumption fails when frequency components of the signal are close to the Nyquist limit, even though they are below it; the raw samples of a near-Nyquist tone appear amplitude-modulated even when the underlying signal has constant amplitude. All the Nyquist theorem says is that it is possible to reconstruct the original signal without error, not that the sampled result itself is an adequate reconstruction of the signal. I started a thread on this topic a while back, and from the responses it is evident that a lot of people don't understand that a sampled signal, in and of itself, is not necessarily an adequate reconstruction of the original.
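Here is a sketch of that effect, again in Python/NumPy with arbitrary example frequencies. A 480 Hz tone sampled at 1000 Hz yields raw samples whose envelope appears to beat, yet sinc interpolation recovers the constant-amplitude original, just as the theorem promises:

```python
import numpy as np

fs = 1000.0                        # Nyquist limit is fs/2 = 500 Hz
T = 1.0 / fs
f0 = 480.0                         # below, but close to, the Nyquist limit
n = np.arange(400)                 # 0.4 s of samples
x = np.cos(2 * np.pi * f0 * n * T)

# The raw samples "beat": cos(2*pi*480*n*T) == (-1)**n * cos(2*pi*20*n*T),
# so their envelope dips to zero every 25 ms even though the original
# signal has constant amplitude throughout.

# Whittaker-Shannon (sinc) interpolation on a fine grid, taken from the
# middle of the record to stay away from the edges of the truncated sum:
t = np.arange(0.18, 0.22, T / 20)
x_rec = np.array([np.sum(x * np.sinc(ti / T - n)) for ti in t])

x_true = np.cos(2 * np.pi * f0 * t)
print(np.max(np.abs(x_rec - x_true)))  # small residual, due only to truncating
                                       # the infinite sum to 400 samples
```

The beating envelope in the raw samples is exactly what fools people: all the information needed to rebuild the signal is present, but it only emerges after proper interpolation, not from eyeballing the samples themselves.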
I don't think we have a fundamental disagreement, because you make some valid points. But some who read your post may get the incorrect idea that there is something faulty about the theorem itself, when the fault actually lies in how many people think about, and attempt to use, the Nyquist theorem.