A word of advice: anytime you see the phrase "Nyquist Theory" on the internet, it's best to disregard the whole thing. 99% of the time it's brought up to prove something it was never intended to provide evidence for. The point of the Nyquist theorem is to give a general guideline for minimum engineering standards in analog-to-digital conversion; it was never meant to be proof of anything beyond that. And its idealized assumptions don't survive contact with reality. For example, the theorem assumes a perfectly band-limited signal, and perfectly band-limited signals do not exist in nature. Therefore you can never apply the theorem blindly to a real system and rely on it to give exact results.
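To see what goes wrong when the band-limiting assumption fails, here's a minimal sketch (the sample rate and tone frequency are just illustrative numbers I picked): a 900 Hz tone sampled at 1 kHz produces samples that are mathematically indistinguishable from a 100 Hz tone. That's aliasing, and it's exactly what happens when content above half the sample rate sneaks past your assumptions.

```python
import numpy as np

fs = 1000.0      # sampling rate, Hz (illustrative)
f_true = 900.0   # tone ABOVE the Nyquist frequency (fs / 2 = 500 Hz)
t = np.arange(0, 1.0, 1.0 / fs)  # one second of sample instants

samples = np.sin(2 * np.pi * f_true * t)

# The same samples could have come from a 100 Hz tone (phase-inverted):
alias = np.sin(2 * np.pi * (fs - f_true) * t)
print(np.allclose(samples, -alias))  # True: 900 Hz folds down to 100 Hz
```

Once the samples are taken, no amount of downstream processing can tell the two tones apart.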
About the only time the Nyquist theorem should come up is when you're designing or implementing an ADC system and want a ballpark figure for the bare minimum sampling frequency you could get away with, in order to keep cost, processing, and storage to a minimum. Even then, the design should be tested against real-world signals to confirm the results match what you expected.
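For that ballpark use, the arithmetic is simple: double the signal bandwidth, then add margin because a real anti-aliasing filter can't cut off instantly at the band edge. A small sketch, with a hypothetical helper and a guard factor I chose for illustration:

```python
# Hypothetical helper for ADC design: Nyquist minimum (2 * bandwidth)
# plus a guard band for the anti-aliasing filter's finite roll-off.
def min_sample_rate(bandwidth_hz: float, guard_factor: float = 1.1) -> float:
    return 2.0 * bandwidth_hz * guard_factor

# Example: a 20 kHz audio band gives a 40 kHz theoretical floor,
# ~44 kHz with margin -- roughly why CD audio settled on 44.1 kHz.
print(min_sample_rate(20_000))  # 44000.0
```

The guard factor is the part the theorem says nothing about; it comes from your filter, your tolerances, and your testing.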
So it's a handy theorem that has its uses. But more often than not, it's abused on the internet to "prove" some poorly thought-out concept concocted by a neophyte with an axe to grind.