More than that, it completely ignores the noise factor, and it compares a wave observed over time with a wave captured whole in a single instant. Whatever the differences between analog and digital may be, this chart doesn't explain them.
I know the referenced post was deleted but just one fine point of order. Sound and Light propagate by rather different means. Sound propagates by compressing the medium in which it exists. Light follows - IIRC - Maxwell's Equations and is a form of electromagnetic radiation. That's why there is light in the vacuum of space, but no sound.
In reference to this thread topic - it makes no difference how light is captured - analog, digital, a combination ... The light arrives in exactly the same way, all other things being equal. But, aye, there's the rub - they aren't quite "equal" even if you assume a theoretically perfect taking lens for both capture paths.
With film, you have a single optical path - the taking lens - where things like diffraction come into play. And, of course, you have to worry about the post-capture optical path for sharpness - either the enlarger reproduction chain or the scanner optics.
With digital, the sensor itself adds optics: you can think of it as an array of tiny "lenses", one per pixel, capturing the light delivered by the taking lens. These become relevant in the question of diffraction. This is why diffraction effects show up at wider apertures (lower f-numbers) with digital than with film, and why early point-and-shoot digital cameras set their default shooting aperture around f/5.6 or so. These days, a lot of this has been remediated by better sensor design and in-camera digital post-processing, but the underlying physics doesn't change.
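To put a rough number on the diffraction point, here's a back-of-the-envelope sketch (mine, not from the original discussion): the Airy disk diameter is about 2.44 · λ · N, and a common rule of thumb says diffraction becomes visible once that disk spans a pixel or two. The pixel pitches and the two-pixel threshold below are illustrative assumptions, not measured values.

```python
AIRY_COEFF = 2.44  # Airy disk diameter ~ 2.44 * wavelength * f-number

def airy_disk_um(f_number, wavelength_nm=550):
    """Airy disk diameter in micrometres (green light by default)."""
    return AIRY_COEFF * (wavelength_nm / 1000.0) * f_number

def diffraction_limited_f(pixel_pitch_um, pixels_per_disk=2.0, wavelength_nm=550):
    """F-number at which the Airy disk spans `pixels_per_disk` pixels."""
    return (pixels_per_disk * pixel_pitch_um) / (AIRY_COEFF * wavelength_nm / 1000.0)

# Compare an assumed small-sensor compact (~1.5 um pixels) with an
# assumed full-frame sensor (~6 um pixels):
for pitch in (1.5, 6.0):
    n = diffraction_limited_f(pitch)
    print(f"{pitch} um pixels -> diffraction visible around f/{n:.1f}")
```

The tiny pixels of a compact camera hit this limit only a few stops down from wide open, while larger pixels give you several more stops before the Airy disk matters - consistent with small-sensor cameras defaulting to modest apertures.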
In many respects, the difference comes down to film being asked to capture/record what is there, whereas digital - with its many sophisticated in-camera optimization algorithms - is, to one degree or another, constructing an image of what is there. Post-processing tools like Topaz AI then take this "construction" of the image to a whole new level, using a battery of machine learning tools to "fix" or "improve" the image toward what it should have been, irrespective of what was actually captured.
I say this without judgement, only to note that they are rather different approaches to capturing the same light.
P.S. All of this isn't new to digital, BTW. Many years ago I worked in the music recording business and one of our competitors was recording a famous classical pianist's new album. He had a particularly bad session but had to leave to perform with the local orchestra that night. The recording engineer told him "Don't worry about it. We'll take the best bits from each of the many takes today, and splice them together to create one perfect performance." In those days "splice" was literal. You copied the "best bit" onto new (analog) tape, and literally glued these bits of tape together, end-to-end, to make a new master recording.
When the performer heard the final, now flawless master, he was really impressed. The engineer looked at him and said (allegedly), "Yes, it's really great. Don't you wish you could actually play that way?"