There are a lot of studies on emulsions, but research on granularity and research on aging are both less common than research on sensitometry. So I don't have a very good answer to that.
However, if you put a gun to my head and ask for my best guess, I'll tell you the following.
One possibility is a difference in development. In the old days of Panatomic-X, most skilled darkroom workers made rather thin negatives and printed them very carefully. With old emulsions, thinner negatives gave better granularity and also better resolution. This is still true of modern negatives, though less pronouncedly so, and many younger darkroom workers prefer to see dense, robust negatives. Sort of like the difference between old-world wine and new-world wine.
Another possibility is subthreshold fogging. Through slow chemical changes and background radiation, emulsions gradually build up fog, but slow emulsions, especially if kept frozen, take a long time before the fog becomes visible after development. Subthreshold fogging creates very tiny silver specks in the emulsion that can change the way the latent image center is formed on the grain. The size and the location (within a single grain) of the latent image center can have a large influence on the rate of development and on the size of the developed grain. The best way to test this factor would be to test for high-intensity reciprocity failure: you would have had to test the film before and after aging and compare whether the sensitivity changes when the film is exposed with very narrow (1, 10, or 100 microsecond) pulses of light.
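To make the bookkeeping of such a test concrete, here is a minimal sketch, assuming you have a pulsed light source with adjustable irradiance and a densitometer; every number and name in it is a hypothetical placeholder, not measured data. The idea is just that exposure is irradiance times duration (H = E·t), so to hold exposure constant across pulse widths the irradiance must scale inversely with duration, and any density difference between fresh and aged film that grows at the shortest pulses would point to a change in high-intensity reciprocity behavior.

```python
# Hypothetical sketch: compare effective speed of fresh vs. aged film at short pulse widths.
# Assumes you can set pulse irradiance and read developed density with a densitometer;
# the density values below are made-up placeholders, not real measurements.

PULSE_DURATIONS_S = [1e-6, 10e-6, 100e-6]   # 1, 10, and 100 microsecond pulses
TARGET_EXPOSURE = 1.0                        # lux-seconds (arbitrary reference exposure)

def required_irradiance(exposure_lux_s, duration_s):
    """Reciprocity says H = E * t, so hold H constant by scaling E inversely with t."""
    return exposure_lux_s / duration_s

# Placeholder density readings at equal nominal exposure, before and after aging.
density_fresh = {1e-6: 0.62, 10e-6: 0.68, 100e-6: 0.70}
density_aged  = {1e-6: 0.48, 10e-6: 0.60, 100e-6: 0.69}

for t in PULSE_DURATIONS_S:
    e = required_irradiance(TARGET_EXPOSURE, t)
    delta = density_aged[t] - density_fresh[t]
    print(f"pulse {t * 1e6:>5.0f} us: irradiance {e:.2e} lux, "
          f"density change after aging {delta:+.2f}")

# A density drop that grows as the pulse gets shorter (as in these made-up numbers)
# would suggest that aging changed the film's high-intensity reciprocity behavior.
```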