It seems like you would wind up with compressed highlights that need some kind of gymnastics to show any separation.
Two more things on this idea.
A distinct advantage of compensation (in theory) is that it affects the highlights almost exclusively, leaving the mid and low tones alone. Scenes with long-scale subjects can therefore retain snappy, well-separated shadows and mid-tones and print straight, with no burning or dodging. Worthwhile goals. The trade-off, whether its users admit it or not, is that highlight separation is purposefully sacrificed/compressed: in exchange for "better" mid- and low-tone separation and easy printing, they give up some detail and separation in the highlights.
A distinct limitation of compensation is that in the real world it doesn't always work as advertised. What I mean is this: the toe of our film curve is very well defined in its relationship to what our light meters tell us, but in my years here at APUG I can't remember seeing any numerical x,y data defining where a compensated shoulder starts or ends for any developer/film/time/temp combination, data that would let anyone else truly duplicate the effect, let alone a comparison to the standard curve to show how big the effect is. That raises the question: how would I shoot to purposefully take advantage of that compensated curve, and what E.I. would actually work best? I'm not saying compensation doesn't work, or that it's not real; the theory actually makes sense. What I'm saying is that without hard data it's like grandma passing down one of her recipes over the phone: add a little salt, a few eggs, a couple handfuls of flour, and water until it has a good consistency. The details are sketchy, and my result is probably going to be of lower quality than grandma's.
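To make the "where does the shoulder start and how big is the effect" question concrete, here is a toy sketch of the idea in Python. Everything in it is hypothetical: the logistic curve shape, the `shoulder_start` and `compression` parameters, and all the numbers are illustrative stand-ins, not measurements of any real film/developer combination. It only shows the kind of x,y data that would let someone duplicate a compensating effect: the toe and mid-tones left alone, densities above a stated shoulder point pulled down.

```python
import math

def density(log_e, d_max=2.0, speed=1.0, mid=0.0):
    """Toy characteristic (H&D) curve: density vs. log exposure,
    modeled as a logistic. Parameters are illustrative only."""
    return d_max / (1.0 + math.exp(-speed * (log_e - mid)))

def compensated_density(log_e, shoulder_start=1.0, compression=0.4):
    """Same toy curve, but log exposure above `shoulder_start` is
    compressed, imitating a compensating developer that holds back
    the highlights while leaving toe and mid-tones untouched."""
    if log_e > shoulder_start:
        log_e = shoulder_start + compression * (log_e - shoulder_start)
    return density(log_e)

# Shadows and mid-tones match; only the highlights are pulled down.
for le in (-2.0, 0.0, 2.0, 3.0):
    print(f"log E {le:+.1f}: standard {density(le):.2f}, "
          f"compensated {compensated_density(le):.2f}")
```

A table of numbers like this, measured for a real developer/film/time/temp combination, is exactly the "recipe card" the paragraph above says is missing.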

