On the M8, the coding lets issues like falloff and chromatic aberration be corrected after capture in a lens-specific way. The sensor records the R, G, and B signals separately, so if the camera knows which lens is mounted, it can adjust those channels before interpolating them into the RGB image pixels. That's my understanding of it. The thing is, as long as you store a truly raw file containing the pre-interpolation R, G, and B data, you don't need the coding at all; you can do the same corrections later in software. So frankly I think it's an example of overengineering at a high cost to the consumer.
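To illustrate the point about correcting channels before interpolation, here's a minimal sketch of a lens-specific falloff (vignetting) correction applied directly to Bayer mosaic data, before any demosaicing. The function name, the simple `1 + k*r^2` gain model, and the per-channel coefficients in `k` are all hypothetical stand-ins for whatever a real lens profile would supply; the point is only that such a step needs raw per-channel data plus a lens profile, and can run in the camera or later on a computer.

```python
import numpy as np

def correct_falloff(bayer, pattern="RGGB", k=(0.30, 0.25, 0.35)):
    """Apply a radial falloff gain to raw Bayer data, per color channel.

    `k` holds hypothetical per-channel falloff coefficients (R, G, B);
    the gain model is a simple 1 + k*r^2 polynomial, with r normalized
    so that r = 1 at the image corners.
    """
    h, w = bayer.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = ((ys - cy) ** 2 + (xs - cx) ** 2) / (cy ** 2 + cx ** 2)

    # Map each Bayer site to its channel index (0=R, 1=G, 2=B).
    chan = np.empty((h, w), dtype=int)
    layout = {"RGGB": [[0, 1], [1, 2]], "BGGR": [[2, 1], [1, 0]]}[pattern]
    chan[0::2, 0::2] = layout[0][0]
    chan[0::2, 1::2] = layout[0][1]
    chan[1::2, 0::2] = layout[1][0]
    chan[1::2, 1::2] = layout[1][1]

    # Gain grows toward the corners, compensating the lens's light falloff.
    gain = 1.0 + np.choose(chan, k) * r2
    return bayer * gain

# Uniform gray mosaic: corners get the largest boost, center is near-unchanged.
raw = np.full((6, 6), 100.0)
out = correct_falloff(raw)
```

Because each Bayer site belongs to exactly one channel, the correction can use a different coefficient for R, G, and B (useful since falloff and color shifts are wavelength-dependent), which is exactly what gets harder once the data has been interpolated into full RGB pixels.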