I am no expert on the matter, but my understanding is that the RAW-to-JPEG (or other format) conversion can take place in the camera, or the RAW file can be saved to a computer where, with the proper software, the conversion is performed instead. Doing it on the computer lets one take advantage of the RAW file's larger dynamic range to make corrections and improve the converted image before sending it to the lab. In some cases the RAW file may be sent to a lab so they can do the correction and conversion before printing. All this slows the workflow, so it is best if the photographer simply endeavors to shoot it right to begin with. Does this make sense to you?
Kind of. A digital camera creates a file when you press the shutter, made up of binary code. It's pure information, which is then decoded into a photograph, either in camera or retrospectively by file-conversion software. Editing software lets you do the things traditionally associated with a darkroom: alter contrast, change colours, soften or sharpen, and burn in over-exposed areas, plus a few new tricks. Files are certainly large and getting bigger all the time, but memory and storage keep getting cheaper, so professional cameras and the equipment to make pictures from them cost roughly what they always did.
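To put a rough number on that "larger dynamic range" point: a quick sketch in Python, assuming a typical 14-bit RAW file against an 8-bit-per-channel JPEG (many cameras record 12 or 14 bits; the exact depth varies by model).

```python
# Tonal precision of a typical RAW capture vs. an 8-bit JPEG.
# Bit depths are assumptions: many cameras record 12- or 14-bit RAW;
# JPEG stores 8 bits per colour channel.
raw_bits = 14
jpeg_bits = 8

raw_levels = 2 ** raw_bits    # distinct brightness levels per channel in RAW
jpeg_levels = 2 ** jpeg_bits  # distinct levels per channel in JPEG

# How many RAW levels collapse into each JPEG level on conversion.
levels_lost_per_step = raw_levels // jpeg_levels

print(raw_levels, jpeg_levels, levels_lost_per_step)  # prints: 16384 256 64
```

That factor of 64 is headroom: it is why corrections to exposure and shadow detail are better made on the RAW file before conversion, as the question suggests, rather than on the JPEG afterwards.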
The reasons for choosing one format over another are both practical and sentimental. If your aim is the biggest, sharpest, most clearly defined photograph available, you'll use a large format film camera with the best film and lenses, and absorb the ergonomic and financial penalties that technique incurs. From there begins a trend towards more democratic, versatile equipment that ends in a smartphone camera. Between those two poles exists an array of choices informed by budget, skill, familiarity and aspiration. However, other social factors come into play: who are the photographs intended for, and how are they to be seen (if at all)? The trend is away from the film photography era, in which people generally took a relatively small number of photographs of which a large percentage were turned into solid images, into a digital era where users take vast numbers of shots, of which a tiny fraction end up in print.
There are possibilities and dangers that come with this change. The opportunity to reach a hitherto unimaginable number of viewers has never been greater, nor the equipment to do so more affordable, but along with that exposure comes a general devaluation of the still image as a social artefact. Basically, every photograph used to have totemic status, an heirloom to be passed on and admired. Now even great shots, the stuff of reputations, may be viewed for no more than a few seconds and never made into hard copy. For people who grew up in the film era that change is quite shocking, and many of us yearn for simpler days and more familiar cameras. That affection is not exclusive to oldies like myself, but attracts younger people who quickly come to realise their two-year-old Sony camera ("new" in film terms) may be three or four iterations old and have an uncertain support infrastructure.
We can have the best of both worlds. Photographers like Junku Nishimura shoot on film cameras, develop and print in a traditional darkroom, then scan the final image to reach a wider public than those who can see the work in the flesh. In the end, the only person the choice of process matters to is ourselves, and fortunately we still have a choice.