I knew digital cameras and phones had to do a lot of processing and other types of magic to output anything human eyes can work with, but I had no idea just how much. This is wild.
What an unprocessed photo looks like

So a RAW file was never raw after all? Or does it just include all the raw data and information to do the math?
From what I understand, sensors in cameras don't usually capture what humans can't see.
It's RAW in the sense that it gives you the processed pixels intact without any (lossy) compression.
Have you actually READ the article? And THOUGHT about how things work? The key insight is here: “the camera’s analog-to-digital converter (ADC) output can theoretically output values from 0 to 16382”.
The camera gives you (theoretically) 16383 levels (0 to 16382). Your monitor can (theoretically) show 256 levels. 256/16383 ≈ 1.5%.
What does it mean? It means that when a RAW photo is converted to ANYTHING that can be shown on your monitor, about 98% of the data is ALWAYS lost. Your monitor simply can't show that many shades of grey.
And the “processing” described in that article explains how the camera picks THE RIGHT 1.5% to show you.
When the math says that more than 98% of the data is lost… why are you surprised that the remaining part is tiny?
1.5% is 1.5%; after all, postprocessing can only decide WHICH 1.5% is retained and WHICH 98% is thrown away.
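To make the “which 1.5%” point concrete, here is a toy sketch (the sample values and the gamma exponent are invented for illustration) of quantizing 14-bit raw levels down to 8 bits, first linearly and then through a gamma-style tone curve that spends the 256 output levels differently:

```python
import numpy as np

# Hypothetical 14-bit raw values (0..16383) from a sensor.
raw = np.array([10, 100, 1000, 8000, 16383], dtype=np.float64)

# Naive linear quantization to 8 bits: dark-level detail collapses to 0.
linear_8bit = np.round(raw / 16383 * 255).astype(np.uint8)

# A gamma-style tone curve (exponent chosen for illustration) spends more
# of the 256 output levels on shadows -- a different "1.5%" is kept.
gamma_8bit = np.round((raw / 16383) ** (1 / 2.2) * 255).astype(np.uint8)

print(linear_8bit)  # shadows crushed toward 0
print(gamma_8bit)   # shadows get more distinct output levels
```

Either way only 256 distinct values survive; the curve just decides where on the brightness axis they are spent.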
zde,
I wouldn't make the focus so much on throwing away information. Even using arbitrarily high precision and throwing away nothing, we'd still need post processing to extract calibrated and standardized color information. So I think the better way to think about post processing is that the raw sensor values and the output values are non-linear with respect to each other. Post processing maps non-calibrated raw sensor values into a specific standard color space (along with de-noising, and these days even more advanced AI features like red-eye removal, brightening faces, etc.).
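As a rough sketch of that mapping (the white-balance gains and the 3×3 matrix below are made-up placeholders, not real calibration data for any camera), a minimal raw-to-standard-color-space conversion might look like:

```python
import numpy as np

# Hypothetical linear camera-RGB pixels, normalized to [0, 1].
pixels = np.array([[0.20, 0.15, 0.10],
                   [0.50, 0.40, 0.30]])

wb_gains = np.array([1.8, 1.0, 1.4])          # per-channel white balance

cam_to_srgb = np.array([[ 1.6, -0.4, -0.2],   # example color matrix:
                        [-0.3,  1.5, -0.2],   # rows roughly sum to 1 so
                        [-0.1, -0.4,  1.5]])  # neutral grey stays grey

# White balance, then map camera RGB into the target color space.
linear = np.clip(pixels * wb_gains, 0, 1) @ cam_to_srgb.T
linear = np.clip(linear, 0, 1)

srgb = linear ** (1 / 2.2)                    # simplified gamma encoding
```

The key point is that every step is a remapping of the non-linear, non-calibrated sensor values, not a recovery of "lost" data.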
I think the terminology used by manufacturers is quite vague. RAW could mean unprocessed, or minimally processed, or could mean just lossless in the sense that it wasn't a lossy format (but the image went through the whole image processing pipeline, which basically does all the stuff in this article).
To manually apply the steps in this article, you need RAW as in “unprocessed” or “minimally processed”.
Lennie,
Generally the “raw” data would be the digital value at the ADC that quantifies how much light a sensor pixel detected, with no post processing at all. And this differs from camera to camera. As the picture in the article shows, even the raw sensor data (after the ADC) plainly shows a recognizable image with no post processing at all (*).
The adjustments and calibrations help make this raw data match the color, noise, intensity curves, etc. In principle this raw data could be used directly by an output renderer like a projector. In practice, though, we need post processing to convert the component values to match the levels/ranges defined in a format that is ultimately rendered by the output device. Getting the right colors is basically a matter of applying response curves to the raw data.
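For a concrete picture of what those per-pixel raw values look like before any of this, here is a toy sketch (the numbers are invented, not real sensor data) of pulling color channels out of an RGGB Bayer mosaic, where each photosite records only one color component:

```python
import numpy as np

# Toy 4x4 "raw" mosaic with an RGGB Bayer pattern (values invented).
raw = np.arange(16, dtype=np.float64).reshape(4, 4)

r  = raw[0::2, 0::2]           # red photosites
g1 = raw[0::2, 1::2]           # green photosites on red rows
g2 = raw[1::2, 0::2]           # green photosites on blue rows
b  = raw[1::2, 1::2]           # blue photosites

# Simplest possible "demosaic": treat each 2x2 cell as one RGB pixel,
# averaging the two green samples. Real pipelines interpolate instead.
rgb = np.stack([r, (g1 + g2) / 2, b], axis=-1)
```

This is why the unprocessed sensor image is already recognizable: the spatial structure is all there, it just hasn't been combined into calibrated color pixels yet.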
In photography (and computer graphics) we usually pretend there are 3 separate color components, but the physical reality is more nuanced. A yellow light emission is NOT physically identical to a green + red emission, but it just so happens that our eyes pick up yellow as an overlap of red and green colors. Likewise we design color cameras to mimic human color perception, but this mapping is imperfect and so post processing includes some color calibration to make the camera’s colors look good for us.
The color sensors don’t actually have to match our eyes perfectly for images to look ok (a camera with a misadjusted white balance can still take photos that look good on their own), however as soon as we compare different images side-by-side mismatching white balance & hues start calling attention to themselves.
* On my computer I could easily see the room; on my phone, the whole image was black except for the xmas lights themselves, which goes to show that the output displays on our devices are not well calibrated, at least not to high tolerances.