I knew digital cameras and phones had to do a lot of processing and other types of magic to output anything human eyes can work with, but I had no idea just how much. This is wild.
So the RAW file was never raw after all? Or does it just include all the raw data and the information needed to do the math?
From what I understand, sensors in cameras do not usually capture what humans can’t see.
It’s RAW in the sense that it gives you the processed pixels intact, without any (lossy) compression.
Have you actually READ the article? And THOUGHT about how things work? The key insight is here: “the camera’s analog-to-digital converter (ADC) output can theoretically output values from 0 to 16382”.
The camera gives you (theoretically) 16382 levels. Your monitor can (theoretically) show 256 levels. 256/16382 ≈ 1.5%.
What does that mean? It means that when a RAW photo is converted to ANYTHING that can be shown on your monitor, more than 98% of the data is ALWAYS lost. Your monitor simply can’t show that many shades of grey.
And the “processing” described in that article explains how the camera picks THE RIGHT 1.5% to show you.
When the math says that more than 98% of the data is lost… why are you surprised that the remaining part is tiny?
1.5% is 1.5%; after all, post-processing can only decide WHICH 1.5% is retained and WHICH 98.5% is thrown away.
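To make the arithmetic concrete, here’s a minimal Python/NumPy sketch (the 14-bit ADC and the 2.2 gamma are assumptions for illustration, not the article’s exact pipeline). Both conversions end up with only 256 output codes; a tone curve just changes WHICH raw levels get merged together:

```python
import numpy as np

# A 14-bit ADC gives 2**14 = 16384 possible levels; an 8-bit display shows 256.
adc_levels = 2 ** 14          # 16384
display_levels = 2 ** 8       # 256
print(display_levels / adc_levels)   # ~0.0156, i.e. roughly 1.5%

# Simulated raw sensor values (made-up data, not from a real camera).
raw = np.random.randint(0, adc_levels, size=(4, 4), dtype=np.uint16)

# Naive linear scaling: every run of 64 adjacent raw levels collapses
# into the same display code.
linear_8bit = (raw // 64).astype(np.uint8)

# A gamma tone curve spends more of the 256 codes on the shadows and
# fewer on the highlights, so a different subset of detail survives.
gamma_8bit = (255 * (raw / (adc_levels - 1)) ** (1 / 2.2)).astype(np.uint8)
```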
zde,
I wouldn’t make the focus so much about throwing away information. Even with arbitrarily high precision, throwing away nothing, we’d still need post-processing to extract calibrated and standardized color information. So I think the better way to think about post-processing is that the raw sensor values and the output values are non-linear with respect to each other. Post-processing maps non-calibrated raw sensor values into a specific standard color space (along with de-noising and, these days, even more advanced AI features like red-eye removal, brightening faces, etc.).
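As a rough illustration of that mapping (not the article’s exact pipeline; the white-balance gains are made up, and using the sRGB transfer function as the target is my assumption), the last step from linear camera values to a standard color space looks roughly like this:

```python
import numpy as np

def srgb_encode(linear):
    """Standard sRGB transfer function: linear light in [0, 1] -> encoded value."""
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1 / 2.4) - 0.055)

# Hypothetical demosaiced, linear camera RGB in [0, 1] (not real sensor data).
camera_rgb = np.array([[0.20, 0.35, 0.10]])

# Per-channel white-balance gains (made-up numbers; real gains come from
# camera metadata or a grey-card measurement).
wb_gains = np.array([2.0, 1.0, 1.5])
balanced = np.clip(camera_rgb * wb_gains, 0.0, 1.0)

# Non-linear encoding into the sRGB standard that displays expect.
srgb = srgb_encode(balanced)
```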
I was basically asking: did he get this data from the RAW file? Or how did he get the data?
I intended those to be two separate statements; I should have made that clearer.
And that a camera also does not capture everything: the sensors are made to capture what humans can see, not other wavelengths.
I think the terminology used by manufacturers is quite vague. RAW could mean unprocessed, or minimally processed, or it could just mean lossless in the sense that it isn’t a lossy format (but the image went through the whole image processing pipeline, which does basically all the stuff in this article).
To manually apply the steps in this article, you need RAW as in “unprocessed” or “minimally processed”.
“RAW” usually and traditionally means “the unprocessed stream of values exactly as they were read from the sensor”. That binary blob of sensor data is then wrapped into a container format with metadata and a preview JPEG rendering.
Lennie,
Generally the “raw” data would be the digital value at the ADC that quantifies how much light a sensor pixel detected, with no post-processing at all. And this differs from camera to camera. As the picture in the article shows, even the raw sensor data (straight after the ADC) plainly shows a recognizable image with no post-processing at all (*).
The adjustments and calibrations help make this raw data match the expected color, noise, intensity curves, etc. In principle this raw data could be used directly by an output renderer like a projector. In practice, though, we need post-processing to convert the component values into the levels/ranges defined by a standard format, which the output device then renders. Getting the right colors is basically a matter of applying response curves to the raw data.
In photography (and computer graphics) we usually pretend there are 3 separate color components, but the physical reality is more nuanced. A yellow light emission is NOT physically identical to a green + red emission, but it just so happens that our eyes pick up yellow as an overlap of red and green colors. Likewise we design color cameras to mimic human color perception, but this mapping is imperfect and so post processing includes some color calibration to make the camera’s colors look good for us.
The color sensors don’t actually have to match our eyes perfectly for images to look OK (a camera with a misadjusted white balance can still take photos that look good on their own); however, as soon as we compare different images side by side, mismatched white balance and hues start calling attention to themselves.
* On my computer I could easily see the room; on my phone, the whole image was black except for the Christmas lights themselves, which goes to show the output displays on our devices are not well calibrated, at least not to tight tolerances.
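To sketch the color-calibration step mentioned above: here’s a hedged example with a made-up 3×3 color correction matrix (real cameras ship per-model matrices measured against reference targets), mapping white-balanced camera RGB into a standard color space:

```python
import numpy as np

# Hypothetical color correction matrix for some camera; the numbers are
# made up, real matrices are measured per camera model. Each row sums to 1
# so neutral greys stay neutral after the mapping.
ccm = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [-0.1, -0.5,  1.6],
])

# White-balanced, linear camera RGB pixels (illustrative values only).
camera_rgb = np.array([
    [0.25, 0.40, 0.15],   # some colored pixel
    [0.70, 0.70, 0.70],   # a neutral grey: should come out unchanged
])

# Map the camera's color responses into the standard space.
calibrated = np.clip(camera_rgb @ ccm.T, 0.0, 1.0)
```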
Seems I messed up my original comment.
I was basically asking: did he get this data from the RAW file? Or how did he get the sensor data?
I intended those to be two separate statements; I should have made that clearer.
And that a camera also does not capture everything: the sensors are made to capture what humans can see, not other wavelengths/parts of the spectrum.
Lennie,
My assumption is he used the raw file that some cameras are able to save. And AFAIK those raw files should contain the raw sensor values without any post processing done to them.
I’ve heard of various software that works with these raw files, although I’ve never felt the need to use it myself.
https://umatechnology.org/6-best-software-to-open-raw-files-view-edit/
It’s less common on phones, but DSLRs support saving to RAW more or less 100% of the time, and that is exactly the uninterpreted sensor data (there are rare exceptions where it might include some post-processing, but that’s not typically the case). Note this is why apps like Photoshop, Lightroom or dcraw have to explicitly support the various camera models/sensors; otherwise there’s no real way to interpret the data correctly.
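If anyone wants to poke at this themselves, here’s a minimal sketch using the rawpy library (Python bindings for LibRaw, the engine behind dcraw-style decoding); the file name is made up, and it only works for camera models LibRaw knows about:

```python
import rawpy  # pip install rawpy; wraps LibRaw, which knows the per-camera formats

# Hypothetical file name; use whatever RAW extension your camera writes (.CR2, .NEF, .DNG, ...).
with rawpy.imread("photo.dng") as raw:
    # The undemosaiced Bayer sensor values, straight out of the container.
    bayer = raw.raw_image.copy()

    # LibRaw's default pipeline: demosaic, white balance, color matrix, gamma.
    rgb = raw.postprocess()
```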