
Author Topic: What are the essential adjustments that SHOULD be done in the raw processor?  (Read 39326 times)

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland

I really wonder how much mileage there is in trying to assess the quality differences between raw converters by looking at out-of-the-box colour differences between the originating renderings, when you consider that neither is likely to be colour accurate and most of these differences are adjustable anyhow.

Well, it's true that accurate color rendering is a big stretch - but if a converter renders some colors inaccurately, that is a minus for that converter, or it points to the need for a better profile (if the converter supports this) or for some canned color corrections before rendering the image.

Here is a test, again with C1 and LR:



I've compared the colors to the Lab standard, after color balancing first in the raw converter and then fine-tuning on the middle gray.  Where I found that one converter was significantly closer to the correct color I put a tick for that converter, with L at the top, a in the middle, and b at the bottom.

C1 is significantly more accurate on the orange and red, while LR is significantly better on the brown and cream.

So my conclusion is that for very accurate color rendering (given a particular lighting condition) it would be advantageous to fine-tune the colors pre-rendering, either by making a better camera profile than the canned one (C1) or by tweaking the camera calibration (LR).

Quote
I think the far more important factor to look for is whether either converter produces artifacts once the image is magnified to more or less replicate the size at which it would be printed. As for what adjustments to do where, previous advice from experts who tested these alternatives rather carefully suggest that the cleanest editing is performed on the raw data in the raw converter application to the extent it allows.

Yes, well my conclusion is that between LR and C1 there is nothing to choose from a detail/artifact point of view (unless one starts to use the clarity, structure, sharpening and noise reduction adjustments).  Which gets back to my original question: whether or not there is an advantage in making these additional adjustments pre-rendering.  As it becomes very difficult and time-consuming to gauge the benefits or otherwise, I'm hoping that someone with a better technical understanding of raw conversion than mine can shed some light on this.
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill


How do you judge that .... colors were incorrectly rendered? Surely you aren't trying to remember the colors of the scene you shot? The colors are subjective and trying for "reality" is futile and ultimately time wasting.

Well in many cases it's reasonable to say that colors are subjective and the best thing is to adjust to taste.  But if you're trying to accurately reproduce a painting, for example (which is something that I do need to do), then getting the colors as right as possible is very important.  Which, after all, is the whole purpose of color management.

Robert

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8914

The benefit of Raw data is that no irreversible assumptions have been made with regard to image detail and color. For example, until the data is demosaiced we can still improve the radial alignment of lateral CA (which improves the demosaicing result), and while we are in linear gamma it's much easier to adjust things like Exposure and Color, deconvolution Capture sharpening, and noise reduction.

Technically, if we were to process our images in floating-point accuracy, it would be possible (but not necessarily easier) to convert back to linear gamma for those operations that benefit from linear gamma blending/adjustment. But once rendered into a gamma pre-compensated integer data file, we lose precision, especially for stronger adjustments, due to rounding errors that can accumulate.

The benefit of Raw processor engines such as those used in C1 or LR is that they are pretty much parametric processors, and that internally the image data stays in linear gamma space until rendered/exported.

As for differences in color rendering, that's more a profiling issue than a pre-/post-Raw processing issue, as long as colors are blended/interpolated in linear gamma and tonal adjustments are done on luminosity and not chromaticity. As soon as we start with non-linear adjustments, all bets are off, especially in integer gamma-adjusted space and with chromaticity.
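
To make Bart's rounding-error point concrete, here is a toy round trip in Python (the gamma value, edit strength and 8-bit depth are illustrative; real converters use far more sophisticated pipelines). Darkening an 8-bit gamma-encoded image by three stops and brightening it back destroys tonal levels, while the same edits on linear floating-point data survive a single final quantization:

```python
# A sketch (with made-up gamma and edit strengths) of why heavy edits on
# 8-bit gamma-encoded data lose precision, while the same edits on linear
# floating-point data do not: re-quantizing between steps destroys levels.

GAMMA = 2.2

def encode(linear):            # linear [0..1] -> 8-bit gamma-encoded code
    return round((linear ** (1 / GAMMA)) * 255)

def decode(code):              # 8-bit code -> linear [0..1]
    return (code / 255) ** GAMMA

def edit_8bit(code, factor):
    """An exposure-style edit on stored 8-bit data: decode, scale, re-encode."""
    return encode(min(decode(code) * factor, 1.0))

# Darken every level by 3 stops, then brighten back, staying in 8 bits.
stepwise = [edit_8bit(edit_8bit(c, 1 / 8), 8.0) for c in range(256)]

# The same round trip in linear floating point, quantized only at the end.
floating = [encode(min(decode(c) / 8 * 8, 1.0)) for c in range(256)]

print(len(set(stepwise)), "levels survive the 8-bit round trip")
print(len(set(floating)), "levels survive the floating-point round trip")
```

The surviving-level count is a crude stand-in for posterisation: fewer distinct levels after strong push/pull edits means visible banding.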

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

Robert Ardill


The benefit of Raw data is that no irreversible assumptions have been made with regard to image detail and color. For example, until the data is demosaiced we can still improve the radial alignment of lateral CA (which improves the demosaicing result), and while we are in linear gamma it's much easier to adjust things like Exposure and Color, deconvolution Capture sharpening, and noise reduction.

Technically, if we were to process our images in floating-point accuracy, it would be possible (but not necessarily easier) to convert back to linear gamma for those operations that benefit from linear gamma blending/adjustment. But once rendered into a gamma pre-compensated integer data file, we lose precision, especially for stronger adjustments, due to rounding errors that can accumulate.

The benefit of Raw processor engines such as those used in C1 or LR is that they are pretty much parametric processors, and that internally the image data stays in linear gamma space until rendered/exported.

As for differences in color rendering, that's more a profiling issue than a pre-/post-Raw processing issue, as long as colors are blended/interpolated in linear gamma and tonal adjustments are done on luminosity and not chromaticity. As soon as we start with non-linear adjustments, all bets are off, especially in integer gamma-adjusted space and with chromaticity.

Cheers,
Bart

Excellent Bart ... exactly the information I hoped to get (assuming I understand you, that is, which is not a given by any means :)).

So, if I understand you correctly, the following should ideally be done before demosaicing:
- Tonal adjustments
- CA
- Color adjustments
- Deblur (do any of the raw converters currently offer this? I guess not or else you wouldn't be using Focus Magic)
- Color noise reduction
- Luminosity noise reduction (if the raw converter does a decent job of it, which LR doesn't IMO, don't know about C1)

Correct?

As for differences in color rendering, surely the right place to fix this is in the raw converter - not because of technical reasons perhaps, but because the raw converters automatically apply a camera profile on the way to the monitor color space.  Perhaps converters like dcraw are more flexible in that regard, but as far as I can see, with LR and C1 there's no choice.

So is it fair to say that when we evaluate one raw converter against another (leaving aside whether or not their adjustments are parametric, the user interface, how well they fit in to your workflow, etc) we need to look at pretty much all aspects of the image: tone, color, CA, noise, sharpness?  Also, in the same way that the color rendition of one lens may be significantly better than that of another lens, in your experience, have you found that one raw converter gives better color 'rendition' than another?

Cheers,

Robert

sandymc


To pick up the original question: there is one area where there is at least a definite theoretical benefit to working in the raw processor, and that's highlight recovery. This is because the very different channel sensitivities provide the opportunity to do clever things. For a similar reason, there may be benefits as regards noise reduction. Maybe. The rest - not so much, IMHO. Results in any given raw converter may or may not conform to theory, depending on implementation.
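
A deliberately crude sketch of the highlight-recovery idea (the white-balance multipliers and the near-neutral assumption are invented for illustration; real recovery algorithms are far more elaborate): because the channels have different sensitivities, one channel can clip while the others still hold usable data, and the clipped one can be re-estimated from them.

```python
# Toy highlight recovery: if exactly one channel has clipped, estimate it
# from the unclipped channels, assuming the pixel is near-neutral after
# white balance.  All numbers here are hypothetical.

CLIP = 1.0                              # normalized raw clipping level
WB = {"R": 2.0, "G": 1.0, "B": 1.5}     # assumed white-balance multipliers

def recover(pixel):
    """Re-estimate a single clipped channel from the unclipped ones."""
    clipped = [ch for ch, v in pixel.items() if v >= CLIP]
    if len(clipped) != 1:
        return dict(pixel)              # nothing recoverable in this toy model
    ok = [ch for ch in pixel if ch not in clipped]
    # neutral assumption: white-balanced values should match across channels
    est_balanced = sum(pixel[ch] * WB[ch] for ch in ok) / len(ok)
    out = dict(pixel)
    out[clipped[0]] = est_balanced / WB[clipped[0]]
    return out

# Green clips at 1.0 while red and blue still carry highlight detail.
pixel = {"R": 0.55, "G": 1.0, "B": 0.74}
print(recover(pixel))                   # green re-estimated above the clip point
```

This only works on the raw channel data, before everything has been folded into a rendered image - which is sandymc's point about where the theoretical benefit lies.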

Robert Ardill


To pick up the original question: there is one area where there is at least a definite theoretical benefit to working in the raw processor, and that's highlight recovery. This is because the very different channel sensitivities provide the opportunity to do clever things. For a similar reason, there may be benefits as regards noise reduction. Maybe. The rest - not so much, IMHO. Results in any given raw converter may or may not conform to theory, depending on implementation.

I was just reading Capture One Color, and the point is made strongly that the color corrections in the raw converter are carried out in a very large color space (similar to the camera's), which means that a lot of corrections can be made without too much risk of clipping.  So, as the page says:

"This is why it is paramount to perform color corrections and optimizations to images before processing to a smaller color space".

This would be a good reason to do as many of the color adjustments as possible in the raw converter.  I've chosen to work in Adobe RGB, largely because my monitor has 100% Adobe RGB coverage - even though I know that my printer's color space extends beyond it in places.  Any conversion from one color space to another will potentially cause clipping, color shifts, etc., so going from raw to ProPhoto, say, and then converting that to a smaller color space (which is automatic for viewing, clearly) is a bad idea IMO.  So I would make as many of the color adjustments as possible in the raw converter, then go to Adobe RGB, and only make color tweaks in the tiff if necessary.
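
The clipping risk can be shown with a minimal abstract example (this is naive per-channel clipping, not a real ICC conversion with rendering intents): a color that lands outside the destination space loses not just saturation but also its channel ratios, i.e. its hue shifts.

```python
# Abstract illustration (not a real profile conversion): when a color falls
# outside the destination space, naive per-channel clipping changes the
# channel ratios, shifting hue as well as losing saturation.

def clip(rgb):
    return [min(max(v, 0.0), 1.0) for v in rgb]

out_of_gamut = [1.30, 0.50, 0.20]     # as expressed in the smaller space
clipped = clip(out_of_gamut)

ratio_before = out_of_gamut[0] / out_of_gamut[1]   # R:G = 2.6
ratio_after = clipped[0] / clipped[1]              # R:G = 2.0
print(clipped, ratio_before, ratio_after)
```

This is why making the big corrections while still in the large space, and only converting down once, keeps the damage to a single, controlled step.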

Robert

ErikKaffehr


Hi,

Much of the information coming from Phase One does not make any sense to me. I don't even think that it is correct to talk about a camera colour space.

There is something called "Pointer's gamut" that includes all the non-spectral colours occurring in nature, and I see little benefit in an ill-defined colour space describing non-existing colours. ProPhoto RGB contains the whole of Pointer's gamut.

In addition, camera sensors don't really have a colour space in the traditional sense. According to Bruce Fraser, they are sort of colour mixing devices.

Think a little about an RGB colour sensor: it has three humps of sensitivity, for the channels called 'B', 'G' and 'R'. Any signal you get out of a channel is a value integrated across its hump, and on its own that integrated value gives no information about the spectral make-up of the sampled colour.

Check the image below: any red signal between 630 and 750 nm would give a single integrated value, and it would not be possible to deduce the actual colour of the red. Between 560 and 630 nm we would have a signal in both the green and red channels, so we could deduce some information about colour in that range by looking at the ratio of the green signal to the red signal.

This is just a very simple demonstration that the camera's "colour space" is pretty fuzzy; talking about a well-defined camera colour space makes very little sense.
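
Erik's demonstration can be put into a toy calculation (the box-shaped channel sensitivities are made up, not real camera data): two different red spectra integrate to exactly the same sensor triplet, so the camera alone cannot tell them apart.

```python
# Toy version of the argument: the camera only records one integrated value
# per channel, so two different spectra can produce identical responses.
# The sensitivities here are invented box functions, not real camera data.

def sensitivity(channel, wavelength):
    bands = {"B": (400, 500), "G": (500, 600), "R": (600, 750)}
    lo, hi = bands[channel]
    return 1.0 if lo <= wavelength < hi else 0.0

def camera_response(spectrum):
    """Integrate the spectrum against each channel's sensitivity (5 nm steps)."""
    return tuple(
        sum(spectrum(w) * sensitivity(ch, w) for w in range(400, 750, 5))
        for ch in ("R", "G", "B")
    )

def deep_red(w):        # narrow emission line near 710 nm
    return 1.0 if 700 <= w < 720 else 0.0

def other_red(w):       # broader, weaker line near 660 nm
    return 0.5 if 640 <= w < 680 else 0.0

print(camera_response(deep_red), camera_response(other_red))
# both integrate to the same (R, G, B) although the spectra differ
```

Within one channel's hump the sensor is metameric: very different spectra collapse to the same number, which is Erik's point about the "fuzziness" of a camera colour space.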

Best regards
Erik




I was just reading Capture One Color and the point is made strongly that the color corrections in the raw converter are carried out in a very large color space (similar to the camera's) and this means that a lot of corrections can be made without too much risk of clipping.  So, as the page says:

"This is why it is paramount to perform color corrections and optimizations to images before processing to a smaller color space".

This would be a good reason to do as many of the color adjustments as possible in the raw converter.  I've chosen to work in Adobe RGB, largely because my monitor has 100% Adobe RGB coverage - even though I know that my printer's color space extends beyond it in places.  Any conversion from one working space to another will potentially cause clipping, color shifts, etc., so going from raw to ProPhoto, say, and then converting that to a smaller color space (which is automatic for viewing, clearly) is a bad idea IMO.  So, for me, I would make as many of the color adjustments as possible in the raw converter, then go to Adobe RGB, and only make color tweaks in the tiff if necessary.

Robert
« Last Edit: March 16, 2016, 05:17:22 pm by ErikKaffehr »
Erik Kaffehr
 

Robert Ardill


Hi,

Much of the information coming from Phase One does not make any sense to me. I don't even think that it is correct to talk about a camera colour space.

There is something called "Pointer's gamut" that includes all the non-spectral colours occurring in nature, and I see little benefit in an ill-defined colour space describing non-existing colours. ProPhoto RGB contains the whole of Pointer's gamut.

In addition, camera sensors don't really have a colour space in the traditional sense. According to Bruce Fraser, they are sort of colour mixing devices.


I think they mean that they use a very large color space in the same way that LR does, not that they use a camera color space.

earlybird


I have read on a few occasions, here at this forum, that noise reduction would best be performed prior to demosaicing. The idea sure seems compelling.

Is there any application that actually lets you do this or is it currently just a hope that someday there will be?

Thank you.

Bart_van_der_Wolf


I think they mean that they use a very large color space in the same way that LR does, not that they use a camera color space.

Correct, and that may be a better approach than restricting oneself to a fixed working space such as ProPhoto RGB or variation thereof. Even ProPhoto RGB may be inadequate if one increases saturation or shifts to a significantly different White/Color-Balance.

The way I interpret the info from PhaseOne, they use a coordinate space that is flexible, just large enough to accommodate the Raw processing, until the result is exported to a fixed colorspace.

Whether one can call the camera space a colorspace is not that relevant to me, but since they use ICC scene-referred camera color profiles, one could say that the input profile characterizes the colors that the Raw data represents - a sort of colorspace. Since each scene has a different palette of colors, it makes sense to use a camera/color space that is just large enough to encode those color coordinate values, but that can adapt to the larger coordinates required if saturation is boosted. Using a space that is small (but just large enough to encode the available color data) will give the highest numerical precision. A very large colorspace wastes precision through unnecessarily large integer coordinate intervals: one step in a large-gamut space is a larger absolute difference than one step in a smaller-gamut space.
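
Bart's last sentence is easy to quantify. A minimal sketch (the bit depth and coordinate ranges are illustrative, not what Capture One actually uses): with integer encoding, one code step covers a larger absolute interval in a big space than in a space only just large enough for the image's colors.

```python
# Illustrative numbers only: the absolute size of one integer code step
# depends on how much coordinate range the encoding has to cover.

BITS = 16
LEVELS = 2 ** BITS

def step(range_lo, range_hi):
    """Absolute size of one integer code step over a coordinate range."""
    return (range_hi - range_lo) / (LEVELS - 1)

wide = step(0.0, 4.0)     # very large space, mostly unused by this image
snug = step(0.0, 1.2)     # just big enough for the image plus some headroom

print(wide / snug)        # each step in the wide space is ~3.3x coarser
```

So a space sized to the scene's actual palette (plus headroom for adjustments) spends its code values where the image needs them, which is how I read the Phase One description.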

Cheers,
Bart

Bart_van_der_Wolf


Excellent Bart ... exactly the information I hoped to get (assuming I understand you, that is, which is not a given by any means :)).

So, if I understand you correctly, the following should ideally be done before demosaicing:
- Tonal adjustments
- CA
- Color adjustments
- Deblur (do any of the raw converters currently offer this? I guess not or else you wouldn't be using Focus Magic)
- Color noise reduction
- Luminosity noise reduction (if the raw converter does a decent job of it, which LR doesn't IMO, don't know about C1)

Correct?

I think tonal and color adjustments are easier to do after demosaicing (there is no 'color' before that data conversion), but still in linear gamma space. Of course, assigning an input profile is also something of an adjustment, but it just takes whatever the demosaicing will output.

Do note that 'ideally' and 'more practical' can be two different approaches. While something would ideally be done before demosaicing, it may be much easier to implement after demosaicing (and maybe with measurable but insignificant losses). As an analogy, it may be more accurate to disassemble an engine's transmission-block for cleaning, but draining the warmed up oil and letting gravity do most of the work is more practical/efficient.

Quote
So is it fair to say that when we evaluate one raw converter against another (leaving aside whether or not their adjustments are parametric, the user interface, how well they fit in to your workflow, etc) we need to look at pretty much all aspects of the image: tone, color, CA, noise, sharpness?  Also, in the same way that the color rendition of one lens may be significantly better than that of another lens, in your experience, have you found that one raw converter gives better color 'rendition' than another?

A Raw converter's color rendition of a given Raw data file depends very much on profiling and on the built-in 'look' of the Raw converter's profiles. I happen to prefer the Capture One look (and also its less intrusive and more predictable tonality adjustments), and the level of control to adjust it to my liking if needed (with the Color Editor, which can create a modified profile for future use or as a default).

Cheers,
Bart

Bart_van_der_Wolf


Much of the information coming from Phase One does not make any sense to me. I don't even think that it is correct to talk about a camera colour space.

There is something called "Pointer's gamut" that includes all non spectacular colours in nature, and I see little benefit of an ill defined colour space describing non existing colours. Prophoto RGB does contain the whole Pointer's colour space.

Hi Erik,

But after assigning the camera profile to the demosaiced Raw data, one can boost (or reduce) the saturation and even the total color balance of the colors in "Pointer's gamut". The color/working space that Capture One uses internally apparently adapts to match the requirements. It then doesn't waste coordinate precision on non-existing colors, or assign too little space for larger gamuts. I wouldn't call that flexibility ill-defined; it's purposely not fixed but flexible - that's how I interpret the info.

Quote
In addition, camera sensors don't really have a colour space in the traditional sense. According to Bruce Fraser, they are sort of colour mixing devices.

While strictly speaking correct, after assignment of a given color profile it does have a colorspace and a scene-referred gamut.

Quote
Check the image below, any read signal between 630 and 750 nm would give a single integrated value and it would not be possible to deduce the actual colour of the red.

Well, the absence of Blue or Green channel info in that specific sensor response gives a clue that it is Red - just not which/how Red. But that changes with capture (which blurs some of the signal onto neighboring photosites) and demosaicing. If those neighboring photosites also detect no signal, then apparently it was a very deep Red. An OLPF makes it even easier to map to the correct color.

But this is a bit off-topic for the question at hand, Raw- or post-processing.

Cheers,
Bart

bjanes


Quote
In addition, camera sensors don't really have a colour space in the traditional sense. According to Bruce Fraser, they are sort of colour mixing devices.

Bruce Fraser's opinion on whether or not a raw file has a color space is not shared by such authorities as Thomas Knoll, Eric Walowit, and Chris Murphy. See this link. The entire thread appears here.

Doug Kerr has an excellent post explaining why the camera does not have a strictly defined colorimetric color space.

It all depends on how one defines a color space.

Regards,

Bill

Bart_van_der_Wolf


I have read on a few occasions, here at this forum, that noise reduction would best be performed prior to demosaicing. The idea sure seems compelling.

Is there any application that actually lets you do this or is it currently just a hope that someday there will be?

I think you would have to look at the better astronomy software applications (e.g. PixInsight), but they require a lot of calibration and statistical noise modeling and large datasets to do an accurate job, and the resulting differences may be small. In astronomy the small differences are often all they've got, so they tend to delve deep and make each photon (of which there are few) count.

Some of the noise reduction can already be done in hardware (sensor cooling, correlated double sampling, multiple signal read-outs, etc.), but some of the noise is just caused by photon statistics and residual electronic noise, so multiple-image averaging or more elaborate statistical distribution mapping is necessary. The software often needs to allow calculations with high-precision floating-point numbers so as not to waste precision or introduce quantization noise. It is often necessary to treat individual scene shots slightly differently, which makes it a very laborious process.
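
The multiple-image averaging Bart mentions can be illustrated with a small simulation (the Gaussian stand-in for photon shot noise, and all the numbers, are illustrative): averaging N frames reduces the random noise by roughly the square root of N.

```python
# Minimal illustration of multi-frame averaging: photon (shot) noise is
# random per exposure, so stacking 16 frames should cut its standard
# deviation by about a factor of 4.  Crude noise model, not real sensor data.

import random
import statistics

random.seed(42)
TRUE_SIGNAL = 100          # mean photons per exposure at one photosite

def expose():
    # rough Poisson approximation: normal with sigma = sqrt(mean)
    return random.gauss(TRUE_SIGNAL, TRUE_SIGNAL ** 0.5)

def noise_of_stack(n_frames, trials=2000):
    """Standard deviation of the n-frame average, estimated over many trials."""
    stacks = [statistics.mean(expose() for _ in range(n_frames))
              for _ in range(trials)]
    return statistics.stdev(stacks)

single = noise_of_stack(1)
stacked = noise_of_stack(16)
print(single, stacked)     # stacked noise is close to single / 4
```

This is the sqrt(N) law behind astronomical stacking: the signal adds linearly while the random noise adds in quadrature.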

And research is ongoing, so who knows what the future has in store. Hang on to your Original camera Raws, they may allow even better conversions in the future.

Cheers,
Bart

Robert Ardill


I think tonal and color adjustments are easier to do after demosaicing (there is no 'color' before that data conversion), but still in linear gamma space. Of course, assigning an input profile is also something of an adjustment, but it just takes whatever the demosaicing will output.


I think I'm using terms a bit too loosely. What I meant is that ideally it is better (if I understand you correctly) to do as many of the adjustments as possible in the raw converter, and as few as possible after conversion to tiff.  There are a few reasons from an image quality point of view:
- larger color space (pros and cons here)
- linear mode (used internally by the raw processor)
- better CA and fringe correction (resulting in fewer artifacts in the image)
- better for color noise correction
- potentially better for deblur (assuming a good deblur tool in the raw converter)
- potentially better for noise reduction (assuming a good denoise tool in the raw converter)
- ... ?

This assumes that the raw converter actually makes the adjustments on the sensor data directly where at all possible.

Which leads to another question: what image adjustments/corrections are typically done on the sensor data, pre-demosaic?  I would have thought not too many (CA and color noise possibly?).  Would the other adjustments not then be made on the linear, demosaiced image?  If so, then the only advantage of doing something like a saturation adjustment in the raw converter is that the image is still linear and in a large color space.

And how big an advantage is it?  As you point out, it may be measurable but not noticeable ... and against this is the risk of making adjustments that will clip when the image is converted to a working space (aRGB etc.), and of posterisation because of the large color space used.

Which all goes back to my original question.  It makes sense (to me) to make tonal adjustments in the raw converter, but it makes much less sense that it is necessarily better to make color adjustments there.  It might be better to convert to the intended final working space and make the corrections in that space, because a smaller color space will lead to less posterisation.

Of course, if C1 is in fact using a dynamic, image-dependent color space, that would be very good.  But it would have to leave elbow room for the possible adjustments ... or it would have to grow as required.  I would have thought this enlarging would cause degradation (but maybe C1 is working in 32-bit or more?).

It would be good to do some actual comparisons between pre and post conversion to working space ... which is what I tried to do earlier in this topic.  But it isn't an easy thing to do, which is why I'm looking for a theoretical answer :)

Cheers

Robert



Jack Hogan


... which is what I tried to do earlier in this topic.  But it isn't an easy thing to do, which is why I'm looking for a theoretical answer

The theoretical (practical) answer is the one I gave you earlier :)

PS By color profile I mean how the neutral raw data gets converted to a colorimetric color space.  And the answer to your other question is: Correct, in 16 bits for most intents and purposes.
« Last Edit: March 17, 2016, 04:42:17 pm by Jack Hogan »

earlybird


Googled RAW noise reduction and made my way to downloading DxO Optics Pro 10 to try out its Prime noise reduction. It takes a long time to process but the results seem very nice.

The DxO marketing seems to suggest that the process is working on the RAW data, and I am trying to figure out if I am wishful thinking and misinterpreting the product description or if it really is using the RAW data prior to demosaicing.

Are there any other apps that run noise reduction on the RAW data?

Robert Ardill


The theoretical (practical) answer is the one I gave you earlier :)

PS By color profile I mean how the neutral raw data gets converted to a colorimetric color space.  And the answer to your other question is: Correct, in 16 bits for most intents and purposes.

Hi Jack ... yes, yes :); but you and Bart don't seem to be in agreement on this.  Or perhaps you are, in that the benefit of, say, making chromaticity changes before rendering to the chosen working space is generally too small to be noticeable.

What might change that view, though, is repeated changes.  In the raw converter these are presumably all applied at once (perhaps in the same way that multiple layers in Photoshop (may) result in one cumulative change (??)), whereas on the tiff (if the changes are not made using adjustment layers) repeated changes may well result in noticeable degradation.
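
That hunch about repeated changes can be sketched as a toy comparison (the tone curves are invented): a parametric pipeline composes the adjustments mathematically and quantizes once, while destructive editing re-quantizes to 8 bits after every step, so rounding errors accumulate.

```python
# Toy comparison of destructive vs composed editing.  The two invented
# curves cancel mathematically, so any difference left over is purely the
# rounding introduced by quantizing between the steps.

def to8(x):
    return max(0, min(255, round(x)))

def curve_a(v):            # illustrative tone adjustment...
    return v * 0.6

def curve_b(v):            # ...and its mathematical inverse
    return v / 0.6

levels = range(256)

# Destructive: re-quantize to 8 bits between the two steps.
stepwise = [to8(curve_b(to8(curve_a(v)))) for v in levels]

# Parametric: compose the curves first, quantize once at the end.
composed = [to8(curve_b(curve_a(v))) for v in levels]

changed = sum(1 for s, c in zip(stepwise, composed) if s != c)
print(changed, "of 256 levels differ")
```

The composed path returns every level unchanged, while the stepwise path shifts some of them - which is the image-quality argument for parametric raw editing (or for adjustment layers in Photoshop) over repeated baked-in edits.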


BTW ... when you said that the choice of color profile is fiendishly difficult ... what did you mean?  If you meant that it's hard to choose whether to use ProPhoto or BetaRGB or AdobeRGB or sRGB then surely this isn't so hard?

Robert
« Last Edit: March 18, 2016, 04:40:44 am by Robert Ardill »

Robert Ardill


Googled RAW noise reduction and made my way to downloading DxO Optics Pro 10 to try out its Prime noise reduction. It takes a long time to process but the results seem very nice.

The DxO marketing seems to suggest that the process is working on the RAW data, and I am trying to figure out if I am wishful thinking and misinterpreting the product description or if it really is using the RAW data prior to demosaicing.


I've tried DxO Prime on a VERY noisy image (taken at ISO 20,000 and +3EV) and (with the caveat that I've only used Prime once) the results are not exactly impressive compared to Topaz DeNoise, as you can see here:



As you can see, the DxO image appears to have a skin disease, whereas the Topaz image is still noisy but otherwise very clean.  Ignore the colors, as I didn't color balance the Topaz image and just applied a saturation boost after DeNoise to more or less compare like with like.  You may need to right-click and download the image to view it properly.

Robert

earlybird

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 331

I compared with a backlit photo of a deer standing in sawgrass, which I made at ISO 3200 moments before the sun came over the horizon and turned it into a silhouette.  I pushed the exposure 2/3 of a stop in the RAW conversions.

I was fairly impressed with the DxO results.

In this instance I had already prepared a finished picture, based on using Topaz Denoise as a first step after conversion, that I thought I was satisfied with. The new DxO version has replaced it in my master files.

I wasn't as pleased with the color rendition of DxO for my camera and had to use Photoshop to get what I wanted to see, so the dialog about when and where to work with color has been interesting.
« Last Edit: March 18, 2016, 07:53:46 am by earlybird »