
Author Topic: New color chart for high quality profiling in captures  (Read 2954 times)

Doug Gray

Re: New color chart for high quality profiling in captures
« Reply #60 on: July 11, 2019, 09:37:49 pm »

Absolutely agree on all points. Hugo might wish to see what two of us on the ICC photography committee had to say on this in a white paper aimed at photographers:
http://www.color.org/ICC_white_paper_20_Digital_photography_color_management_basics.pdf

Yeah, surprisingly little awareness of this in the general photography community.

This is why I roll my eyes when people talk about "accuracy" and standard, output referred images in the same breath. With printers one can talk about accuracy, but even then there's Perceptual intent. Still, the changes PI induces are small compared to the much larger ones typical of capture devices using output referred profiles. And that's quite apart from metameric shifts from Luther-Ives (L/I) deviation in the CFA. A whole other can of worms.

Alexey.Danilchenko

Re: New color chart for high quality profiling in captures
« Reply #61 on: July 12, 2019, 04:50:29 am »

Well, I think a pro photographer and/or a color management technician can do some things to achieve quite a fine reproduction without all that. Just with a good camera, good lighting, and a good chart and software.
How many photographers or color management consultants actually have such equipment?
I don't want to seem ironic, but if we follow your recommendations, what would be the next step? Call NASA?

Photographers went down that road for quite a few years.

And taking reliable spectral curves with even a modest (all-manual) setup is much simpler than getting target shots right - I posted that somewhere on the dcamprof thread here a few years ago. Then Iliah Borg and I worked on making an automated solution to do this (see here if interested - the project is still ongoing). The end result is that you can build a high quality tuned profile without reshooting your target under the required lighting conditions. Bottom line - there is no universal target for all cases that will allow you to correctly approximate sensor response adjustments and behavior in any given lighting (and that is what a profile does).

And last, given your reply above, I'd like to quote your own earlier reply to Andrew right back at you:

The problem with those who think they know everything is that it is hard to make them understand they don't.
« Last Edit: July 12, 2019, 05:27:07 am by Alexey.Danilchenko »

Guillermo Luijk

Re: New color chart for high quality profiling in captures
« Reply #62 on: July 12, 2019, 09:37:09 am »

By definition output referred profiles distort an image, typically increasing contrast in the midrange while decreasing it at the high and low Ls. Scene referred profiles also have the intrinsic limitation of the output medium's dynamic range.

Hi Doug, not sure I understand what you mean here by distorting the original image. Let's assume our output device can handle, without any limitation, both the dynamic range and the colour gamut of the captured scene, together with a properly profiled camera (let's forget the method; we are just certain we can convert its RAW values into exactly the colour values a spectrophotometer would measure) and a properly calibrated output device (again, let's forget the method; this monitor/projector/print can simply render the colours exactly as they are defined in the PCS).

Why should there be any difference in contrast or colours between looking at the original scene and looking at the output device's render? I know it's a hypothetical scenario; I just want to understand your claim of image distortion.

Regards

digitaldog

Re: New color chart for high quality profiling in captures
« Reply #63 on: July 12, 2019, 09:48:18 am »

I'll let Doug express his idea of 'distort', but the ICC white paper describes it this way:


In technical jargon, the measured scene color the camera captures is known as Scene-Referred. Since we need to view this image on something like a display or a print, it's usually necessary to make the image appear more pleasing on the output device and to produce the desired color appearance the image creator wishes to express and reproduce. These image colors are known as Output-Referred. The need to fit the color gamut and dynamic range of the scene-referred data to output-referred data is called rendering.
Andrew Rodney
Author “Color Management for Photographers"

Alexey.Danilchenko

Re: New color chart for high quality profiling in captures
« Reply #64 on: July 12, 2019, 10:02:10 am »

Why should there be any difference in contrast or colours between looking at the original scene and looking at the output device's render? I know it's a hypothetical scenario; I just want to understand your claim of image distortion.
It's not so much a difference between the original scene and the output, but between the original scene as captured by the camera and the output.

Guillermo Luijk

Re: New color chart for high quality profiling in captures
« Reply #65 on: July 12, 2019, 10:08:31 am »

I'll let Doug express his idea of 'distort', but the ICC white paper describes it this way:


In technical jargon, the measured scene color the camera captures is known as Scene-Referred. Since we need to view this image on something like a display or a print, it's usually necessary to make the image appear more pleasing on the output device and to produce the desired color appearance the image creator wishes to express and reproduce. These image colors are known as Output-Referred. The need to fit the color gamut and dynamic range of the scene-referred data to output-referred data is called rendering.
Thanks Andrew, I read the paper before asking Doug. However this doesn't answer the question of why the render should get any distortion vs the original image, assuming the output device can handle both the DR and colour gamut of the original scene and that our whole workflow was aimed at accurate reproduction. In other words, why can't I look at a printed copy of a colour chart I captured with my profiled camera/scanner and then printed with my properly profiled printer? Why should I look at them both and see any contrast or colour distortion?

EDIT: sorry, my fault, that's understood and agreed. I re-read what Doug said; I thought he said: "By definition output scene referred profiles distort an image ...".

Regards
« Last Edit: July 12, 2019, 10:15:11 am by Guillermo Luijk »

digitaldog

Re: New color chart for high quality profiling in captures
« Reply #66 on: July 12, 2019, 11:03:17 am »

Thanks Andrew, I read the paper before asking Doug. However this doesn't answer the question of why the render should get any distortion vs the original image, assuming the output device can handle both the DR and colour gamut of the original scene and that our whole workflow was aimed at accurate reproduction.
That is of course possible, but unlikely in many situations. This is well illustrated, visually and textually, in this white paper by Karl Lang:
http://www.lumita.com/site_media/work/whitepapers/files/pscs3_rendering_image.pdf
Again, the term rendering is used and I like that term, but "distorting" could certainly be another way of putting it.  ;)

Guillermo Luijk

Re: New color chart for high quality profiling in captures
« Reply #67 on: July 12, 2019, 11:22:31 am »

That is of course possible, but unlikely in many situations. This is well illustrated, visually and textually, in this white paper by Karl Lang:
http://www.lumita.com/site_media/work/whitepapers/files/pscs3_rendering_image.pdf
Again, the term rendering is used and I like that term, but "distorting" could certainly be another way of putting it.  ;)
I'll give it a careful read, but I have a feeling the text focuses on this assumption (taken from the text): "The reality is that light levels in a natural scene can't be reproduced using any current technology and certainly not in a print."

I wouldn't agree with that unless the author had added some 'in most cases', but he didn't. I started thinking about these matters years ago when processing HDR captures, and concluded that the key to HDR imaging is not the capture at all (which can easily be solved with just a couple of shots), but the dynamic range of the output device on which the images are to be viewed. If our output device can provide both the real contrast (dynamic range) and the genuine colours (gamut) present in the original scene, why shouldn't I be able to render that scene on the output device in a way nearly indistinguishable to my eye?

Take a foggy day, to ensure low contrast and dim colours:

[image: foggy, low-contrast landscape photo]

If my workflow is aimed at getting exactly the same L and colours for every area of the scene, why can't I get a print, a projection, or a monitor view with exactly the same colours and contrast I saw with my eyes in the foggy landscape? Note I don't mean it will always be possible, nor that it will be the most pleasing image. I just disagree with the "light levels in a natural scene can't be reproduced" generalization. It basically depends on how powerful, in terms of contrast and colours, our output device is. This could be totally off-topic, but it also leads me to think that the best way to look at photographs is not a print (a highly limited output device) but much more capable active devices such as large-gamut, high-resolution screens.

Regards

PS: sorry for the off-topic, Hugo

« Last Edit: July 12, 2019, 11:34:30 am by Guillermo Luijk »

digitaldog

Re: New color chart for high quality profiling in captures
« Reply #68 on: July 12, 2019, 11:34:09 am »

Check figure 3.
This represents a scene dynamic range of 100,000:1.

Guillermo Luijk

Re: New color chart for high quality profiling in captures
« Reply #69 on: July 12, 2019, 11:36:26 am »

Check figure 3.
This represents a scene dynamic range of 100,000:1.
That scene can't be represented on any current output device. So what? Other scenes can, and I gave a counterexample that makes the generalization false.

The author says: "Scene dynamic range is the key concept to understand", but this is not accurate. The key concept is whether the scene dynamic range can be captured, and whether it fits into the output device's dynamic range.
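Guillermo's fit criterion is easy to put in numbers: express each contrast ratio in stops (log2 of the ratio) and check whether the scene fits the device. A minimal sketch; the foggy-scene and glossy-print ratios below are illustrative assumptions, while the 100,000:1 scene and the 10,000:1 display come from the thread.

```python
import math

def stops(contrast_ratio):
    """Dynamic range expressed in photographic stops (doublings of light)."""
    return math.log2(contrast_ratio)

def fits(scene_ratio, device_ratio):
    """The fit test: the scene's dynamic range is within the device's."""
    return scene_ratio <= device_ratio

print(f"{stops(100_000):.1f} stops")  # Karl Lang's figure-3 scene: 16.6 stops
print(f"{stops(10_000):.1f} stops")   # a 10,000:1 display: 13.3 stops
print(fits(100, 287))        # assumed ~100:1 foggy scene on an assumed ~287:1 print: True
print(fits(100_000, 10_000))          # the figure-3 scene on that display: False
```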

Regards
« Last Edit: July 12, 2019, 11:40:06 am by Guillermo Luijk »

digitaldog

Re: New color chart for high quality profiling in captures
« Reply #70 on: July 12, 2019, 11:44:05 am »

That scene can't be represented on any current output device. So what?
So it has to be rendered to 'fit' any number of current output devices, and as such it's output referred, not scene referred.
As I said, there are obviously scenes that can 'fit' some/all output devices. They may be scene referred, but does the image creator like the rendering? In the ICC white paper, you see three examples of his family provided by my co-author, Jack Holm. And the scene referred image isn't 'pleasing', is it? Jack (or the camera producing the JPEG in sRGB) provided two different renderings. Neither can be said to be colorimetrically accurate. They are subjectively better looking, right? They are therefore output referred.

We might (someday) have a camera system that can capture the entire DR of Karl's scene, and a display or print that could show it to us scene referred. But what IF the rendering isn't pleasing, or isn't what the image creator wishes to express? That's the crux of Karl's article: rendering the print image.
« Last Edit: July 12, 2019, 11:47:11 am by digitaldog »

Guillermo Luijk

Re: New color chart for high quality profiling in captures
« Reply #71 on: July 12, 2019, 11:59:41 am »

there are obviously scenes that can 'fit' some/all output devices. They may be scene referred, but does the image creator like the rendering?
This is what I'm saying, Andrew, so the generalization we can read in the paper is false.

And regarding the pleasantness of the pictures: if a render with the same contrast and colours as the original scene, i.e. a totally accurate reproduction, looks less pleasing to someone than some processing derived from it, that's fine, but it's a totally subjective matter. It simply means the rendering process managed to enhance what our eyes saw in the scene, but it doesn't change the fact that the less pleasing image is closer to what we experienced in front of the subject.

Regards

Iliah

Re: New color chart for high quality profiling in captures
« Reply #72 on: July 12, 2019, 12:14:06 pm »

Scanners operate under a fixed light; cameras don't. Shooting complex targets under some fixed light (unless the same light is used to capture the scene) is a very limited and obsolete idea. Profiling from spectral response curves solves the issues caused by different lighting conditions much better.
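A rough sketch of the spectral approach Iliah describes, profiling from response curves rather than a target shot: given camera sensitivities, an observer, an illuminant, and some reflectances (all placeholder random arrays here, not real measurements), a camera-to-XYZ matrix falls out of a least-squares fit, and swapping the illuminant re-profiles without reshooting anything.

```python
import numpy as np

# All spectral data below are placeholder stand-ins (assumptions), sampled
# at the same 31 wavelengths (e.g. 400-700 nm in 10 nm steps).
rng = np.random.default_rng(0)
n_bands, n_patches = 31, 24
cmf = rng.random((3, n_bands))           # CIE XYZ observer functions
cam = rng.random((3, n_bands))           # camera RGB spectral sensitivities
illum = rng.random(n_bands)              # illuminant spectral power
refl = rng.random((n_patches, n_bands))  # patch reflectance spectra

stimuli = refl * illum      # light reflected toward camera and observer
xyz = stimuli @ cmf.T       # colorimetric ground truth per patch
raw = stimuli @ cam.T       # what the camera would record
# Least-squares 3x3 camera->XYZ matrix for this illuminant; recompute with
# a different `illum` to re-profile without a new target shot.
M, *_ = np.linalg.lstsq(raw, xyz, rcond=None)
print(M.shape)  # (3, 3)
```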

Some questions:

Profiling software is optimized for certain types of targets (unless you are using Argyll, and even then it is somewhat moot).

The number of patches (and especially the number of patches in gradients) is not a criterion of quality; the independence of the patches is. This independence correlates with the number of spectrally different pigments in use. How many spectrally different pigments are there in Epson ink? Five?

What's the point of having "neutral" patches around the target when no existing software knows their locations or can use them for flat-fielding?

What's the point of having very dark "neutral" patches when they have a non-Lambertian reflection profile and don't allow one to establish the black point?

Do you know if the profiling software of your choice relies on neutral patches being spectrally flat?

How does this target deal with metameric failure issues, especially given that the solutions hard-wired into profiling software can't deal with unknown targets?


digitaldog

Re: New color chart for high quality profiling in captures
« Reply #73 on: July 12, 2019, 12:22:44 pm »

This is what I'm saying, Andrew, so the generalization we can read in the paper is false.
Which paper and what is false?
By definition, scene referred means that the colors measured at the scene (if measured) and those that are output colorimetrically match. Lab in (measured scene color) = Lab out. When people (like Doug) speak of color accuracy, it had better be that case; otherwise how can we define accuracy? Scene referred is colorimetrically accurate, but it may not be pleasing or produce a match to something else.

As those of us who have tried doing reproductions digitally know, it's really, really hard work. A scene referred image may not match the original. It may require editing to produce that match, and it's then no longer scene referred. It's not a colorimetrically accurate reproduction, but it may produce a visual match. We know about metameric failures, which can take place in various areas of this reproduction chain (the camera, what's seen on a display, what's output on a printer).

Say the goal is a visual match, but some of the measured color (of, say, a painting) has to differ to produce that visual match: then it's not scene referred any longer.

It can still be scene referred but not pleasing, because there's a mismatch somewhere; and when we edit that area of the image to produce a match, it's no longer scene referred. It's been rendered with the goal of a match.
Quote
if a render with the same contrast and colours as the original scene, i.e. a totally accurate reproduction, looks less pleasing to someone than some processing derived from it, that's fine, but it's a totally subjective matter.
Yes, no question. But it is output referred IF, in order to get the pleasing result, a match, the image had to be edited. It is no longer scene referred. And that's not a problem; it's the solution.
Again, when discussing the two terms (scene or output referred), each is defined by whether the measured color and the resulting color have the same or differing measured values.

hurodal

Re: New color chart for high quality profiling in captures
« Reply #74 on: July 12, 2019, 01:07:16 pm »

It might surprise you, but even highly visible moire, with the exception of long-wavelength moire, has little to no effect on the patch averages. The physics demands it. Even in the case of longer-wavelength moire, using a Hann window is effective at measuring the light consistently. One just has to keep the moire at relatively high frequencies, say a wavelength of 10 pixels or less for a reasonably high-res monitor. It's usually much smaller than that.
Well, yes, it surprises me. Anyway, I don't feel very confident capturing a screen while I can still see patterns of different frequency depending on the distance I set the focus to.

Flatbed. Large area crosstalk is intrinsic to the design of virtually all of them.

Crosstalk is a term that doesn't exist in Spanish, and the direct translation doesn't make sense, so excuse me if I still don't get the idea. Can you describe the issue precisely?

There are a few areas aside from repro that use scene referred profiles. Biology and scientific work, yes. Product and fashion photography, not so common. By definition output referred profiles distort an image, typically increasing contrast in the midrange while decreasing it at the high and low Ls. Scene referred profiles also have the intrinsic limitation of the output medium's dynamic range. And people are used to seeing photos that are made that way. To the point that a colorimetrically accurate print will look unattractive next to a standard print made using an output referred process. Just look around and see just how little scene referred profiles are used.

In Spanish we don't know or use the terms 'scene-referred' and 'output-referred', but I know what you mean.
When you say that output-referred profiles 'distort', I get the point. That's a linear-response develop, without any kind of curve to make it more appealing, right?
In product and e-commerce fashion photography (specifically, shots for the brand's website, not those that usually appear in fashion magazines) I explain to my customers how to use my output-referred profiles, and they usually use them in plain shots (those where you can see the garment alone over a flat background).
But when the model wears the garment, they prefer to add the standard curve (output-referred) and keep using the profile. Sometimes, especially with difficult colors, they combine two develops: one for the model and the entire scene, and another (output-referred) for the garment. Then they combine both shots into one image.
For them, that means getting accurate color.
Hugo Rodriguez

Coloratti member | PhaseOne certified professional | BenQ Ambassador

digitaldog

Re: New color chart for high quality profiling in captures
« Reply #75 on: July 12, 2019, 01:22:09 pm »

In Spanish we don't know or use the terms 'scene-referred' and 'output-referred', but I know what you mean.
Do you have terms in Spanish for "Relative Colorimetric" or "Black Point Compensation" or "Profile Connection Space"? Scene Referred and Output Referred are no different: industry-accepted terms, some 'invented' by the ICC, etc.
Quote
When you say that output-referred profiles 'distort', I get the point. That's a linear-response develop, without any kind of curve to make it more appealing, right?
The terms are defined in the white paper and the text below.
Quote
In product and e-commerce fashion photography (specifically, shots for the brand's website, not those that usually appear in fashion magazines) I explain to my customers how to use my output-referred profiles, and they usually use them in plain shots (those where you can see the garment alone over a flat background).
IF the numbers captured and/or output don't produce the same measured values as the scene, due to editing of the numbers, it's output referred. It's as simple as that. You don't have to go into curves, gamut, etc. You measure a color at the scene using, say, a spectrophotometer and end up with spectral data, or perhaps a conversion to Lab. You do the same on whatever is the result of that capture, and if they are the same values, if the numbers haven't been edited for a preferred rendering, it's scene referred.

The ICC states this rather clearly: (formatting to focus attention on the key points)
A scene-referred image is an image where the image data is an encoding of the colors of a scene (relative to each other), as opposed to a picture of a scene. In a picture, the colors are typically altered to make them more pleasing to viewers when viewed using some target medium.

Quote
But when the model wears the garment, they prefer to add the standard curve (output-referred) and keep using the profile. Sometimes, especially with difficult colors, they combine two develops: one for the model and the entire scene, and another (output-referred) for the garment. Then they combine both shots into one image.
For them, that means getting accurate color.
Well, for them, they should know that's wrong! How do you define accurate color? YOU MEASURE that color and get some numeric value. How do you determine whether the resulting colors are accurate? You compare the reference to the measurement and produce a deltaE report**. The formula of course plays a role, but the point is: once you define how accurate or inaccurate that dE value is (less than 1 dE? more than 2 dE? just specify the intended accuracy), NOW and only now do you have a basis for accurate color. Pleasing color cannot be measured!
All this is kind of basic color management, FWIW.  ;)

**Delta-E and color accuracy

In this 7-minute video I cover what Delta-E is and how we use it to evaluate color differences, and color accuracy: what it really means and how we measure it using ColorThink Pro and BabelColor CT&A. This is an edited subset of a video covering RGB working spaces from raw data (sRGB urban legend, Part 1).

Low Rez: https://www.youtube.com/watch?v=Jy0BD5aRV9s&feature=youtu.be
High Rez: http://digitaldog.net/files/Delta-E%20and%20Color%20Accuracy%20Video.mp4
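The comparison behind such a deltaE report can be sketched with the simple CIE76 formula, the Euclidean distance in Lab; the Lab values below are made up for illustration.

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two Lab values."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

reference = (52.0, 41.0, -10.0)   # measured at the scene (hypothetical)
reproduced = (51.2, 42.5, -9.0)   # measured from the reproduction (hypothetical)
print(f"dE76 = {delta_e_76(reference, reproduced):.2f}")  # dE76 = 1.97
```

Newer formulas such as CIEDE2000 weight the terms differently, but the principle, reference compared against measurement, is the same.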

Guillermo Luijk

Re: New color chart for high quality profiling in captures
« Reply #76 on: July 12, 2019, 01:33:06 pm »

Which paper and what is false?

Karl Lang's paper, and this quote: "The reality is that light levels in a natural scene can’t be reproduced using any current technology and certainly not in a print."
He'd have done better to say: "Light levels in a natural scene can't generally be reproduced using any current technology, and certainly not in a print." And even that sentence would probably have an expiry date, since active output devices continuously increase their DR while real-world scenes and human vision remain the same.

I somewhat reconciled with Karl Lang when reading: "There are some new (and very expensive) display technologies on the market that have a real dynamic range of 10,000:1 and can produce extremely bright whites. With one of these, we could send a properly exposed scene-referred image directly to the display. The image would look just like we were there, except for the clipping at white and black caused by the sensor." That is what I expected to read in such an article.

Regards

digitaldog

Re: New color chart for high quality profiling in captures
« Reply #77 on: July 12, 2019, 01:37:26 pm »

Karl Lang's paper, and this quote: "The reality is that light levels in a natural scene can’t be reproduced using any current technology and certainly not in a print."
He'd have done better to say: "Light levels in a natural scene can't generally be reproduced using any current technology, and certainly not in a print."
True, the idea of a natural scene is vague. It's certainly neither vague nor untrue in Figure 3.
Quote
I somewhat reconciled with Karl Lang when reading: "There are some new (and very expensive) display technologies on the market that have a real dynamic range of 10,000:1 and can produce extremely bright whites."
I don't know what he was referring to, given the date this was released.
What I do know is that Karl was the product manager and engineer for both the Radius PressView and the Sony Artisan, and I believe he knows a good deal about display technology, even if the reference is again vague.
The article was aimed at photographers; the key takeaway is the idea of rendering the image, and, as I think has been illustrated, that act has little if anything to do with accurate color or scene referred imagery.

hurodal

Re: New color chart for high quality profiling in captures
« Reply #78 on: July 12, 2019, 01:40:57 pm »

Photographers went down that road for quite a few years.
I think you expect a photographer to be a color scientist. Photographers usually don't want to dig very deep into the technical stuff, and color management certainly represents a nightmare for most of them.
So they like solutions that don't make their life even more complicated. As simple as that. :-)

And taking reliable spectral curves with even a modest (all-manual) setup is much simpler than getting target shots right - I posted that somewhere on the dcamprof thread here a few years ago. Then Iliah Borg and I worked on making an automated solution to do this (see here if interested - the project is still ongoing). The end result is that you can build a high quality tuned profile without reshooting your target under the required lighting conditions. Bottom line - there is no universal target for all cases that will allow you to correctly approximate sensor response adjustments and behavior in any given lighting (and that is what a profile does).

I didn't know about your project, and although it looks very interesting, I don't think a normal photographer would be interested in buying it (if it could be bought), learning it, and using it.
For the same reason, fewer than 0.001% (?) of photographers use advanced tools that can offer more sophisticated results or experimentation: they are complicated to use, there are few people out there using them, and there's little documentation. Tools like RawDigger, RawTherapee and many more.
By the way: do you have information about how to use it and what the results were? Any colorimetric analysis?


And last, given your reply above, I'd like to quote your own earlier reply to Andrew right back at you:
Please don't misunderstand me; I didn't mean to be rude at all. What I was trying to say is that measuring every spectral curve of the camera sounds rather complicated even for most pro photographers. At least in my country.

Doug Gray

Re: New color chart for high quality profiling in captures
« Reply #79 on: July 12, 2019, 01:41:45 pm »

Thanks Andrew, I read the paper before asking Doug. However this doesn't answer the question of why the render should get any distortion vs the original image, assuming the output device can handle both the DR and colour gamut of the original scene and that our whole workflow was aimed at accurate reproduction. In other words, why can't I look at a printed copy of a colour chart I captured with my profiled camera/scanner and then printed with my properly profiled printer? Why should I look at them both and see any contrast or colour distortion?

EDIT: sorry, my fault, that's understood and agreed. I re-read what Doug said; I thought he said: "By definition output scene referred profiles distort an image ...".

Regards

Hi,

The following is highly simplified and ignores things like the color gamut shifts common with standard imaging. The distortion can be shown using just B&W, in-gamut images. Peruse the articles and white papers at www.color.org for an in-depth look at how gamuts are typically mapped.

Don't take my use of "distort" as pejorative. It was intended to convey the idea that the image is changed in a way that isn't consistent between different makers, models, and settings, and hence can't easily be reversed. Here's a specific example of scene-referred vs. output-referred camera imaging.

Take the following setup: a single light bulb 10' away from a small target that has two patches on black, one a neutral gray patch and the other white.

A picture is taken in Adobe RGB and the image is inspected in Photoshop. Assume we are perfectly white balanced; the gray patch reads exactly 100,100,100 while the white patch reads 200,200,200.

Now we move the light from 10' to about 8' away, take a new picture using the same camera settings, and inspect the gray patch. It now reads RGB 120,120,120. What should the white patch read?

If we used a scene referred process, the white patch would read 240,240,240. But virtually all other, standard photography processes will produce a value somewhere between 215 and 235. Output referred imaging compresses the light tones, and each manufacturer tends to do so with a proprietary formula.

This is the reason repro people use scene referred processes: it makes it easier to duplicate originals, at least to the degree that the originals have a gamut within the capabilities of the printer, and the metameric shifts from the camera's CFA and the ink/illuminant aren't too large.
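Doug's two-patch arithmetic can be sketched directly. The scene-referred prediction just preserves the encoded ratio between the patches; the highlight roll-off below is an illustrative assumption, not any manufacturer's proprietary curve.

```python
import math

def scene_referred_white(white_before, gray_before, gray_after):
    """Scene referred: encoded values keep their ratio, so the white patch
    scales by the same factor as the gray patch."""
    return white_before * gray_after / gray_before

def compress_highlights(value, knee=180.0, max_out=255.0):
    """Toy output-referred roll-off (an assumption): linear up to the knee,
    then an exponential approach to the maximum output value."""
    if value <= knee:
        return value
    return knee + (max_out - knee) * (1 - math.exp(-(value - knee) / (max_out - knee)))

# Gray went from 100 to 120 when the light moved closer.
w = scene_referred_white(200, 100, 120)
print(w)                                 # 240.0, the scene-referred answer
print(round(compress_highlights(w), 1))  # 221.3, inside Doug's 215-235 band
```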
