
Author Topic: DSLR testing sites like DXOmark and Imaging Resource use HMI and LEDs for color  (Read 55959 times)

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2197

Thanks, good to know. I wonder however, why Adobe provided that option. Is there any reason to use it? It's just a readout that's quite foreign to me.
They've had that forever it seems. I've not found it particularly useful.

What would be nice is to enter fractional Lab values in the color dialog. It seems one is still limited to integer values.
Logged

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2197

I also read Jim's blog posts

https://blog.kasson.com/the-last-word/the-color-reproduction-problem/.  As I posted earlier, "In blog post 14 you noted how tedious it was to transcribe Lab numbers from Photoshop," and then the post referenced Matlab.  I did a quick check of Matlab and its pricing is "write for a quote," which I assumed didn't mean that it was cheap.  So I kept on with my script and ArgyllCMS programs and asked for suggestions on how to improve my methodology.

My random walk continued.  Last night I got to GNU Octave, which is free and claims to be "drop-in compatible with many Matlab scripts."  So possibly this could be used as a free, untedious way of comparing CC chart Lab values (and for doing many other things)?

But, not knowing anything about Matlab, I could use a head start in doing this.

MATLAB is expensive. I've used the commercial version since the late 80's for unrelated stuff and the image toolbox add-in since 2003. However, they came out with a "home" version that's about 10% of the price, though MATLAB and the Image Toolbox together still come to about $200.

Octave is quite good and continues to improve but, even with the image toolbox, has little for dealing with printer ICC profiles.

https://www.mathworks.com/products/matlab-home.html?s_tid=htb_learn_gtwy_cta4

ICC profile transformations:
https://www.mathworks.com/help/images/ref/makecform.html


MATLAB makes it fairly easy to write your own functions. For instance, this one finds the Lab value of the skin color (70, 20, 20) as printed with Perceptual intent.

>> ProfileConvert([70 20 20], '9800 Costco Randomized I1P 2871pch.icm', 'f', 0, 'r', 3)

ans =

   67.0665   18.1250   17.7305

Which is very close to what is actually printed, if read with a spectro. Here ProfileConvert takes a Lab value, converts it to device space using Perceptual ("-f 0"), then converts back to Lab using Abs. Col. ("-r 3"), which retains the actual printed colors.


« Last Edit: June 17, 2018, 01:51:26 pm by Doug Gray »
Logged

Alexey.Danilchenko

  • Sr. Member
  • ****
  • Offline
  • Posts: 257
    • Spectron

I started out using CC&T to compare Lab values of different CC charts (ones I shot under different light against the reference CC Lab values) and hand transcribing Lab values from the Photoshop info panes into CC&T got tedious real quick.  I wanted something that would compare the Lab values from each square of the CC charts as one operation.

BabelColor PatchTool?
Logged

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2197

BabelColor PatchTool?
Yep, PatchTool works great for comparing any two color lists in CGATS. Nice histogram and stats too.
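For anyone who'd rather script it, the per-patch comparison reduces to a ΔE computation. A minimal Python sketch (the patch values here are made-up placeholders, not real CC chart data):

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two Lab triplets."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical reference vs. measured patch lists (placeholder values).
reference = [(70.0, 20.0, 20.0), (50.0, 0.0, 0.0)]
measured  = [(67.07, 18.13, 17.73), (49.5, 0.4, -0.2)]

# One pass over all patches: worst-case and average difference.
diffs = [delta_e76(r, m) for r, m in zip(reference, measured)]
print(max(diffs), sum(diffs) / len(diffs))
```

PatchTool and ColorThink report fancier statistics (ΔE2000, histograms), but plain CIE76 is enough to flag outlier patches in one operation.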
Logged

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/

My assumption was based on several premises
  • That smaller is smaller and is computationally cheaper.  CPUs and storage space are neither free nor infinite.
  • If you do extensive editing in a smaller space there is less danger of posterization in gradients (skies) than if you do extensive editing in a larger space and then export to a smaller space.
  • If you do extensive editing in a larger space (that contains colors that any real world monitor can't display) that there is increased danger of hue and saturation shifts when you export to a smaller space.
 
If you know beforehand how your real-world image fits in various color spaces, you don't have to spend a lot of time with Photoshop's crude gamut-clipping soft-proofing tools.

I am working on a tool that uses 100% free components and is easy to use.  Where is the disadvantage?

I already told you I watched your sRGB Myths video.  I just watched it again.  I obliquely referenced it in my post to Doug when I highlighted CC chart cyan poking out of sRGB.  Your video covered a CC chart and an unedited image of a white dog and snow.  Neither illustrates the concerns I raised in my list, above.

I also said that I did a random walk examining different spectral analysis tools.  Babelcolor CC&T ($125), Robin Myers Imaging SpectraShop 5 ($99), and Chromix ColorThink Pro ($399).

I started out using CC&T to compare Lab values of different CC charts (ones I shot under different light against the reference CC Lab values) and hand transcribing Lab values from the Photoshop info panes into CC&T got tedious real quick.  I wanted something that would compare the Lab values from each square of the CC charts as one operation.  I examined all three programs and I don't think that any of them could do it.  So I returned to my existing methods of running images (and color spaces) through the ArgyllCMS utilities to produce 3D plots.  This is quick and easy--all I had to do was add the filenames of the various image files and ICC profiles to a configuration file and run my script.  And the interactive HTMLish 3D plots are easily sharable.  (ColorThink Pro's...?)

I also read Jim's blog posts

https://blog.kasson.com/the-last-word/the-color-reproduction-problem/.  As I posted earlier, "In blog post 14 you noted how tedious it was to transcribe Lab numbers from Photoshop," and then the post referenced Matlab.  I did a quick check of Matlab and its pricing is "write for a quote," which I assumed didn't mean that it was cheap.  So I kept on with my script and ArgyllCMS programs and asked for suggestions on how to improve my methodology.

My random walk continued.  Last night I got to GNU Octave, which is free and claims to be "drop-in compatible with many Matlab scripts."  So possibly this could be used as a free, untedious way of comparing CC chart Lab values (and for doing many other things)?

But, not knowing anything about Matlab, I could use a head start in doing this.
Your assumptions are colorimetrically incorrect and, with high-bit data (which is what all raw capture yields), moot! You seem more concerned with computational cost than with image data; perhaps a faster, more modern computer is necessary. There is zero harm in using a wide-gamut processing color space, which is exactly what Adobe uses for raw-processed data, and there is lots of potential data loss when encoding into a smaller gamut after that wide-gamut processing, due to color clipping. But it's your data.
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

Iliah

  • Sr. Member
  • ****
  • Offline
  • Posts: 770

> That smaller is smaller and is computationally cheaper.

Not in the least: you still need the same number of bits for each pixel, independent of the colour space used. 8-bit images are not best practice anyway for image-preservation purposes.

Moreover, smaller colour spaces may turn out to be computationally more demanding because of the need to deal with edits that result in out-of-gamut colours.

> CPUs and storage space are neither free nor infinite.

CPU use has no bearing here, not least because modern algorithms operate on images on a GPU, mostly in 16-bit floating point.

Storage space depends on the number of images to be stored, but storing images in 8 bits is not an option for many of us.

> If you do extensive editing in a smaller space there is less danger of posterization in gradients (skies) than if you do extensive editing in a larger space and then export to a smaller space.

I would love to see a demonstration of this. Normally, when colorimetric intent is used to map from a larger space to a smaller space, that's where posterisation tends to kick in. Raw processing colour spaces are rather wide.
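A crude back-of-the-envelope model of the encoding side (a sketch, not colorimetry; it ignores gamma encoding and the 3D shape of real gamuts): the number of code values left for an sRGB-sized range shrinks as the encoding space widens, but at 16 bits the remaining step count is still enormous.

```python
def usable_codes(gamut_ratio, bits=8):
    """Crude linear model: distinct code values left for an sRGB-sized
    range when the encoding space is gamut_ratio times wider."""
    return int((2 ** bits - 1) / gamut_ratio) + 1

print(usable_codes(1.0, 8))    # 256: all codes available in sRGB itself
print(usable_codes(2.0, 8))    # 128: half the codes in a space twice as wide
print(usable_codes(2.0, 16))   # 32768: still a huge number of steps at 16 bits
```

So the posterisation worry is real only at 8 bits; in a high-bit workflow the quantization steps stay far below visibility either way.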

> If you do extensive editing in a larger space (that contains colors that any real world monitor can't display) that there is increased danger of hue and saturation shifts when you export to a smaller space.

But that means that you are suggesting working in the destination colour space. That was tried for 15+ years and proved to be unsatisfactory; colour management was adopted as a better solution. Dangers are everywhere; sometimes it's good to put a number and an example on a danger, otherwise it is rhetoric à la DPR.
« Last Edit: June 17, 2018, 07:07:24 pm by Iliah »
Logged

WayneLarmon

  • Full Member
  • ***
  • Offline
  • Posts: 162

Your assumptions are colorimetrically incorrect and, with high-bit data (which is what all raw capture yields), moot! You seem more concerned with computational cost than with image data; perhaps a faster, more modern computer is necessary.

My concern is finding better methods for evaluating light sources.  My comment about small vs. large color spaces was an offhand remark and is diverting attention away from this, so let's drop it.  It isn't a hill worth fighting over.  In this thread.
Logged

WayneLarmon

  • Full Member
  • ***
  • Offline
  • Posts: 162


> If you do extensive editing in a larger space (that contains colors that any real world monitor can't display) that there is increased danger of hue and saturation shifts when you export to a smaller space.

But that means that you are suggesting working in the destination colour space. That was tried for 15+ years and proved to be unsatisfactory; colour management was adopted as a better solution. Dangers are everywhere; sometimes it's good to put a number and an example on a danger, otherwise it is rhetoric à la DPR.

I was referring to editing in Prophoto and exporting to sRGB.  Sorry I wasn't more specific.

For the rest of it, my original comment was offhand and I'll yield to you.  My intent in this thread is trying to find better ways of evaluating light sources.  Arguing about editing in large vs. small color spaces is a divergence I didn't intend.
Logged

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/

My concern is finding better methods for evaluating light sources.  My comment about small vs. large color spaces was an offhand remark and is diverting attention away from this, so let's drop it.  It isn't a hill worth fighting over.  In this thread.
Off hand and incorrect. Now you can avoid repeating it!  ;D
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

WayneLarmon

  • Full Member
  • ***
  • Offline
  • Posts: 162

Off hand and incorrect. Now you can avoid repeating it!  ;D

For my use case images mostly end up as sRGB, so running my NEC PA241W in sRGB emulation mode, editing in sRGB in 16 bit and only exporting 8 bit sRGB JPEGs as the final step minimizes hue shift and posterization artifacts.  Use cases need to be established before making claims.
Logged

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/

For my use case images mostly end up as sRGB, so running my NEC PA241W in sRGB emulation mode, editing in sRGB in 16 bit and only exporting 8 bit sRGB JPEGs as the final step minimizes hue shift and posterization artifacts.  Use cases need to be established before making claims.
That's all good and fine, but it doesn't change the incorrect impression about the size of color gamuts that you have provided and believe (believed?). Several posters have told you why.
What color shifts and posterization?
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

Iliah

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 770

For my use case images mostly end up as sRGB, so running my NEC PA241W in sRGB emulation mode, editing in sRGB in 16 bit and only exporting 8 bit sRGB JPEGs as the final step minimizes hue shift and posterization artifacts.  Use cases need to be established before making claims.

Before taking any use case into account, one needs to check how well the workflow supports the use case, and how valid the use case itself is.

Before asking a generic question about evaluating light-source quality, it may be worth asking a couple of different questions, like what the use case is, because when it comes to photography, the properties of the sensitive material matter. The same light source may be good for one camera or film and not so good for another. CIPA documents, however, have some recommendations.
Logged

WayneLarmon

  • Full Member
  • ***
  • Offline Offline
  • Posts: 162

That's all good and fine, but it doesn't change the incorrect impression about the size of color gamuts that you have provided and believe (believed?).

I'm confused.  I've been referring to sRGB, Adobe RGB (1998), and ProPhoto: these specific color spaces.  And showing samples of images whose gamuts exceeded sRGB, presented like this because it is easier to visualize as an interactive 3D plot than with Photoshop's soft-proof out-of-gamut tool.

Assuming that the tool is quick and easy to use, like mine is.

Quote
What color shifts and posterization?

The kind that can happen if an image is edited in a large space, such as ProPhoto, and is exported as sRGB without careful examination.  That is, if the gamut of the image exceeds sRGB, which can happen even to an image that didn't start out exceeding sRGB but has had, say, its saturation pumped up while in ProPhoto (which has colors that aren't visible on any known monitor).
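That clipping can be shown numerically. The sketch below uses the standard ROMM-RGB-to-XYZ (D50), Bradford D50-to-D65, and XYZ-to-linear-sRGB matrices (values taken from common references; this is an illustration, not a colour-managed conversion) to push a fully saturated ProPhoto green into sRGB:

```python
# Matrices from standard references (Lindbloom); illustrative only.
ROMM_TO_XYZ_D50 = [
    [0.7976749, 0.1351917, 0.0313534],
    [0.2880402, 0.7118741, 0.0000857],
    [0.0000000, 0.0000000, 0.8252100],
]
BRADFORD_D50_TO_D65 = [
    [ 0.9555766, -0.0230393,  0.0631636],
    [-0.0282895,  1.0099416,  0.0210077],
    [ 0.0122982, -0.0204830,  1.3299098],
]
XYZ_D65_TO_SRGB = [
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
]

def mat_vec(m, v):
    """3x3 matrix times 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

prophoto_green = [0.0, 1.0, 0.0]  # linear ProPhoto values, maximum saturation
xyz = mat_vec(BRADFORD_D50_TO_D65, mat_vec(ROMM_TO_XYZ_D50, prophoto_green))
srgb = mat_vec(XYZ_D65_TO_SRGB, xyz)
print(srgb)  # red and blue channels go negative: far outside sRGB
```

A colorimetric conversion would clamp those channels to [0, 1], shifting both hue and saturation, so desaturating the offenders while still in ProPhoto, as described above, is one way to keep control of where the loss happens.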

Another use case: I've done a lot of camera scanning of negatives and I've found that it is best to use ProPhoto throughout when inverting and color processing the negative.  Often when I'm done editing, I have to carefully desaturate a few colors while in ProPhoto before converting to sRGB.  Possibly images that don't originate as film negatives aren't as problematic when converting from ProPhoto to sRGB.

My modern "originated as digital" images usually are in sRGB through the whole process, from ACR out to saving as an 8-bit JPEG (in addition to a 16-bit TIFF).

The controversy might be "people who place a lot of emphasis on printing" talking past people that don't.  And vice versa.
Logged

digitaldog

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/

I'm confused.
Let's examine the text that's not Kosher:

Under the assumption that you shouldn't use a color space that is larger than is needed.
Let's go there first (or not), then move on to the 'color shifts' that have been requested to be shown. Again, that assumption above isn't correct. I've shown the use of a small color gamut working space and a very large one with an image that easily fits within the smaller working space: it makes no difference whether you use the smaller or the larger gamut color space. That's the point, and the correction. Therefore you can use a color space that's larger than the image, and that larger gamut could very well be needed on other images. Doing so is just fine. Not doing so can, and often does (depending on that smaller color space; sRGB as an example, but Adobe RGB (1998) too), result in color data loss: clipping of colors. There's no reason to do this. You gain nothing and lose something.

The assumption that you shouldn't use a color space that is larger than necessary (and how you'd do so, or why) doesn't wash. Just use the largest container, working space gamut, you can in high bit. Forget worrying about what size the image gamut or scene gamut may be, or trying to fit it into something just large enough to contain it. It's work that few can do, and it is unnecessary anyway. You're shooting raw? Your converter uses a very large processing color space gamut for a reason, and encoding that into a smaller working space gamut serves zero purpose and can only result in clipping.

Some people here place a lot of emphasis on not tossing data you can capture and output. Today or in the future.
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word



My random walk continued.  Last night I got to GNU Octave, which is free and claims to be "drop-in compatible with many Matlab scripts."  So possibly this could be used as a free, untedious way of comparing CC chart Lab values (and for doing many other things)?

But, not knowing anything about Matlab, I could use a head start in doing this.

If you don't want to pay for the home license of Matlab, Octave is a decent way to go (as is Python). If you do decide to use Matlab, OptProp will make your life easier:

https://www.mathworks.com/matlabcentral/fileexchange/13788-optprop-a-color-properties-toolbox

Unfortunately, it hasn't been updated in a while, and its clever argument-passing methods are likely to soon be obsolete.

It may run in Octave. I never tried.

Jim

WayneLarmon

  • Full Member
  • ***
  • Offline
  • Posts: 162


Some people here place a lot of emphasis on not tossing data you can capture and output. Today or in the future.

Which is why I shoot raw and keep a death grip on my raw files.  If I ever need to use a larger color space for an image (e.g., if I want to have a large print made), then I'd set my PA241W to "native", go back to the raw file, and reprocess it from scratch.  For that particular printer.

Starting in a large color space and editing colors for an unknown printer's gamut seems to me an awful lot like editing for generic CMYK.  Which I believe you and other members of the Pixel Mafia have cautioned against.
« Last Edit: June 17, 2018, 09:17:48 pm by WayneLarmon »
Logged

WayneLarmon

  • Full Member
  • ***
  • Offline Offline
  • Posts: 162

Before taking any use case into account, one needs to check how well the workflow supports the use case, and how valid the use case itself is.

Before asking a generic question about evaluating light-source quality, it may be worth asking a couple of different questions, like what the use case is, because when it comes to photography, the properties of the sensitive material matter. The same light source may be good for one camera or film and not so good for another. CIPA documents, however, have some recommendations.

From here?  Membership seems to be required to access the papers.

My premise the entire time I've participated in this thread is that the existing metrics for evaluating light (CRI, TLCI, and the R numbers) leave a lot to be desired.  And that, even though cameras have the Luther-Ives issue, in general, light that renders color well for human eyes also renders color better for cameras.

After learning (on this thread) that there are a lot more nuances of testing camera response than I knew before, I'm especially suspicious of the single scalar TLCI "one size fits all" metric.  At least CRI is aligned with the time tested "CIE standard observer."

New topic: while I've been researching during the course of this thread, I rediscovered DCamProf and Lumariver Profile Designer (I've been away from LuLa for several years.)  Should I spend more time with one or both of those before waiting for better color rendering metrics?  In addition to studying ArgyllCMS (and a lot of other things) closer?   
Logged

WayneLarmon

  • Full Member
  • ***
  • Offline Offline
  • Posts: 162

If you don't want to pay for the home license of Matlab, Octave is a decent way to go (as is Python). If you do decide to use Matlab, OptProp will make your life easier:

https://www.mathworks.com/matlabcentral/fileexchange/13788-optprop-a-color-properties-toolbox

Unfortunately, it hasn't been updated in a while, and its clever argument-passing methods are likely to soon be obsolete.

It may run in Octave. I never tried.

Jim

I just realized that I have already done some baby Matlab programming.  I set up an account on ThingSpeak (a MathWorks service with built-in Matlab support) about a year ago to monitor several ESP8266 temp/humidity sensors I put together, and adjusted the sample Matlab code in the web interface editor.

I don't mind paying for Matlab Home for my own purposes, but I'd like to make tools that can be freely shared with others.  Probably the only feasible way to do this is to start with full-blooded Matlab (which has support for ICC profiles) and then figure out how to achieve the same thing in Octave (ArgyllCMS iccdump, which converts an ICC profile to ASCII text, might be a good starting point).
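If the Octave route ends up meaning reading profiles directly, the fixed 128-byte ICC header is simple enough to parse by hand. A Python sketch following the ICC.1 header byte offsets (a starting point only, no substitute for iccdump):

```python
import struct

def icc_header_info(data: bytes) -> dict:
    """Parse a few fields from a 128-byte ICC profile header (ICC.1 layout)."""
    if len(data) < 128 or data[36:40] != b"acsp":
        raise ValueError("not an ICC profile")
    return {
        "size": struct.unpack(">I", data[0:4])[0],  # profile size, big-endian
        "class": data[12:16].decode("ascii"),       # e.g. 'prtr' for printers
        "space": data[16:20].decode("ascii"),       # e.g. 'RGB ', 'CMYK'
        "pcs": data[20:24].decode("ascii"),         # 'Lab ' or 'XYZ '
    }
```

The same byte offsets work in Octave with `fread`, so the logic ports over directly.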

But I'm also losing my momentum.  I'm trying to talk about building tools to share but that portion of my posts always gets clipped away in responses.  After a certain point...

Thanks for your help.
Logged

Doug Gray

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 2197

I just realized that I have already done some baby Matlab programming.  I set up an account on ThingSpeak (a MathWorks service with built-in Matlab support) about a year ago to monitor several ESP8266 temp/humidity sensors I put together, and adjusted the sample Matlab code in the web interface editor.

I don't mind paying for Matlab Home for my own purposes, but I'd like to make tools that can be freely shared with others.  Probably the only feasible way to do this is to start with full-blooded Matlab (which has support for ICC profiles) and then figure out how to achieve the same thing in Octave (ArgyllCMS iccdump, which converts an ICC profile to ASCII text, might be a good starting point).

But I'm also losing my momentum.  I'm trying to talk about building tools to share but that portion of my posts always gets clipped away in responses.  After a certain point...

Thanks for your help.
Home Matlab is like the full version except it's pretty much limited to non-commercial, non-institutional uses. The image toolbox is only $45 more and it has full ICC support. It also has a nice editor/debugger and is well integrated with Git source control, so it's easy to keep track of your stuff. Most functions you write can be shared, but if you charge money for them or are paid for your work then you need to buy the commercial version. I have both, but my commercial version is a few years old.
Logged

Iliah

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 770

> Membership seems to be required to access the papers.
http://www.cipa.jp/std/documents/e/DC-004_EN.pdf page 19.

> the existing metrics of evaluating light

... are not for photography. A sensor measures light differently from how we perceive it; sensor/film metameric error is not accounted for in those metrics.

> light that renders color well for human eyes also renders light better for cameras.

Hm... Some cameras, like the Nikon D3..D5 series, are optimized for the artificial light common at sports venues. It's a complex process, that optimization, causing a rather significant deviation from the Luther-Ives condition in order to present not the colours that we see under that artificial light but rather the colours we would see if the light were closer to daylight.

> while I've been researching during the course of this thread

I think systematic study starts with textbooks on colour science.
Logged