Hi,
I'm not sure if I understood the article on the home page correctly, but "Precise Digital Exposure" using the rendered RGB values in Capture One? Seriously?
I agree that one should not base exposure on rendered values, but rather on raw values as determined by RawDigger or a similar tool.
Or (to beat a dead horse):
I'm not sure if I understood correctly the article in the home page, but "Precise Digital Exposure" using the rendered RGB values in Capture one? seriously?
In terms of the rendering, can we equate that with ‘development’? If so, can we separate exposure from development?
That said, I glanced at the piece and it’s very, very possible I didn’t understand correctly either.
Apologies if what I explain seems confusing. :P After learning this method I have found it simple and useful as a tool for precise spot-metering exposure, matched to the exact capture latitude of my digital back. In principle it is the same as spot metering was for different films (which likewise were different media/technologies). Thus it is a specialised exposure tool for when you need precise spot metering, for and based on the digital technology.
---
The attached is a characteristic curve for my Leaf AFi-II 12, made by shooting test shots of an 18% grey card in constant light, one stop apart, the same way Ansel Adams explained making test images in his book The Negative. I have also included curves for extreme adjusted highlight and shadow compensations, together with extreme negative shadow exposure.
What do the curves bring? They tell me how the rendering of tonal values changes when I develop from defaults to extreme adjustments in post. This enables me, at the time of spot metering a scene, to visualise the exposure latitude precisely against the scene, and to choose a purpose-made transfer zone into the shoulder.
Linear response? Please note there is actually a slight shoulder at the highlight end at defaults for my digital back (I assume due to settings/processing in Capture One).
In terms of Ansel Adams, the raw file is your negative and the rendered file represents the manipulations added in printing.
Development of neg? I asked earlier but haven’t heard back about that.
I'd agree, using a tool to examine the raw itself removes any other process and tells us more truthful information about just the exposure. I wish we had such a tool on the damn camera. But in terms of the article and the rendered values that were not defined in the highlights, isn't that fair game, since rendering is (I believe) akin to the development of the neg/raw?
You could look at it in that way, but how do you know that your rendering is optimal?
I'd consider that somewhat (or highly) subjective. I'd prefer not to blow out highlights but if that's the intent of the image creator, so be it. Blocking up shadows is fine.
With a negative (film) once you develop it, there is not much you can do.
To a degree yes, but you now have to make a print. So perhaps it's fair to say rendering is part development (normalizing ETTR) and part subjective (making 'the print').
since rendering is (I believe) akin to the development of the neg/raw?
I don't think this is correct or a useful analogy.
I don't think this is correct or a useful analogy.
Isn't that the same in the raw converter? If I use ETTR, the initial (default) rendering/development looks too light. Using Michael's term, I 'normalize' it using various sliders.
With film 'development' was a process that could be manipulated to change the characteristics of the latent image, but on a 'once only' basis after exposure(capture).
In a digital system, there's little other than choosing an ISO setting that can change the characteristics of the medium once you've chosen the camera(sensor) to use.
Agreed.
People refer to converting raw files as 'development' or rendering, but really the process is more akin to printing in analogue terms
I suppose one could look at it that way.
I just fail to understand why people still bother with such out dated concepts as the zone system.
We are in violent agreement!
I just fail to understand why people still bother with such outdated concepts as the zone system. It was designed in the days when taking multiple different exposures was impossible/time consuming/expensive, when there were post-exposure possibilities that needed consideration, and when there was no immediate feedback of results. That just isn't the case now; you can cover all exposure variables in a swift bracketed burst that leaves far more options at negligible/no cost.
I've found that ETTR techniques usually result in subtle and undesirable color shifts in the finished result. I'd rather have better colors and tonalities and a bit more noise.
ETTR, or ETTR plus your raw converter and whatever "camera profile" you are using... granted you can ETTR things to where the raw data will not be linear, but I don't think you are talking about such highlights
And two years before that, in 2007, I developed a piece of software to calculate and plot the Zone System of a scene from a linear image of it (it can be applied to RAW data if used properly):
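In the spirit of that software, the mapping from linear data to zones is essentially a one-liner. A minimal sketch (not the actual tool mentioned above), assuming middle grey sits on Zone V and each zone is exactly one stop:

```python
import numpy as np

def zones(linear, middle_grey=0.18):
    """Map linear luminance values to Zone System zones.

    Assumes middle grey (18% reflectance) falls on Zone V and each
    zone is exactly one stop (a doubling of luminance)."""
    linear = np.asarray(linear, dtype=float)
    z = 5.0 + np.log2(linear / middle_grey)   # stops above/below Zone V
    return np.clip(np.round(z), 0, 10).astype(int)

# Middle grey lands on Zone V; one stop up is Zone VI, one down Zone IV.
print(zones([0.18, 0.36, 0.09]))  # [5 6 4]
```

Applied to demosaiced RAW data (linear, before any tone curve), this gives the zone placement of every pixel in the scene.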
I assume due to settings/processing in Capture One.
Try the linear scientific curve from the C1 CH edition.
"CSC world" - what's that?
ETA: Also, can anyone recommend any good online resource regarding camera profiles with Lightroom?
Cheers for that! I'm just checking his site out now.
Actually you need to read his postings in various forums; his site is of lesser value for your purpose of reading about DCP-related technicalities - if you are talking about Mr. Chan.
Isn't that the same in the raw converter?
No. RAW converters work non-destructively; the original data never changes.
Modern cameras display a pre-visualization (wasn't that concept so important to Ansel Adams?) of the scene on the EVF that makes metering or bracketing a waste of time and resources.
Not completely. You know well that the image shown on the EVF won't fully represent the possibilities of the image saved to file. A degree of understanding of how a raw converter will handle bringing out shadow detail or highlight recovery remains an important skill, akin to 'pre-visualisation'.
I've found that ETTR techniques usually result in subtle and undesirable color shifts in the finished result. I'd rather have better colors and tonalities and a bit more noise.
Every now and then I revisit some of my old Canon 5D raws and I'm nowhere near as happy with the LR 'default' camera profile settings for the 5D as I am for my Nikon.
Yeah, his site doesn't actually have anything about LR camera profiles.
I'm more after a condensed resource on the topic. I don't really have time to chase down individual posts by people all over the internet. I'm hoping someone has put together a resource on this.
you have a choice - read the original or read the hearsay...
Or read a good summary...
Does he still state that ICC profiles are not scene-referred?
As soon as any big adjustments are done in post this seems to happen. The file may have a better SNR but the raw converter algorithms are not perfect. It is a bit like making big EQ changes in audio.
Does he still state that ICC profiles are not scene-referred?
You should take this up with the ICC (you know about them, right?). From their white paper #17, Using ICC profiles with digital camera images:
Also, ICC color management workflows generally assume that the colorimetry expressed in the PCS is of a [color-rendered] picture, and not of a scene. There is currently no mechanism to indicate that the colorimetry represented in the PCS by a camera profile is relative scene colorimetry. Even if there were, use of the PCS to contain relative scene colorimetry is not fully compatible with current ICC workflows, which assume color rendering has been performed. This distinction is especially important with respect to highlight reproduction. Many scenes contain highlights that are brighter than the tone in the scene that is reproduced as white in a picture. An important part of the color rendering process is selection of the tone in the scene that is considered "edge of white", and graceful compression of brighter tones to fit on the reproduction medium (between the "edge of white" tone and the medium white).
An ICC working group* has been formed to attempt to address issues with the use of ICC profiles in digital photography applications, but at present progress is difficult. Even if improved characterization targets (such as narrow-band emissive targets) and profiling tools are introduced, colorimetric intents will still be illumination specific, and perceptual intents will optimally be scene-specific. Some argue that scene-to-picture color rendering should be restricted to in-camera processing and camera raw processing applications, and correction of color rendering deficiencies limited to image editing applications.
Correcting exposure needs no algorithm.
If 1000 photons produce RAW level=500, then 2000 photons (+1EV in exposure) will produce RAW level=1000
So correcting RAW exposure is simply scaling all its RAW numbers by a constant factor, e.g. 1000/2 recovers 500. When color shifts take place it is surely because some data got clipped in the RAW file or a wrong post-processing technique was applied.
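That scaling argument can be checked directly. A small sketch, assuming a hypothetical 14-bit sensor clipping at 16383: doubling the raw numbers models +1 EV, halving undoes it exactly, and the only place the round trip fails is where the data hit the clip point:

```python
import numpy as np

CLIP = 16383  # assumed 14-bit sensor clipping point

def correct_exposure(raw, stops):
    """Scale linear raw values by 2**stops; flag anything that clipped."""
    raw = np.asarray(raw, dtype=float)
    scaled = raw * 2.0 ** stops
    clipped = scaled > CLIP          # data lost: scaling back can't recover it
    return np.minimum(scaled, CLIP), clipped

levels, lost = correct_exposure([500, 1000, 9000], +1)  # +1 EV doubles every value
print(levels)   # the 9000 sample wanted to be 18000, so it clipped at 16383
print(lost)     # [False False  True]
```

Only the clipped sample loses information; for everything else the operation is a perfectly reversible constant scale.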
I've found that ETTR techniques usually result in subtle and undesirable color shifts in the finished result. I'd rather have better colors and tonalities and a bit more noise.
Of course, the flip side of the debate is whether ETTR applies at all, with modern sensors, for most photographers and shooting situations.
I think what MarkL was referring to (and I've wondered the same thing in the past) is pulling back exposure in LR, which is dependent on what demosaicing algorithm is used and what camera colour profile is used. And then, as you say, you can get their attempt at rebuilt highlights if one or more channels has clipped. Essentially, it seems as if the "exposure" slider in LR isn't a simple linear function. Or perhaps it's linear over most of the range, but non-linear near the highlights.
The way I see it, ETTR will always apply.
Yes, I agree, of course it will. ETTR is just optimal exposure. The exposure either produces optimal data or it doesn't, and there are degrees to which sub-optimal data affects our work. Now what the comment might imply is that less-than-optimal exposure (short of ETTR) will produce results no one can see, and I suppose that is possible. This is much like editing in high bit depth (16-bit): we know rounding errors could possibly result in data loss that is visible at some point on some output devices. It might not. But why take the chance?
...perhaps it’s fair to say rendering is part development (normalizing ETTR) and part subjective (making ‘the print’).
I posted on my blog what I think are 4 phases of digital image processing and its parallel to black and white film. I think it fits this discussion well.
But it's wrong. See reply #32: http://forum.luminous-landscape.com/index.php?topic=99565.msg815359#msg815359
I pretty much gave up on bothering with ETTR years ago due to the crappy renderings I was getting out of LR for images that were pushed right to the ETTR limit. I got much better renderings, detail- and gradient-wise, with dcraw, but it was too much effort getting good colour due to the need to go through two different bits of software just for a half-decent initial rendering. Then to Photoshop if necessary after that. Since pulling back just a tad from true ETTR, I've had much better results. But if there is a foolproof way of normalising true ETTR images, then I'm all for learning it.
But it's wrong... Your idea of stage 1 'import', where you apply "Apply Camera Defaults", is NOT like film development because at any stage in the future those defaults can be modified and changed.
Rhossydd
in LR if you pull the exposure around you can see non-linear changes to the channels in the histogram.
That's expected because the scaling is performed on the raw channels, not the rendered RGB channels.
Adobe's code does not do exposure correction on raw channels; the data are already after demosaic and then after color transform...
BTW, I'm the guy that wrote the article on "The Optimum Digital Exposure" on LuLa. The one that Anders says "presents essentially nothing new". Have you read my LuLa article? A lot of the concepts are from my book which, if you're interested, I can send to you.
How did you arrive at your +2½ to +3½ stops for highlights? Trial and error? Experience?
“I was able to establish… key values to spot meter, where… the location of extreme highlight and shadow values with RGB values maintained or with recovery”
It doesn't need to be this complicated! Simply expose a gray card to +5 stops in 1/3 stop increments. This will produce 16 exposures. View these 16 exposures in your digital Raw Processing software and see which exposure reads 99% brightness - the Optimum White Point [OWP] for your system (meter/camera/software) combination.
I've found that ETTR techniques usually result in subtle and undesirable color shifts in the finished result. I'd rather have better colors and tonalities and a bit more noise.
Yes, I agree, of course it will. ETTR is just optimal exposure. The exposure is either such it produces optimal data or it isn’t and there are degrees in which sub optimal data affects our work. Now what the comment might imply is that less than optimal exposure (ETTR) will produce results no one can see and that I suppose is possible. This is much like the use of editing in high bit (16-bit) because we know rounding errors could, possibly result in data loss that is visible at some point on some output devices. It might not. But why take the chance?
Yes that's right. It might not need them but certainly in LR if you pull the exposure around you can see non-linear changes to the channels in the histogram.
The behavior of the exposure slider in ACR/LR depends on which process version is in use. With the current process, PV2012, the exposure slider as well as all the other sliders in the basic panel are image adaptive. See the post (https://forums.adobe.com/message/4253400) by Eric Chan on the Adobe forums. Auto highlight recovery is always in use, which can hide overexposure in the raw file. With the earlier process version, PV2010, I think that the exposure slider was not image adaptive (that is, it was linear) unless highlight recovery was taking place.
The newer Adobe profiles have hue twists, which can cause problems when exposure and recovery are in use. See SandyMc's post here (http://chromasoft.blogspot.com/2009/02/adobe-hue-twist.html). These are introduced with profiles that have lookup tables in addition to the matrix math. Sandy's DCPtool can address this problem. The whole topic is complicated and beyond the scope of my expertise. However, in my experience the hue shifts are not problematic if there are no blown highlights.
Hopefully, some of the forum heavies will enter into the discussion.
Bill
My experience with all my cameras shows the [OWP] falls between +3⅔ and +4⅓ stops (arrived at by taking the exposure just before I reach LR's red "Highlights Clipping" warning). This is the Exposure Bias [EB] that needs to be applied to your spot meter reading of the brightest area in your scene. This will produce 99% brightness in your raw software - the [OWP].
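The whole OWP test reduces to a few lines. A sketch only: `biases` is the 16-step bracket described above, and the brightness readings are hypothetical numbers standing in for what the raw processing software would show for each frame:

```python
import numpy as np

# The 16 test exposures: 0 to +5 stops above the meter reading, 1/3 stop apart.
biases = np.linspace(0, 5, 16)

def owp_bias(brightness_readings, target=99.0):
    """Largest exposure bias whose rendered brightness stays at or
    below the target (99% = the Optimum White Point)."""
    return max(b for b, v in zip(biases, brightness_readings) if v <= target)

# Hypothetical brightness readings from the raw processor, one per frame:
readings = [50, 53, 56, 60, 64, 68, 72, 77, 82, 87, 92, 95, 97, 99, 100, 100]
print(round(owp_bias(readings), 2))  # 4.33 -> apply about +4 1/3 stops of bias
```

The returned bias is the [EB] to dial in on top of a spot reading of the brightest scene area.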
It has to be doing something with the raw data, or else it could never rescue blown highlights.
No, inventing the data in "blown" highlights does not need "raw" data (it is not about what is better, but about what is possible)... you have, for example, 2 channels in their internal RGB color space after the color transform without "clipping", so you can use that information to invent the data in the 3rd channel...
If data is demosaiced and linearly (matrix) colour transformed, and even if it has been gamma transformed (this only applies to pure gamma lifting, not sRGB-like gammas), it should be possible to change exposure without any hue shift through scaling the RGB values by a constant value.
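A quick numerical check of that claim, using a made-up linear pixel: a constant scale leaves the channel ratios (and hence hue) untouched, and a pure power-law gamma preserves that property because the scale factor just becomes another constant:

```python
import numpy as np

rgb = np.array([0.40, 0.25, 0.10])     # a hypothetical linear, demosaiced pixel
scaled = rgb * 2.0 ** -1               # pull exposure back one stop

# Channel proportions (hence hue/saturation) survive a constant scale:
print(rgb / rgb.sum())                 # chromaticity before
print(scaled / scaled.sum())           # identical after

# Even after a pure gamma lift, scaling stays hue-safe, because
# (k*v)**g == (k**g) * (v**g): still just a constant scale.
gamma = 1 / 2.2
g1 = rgb ** gamma
g2 = scaled ** gamma
print(np.allclose(g2 / g2.sum(), g1 / g1.sum()))  # True
```

An sRGB-style curve with its linear toe segment breaks this, which is exactly the caveat in the post above.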
In addition, LR/ACR have a baseline exposure adjustment.
Adobe's code has one hidden exposure correction inside it (hardcoded, might be zero) and one picked (if present) from DCP profiles (BaselineExposureOffset)... so the real hidden exposure correction is the sum of the two... convert a raw to DNG using Adobe's code (ACR, LR, DNG Converter) and check the "BaselineExposure" tag for what is in the code (vs what is in profiles)... that tag is "ISO" dependent... it can be different for raws shot at different nominal "ISO"s... for example Sony A7: ISO 50 gives BaselineExposure = -0.65 and ISO 100+ gives BaselineExposure = +0.35... you can also check Fuji's X-Trans cameras at high ISOs ;D
no, inventing the data in "blown" highlights does not need (it is not about what is better - but about what is possible) "raw" data... you have for example 2 channels in their internal RGB color space after the color transform w/o "clipping" - so you can use that information to invent the data in the 3rd channel...
LR absolutely works on the raw data.
I accept your point that it's not working directly on the raw data itself... as opposed to after white balancing and gamma correction
We'd just do it all in Photoshop after rendering with dcraw or something equally similar and fast.
Consider this: such a raw converter as LightZone ( http://www.lightzoneproject.org/ ) was doing everything after running the dcraw executable ;)
no, it does not... as it was noted - demosaic and color transform (linear /matrix/ or non-linear /matrix + LUT or dummy matrix + LUT/) before exposure correction in the UI... that makes it non-raw data... demosaic alone makes it non-raw data...
I'd be surprised if Adobe chose to do highlight recovery on data that has been processed by demosaic, and even more so if it has been processed by (potentially non-linear) color processing. After such processing, the saturation point is hard to define and a single blown sensel in a single channel can affect all 3 channels in a spatial neighborhood.
Consider this: exposure correction is done there (ACR/LR) after the WB operations and after the color transform, which might as well be applying some "gamma" (or any other curve - whatever is in the 1st-stage LUTs by their design)
So now it's after demosaic, colour transform, AND white balance?? What's it going to change to in the next comment?
I'd be surprised if Adobe chose to do highlight recovery on data that has been processed by demosaic
that was always the case - where did you see the changes ?
You originally said 'demosaic and colour transform'. Twice. Unless "colour transform" includes white balance. I assumed colour transform means conversion to a colour space.
Sorry... I forgot to insert the WB between demosaic and color transform... the WB operation is closely related to the color transform, as ColorMatrix tags are part of the DCP profile (in fact if your profile has only CM tags, those guide both the WB and color transform operations), and potential interpolation of the various matrices/LUTs from the DCP profile also depends on your selection of white balance in the ACR/LR UI (when you have a dual-illuminant profile).
But once again, this is all dodging around the point of the discussion.
So how is it possible for the exposure slider to 'normalise' blown highlights in the rendered data then? And no, I'm not talking about 'highlight reconstruction' from 1 or 2 channels. I'm talking about when all three channels are blown in the LR histogram (and in the jpg preview).
If the slider worked on the post rendered data, then it shouldn't be able to pull blown highlights back into normal ranges.
You really don't make much sense.
Adobe doesn't invent data for all three channels, as you could confirm for yourself if you did a comparison between a dcraw conversion and a LR conversion.
You appear to be trolling. Have you ever recovered detail from a nominally blown raw file? I have, many times, and no, it's not a case of making a "nice gradual transition to white". It's about putting actual detail back into the image.
if the data in all 3 channels is "clipped" what do you think is happening ?
AFAIK, based on text from Thomas and Eric at Adobe, if one (perhaps two?) channels are clipped, they can reconstruct them from the remaining data. I don't believe they've ever indicated they can do this if all three are actually clipped, although PV2012 *may* be different.
Top one - jpg out of camera. Bottom one - highlights recovered in Lightroom.
That does not illustrate anything, Bernie... what you see on screen (and what the raw converter in camera saves as the "OOC JPG") is not what is inside the raw converter after demosaic and all those operations before exposure adjustment for its code to work with, that's it... when you play with the exposure adjustment in LR/ACR the code does not work with some fixed result of a previous exposure adjustment (like when you load that "OOC JPG")...
:D
if the data in all 3 channel is "clipped" what do you think is happening ?
PS: if the details "appear", that simply means the "nominally blown raw file" was not nominally blown in all channels
that does not illustrate anything, Bernie... what you see on screen (and what raw converter in camera saves as "OOC JPG") is not what is inside the raw converter after demosaick and all those operations before exposure adjustment for its code to work with, that's it... when you play with exposure adjustment in LR/ACR the code does not work with some fixed result of some previous exposure adjustment (like when you load that "OOC JPG")...
ASAIK, based on text from Thomas and Eric at Adobe, if one (perhaps two?) channels are clipped, they can reconstruct them from the remaining data.
Exactly (in a broad sense) what is happening in DCRAW when all three channels are clipped in the jpeg and LR histogram.
According to your description, that data is lost.
Yes, data is lost (forever, that is) - some details and color can be invented/guessed (plus Adobe tries to make it visually nice while inventing/guessing) if 1 or 2 channels are still not clipped... if all 3 are clipped, see the quote from E. Chan above... this is not reconstruction like with ECC, this is pure guesswork - but it comes quite close to reality (or is simply logical/pleasant from a visual perspective) in a lot of cases
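To make the "invent the 3rd channel" idea concrete, here is a deliberately naive sketch. It is not Adobe's algorithm; it just borrows the channel ratio from a hypothetical unclipped neighbour to guess a value above the clip point:

```python
import numpy as np

CLIP = 1.0  # normalised clip level

def guess_clipped_channel(pixel, neighbour):
    """Toy highlight 'reconstruction': if exactly one channel of `pixel`
    clipped, re-estimate it from the ratios seen in an unclipped
    `neighbour`. This is guesswork, not recovery - the true data is gone."""
    pixel, neighbour = np.array(pixel, float), np.array(neighbour, float)
    clipped = pixel >= CLIP
    if clipped.sum() != 1:
        return pixel                               # only handle the 1-channel case
    i = int(np.argmax(clipped))
    ok = ~clipped
    scale = np.mean(pixel[ok] / neighbour[ok])     # how much brighter is this pixel?
    pixel[i] = neighbour[i] * scale                # invent a plausible value above CLIP
    return pixel

# Red clipped; a red-rich neighbour at half the brightness supplies the ratio.
print(guess_clipped_channel([1.0, 0.8, 0.6], [0.8, 0.4, 0.3]))
```

The red channel comes back as 1.6, above the clip point: plausible, sometimes close to reality, but invented.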
we are not talking about the dcraw or LR histogram - we are talking about when exposure correction is done inside LR/ACR... that is after demosaic and WB/color transform = not with raw data...
yes, data is lost (forever that is) - some details and color can be invented/guessed (plus Adobe tries to make it visually nice while inventing/guessing) if 1 or 2 channels still not clipped... if 3 are clipped - see the quote from EChan above... this is not a reconstruction like with ECC, this is purely guesswork - but it works quite close to the reality (or simply logical/pleasant from visual perspective) in a lot of cases
We actually were, which is why I made the point earlier that you don't seem to be following the debate very well. Follow the quote trails back and you'll see.
Maybe you did; myself, I'm just interested in the code flow /stages/ inside ACR/LR...
We are talking about where the data is clipped in the LR histogram
That is, for your description of how the exposure slider works to be accurate
It is very simple: optimal capture is where as many photons as possible are collected, without clipping any channel.
...
Expose ETTR, or for mid tones if concerned about hue twists. Once capture has been made, we can apply any kind of processing.
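"As many photons as possible without clipping any channel" translates directly into a number of stops. A sketch, assuming a 14-bit clip point of 16383 (real cameras and ISOs differ) and per-channel raw maxima from a trial frame:

```python
import math

def ettr_headroom(channel_maxima, clip=16383):
    """Stops of extra exposure available before the brightest raw
    channel reaches the clip point (assumed 14-bit here)."""
    return min(math.log2(clip / m) for m in channel_maxima)

# Brightest raw values per channel in the scene's non-specular highlights:
headroom = ettr_headroom([4000, 8000, 2000])
print(round(headroom, 2))  # ~1.03 stops - green is the limiting channel
```

A positive result means the trial shot can be opened up by that much and still be a proper ETTR exposure; zero or negative means a channel is already at the ceiling.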
Having determined the following extreme points per 1/3-stop increment testing (texture maintained):
(A) Stops above 18% grey card just prior to clipping of first channel
(B) Stops above 18% grey card just prior to clipping of last channel
(C) Stops below 18% grey card just prior to black point
(D) Same as (C) with maximum negative exposure in post. My assumption is that deliberate negative exposure is only of interest to extend the shadow end, due to the benefits of ETTR for the other data.
I can check what falls at -2 to -3 stops and upwards, which per my trial and error is what appears, for my digital back, to maintain quality data when some of it needs to be brought upwards in post.
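Those four test points reduce to simple arithmetic. A sketch with purely hypothetical numbers (the a-d values below are placeholders, not measurements from the Leaf back):

```python
def latitude(a, b, c, d):
    """Total usable stops around the 18% grey point, from the four test
    results: a/b = stops above grey before the first/last channel clips,
    c = stops below grey to the black point at defaults,
    d = the same with maximum negative exposure applied in post."""
    return {"conservative": a + c,     # defaults, no recovery
            "with_recovery": b + d}    # highlight recovery + shadow pushes

# Purely hypothetical test values, for illustration only:
print(latitude(a=3.3, b=4.0, c=5.7, d=7.0))
```

The spread between the two totals is exactly the extra latitude that post-processing adjustments can buy over the default rendering.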
Let's get back on subject…
What I did was akin to a film test. Bracket a controlled studio setup with a very white tile (BabelColor). View where in my raw converter of choice I really clipped all three channels, compared to no clipping at the exposure below that.
Where to locate the maximum of photons collected, and how ?? ?
Ensure shadows will be adequately collected?
That's rather the point of ETTR (optimal exposure).
painting
Good one.
You fail to consider that it is not my description - it is how the processing was explained (more than once) by the developers; that is, the exposure correction stage (pull/push/leave as is) in ACR/LR comes after demosaic and WB/color transform (the DCP has several parts; the matrices /CM, FM/ and the HueSatMap LUTs are applied before exposure correction)
...I would not advise using LR/ACR... especially with the current process version (PV2012)...
...the BaselineExposure... increase or decrease the rendered values by the amount of the baseline adjustment.
...Most light meters (including those built into our cameras) are calibrated to yield 12% saturation rather than the 18%
Sorry but I don't see how the two are mutually exclusive.
Photographers are supposed to take pictures, not run tests and be scientists.
Well it is your description here in this thread. Do you understand it, or are you just parroting something without understanding it?
If ACR/LR is working in 16-bit, then when white balance is applied to a nominally over-exposed image a whole bunch of pixels will max out at 65535. Pulling back the exposure slider will be unable to create detail out of that blown data; it will only be able to render those areas as some shade of grey. So either you are wrong, or ACR/LR are working at a larger bit depth than 16-bit. If that's the case, then you've got to explain how it is that both DCRAW and LR render nominally blown highlights the same way. DCRAW, I'm pretty sure, only works in 16 bits (well, it did when I last played with this stuff in 2009 or so). Can you explain all this, or are you uninterested in the details of your beliefs?
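The 16-bit ceiling argument is easy to demonstrate numerically. A sketch with an assumed white-balance gain of 1.8 on one channel: the sample that exceeds 65535 after the gain cannot be brought back by a later exposure pull, while the unclipped one survives the round trip:

```python
import numpy as np

MAX16 = 65535          # ceiling of an unsigned 16-bit pipeline
WB_GAIN = 1.8          # assumed white-balance multiplier for one channel

raw = np.array([60000.0, 33000.0])            # two samples from that channel
balanced = np.minimum(raw * WB_GAIN, MAX16)   # 16-bit storage clips here
pulled_back = balanced / WB_GAIN              # exposure pull applied later

# The 33000 sample comes back unchanged (to floating-point precision);
# the 60000 one clipped at 65535 and returns as ~36408 - unrecoverable.
print(pulled_back)
```

So if detail does come back in such areas, the working data must either be wider than 16 bits or the exposure operation must happen before this clipping stage, which is the crux of the disagreement above.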
"but a large part, perhaps not equally so IS running tests, and understanding the science behind the process."And IMHO that involves testing, some science and of course, 'taking' pictures.
My comment was never intended to mean that becoming a photographer meant you no longer have to practice and keep learning your craft!
Again - you fail to comprehend that those are not my "beliefs" - I am merely repeating to you (as you can't find out yourself) what was said more than once by Adobe developers (E. Chan) about where the exposure adjustment happens in the ACR/LR pipeline in terms of operations on the data :)
If you are interested in the data types that Adobe uses internally during the various operations, then you probably need to see not the dcraw.c code but rather Adobe's DNG SDK code (available publicly) - that might give you an idea (because they use that code in ACR/LR).
If you are interested to find out how Adobe achieves the sequence of operations they disclosed (demosaic -> ... -> WB/color transform up to and including the HueSatMap tables -> ... -> exposure -> ...) without losing any necessary data, please feel free to go and dig there, but that's not something I am interested in here, sorry, so don't drag me into a "bits" discussion... or you can chase E. Chan and challenge his description of the sequence of operations on the data in ACR/LR (that is, if I understand you correctly, that exposure operations /the relevant UI sliders/ in ACR/LR code must be done before WB, as you apparently want to say, not after, as he says)... here I want to include again one of his many quotes: "the DNG processing model performs a linearization of the original raw image values followed by demosaicing, then white balance. All of the other image stages follow. So to answer your question, all of the image ops except for linearization (which isn't under user control anyways) happens after demosaicing."
Andrew,
I know we are. I'm just not as comfortable as you are with the statement: Photographers are supposed to take pictures not run tests and be scientists.
We are on the same "wavelength" on this - trust me.
I'm not questioning what Chan says, all I'm saying is that there's a chance that you might not have understood what he was saying.
1. I plotted many bracketed exposures, many times, testing that the top 2 stops occur in the top 10% of our exposure. (Yes, I read George Jardine's article about revisiting the Zone System a while back explaining this... but I'm an empirical kind of guy... I need to test it on "my" system)
(http://onezone.photos/wp-content/uploads/OWP_XLS.png)
Let's get back on subject…
Where to locate the maximum of photons collected, and how ?? ?
Ensure shadows will be adequately collected?
EXAMPLES:
How to expose the attached scenes precisely: Ziczac Bridge, View from Restaurant, Bierstadt (painting)?
Consider exposing to allow, in processing, for an extended shoulder transition into the highlights of around 1/3 stop more than the available highlight recovery (Velvia slide film, pages 19-20 in my paper), and to maintain as much quality pixel info as possible in the "important" parts of the image.
I can spot meter for (A) or (B) above, or, knowing more, expose for any of the remaining points. After that, all I need is one shot. To add: although my Leaf back has an excellent RAW histogram, due to its 80MP it is slower to use than a DSLR and more battery hungry.
Anders
Apart from the discussion about exposure determination, the images that Anders HK has posted are both really good and quite illustrative.
I agree that Anders' images are lovely, but I think that attempts to adapt zone principles to digital are misdirected since film and digital sensors have a very different response to light. Digital is linear whereas film responds proportionally to the logarithm of exposure.
That's always been my understanding and belief too, so it's good to hear confirmation of this.
I just found the article that this image seems to stem from:
(http://cnet1.cbsistatic.com/hub/i/2012/04/18/2d7fb674-fdc3-11e2-8c7c-d4ae52e62bcc/751dadeda7198d4ab1d5f1f66dba0322/DxO-film-vs-digital-dynamic-range.png)
"An objective protocol for comparing the noise performance of silver halide film and digital sensor", Frédéric Cao, Frédéric Guichard, Hervé Hornung, Régis Tessière
https://www.dxo.com/sites/dump.dxo.com/files/dxoimages/ei/sci-publications/2012%20Film_vs_Digital_final_copyright.pdf
... a proper ETTR exposure is always optimal. It is simply the maximum exposure that avoids clipping non-specular highlights.
I would suggest checking the raw files with a tool that shows a proper histogram, without manipulation in the raw processor.
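For anyone wanting to roll their own such tool, per-channel raw histograms need nothing more than slicing the CFA mosaic before any processing touches it. A sketch assuming an RGGB pattern and 14-bit data, run here on a synthetic mosaic rather than a real raw file:

```python
import numpy as np

def cfa_histograms(mosaic, bins=8, clip=16383):
    """Per-channel histograms taken straight off an RGGB mosaic,
    before demosaic, white balance or any tone curve touches the data."""
    planes = {"R":  mosaic[0::2, 0::2], "G1": mosaic[0::2, 1::2],
              "G2": mosaic[1::2, 0::2], "B":  mosaic[1::2, 1::2]}
    return {name: np.histogram(p, bins=bins, range=(0, clip))[0]
            for name, p in planes.items()}

# A synthetic 4x4 RGGB mosaic standing in for real raw data:
mosaic = np.arange(16).reshape(4, 4) * 1000
print(cfa_histograms(mosaic)["R"])  # counts for the red photosites only
```

Feeding it actual sensor data would require a raw decoder up front, but the histogram itself is exactly this: counts of untouched photosite values per CFA channel.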