That looks great... it sounds simple too, with just two shots to make the image. I can't believe the final outcome. Both of the shots you took would have had a certain amount of noise, but the detail in the final image looks great, almost as if you had been using a fill light. I would certainly use this in my workflow. What type of program did you make? Is it similar to a lens cast correction program?
Ros
[a href=\"index.php?act=findpost&pid=124764\"][{POST_SNAPBACK}][/a]
I'd certainly like to know more about it. Please do the translation. Thanks, Jim
[a href=\"index.php?act=findpost&pid=124789\"][{POST_SNAPBACK}][/a]
GLuijk,
In what way would you say your method is better than standard blending procedures as outlined in this Luminous Landscape tutorial?
http://www.luminous-landscape.com/tutorials/digital-blending.shtml
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=124781\")
Can you show us a sample image of a region spanning the bright-dark transition, where your program switches image source pixels? A good example would be a crop including both the bright lamp and the nearby dark speaker from your first posted image. If the program has any glitches, that is where they will be found.
[a href=\"index.php?act=findpost&pid=124824\"][{POST_SNAPBACK}][/a]
This result can already be achieved in PS pretty easily using the "blend if" sliders under blending options. Expose one shot for the highlights and one for the shadows, bring the overexposed shot down in the RAW processor so the tonalities are the same, layer the frames in PS, and use the "blend if" tool to reveal the overexposed shot in the shadow regions only, to get noise-free shadows. As mentioned before, the difficult/problematic areas are where there is sensor blooming and fringing.
Does your program offer any advantages to this or is it a similar idea with automation?
tim
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=124830\")
Can't this be done in CS3 Extended by taking multiple exposures on a tripod and merging under Automate? However, it won't extend the dynamic range.
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=124831\")
If you have a Mac version of your utility, I'd be glad to do some testing for you.
Regards,
Bernard
[a href=\"index.php?act=findpost&pid=124771\"][{POST_SNAPBACK}][/a]
The biggest advantage of this method is a reduction in quantisation noise in the shadows resulting from low-precision (12-bit) analogue-to-digital conversion. This gives much smoother tonality in the shadows and is useful for those who like to manipulate their images. The reduction in shot and thermal noise comes from averaging over multiple exposures (as per traditional correlation processes used to remove noise in signal processing).
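A minimal numpy sketch (with made-up signal and noise figures, not from any real sensor) of why averaging frames tames shot and read noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat grey patch, in linear sensor units (electrons).
signal = np.full((100, 100), 500.0)

def noisy_frame(scene, read_noise=10.0):
    """Simulate one exposure: Poisson shot noise plus Gaussian read noise."""
    return rng.poisson(scene) + rng.normal(0.0, read_noise, scene.shape)

single = noisy_frame(signal)
stack = np.mean([noisy_frame(signal) for _ in range(16)], axis=0)

# Averaging N statistically independent frames reduces random noise by
# roughly sqrt(N): 16 frames give about a 4x improvement.
print(f"single frame noise: {single.std():.1f}")
print(f"16-frame average:   {stack.std():.1f}")
```

The same sqrt(N) argument underlies the traditional correlation processes mentioned above.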
This result can already be achieved in PS pretty easily using the "blend if" sliders under blending options. Expose one shot for the highlights and one for the shadows, bring the overexposed shot down in the RAW processor so the tonalities are the same,
The emphasized text is the difficult part, at least for the test images provided by Guillermo. At least my meagre skills aren't quite up to matching them well enough to avoid posterization effects.
This technique has been around since... a long time. Though it is good to see it get some airing again.
[a href=\"index.php?act=findpost&pid=124850\"][{POST_SNAPBACK}][/a]
I have tested a technique to completely eliminate noise* in digital images, based on the signal-to-noise ratio improvement achieved through overexposure.
At the same time, this technique dramatically expands the dynamic range of your image in the shadows (don't think of HDR, it's not like that) and recovers in high detail all the textures present in the darkest areas of your image.
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=124758\")
To do this you simply need to shoot twice, making use of a tripod. One shot will be as usual, keeping highlights unburnt.
[a href=\"index.php?act=findpost&pid=124758\"][{POST_SNAPBACK}][/a]
Mostly with non-linear data, though, which is more unwieldy.
The best place to do most image math is in the RAW linear state where everything is very simple.
[a href=\"index.php?act=findpost&pid=124892\"][{POST_SNAPBACK}][/a]
A friend of mine is modifying DCRAW's C source code to perform all these operations, not only on linear data as I do, but on the RAW file itself, prior to Bayer demosaicing, white balance or any scaling. (In fact we are having some trouble with the black-point offset most cameras keep in their RAW files, which must be subtracted before the sensor's behaviour can be considered linear; it is not strictly linear because of this offset.)
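For illustration, a sketch of the black-point handling described above, with hypothetical offset and saturation values (real ones are camera-specific and read from the file by dcraw):

```python
import numpy as np

# Hypothetical values: a 12-bit sensor with a black-level offset of 128 ADU.
BLACK_LEVEL = 128
WHITE_LEVEL = 4095

raw = np.array([128, 600, 2100, 4095], dtype=np.float64)

# Subtract the offset (clipping negative noise excursions to zero) and
# normalise, so zero light maps to 0.0 and the data is linear in exposure.
linear = np.clip(raw - BLACK_LEVEL, 0.0, None) / (WHITE_LEVEL - BLACK_LEVEL)

print(linear)  # black maps to 0.0, saturation to 1.0
```

Only after this subtraction do ratios between exposures (e.g. a +4EV frame being 16x brighter) hold pixel by pixel.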
And if he manages, I'll try to convince him to pursue a 16-bit DNG RAW file as the output. That would be simply great, can you imagine? Put a bunch of RAW files, each with a different arbitrary exposure, into a 16-bit RAW file free of noise, ready for developing in your favourite RAW developer.
But I have a feeling that recreating the DNG RAW format is no joke, so perhaps we must be happy just putting our fingers into the image before the developing process.
Regards.
[a href=\"index.php?act=findpost&pid=124921\"][{POST_SNAPBACK}][/a]
If you could manage that, then it would be gigantic. I really would love to use such a thing.
If you can't do that, it would use two RAW images to produce a TIFF, right? Do we then lose all the things from raw, like white balance and other adjustments? That would be a drawback in exchange for less noise.
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=124939\")
David Coffin, the author of DCRAW, told me it is not recommended to develop RAW files without applying the white balance; however I have found perfect results doing it afterwards. Look at this example:
Image developed without WB:
[a href=\"index.php?act=findpost&pid=124943\"][{POST_SNAPBACK}][/a]
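A small numpy sketch of why applying white balance afterwards can work on linear data: WB is just a per-channel scaling, so it commutes with linear operations such as blending (the multipliers and pixel values below are illustrative, not from any real camera):

```python
import numpy as np

# Illustrative white balance multipliers for R, G, B.
wb = np.array([2.0, 1.0, 1.5])

# Two frames of the same pixel in camera-native linear RGB.
a = np.array([0.10, 0.20, 0.12])
b = np.array([0.40, 0.35, 0.30])

# Balancing after the merge equals balancing each frame before it:
# ((a + b)/2) * wb == (a*wb + b*wb)/2 for any per-channel scaling.
merged_then_wb = ((a + b) / 2) * wb
wb_then_merged = (a * wb + b * wb) / 2

print(np.allclose(merged_then_wb, wb_then_merged))
```

This equivalence holds only while everything stays linear; once hue-dependent corrections enter (as discussed further down the thread), the order starts to matter.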
Having a somewhat automatic but customizable program to produce expanded dynamic range images - for lack of a better term
[a href=\"index.php?act=findpost&pid=125005\"][{POST_SNAPBACK}][/a]
I don't profess a detailed knowledge of raw development, but isn't Tim Farrar's method the same as, or similar to, what you propose (with a linear TIF output)?
http://www.farrarfocus.com/ffdd/bracket.htm
Mike
[a href=\"index.php?act=findpost&pid=124992\"][{POST_SNAPBACK}][/a]
The kind of problems he is talking about probably mainly affect saturated colors of certain hues. The RGB response of the camera is different from the RGB used in display files and media. RAW converters with optimized color correction need to shift hues and vary saturation based on the hue and saturation of the white-balanced image. If you don't WB before doing the full conversion, the wrong hues will be shifted and saturation-adjusted after white balancing. Some demosaicing algorithms also work with separate luminance and chroma, and these separate differently before and after white balance.
[a href=\"index.php?act=findpost&pid=125010\"][{POST_SNAPBACK}][/a]
Better terms are definitely needed. What most people call "HDR" is really a low, compressed-DR display of a high-DR scene. Would you call AM radio "HDR"? No, but it is quite analogous to what is called "HDR" in digital photography.
A simple linear image with high dynamic range is really only an image with low noise in the shadows; an image in which tones are usable many stops below the maximum signal level.
I've taken the liberty to post a few CS3 HDR edits on a temporary page of my website: see http://hornerbuck.com/reference.aspx.
I have wondered for a long time why film scanners did not use this technique to achieve extremely high dynamic range (or low shadow noise). They clearly consider this an important specification, since they all inflate it so much in their marketing. Scanners have no stability problem, so aligning multiple exposures should be easy. This technique would allow a scanner to deliver a merged image with full 16-bit depth, even if the sensor is capable of only 12-bit depth per pass. Apparently a few scanners are now starting to do this...
[a href=\"index.php?act=findpost&pid=124788\"][{POST_SNAPBACK}][/a]
Two points about this. First, the new Silverfast does exactly this, using two scans of different exposure. But sub-$1000 flatbed scanners have huge stability problems and great difficulty aligning images. Because of heat expansion and cheap stepper motors, the prosumer flatbeds have difficulty making two scans exactly the same length, so there is some loss of resolution, though noise is virtually eliminated.
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=125218\")
Wouldn't it be wonderful if they could incorporate the technique right into the camera? I.e., with one click of the shutter, have the camera take two exposures, automatically adjusting the sensitivity for each, and without having the mirror move twice.
Some cameras allow exposure bracketing with mirror lock-up and a self-timer, but I'm not aware of any current cameras doing that without releasing and cocking the shutter for each exposure.
Another useful technique is described here:
http://photoshopnews.com/2007/03/27/image-stacks-in-photoshop-cs3-extended/
With the Align Layers command you don't need a tripod (if you're careful), and once you run Median on the multi-frame Smart Object, the noise is greatly reduced.
Hmm, that seems useful, although it seems to require quite a few exposures to achieve that usefulness.
[a href=\"index.php?act=findpost&pid=125409\"][{POST_SNAPBACK}][/a]
The requirement for the extended version of CS3 is also a bit bothersome; Adobe doesn't appear to provide an upgrade from CS3 to CS3 Extended.
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=125409\")
I have tested a technique to completely eliminate noise* in digital images, based on the signal-to-noise ratio improvement achieved through overexposure.
At the same time, this technique dramatically expands the dynamic range of your image in the shadows (don't think of HDR, it's not like that) and recovers in high detail all the textures present in the darkest areas of your image.
[...]
[a href=\"index.php?act=findpost&pid=124758\"][{POST_SNAPBACK}][/a]
A very interesting technique, especially for its simplicity. Have you compared your results with those delivered by PhotoAcute (http://www.photoacute.com)? Yes, PhotoAcute can do other things, but the noise reduction is one of the most interesting things it does, and it'd be interesting to see whether your method can deliver the same.
-Lars
[a href=\"index.php?act=findpost&pid=125418\"][{POST_SNAPBACK}][/a]
Kirk,
I was interested in using the Silverfast S/W you refer to with my Minolta Multi Pro (non flatbed) film scanner (6x7 film) until reading about sharpness degradation due to mis-registration. Here is a reference:
http://tech.groups.yahoo.com/group/multipro/message/2991
I know from experience that the Multi Pro hardware is capable of producing two very similar scans that can be very accurately aligned in Photoshop, so perhaps it is (or was) a Silverfast problem that prevents them from being aligned at scan time?
Do you know if this problem has been fixed in a newer version of Silverfast?
Ken
[a href=\"index.php?act=findpost&pid=125251\"][{POST_SNAPBACK}][/a]
Two points about this. First, the new Silverfast does exactly this, using two scans of different exposure. But sub-$1000 flatbed scanners have huge stability problems and great difficulty aligning images. Because of heat expansion and cheap stepper motors, the prosumer flatbeds have difficulty making two scans exactly the same length, so there is some loss of resolution, though noise is virtually eliminated.
[a href=\"index.php?act=findpost&pid=125218\"][{POST_SNAPBACK}][/a]
Good point. With a bit more effort, though, the two exposures could be interleaved line-by-line to eliminate alignment problems. Step to first line position, acquire scan line with short exposure, acquire scan line with long exposure, step to next line position, repeat. At the end, you have two images with different exposure but perfect alignment, even with a poor stepper. I don't think anybody actually does this.
[a href=\"index.php?act=findpost&pid=125296\"][{POST_SNAPBACK}][/a]
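The interleaved ordering could be sketched like this; `acquire_line` is a hypothetical placeholder for the scanner's line readout, and the point is purely the ordering, in which both exposures of a line are captured before the carriage steps:

```python
def acquire_line(position, exposure):
    """Placeholder for the scanner's CCD line capture."""
    return (position, exposure)

def interleaved_scan(num_lines):
    short_image, long_image = [], []
    for pos in range(num_lines):                        # step to line position
        short_image.append(acquire_line(pos, "short"))  # short exposure
        long_image.append(acquire_line(pos, "long"))    # long exposure
    return short_image, long_image

short_img, long_img = interleaved_scan(4)
print(short_img)
print(long_img)
```

Because each line pair shares one carriage position, stepper inaccuracy shifts both images identically and alignment is perfect by construction.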
Some cameras allow exposure bracketing with mirror lock-up and a self-timer, but I'm not aware of any current cameras doing that without releasing and cocking the shutter for each exposure.
The mode you're suggesting seems to have rather few benefits over the use of multiple shutter releases.
[a href=\"index.php?act=findpost&pid=125380\"][{POST_SNAPBACK}][/a]
I was thinking along the lines of the camera taking bracketed (ISO) exposures and doing the pixel-by-pixel selection and replacement using the technique described at the start of this thread. Only the final combined image would be saved to the memory card. Perhaps the user would set two ISOs for the exposures: one ISO gets the highlights, the other the shadows. The shutter speed and f-stop would have to be the same for each exposure to avoid having weird things happen to the combined image.
I think there will be more "weird things" happening to the combined image from ISO bracketing than from shutter speed bracketing.
I have tested a technique to completely eliminate noise* on digital images based on the signal/noise ratio improvement achieved through overexposition.
At the same time this technique extremely expands the dynamic range of your image in the shadows (don't think of HDR, it's not like that) and recovers in high detail all textures present in the darkest areas of your image.
* It actually does not eliminate noise at all; it just takes, for every pixel, the one with the best signal-to-noise ratio. That is why textures are not only 100% preserved, but improved.
To do this you simply need to shoot twice, making use of a tripod. One shot will be as usual, keeping highlights unburnt. The second shot will be done with a severe overexposure (I found +4EV to be a good value). A simple piece of software merges those two shots into one final image with no noise in it and fine detail even in the darkest zones. I have converted my modest 350D into a virtually noise-free digital camera with 12 f-stops of real usable dynamic range.
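The per-pixel selection described above might be sketched as follows, working in normalised linear space; the clipping threshold and toy pixel values are assumptions for illustration, not taken from the actual program:

```python
import numpy as np

EV_DIFF = 4      # the second frame is overexposed by +4EV
SAT = 0.98       # assumed clipping threshold in normalised linear units

def merge_zero_noise(normal, overexposed):
    """Wherever the +4EV frame is not clipped, use it (scaled back down by
    2^4 to match tonality), because its signal-to-noise ratio is higher;
    elsewhere keep the normally exposed frame."""
    scaled = overexposed / (2 ** EV_DIFF)
    clipped = overexposed >= SAT
    return np.where(clipped, normal, scaled)

# Toy pixels in the normal frame: deep shadow, midtone, highlight.
normal = np.array([0.01, 0.20, 0.90])
over = np.clip(normal * 2 ** EV_DIFF, 0.0, 1.0)  # what the +4EV frame records

result = merge_zero_noise(normal, over)
print(result)  # the shadow comes from the clean +4EV frame, the rest is kept
```

The scaling by exactly 2^EV_DIFF is what makes the two sources tonally seamless, which is why the merge must happen on linear data.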
I've used this technique successfully for many years.
Unfortunately, it doesn't work well at all with the sample raw files provided by Guillermo Luijk. There are nasty highlight artifacts. Granted, the difference is 4 stops, not 3. Adjusting the sliders so that one gets rid of the highlight artifacts also seems to get rid of the reduced noise.
Unfortunately, it doesn't work well at all with the sample raw files provided by Guillermo Luijk. There are nasty highlight artifacts. Granted, the difference is 4 stops, not 3. Adjusting the sliders so that one gets rid of the highlight artifacts also seems to get rid of the reduced noise.
The post has a link to his article, which has a link to a forum discussion with no RAW links. Post a link direct to the RAWs, so I can find them.
Unfortunately, it doesn't work well at all with the sample raw files provided by Guillermo Luijk. There are nasty highlight artifacts. Granted, the difference is 4 stops, not 3. Adjusting the sliders so that one gets rid of the highlight artifacts also seems to get rid of the reduced noise.
[a href=\"index.php?act=findpost&pid=126439\"][{POST_SNAPBACK}][/a]
The problem is: how do I get from this rather flat image to the final vision without introducing artifacts around the edges of the window frames?
I think what a lot of you are missing here is that this technique is not making an image where you actually see the HDR range increased; it is reducing noise in the shadow areas and bringing back detail like nothing I've seen, not raw converters' luminance smoothing, nor even PS's "Reduce Noise" filter.
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=126537\")
As I mentioned before, when blending images there's often a halo effect around borders at high-contrast transitions. I don't have the skills to get rid of these, at least not without painstaking hours of work.
I'm not trying to knock what is being discussed, but it really isn't new and has been discussed several times, e.g.
Yes, this appears to accomplish with a bunch of exposures what this technique manages with two, and at the cost of more than a little extra manual labour. The exception may be the benefits of the image stack program, which appears to introduce additional flexibility.
Noise Reduction with multiple exposures (http://luminous-landscape.com/forum/index.php?showtopic=3581&hl=)
If you search through the forums there are examples of using HDR to reduce noise, as well as image stacking, and we have discussed a number of software packages to reduce noise and increase dynamic range on a number of occasions.
From what I've seen and recall, none appear to have the same simplicity as this method.
If you have halos around high-contrast edges that you need to get rid of, then I would suggest shooting multiple exposures no more than 2/3 or 1 stop apart.
Why shoot four exposures 2/3 or 1 stop apart, when you can settle for two exposures three or four stops apart?
Blending in Photoshop (and I guess any other technique) will mitigate the effect of halos provided you have sufficient exposures from which to extract 'clean' data. Two exposures 4 stops apart don't give sufficient information to eliminate all types of artifacts.
It seems to work well enough in the examples provided in this thread; I see no artifacts in the images presented to us.
Does this work for you, Ray?
[attachment=2761:attachment]
[a href=\"index.php?act=findpost&pid=126543\"][{POST_SNAPBACK}][/a]
Two exposures 4-stops apart doesn't give sufficient information to eliminate all types of artifacts.
[a href=\"index.php?act=findpost&pid=126545\"][{POST_SNAPBACK}][/a]
However, in a situation like this, if I have to use the lasso tool, I could simply copy & paste the 'window view' from the dark image to the light image. With both methods I have the problem of that transition edge along the window frame, a problem which is not particularly apparent in the jpeg but is definitely there as can be seen in the crop.
Whenever you're compositing images, you should always have a bit of fuzziness at the edge where one layer transitions to another, or you'll have either matte lines or an artificial "cut-out" look to the edge. This technique makes it easy to get natural-looking blends along composited edges.
Yes, I realise this. It's really an issue of how much stuffing around one needs to do to achieve the right balance which looks natural. However, I'll try to go through those procedures you've outlined tomorrow with a clear head.
I noticed that the CS3 demo version had a feature whereby one can enlarge or diminish a selection by a specified number of pixels, as well as specifying a degree of feathering. That could be useful with this particular image.
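A toy numpy sketch of why feathering hides the transition: blurring the hard selection turns the jump into a ramp over a few pixels (a simple box blur stands in for Photoshop's feathering here, and all values are made up):

```python
import numpy as np

def feather(mask, radius):
    """Soften a hard 0/1 selection with a box blur; any low-pass filter
    would serve the same purpose."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(mask, kernel, mode="same")

# A 1-D slice across the window-frame edge: right half selected.
hard = np.concatenate([np.zeros(10), np.ones(10)])
soft = feather(hard, radius=3)

dark = np.full(20, 0.2)    # frame exposed for the highlights
light = np.full(20, 0.8)   # frame exposed for the shadows

# Feathered composite: the transition ramps over ~2*radius pixels instead
# of jumping, which is what hides the "cut-out" edge.
blend = soft * light + (1.0 - soft) * dark
print(np.round(blend, 2))
```

Enlarging or shrinking the selection before feathering, as described above, just shifts where this ramp sits relative to the frame edge.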
The attached ZIP file has the layer styles
I'm not familiar with DCRAW but don't mind working on the command line. Is there somewhere I can download your version? Will this be able to output a DNG file?
[a href=\"index.php?act=findpost&pid=126905\"][{POST_SNAPBACK}][/a]
We were thinking about that: producing a 16-bit DNG as the output. It would be orgasmic (lol). But we'd need to know a lot about RAW file formats. Perhaps with assistance from David Coffin...
[a href=\"index.php?act=findpost&pid=126909\"][{POST_SNAPBACK}][/a]
Well, DNG would be a very welcome feature!
[a href=\"index.php?act=findpost&pid=126910\"][{POST_SNAPBACK}][/a]
Hi Jonathan Wienke, just to say that I never stated that reducing noise through multi-exposure was a new idea; I am sorry if it looked like that. In particular, on my website I literally say: "Las ideas descritas hasta ahora no son nuevas, la novedad consiste en aplicarlas con el fin de obtener una reducción de ruido radical y de forma automatizada (...)", which means "The ideas described so far [referring to the noise reduction process through overexposure] are not new; the novelty consists of applying them with the goal of achieving a radical noise reduction in an automated way".
My Spanish is very limited, so I wasn't able to read your website article. My apologies. The first program you posted about breaks no new ground, and offers little or no advantage over blending techniques that have been available for years, including the method I posted about using layer blend styles.
If the Bayer interpolation routine were modified so that it took all source images into account simultaneously, slight mis-registration would actually be an advantage, because there would be more than one color channel at each output pixel site. Imagine that when shooting, the +3 exposure was shifted 1 pixel vertically relative to the 0 exposure, and the -3 exposure was shifted 1 pixel horizontally relative to the 0 exposure. After registering the RAW data, the Bayer interpolation now has 2 color channels per pixel to use, either red and green, or green and blue. The exposure scaling would have to be accounted for, but increased color accuracy could be achieved. Some medium format digital backs do this (though not at different exposure levels) to improve color accuracy: 3 exposures are taken, with the sensor being moved 1 pixel vertically or horizontally between frames.
You can download the complete DNG specification from a link at the bottom of this page (http://www.adobe.com/products/dng/).
Edit: The Bayer interpolation modification wouldn't really be necessary, the color accuracy improvement happens already when the different exposures are blended together. Never mind.
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=126968\")
Oh, I understand now. Your approach is to bracket a couple of stops below and above the "correct" exposure. In my opinion, bracketing underexposed as a rule is not necessary. Any underexposed shot (and -3 is VERY underexposed) doesn't provide additional clean information beyond the 0 and +3 shots.
My concept here is slightly different: take one shot making sure that you capture all the highlights, but without underexposing at all; just make sure you don't blow out information (an RGB-split camera histogram is good enough to check this). This is the most important shot of all, and after it you can forget about any additional underexposed shot.
[a href=\"index.php?act=findpost&pid=127035\"][{POST_SNAPBACK}][/a]
I've spent several hours over the past few days comparing Jonathan's 'split top layer blending' method with 2, 3 and 4 RAW images of the same scene loaded into HDR.
The 'split top layer' method using just 2 images does not need an underexposed image. The bottom layer should just be an image correctly exposed for the highlights, which usually means when converting in ACR, approximately a minus 1 stop EC adjustment should be applied to 'recover' highlights.
My initial impression was that I was still getting a hint of the halo effect, but I now believe this was due to traces of silicon sealant around the edges of the window panes and/or inappropriate adjustments with Photoshop's Shadow/Highlight tool.
What I have noticed is that HDR in PSCS2 is not able to recover highlights well. If the lowest exposure is a full exposure to the right, the highlights will be slightly blown. In order to avoid this, I think it's necessary to include an underexposed image when using HDR.
Loading 16 bit TIF conversions into HDR seems to produce some pretty awful results.
[a href=\"index.php?act=findpost&pid=127206\"][{POST_SNAPBACK}][/a]
May I have your RAW files to test them with my routine?
Some people have come to me with different HDR programs (like Photomatix) and, after fiddling with them for some time, achieved results similar to pixel selection for blending. But usually HDR's tone mapping forces local microcontrast while keeping overall contrast low, and this produces very unreal results; it's simply a different concept. I prefer not to alter local or overall contrast, and to leave the user the task of getting the best from the noise-free image in the way he likes best (contrast curves, zone editing, even HDR in other software, ...).
[a href=\"index.php?act=findpost&pid=127226\"][{POST_SNAPBACK}][/a]
Hmm, that seems useful, although it seems to require quite a few exposures to achieve that usefulness.
jani,
The requirement for the extended version of CS3 is also a bit bothersome; Adobe doesn't appear to provide an upgrade from CS3 to CS3 Extended.
[a href=\"index.php?act=findpost&pid=125409\"][{POST_SNAPBACK}][/a]
You are really reinventing the wheel here. Blending together the best parts of frames shot at different exposure levels had been around for several years before HDR blending was added to Photoshop as a feature. I've been doing so since 2001 or so, when I got my first digital camera.
What did you do to blend the exposures? You mentioned a mask, how did you make it? Jim
[a href=\"index.php?act=findpost&pid=129457\"][{POST_SNAPBACK}][/a]
GLuijk,
When do you plan to release the new version of the program? It is very interesting!
I should (could) be on broadband, but I object to signing a 24 month contract when I intend travelling a lot in the near future.
[a href=\"index.php?act=findpost&pid=129651\"][{POST_SNAPBACK}][/a]
Hi, for all those who might be interested, I have just uploaded an English version of the article explaining this technique: ZERO NOISE PHOTOGRAPHY (http://www.guillermoluijk.com/article/nonoise/index_en.htm)
Hopefully I will finish a final, ready-to-use version of the program within this month (Aug 2007).
Regards
[a href=\"index.php?act=findpost&pid=132099\"][{POST_SNAPBACK}][/a]
Any update on this?
[a href=\"index.php?act=findpost&pid=134367\"][{POST_SNAPBACK}][/a]
Looks great. Charge a reasonable fee for a decent GUI based Windows program and I'll pay up
Quentin
[a href=\"index.php?act=findpost&pid=134540\"][{POST_SNAPBACK}][/a]
Looks great. Charge a reasonable fee for a decent GUI based Windows program and I'll pay up
Quentin
[a href=\"index.php?act=findpost&pid=134540\"][{POST_SNAPBACK}][/a]
Me too, for a Mac version.
[a href=\"index.php?act=findpost&pid=134992\"][{POST_SNAPBACK}][/a]
I'm in...... Windows
davidbogdan
[a href=\"index.php?act=findpost&pid=134997\"][{POST_SNAPBACK}][/a]
I believe it is logical to do it for both Windows & Mac
[a href=\"index.php?act=findpost&pid=135101\"][{POST_SNAPBACK}][/a]
Any update on this program?
[a href=\"index.php?act=findpost&pid=137467\"][{POST_SNAPBACK}][/a]
I have already thought of how to implement all the algorithms, and now I have come to the user interface. It will consist of 5 stages:
1. RAW development
2. Image Alignment
3. Relative Exposure calculation
4. Advanced features (blend thresholds, anti ghost, progressive blending)
5. Blending
These stages have to be calculated in sequence, and we can perform them all at once or just up to the currently selected option. We can also change parameters and repeat the process from a specific point onward, without having to repeat the calculation of the previous stages.
However, I want to keep it simple for non-advanced users, so there will be a "Do it all" button that will use default parameters to get the resulting image in one mouse click. Advanced features can be learned progressively.
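The recompute-only-from-a-point behaviour could be sketched like this; the stage names follow the list above, but the placeholder computations and class design are purely illustrative, not the program's actual code:

```python
# Results are cached per stage; changing a parameter invalidates that stage
# and everything after it, so earlier stages are never recomputed.

STAGES = ["develop", "align", "exposure", "advanced", "blend"]

class Pipeline:
    def __init__(self):
        self.cache = {}   # stage name -> cached intermediate result
        self.runs = []    # log of stages actually computed

    def _compute(self, stage, previous):
        self.runs.append(stage)          # stands in for the expensive work
        return f"{stage}({previous})"

    def run(self, upto="blend"):
        previous = "raw files"
        for stage in STAGES:
            if stage not in self.cache:
                self.cache[stage] = self._compute(stage, previous)
            previous = self.cache[stage]
            if stage == upto:
                break
        return previous

    def invalidate(self, stage):
        """Parameter changed at `stage`: drop it and every later stage."""
        for later in STAGES[STAGES.index(stage):]:
            self.cache.pop(later, None)

p = Pipeline()
p.run()                   # first pass computes all five stages
p.invalidate("exposure")  # user tweaks the relative-exposure parameters
p.run()                   # only stages 3-5 are recomputed
print(p.runs)
```

A "Do it all" button then amounts to `run()` with every parameter at its default.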
If you have any suggestions this is the time!
(I am sorry Mac users, I don't have a Mac, and I don't know how to program your beautiful machine).
(http://img248.imageshack.us/img248/8377/dibujogf8.jpg)
[a href=\"index.php?act=findpost&pid=137923\"][{POST_SNAPBACK}][/a]
Looks pretty, but in this case the proof is definitely in the eating.
As for the Mac users (and me Linux user): What are you using for a graphics toolkit? If you used QT, it'd be autoportable to Mac and Linux.
If that's not an option, are you going to release the source so somebody else could have a go at a port?
-Lars
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=138192\")
What's the meaning of 'the proof is definitely in the eating'? Hehe, it's the first time I've heard that expression.
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=138197\")
Mmm, I am using the Gfl SDK (http://perso.orange.fr/pierre.g/xnview/engfl.html), a 16-bit graphics library by a French guy called Pierre E. Gougelet. It's a C library, but he provides a VB6 API so I can call its functions from my code. In fact I just need to be able to read/write image files and read/write pixel channels in 16 bit. It was hard to find a library that can do just that.
Ah, well, since it's in Visual Basic 6, the program won't be portable; that's a shame.
Isn't it enough that Guillermo is very generously and graciously sharing and writing this for free (BTW Guillermo, I'll also happily pay for it - you should be rewarded for your work) that we also have to have people complaining that he's not writing a Mac version, or asking him to make his source code freely available??
Load Boot Camp and use that. If I were writing this product and saw this response to my efforts, I'd keep it to myself. But then I'm not as nice or as generous a person as Guillermo...
[a href=\"index.php?act=findpost&pid=138349\"][{POST_SNAPBACK}][/a]
I would suggest making this program open source, unless you want some company ripping it off and reprogramming it in Turbo C++ or something; then we WILL be paying for it. Also, if you make it open source, people will work on it indefinitely and it can never be sold, ever. Either that, or copyright it ASAP.
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=139709\")
Isn't it enough that Guillermo is very generously and graciously sharing and writing this for free (BTW Guillermo, I'll also happily pay for it - you should be rewarded for your work) that we also have to have people complaining that he's not writing a Mac version, or asking him to make his source code freely available??
Hey, cool down a bit; you don't have to be quite that hostile and fanatical about your dislike for everything non-Windows and open source, especially when I've suggested no such thing.
Load Bootcamp and use that.
:roll:
If I was writing this product and I saw this response to my efforts, then I'd keep it to myself.
If you were the author of the program and posted such a response, you'd definitely get to keep it to yourself; very few people want to use software made by people with such hostile attitudes, because they simply don't want to be treated that way.
But then I'm not as nice or as generous a person as Guillermo ...
You certainly aren't.
Well, I am using DCRAW to develop the RAW files and Xnview libraries to read/write pixel values, so the only thing that's really mine is the idea of putting all that together and making it work to blend images.
In the US, those ideas are probably still possible to patent.
However I think I have in mind some nice algorithms to eliminate ghosting and visible borders with local progressive blending, as well as producing high quality B&W images doing all calculations (exposure correction, B&W channel mixing and even gamma) in floating point precision before the final 16-bit rounding is applied.
It will be interesting to see (yet another) demonstration of your results!
Just for the record, I too am a Mac user. I also use a PC at those times when I can't get appropriate software for the Mac, and vice versa. Bootcamp too difficult to use, too inconvenient? I use it, it works fine: it's not perfect, but life's like that.
My Chequebook / Paypal account is ready the moment the program goes on sale - can I assume raw conversion within the program is not essential, or if it is, then Mamiya ZD files will be supported?
Quentin
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=141818\")
However I am planning to introduce a tonal richness quality increase and a virtually unlimited DR expansion (16 f-stops or more is definitely possible) thanks to the gamma correction being performed in floating point precision, prior to a single final 16-bit integer rounding. In a 16-bit linear RAW no more than ~12 f-stops can be coded with reasonable tonal richness, due to the lack of levels in the lowest f-stops.
The difficult part will be to find a real-world scene with such a huge dynamic range!
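As an illustration of the floating-point idea above, here is a minimal sketch under my own assumptions (a plain power-law gamma on normalised linear values; this is not the program's actual code):

```python
import math

def develop(linear_pixels, ev_correction=0.0, gamma=2.2):
    """Exposure correction and gamma encoding in float, with one single
    final rounding to 16-bit integers."""
    out = []
    for v in linear_pixels:
        v = min(max(v * 2.0 ** ev_correction, 0.0), 1.0)  # exposure in linear light
        v = v ** (1.0 / gamma)                            # gamma encoding, still float
        out.append(round(v * 65535))                      # the only integer rounding
    return out

# Deep shadows: a 16-bit *linear* coding would leave these values only a
# handful of levels apart; after the float gamma they are spread over
# hundreds of distinct levels.
print(develop([1e-4, 2e-4, 4e-4]))
```

The point is that quantisation to integers happens exactly once, at the very end, so the deep shadows keep their tonal separation instead of collapsing onto the few lowest linear levels.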
Not sure that such a big compression will give a good picture on a screen, or worse, on paper.
I've seen so many awful HDR pictures, partly because the authors wanted to show everything in the picture, even in very dark areas!
Keep in mind that our eyes also have an "instantaneous limited dynamic range"!
So, a good idea would be an "S-shaped" curve to preserve dynamics and keep good contrast in the mid-tones!
But I'm very impressed by your work and results!!
Thierry
[a href=\"index.php?act=findpost&pid=142159\"][{POST_SNAPBACK}][/a]
If you could also add a superresolution algorithm to this program, it would be priceless.
There is a program, PhotoAcute (photoacute.com); it works, but only on small images, so it cannot be used for high-end photography.
I started this post: http://luminous-landscape.com/forum/index....showtopic=19860 (http://luminous-landscape.com/forum/index.php?showtopic=19860)
(Multishot superresolution software) as I am looking for a solution to digitize my 6x7 negatives using a Mamiya ZD, yet at a higher resolution than the sensor can record. PhotoAcute can create higher resolution images thanks to sub-pixel misalignment between the originals; it also cleans up noise, as you get better signal statistics from multiple captures.
Combine all these features into what you are doing - and this will be a complete marvel.
GLuijk,
In what way would you say your method is better than standard blending procedures as outlined in this Luminous Landscape tutorial?
http://www.luminous-landscape.com/tutorial...-blending.shtml (http://www.luminous-landscape.com/tutorials/digital-blending.shtml)
[a href=\"index.php?act=findpost&pid=124781\"][{POST_SNAPBACK}][/a]
GLuijk's technique can be used to recover detail in the highlights, but it can also be used with a totally different goal: the shadows are not made brighter, but less noisy. The picture has the same exposure but looks as if it was taken with an "ideal" camera, with almost no noise.
GLuijk,
I have a Nikon Coolscan 8000 which can generate NEF raw files.
If you would like, I can provide samples at various exposure levels; let me know the specifics.
[a href=\"index.php?act=findpost&pid=142483\"][{POST_SNAPBACK}][/a]
Hi Guillermo,
Any chance for your Software Release before Christmas?
[span style=\'font-size:11pt;line-height:100%\']Optimist [/span]
[a href=\"index.php?act=findpost&pid=145507\"][{POST_SNAPBACK}][/a]
I hope so! although I will be on vacation in Namibia from tomorrow for the next 3 weeks.
This morning I was shooting at a typical tourist restaurant in downtown Madrid, one of those places full of disgusting bull heads hanging from the walls. The routine worked perfectly to recover all the hair texture in the black bulls (there were two of them), which was full of noise in the least exposed shot, while at the same time the light areas were not blown. It worked very well.
[a href=\"index.php?act=findpost&pid=145513\"][{POST_SNAPBACK}][/a]
please tell me the requirements for zero noise images
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=150894\")
The blending technique(s) described in the mentioned articles are a kind of substitute for "Curves" when the scene has contrast that stretches beyond the dynamic range and curves don't work. The result is always to lighten up the shadows or dodge the highlights in order to recover detail and legibility where they would be lost.
GLuijk's technique can be used to recover detail in the highlights, but it can also be used with a totally different goal: the shadows are not made brighter, but less noisy. The picture has the same exposure but looks as if it was taken with an "ideal" camera, with almost no noise.
[a href=\"index.php?act=findpost&pid=142253\"][{POST_SNAPBACK}][/a]
Read this thread ...
http://luminous-landscape.com/forum/index....showtopic=17775 (http://luminous-landscape.com/forum/index.php?showtopic=17775)
[a href=\"index.php?act=findpost&pid=151017\"][{POST_SNAPBACK}][/a]
I don't quite understand:
Yes.
do I need to take the same photo two times? with different camera settings each time? and then somehow merge them?
what about photos that we don't have the time or cannot take two times?
Then you're s**t out of luck with those photos; you just have to make the best of the camera's limitations.
and why hasn't any camera manufacturer implemented this technique to happen automatically?
At this point in time, it probably is too resource intensive to do in-camera. It may be possible some time in the future.
[a href=\"index.php?act=findpost&pid=151053\"][{POST_SNAPBACK}][/a]
mm what if I set the camera to shoot automatically 2 times with the minimum time between shots?
[a href=\"index.php?act=findpost&pid=151058\"][{POST_SNAPBACK}][/a]
But for those scanning film, this technique can always be used! This would make a program that would handle scanned TIFFs superarchgigauseful![{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=142330\")
SilverFast scanning software already does this.
I don't think this is quite true. Whatever method of image blending is used, the principle for getting less noise in the shadows is always to have an image that is correctly exposed for the shadows, which in a high contrast scene necessarily means an image with blown highlights.
Likewise, in order to get detail in the highlights (with a contrasty scene) one needs an image correctly exposed for the highlights, which also necessarily means an image which is very noisy in the shadows. This applies to GLuijk's method also. There's no getting away from it.
When blending with the method described in the LL tutorial, one always has a degree of control over the individual layers after the blending procedure is completed, just as one has a choice as to how much EC to apply to each RAW image before converting. The result therefore is not to lighten the shadows, but to blend an image with noise-free shadows and blown highlights, with an image with detailed highlights and noisy shadows. How dark or light those shadows are in the final blend is entirely up to you. If you want them darker, then use the 'levels' control for that particular layer to make them darker. If you want them lighter just to see how much noise is there, you will find that there is very little noise, providing the overexposed shot was at least 3 stops overexposed.
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=151026\")
and why hasn't any camera manufacturer implemented this technique to happen automatically?
As far as I know, only one vendor has done this: Fuji, with its Fujifilm Super CCD SR sensor, which performs real in-camera HDR. It consists of two separate sensors in one, sharing the same surface. They capture the scene with a relative exposure of 3.6EV (I have calculated this figure, which after a good number of tests seems to be a constant parameter), and thanks to this the Fuji S3 Pro and S5 Pro can extend their DR up to 11 f-stops, about 3 complete f-stops more than any Canon or Nikon around at the moment. The two images are independent of each other, and can be extracted and developed separately from the RAW file.
Welcome back. I discovered this thread just after you left. I hope when you write your program you will include Sigma X3F raws in it.
I am sure there are many of us hanging on to this thread waiting for further news.
Best
Mike
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=151159\")
Thank you. Actually since I am not developing the files myself but using David Coffin's DCRAW (http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm), the formats supported will be those supported by DCRAW, which is a wide list (http://cybercom.net/~dcoffin/dcraw/index.html#cameras) (at present I think it is the only RAW developer which can deal with Nikon D300 RAW files hehe).
Is your camera one of these?
- Sigma SD9
- Sigma SD10
- Sigma SD14
They are supported.
I would like to write some code this weekend.
[a href=\"index.php?act=findpost&pid=151164\"][{POST_SNAPBACK}][/a]
Well Ray, I have to say what Diapositivo said is right: my routine just generates an image free of noise in the shadows, but with the same exposure, brightness, contrast, ... and everything else as the least exposed shot of the set. Therefore it is very dark, and it is the user's choice how and where to lift the shadows, and by how much. I wanted a routine that does not modify the original image's parameters in any way.
[a href=\"index.php?act=findpost&pid=151132\"][{POST_SNAPBACK}][/a]
So the result provided is simply what you would get from a noise-free camera able to capture the whole dynamic range in the shadows when the exposure is set to preserve the highlights of the scene, and it is now you who decides on your preferred editing method.
[a href=\"index.php?act=findpost&pid=151244\"][{POST_SNAPBACK}][/a]
Guillermo,
So to settle this matter in my own mind, i.e. whether the blended image is more noisy in the shadows than the unblended overexposed image, I took a few shots of the window in my rather squalid apartment, which I'm renting for $10 a day on a monthly basis with unlimited broadband included. (Saving up for a 5D MkII, you see.)
[a href=\"index.php?act=findpost&pid=151346\"][{POST_SNAPBACK}][/a]
I see you have air-con Ray...hope it works!
Julie
[a href=\"index.php?act=findpost&pid=151421\"][{POST_SNAPBACK}][/a]
PS: BTW I always feel a bit embarrassed about showing such large pictures, but I don't know how to make 'click to enlarge' thumbnails from them. How should I do it?
[a href=\"index.php?act=findpost&pid=151364\"][{POST_SNAPBACK}][/a]
1. I am a bit confused about those green colours PS CS3 HDR gave you in my sitting room; is that the only result that can be achieved? I merged the images in CS2 HDR and found non-optimum results and artifacts, but do you mean CS3 HDR has got worse?
2. I must admit I have not read the LL method, but looking at your explanations I guess it consists of merging 2 versions of the same image with some difference in exposure, making use of a Gaussian blur to make the blending progressive. Right?
Your result is great and natural, but I don't agree with this step: "The dark image has been lightened by +1 EC" since it means you are blowing 1 complete f-stop of information in the highlights.
However it will surely also work by leaving the dark image as is (so no loss of information), applying a -2EV correction to the light image and then the LL method. Brightness could then be controlled using a curve, which preserves detail in the highlights while exposure correction doesn't.
PS: BTW I see you rescaled the image down a bit. For noise comparisons, if the image necessarily has to be rescaled it is VERY IMPORTANT to perform a nearest neighbour rescaling (in the Spanish version of PS it is called 'By Approximation', and it is the first option among PS's rescaling methods, followed by Bilinear and Bicubic).
Nearest neighbour rescaling just selects some unmodified pixels from the original image, so it preserves the signal to noise ratio intact, as can be seen in a 100% crop, while any interpolation method (bicubic, bilinear, ...) reduces noise thanks to pixel averaging.
Also, integer 50%, 33.3%, 25%, 20%, ... rescalings are recommended when using nearest neighbour, to avoid aliasing artifacts.
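For illustration, an integer-factor nearest-neighbour downscale amounts to simply keeping every Nth pixel, so the surviving pixels are untouched and the per-pixel noise statistics are preserved (a minimal sketch with hypothetical names, using a plain list-of-lists image):

```python
def nearest_neighbour_downscale(img, factor):
    """Integer-factor downscale that keeps only original, unmodified pixels,
    so the signal-to-noise ratio seen in a 100% crop is preserved."""
    return [row[::factor] for row in img[::factor]]

# 6x6 toy image; a 50% rescale keeps rows/columns 0, 2 and 4 exactly as-is
img = [[r * 10 + c for c in range(6)] for r in range(6)]
print(nearest_neighbour_downscale(img, 2))  # → [[0, 2, 4], [20, 22, 24], [40, 42, 44]]
```

Since no pixel value is ever averaged or interpolated, noise measured on the result is the same as on the original, which is exactly why this method is the fair one for noise comparisons.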
Jan, just because my post followed yours (I didn't quote yours either) doesn't mean my remarks were addressed solely to you: they were addressed to all those who, when offered something for free, were ungrateful and wanted more.
Peter
[a href=\"index.php?act=findpost&pid=139871\"][{POST_SNAPBACK}][/a]
Beta v0.9 version of Zero Noise, the program to automate this technique.
Hi all, as they say, better late than never. I took some time these days to finally develop the first beta version of the blending program to minimise noise and expand DR. There are still some things to improve and add, but at the moment it's fully functional and performs nicely in the tests.
I will offer it for download very soon from my site with a tutorial on how to make use of it.
Meanwhile have a look at this micro-tutorial with an example; you can also follow the images to find out.
Micro tutorial:
1. OPEN RAW FILES. Use the '...' option to select the directory containing the RAW files to be blended.
Choosing one of them is enough; the program will read the rest, displaying the selected image and the full list of RAWs (I have capped it at 10 RAWs, but using more than 4 stops making sense in any application. With 3, bracketing 0,+2,+4 as in this example, the results are superb):
(http://img245.imageshack.us/img245/6692/dibuzt9.jpg)
2. WHITE BALANCE. Adjust the white balance. Since there is no temperature/tint option yet, and DCRAW's linear multipliers can be unintuitive for trial and error, I have added the option of selecting a rectangular or circular patch on the image to be white balanced. The user draws this patch just by clicking on the image:
(http://www.guillermoluijk.com/tutorial/zeronoise/gui2.jpg)
If a circular patch is preferred because it better fits the area of interest to balance on, just press the button showing a square:
(http://www.guillermoluijk.com/tutorial/zeronoise/gui3.jpg)
Of the two possibilities I'd keep the first, as it looks more natural:
(http://www.guillermoluijk.com/tutorial/zeronoise/compwb.jpg)
3. RAW DEVELOPMENT. Once we have the desired white balance, just press 'Develop' and the program will invoke DCRAW to develop the RAW files. To watch DCRAW's progress it helps to untick the 'Hide MS-DOS' checkbox at the bottom:
(http://www.guillermoluijk.com/tutorial/zeronoise/dcraw.jpg)
4. BLENDING. The previous step generated one .tiff file per supplied RAW. Now just press the 'Blend' option and the program will fuse them into a final image with minimised noise, since it takes each pixel from the least noisy RAW. At the end of the process it displays the relative exposures between the shots; this step is very important: the program calculates them numerically, ignoring the EXIF data, which can be totally misleading (I will upload an example where this can be seen very clearly):
(http://www.guillermoluijk.com/tutorial/zeronoise/gui4.jpg)
In a -2,0,+2 bracket we can see that the EV separation between the first and second shots was not 2EV but less. Had 2EV been assumed in the blend, the transitions between zones would have been visible.
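The per-pixel selection in the blending step might be sketched like this (my own minimal illustration, not Zero Noise's actual code; the function names and the 0.98 clipping threshold are assumptions):

```python
def blend_pixel(values, rel_exposures, clip=0.98):
    """For one pixel, take the sample from the most exposed (least noisy)
    shot that is not clipped, scaled back to the darkest shot's exposure.
    `values` are normalised linear samples, one per shot; `rel_exposures`
    are linear multipliers relative to the darkest shot
    (e.g. [1, 4, 16] for a 0,+2,+4 EV bracket)."""
    for v, k in sorted(zip(values, rel_exposures), key=lambda t: -t[1]):
        if v < clip:          # unclipped: use it, matched to the base exposure
            return v / k
    return values[0]          # clipped everywhere: keep the darkest sample

# 0,+2,+4 EV bracket: the +4 EV sample is blown, so the +2 EV one is used
print(blend_pixel([0.05, 0.21, 1.0], [1.0, 4.0, 16.0]))  # → 0.0525
```

This is also why the relative exposures must be measured accurately: dividing by a wrong multiplier would shift the tones of exactly those regions where the source shot switches, making the transitions visible.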
5. MANUAL TONE MAPPING. The result is a linear .tiff that can be read in PS simply by assigning it a linear version of the colour space used as output. Linear versions of sRGB and Adobe RGB can be downloaded from this link (http://stats.sergiodelatorre.com/dlcount.php?id=_GUI_&url=http://www.guillermoluijk.com/download/perfiles.zip).
Once the profile is loaded and assigned, simply converting the image to the destination colour space (which can be the same one we generated it in, only no longer a linear version) de-linearises it, leaving it ready to be edited like any normally developed image.
This final image will look quite underexposed and low in contrast. The image is NOT UNDEREXPOSED, AND ITS CONTRAST IS NOT LOW; it is exactly as it comes out of the unprocessed RAW. It is just that, being used to ACR and other developers applying curves and brightness adjustments on their own, it will look dull to us.
Two curves (one to lift the shadows and one for contrast) are enough to obtain a correct final high dynamic range image, with the highlights preserved and low noise in the shadows, without having applied any noise reduction that could cost us texture:
(http://www.guillermoluijk.com/tutorial/zeronoise/resultado.jpg)
Comparing the least exposed shot of the initial set (the only one in which the view outside the window was not blown) with the resulting image:
(http://img245.imageshack.us/img245/9603/compki3.jpg)
[a href=\"index.php?act=findpost&pid=184042\"][{POST_SNAPBACK}][/a]
GLuijk,
The download page has a link to Histogrammar instead of zeronoisev0.9.zip.
[a href=\"index.php?act=findpost&pid=184940\"][{POST_SNAPBACK}][/a]
Do you have plans to enable those appetizing sliders for Gamma, etc. in the application?
Dear Sir,
Congratulations on making yet another useful program.
Naysayers will always try to tell you they do not need it; that's fine, because others do need it.
Most of us who deal with implementing raw conversions will support your point that substitutions (as well as stacking, blending, stitching) are best performed on the raw data. Demosaicing and gamma-correction of noisy data amplifies noise and propagates it through the colour channels. It takes more shots and more effort to get similar results after demosaicing, not to mention artefacts resulting from any demosaicing and caused by noise in the raw data.
IMHO your program could write back the same raw file, without demosaicing, allowing regular raw converters to be used on it.
Dear Sir,
Yes, I'm Iliah Borg
Raw formats are complicated, but for the case where you just need to replace the original sensor data with manipulated data, it is not so difficult. Native raw converters like Nikon's (NX), Canon's etc. do not support DNG, hence staying with native raw may make sense. DNG implementation is not too difficult: http://www.adobe.com/support/downloads/dng/dng_sdk.html (http://www.adobe.com/support/downloads/dng/dng_sdk.html)
[a href=\"index.php?act=findpost&pid=185475\"][{POST_SNAPBACK}][/a]
I totally agree; in fact the original idea was to do it all on the undemosaiced RAW data, with assistance from a C coder who managed to reuse DCRAW code to access the undemosaiced RAW data (in this thread we talked about that, and Jonathan Wienke even pointed out that a DNG output would be a very good approach). But this guy became too busy with other projects, so I went on alone and chose a demosaiced approach.
You give me new ideas for improvement. I have no idea about the DNG format; is it easy to build a DNG file from scratch? I think RAW formats are indeed quite complicated.
Just curious, are you Iliah Borg?
Regards.
[a href=\"index.php?act=findpost&pid=185450\"][{POST_SNAPBACK}][/a]
Regarding this I have a question for you if you don't mind: to apply the gamma to a calculated {R,G,B} linear pixel I plan to do (gamma=2.2 for simplicity, all normalised values):
1. Calculate Y = k1*R + k2*G + k3*B according to some weighted-average luminance model with k1 + k2 + k3 = 1.0
2. Apply gamma to the luminance: Y' = Y^(1/2.2) = (k1*R + k2*G + k3*B)^(1/2.2) = K*Y, so:
K = Y'/Y = Y^(1/2.2 - 1) = (k1*R + k2*G + k3*B)^(1/2.2 - 1)
3. So finally perform:
R' = R * K
G' = G * K
B' = B * K
Do you think this simple approach is right? I am fairly sure I am preserving tone (the ratio between R, G and B is kept). But can this way of applying the gamma have some undesired consequence, or require some caution? Maybe related to the colour profile used...
Best regards.
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=185493\")
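For what it's worth, the luminance-gamma scheme in the question above can be put into a few lines (a sketch only; the Rec. 709 weights are just one possible choice of k1, k2, k3, not something from the original post):

```python
def luminance_gamma(r, g, b, gamma=2.2, k=(0.2126, 0.7152, 0.0722)):
    """Apply gamma to the luminance only: all three channels are scaled by
    the same factor K = Y^(1/gamma - 1), so the R:G:B ratios are preserved."""
    y = k[0] * r + k[1] * g + k[2] * b   # weighted luminance, k1+k2+k3 = 1
    K = y ** (1.0 / gamma - 1.0)         # K = Y'/Y = Y^(1/2.2 - 1)
    return r * K, g * K, b * K

r2, g2, b2 = luminance_gamma(0.04, 0.02, 0.01)
print(round(r2 / g2, 6))  # → 2.0 : the 2:1 R:G ratio survives the gamma
```

Because all three channels share the same multiplier K, the resulting luminance is exactly Y^(1/2.2) while the channel ratios (and so the tone) are untouched, which is precisely the property asked about.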
You can find the equations for converting among color spaces at
http://brucelindbloom.com/ (http://brucelindbloom.com/)
under the "Math" section.
[a href=\"index.php?act=findpost&pid=185496\"][{POST_SNAPBACK}][/a]
Dear Guillermo,
In my experience, in floating point the precision of calculations is quite enough not to bother with gamma. In fact, any unnecessary calculation like gamma affects the resulting image in a negative way.
Right, but I necessarily have to go to 16-bit integer TIFF in the end.
You can find the equations for converting among color spaces at
http://brucelindbloom.com/ (http://brucelindbloom.com/)
under the "Math" section.
[a href=\"index.php?act=findpost&pid=185496\"][{POST_SNAPBACK}][/a]
I fear the melted-plastic look too much, I am afraid.
The plastic look is the result of noise reduction eliminating detail along with noise.
Note that there are matrices M for converting among color spaces, in particular for converting from XYZ to rgb. Furthermore, it sounds like you want to do a manipulation of the luminosity data in Lab space; the transformation between Lab and XYZ is more complicated than a simple gamma transformation, as you can see from the formulae on the linked site.
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=185561\")
Each camera sample may have its own saturation point. Moreover, they sometimes depend on the ISO setting. It is better to test both the floor and saturation points of any given camera than to rely on constants hardwired into a programme.
I agree. The problem is that this is a bit advanced for regular users.
What do you think commercial RAW developers such as ACR or Lightroom do? Have a huge table of saturation points for each camera/ISO pair, or just tend to clip highlights with a conservative low saturation point?
[a href=\"index.php?act=findpost&pid=186036\"][{POST_SNAPBACK}][/a]
Two shots at each ISO setting, one with the lens cap on and the other fully blown in each channel, allow you to make a table. Interestingly, the points for the two green channels are not always the same. Then the camera serial number plus a table of floor and saturation points make a nice basis for improving raw conversion.
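That two-shot calibration could be sketched like this (toy numbers and hypothetical function names of my own; a real tool would read the undemosaiced raw samples per channel):

```python
from statistics import median

def calibrate(dark_frame, blown_frame):
    """Estimate per-channel floor and saturation points from a lens-cap shot
    and a fully blown shot; each argument maps channel name -> raw samples."""
    floor = {ch: median(v) for ch, v in dark_frame.items()}  # typical black offset
    sat = {ch: max(v) for ch, v in blown_frame.items()}      # clipping ceiling
    return floor, sat

# toy samples; note the two green channels need not saturate at the same point
dark = {"R": [127, 128, 129], "G1": [128, 128, 127], "G2": [129, 128, 128], "B": [127, 129, 128]}
blown = {"R": [13823, 13823], "G1": [13823, 13822], "G2": [13584, 13584], "B": [13823, 13822]}
floor, sat = calibrate(dark, blown)
print(sat["R"], sat["G2"])  # → 13823 13584
```

Repeating this per ISO setting yields exactly the kind of floor/saturation table described above.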
Yes, the first thing I thought of when I discovered the saturation point was not always 2^N-1 was that a very accurate RAW development could 'recover' a good amount of information in those shots erroneously taken with too-high exposure values. In that respect, commercial developers would not always be optimum.
Regarding the black level, I thought all cameras had hidden pixels, so it was best to let the developer analyse them to find the most precise black point to be subtracted. At least DCRAW has always worked fine for me in calculating that figure, which varies quite a lot depending on exposure conditions.
Some cameras (like Nikon) unfortunately subtract that offset in-camera, right?
BTW a friend of mine who has a Fuji S3 Pro has reported magenta casts in the highlights several times when using ACR to develop Super CCD RAW files. Probably the reason is a slightly too high saturation point in ACR for the R photosites in that camera model.
[a href=\"index.php?act=findpost&pid=186137\"][{POST_SNAPBACK}][/a]
Wow. Digital photos that look like old-school film. I'm impressed at how the three largest problems - dynamic range, noise, and fringing - are gone with your software.
Q2:Could you possibly change it to have "slots" that you put the files into, and then you can select which one is what value? (so it knows what order to properly blend everything if you have lots of pictures) That way you could have 3 or 4 or even 12 exposures to blend together.(why not, digital "film" is not a factor here - most cards will hold 10-12 raw pictures)
P.S. Could someone who has a film scanner show an example of this as well? Preferably a Minolta Pro with MF slide film? (probably have to use SilverFast, right?)
You do need a way to manually enter the saturation point of a camera, since I suspect that each camera is a tiny bit different as well due to manufacturing and optical changes. (perhaps slightly different with each lens, even, though I suspect it's a very small value)
The lens does not affect the camera's saturation point. I am planning to introduce a 'Calibrate' button, so just by providing ZN with a saturated picture it would calculate the exact saturation point to optimise RAW development for each particular camera.
Q: what camera was used? It's impressive to say the least.
If you mean in the sitting room sample, my Canon 350D.
Q2: Could you possibly change it to have "slots" that you put the files into, and then you can select which one is what value? (so it knows what order to properly blend everything if you have lots of pictures) That way you could have 3 or 4 or even 12 exposures to blend together. (why not, digital "film" is not a factor here - most cards will hold 10-12 raw pictures)
The present version of the program allows up to 10 RAW files (capped there; I thought more would be pointless) and they don't need to be ordered. The program will order them by exposure level and calculate (not read from the EXIF) the relative exposure between each pair of images. If the program allowed the user to enter the EV differences between the shots, the result would probably be wrong and transitions would become visible due to exposure differences.
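The image-based (non-EXIF) exposure estimate could work roughly like this (a sketch under my own assumptions, not the program's actual algorithm; the 0.02/0.95 validity thresholds are made up):

```python
import math

def relative_ev(dark, bright, low=0.02, high=0.95):
    """Estimate the real EV separation between two linear exposures of the
    same scene from pixels that are well exposed in both, instead of
    trusting the EXIF. Inputs are matched lists of normalised linear values."""
    ratios = [b / d for d, b in zip(dark, bright) if d > low and b < high]
    return math.log2(sum(ratios) / len(ratios))  # mean brightness ratio, in EV

# a nominal +2 EV bracket whose real separation is only 1.9 EV
dark = [0.05, 0.10, 0.20]
bright = [v * 2 ** 1.9 for v in dark]
print(round(relative_ev(dark, bright), 2))  # → 1.9
```

Measuring the ratio from the pixels themselves is what lets the blend match exposures exactly even when the camera's nominal bracket steps (and hence the EXIF values) are slightly off.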
This has honestly made me reconsider whether I should be looking at digital or not.
Digital cameras have become really nice devices, Plekto. Their Achilles heel today is DR, and they are continuously improving. With this technique you can avoid the limitations in DR, but with the important limitation that it requires a tripod and a static scene.
Jonathan, I remember you would be interested in a blending with a 16-bit DNG output. Many people have shown a lot of interest for this option. To do that is no problem as long as we know how to build a DNG from scratch, so anyone who knows about the DNG format and would like to make a pure RAW blending tool, just contact me. I think it is not that difficult.
Welcome to the real world, Neo. Digital has obviously come a long way since you used it last;
Dang. I must have chosen the wrong color...
2 images is more than adequate in the majority of cases, and the more images you have the greater problems you have with alignment. And add a zero or two to your estimate of card capacity; the larger cards available can hold hundreds of RAW frames, possibly over 1000.
Film does not lend itself to DR blending as well as digital. Film has a non-linear response curve, which makes calculating the correct blend values accurately much harder. And if the film isn't perfectly flat when exposed and scanned, aligning multiple frames to blend becomes problematic.
As far as I know, they don't seem to differ much between units of the same model: I have tested saturated RAW files from two 40Ds and both saturated at exactly the same level (13000 something).
Then I should do:
sRGB(gamma=1.0) -> XYZ
XYZ -> sRGB(gamma=2.2) ?
but unless I am missing something, in the end this means:
R' = R^(1/2.2)
G' = G^(1/2.2)
B' = B^(1/2.2)
which does not preserve the ratio between the colours, so tones change, don't they?
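A quick numeric check confirms that concern: applying the gamma per channel compresses the channel ratios (two hypothetical linear values in a 2:1 ratio, my own illustration):

```python
r, g = 0.04, 0.02                          # linear channels in a 2:1 ratio
r2, g2 = r ** (1 / 2.2), g ** (1 / 2.2)    # naive per-channel gamma
print(round(r2 / g2, 3))                   # → 1.37, no longer 2: ratio compressed
```

In general (R/G)' = (R/G)^(1/2.2), so every ratio is pulled toward 1; that desaturation is exactly the "tones change" effect asked about, and it is what the shared luminance factor K avoids.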
It would be great to see a PS plugin for this great tool.
What would be the advantage of having this program as a PS plugin? People not using PS could not enjoy it, and anyway it would offer no advantage, since it uses DCRAW for the RAW development.
[a href=\"index.php?act=findpost&pid=194602\"][{POST_SNAPBACK}][/a]
What would be the advantage of having this program as a PS plugin? People not using PS could not enjoy it, and anyway it would offer no advantage, since it uses DCRAW for the RAW development.
[a href=\"index.php?act=findpost&pid=194602\"][{POST_SNAPBACK}][/a]
Because Adobe does its own funky thing when it converts/imports data, and then does more "tweaking" with it when you save it back.
It's much easier to just keep it as it is - a stand-alone app that does one thing better than any of the multi-tasking ones out there.
[a href=\"index.php?act=findpost&pid=196817\"][{POST_SNAPBACK}][/a]
http://luminous-landscape.com/forum/index....showtopic=25484 (http://luminous-landscape.com/forum/index.php?showtopic=25484)
Specifically this. Yes, adding more options for output is always a good thing. (I'm not a fan of "Photoslop", as you might gather.)
[a href=\"index.php?act=findpost&pid=197142\"][{POST_SNAPBACK}][/a]
where can I get ZERO NOISE for MAC?
Nowhere, but if you would like to write Zero Noise for Mac, I will share with you all the algorithms.
Can you post the link to the Spanish forum so I can have a look at the ZN rewrite?
Could you share your code for the exposure calculation and the blending?
1) Does ZN v0.91 have your previously mentioned corrections for the Canon 40D?
Not exactly corrected (since it's a fault in DCRAW's source code), but in v0.91 you can now easily solve the problem just by entering the right saturation value for your camera in the 'Saturation' text box.
I have detected 2 cameras so far that need to be corrected:
- Canon 30D: 3398
- Canon 40D: 13823
I am quitting my job in August and taking a kind of sabbatical year. I will devote a lot of time to getting used to coding in C/C# and rewriting Zero Noise in these languages with another two guys, introducing some needed improvements such as anti-ghosting and progressive blending. The idea is to produce a 16-bit DNG output file from Zero Noise instead of a TIFF file, so I again ask for help: if any coder is able to generate a DNG file from scratch using the Adobe DNG SDK, just contact me.
We are already developing a new RAW developer based on DCRAW but with a powerful graphical interface and other new features for high precision RAW development (just development, no processing). If you want to track that project: Perfect RAW (http://www.guillermoluijk.com/software/perfectraw/index.htm).
BR
If possible please continue with the TIFF output as well. Sigma cameras do not do DNG.

Hi Mike, I didn't explain myself well: the RAW files fed into Zero Noise can be from any vendor supported by DCRAW, just like now.
I'd imagine it would be better to make the panoramas first, then do the blending in Zero Noise. My understanding is that ZN only reads RAW files, and since Autopano only produces TIFFs and PSDs, I'd have to find a way to convert those to DNGs. Or is there an easier way to approach this?
Or would you recommend doing the blending beforehand, and stitching afterwards? The potential problem with this is big panoramas where exposures are wildly different at the different edges of the final image.
I wonder why no company has done this before; perhaps market research indicates that what people demand are programs where you just click a button to obtain a finished "HDR" image with no extra effort.
Thank you again for your work, Guillermo.
I for one am very appreciative of your interest and dedication to this. I imagine that we will look back one day and recognize the significance of your contribution. I predict that every serious DSLR will soon perform your algorithm in-camera.
Best regards,
Bruce
I've been able to reproduce your exposure calculation in C with the ImageMagick API.
I'm just wondering if you can explain the min and max colour values:

min = 65536 / pow(2, 6);
max = 65536 * 0.9;
Another thing: is it possible to calculate the best blending ratio? For the time being I'm using 90%.
Ced
I am willing to help you with translations from Spanish to English whenever you need it, with the understanding that the work probably won't be instantaneous.
Hi Ced, nice to see you are achieving the same things on Linux.
I have gathered my answers to your questions:
1. Exposure correction down, in the linear state, is as simple as multiplying each RGB level by a factor < 1. For instance, let L be a 16-bit level that has to be corrected 1 f-stop down: OUT = L * 0.5.
2. The contrast and brightness curves I apply are always made by hand. The proper curves depend deeply on the image's histogram and the desired result. I don't think a 100% automatic process is possible here, although some algorithm to produce a starting curve should be feasible (RAW developers calculate such a curve).
3. The min = 65536 / pow(2, 6); max = 65536 * 0.9; values were just my criteria. I thought values higher than 90% of saturation could start to be non-linear on certain sensors. At the low end, pow(2, 6) ensures the program will not consider values falling in the 7th or lower f-stop (they are surely very noisy).
4. The best blending ratio of course is 100%, or nearly so, but depending on how linear your sensor is, a lower value is recommended. 99% means any RGB value less than or equal to 99% of saturation in a given image will be considered valid.
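Points 1, 3 and 4 above can be sketched in a few lines of code. This is only my illustration of the criteria as described (the function names are mine, and this is not Zero Noise's actual source):

```python
# A sketch of points 1, 3 and 4 above (an illustration, not Zero Noise's
# actual code). Levels are 16-bit linear values, 0..65535.

MIN_LEVEL = 65536 / 2 ** 6   # discard the 7th and lower f-stops: too noisy

def correct_exposure(level, ev):
    """Point 1: in the linear state, shifting exposure by `ev` stops is
    just a multiplication by 2**ev (1 stop down: OUT = L * 0.5)."""
    return min(max(int(round(level * 2.0 ** ev)), 0), 65535)

def is_usable(level, blend_ratio=0.99):
    """Points 3 and 4: a level is trusted only if it sits above the noisy
    low f-stops and at or below `blend_ratio` of saturation."""
    return MIN_LEVEL <= level <= 65536 * blend_ratio

print(correct_exposure(40000, -1))  # -> 20000
print(is_usable(500))               # -> False (buried in noise)
print(is_usable(65500, 0.9))        # -> False (too close to saturation)
```

With these two pieces, blending reduces to taking each pixel from the most exposed shot whose value is still usable, after correcting its exposure down to the reference shot.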
That would be nice. I planned to translate the ZN tutorial this August but found no time. Would you like to translate some part of it? But wait first, because a new version of the tutorial (in Spanish) is coming soon, since I have added new features.
With the mask technique (negate of the over-exposed image applied to the alpha channel of the negative-corrected over-exposed image) described in the PS tutorial, I don't get the same smooth gradient in the spotlight area as with your ZN software.

Surely they will never be 100% the same, since there are always differences in implementation and rounding. The important thing is whether the solution works and provides a good result.
My experiment produces the mask and blends the original image with a threshold ratio, so I don't think it's exactly the same result as your software's; or maybe I've done something wrong with the mask generation.
I've been using your "lounge" raw images for my test, so for your sensor, which is the best blending ratio?

The 350D saturates at 4095, which makes me think (this is just a hypothesis) that its ADC actually clips the analogue output from the ISO amplifier. That would make the 350D's RAW files very linear right up to saturation (at the cost of losing some highlight information captured by the sensor, of course), because the values have already been clipped to a threshold. That's why we can be very demanding with those sample images, where very high thresholds can be set for blending.
Could you share your new algorithm for the "relative exposure calculation", so I can update my code?

That new algorithm is still just in my mind, but it consists of calculating an accumulative array of relative exposures. For each pixel pair, the relative exposure is calculated, weighted by the level of exposure of those 2 pixels, and then fed into the array at the index corresponding to the calculated relative exposure. In the end we just take the median of the statistical distribution obtained. I think it will work fine.
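The accumulative-array idea described above can be sketched as follows. Since the algorithm was only outlined, the binning, the weighting by the smaller of the two values, and the fallback are my assumptions:

```python
# A sketch of the described relative-exposure estimate (details such as
# bin count, weighting and fallback are assumptions, not ZN's code).
# Each pixel pair votes for an exposure ratio, weighted by how well
# exposed the pair is; the weighted median of the votes is the answer.

def relative_exposure(under, over, bins=2048, max_ratio=32.0):
    hist = [0.0] * bins
    for u, o in zip(under, over):
        if u <= 0 or o <= 0:
            continue                     # skip clipped/black pixels
        ratio = o / u
        if ratio >= max_ratio:
            continue
        weight = min(u, o)               # trust well-exposed pairs more
        idx = int(ratio / max_ratio * bins)
        hist[idx] += weight
    total = sum(hist)
    if total == 0:
        return 1.0                       # no usable pairs: assume equal
    # weighted median: the bin where half the accumulated weight is reached
    half, acc = total / 2.0, 0.0
    for idx, w in enumerate(hist):
        acc += w
        if acc >= half:
            return (idx + 0.5) * max_ratio / bins
    return 1.0

# Two synthetic shots 2 stops (a factor of 4) apart:
under = [100, 200, 400, 800]
over = [v * 4 for v in under]
print(round(relative_exposure(under, over)))  # -> 4
```

Taking the median of the distribution, rather than a plain average of the ratios, makes the estimate robust against the outlier pairs produced by noise and near-clipped values.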
Are you planning to release the source code when you hit version 1.0?
Hi Guillermo!
I am looking forward to using your promising software! After studying the English article, I have a question:
The white balance should be the same in both shots. Would it be sufficient to adjust this post-capture, in the raw state? White balance is the one exposure parameter that I prefer the camera to set automatically, since I cannot see how I could do it better.
Despite my zero Spanish, I also tried to extract some information from the tutorial, based on the pictures. Concerning figs. 4, 5 and 6: does Zero Noise require you to define the white balance based on an area in the actual picture?
Kind regards - Hening.
I cannot quite see how this solves the problem. Make the patch cover the whole image - which one? The zero or the +4? The problem (with the camera AWB) as I see it is that the light may shift between the 2 shots - so I thought one could adjust one of them to the other post-capture, in the raw state, before merging?

Any of the shots is OK, but the most exposed one is recommended for setting the patch, since it will have less noise and the WB calculation will be more accurate. Don't worry about its blown areas, since they do not participate in the WB calculation. Once the multipliers have been calculated, they will be applied to the two shots, so WB will be fine.
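The patch-based white balance step can be illustrated like this. Averaging each channel over a neutral patch and normalising to green is a common way to derive multipliers; this is my sketch of that idea, not ZN's implementation:

```python
# A sketch of deriving white-balance multipliers from a neutral patch
# (an illustration of the step described above, not Zero Noise's code):
# average each channel over the patch, then scale red and blue so that
# all three channels match green.

def wb_multipliers(patch):
    """patch: list of (r, g, b) linear triplets sampled from a neutral
    area. Returns (r_mul, g_mul, b_mul) with green as the reference."""
    n = len(patch)
    r = sum(p[0] for p in patch) / n
    g = sum(p[1] for p in patch) / n
    b = sum(p[2] for p in patch) / n
    return (g / r, 1.0, g / b)

# A warm-looking grey patch: red reads high, blue reads low.
muls = wb_multipliers([(2000, 1000, 500), (2000, 1000, 500)])
print(muls)  # -> (0.5, 1.0, 2.0)
```

As the answer above says, the same multipliers are then applied to both shots, so their white balance agrees before blending.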
PS: BTW, we have already found someone who can build a DNG RAW file from RAW data. A version of Zero Noise with noise-free 16-bit DNG output is near: the user feeds in several RAW files and the program mixes them into a noise-free RAW file that anyone can develop and/or tone map using their favourite software.
Any idea when you'll have this new version ready? I can't wait!

In the last two days I adapted Zero Noise to work with undemosaiced RAW data, and it worked fine (in fact it's even easier than making it work with demosaiced data). I have also improved the routines that calculate the relative exposure between the shots.
I have just sent the resulting RAW blend (18MB TIF) (http://www.guillermoluijk.com/download/rawvirtual.tif) (it's linear data; it will display very dark if not assigned a linear gamma=1.0 profile in PS) to a colleague to embed into a 16-bit DNG file, just to offer the resulting noiseless high dynamic range RAW file here for download.
It will be a Zero Noise HDR virtual RAW containing a lossless unprocessed blending of two Canon EOS 350D RAW files shot 4 stops apart:
─ Standard non-demosaiced DNG
─ Noise-free shadows
─ 16-bit equivalent bit depth
─ 12 stops of real dynamic range
This was the scene:
(http://www.guillermoluijk.com/tutorial/hdr/resultadolite6.jpg)
And these are the RAW histograms of the two original files and the resulting virtual RAW:
(http://www.guillermoluijk.com/article/virtualraw/histos.gif)
The program will take a bit longer to be ready.
BR
Given that the present TIFFs need an extremely strong curve, what would the advantage of making a DNG be, given that unless you can apply a specific gamma curve (not in ACR at any rate), the image will be far too dark to work with in the raw converter? I'm very excited by the idea of a DNG output from this incredible program, but only if the gamma can be programmed to show up in the raw converter so we can use it.

The image will not be far too dark to work with in the RAW converter. In fact, in this example the exposure control of ACR gives more exposure correction than you really need (in Perfect RAW (http://luminous-landscape.com/forum/index.php?showtopic=28791) we set an exposure correction of up to +8EV, truly usable with real 16-bit RAW files like the ones Zero Noise will produce).
Man, I can't wait! Please let us know when it's ready.

OK, I have finally been given the RAW file; please download it from ZERO NOISE VIRTUAL RAW (http://www.guillermoluijk.com/article/virtualraw/index_en.htm) (English; the Spanish original, with an online translation icon at the left, is at http://www.guillermoluijk.com/article/virtualraw/index.htm). Find the links to the original RAW files, as well as the resulting virtual noiseless RAW, after Fig. 9.
The result is outstanding.
You obviously have a strong understanding of digital imaging theory and application. Given your impending release of ZERO NOISE VIRTUAL RAW, do you have any suggestions for tone mapping which will avoid the "HDR cartoon" look? For example, when you open the virtual raw file that you have provided on your website, how do you go about making it aesthetically pleasing to your eye?
Given that the present TIFFs need an extremely strong curve, what would the advantage of making a DNG be, given that unless you can apply a specific gamma curve (not in ACR at any rate), the image will be far too dark to work with in the raw converter? I'm very excited by the idea of a DNG output from this incredible program, but only if the gamma can be programmed to show up in the raw converter so we can use it.
BTW there is a Linux version (http://www.guillermoluijk.com/software/zeronoise/index.htm) working 4 times faster than the original, and a DNG output version is finally on the way.
Are you guys working on updating the Windows version as well?

Definitely, I'd like a DNG output version. I don't like the way DCRAW develops the RAW files from my 5D, so I want to be able to fuse them in the undemosaiced domain and then take them noise-free into ACR.
Is it working for Mac (64-bit)?

Someone managed to run the Linux version on a Mac under Ubuntu, but there is no native Mac version yet.
I've updated the images link...

I think the best thing is to download this TIFF file (http://www.guillermoluijk.com/download/hdr.tif) from the tutorial and look at the mask layer. What you need to achieve is a mask that is dark in the highlights of the scene, pure white in the deep shadows of the scene, and blurred to preserve local microcontrast.
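That kind of mask can be sketched in code: invert the luminosity of the frame exposed for the highlights, then blur it so local microcontrast survives. This is only an illustration of the description above (on a single row, with a simple box blur), not the tutorial's exact recipe:

```python
# A sketch of the blend mask described above (an illustration, not the
# tutorial's exact recipe): invert luminosity, then box-blur the result.
# The mask ends up near 0 over highlights and near 255 over deep shadows.

def blend_mask(lum_row, radius=2):
    """lum_row: one row of 0..255 luminosity values from the frame
    exposed for the highlights. Returns the inverted, blurred row."""
    inverted = [255 - v for v in lum_row]
    blurred = []
    for i in range(len(inverted)):
        lo, hi = max(0, i - radius), min(len(inverted), i + radius + 1)
        window = inverted[lo:hi]
        blurred.append(sum(window) // len(window))
    return blurred

row = [250, 250, 128, 10, 10]   # bright lamp -> midtone -> deep shadow
print(blend_mask(row, radius=1))
```

The blur radius controls how smooth the transition between the two source images is; too little blur leaves hard seams, too much destroys the local contrast the mask is supposed to preserve.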
Regards
Hi Gluijk,
If I want to use TuFuse to do the tone mapping, do I need to have 5 images (1-2-3, 2-3-4, 3-4-5) and use the ZN technique to produce 3 images? ...or is there any way to produce the 3 images from the 3 captures?
How do you calculate the WB patch in ZN? I think you set DCRAW's -r parameter for it.
Thanks a lot, I've tried it with enfuse and it works perfectly. Do you think we should get a better result with 0EV 1EV 2EV 3EV 4EV? ...difficult to see the difference!

If it's difficult to see the difference, you have already given the answer.
Can you shed some light on this please?
I had to sign up for this forum in order to say thanks!
This is an amazing process and a wonderful application.
I hope you make a lot off of it and share it with the world.
I do have some questions since I'm still learning.
When you say 4 stops, do you mean via the shutter speed? I have a Canon 20D and I stopped it down in Av mode with 4 clicks on my dial. Is this right?
I'm a little fuzzy on the lingo.
Thanks,
Nathan Soliz.
If your camera is set to the default settings, each click should be 1/3 of a stop, so you'd need 12 clicks.
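The click arithmetic above is simple enough to spell out (the 1/3-stop step is the camera default mentioned in the answer):

```python
# The click arithmetic above: with the default 1/3-stop exposure steps,
# a 4-stop separation between the two shots is 12 clicks.

def clicks_needed(stops, stop_per_click=1 / 3):
    """Number of dial clicks for a given exposure difference in stops."""
    return round(stops / stop_per_click)

print(clicks_needed(4))  # -> 12
```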
Is there a link to the current software?
and nothing for MAC :-(

Any Mac user who wants to run ZN can do it; it just takes some effort on his part.
In this thread (http://photography-on-the.net/forum/showthread.php?t=775795) a Mac user wrote a mini-tutorial about ZN using the Linux version (much faster and watermark free):
Zero Noise - The Basic Workflow
So here is Zero Noise in Linux, running on my Mac Pro under VMWare Fusion:
(http://kirkt.smugmug.com/Photography/Photo-of-the-Day/ZNWindow/700282284_HwTt8-XL.jpg)
(...)
Regards
Hello, does it work on Windows 7 64-bit? Thanks. :)

I use it on (gasp!) Vista 64 with no issues - chances are it will work fine on Win7 64.