Pages: « 1 [2] 3 4 ... 10 »
 on: Today at 02:09:50 PM 
Started by Jeff-Grant - Last post by Erland
Andrew, I agree. The iO, although convenient, does degrade your measurements. I have increased my patches to 9.5 x 9.5 mm.

I averaged 6 readings rather than trusting a single measurement, but next time I think I will create a chart for the regular i1Pro 2 and measure it by hand instead. I think it will show a lower deltaE than the iO.

 on: Today at 02:07:10 PM 
Started by ErikKaffehr - Last post by shadowblade
Just to make sure: there is a difference between flare (often colorful, localized reflected hotspots) and veiling glare. It's not just semantics. The veil is omnipresent; it is not equally strong everywhere, but some of it covers the whole frame. The same happens as our eyes age and we develop some level of glaucoma. Where the image is diffused before it is fully formed, the glare acts as a contrast reduction (worse where directly illuminated by a bright light source).

From a mathematical point of view, uniform sensor-wide glare isn't a problem and can even help you deal with limited dynamic range, so long as the sampling (i.e. bit depth) is fine enough that you don't run into problems with posterisation.

Consider this, for argument's sake. You have a scene that, at a 1 s exposure, gives you 16384 photons in its brightest pixel and 2 photons in its darkest, for a scene DR of 13 stops. Your sensor has a full-well capacity of 18000 photons and a noise floor of 8 photons, for a sensor DR of 11-and-a-bit stops. Naturally, you can't capture the entire scene in one shot.

Let's say that you have glare that adds 200 photons to each photosite. Your brightest pixel now receives 16584 photons and your darkest one 202 photons. The dynamic range of the scene, as seen by the sensor, is now around 6.4 stops - easily capturable by the sensor. Since your sensor has 14-bit output, the output is now distributed over around 16336 luminosity levels instead of 16384 levels - hardly a significant decrease in levels and unlikely to cause posterisation. This is because the brightest stop contains half the luminosity levels, the next brightest half of the remaining, and so on. The top six stops therefore contain 98.44% of the total levels available; the rest of the levels, the shadows, are all crammed into the remaining 1.56%.
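
The arithmetic above can be sanity-checked with a short sketch (the numbers and the helper name are just this example's, not from any real camera):

```python
import math

def stops(bright: float, dark: float) -> float:
    """Dynamic range, in stops, between two photon counts."""
    return math.log2(bright / dark)

glare = 200  # photons of uniform veiling glare added per photosite

print(stops(16384, 2))                  # scene DR without glare: 13.0 stops
print(stops(16384 + glare, 2 + glare))  # with uniform glare: ~6.4 stops
print(1 - 2 ** -6)                      # share of levels in the top 6 stops: 0.984375
```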

Of course, glare isn't completely even across the frame, and that is the real problem. The hypothetical perfectly even glare considered here wouldn't actually be one.

Also, I think you mean cataracts rather than glaucoma.

Really, the method is much more robust than you give it credit for. Hans Kruse has also discovered that method and posted results in a number of threads. It is usually only small patches of the lightest areas that need to be blended in, and they only rarely coincide with moving detail. It can happen, but it's rarer than you suggest; the exception rather than the rule.

It certainly doesn't happen in every frame. But, when it does happen (which, while not the majority of shots, is certainly common enough to cause problems), it's one of the most annoying things to try to deal with.

That's (fortunately) not how it works. DR is defined as the number of photons at the saturation point divided by the noise level at a low, or even zero, exposure - just the read noise. What may seem like a full well capacity of 16000 actually took 4x as many photons if we shoot at base ISO (after all, we want to avoid noise; we're not shooting action). Canon cameras can benefit from relatively lower read noise by boosting ISO a bit, but for the lowest noise they too should use base ISO if shutter speed is not an issue.

So that's 64000 photons for each shot we want to average, which stays 64000 on average. The read noise of e.g. 8 (no photons, just the standard deviation of the noise) is reduced as we average more and more shots. Two shots have 1/Sqrt(2) of the noise, so 8/Sqrt(2)=5.66; 8 shots would have 8/Sqrt(8)=2.8. So that would be log(64000/2.8)/log(2)=14.5 stops of DR, if we want to go through the trouble of averaging instead of blending (the best parts of) images.
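
The averaging arithmetic above can be written out as a small sketch (the figures are this post's example values, not any particular sensor's):

```python
import math

FWC = 64000        # photons at saturation per shot at base ISO (example figure)
READ_NOISE = 8.0   # standard deviation of the read noise, in photon equivalents

def dr_after_averaging(n_frames: int) -> float:
    """DR in stops: averaging n frames divides the read noise by sqrt(n)."""
    return math.log2(FWC / (READ_NOISE / math.sqrt(n_frames)))

print(dr_after_averaging(1))  # single frame: ~12.97 stops
print(dr_after_averaging(8))  # eight averaged frames: ~14.47 stops
```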

Let's say one shot has a maximum of 16384 photons per photosite, with an average of 8 photons added by electronic noise, with a distribution of 8 (i.e. the equivalent of 0-16 photons added per pixel). This puts the saturation point (16384) 11 stops above the noise floor (8). Let's just say that the distribution of noise is equal within that range - that is, the same number of pixels receive 1 'photon' of read noise as receive 6, 8 or 16 (in reality it would approximate a normal distribution curve, but that would just complicate the mathematics, and this will serve just as well for argument's sake).

Now, let's say you average four frames. You now have a maximum of 65536 photons per photosite, but you've also added an average of 32 photons of noise per photosite, with a distribution of 32. (The actual distribution curve would be much tighter: far more pixels sit close to 32 noise in the combined image than sit close to 8 noise in a single image, giving a bell-shaped curve rather than the equal distribution of the single frame; in an actual situation, where the read noise of the single frame is also a bell curve, you'd get a much tighter bell still.) Your ceiling is still only 11 stops above the average noise floor.

Of course, this all changes if you set the black point at the average noise floor - i.e. produce the image with 'white' being full well capacity and 'black' being the noise floor. This would mean subtracting 8 from each single image, or 32 from the four combined images. In other words, your scale would go from 0 to 16376 for a single image (with noise present from 0-8: 50% of pixels receiving 0 and the rest evenly distributed between 1-8), or from 0 to 65528 in the combined image (with noise present from 0-32: 50% of pixels receiving 0 and the vast majority receiving just 1-8, with occasional pixels receiving more, due to the tighter bell curve). Therefore, the saturation point in the single image would be around 11 stops above the noise floor, while the saturation point in the combined image would be almost 13 stops above it, due to the tighter bell curve.
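
The toy model above (uniform read noise, black point at the average floor) can be simulated numerically. This is only a sketch of that simplified model, not of a real sensor, and note that under the standard sqrt(N) argument the gain for four frames works out to about one stop rather than two:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # simulated photosites

# Single frame: read noise uniform on 0-16 'photons' (mean 8), per the simplification above.
single = rng.uniform(0, 16, n)
# Sum of 4 frames on the combined 0-65536 scale: each frame adds its own noise (mean 32 total).
summed = rng.uniform(0, 16, (4, n)).sum(axis=0)

# Subtract the average floor (black point) and compare the residual spread to each ceiling.
print(np.std(single - 8))   # ~4.6 on a 16384 scale
print(np.std(summed - 32))  # ~9.2 on a 65536 scale - relatively twice as tight
```

Relative to its ceiling, the combined image's noise is half the single frame's (9.2/65536 vs 4.6/16384), i.e. one extra stop per quadrupling of frames in this model.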

OK, I just shot that part of my own argument. But I was merely speculating whether there would actually be an improvement in DR - hadn't actually done the calculations to prove or disprove it, until forced to! Looks like it comes down to the fact that the 'zero' point is set at the average noise floor rather than an absolute 'zero' signal - when done that way, there is indeed an improvement in DR.

 on: Today at 02:06:26 PM 
Started by ErikKaffehr - Last post by BJL
I got the recent AVSforum newsletter that shows this setup...

I wonder how many can afford that setup and whether there's a sustainable market for such huge screens. ...
And yet the first thing I note is that the comfy chairs are far more than one screen _width_ from the screen, and so barely able to distinguish between HD and 4K, let alone between 4K and 8K!  So it is probably a good thing that they are obsessing about color accuracy rather than definition*.

* At least video people know the difference between definition (number of lines or pixels or such) and resolution (lines per mm, pixels per mm, etc.); I wish still photographers would use that terminology more often.
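
The screen-width claim can be roughly checked with a little trigonometry, assuming the common rule-of-thumb visual acuity of about 60 pixels per degree (1 arcminute); the helper below is mine, not from the newsletter:

```python
import math

EYE_LIMIT_PPD = 60  # ~1 arcminute acuity, a common rule-of-thumb figure

def pixels_per_degree(h_pixels: int, widths_away: float) -> float:
    """Horizontal pixels per degree of visual angle, seated widths_away screen-widths back."""
    angle = 2 * math.degrees(math.atan(0.5 / widths_away))
    return h_pixels / angle

print(pixels_per_degree(1920, 2))  # HD at two screen widths: ~68 ppd, already past the eye's limit
print(pixels_per_degree(3840, 2))  # 4K at two screen widths: ~137 ppd - the extra definition is invisible
```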

 on: Today at 01:53:28 PM 
Started by digitalcameraman - Last post by digitalcameraman

This is the prime opportunity to test drive medium format digital camera systems from Phase One, Mamiya Leaf, ALPA, Cambo and Leica in one workshop at an incredible location full of great photographic opportunities. Get hands-on setup and instruction for a variety of digital camera systems, tripods and portable lighting systems from Chris Snipes, who has over 20 years' experience with the best digital cameras in the world. The Silo City workshop is a unique learning experience: all attendees will receive advanced training on technical camera systems AND be granted access to premier loaner equipment to use during the workshop. The workshop offers a balance of hands-on instruction, location shooting and Capture One Pro training. Don't miss this unique chance to work with and learn from an outstanding team at an exclusive-access location.

Read Kevin Raber's review from when he and Michael attended last year. He will be back again this year, too.

 on: Today at 01:50:35 PM 
Started by seeseephotographer - Last post by seeseephotographer
Hasselblad 55 mm extension tube. Works as new, great shape. Please see pics. $45.00 OBO

 on: Today at 01:50:35 PM 
Started by churly - Last post by maddogmurph
Yeah it's a nice shot, but maybe that's because I'm in California dreaming of snow and water... This sun gets old.

 on: Today at 01:42:05 PM 
Started by Chris Calohan - Last post by thierrylegros396
Well done and very good conversion to B/W.

Have a Nice W-E.


 on: Today at 01:34:28 PM 
Started by seeseephotographer - Last post by seeseephotographer
Black 503cx includes rapid winder, clean focusing screen, viewfinder, front and rear body caps. Works as new. Carefully maintained by Pro Camera: calibrated mount, mirror and screen. No scratches, wear or dings on the body; the removable rapid mount plate has scratches. Please look at pics. $750.00 OBO

 on: Today at 01:30:56 PM 
Started by David Grover / Phase One - Last post by BartvanderWolf
I am sorry - which __specific__ stock profile?

Capture One comes with a 'stock' profile, sometimes several, for each camera that it supports, and it isn't a bad basis to start from.

And what delta do you expect to achieve, for example, when trying to adjust the colors of an SG target starting from a stock profile? (Just as an example - his real object is not that target, of course; it is just that we can use it as a test because raws are available.)

So you suggest he get a spectrophotometer and measure the colors of his objects - say he has 20-30-40 colors there - and you believe he can manually tune all of them with the color editor in C1 without one adjustment screwing up the next...

I get the impression that you think spectrally accurate profiling is the only way to get 'good' color. Sure, it helps to start from a good starting point, but the whole process is too different from how human color perception works for it to be the end of the work we have to do. Need I mention metamerism and color constancy, not to mention simultaneous color contrast, and illumination levels and spectra?

I'd assume he has controlled light if he does something close to reproduction work, no?

One can assume that, and maybe it's true.


 on: Today at 01:26:37 PM 
Started by dwswager - Last post by NancyP
This tire-kicker usually asks questions in order to learn something. I am at the "meh" stage as a photographer. Beginners can be so amazed that anything turns out at all that they overestimate their photography skills. Then they try out various techniques (this includes the dreaded "HDR phase"). Then reality sets in: they look over their portfolio and decide they don't know much of anything, because the photos are "meh". :P

Trying to get out of "meh" into artistic...   :)
