Say I take a Leica S2 with a 70mm f/2.5 and compare shots with a Nikon D3x with a 50mm f/1.4, how will the DOF compare?
At what f-stop will the Nikon DOF be approximately equal to the Leica at f/2.5?
Will this change if Nikon comes out with a D4x at, say, 30 MP?
Is there some kind of multiplier I can use across focal ranges to get some idea of the DOF similarities/differences?
Christopher
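For what it's worth, the "multiplier" Christopher asks about is often approximated by the ratio of the sensor diagonals. Here is a minimal sketch; the sensor dimensions are nominal figures (assumptions on my part), and the rule itself is only a rough one:

```python
import math

def diagonal(w_mm, h_mm):
    """Sensor diagonal in mm."""
    return math.hypot(w_mm, h_mm)

# Nominal sensor sizes (assumed): Leica S2 45x30 mm, Nikon D3x ~36x24 mm
leica_s2 = diagonal(45.0, 30.0)   # ~54.1 mm
nikon_ff = diagonal(36.0, 24.0)   # ~43.3 mm

crop_ratio = leica_s2 / nikon_ff  # ~1.25

# Same framing and same print size: for similar DOF the smaller format
# needs a wider aperture by roughly the diagonal ratio.
leica_f = 2.5
nikon_equiv_f = leica_f / crop_ratio
print(f"Nikon f-stop for similar DOF: f/{nikon_equiv_f:.1f}")  # ~f/2.0
```

Scaling by the focal-length ratio of the two actual lenses (70/50 = 1.4) instead gives roughly f/1.8, so any single multiplier is a rule of thumb, not an exact answer.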
Thanks!
However, I don't think any of the cameras there are MF?
DOF depends on a large variety of factors including the micron size of the sensor pixels, print size and intended viewing distance (if you wish to go based on a print size rather than 100% pixel sharpness), type of sensor (CMOS with an AA filter or CCD without), use of tilt (if any*), and sensor size.
If you work with a good dealer you won't have to trudge through such questions on your own :-).
Doug Peterson (e-mail Me) (doug@captureintegration.com)
__________________
Head of Technical Services, Capture Integration
This is very interesting, especially about how DOF depends on print size. No matter how large or small I print my images, I don't see any change to the DOF. Can you elaborate more on how you find DOF to depend on print size?
You can "simulate" this when you downsize your photos on the monitor. Let's say your image has a wide DOF from near distance to infinity, but the very foreground is actually a bit soft. Now when you downrez your image to, say, 32%, the foreground might appear sharp (the effect is immediately visible when you downsize your 39MP monster to 800x600 pixels for web purposes).
This is very interesting, especially about how DOF depends on print size. No matter how large or small I print my images, I don't see any change to the DOF. Can you elaborate more on how you find DOF to depend on print size?
Also, I have taken identical image captures from a 36x48 CCD back with 9 micron pixels (Hasselblad CF22) and when compared to identical images taken with the same camera and lens but with a 36x48 CCD back having 7.2 micron pixels (Sinar e75LV), there is no difference in DOF either. You probably have more experience with many other digital backs. So, can you elaborate on your experience where you find the DOF to depend on the pixel size and not just the sensor size?
From what I can see in my own images, the DOF seems to depend only on the apparent object size relative to the sensor size. It would be great if you could explain better how these other print and sensor issues also affect the DOF.
I understand your argument, but I still don't think you can claim the DOF has really changed, only that your perception of it perhaps, and even then it is a stretch of meaning. If DOF truly changes with print size, then it should be possible to take any image captured at F1.0 and print it such that it appears to have the DOF as if captured at F22.
If there is a moderate chance of having the images re-used for large prints then obviously it needs to be sharp at 100% pixel view.
Exactly! When you enlarge the image 300% and print it at 300ppi, then the actual pixel size (i.e. 100%) of your original (unenlarged) image in fact represents the real outcome on your ~100ppi monitor quite well (of course it looks "different", but it gives you a good idea about the appearance of the print re DOF).
Hi,
DoF is not really related to format; there are four parameters: Circle of Confusion (CoC), focal length, aperture and focusing distance. The image size comes in through the CoC: with a larger format you enlarge the sensor image less for a given print size, so you may accept a larger CoC.
My best suggestion is to ignore DoF. Focus on what is supposed to be sharp and stop down, hoping for the best. Avoid stopping down beyond f/16 if possible, because you start losing sharpness massively due to diffraction.
You may check:
http://echophoto.dnsalias.net/ekr/index.php/photoarticles/29-handling-the-dof-trap
http://echophoto.dnsalias.net/ekr/index.php/photoarticles/49-dof-in-digital-pictures
Best regards
Erik
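Erik's four parameters plug directly into the standard thin-lens approximations. A minimal sketch via the hyperfocal distance (the formulas are the common textbook ones; the example lens, aperture and distance are arbitrary illustrations):

```python
def dof_limits(focal_mm, f_number, coc_mm, distance_mm):
    """Near/far limits of acceptable sharpness (thin-lens approximation)."""
    # Hyperfocal distance for the chosen CoC
    H = focal_mm**2 / (f_number * coc_mm) + focal_mm
    near = distance_mm * (H - focal_mm) / (H + distance_mm - 2 * focal_mm)
    if distance_mm >= H:
        far = float("inf")   # focused at/beyond hyperfocal: sharp to infinity
    else:
        far = distance_mm * (H - focal_mm) / (H - distance_mm)
    return near, far

# Example: 50 mm at f/8 focused at 3 m, CoC 0.030 mm (a common
# full-frame assumption)
near, far = dof_limits(50, 8, 0.030, 3000)
print(f"near {near/1000:.2f} m, far {far/1000:.2f} m")  # ~2.34 m to ~4.19 m
```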
As tho_mas says I'm dealing with "perception" as my definition (the ability to - in a practical sense - distinguish between the levels of sharpness of two points).
Depending on how you define it you have
- DOF of the raw file
- DOF of a given print
Getting away from extremes...
With a very high resolution file, such as a four image stitch from a P65+ on a tech camera it is very possible that an 11x14 print (even when viewed close) will show the entire field of view as equally in focus (DOF from front to back) but a 30x40 will show the front of the image is just slightly out of focus compared to the detail at mid-range (DOF does not quite extend front to back).
It is the above scenario that originally piqued my interest in the more-complicated-than-I-was-taught-in-school topic of Depth of Field and Sharpness.
In my experience most photographers define DOF as "where it's sharp at 100%" on the monitor. However, as the size of raw files goes up, I would urge us to consider more the application for which the file is being used. On shoots for the web* it is perfectly acceptable to use the digital loupe in Capture One / Aperture / LR etc. at 25% to see if it's "sharp" rather than examining 100% detail. If the subject is slightly soft at 100% in an 80 megapixel raw file it will still appear indistinguishable from something "sharper" when processed at 800x600 for e-commerce.
*If there is a moderate chance of having the images re-used for large prints then obviously it needs to be sharp at 100% pixel view.
Doug Peterson (e-mail Me) (doug@captureintegration.com)
__________________
Head of Technical Services, Capture Integration
Phase One Partner of the Year
Leaf, Leica, Cambo, Arca Swiss, Canon, Apple, Profoto, Broncolor, Eizo & More
National: 877.217.9870 | Cell: 740.707.2183
Newsletter (http://www.captureintegration.com/our-company/newsletters/) | RSS Feed (http://www.captureintegration.com/2008/08/11/rss-feeds/)
Buy Capture One 6 at 10% off (http://www.captureintegration.com/phase-one/buy-capture-one/)
Hmmmm,
I am not sure I really understand this.
Thing is, I very often use my Nikon at f/9-11, which I understand is approaching the diffraction limit. Does that make a swap for a Leica sort of pointless, since it would have to be used at f/22 to get the same DOF, and thus diffraction will eat up the improvement in number of pixels?
Assume the same print size and viewing distance.
Christopher
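Christopher's diffraction worry can be sanity-checked with the Airy-disc diameter, roughly 2.44 x wavelength x f-number. A back-of-envelope sketch; the 550 nm (green) wavelength and the pixel pitches are my assumptions:

```python
# Rough diffraction check: Airy disc diameter vs pixel pitch.
WAVELENGTH_UM = 0.55  # assumed green light, 550 nm

def airy_diameter_um(f_number):
    """Approximate diameter of the diffraction blur spot in microns."""
    return 2.44 * WAVELENGTH_UM * f_number

# Approximate pixel pitches (assumptions): D3x ~5.9 um, S2 ~6.0 um
for label, f_number, pitch_um in [
    ("Nikon D3x at f/11", 11, 5.9),
    ("Leica S2 at f/22", 22, 6.0),
]:
    d = airy_diameter_um(f_number)
    print(f"{label}: Airy disc {d:.1f} um vs {pitch_um} um pixels")
```

At f/22 the blur disc (~30 um) spans roughly five pixels, so a good part of the extra resolution would indeed be lost, which supports the concern.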
To prevent an unhealthy and possibly drawn out thread, I will cut to the heart of the issue. What you are describing as "DOF" is known as "viewing resolution". Viewing resolution is a perceptual quantity that indeed depends on print size, viewing distance, pixel size and sensor size. However, DOF does not depend on any of these things, except the sensor size as it relates to the object size as measured at the sensor.
ah, okay, I get it.
Yes, I was only referring to perception...
Your f1.0 image will look like captured at f22 when you print it at stamp size :-)
Call it what you want, but if you want to present an image with sharp detail from front to back (as the OP does) then pixel size, print size, viewing distance, and sensor size all matter :-).
No, this is all wrong. You cannot arbitrarily change the definition of DOF, just as I can't arbitrarily decide to change the definition of a circle. DOF already has a universally accepted definition that can be found in most textbooks.
My only point in this thread is to please not confuse these terms, especially to someone new here. There is already so much confusion on the internet with regard to the understanding of DOF.
DOF as a fundamental calculation requires the choice of a numerical CoC. What number you choose for a CoC is rooted in print size, viewing distance and the degree of enlargement required from the native sensor/format size. To say that DOF does not depend on those things is incorrect, because you cannot calculate DOF without making underlying assumptions as part of your choice of a CoC.
Because the mathematical calculation of DOF requires a CoC, which in turn requires underlying assumptions about the nature of the viewing conditions, at its heart DOF is a perceptual measurement. You can't say that there is some underlying true DOF and that there is a separate "viewing resolution", since essentially they are one and the same.
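Doug's point that the chosen CoC changes the computed DOF is easy to show numerically. A sketch using the common first-order total-DOF approximation (lens, aperture and distance values are arbitrary illustrations):

```python
def total_dof_mm(focal_mm, f_number, coc_mm, s_mm):
    """Total DOF ~= 2*N*c*s^2 / f^2 (valid when s is well below hyperfocal)."""
    return 2 * f_number * coc_mm * s_mm**2 / focal_mm**2

# Same lens (80 mm), same aperture (f/8), same distance (2 m);
# only the CoC assumption changes.
for coc in (0.030, 0.015):  # e.g. an 8x10-print criterion vs a stricter one
    print(f"CoC {coc} mm -> total DOF {total_dof_mm(80, 8, coc, 2000):.0f} mm")
```

Halving the CoC halves the computed DOF (300 mm vs 150 mm here), with nothing about the lens or scene changed.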
Viewing Resolution and DOF are NOT the same.
Yes, DOF requires knowing a CoC. Yes, print resolution also requires knowing a CoC. However, these CoC values are not necessarily the same. The CoC of my camera will depend on the pixel size of its sensor, while the CoC of my print will depend on the printer ink drop size and its spreading onto whatever substrate I choose to print on. However, saying that the DOF in my photographs is somehow different due to the CoC of my printer ink droplets is ludicrous.
Circle of Confusion is generally defined as the largest blur spot that will still be perceived by the human eye as a point. We use this CoC number to extrapolate DOF based upon what amount of optical defocus is permissible in a given image for a given print size and viewing condition, such that it will still appear acceptably "sharp".
CoC is based upon a set of environmental factors... What size of print are you viewing? What distance will you be viewing it at? What is the underlying visual acuity of the viewer's eyesight? These factors determine the physical size of a blur spot that will be perceived as being a point on the final print. Once you know that physical size of that blur disc, you can extrapolate it back to the negative/sensor by looking at the degree of enlargement from the sensor to the final print.
The underlying point that I believe you are missing is that the entire concept of DOF and CoC is rooted in your perception of the image in a given set of viewing conditions. Change any of those viewing conditions (print size, viewing distance, whether you're wearing your glasses or not) and you change the CoC because you now perceive a different size blur disk as being a point source. That change in CoC then changes your DOF.
DOF is perceptual, not absolute.
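Sheldon's extrapolation from viewing conditions back to the sensor can be sketched in a few lines. The one-arcminute visual-acuity figure, the viewing distance and the print sizes below are all assumptions chosen for illustration:

```python
import math

ARCMIN_RAD = math.radians(1 / 60)  # assumed ~1 arcminute visual acuity

def sensor_coc_mm(view_dist_mm, print_width_mm, sensor_width_mm):
    """CoC at the sensor implied by a print size and viewing distance."""
    # Smallest blur disc the eye can resolve on the print at this distance
    print_coc = view_dist_mm * math.tan(ARCMIN_RAD)
    # Extrapolate back to the sensor through the degree of enlargement
    enlargement = print_width_mm / sensor_width_mm
    return print_coc / enlargement

# Full-frame sensor (36 mm wide), both prints viewed at 500 mm:
small = sensor_coc_mm(500, 254, 36)   # 10-inch-wide print, ~0.021 mm
big   = sensor_coc_mm(500, 1016, 36)  # 40-inch-wide print, ~0.005 mm
print(f"{small:.3f} mm vs {big:.3f} mm")
```

The bigger print at the same viewing distance demands a much smaller sensor CoC, hence the smaller computed DOF. The ~0.021 mm figure is close to the classic full-frame value of 0.030 mm, which is built on slightly more lenient assumptions.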
Hi,
I didn't do the math, but I guess about f/1.7. DoF will be the same on a D4X regardless of MP, although it may get a bit more demanding. Anyway, the difference between 30 MP and 24.5 MP is pretty negligible; it's about 10% on a linear scale.
You can assume something like one stop difference.
Focusing is quite critical, BTW, and you cannot really rely on AF for pinpoint focus, but neither can you rely on your eyes. Live view is probably the best way to achieve dead on focus.
Best regards
Erik
I do not disagree that CoC can be based on a perceptual interpretation of sharpness. My only point is that you cannot mix different CoC values when talking about DOF. The CoC relevant to a print is different from the CoC of a digital image. For example, the DOF of my captured images is solely determined by how much I stop down a lens of a given focal length with a given sensor. The DOF of my captured images does not suddenly and magically change from printer to printer just because the size of the inkjet droplet changes, which is what determines the CoC of the final print. While the CoC of my camera sensor influences the DOF of my image, it is the CoC of my printers that influences their actual viewing resolutions. And yes, both CoCs can be perceptually determined. However, the bottom line is that the DOF of my captured images never changes; only their viewing resolutions can change, based on the size and nature of the final print.
OK, I think you are in agreement with what I have said. Indeed, a CoC is involved in every aspect of viewing. This means that there is an optical CoC associated with your eyes/lens, a pixel-based CoC associated with the sensor that captures the image, and the ink-nozzle-based CoC of the printer. This is exactly what I have been trying to point out. DOF is strictly an optical term associated with the first one, as it involves a 3D context of distance. Once you introduce film or a sensor, the use of DOF is no longer correct, as you are referring to the 2D rendering of a 3D scene. Same with the printer. In both of these 2D contexts, there is no distance before/after any plane of focus, and the concept of DOF has no meaning. Rather, it is the concept of resolution, whether it be the capture resolution of the sensor or the viewing resolution of the print, that is now meaningful.
All I am trying to say is that DOF ( a 3D concept) does not depend on the print (a 2D concept) in any way. It is actually the viewing resolution that is determined by the size and nature of the print.
DOF absolutely depends on the print and the act of viewing the 2D image, it's just that the answer is expressed as a three dimensional measurement of the subject as it existed in the real world at the time you took the photo. Without the print or monitor, DOF does not exist.
Sheldon is absolutely correct, and it's fair to say that many people get a bit baffled by DOF discussions.
DOF is an illusion, it's merely a region of acceptable sharpness, whats actually acceptable is very subjective.
Sheldon, this is just not true. DOF exists, irrespective of whether or not a print or monitor exists. It is entirely an optical phenomenon, and only an observer needs to exist.
Nick, the fact that DOF is subjective is not what is under debate here. Sheldon (and Doug) insist that DOF depends on the print size, which is simply not true. In fact, if we refer to the actual documentation by Phase One for their 645DF camera (attached here for convenience), Phase One explicitly points out that they provide a DOF Preview button. They do not call it a stop-down button or even an approximate DOF preview button, but refer to it as a DOF Preview button. Furthermore, they explicitly state in the text what the DOF depends on, and it does not depend in any way on the print.
So, you can take one of three positions here. First, the Phase One engineers have no clue about DOF, and the DOF Preview button on their 645DF is actually bogus, since you believe that DOF depends on the print. Second, the Phase One engineers have developed incredible technology for their 645DF camera that somehow lets you see DOF based on your printing size by pushing a button on the camera, i.e., before you have actually decided what size print you intend to make. Or third, the Phase One engineers actually know what they are doing, and DOF is an optical phenomenon that is determined by an observer independently of print size.
I choose the third option, but I respect your right to believe in either of the other alternatives.
I've laid out a clear, comprehensive, and accurate explanation of the issue. Short of typing out large excerpts of Ansel Adams "The Camera" or other photographic texts, I don't know what else to tell you.
David,
Depth in DOF is a dimension, it requires a definition of its boundary. The COC is that boundary.
Cheers,
Bart
Point 1 is right. But there is a real theory behind it, based on human vision.
You may check this: http://www.betterlight.com/downloads/whitePaper/depth_of_field.pdf
If it were true, how would it even be possible to set proper exposure settings on your camera to get the desired DOF that you wish to capture? Obviously, you must have some ability to establish DOF prior to actually capturing the image, irrespective of whether or not a printer or monitor exists.
So, those guys at Phase One better get their act together. What were they thinking when they created a purely digital camera with a DOF button on it? :)
Without debating the issue further, let me just point out that when you look through the eyepiece of an SLR camera (be it Phase One, Canon, Nikon, etc.), what you are actually seeing is the light projected from the scene, through your lens, off a mirror, and onto a small flat piece of frosted glass or plastic called a focusing screen. You are looking at a live two-dimensional representation of the original scene on that frosted focusing screen, magnified for viewing by the pentaprism and eyepiece.
Essentially, it's an itty bitty little monitor... and all the rules about DOF, visual acuity and CoC can be applied to your ability to see a blur spot as a point on that two-dimensional focusing screen. A DOF preview button (something that's been on most SLRs since long before Phase One existed) just lets you see the effect of aperture changes on the focusing screen in real time.
Yes, I agree with this exactly. My only point here has always been that DOF is fixed at point of capture (whether film or digital), and it does not change thereafter. So, the claim that DOF depends on print size is just not true.
If you re-read the Betterlight document I think you will find that it is explained there.
Whilst it is true that the characteristics of the capture are fixed at the time of the exposure, how this subsequently appears to the viewer very much depends on the display size.
Rather than continue to argue, why not try it for yourself? Shoot a pic on a decent-res DSLR, at say f/8 on a medium-wide lens set to infinity, and then make a 4x6 print from it. It will all look sharp from front to back. The region of acceptable sharpness is very wide. Now either make a big print or just look at it enlarged on the monitor. What looked sharp on the small print is now revealed to actually be less so.
This is why DOF is display dependent.
I also understand why you are confused. You are trying to distinguish between some sort of absolute camera DOF and the rest of the process. This is a bit of a red herring as the two cannot meaningfully be considered in isolation as one depends very much on the other.
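Nick's 4x6-versus-big-print experiment can be put in numbers with the hyperfocal distance. The lens choice and the two CoC values below are assumptions picked to illustrate the effect:

```python
def near_limit_at_infinity_mm(focal_mm, f_number, coc_mm):
    """Focused at infinity, everything beyond the hyperfocal distance is
    'acceptably sharp'; the near limit of DOF equals that distance."""
    return focal_mm**2 / (f_number * coc_mm) + focal_mm

# 28 mm lens at f/8 on full frame ("medium wide at infinity").
# CoC values assumed: ~0.030 mm suits a 4x6 print at arm's length;
# ~0.008 mm is a stricter criterion for a large print viewed close.
print(near_limit_at_infinity_mm(28, 8, 0.030) / 1000)  # ~3.3 m
print(near_limit_at_infinity_mm(28, 8, 0.008) / 1000)  # ~12.3 m
```

On the 4x6 everything from about 3.3 m out looks sharp; on the big print viewed close, foreground detail between 3.3 m and about 12.3 m is revealed as soft, which is exactly the experiment described above.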
I am trying to correct misunderstood notions regarding DOF here.
My only point here has always been that DOF is fixed at point of capture (whether film or digital), and it does not change thereafter.
OK, so what is Depth of Field? Give me your definition of what you understand this term to mean.
If you examine any correct definition I think you will find that everything that has been explained to you is in fact the case. Any other interpretation must be derived from a different definition and thus we are talking about different things - probably more semantics than anything else.
I am not creating any unique definition for Depth of Field. I am in total agreement with the correct definition, such as the one found here for example:
http://www.normankoren.com/Tutorials/MTF6.html (http://www.normankoren.com/Tutorials/MTF6.html)
Circle of Confusion is generally defined as the largest blur spot that will still be perceived by the human eye as a point.
Maximum resolution is fixed at the moment of capture, not DOF.
Koren makes no mention of print size one way or the other, but one of his recommended links does:
http://bobatkins.com/photography/technical/digitaldof.html
Once more, DOF is defined and fixed only at the point of capture in the context of rendering a three-dimensional scene onto a two-dimensional medium. All else thereafter is about viewing resolution.
Maximum resolution is fixed at the moment of capture, not DOF.
Bart, you are perpetuating the major confusion between viewing resolution and DOF.
Yes, maximum resolution is also fixed at the moment of capture due to the fundamental pixel pitch of the sensor or grain size of the film. However, this does not change the definition of DOF, even according to your own website references.
It is viewing resolution that is impacted by magnification and viewing distance, not DOF.
I think David already said what he believes the proper definition of DOF is, as indicated in Norman Koren's site on the link he posted.
Yet he is contradicting it with his own statements by disregarding the COC
DOF relies on CoCs to have any meaning, just look at any DOF calculator. CoCs are whatever you define them to be, but always based on a viewer and is usually the size at which a point is seen as a disc. This obviously varies slightly depending on the viewer's visual acuity. The viewer also has to view something for DOF to have any real world meaning, therefore the print/screen is entirely relevant because the more you enlarge the image data the more 'points' that were below the threshold of the defined CoC become revealed as discs.
Printers have a CoC that is associated with how finely its dots can be perceived by a viewer. It is simply used to characterize the perceived resolution of whatever it prints.
CoCs and printers have nothing to do with one another. CoCs are merely an arbitrarily defined threshold of visible detail/sharpness and are entirely independent of any device - it's just a number.
Nick, I am afraid it is not me who has misconceptions here.
A CoC can indeed have something to do with printing (as well as DOF). Just take a look at the Wikipedia page on Circle of Confusion where they say
"The common criterion for “acceptable sharpness” in the final image (e.g., print, projection screen, or electronic display) is that the blur spot be indistinguishable from a point." Here is the link:
http://en.wikipedia.org/wiki/Circle_of_confusion
In addition, even Alpa does not agree with your claim that CoC is independent of any device. Contrary to your belief, Alpa indeed finds CoC to be dependent on the device. For example, just take a look at their spreadsheet here, where they associate various CoCs with different devices:
http://www.alpa.ch/dms/products/tools/alpa-comparable-focal-length-calculator/ALPA_CFL_Calc_V217B.xls
If you go back and think a little harder about the previous example I just gave you (with the printing of the 2D resolution chart of zero DOF), you should be able to see how the confusion between DOF and print viewing resolution arises due to the misunderstanding of the different meanings of their respective CoCs.
Your Wikipedia reference confirms everything we (Sheldon, Bart and myself) have been trying to explain, please consider the following section, in particular point two and three: ...
If I had to summarize what I think the crux of David's argument is, I think he is saying that DOF should be defined as a fixed standard by choosing a CoC that represents the smallest possible size of detail that was captured on the native format (ie. one film grain, one pixel, etc) and calling this the native "DOF" of the capture.
What Nick, Bart, myself, and all the industry definitions I've ever come across agree upon is that Depth of Field is a perceptual measurement. It is, in simple terms, the measurement of what looks sharp and what doesn't. Because this is a measurement of what we see, it is constantly in flux. We want to make it a calculable answer, so we try to nail down each one of the variables with assumptions about standard print size, normal viewing distances, average human visual acuity, etc. until we get a final "number" that is the DOF. But in reality if DOF is perceptual and only exists at the moment we are seeing the image in the real world, it's never going to be a constant. We just take a measurement of a point in time and set of viewing conditions, and calculate the answer from that.
I think everyone in this conversation has a good grasp on the fundamentals at hand. Where we are all hanging up is that we can't quite determine what David believes the definition of Depth of Field to be, and because he keeps making pronouncements about what is or isn't true about DOF (often in contradiction to what the rest of us believe to be accurate) the conversation keeps going on and on.
I think we could draw this to a close if David would answer two straightforward questions from his perspective.
1) What is the definition of DOF?
2) Assuming DOF is a fixed property (requiring a specific CoC to be chosen), how do you select what specific CoC to use, and why?
I will try my best ...
1) My definition of photographic DOF is solely based on the classical physics of optics (e.g., the well-known lens equations). These principles and formulas describe the rendering of a three-dimensional scene onto a two-dimensional medium via a system of lenses. The CoC of the medium used to record this rendering is what fixes the DOF, along with the other optical parameters.
It is my contention that once the scene is rendered onto this medium in this manner, the DOF is an invariant quantity thereafter. In other words, it does not change when I print it or view it on a monitor or projection screen. What does change when viewing under these different circumstances is strictly the subjective depth perception of the individual viewer, which is typically affected by the resolution of the print itself; the DOF of the captured image can never be said to have changed.
I do not agree that DOF is a subjective quantity, and I believe that its associated CoC can be established by objective measurements.
The CoC must be chosen as that of the capture medium involved in the rendering, e.g., of the film or digital sensor. Why? Because it is objectively measurable and not subjective, and it is the appropriate quantity that is involved in the rendering process.
I appreciate the reply, but you haven't quite defined anything. There's nothing in the above that says "DOF is.....".
Yes, you've already stated that as your position.
So if I may rephrase your position... DOF is the resolution of the photographic capture regardless of whether is it visible, because it is an objectively quantifiable measurement rather than a subjective measurement.
At this point it is clear that everyone in this discussion disagrees with your position regarding what Depth of Field is, that the general industry consensus disagrees with what your definition of DOF is, and that you are not open minded to reconsider your position or admit the possibility that you are wrong or could learn something from this discussion.
My opinion, given the above, is that it's best to simply end the conversation rather than continue to go back and forth on this and waste everyone's time.
Thanks fellas, looks like this has been set to rest...
In fact, in the extreme case where one of these persons has lost vision in one eye, such a person will be unable to see any depth at all, since depth perception requires binocular vision.
There is no real depth in a print, just the 2D illusion of the 3D depth of the subject.
Jeremy, Try consulting a textbook on optics. If I am indeed wrong, then the entire scientific community is wrong with regard to their notion of DOF. This is quite a preposterous claim.
Ok ... superfluous given the rest of this thread ... but here goes ...
"Total DOF(s>>f ) ~= 2 faCs2/(( fa)2-(sC)2) = 2 as2( f/C)/(( f/C)2a2 - s2)
The circle of confusion C at the DOF limit is based on the 0.01 inch = 0.25 mm feature in an 8x10 inch print."
http://www.normankoren.com/Tutorials/MTF6.html#DOF_diffraction
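Koren's expression can be checked against the simpler first-order total-DOF formula it reduces to. Note that reading "a" as the aperture diameter f/N is my assumption about the notation; with that reading the two agree whenever s·C is small compared to f·a:

```python
def total_dof_koren(f, a, C, s):
    # Koren: Total DOF(s >> f) ~= 2*f*a*C*s^2 / ((f*a)^2 - (s*C)^2),
    # with "a" read as the aperture diameter f/N (an assumption).
    return 2 * f * a * C * s**2 / ((f * a)**2 - (s * C)**2)

# Illustrative values: 50 mm lens at f/8, CoC 0.030 mm, focused at 3 m
f, N, C, s = 50.0, 8.0, 0.030, 3000.0
a = f / N
simple = 2 * N * C * s**2 / f**2   # familiar first-order approximation
print(total_dof_koren(f, a, C, s), simple)  # ~1884 mm vs 1728 mm
```

The two differ by under ten percent here; the gap is the second-order (s·C)² term, which matters more at longer focus distances and larger CoCs.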
That's not very relevant for someone shooting predominantly for websites (800 pixel wide product shot), printed catalogs (2" wide product shot), or 2 meter wide prints.
Jeremy, the DOF equations require parameters that are involved in the actual rendering process of the 3D scene (consult the diagrams as well as the equations associated with them). You cannot blindly substitute other CoC values that have nothing to do with the rendering process, such as those associated with viewing a print. As Nick has already pointed out above, the print only contains an illusion of depth in a 2D space. For example, the CoC of the print will depend on things like the size of the inkjet drop and the absorption characteristics of the paper or other substrate being used at the time of printing, which have nothing to do with the image capture.

However, the actual image as captured on film or by the digital sensor will always have a well-defined DOF according to its optical definition, with whatever CoC value of the film or sensor is being used. DOF is only defined in optics, and only during the rendering of a 3D scene onto a 2D plane via a lens; only the features of the 2D plane involved in this process enter these equations. You can manipulate the captured image all you want afterward (viewing, printing, sharpening, blurring, etc.), but that no longer belongs to the realm of optics and the concept of DOF.
However, the actual image as captured on film or by the digital sensor will always have a well-defined DOF according to its optical definition, with whatever CoC value of the film or sensor is being used.
David,
You are confusing resolution in the imaging plane with DOF. The definition of DOF has an assumed COC as parameter. Without the COC parameter, DOF cannot be calculated. Use a different COC, and you'll get a different DOF.
Cheers,
Bart
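Bart's point, that DOF is undefined without a CoC, is easy to demonstrate numerically. Below is a minimal sketch using the standard thin-lens DOF formulas; the function names and the example values (a 50 mm lens at f/8 focused at 3 m) are illustrative, not taken from the thread.

```python
def hyperfocal(f_mm, n, coc_mm):
    # Hyperfocal distance: focus here and everything from H/2 to infinity
    # is "acceptably" sharp for the chosen circle of confusion.
    return f_mm ** 2 / (n * coc_mm) + f_mm

def dof_limits(f_mm, n, coc_mm, focus_mm):
    # Near and far limits of acceptable sharpness (standard thin-lens formulas).
    h = hyperfocal(f_mm, n, coc_mm)
    near = focus_mm * (h - f_mm) / (h + focus_mm - 2 * f_mm)
    far = focus_mm * (h - f_mm) / (h - focus_mm) if focus_mm < h else float("inf")
    return near, far

# Same lens, same aperture, same focus distance -- only the CoC changes:
for coc in (0.030, 0.015):  # mm
    near, far = dof_limits(50, 8, coc, 3000)
    print(f"CoC {coc:.3f} mm -> {near / 1000:.2f} m to {far / 1000:.2f} m")
```

Halving the CoC roughly halves the depth of field in this example, which is exactly why a DOF calculator needs a CoC as an input.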
Nick, your statement confirms exactly what I have been saying all along. DOF is defined in terms of real depth. It is not something that is defined by any illusion of depth. In other words, the equations of optics that define DOF have nothing to do with the illusion of depth as it would appear on the 2D captured medium.
So, I think you actually must agree with me on my original claim in this thread: The DOF cannot depend on print size, since real depth (DOF) and illusion of depth (print) are two entirely different things.
Do you now see your misunderstanding? The illusion of depth in the print depends on the DOF of the captured image, but the opposite is not true.
We can achieve critical focus for only one plane in front of the camera, and all objects in this plane will be sharp. In addition, there will be an area just in front of and behind this plane that will appear reasonably sharp (according to the standards of sharpness required for the particular photograph and the degree of enlargement of the negative). This total region of adequate focus represents the depth of field.
We must remember that the depth of field relates to an acceptable degree of sharpness; in actuality only the plane focused upon is truly sharp. Acceptable sharpness is also affected by the degree of enlargement of the negative and the distance from which the final print is viewed. An enlargement that looks well at 5 feet might be definitely unsharp at reading distance. Standard depth of field tables and scales are all based on certain assumptions regarding these factors.
Depth of field is based on the acceptable blurriness and is therefore essentially based on arbitrary specifications.
Depth of field is the result of an arbitrary specification, or rather it depends on the viewing conditions. Whether we tolerate a small or large amount of blurriness has no influence on the fundamental characteristics of the depth of field.
The human eye will not perceive any loss of sharpness in an image if the power of the eye is the only thing determining which smallest details can be recognized. On the other hand the eye will perceive an image as blurry if the eye is capable of seeing significantly more than is shown. The resolution that the eye can recognize must be the benchmark.
The depth of field is therefore a rather fuzzy dimension that depends heavily on the viewing conditions.
The depth of field (DOF) is the range of distances between s_f and s_r (D_r + D_f), where the circles of confusion, C_f and C_r, are small enough so the image appears to be "in focus."
Let's try to define depth of field. The usual definition runs something like this:
"The region over which objects in an image appear sharp".
While there is some truth in this, there's also some confusion - and some untruth too! Let's try a more accurate definition:
"The depth of field is the range of distances reproduced in a print over which the image is not unacceptably less sharp than the sharpest part of the image".
This definition contains some important points.
* First, DOF relates to a print or other reproduction of an image. It's NOT an intrinsic property of a lens. If you put a lens on an optical bench you can measure focal length, you can measure aperture, but you can't measure depth of field. Depth of field depends on some subjective factors which I'll discuss later.
* Second, note the phrase "not unacceptably less sharp". All parts of an image which come from outside the focal plane of the lens are blurred to some extent. Only one plane is in focus. As you move away from that plane things get less sharp. The depth of field limits are where the loss of sharpness becomes unacceptable - to a "standard" observer.
* Third, note the phrase "..not unacceptably less sharp than the sharpest part of the image...". This covers the case of a pinhole camera. Such a camera has a very, very large depth of field (almost, but not quite infinite). However none of the image is sharp. The depth of field is large because all the image is equally blurred!
You can't understand Depth of Field until you understand CoC (Circle of Confusion). The human eye has a finite ability to see fine detail. This is generally accepted as being 1' (minute) of arc. Translating this to the practical world, this means that at a normal reading distance the smallest object that a person with perfect eyesight, under ideal conditions, can see is 1/16mm in size. If you place two dots smaller than this next to each other they will appear to be just one dot.
...
Keep in mind as well that viewing distance plays a part in this. We're intimately dealing with the eye's inherent ability to discern detail, and obviously the farther away we are when we view a print, the larger the acceptable CoC can be.
...
Depth of Field (DOF)
With an understanding that CoC is a human-imposed parameter that varies according to the manufacturer's whim and the vagaries of human perception, we can now look at what is meant by Depth of Field. This is strictly an optical phenomenon; and once a CoC is applied no discretion is allowed.
Definition: "The area in front of and behind a focused subject in which the photographed image appears sharp".
Now that we understand what Circle of Confusion means, we can see that this definition of Depth of Field means that this is the range in front of and behind the subject focused on that will appear sharp within the limits of the applied CoC. In other words, you can't have a DOF number without a CoC number, and the CoC number is one decided on by you or the lens manufacturer, whichever you trust most.
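The dependence on enlargement and viewing distance that the quoted tutorial describes can be turned into a small calculation. The 0.25 mm print-blur criterion below echoes the Norman Koren quote at the top of the thread; the function name and default values are illustrative assumptions.

```python
def acceptable_coc_mm(sensor_width_mm, print_width_mm, viewing_distance_mm,
                      print_blur_mm=0.25, reference_distance_mm=250):
    # Blur the viewer tolerates on the print grows with viewing distance,
    # then shrinks by the enlargement factor to give the CoC back at the sensor.
    blur_on_print = print_blur_mm * viewing_distance_mm / reference_distance_mm
    enlargement = print_width_mm / sensor_width_mm
    return blur_on_print / enlargement

# Full-frame (36 mm wide) capture, 10-inch-wide print viewed at reading distance:
print(round(acceptable_coc_mm(36, 254, 250), 4))  # -> 0.0354
```

The result is close to the conventional ~0.030 mm CoC quoted for 35 mm film; doubling the print size halves the acceptable CoC, and doubling the viewing distance doubles it.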
In the classical physics of optics, DOF is defined as a distance in the three dimensional world that is determined by near and far points of acceptable sharpness according to some value of CoC in the process of rendering via a lens. If we are talking about the act of human vision, whether looking through a lens or not, then a CoC value would indeed be based on the individual's ability to distinguish acceptable sharpness, and would be a function of his/her subjective ability to do so. If we are instead using a photographic film to render the three dimensional world, then a CoC value must be used that is relevant to the nature of the film (e.g., grain size) and no longer base it on our biological and subjective vision. This CoC will most likely be different than that of our human vision. So, due to the different CoC value, you would indeed get a different DOF as I think we all agree. And, as also pointed out above, a DOF calculator would indeed show this based on the different CoC input values.
The issue at hand regards the notion of whether DOF changes when talking about print size or screen projections or viewing on a monitor. These things cannot change the DOF. How can they, when DOF is measured in a third dimension that is no longer present? I agree that you can simulate perceptual changes to DOF by manipulating the captured image in its two dimensional form, but the only actual DOF of the image is the one that is determined by the film that rendered the original scene from three dimensional space via a lens, and that can never change according to the physical laws of optics. Furthermore, the CoC of this image on film indeed can be determined objectively (i.e., "fact") by measuring its grain size. Whereas, the CoC that may be associated with human vision such as the perceptual interpretation of a print can be entirely subjective as Jeremy points out above. (For the digital equivalent of establishing objective CoC values, you can again consult the spreadsheet from Alpa that I presented earlier as reference, where Alpa has done exactly that.)
Therefore, the ability to simulate perceptual subjective changes to DOF by post-manipulating the actual captured image (i.e., printing, projecting, blurring, etc.) does not amount to claiming that the DOF of the captured image depends on its print size. In much of the popular photographic literature, this distinction is seldom articulated, probably for the same reasons why it is being debated at such length in this thread. However, you would never find any lack of such distinction in a physics textbook on optics.
So essentially, your contention is this... If I shoot an image on Velvia, it has less Depth of Field than if I shoot an equivalent image on Tri-X because Velvia has finer grain. And if I shoot a 21 megapixel full frame image (Canon 1Ds III) it has more depth of field than if I shoot the same photograph as a 25 megapixel full frame digital image (Nikon D3x) because the D3x has more megapixels.
Ooh boy, I'm going to get into trouble for this, but actually, this is the case, and it's not really what David is saying. Higher-res systems do have less DOF for all sorts of reasons. You can make bigger enlargements before being resolution limited; this magnifies the capture CoCs even further, hence less DOF.
Running for cover...
;D
Sorry David, I thought you were on to something that might have been correct in certain theoretical circumstances but unfortunately you are wrong in your use of the CoC term and everything follows from that.
I have not invented anything new here, and I have only cited well known DOF formulas. I have not disagreed with any of the DOF formulas that you or anyone else here has linked or referenced. Please show me where you think I presented a DOF formula that is different.
As for the CoC formula I presented, it is the same as also used by authors of a well-known article published here on Lula:
http://www.luminous-landscape.com/tutorials/resolution.shtml (http://www.luminous-landscape.com/tutorials/resolution.shtml)
http://www.luminous-landscape.com/essays/Leica-M8-Perspective.shtml
Same authors of the resolution article you referred to, calculating DOF for a Leica M8.
"The circle of confusion is a conventional value. It depends on the size of the sensor, the size of the print and the particular vision capabilities and subjectivity of the observer."
Note that they are NOT taking the COC as derived solely from the sensor specifications. These guys know what they are talking about, far more than I do. If you won't believe me, how about believing these guys?
Hi,
I may suggest that the question is: what is acceptably sharp?
- How large do you want to print? If the picture turns out very good, you perhaps want to print it very large?
- Are you investing in expensive glass and a back to make images acceptably sharp?
For critical sharpness DoF is very short. With modern sensors and good lenses at large apertures, what you have focused on will be sharp and not much else.
My view is that for optimal sharpness the main subject must be in focus, not just within a calculated DoF. Then we try to expand DoF by stopping down.
Best regards
Erik
A very articulately written statement, which can be summarized generally as:
"No, I am not wrong. But yes, I admit that in general the world of photography takes a position contradictory to what I have been saying."
Thank you for the kind and civil discussion David.
Hang on, David is actually right, not generally, but specifically. It suddenly dawned on me what he was on about, and I always had a niggle at the back of my mind that I was missing something.
Look, it's really late here, I'm off to bed, but I'll tell you what David's arguments really mean tomorrow... I don't mean to tease, but I need to marshal my thoughts a bit more.
Remembering Christopher's original question:
I've had the same question in mind, especially "how will the dof compare?" between 35mm and MF. I've been using a H4D-40 for several months, and I had the D3x (now only the D700) and I still have doubts regarding the DOF.
I have an example for Christopher; please correct me if I'm wrong.
OK, try this for size...
There is only one point of focus, everything else is more or less out of focus and the region that is acceptably sharp is called the DOF. No problems with that, all agreed?
Close to the point of focus there will be points that are only very slightly OOF, i.e. small discs, not actual points.
These discs can be smaller than the ability of the sensor to resolve. The sensor cannot resolve the difference between the true point of focus and those points close by.
Eventually, the further from the point of focus, either away or towards the camera/observer, these discs will be big enough for the sensor to resolve and they will start to appear less sharp than the region closer to the point of focus.
I'm pretty sure that this is what David refers to as the intrinsic DOF, which is only related to the sensor and is not affected by print size. This would seem to be true.
The print size only affects the DOF when the print is smaller than the largest that it can be printed at (which is a subjective thing, probably around the 200 dpi point), at which point that 'intrinsic' DOF is the same as the actual DOF shown by the print. Any smaller print will progressively lower the resolution and effectively increase the DOF as shown in the print. It does not affect the baseline DOF, which is sensor/film-grain dependent.
David's mistake was not in his knowledge but in the way he explained the point. The rest of us were not listening hard enough. There was always something missing from the exchange of ideas and points of view, some disconnect that I could not put my finger on. It came to me last night out of the blue - must have been my subconscious chewing things over!
We were all right after all.
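One way to formalize Graham's reconciliation is to treat the effective CoC as the larger of a sensor-imposed floor (roughly two sensel pitches, the smallest blur the capture can record) and a print-driven value. The two-pixel floor and the 0.25 mm print criterion are illustrative assumptions of mine, not formulas from the thread.

```python
def effective_coc_mm(pixel_pitch_mm, sensor_width_mm, print_width_mm,
                     print_blur_mm=0.25):
    # Sensor floor: detail blurrier than ~2 sensels is all the capture records.
    sensor_floor = 2 * pixel_pitch_mm
    # Print-driven CoC: acceptable print blur divided by the enlargement factor.
    print_driven = print_blur_mm * sensor_width_mm / print_width_mm
    return max(sensor_floor, print_driven)

# 6-micron pitch, 36 mm wide sensor:
print(effective_coc_mm(0.006, 36, 200))   # small print: print-driven value wins
print(effective_coc_mm(0.006, 36, 2000))  # huge print: bottoms out at the floor
```

Small prints raise the effective CoC (more apparent DOF); very large prints cannot push it below the sensor-dependent baseline.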
I'm pretty sure that this is what David refers to as the intrinsic DOF, which is only related to the sensor and is not affected by print size. This would seem to be true.
DOF requires a CoC to be able to calculate it.
Sorry,
Bart
All in all, just a debate over the semantics of what the definition of "DOF" was, and as David pointed out, the definitions of DOF in the scientific community apparently differ from what is widely used in the photographic community.
In other words ... are you saying that David's DoF is simply the "maximum possible" DoF given a specific capture?
No, the minimum possible.
Ah ... yes ... that's what I meant to say ... ::)
That's what I mentioned as resolution, but not DOF. The calculation of DOF requires a COC. COC depends on (angular) resolution which involves output magnification and a viewing distance. Different magnification/viewing distance changes DOF, and the circle is round. Even after capture, the COC remains a variable, so DOF cannot be a fixed quantity.
I also think that's what he was thinking of; however, DOF is a limiter of resolution, but then so is diffraction. Resolution, or rather MTF, plays a role, but there is no such thing as an intrinsic DOF (which would supposedly be unaffected by magnification). DOF requires a CoC to be able to calculate it.
Sorry,
Bart
Hi,
The problem with DoF is that we really don't know viewing distance and print size at shooting time. You don't say to the customer/buyer: sorry, the 65 MPixel image was intended for a maximum print size of 8x10"!
What about a customer who says, "Hi, I am a fine art reproduction photographer. What is the highest resolution theoretically possible from your 65MP back? Assuming I am using an adequate lens and given that my reproduction printer is capable of 400 dpi maximum, how large would I be able to print and still have my printer dots able to resolve the smallest resolvable features that this 65MP back can theoretically provide?"
The problem with DoF calculations is that they may lead to everything being unsharp. My experiments indicate that loss of sharpness is clearly visible at actual pixels with a CoC of 6 microns on a 6 micron sensor. Whether it would be visible in an A2 print is another question, probably not.
It is physically impossible to resolve a feature smaller than the Nyquist limit will allow. For a Bayer sensor having a pixel size of 6 microns, this means any feature smaller than about 12 microns is not practically resolvable.
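Jeremy's two numeric points, the Nyquist bound on the smallest resolvable feature and the largest print at which a given printer still resolves every captured pixel, can be checked with a couple of one-liners. The 8984-pixel width below is a hypothetical figure for a high-resolution back, not a real spec.

```python
def smallest_feature_um(pixel_pitch_um):
    # Nyquist: at least two samples per cycle, so features smaller
    # than ~2 sensel pitches are not practically resolvable.
    return 2 * pixel_pitch_um

def max_print_width_in(width_px, printer_dpi):
    # Largest print at which each printer dot still maps to >= 1 captured pixel.
    return width_px / printer_dpi

print(smallest_feature_um(6))                   # -> 12, the 6-micron example above
print(round(max_print_width_in(8984, 400), 1))  # hypothetical 8984 px wide back
```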
If you really want to have more accurate values of CoC specific to your camera and lenses (and any raw conversion process), you can empirically determine them by shooting something with measurable length and noting the near-far points of your "acceptable sharpness", and then inverting the DOF equations to compute effective CoCs.
Erik,
The empirical method I gave above for determining effective CoC values will take into account all MTF as well. The only thing you have to do is decide what is acceptable sharp to you.
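David's empirical procedure amounts to inverting the standard near-limit DOF equation: measure the near limit of acceptable sharpness at a known focus distance, solve for the hyperfocal distance, and recover the CoC. A sketch (the function name is mine):

```python
def coc_from_near_limit_mm(f_mm, n, focus_mm, near_mm):
    # Invert near = s*(H - f) / (H + s - 2f) for the hyperfocal distance H,
    # then use H = f^2 / (N*c) + f to recover the effective circle of confusion.
    s = focus_mm
    h = (s * f_mm + s * near_mm - 2 * f_mm * near_mm) / (s - near_mm)
    return f_mm ** 2 / (n * (h - f_mm))
```

Round-tripping through the forward DOF equation recovers the CoC you started with; the same measurement done with the far limit (or both) works equally well.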
Actually, yes there is a simple answer. The S2 has a sensor that is 30mm x 45mm in size. This is 1.25x longer in each linear dimension than full frame 35mm digital.
This means that to get a focal length that shows an equivalent field of view, you will need to use a lens that is 1.25 times longer than its 35mm equivalent. So the proper comparison lens to a 50mm prime would be a 62.5mm lens. So it's not exactly "apples-to-apples" to compare the 50 and the 70.
If you match focal lengths based on the 1.25x rule and shoot the exact same picture with the same settings and make similar sized prints (within the resolution limits of the smaller camera), then the Leica will have 1.25x less depth of field. Aperture stops run on a factor of 1.4x, so the rough difference between the two formats will be just under one aperture stop of depth of field. However, this only holds true when you are well inside the hyperfocal distance. If you shoot a subject at or near the hyperfocal distance, the DOF of the Leica will start to be noticeably less than the 35mm shot, since the hyperfocal distances for the Leica and the 35mm are not the same (i.e. the 35mm will hit hyperfocal distance sooner than the Leica).
This article does a good job of summarizing the differences, just use a factor of 1.25x instead of the 1.6x used to compare FF and APS-C.
http://www.bobatkins.com/photography/technical/digitaldof.html
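The 1.25x reasoning above can be packaged into a small equivalence helper. The stop-difference formula 2*log2(crop) is one common approximation, and it is only meaningful well inside the hyperfocal distance, as the post notes.

```python
import math

def equivalent_on_smaller_format(focal_mm, f_number, crop_factor):
    # Same field of view and (approximately) the same DOF at equal print size:
    # divide both focal length and f-number by the linear crop factor.
    return focal_mm / crop_factor, f_number / crop_factor

def dof_stop_difference(crop_factor):
    # Aperture stops of DOF separating the formats at the same f-number.
    return 2 * math.log2(crop_factor)

# Leica S2 70 mm f/2.5 expressed in 35 mm full-frame terms (crop 45/36 = 1.25):
focal, aperture = equivalent_on_smaller_format(70, 2.5, 1.25)
print(f"{focal:.0f} mm at f/{aperture:.1f}, "
      f"difference ~{dof_stop_difference(1.25):.2f} stops")
```

So a full-frame camera at roughly f/2 should match the Leica's 70 mm at f/2.5, which addresses the opening question; by this approximation the two formats sit about 0.64 stops apart at the same f-number.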
I made a quite careful experiment on this and the loss of sharpness is quite visible.
The experiment was done by exactly focusing a test target and then moving the camera to induce a larger and larger CoC. So the only difference between the images is that the camera was moved backward a few centimeters. Distance 3 meters, focal length 150mm, sensor pitch about six microns.
I'd suggest that the problem is that you are thinking in terms of resolution. This is more about MTF. MTF for the sensor is unaffected by moving the camera, but MTF for the lens is reduced. So we don't violate the Nyquist limit, we just get a lower MTF at Nyquist.
And, for cameras with Bayer sensors, there is blur introduced from having to interpolate the majority of the image's pixels, since only one-third of the full color image is actually captured. None of these things are taken into account by any online DOF calculator.
Offhand, your CoC values are a little confusing to me. A value of zero for CoC is physically impossible. It would mean that a mathematical point actually exists in nature and that you have found a way to measure it.
One third of the colour information is captured (in a manner of speaking) but 100% of the luminance values are captured and this is where the resolution lies. The pixels or sensels are not interpolated, only the colour information shared between the pixels, and that contains very little 'detail'. Think of the difference between L* and the a* and b* channels in Lab mode.
The AA filter obviously adds blur, but the Bayer array? If you took the coloured filters off the sensor, or used a specialist B+W sensor, you'd have the same resolution, would you not?
Hi Nick,
Indeed, almost the same Luminance resolution (only a few percent loss) compared to a monochrome capture.
Both of you are completely wrong here.
First, of course it is not the Bayer sensor itself that introduces any blur, but rather the interpolation process used to estimate the missing image pixels.
There can be large differences in the resulting image acuity due to various different estimation methods, similar to the wide variation of AA filters found in different cameras. If you really want to get into it here, we can start comparing algorithms, from simple bilinear interpolation to more advanced methods such as adaptive homogeneity or projection onto convex sets, which can show quite a range of blur from an identical raw capture.
Also, it is not true that 100% of the luminance values are captured, nor is it true that it is only within a few percent of a monochrome capture.
Great, we're making progress: from the whole scientific world being wrong to only two persons.
So you are claiming that by increasing the sampling interval for each color to every other sensel, instead of each sensel position, there is no effect on resolution? So we're back at the whole industry being wrong now? It seems some evidence is finally due.
By all means, enlighten us.
Luminance is captured at 100% of the sensel positions, bands of color are captured in line with the CFA arrangement. Of course only that part of luminance that penetrates the CFA is captured and contributes to image forming, that's why more exposure is needed than monochrome capture without filters.
The earlier remarks/claims have to do with your claims about demosaicing, just like in the beginning of your reaction, "the interpolation process used to estimate the missing image pixels". I'll throw in some empirical evidence about that, namely that luminance resolution is only impacted by a few percent by the demosaicing process:
http://www.xs4all.nl/~bvdwolf/main/foto/bayer/bayer_cfa.htm (http://www.xs4all.nl/~bvdwolf/main/foto/bayer/bayer_cfa.htm)
Nothing fancy, it's just a simple page I threw together almost 7 years ago to prove some nonsense statements wrong. Who could have thought it would still be needed what seems like eons later. Oh well.
All bogus claims by you. Luminance is NOT captured at 100% of the sensel positions as you say. To prove you wrong, I cite the words of Bayer as found in the U.S. patent that I referenced above:
under SUMMARY OF INVENTION, 2nd paragraph, lines 28-34,
"By arranging the luminance elements of the color image sensing array to occur at every other array position, a dominance of luminance elements is achieved in a pattern which has …"
And, Bayer goes on to make explicit his claims about luminance and its relation to the green region in column 6, beginning with line 21,
"What is claimed is: 1. A color imaging device comprising an array of light-sensitive elements, which array includes at least (1) a first
type of element sensitive to a spectral region corresponding to luminance, (2) a second type of element sensitive to one spectral region corresponding to chrominance, and (3) a third type of element sensitive to a different spectral region corresponding to chrominance, the three types of elements occurring in repeating patterns which are such that over at least a major portion of said array luminance-type elements occur at every other element position along both of two orthogonal directions of said array.
2. A device in accordance with claim 1 where in said luminance-type elements are sensitive in the green region of the spectrum, and the two types of chrominance elements are sensitive in the red and blue regions of the spectrum, respectively ... "
Good grief, I did not think it was possible to misinterpret something so clearly articulated by Bayer in a legally worded document, but Bart you still manage to do it.
All colors have some luminance, so of course technically there must exist some finite luminance at every location of the Bayer sensor, regardless of how small it may be. However, Bayer clearly delineates between having dominant luminance elements (green) as well as having elements whose luminance values can be relatively negligible in real world images, which he refers to as chrominance elements (red and blue). If Bayer really believed that luminance was being sampled uniformly at all element locations, he would not have any reason to go through the trouble of explicitly saying things like "arranging luminance elements to occur at every other position" or that his sensor contains color vectors "other than luminance".
I believe that the source of your misunderstanding is that you confuse the "ubiquitous presence" of luminance in the image with that of what actually defines its luminance resolution. In a Bayer sensor, the luminance of an image is being sampled independently in three channels, red, green and blue, and the sampling rates are not the same. The maximum sampling resolution of luminance is that of the green channel, since the green channel occupies 50% of the sensor area, whereas red and blue each occupy only 25% of the sensor area. However, in no way can these sampling resolutions be combined to match that of a monochrome sensor as you claim, not closely at all.
As a concrete example, consider the Phase One P45+ back, which has 6.8 micron pixels. The maximum theoretical sampling resolution of luminance in the green channel is roughly 52 lp/mm, while that of the red and blue channels is roughly just under 37 lp/mm.
Next, consider the Phase One Achromatic+ back, which is identical to the P45+, except that it does not have the Bayer CFA and so is monochrome. The maximum theoretical sampling resolution of its luminance is roughly 73.5 lp/mm.
Now, your claim that a Bayer sensor has "almost the same Luminance resolution (only a few percent loss) compared to a monochrome capture", amounts to saying that the luminance resolution of the Phase One P45+ should be that similar to the Phase One Achromatic+ back. And, the only way that can happen is if the missing 78MP of the P45+ color image can be interpolated from its 39MP of actual captured pixels with such precision so as to transform its resolution from 37 lp/mm / 52 lp/mm / 37 lp/mm in R, G, B to within a few percent of 73.5 lp/mm / 73.5 lp/mm / 73.5 lp/mm in R, G, B, or about 71 lp/mm in each color channel.
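The lp/mm figures quoted above can be reproduced from the sensel pitch alone. The sketch below assumes the green sensels sample on a quincunx (effective pitch sqrt(2)*p) and red/blue on every other sensel in each direction (effective pitch 2*p):

```python
import math

def nyquist_lp_mm(pitch_um, channel="mono"):
    # Sampling-limited resolution in line pairs per mm: 1 / (2 * effective pitch).
    spacing_factor = {"mono": 1.0, "green": math.sqrt(2), "red": 2.0, "blue": 2.0}
    effective_pitch_mm = pitch_um / 1000 * spacing_factor[channel]
    return 1 / (2 * effective_pitch_mm)

# 6.8-micron sensels (the P45+ / Achromatic+ example above):
for ch in ("mono", "green", "red"):
    print(ch, round(nyquist_lp_mm(6.8, ch), 1))
```

This reproduces the ~73.5, ~52 and ~37 lp/mm figures above; how much resolution demosaicing can recover between the green and monochrome limits is exactly what is being debated here.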
I believe this to be a hogwash claim by you. Furthermore, if your claim that Bayer sensors have nearly the same luminance resolution as monochrome sensors were true, then there would be no significant resolution advantage with the Achromatic+. And yet, Phase One does not seem to agree with you and claims that it in fact does, and I agree with Phase One.
There is even an article on Luminous Landscape that can be found here:
http://www.luminous-landscape.com/reviews/cameras/achromatic.shtml
In the above article, Mark Dubovoy and Dr. Claus Molgaard (Chief Technology Officer and VP of Research and Development at Phase One) present detailed evidence where they show that the Achromatic+ monochrome sensor clearly has significantly more resolution than that of the equivalent P45+ sensor that uses a Bayer CFA.
Bart, it is only you who holds beliefs about resolution that fly in the face of everyone else. Mark Dubovoy does not believe what you claim, nor Dr. Claus Molgaard and Phase One, nor myself. Please try to produce some evidence where we are all wrong.
The bottom line here is that Bart has made claims here .... that are not supported by anyone in the photographic industry.
Erik,
First, we cannot include the P65+ in any comparison with the Achromatic+, since the sensor is different with different size pixels. It is the direct comparison of the P45+ and the Achromatic+ that proves wrong the claims made by Bart van der Wolf in this thread.
Lloyd Chambers has some good examples of the extra resolution of the achromatic backs on his DAP site. It's clearly a step up in resolution, but not earth-shattering, maybe 5-10% better (subjectively). At this end of the market, each few extra percent costs an arm and a leg!
Sheldon,
I know you would love to think that my views are somehow "radical". However, the fact is that none of my claims are inconsistent with the field of scientific digital imaging. Different CoC values are simply defined in different ways to suit different objectives. And, this is ground that has already been covered here.
The difference in the current debate is that Bart's claim about luminance resolution is not supported either in the field of scientific digital imaging or in the photographic community. And, in fact specific examples have been provided here where respected people in the photographic community, such as Mark Dubovoy, do not agree with Bart's claim.
On this topic of luminance resolution, no one has come forth to prove wrong the conclusions of Mark, Claus Molgaard and Phase One. If you are also a believer in Bart's claim about luminance resolution here, please show us your evidence and prove us all wrong.