Luminous Landscape Forum

Site & Board Matters => About This Site => Topic started by: JohnBrew on August 14, 2011, 08:44:41 pm

Title: Nick Devlin's article
Post by: JohnBrew on August 14, 2011, 08:44:41 pm
This is a different response than Mark Fredrickson's. I think Nick is dead on with example/idea #4: driving the camera with an iPad or separate computer. Get rid of all the superfluous in-camera computer-related items and the LCD, and make it compatible with an outside computer source and a large (or larger) HD screen. I'd be all over that.
Title: Re: Nick Devlin's article
Post by: jdemott on August 15, 2011, 12:06:27 am
I'm sure everyone has their own ideas about what new features would be most valuable.  For me, voice commands have very little appeal.  But an iPad connection, particularly with touch screen interface, would be wonderful.
Title: Re: Nick Devlin's article
Post by: wolfnowl on August 15, 2011, 01:54:58 am
Great article, and I agree with the ideas.  BTW, the bird in the 'Church Raven' picture really is a crow.  It's the biologist in me coming through.

Mike.
Title: Re: Nick Devlin's article
Post by: ErikKaffehr on August 15, 2011, 02:20:05 am
Hi,

Not sure about that voice control part. What chatter there would be early morning at Ox Bow bend! "You with the big L-lens using f/8 and IS off, PLEASE SHUT UP!"

Using an iPad/iPhone as an extension of the LCD may be a bright idea.

ETTR? YES!

The zone system is not a bright idea, in my humble view. Why? Because ETTR is all we need! The zone system was about exposing for a given development, but now we do everything in "post", so even if "zone principles" still apply, there is no use for the zone system itself while shooting pictures. Just expose to the right!

Best regards
Erik



Title: Re: Nick Devlin's article
Post by: DaveCurtis on August 15, 2011, 03:21:53 am
"Get rid of all the superfluous in-camera computer related items and the LCD and make it compatible with an outside computer source and large or larger HD screen. I'd be all over that."

As an option, yes; as a replacement, no. The last thing I need is to have to lug around an iPad when I'm out with my DSLR.
Title: Re: Nick Devlin's article
Post by: jani on August 15, 2011, 05:36:04 am
Using iPad/iPhone as extension to LCD may be a bright idea.
And it's a bright enough idea that there are products out there doing just that, already, sort of, such as DSLR Camera Remote (http://www.ononesoftware.com/products/dslr-camera-remote/).

The downside is that you need additional software running on a computer that you also have to lug around.

The upside is that it would be possible to solve certain other user interface desires, such as ETTR and focus masking.

(I'm with you on the voice control thing; as an eager photographer of e.g. pool billiards and snooker, I can tell you nobody's fond of people chatting too much...)
Title: Re: Nick Devlin's article
Post by: Dave Millier on August 15, 2011, 05:47:05 am
Thom Hogan has been discussing the lack of innovation from the big camera companies for some time. In particular, he laments the fact that they don't understand that cameras could offer connectivity and an integrated platform experience (ie follow in the footsteps of the sort of approach provided by the Kindle or iTunes)...
Title: Re: Nick Devlin's article
Post by: AlexMonro on August 15, 2011, 05:52:56 am
I'm not too sure about voice recognition (imagine an event with dozens of photographers in a small space, all yelling at their cameras, or even a tranquil landscape scene with one snapper), and the idea of having to buy, and lug about, an iPad has limited appeal.  But I guess if those are options, I can choose to ignore them.

Auto ETTR would be truly useful, though I'm not quite so sure about the touch screen zone system.  I'm not much of a fan of touch screens; they tend to lack tactile feedback and get smeared with fingerprints and hard to read. But it could be useful - though I tend to prefer to do the tonal mapping in the considered conditions of post processing, rather than in the field.  I think I'd have to use it for a year before I knew if I'd love it or hate it!

Live view focus marking would be great - true DoF preview!  However, what I'd really like to see is auto hyperfocal focussing with aperture priority, perhaps with user-selected circle of confusion size, and display of the near in-focus distance.  A related idea is setting AF points on the desired near and far points of the scene (I believe Canon used to offer this; not sure if it's on any current model), but sometimes that wouldn't quite do what you need in the real world, e.g. you want the maximum DoF before diffraction, but don't mind some of the foreground being a little soft.

Looks like there's quite a wide range of feeling about the ideas, with most of them having some people in favour, and some against.
Title: Voice Recognition
Post by: dreed on August 15, 2011, 06:04:09 am
Voice recognition sounds easy and if you're lucky enough to have a voice that sounds like what the computer processing your voice has been trained to recognise, then it just might work.

"Might work?" you say.

Ask any person that speaks English and tries to work through a voice operated menu on any 1-800 number in North America how good "voice recognition" is. Maybe Nick is one of the lucky ones that has a voice profile that matches well with what the automated systems expect.

It is really quite hard, and it takes a LOT of work and data to implement. It would be necessary either to increase the weight or to throw away other things inside the camera in order to make room for the storage requirements of a voice-activated system. Voice recognition is not simply a little bit of space on one chip, nor just an extra chip.

There are many, many other things that would make better photographs than expensive and burdensome gimmicks like that.

My background here is that I've actually worked on a project where we were rolling out a new system to work with voice commands over the phone in place of using numbers on the handset.
Title: Re: Nick Devlin's article
Post by: Tim Gray on August 15, 2011, 09:24:40 am
Actually voice recognition that has been trained to a specific voice, particularly with a limited vocabulary isn't too bad. 
Title: Re: Nick Devlin's article
Post by: Eric Myrvaagnes on August 15, 2011, 09:36:26 am
As soon as I read the "voice recognition" idea, I visualized sunrise at Zabriskie Point in Death Valley, with 40 or 50 photographers all trying to out-shout each other so their cameras will respond to their own commands instead of someone else's.

No, thank you!

Then again, I'm the kind of curmudgeon who believes that use of cell phones should be illegal not only when driving a car, but whenever you are within earshot of another human being who just might possibly enjoy silence.
Title: Re: Nick Devlin's article
Post by: michael on August 15, 2011, 10:00:16 am
The issue of a camera responding to other voices is a silly red herring. Firstly, the camera is usually held up to one's face, with one's mouth just millimeters away from the mike. Secondly, just like voice control in a car, a button would be pressed to activate it.

Come on guys, think outside the box.

Michael
Title: Re: Nick Devlin's article
Post by: dreed on August 15, 2011, 10:58:04 am
The issue of a camera responding to other voices is a silly red herring. Firstly, the camera is usually held up to one's face, with one's mouth just millimeters away from the mike. Secondly, just like voice control in a car, a button would be pressed to activate it.

Come on guys, think outside the box.

Nick's thinking about how to use voice recognition with a camera is too tied to the way he uses a camera. Do you tell your phone what numbers to push? No. You say "home" or "office" or your friend's name, etc.

What if you could get rid of the "A" (Av) and "S" (Tv) positions on your mode dial and have only "C"? One "M" and six "C"s? ("B" should just be an extension of "S".) You could program the C's to be like A or S or any combination thereof, and then tag each "C" with a voice command.

Then maybe how you use the camera is:
"in door low light"
"animals"
"tripod landscape"
"normal handheld"

Why do I need to tell it F8, MLU, ISO 50, etc, if I can say "tripod landscape" and it does all of the above anyway?
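A rough sketch of how that mapping might work, in Python - purely hypothetical, since no camera exposes anything like this; the mode names and setting fields here are invented for illustration:

```python
# Hypothetical voice-tagged custom modes: each "C" slot is a named bundle
# of settings, and a recognized phrase simply selects the bundle.
CUSTOM_MODES = {
    "tripod landscape": {"aperture": "f/8", "iso": 50, "mlu": True, "drive": "2s timer"},
    "indoor low light": {"aperture": "f/2", "iso": 3200, "mlu": False, "drive": "single"},
    "animals": {"aperture": "f/5.6", "iso": 800, "mlu": False, "drive": "burst"},
    "normal handheld": {"aperture": "f/5.6", "iso": 200, "mlu": False, "drive": "single"},
}

def apply_voice_command(phrase):
    """Return the settings bundle tagged with a recognized phrase."""
    settings = CUSTOM_MODES.get(phrase.strip().lower())
    if settings is None:
        raise KeyError(f"no custom mode tagged with {phrase!r}")
    return settings
```

The point of the design is that the recognizer only ever has to distinguish a handful of user-chosen phrases, which is a far easier problem than free-form dictation.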
Title: Re: Nick Devlin's article
Post by: Rhossydd on August 15, 2011, 11:21:27 am
The issue of a camera responding to other voices is a silly red herring. Firstly, the camera is usually held up to one's face, with one's mouth just millimeters away from the mike. Secondly, just like voice control in a car, a button would be pressed to activate it.
That's not quite what Nick was proposing in later parts of the article when he starts talking about tethering tablets to cameras.

It's clear that there's no interest in voice control from the folk here, and I completely agree: it's just plain daft.
As is some sort of zone system for digital exposure.
Tethering tablets is of dubious and minority use.
Focus masking? I guess Nick just has no experience of using EVFs for any serious amount of time. I do, through my broadcast work, and I hate the idea of it being on stills cameras.

Title: Re: Nick Devlin's article
Post by: michael on August 15, 2011, 11:26:00 am
The whole article smacks of trolling to me.

That's unnecessarily insulting. Nick is a very serious photographer with many years' experience behind the camera. Trolling implies being a provocateur, which Nick most certainly isn't. He's a highly intelligent professional, a very decent person, and he happens to be a good personal friend.

So, be careful with your gratuitous insults.

Michael
Title: Re: Nick Devlin's article
Post by: Rhossydd on August 15, 2011, 11:29:53 am
Yes, probably written in haste and now removed, but still an article written to provoke debate rather than make serious points, from what I see.
Title: Re: Nick Devlin's article
Post by: fike on August 15, 2011, 12:12:22 pm
Most of that list didn't resonate with me. Voice control doesn't interest me, though voice control for tagging and keywords would be cool.

Embedded geotagging support would be very good. 

I am surprised that there haven't been more substantive efforts on in-camera HDR.

 If you had a gigabit ethernet port, you could do all sorts of control and communications through that standard interface.  WiFi is too slow for a great user experience on a real time device.

Perhaps the most useful thing that we will NEVER see would be an open API or scripting language like CHDK, but with support and more robust options.  This could enable tinkerers to evolve the technology and could cause a surge in transformative innovation much like the one caused by the Apple iOS development kit.
Title: Re: Nick Devlin's article
Post by: fike on August 15, 2011, 12:13:49 pm
...and I might add that an open development toolkit might be just the thing to rescue a marginal player like Pentax or Olympus.
Title: Re: Nick Devlin's article
Post by: ndevlin on August 15, 2011, 02:20:46 pm
It's clear that there's no interest in voice control from the folk here and I completely agree, it's just plain daft.
...
Focus masking ? I guess Nick just has no experience of using EVFs for any serious amount of time. I do through my broadcast work and hate the idea of it being on stills cameras.

My apologies for being unclear in the article.  Voice control would/could/should only be driven with the camera *up to the user's eye* -- that is, right up to their face.  It would require only a whisper, all but imperceptible to anyone around.  Basically, one would think out loud what the camera should do.  My description has obviously caused some confusion. Not sure why.

As re: focus masking, my suggestion has little or nothing to do with EVFs, which generally suck and presently have zero application in serious photography.  This would be on the rear LCD initially, and ideally on a wirelessly tethered larger screen.

As for not carrying a tablet....if you're doing anything serious in the landscape realm, you are carrying a tripod. You can't tell me that adding an iPad or playbook adds any significant load.  Rather, it would be a return to working a la view camera - the real origin of landscape work, only in a much superior and user-friendly fashion.

Lastly,if there are 40 other photographers at a location, why the fuck would you want to be there???? I can't think of anything less pleasurable than doing nature/landscape photography in the company of the masses. Indeed, it's rather antithetical to the experience.

- N.
Title: Re: Nick Devlin's article
Post by: Rhossydd on August 15, 2011, 03:47:11 pm
My apologies for being unclear in the article.  Voice control would/could/should only be driven with the camera *up to the user's eye* -- that is, right up to their face.  It would require only a whisper, all but imperceptible to anyone around. 
Well thanks for that clarification.
Do you think whispering into a mic works well for voice recognition ? It doesn't. I'll add the issue of wind noise in microphones used outdoors and how that diminishes the possibilities of successful voice recognition.
Quote
As re:  focus masking, my suggestions has little or nothing to do with EVFs, which generally suck and presently have zero application in serious photography.  This would be on the rear LCD initially, and on a wirelessly tethered larger screen, ideally. 
A tablet becomes an EVF (electronic viewfinder) as soon as you use it as such. I regularly use the Sony HD100VF, which is a 10" OLED viewfinder; even at £22k it leaves a lot to be desired.
Quote
.if you're doing anything serious in the landscape realm,
You might be walking and need to carry a lightweight kit; some of us like to photograph more than 100m from a car. One of the key benefits of the high-end DSLRs is that we can now get LF performance from handheld kit.
Lugging kit needing tablets and tripods is a retrograde step, not progress.
Title: Re: Nick Devlin's article
Post by: mtomalty on August 15, 2011, 03:47:40 pm
"if you're doing anything serious in the landscape realm, you are carrying a tripod. You can't tell me that adding an iPad or playbook adds any significant load."


Then why not go all the way and have a voice activated tripod ?
Sample dialog,  " C'mon baby. Just a little lower.A little more. That's good.  Now, a little left.  Oh Ya!!. That's the spot !!"

Then to camera with zoom , "  Open up. Wider!  Good. Good. f 8. That's the ticket. OK.   Now let's frame this thing. In a bit! In. Out! Out! No!! Still too tight. No!  Out! Out! In a bit! ......"

All performed, of course, in a husky whisper  :)


Mark
www.marktomalty.com
Title: Re: Nick Devlin's article
Post by: allenmacaulay on August 15, 2011, 04:07:42 pm
Perhaps the most useful thing that we will NEVER see would be an open API or scripting language like the CHDK, but with support and more robust options.  This could enable tinkerers to evolve the technology and could cause a surge in transformative innovation much like the those caused by the apple IOS development kit.

Agreed.  The CHDK software has enabled me to get so much more use out of my Canon compact; multiple histogram display modes, clipping warnings, better flash control, and countless other useful things which allow me to take more & better pictures.  If you require a new function you can just write it up yourself or if it's beyond you, suggest it to the development community and someone will whip it up for you.
Title: Re: Nick Devlin's article
Post by: michael on August 15, 2011, 04:31:00 pm
Always amusing to read the reasons why something won't work.

I went back in the LuLa archives and found some of Nick's earlier suggestions. Here's what people said about them back then...

1955 - Have the mirror in a SLR return right after the picture is taken... and how pray tell would that work? You'd need a giant spring or a motor inside the camera to cock it. Forgedabout it.

1960 - Put a light meter inside the camera behind the lens... Ya, right! How on earth would you get a large selenium honeycomb inside a camera? And when you did, you'd also need a battery to move the needle, a scale in the viewfinder etc, etc. It would make a camera as big as a shoebox.

1970 - Automatic Focusing Lenses.. Oh sure, I can see it now; you mount a robotic hand on the hot shoe and it turns the lens until the rangefinder windows overlap, at which point you flip a switch to stop it focusing.

1985 - Replace film with silicon..  The largest CCDs anyone knows how to make are 300K, nowhere near big enough for anything useful; Maybe by 2000 we'll have 1MB sensors. What a waste of time. Film will be with us forever.

2002 - Shooting video with a DSLR... Why on earth would anyone want to do that? Camcorders are the right tool for the job. DSLRs are for taking pictures, not movies. Next thing you'll tell me is that one day million-dollar TV shows and Hollywood movies will be shot with consumer Canons. Right. Pull the other one, why doncha.

Anyone else care to do some research in the archives? Best entry by this coming weekend wins a free download video of their choice.

Title: Re: Nick Devlin's article
Post by: theguywitha645d on August 15, 2011, 04:51:21 pm
Well, Minolta made a talking camera. That was pretty much a bomb. I can't see talking to your camera being any more of a hit. Besides, with everyone talking to their smart phones, talking to your camera is just more noise. I am not sure photographers mumbling to their cameras is going to make them any more endearing to the public.

What I find funny about bashing Asian camera manufacturers is that they have been the most innovative. It is also interesting, and fairly common, that those on the outside of an industry think they know more than those on the inside. Being a photographer qualifies you as a camera designer just as much as being a smart phone customer qualifies you as a communication engineer.

I don't mind folks dreaming about cameras. But why do you have to bash others to do it?
Title: Re: Nick Devlin's article
Post by: Rhossydd on August 15, 2011, 05:17:18 pm
Always amusing to read the reason's why something won't work.

I went back in the LuLa archives
1955 ?? so even you're starting to troll now ;-)

Quote
2002 - Shooting video with a DSLR... Why on earth would anyone want to do that? Camcorders are the right tool for the job. DSLRs are for taking pictures, not movies. Next thing you'll tell me is that one day million-dollar TV shows and Hollywood movies will be shot with consumer Canons. Right. Pull the other one, why doncha.
There's a good piece in the latest Guild of Television Cameramen's journal pointing out that the emperor has no clothes on this issue. DSLR videography is just a passing fad that will be laughable in ten years' time.
A couple of fashionable trials and interminable clips on the internet don't alter the facts.
There are too many pretentious pundits claiming to be "cinematographers" when they've never shot anything that's been commercially shown in a cinema... or on TV... or even got further than a few hundred views on the net. The remarkable thing is that some gullible folk pay money for their advice.
Title: Re: Nick Devlin's article
Post by: michael on August 15, 2011, 05:24:03 pm
I don't necessarily disagree, but you might find a few members of the ASC who do.

Michael
Title: Re: Nick Devlin's article
Post by: Rhossydd on August 15, 2011, 05:32:17 pm
you might find a few members of the ASC who do.
It doesn't seem to be the guys with really successful careers who act as pundits though.
Title: Re: Nick Devlin's article
Post by: Wayne Fox on August 15, 2011, 05:52:10 pm

As re:  focus masking, my suggestions has little or nothing to do with EVFs, which generally suck and presently have zero application in serious photography. 
Personally I thought you made this pretty clear in your article by comparing it to Sony and Phase One, who already provide this.  Nothing innovative here: for those using live view for focusing, the concept is proven, so your point was more that Canon and Nikon could certainly do it.  No innovation, just an "it's about time" for those two.
Title: Re: Nick Devlin's article
Post by: alban on August 15, 2011, 06:04:58 pm
More of a Coffee Corner subject than an article...
Title: Re: Nick Devlin's article
Post by: dreed on August 15, 2011, 06:26:50 pm
Lastly,if there are 40 other photographers at a location, why the fuck would you want to be there???? I can't think of anything less pleasurable than doing nature/landscape photography in the company of the masses. Indeed, it's rather antithetical to the experience.

Sometimes (and this is one of them) I really wish that this website had some sort of "+1" for forum posts.
Title: Re: Nick Devlin's article
Post by: John.Murray on August 15, 2011, 06:41:25 pm
make that a +2....

I'd love nothing more than a clean live view i/f to say an iPad or iPhone.  As another mentioned, dSLR remote is a *great* app for tethered shooting, the technical chops are there.

A funny thought came to mind... Canon offering a voice-activated mirror lockup function but *still* no dedicated button ;)
Title: Re: Nick Devlin's article
Post by: LesPalenik on August 15, 2011, 06:45:58 pm
Quote
Then why not go all the way and have a voice activated tripod ?
Sample dialog,  " C'mon baby. Just a little lower.A little more. That's good.  Now, a little left.  Oh Ya!!. That's the spot !!"

Then to camera with zoom , "  Open up. Wider!  Good. Good. f 8. That's the ticket. OK.   Now let's frame this thing. In a bit! In. Out! Out! No!! Still too tight. No!  Out! Out! In a bit! ......"

The nice thing about such a command set is that it would work also in low-light situations.
Title: Re: Nick Devlin's article
Post by: tom b on August 15, 2011, 06:51:08 pm
With 6 billion images uploaded to Facebook (http://www.luminous-landscape.com/forum/index.php?topic=56802.0) every 2 months, I have a good guess as to which market camera manufacturers are concentrating on. It's certainly not esoteric features for a small group of people who know about the zone system.

I got an email in early February from a battery company saying something like 120 new cameras were launched in January alone. I'm sure that's the market where the R&D is going.

Cheers,
Title: Re: Nick Devlin's article
Post by: John Camp on August 15, 2011, 07:16:57 pm
The thing about an iPad connection, if I understand what Nick is saying, is that all the camera would necessarily have is the *connection.* You don't want it, don't use it. It's like having a nearly invisible hot shoe. Just because you've got a hot shoe, doesn't mean you have to walk around with a strobe stuck on top of the camera. On the other hand, for some people, like landscape or wildlife photographers (or even people doing surveillance, to get a little esoteric) an iPad connection could be really useful. And if it's just another little plug in, so what? As I've been sitting here typing this, I've been thinking of all kinds of other possibilities, especially for phone-connected iPads...I'm sure you can think of some on your own.

Voice control...ehh. I've tried using speech-controlled word processors, and guess what -- keyboards are better. I doubt many photos are missed because of button-pushing problems, especially among those who are familiar with their menus. Could be useful for the physically challenged, though, I guess.

I think the ETTR thing is obvious...I think. I'm still not too clear about whether ETTR benefits are unalloyed, and apparently this is not going to be cleared up for me, here.

The other stuff, I don't care about. But, even in the course of typing this, I've gotten more enthusiastic about the iPad thing. It's so *obvious.* But then, so is mirror lock-up.

Maybe the place to look for a (non-iPad) version of this would be Sony, with its ambitious but deeply third-place DSLRs... and its really good flat-panel and computer capabilities.

JC

Edit: I was thinking about the replies to Nick's article, and what they most reminded me of is the arguments on a Leica forum, where the traditionalists don't want *anything* that they feel might be unnecessary to them. They don't even want the capability buried in the camera, accessible only on a hidden menu that they'd never find unless they went looking. They simply don't want the camera to have the capability, because it breaks from tradition.


Title: ETTR: time to retire conventional thinking about shutter speeds
Post by: dreed on August 15, 2011, 09:00:48 pm
To follow up further, if a shutter speed of 1/212 of a second gives optimal exposure, why shouldn't we be able to choose that instead of something close to it?

If we can have 14 bits of colour precision, why can't we have similar granularity in our shutter speeds?

Because of some old preconceived ideas of how fast a shutter can move?

The "standard" shutter speeds are 1/30, 1/60, 1/125, etc. Isn't it time to abandon the idea of shutter speeds only being meaningful in terms of what fraction of light they let through relative to each other, and introduce the idea of the shortest time required to fill any photo site on a sensor with photons?

If we want to use ETTR to the maximum of the camera's capability then for a given aperture, the camera needs full control over the shutter speed so that it can choose the one that delivers the most amount of light without clipping (in the raw data) rather than just a handful of fractions.

Without this, I don't believe that any "ETTR mode" on the camera can achieve 100% of the camera's potential - except for those lucky  situations where one of the small number of selected fractions does indeed provide optimal light setting.
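To put a number on the cost of that handful of fractions, here's a back-of-the-envelope sketch in Python (hypothetical, not modelled on any real camera's firmware): build a 1/3-stop shutter ladder, pick the longest speed that doesn't exceed the ideal ETTR time, and measure the light given up:

```python
def third_stop_ladder(t_min=1/8000, t_max=30):
    """A 1/3-stop shutter ladder: each step lets in 2**(1/3) ~ 26% more light."""
    times = []
    t = t_min
    while t <= t_max:
        times.append(t)
        t *= 2 ** (1 / 3)
    return times

def nearest_safe_speed(ideal_time, ladder):
    """Longest available time not exceeding the ideal (longer would clip)."""
    return max(t for t in ladder if t <= ideal_time)

ladder = third_stop_ladder()
ideal = 1 / 212                             # the "optimal" exposure from the post
chosen = nearest_safe_speed(ideal, ladder)  # lands on 1/250
light_lost = 1 - chosen / ideal             # fraction of the optimal exposure given up
```

On this ladder the camera settles on 1/250 and gives up roughly 15% of the optimal exposure - exactly the kind of gap a freely variable electronic timer could close.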
Title: Re: Nick Devlin's article
Post by: Wayne Fox on August 15, 2011, 09:13:01 pm
With 6 billion images uploaded to Facebook (http://www.luminous-landscape.com/forum/index.php?topic=56802.0) every 2 months I have a good guess as to what market camera manufactures are concentrating on. It's certainly not esoteric features for a small group of people who know about the zone system.

Considering that 99% of those 6 billion came from camera phones, and considering that high end cameras are extremely profitable compared to the cut-rate market of point-and-shoots, I think Canon and Nikon are still very much interested and indeed have been putting a substantial investment into R&D.  Personally I think one reason it's been longer than normal for new models is that a 30+MP sensor just isn't enough; they're trying to get to 36+, and at least Canon realizes they need better glass, so several lenses have seen some nice improvements in the meantime.

But for the past 5 to 7 years they have been selling more DSLRs than ever ... how many more DSLRs are sold each year compared to 35mm film cameras only 15 years ago?  It's a big number. Additionally, digital has created a pretty nice upgrade path, something not typical with film cameras. This is slowing down, but they are still working on ways to outdo each other.

A lot of intriguing ideas, but let's be honest ... there are always threads about how what we have is good enough, so does it really matter what they do?  New things would be nice, and I'm guessing someday the whole DSLR thing is going to implode on itself with some of the amazing technologies being researched today, but in the meantime what a time to be alive and seeing all of this happen.
Title: Re: ETTR: time to retire conventional thinking about shutter speeds
Post by: Wayne Fox on August 15, 2011, 09:21:53 pm
To follow up further, if a shutter speed of 1/212 of a second gives optimal exposure, why shouldn't we be able to choose that instead of something close to it?

If we can have 14bits of colour precision, why can't we have similar granularity in our shutter speeds?

Because of some old preconceived ideas of how fast a shutter can move?

The "standard" shutter speeds are 1/30, 1/60, 1/125, etc. Isn't it time to abandon the idea of shutter speeds only being meaningful in terms of what fraction of light they let through relative to each other, and introduce the idea of the shortest time required to fill any photo site on a sensor with photons?

If we want to use ETTR to the maximum of the camera's capability then for a given aperture, the camera needs full control over the shutter speed so that it can choose the one that delivers the most amount of light without clipping (in the raw data) rather than just a handful of fractions.

Without this, I don't believe that any "ETTR mode" on the camera can achieve 100% of the camera's potential - except for those lucky  situations where one of the small number of selected fractions does indeed provide optimal light setting.

I've thought for a long time how nice it would be to have an option that based exposure on length of time per site to fill to a set saturation point.  For those occasions where shutter speeds really aren't critical this would be sweet. 

Current shutters are still partially mechanical (only the front "curtain" on Canons is electronic, i.e. simulated), so accuracy is limited. Currently cameras offer shutter speeds in 1/3-stop increments, so there are two other choices between each of the speeds you mention, which seems to be enough granularity for about any shooting condition except high-contrast subjects, where an HDR mode based on this idea would really be sweet.

I don't think it's possible with current sensors, based on how they charge each pixel then read the charges, but here's hoping some bright person somewhere is thinking outside the box and developing a sensor that might be able to do this.
Title: Re: Nick Devlin's article
Post by: lenelg on August 16, 2011, 04:04:09 am
Well, Minolta made a talking camera. That was pretty much a bomb.
Don't forget that whether an innovation succeeds or not usually has more to do with the niggly details of how it is implemented than with the basic concept.
"It ain ´t what you do, it´s the way that you do it"..
Title: Re: ETTR: time to retire conventional thinking about shutter speeds
Post by: dreed on August 16, 2011, 04:09:16 am
I've thought for a long time how nice it would be to have an option that based exposure on length of time per site to fill to a set saturation point.  For those occasions where shutter speeds really aren't critical this would be sweet. 

Current shutters are still partially mechanical (only the front "curtain" on Canons is electronic, i.e. simulated), so accuracy is limited.

I was thinking this myself, but then consider this.

If you've chosen 1/1000 for the time a shutter is open, what is the acceptable error in exposure length? 1%? 10%? I've got to believe that it is less than 1%. A 10% error margin would mean 1/60 was anywhere from 1/54 to 1/66 - unacceptable. At 1%, 1/100 is from 1/99 to 1/101. So whilst I agree there is likely some error in the precision, it's also got to be very small or else it would be a very big problem. Thus I put that concern out of my head.

Quote
Currently cameras offer shutter speeds equal to 1/3 stop increments, so there are two other choices in between each of the speeds you mention, which seems to be enough granularity for about any shooting condition except high contrast subjects where an HDR mode based on the the this idea would really be sweet.

To think about this differently: if 1/50 does not give me a histogram that is far enough to the right, I've got to let in 25% more light and shoot at 1/40. 25% is relatively huge. What if 1/40 clips your red and blue channels but 1/50 is still not close to the maximum? What if the best exposure would be 1/48?

At the very least, every digital camera should allow both 1/3 and 1/2 stop selection as an available choice, so that I get 1/50, 1/45 and 1/40. This is currently not the case. But even when it is possible, there is a 10% drop (or 11% increase) between 1/50 and 1/45.

Or to think about it differently, the accuracy with which a camera using 1/3 stops to meter a scene is really rather small - with a fixed aperture and 1/3 stops in use, the camera has an error margin of 10%. Anything that is properly exposed with 1/46 to 1/55 will be exposed at 1/50. If the camera can use with 1/3 and 1/2 stop shutter speeds, the accuracy improves to 5%. Is that good enough?

If your camera can provide you with a highly accurate 1/1000 of a second exposure with the shutter, why can't it provide you with an exposure of the scene with just as much accuracy?

Or to put this another way, a medium format digital back from Leaf or Phase One that costs $40,000 and is only able to meter a scene with an accuracy of 5% (assuming it can use both 1/3 and 1/2 stops conventional stops.) Is that acceptable?

Quote
I don't think it's possible with current sensors, based on how they charge each pixel then read the charges, but here's hoping some bright person somewhere is thinking outside the box and developing a sensor that might be able to do this.

The sensor collects charge and there's some circuitry somewhere that drains each pixel to read a value through a DAC. Somewhere there is a "timer" that expires, triggering that to happen. That is now going to be a byte or two stored in the cameras memory that are loaded somewhere as a count down for something to expire. The only part of the circuit that is digital is the memory in which those numbers are stored. Capacitors that are used to hold charge are all analogue. It's not the sensor that is the problem but the circuitry around it.

Consider that the sensor can be read in 1/2000th of a second or 2 seconds. The sensor is just a bucket. To think of it differently, imagine trying to fill a bucket with water from water falling over a waterfall. Is it the bucket itself that determines whether or not it can be filled or is it the decision about how long to leave it under the water?
Title: Re: Nick Devlin's article
Post by: MarkL on August 16, 2011, 05:40:04 am
The only one in the list that would interest me is ETTR, the rest - meh, I guess it depends on what you shoot. I have no desire to carry a bulky tablet into the field and even less desire to draw zone system stuff all over it while shooting.

1) Better manufacturing tolerances on bodies and lenses, no more 'fine tune' and less sample variation issues
2) Failing (1), bodies and lenses are profiled before release down to the aperture and zoom setting (lenses) which are electronically readable and can be compensated for when focusing
4) Built in electronic module for radio flash triggering (think radio CLS)
5) Histogram based on RAW data
6) Live histogram in the viewfinder (a la Fuji X100, it's so useful)
7) Wireless settings sync between bodies for multiple camera shooters
Title: Re: Nick Devlin's article
Post by: telmorrf on August 16, 2011, 06:07:54 am
I would really like to see DNG raw support.
Instead of having the stupid "Portrait", "Landscape", "Netral", etc... modes, cameras should have the ability to be uploaded with lrpresets or XMPs, so that we could shoot with our favourite presets.
Title: Re: Nick Devlin's article
Post by: Hans Kruse on August 16, 2011, 07:47:48 am
Interesting article and a little overstated at times, but in general I have also been wondering why the powerful computer in the camera have not been used more and why "advanced" features were not available for those who wanted them and not by default. It's reasonable to expect that camera makers don't want to make the cameras (even professional ones) frightening to use because of all the options, but then let the rest of use them and enable them at our hearts content.

A couple of things in the article that I'd like to comment.

Why can’t Canon even master the art of mirror lock-up?

In the later cameras the SET button can be assigned to MLU and you can place the MLU command in mymenu. But the most important is that Canon has made the MLU redundant in the later cameras via live view and silent mode. With silent mode the first curtain is electronic and therefore no shutter movement which even with MLU can create vibrations. Alas on my 1Ds mkIII live view does not have electronic first curtain .... However live view is a god send for using TS-E lenses and for DOF preview. And this came only 4 years ago.

The megapixel race is largely over.

Really? Since the expectation is that the next round of full frame cameras will be in the mid 30MP range and what we have now is 21-24MP can we say that the MP race is over? Already in 2002 the 1Ds came with 11MP and now 10 years later Canon 1Ds mkIII has 21MP and we expect 30+MP. Isn't that a continuation of the MP race? The Medium format has moved from less than 20MP (cropped sensor) to now 80MP (full frame) in a few years. The pixel density of the APS-C cameras are now close to 20MP (18MP on Canon 7D, 60D, 550D and 600D) which is equivalent to 46MP on full frame. This likely will come in a full frame from Canon (Nikon and Sony) in 5-6 years time.

Voice control.

It would be a good option to have, but I'm not sure this appeals that much to me. I never used it with any mobil phone. But what I would like to have is customization such that I can program the camera much more than I can now. The 1Ds mkIII can save all camera settings on the CF card given a name. E.g. "tripodmlu" (note the limit of 8 characters!!) However what a thought saving them on a CF card! This means that I have to synchronize all my CF cards with these camera settings files and never format the card in the camera as otherwise my settings have gone. Also loading the settings is slow like h*** and not suited for a quick change when a new situation occurs. So in essence not thought from the photographers needs. The 5D mkII, 7D etc. is much better in that regard having custom settings on the program wheel (C1, C2 and C3), but then of course you need to remember what settings is on e.g. C1! So yes, I agree that this sucks.

Live View Focus Masking

This would be a cool feature and especially in live view. But in all fairness live view is pretty good already to check focus and to check DOF as long as there is light enough. I find AF on the 1Ds mkIII to be very precise as I hardly ever have to change focus when checking in live view if I have a suitable subject to focus on with AF. Of course this requires a single AF point for focussing and notice if the AF got the focus where you wanted it (if at all possible like e.g. an animal behind some vegetation). I have always been wondering why a simple thing like a DOF calculator with live DOF limits were not displayed in the viewfinder (if enabled) since the camera knows the aperture and focal length and where the focus is (e.g. focussed with the AF-ON button or manually as long as this is transmitted to the camera).

Expose to the Right Exposure Mode

Yes, this would be really great to have as long as it is based on analysis of the RAW file and with thresholds that can be set by the user. It's clear that any camera will have the minimum noise at base ISO and exposed optimally. The fact that newer cameras have a linear curve of signal to noise as a function of ISO does not mean that ETTR is irrelevant. The imprecision of the histogram (although slightly helped by setting color space to Adobe RGB) makes it difficult to ETTR without bracketing. In difficult scenes I will bracket with one stop between and choose 2, 3 or 5 pictures in a bracket sequence. A challenge to step through these with MLU and a cable release at low shutter speeds that may cause vibrations from the shutter. Fortunately exposure blending have become more practical than the painting on layers method in Photoshop via updates to Photomatix Pro and other HDR programs where you can choose a relatively natural look and work from there in e.g. Lightroom on the blended image. In many cases I find an optimal exposure on a high DR scene can even be tweaked in Lightroom to show the entire dynamic range (1Ds mkIII and some are better like D3X).

Now…..give it all to me on my iPad

Well certainly this would be great to have, but I would prefer this on an iPhone or even better having the LCD on the camera act like an iPhone wrt. zooming in and out of live view and picture already taken and with histograms that could be based on the part visible in the zoom so one could investigate an area of highlights and indicators for how many stops over or underexposure. The reason I don't like the iPAD is yet another gadget to bring when I always already have the iPhone in my pocket so why not use the iPhone over Bluetooth or WiFi next to the camera for control. But the basic idea is the same.

Touch-Pad Based Zone System

If we had the proposed type of ETTR then it is simple to bracket e.g. 5 stops down from perfect exposure using todays camera technology and then blend the pictures in programs like Photomatix Pro using the exposure blending option. I'm the opinion that what we need is even more refined post processing options that makes it easy to do the blending without having to paint layers in Photoshop. I love to take the pictures but have no aversion what so ever about post processing at the computer. In my view that's when the picture emerges and the real potential comes out and even in ways that never was imagined at the time of capture. I do realize that some people never want to sit behind a computer screen and for them this proposal may make a lot of sense. So for me this proposal is not high on the list.

we have gotten for the last five years have been unimaginative products produced by a creatively stunted industry

Within the last 5 years we have got live view on almost all DSLR's. I don't know how many asked for this feature before it came, but I do remember how many (including myself ;) ) who didn't think this was something special. All DSLR's now also have auto ISO which is a great feature and even Canon can now spell to auto ISO! It amused me reading the manual for the 1Ds mkIII and didn't find the word auto ISO anywhere although this camera does have a decent implementation of auto ISO (although not in manual mode). The "not invented here" syndrome seems to be powerful and alive :) We also got micro adjustment within the last 4 years so we could fine tune the AF precision on the given camera body and lens combination. Although the comment above about the industry seems to be related only the camera manufacturers, the tools we have on our computers have been vastly improved within the last 5 years. So yes, we would like to see more innovation and I would hope that the camera manufacturers have prototypes in their labs that contains some of what is being discussed here and hopefully also some that we haven't even thought about yet.

Title: Re: Nick Devlin's article
Post by: ErikKaffehr on August 16, 2011, 12:50:10 pm
Hi,

A lot of good points.

I'd just add that I see lot of good reason to increase megapixels as this can give us true resolution without aliasing, so we can say good bye to both Optical Low Pass filters and Moiré at the same time. The disadvantage is some loss of DR (in the DxO sense) and larger files sizes.

Best regards
Erik

Interesting article and a little overstated at times, but in general I have also been wondering why the powerful computer in the camera have not been used more and why "advanced" features were not available for those who wanted them and not by default. It's reasonable to expect that camera makers don't want to make the cameras (even professional ones) frightening to use because of all the options, but then let the rest of use them and enable them at our hearts content.

A couple of things in the article that I'd like to comment.

Why can’t Canon even master the art of mirror lock-up?

In the later cameras the SET button can be assigned to MLU and you can place the MLU command in mymenu. But the most important is that Canon has made the MLU redundant in the later cameras via live view and silent mode. With silent mode the first curtain is electronic and therefore no shutter movement which even with MLU can create vibrations. Alas on my 1Ds mkIII live view does not have electronic first curtain .... However live view is a god send for using TS-E lenses and for DOF preview. And this came only 4 years ago.

The megapixel race is largely over.

Really? Since the expectation is that the next round of full frame cameras will be in the mid 30MP range and what we have now is 21-24MP can we say that the MP race is over? Already in 2002 the 1Ds came with 11MP and now 10 years later Canon 1Ds mkIII has 21MP and we expect 30+MP. Isn't that a continuation of the MP race? The Medium format has moved from less than 20MP (cropped sensor) to now 80MP (full frame) in a few years. The pixel density of the APS-C cameras are now close to 20MP (18MP on Canon 7D, 60D, 550D and 600D) which is equivalent to 46MP on full frame. This likely will come in a full frame from Canon (Nikon and Sony) in 5-6 years time.

Voice control.

It would be a good option to have, but I'm not sure this appeals that much to me. I never used it with any mobil phone. But what I would like to have is customization such that I can program the camera much more than I can now. The 1Ds mkIII can save all camera settings on the CF card given a name. E.g. "tripodmlu" (note the limit of 8 characters!!) However what a thought saving them on a CF card! This means that I have to synchronize all my CF cards with these camera settings files and never format the card in the camera as otherwise my settings have gone. Also loading the settings is slow like h*** and not suited for a quick change when a new situation occurs. So in essence not thought from the photographers needs. The 5D mkII, 7D etc. is much better in that regard having custom settings on the program wheel (C1, C2 and C3), but then of course you need to remember what settings is on e.g. C1! So yes, I agree that this sucks.

Live View Focus Masking

This would be a cool feature and especially in live view. But in all fairness live view is pretty good already to check focus and to check DOF as long as there is light enough. I find AF on the 1Ds mkIII to be very precise as I hardly ever have to change focus when checking in live view if I have a suitable subject to focus on with AF. Of course this requires a single AF point for focussing and notice if the AF got the focus where you wanted it (if at all possible like e.g. an animal behind some vegetation). I have always been wondering why a simple thing like a DOF calculator with live DOF limits were not displayed in the viewfinder (if enabled) since the camera knows the aperture and focal length and where the focus is (e.g. focussed with the AF-ON button or manually as long as this is transmitted to the camera).

Expose to the Right Exposure Mode

Yes, this would be really great to have as long as it is based on analysis of the RAW file and with thresholds that can be set by the user. It's clear that any camera will have the minimum noise at base ISO and exposed optimally. The fact that newer cameras have a linear curve of signal to noise as a function of ISO does not mean that ETTR is irrelevant. The imprecision of the histogram (although slightly helped by setting color space to Adobe RGB) makes it difficult to ETTR without bracketing. In difficult scenes I will bracket with one stop between and choose 2, 3 or 5 pictures in a bracket sequence. A challenge to step through these with MLU and a cable release at low shutter speeds that may cause vibrations from the shutter. Fortunately exposure blending have become more practical than the painting on layers method in Photoshop via updates to Photomatix Pro and other HDR programs where you can choose a relatively natural look and work from there in e.g. Lightroom on the blended image. In many cases I find an optimal exposure on a high DR scene can even be tweaked in Lightroom to show the entire dynamic range (1Ds mkIII and some are better like D3X).

Now…..give it all to me on my iPad

Well certainly this would be great to have, but I would prefer this on an iPhone or even better having the LCD on the camera act like an iPhone wrt. zooming in and out of live view and picture already taken and with histograms that could be based on the part visible in the zoom so one could investigate an area of highlights and indicators for how many stops over or underexposure. The reason I don't like the iPAD is yet another gadget to bring when I always already have the iPhone in my pocket so why not use the iPhone over Bluetooth or WiFi next to the camera for control. But the basic idea is the same.

Touch-Pad Based Zone System

If we had the proposed type of ETTR then it is simple to bracket e.g. 5 stops down from perfect exposure using todays camera technology and then blend the pictures in programs like Photomatix Pro using the exposure blending option. I'm the opinion that what we need is even more refined post processing options that makes it easy to do the blending without having to paint layers in Photoshop. I love to take the pictures but have no aversion what so ever about post processing at the computer. In my view that's when the picture emerges and the real potential comes out and even in ways that never was imagined at the time of capture. I do realize that some people never want to sit behind a computer screen and for them this proposal may make a lot of sense. So for me this proposal is not high on the list.

we have gotten for the last five years have been unimaginative products produced by a creatively stunted industry

Within the last 5 years we have got live view on almost all DSLR's. I don't know how many asked for this feature before it came, but I do remember how many (including myself ;) ) who didn't think this was something special. All DSLR's now also have auto ISO which is a great feature and even Canon can now spell to auto ISO! It amused me reading the manual for the 1Ds mkIII and didn't find the word auto ISO anywhere although this camera does have a decent implementation of auto ISO (although not in manual mode). The "not invented here" syndrome seems to be powerful and alive :) We also got micro adjustment within the last 4 years so we could fine tune the AF precision on the given camera body and lens combination. Although the comment above about the industry seems to be related only the camera manufacturers, the tools we have on our computers have been vastly improved within the last 5 years. So yes, we would like to see more innovation and I would hope that the camera manufacturers have prototypes in their labs that contains some of what is being discussed here and hopefully also some that we haven't even thought about yet.


Title: Re: Nick Devlin's article
Post by: Ben Rubinstein on August 16, 2011, 01:12:45 pm
Um, if you aren't a landscape photographer that article is rather reduntant and if you are then most here are thinking the technology of the new IQ P1 backs is so cutting edge they're spending tens of thousands of dollars to upgrade!  :P :P :P

Seriously though, iphone look technology is all very nice but it's hard to moan about lack of funky features when we're still stuck with a single useable focus point, horrific shutter lag and mirror blackout, etc on a 5DII or a stone age screen and max useable iso 800 on a 1Ds III. The FF lineup at least from Canon wasn't even up to the current technology when they were released so I'm doubting much will be different when the new models are released never mind exotic goodies like focus peaking. Expect 'smart' technology on these cameras around 5 years after they became normal on phones and p&s cameras, it's been the model until now...
Title: Re: Nick Devlin's article
Post by: image66 on August 16, 2011, 02:46:40 pm
I like Nick's suggestions. The iPad like control device is specifically useful to me since I occasionally set up remote controlled cameras. What I am currently doing is running the camera tethered to a laptop and then using a screen sharing app, I'm using my iPad to control the camera from wherever I am. This comes in handy for event shoots where I have a lockdown camera up on stage someplace hidden or when photographing birds. With my own Wifi hotspot tied in, I can shoot from hundreds of feet away. I would greatly welcome not needing the laptop.

I recently wrote a dissenting opinion on zone-10.com about ETTR. It is a metering method for some limited applications. Is it useful? Absolutely, but in the end it still comes down to skillful usage of the tools at hand. No different than having a spot meter--you still gotta know not only where to point the thing but know when to disregard the suggested setting. It also comes down to the noise patterns and color mapping techniques of the particular camera. No two types of cameras will respond to pulling the exposure the same way.

For me, is it too much to ask for a focus screen which you can actually use for manual focusing?

Is it also too much to ask for a digitized analog display such as was perfected in the Olympus OM-4Ti? I don't have a clue what F7.1 1/387 means, but I know what it means when the bar graph moves halfway across the display from where I'm expecting it to be. The eye detects movement in the perepheral, but reading a digital display requires looking away from the focus screen--even for a quick glance and then the brain has to interpret it.
Title: Re: Nick Devlin's article
Post by: Jim Pascoe on August 16, 2011, 04:22:28 pm
Seriously though, iphone look technology is all very nice but it's hard to moan about lack of funky features when we're still stuck with a single useable focus point, horrific shutter lag and mirror blackout, etc on a 5DII or a stone age screen and max useable iso 800 on a 1Ds III. The FF lineup at least from Canon wasn't even up to the current technology when they were released so I'm doubting much will be different when the new models are released never mind exotic goodies like focus peaking. Expect 'smart' technology on these cameras around 5 years after they became normal on phones and p&s cameras, it's been the model until now...

Are we talking about the same 1Ds 111 that I use almost daily and have done for the last three and a half years.  Its a brilliant camera and if I had to use it for the next 20 years I'm sure it would continue to produce excellent pictures.  Do some newer cameras have better features?  Sure they do, but to claim it has a maximum useable ISO of 800 is just plain wrong.  I photograph a lot of weddings with mine, and if I had to shoot the whole day at 1600 I doubt anyone would notice in the pictures.  Would I like it to be better? Of course I would, but camera technology is a moving target and I for one cannot afford to upgrade yearly for some incremental improvements.  In fact I seriously doubt a 1Ds 1V would tempt me because the existing camera does what I want to a good enough standard. Ok, I might be tempted if it had voiced control.  But then again it's bad enough with guests shooting over my shoulder without giving them my camera settings too! :)
Title: Re: Nick Devlin's article
Post by: Hans Kruse on August 16, 2011, 04:58:47 pm
I agree, that this statement about the 1Ds mkIII makes little sense unless the conditions and requirements are clear. I have taken numerous pictures at ISO 1600 that had very good details and sharpness with quite a small amount of noise. Especially with Lightroom version 3 pictures come to life in a new way. In fact this is also true for pictures from my older cameras. A real advantage of shooting RAW that you can be really pleased by going back and redevelop old pictures at a new standard. Especially it is important to understand how capture sharpening and noise reduction goes hand in hand.
Title: Re: Nick Devlin's article
Post by: ErikKaffehr on August 17, 2011, 12:55:32 am
Hi,

A very good point about RAW-processing!

Regarding the Canon sensor I have the impression that it is lacking a bit at low ISO dynamic range, but works well with higher ISOs. So I'd suggest that a revision of the camera electronics is needed to keep up with the new Sony sensors also used by Nikon and Pentax. But from the data I seen the Canons are very competitive at high ISO and most pictures are not about extended dynamic range.

My assumption is that we are going to see more and more of mirror-less designs, that is cameras optimized for live view. With live view, focus masking makes a lot of sense. Using an auxiliary device like iPhone or iPad as advanced remote control may also be a good idea.

Best regards
Erik

I agree, that this statement about the 1Ds mkIII makes little sense unless the conditions and requirements are clear. I have taken numerous pictures at ISO 1600 that had very good details and sharpness with quite a small amount of noise. Especially with Lightroom version 3 pictures come to life in a new way. In fact this is also true for pictures from my older cameras. A real advantage of shooting RAW that you can be really pleased by going back and redevelop old pictures at a new standard. Especially it is important to understand how capture sharpening and noise reduction goes hand in hand.
Title: Re: Nick Devlin's article
Post by: Ben Rubinstein on August 17, 2011, 04:56:10 am
Guys, I owned and shot weddings with a 1DsIII, went back to 5Dc's where I really can shoot iso 1600 all day. Why don't you look at my point rather than argue the details. The point is that before we complain that we aren't having 'modern' technology how about we get current technology such as decent screens and iso which is better than a camera 3 years older than it. It just seems silly to me that we're complaining about not having cutting edge technology when we have never once gotten even current technology in a Canon camera.
Title: Re: Nick Devlin's article
Post by: msmsql on August 17, 2011, 05:12:54 am
Just found this interesting thread ... from Devlin's list I would seriously consider ETTR - the other points are of lesser interest for me.

I'd put a greater priority on some other enhancements:
1) An automatic hyperfocal focusing mode - one should be able to set CoC, aperture and focal length and the camera should autofocus the lens to the correct distance
2) An automatic focus stacking mode - one should be able to set nearest and farthest focusing distances, the increment and the camera should expose the correct number of frames
3) True, mechanical MLU - what is the problem with this?
4) An automatic HDR mode - one sets the EV delta and the camera shoots as many frames as are necessary to record both shadow and highlight details

All the above should be quite easy to implement IMHO.

What do you think?
Title: Re: Nick Devlin's article
Post by: Hans Kruse on August 17, 2011, 05:59:43 am
Guys, I owned and shot weddings with a 1DsIII, went back to 5Dc's where I really can shoot iso 1600 all day. Why don't you look at my point rather than argue the details. The point is that before we complain that we aren't having 'modern' technology how about we get current technology such as decent screens and iso which is better than a camera 3 years older than it. It just seems silly to me that we're complaining about not having cutting edge technology when we have never once gotten even current technology in a Canon camera.

Sorry, but there is really no difference between a 1Ds mkIII and a 5D mkII at ISO 1600 as long as you shoot RAW and they are exposed the same. The 1Ds mkIII in my experience is a bit more conservative, but if you normalize exposure there should not be a difference since the ISO SN curve is pretty much linear at that point. At low ISO values the 1Ds mkIII (and 5D mkII) has more noise and lower DR than desired, however good technique can pretty much compensate for that. This is not an excuse for not making it better, but it's always good to put things in perspective.

You are of course right that some aspects of the 1Ds mkIII shows off the age and it is up for a renewal which we all expect to be likely this fall. This is not an argument, but just a natural part of the product cycle.
Title: Re: Nick Devlin's article
Post by: Ben Rubinstein on August 17, 2011, 09:21:52 am
My point was that the screen on the 1DsIII was a disgrace at the time of release, ditto the AF of the 5DII. When these companies stop crippling their bodies then perhaps we can look to a future possibilities but we're still waiting for basic useabilty features by modern standards at the time of release! In 2011 that will mean a touchscreen as standard and superior LV focus like in a Panasonic or Olympus but don't expect it....
Title: Re: Nick Devlin's article
Post by: Hans Kruse on August 17, 2011, 10:40:24 am
The LCD on the 1Ds mkIII was not the higher resolution that came out around the time when the 1Ds mkIII was released. This we can argue about if this was a good move from Canon or not, but don't forget how much time goes from the time when a new product is signed off and goes into testing and during that time you will change such things. What is perhaps a bigger issue is that during the life time of the camera no change is made even to the LCD screen when you buy a brand new one e.g. 2011! Also that very little is/was improved through firmware upgrades.

The 5D mkII has a rather good AF system as long as we speak about the central AF point and the assist points around it. Using only a single of the outer AF points is less precise. I used the 5D for two years from 2006 until I bought the 1Ds mkIII in 2008 and found the AF system in the 5D much better than many claimed it be! However there were missed opportunities since the AF in the 5D requires a good focus to allow you to shoot the picture.
Title: Re: Nick Devlin's article
Post by: dreed on August 19, 2011, 03:22:20 pm
Something that used to be present on film SLRs from Canon was an "A-DEP" selection on the mode dial.

Why not take that a step further and with touch panel displaying a live view of the scene, allow the photographer to select 1 or more points on the screen that need to be "in focus" and then have the camera work out what aperture is required to deliver that?

To that end, I'd promote the idea that the aperture should be treated as a continuous field of real numbers, rather than a small set of integers.

To follow on from that idea, when it comes to "spot metering", when tripod mounted, a touch anywhere on the screen should either be able to focus the camera there or use it as the location for spot metering or both. I'm not sure that multiple spots would deliver any benefit.

I want to suggest the idea that you can use a touch screen showing a live image to select a particular object to always be in focus (for example, something swinging in the wind or an animal/child) but I'm not sure if that makes sense for tripod mounted photography. I'm not sure how useful that type of focus tracking would be if you're not tripod mounted.
Title: Re: Nick Devlin's article
Post by: aduke on August 19, 2011, 04:34:17 pm
Something that used to be present on film SLRs from Canon was an "A-DEP" selection on the mode dial.

Why not take that a step further and with touch panel displaying a live view of the scene, allow the photographer to select 1 or more points on the screen that need to be "in focus" and then have the camera work out what aperture is required to deliver that?

...

Why compromise on the focus? Allow the user to select 2 or more points and have the camera set the focus point to each of the points, producing the requisite number of images to be combined via focus stacking?

Alan
Title: Re: Nick Devlin's article
Post by: dreed on August 19, 2011, 05:37:33 pm
Why compromise on the focus? Allow the user to select 2 or more points and have the camera set the focus point to each of the points, producing the requisite number of images to be combined via focus stacking?

Alan

Because this only works for photographs where there is almost no motion in the frame.

I've had enough issues with HDR and motion to know that having to use multiple frames is always a compromise.

Title: Re: Nick Devlin's article
Post by: mikefulton on August 23, 2011, 04:12:33 pm
The article was interesting, but most of these five items he suggested are nowhere near the top of my list.

1) Voice control - interesting but many problems.  First and most obvious is use in noisy environments.  Also a problem in QUIET environments -- how many wedding photographers are going to want to be talking to their cameras when they're already concerned about the noise from the shutter?  This feature may not be completely useless, but there are enough problems with the idea that it doesn't make the top 5.


2) Live View Focus Masking -- An interesting idea, but I don't know if the capability to do this really exists.  Part of the logic behind this request is that autofocus isn't good enough, but any focus masking system would by necessity be based on that same exact system. I'm also not convinced that a camera's focusing system really knows which areas are in focus on a pixel-by-pixel basis.  Really, all the electronics can ultimately measure is image contrast.  Even in a camera with multiple focus points, it's really only weighting its contrast measurements towards the desired focus point.


3) Expose to the right mode -- This one I like.  But as with focus masking I'm not sure if the idea is really supported by the hardware at this point.

First, keep in mind that the "blink the overexposed areas" trick we see on the LIVE VIEW mode of many cameras is something that's done AFTER the frame has been captured.  Applying this idea to auto-exposure is going to require multiple captures from the sensor.

Second, also keep in mind that while the camera can tell that certain pixels are maxed out exposure-wise, it has no way to know by how much.  It could be half a stop or it could be 4 stops.  The only way the camera could bring those areas back down would be to take another exposure that's a little less than the previous one and then check the results, iterating until the highlights are no longer blown out.

Keep in mind that any exposure system that relies on processing LIVE VIEW-style frames is going to introduce a certain amount of shutter lag, because the camera is going to have to capture 1 or more frames and process them to determine the exposure, before actually taking the picture.
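The iterate-until-nothing-clips search described above can be sketched as a toy simulation. Everything here is made up for illustration: the "scene" is a short list of linear luminance values, and capture() stands in for an 8-bit live-view readout at a given EV offset.

```python
# Sketch of the iterative ETTR search: the camera steps exposure down
# until no pixels clip, which is then the brightest usable exposure.

SCENE = [0.02, 0.10, 0.45, 0.90, 1.60]   # brightest patch well over "middle"

def capture(ev_offset, scene=SCENE, clip=255):
    """Simulated 8-bit sensor readout at an EV offset from base exposure."""
    gain = 2.0 ** ev_offset
    return [min(int(v * gain * 200), clip) for v in scene]

def ettr_exposure(start_ev=2.0, step=0.5, clip=255, max_iters=16):
    ev = start_ev
    for _ in range(max_iters):
        frame = capture(ev)
        if max(frame) < clip:        # nothing blown: this is the ETTR point
            return ev
        ev -= step                   # highlights clipped, back off
    return ev

print("ETTR offset:", ettr_exposure(), "EV")
```

Each loop iteration is one live-view capture, which is exactly the shutter-lag cost described in the post: the search has to converge before the real frame can be taken.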


4) I like the idea of controlling everything from my iPad.  No reason why this can't be done now.  But it's something one would use only when the camera is locked down on a tripod.


5) I like the feature idea, and there's no reason it can't be done in-camera or via iPad control, but there's also not much reason why it needs to be done in-camera.  Combine this idea with #4, or just do it in post.



My own list would look like this:

1) Built-in WiFi -- If I can buy a little USB adapter for $15, there's certainly no reason at all why this needs to be a $600 add-on.  Anything beyond the basic entry-level DSLR should come with built-in WiFi and software to automatically stream off your images as you shoot.

2) High pixel density display - No more 60 DPI displays, please!  Something along the lines of the iPhone 4's retina display is what we really need.

3) Automatic HDR -- I think some cameras have already started to do this, but it's something that should be as standard as automatic bracketing.  In fact... it *IS* automatic bracketing, just with an extra post-processing step.

4) Built-in radio flash sync -- Ideally, this should be programmable to trigger PocketWizard or other existing systems.  But even if it's proprietary, you could still connect your receiver to another radio trigger if needed.  Should support whatever's needed for E-TTL flash.

5) Built-in GPS -- This comes in just about every $30 cheapo cellphone, so just like WiFi it should be a standard feature in anything but the most basic cameras.

6) Autofocus range limiting -- When shooting low-contrast subjects in low light, one frequently has to deal with the camera's autofocus system seeking from one end of its range to the other, trying to focus.  But if you know that your subject is always going to be within a certain range, you should be able to program that into the camera so that it doesn't waste time trying to focus out of that range.
Title: Re: Nick Devlin's article
Post by: dreed on August 29, 2011, 06:58:32 pm
Come on guys, think outside the box.

Innovation, R&D into the camera is only half the solution space for digital photography. The other half is post processing in tools such as LR.

At present, the post step is very ... manual, even when you're applying lens profiles, colour profiles, etc.

What do I mean by manual?

We have to sit there and adjust sliders until a picture "looks right."

If I've taken a picture of a building and that building is rectangular, why can't I tell LR that this trapezoid is actually a rectangle and have it solve the equations necessary to "fix" the photograph? Or maybe a simpler approach: say that these two curved edges should be straight and parallel to each other. Or pick four edges and say that they make up the top, bottom, left and right of a rectangle/square. Why can't the computer solve the equation instead of me fiddling with sliders?
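For the specific four-corners-to-a-rectangle case, the maths is well understood: the mapping is a projective transform (homography) with eight unknowns, which four point correspondences pin down exactly. A pure-Python sketch with made-up corner coordinates (a real tool would use a linear-algebra library):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """3x3 projective map sending each src corner to its dst corner."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp(pt, H):
    """Apply the homography to one point."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# A keystoned facade's corners and the rectangle they should become
trapezoid = [(100, 80), (540, 120), (600, 700), (60, 660)]
rectangle = [(0, 0), (500, 0), (500, 600), (0, 600)]
H = homography(trapezoid, rectangle)
print(warp((540, 120), H))   # lands on the matching rectangle corner
```

The hard part, as discussed below, isn't solving the equations; it's knowing which edges in the image the user means.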

With respect to colour, why can't digital cameras have an inbuilt grey card?
Put a small window on the front or back of the body that acts as a dedicated "white balance" meter?
Or a USB dongle that supplies such information to the camera digitally, so that I don't need to shoot a "grey card" photograph?
Title: Re: Nick Devlin's article
Post by: jani on August 30, 2011, 04:44:07 am
If I've taken a picture of a building and that building is rectangular, why can't I tell LR that this trapezoid is actually a rectangle and have it solve the equations necessary to "fix" the photograph? Or maybe a simpler approach: say that these two curved edges should be straight and parallel to each other. Or pick four edges and say that they make up the top, bottom, left and right of a rectangle/square. Why can't the computer solve the equation instead of me fiddling with sliders?
That's because it's hard for a computer to see which lines are the relevant ones, and which are not.

Our brains perform a lot of near-magical shortcuts to build effective illusions about reality.

A computer, however, does not  have the luxury of seeing things as they should be.

Your first example, telling the computer to convert a specific trapezoid to a rectangle, is something which is easy enough in itself. You find that in the perspective correction tool in Photoshop.

More complex shapes are far from trivial.

I'm not saying that we won't get any of the features you yearn for sometime in the near or distant future, just that these things are not easy to do.

Quote
With respect to colour, why can't digital cameras have an inbuilt grey card?
Put a small window on the front or back of the body that acts as a dedicated "white balance" meter?

Most DSLR cameras have such a meter working alongside the main sensor, and have had for a long time (photos courtesy of Canon and Nikon); the meter is between the grip and the lens mount:

(http://shop.usa.canon.com/wcsstore/eStore/images/5dmarkii_24-105kit_3_xl.jpg)
(http://cdn-4.nikon-cdn.com/en_INC/IMG/Assets/Digital-SLR/2010/25432-Nikon-D300/Views/25432_D300_front.png)

What you want is one that magically "just works", I suppose.

But in real life, you have mixed lighting conditions, and need to pick what you think looks natural or good.
Title: Re: Nick Devlin's article
Post by: dreed on August 30, 2011, 10:34:33 am
That's because it's hard for a computer to see which lines are the relevant ones, and which are not.

Our brains perform a lot of near-magical shortcuts to build effective illusions about reality.

A computer, however, does not  have the luxury of seeing things as they should be.

Your first example, telling the computer to convert a specific trapezoid to a rectangle, is something which is easy enough in itself. You find that in the perspective correction tool in Photoshop.

More complex shapes are far from trivial.

I'm not saying that we won't get any of the features you yearn for sometime in the near or distant future, just that these things are not easy to do.

The perspective correction tool is what I was referring to with "sliders".
It is a clumsy method to correct distortion.
As is the "distortion slider".

I'm aware that complex shapes are trivial, but the behaviour of light through a camera lens is not random.

I suppose what I'm saying is rather than try and play with a bunch of knobs to get the picture looking right, I'd rather tell LR what it should look like and have LR work out what it needs to do in order for the picture to look that way.

Including colour.
Title: Re: Nick Devlin's article
Post by: jani on August 31, 2011, 03:39:55 am
The perspective correction tool is what I was referring to with "sliders".
It is a clumsy method to correct distortion.
As is the "distortion slider".
Yes, they are fairly clumsy, I'm not disagreeing with that.

Quote
I'm aware that complex shapes are trivial,
Surely you mean "non-trivial"?

Quote
but the behaviour of light through a camera lens is not random.
It's not quite random (quantum physics temporarily ignored ;)), but if you pick an arbitrary camera lens, the behaviour of light appears arbitrary.

Even if e.g. Lightroom has a profile for a Nikon AF-S 50mm f/1.8G on a Nikon D3s, that does not mean that your sample of the same lens is identical. Optically speaking the light will take a different path through your lens, as compared to the profile.

The differences may be minuscule and invisible even to pixel peepers, or they may be easily identifiable.

Even so, correcting for distortions is not the same as correcting perspectives to match your artistic vision of what's "looking right". Only you can make that decision.
Quote
I suppose what I'm saying is rather than try and play with a bunch of knobs to get the picture looking right, I'd rather tell LR what it should look like and have LR work out what it needs to do in order for the picture to look that way.

Including colour.
Unfortunately, at this stage of technological development, that means that you have to tell LR by using "a bunch of knobs", or have someone else do it for you. :)
Title: Re: Nick Devlin's article
Post by: dreed on September 01, 2011, 09:21:25 am
Even so, correcting for distortions is not the same as correcting perspectives to match your artistic vision of what's "looking right". Only you can make that decision. Unfortunately, at this stage of technological development, that means that you have to tell LR by using "a bunch of knobs", or have someone else do it for you. :)

Right, but this thread is not about what we can do now, but "what if's".

What can the camera do to make taking photographs better?

And I extended that to the post-processing software with the talk about LR.

At the very least, I want to tell LR that a set of edges should be parallel or that a set of four edges should make a rectangle/square and for it to "work it out."


Something that I wonder about from time to time is sensor based auto-ISO.

What's that, you might ask?

It's the ability of the sensor to have some parts operate at ISO 100 and other parts at ISO 200, thereby pulling all of the "darker tones" up. I don't know if that's worthwhile, as the introduced noise may mean that you may as well do it in post.

Something else that has popped up in my mind of late: if a camera can do face recognition, why can't it do object recognition? So if you're trying to take a picture of your cat chasing a laser pointer, the camera can keep track of the object (cat) that you initially focused on, as long as it stays in frame, and keep the lens focused on it. It would apply to birding as well. This may be substantially harder than recognising and tracking faces, because faces are made up of a rather typical set of features - and colours too.
Title: Re: Nick Devlin's article
Post by: dreed on September 01, 2011, 11:50:31 pm
What use is image stabilisation to landscape shooters that are always tripod mounted?

Wouldn't it be more useful to have "wind stabilisation" so that the lens compensated for camera movement due to wind and not hands?

And so that the lens doesn't need to guess about how much movement is required, why not attach a USB device to the camera that monitors wind speed and direction relative to the camera, feeds that information into the camera so that it can then direct the lens to apply correction?

Through experimentation today, it would seem that IS can be of use like this (although it is somewhat limited), but given that wind can actually be measured, why not?
Title: Re: Nick Devlin's article
Post by: duane_bolland on September 02, 2011, 01:54:20 am
I have no interest in any of these five technologies. 
Title: Re: Nick Devlin's article
Post by: stamper on September 02, 2011, 04:15:30 am
Quote

It's the ability of the sensor to have some parts operate at ISO 100 and other parts at ISO 200, thereby pulling all of the "darker tones" up. I don't know if that's worthwhile, as the introduced noise may mean that you may as well do it in post.

Unquote

In a good quality full frame camera this shouldn't be an issue. The difference of 1 stop should not be noticeable if you consider that noise isn't a problem at ISO 800 in, say, a D700. This issue of noisy shadows at base ISO seems to be a hot one in various forums. It is overblown by posters with a scientific bent rather than a practical one.  ::)
Title: Re: Nick Devlin's article
Post by: stamper on September 02, 2011, 04:16:10 am
I have no interest in any of these five technologies. 

And?
Title: Re: Nick Devlin's article
Post by: jani on September 05, 2011, 06:02:43 am
Right, but this thread is not about what we can do now, but "what if's".
What I'm trying to say is that that particular "what if" may be seriously out of reach for many, many years to come. Mind-reading is really, really hard, and we've only just scratched the surface of it.

Look at how far your average powerful PC or Mac has come on this road, and then consider that processing power in a camera is far, far less because of power requirements.

Wishful thinking is nice, though, and I want a camera with dynamically adapting, floating lenses, so that I don't need to carry a huge backpack, and a neural fourth-generation interface. ;)
Title: Re: Nick Devlin's article
Post by: dreed on September 05, 2011, 07:34:05 pm
Some ways in which LR could "enable" better digital photography...

Introduce the focus equivalent of HDR - call it HFR. The idea being that you could take two photographs (for example) at f/2.8 (say) of subjects that are 50 meters apart, and have LR take the in-focus parts of both photographs and merge them into one that has the in-focus bits of each plus the background from one or the other. This would allow us to shoot at wider apertures, avoiding the diffraction penalty of stopping down, without having to sacrifice depth of field.

If that could be made to work, and work in-camera (like HDR does now), then you could pick out two boundary points that you want in focus on the LCD screen, and the camera could calculate the distances, take into account the lens and f-stop in use, and then step the focus from one point to the other, taking a photograph at each interval, until a continuum of photographs spans the distance between the two selected points in the field of view.
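The focus-stepping plan can be sketched with standard thin-lens depth-of-field approximations: each shot's far DoF limit becomes the next shot's near limit, so the slices tile the whole span. The function name and the numbers (50mm at f/8 on full frame, distances in metres) are illustrative:

```python
# Sketch of an in-camera focus-bracketing plan: list the focus
# distances whose depth-of-field slices tile [near, far].

def focus_steps(near, far, focal_length, f_number, coc):
    hyperfocal = focal_length ** 2 / (f_number * coc)
    steps, covered = [], near
    while covered < far:
        if covered >= hyperfocal:       # the rest is sharp focused at H
            steps.append(hyperfocal)
            break
        # Focus distance whose near DoF limit sits exactly at `covered`
        d = hyperfocal * covered / (hyperfocal - covered)
        steps.append(d)
        # Far DoF limit of that shot becomes the next near boundary
        covered = (hyperfocal * d / (hyperfocal - d)
                   if d < hyperfocal else float("inf"))
    return steps

for d in focus_steps(1.0, 5.0, 0.05, 8, 30e-6):
    print(f"focus at {d:.2f} m")
```

Note how the step spacing grows with distance: most of the frames are needed up close, which is why macro shooters feel this problem hardest.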

Introduce a new method to modify colour. Rather than move around a bunch of sliders that change hue, saturation, exposure, etc., give the user a palette of colours into which the currently selected pixel/region can be transformed. Maybe allow the user to select which transformations can or cannot be used. This then allows me to pick the colour that I want the leaves or sky to be, rather than try to work out which particular sliders will give me the look that I want. Maybe this requires multiple colour choices to be made so that a proper transformation equation can be built? I don't know if this would work, but it seems like an interesting idea to play with...
Title: Re: Nick Devlin's article
Post by: dreed on September 14, 2011, 09:53:53 pm
Whilst shooting over the weekend, it occurred to me that ETTR is rather tricky in situations where the light is constantly changing.

At both sunrise and sunset, it would seem that exposure times are better thought of as calculus problems, because the light level at the start of a (say) 30 second capture can be different than at the end.

But rather than try and build into cameras a method to simulate pixel exposure, maybe the pixels themselves should drive the exposure.

That is, when a pixel on the sensor reaches a given "fullness", let's say 95%, the sensor itself ends the exposure.

So rather than the user telling the camera how long to keep the shutter open for, the camera tells the user how long the shutter was open for in order for the brightest pixel to fill to a given percentage.

Being ignorant of the physics involved at a microscopic level, I have no idea if the above is at all feasible/possible. But as a photographer that's keen on the ETTR idea, it sounds nice :)
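Whatever the hardware feasibility, the behaviour itself is easy to simulate: integrate a changing light level in small time slices and close the "shutter" the moment the brightest well hits 95%. The scene values and the fading light curve below are made up for illustration.

```python
# Toy simulation of a self-terminating exposure driven by well fullness.

def self_timed_exposure(scene, light, full_well=1.0, stop_at=0.95,
                        dt=0.01, max_time=60.0):
    """Integrate until the brightest well reaches stop_at * full_well."""
    wells = [0.0] * len(scene)
    t = 0.0
    while t < max_time:
        level = light(t)                  # light changes during the capture
        wells = [w + s * level * dt for w, s in zip(wells, scene)]
        t += dt
        if max(wells) >= stop_at * full_well:
            break
    return t, wells

# A sunset: light fades exponentially while the shutter is open
fading = lambda t: 0.2 * (0.9 ** t)
t, wells = self_timed_exposure([0.1, 0.5, 1.0], fading)
print(f"shutter closed after {t:.2f} s")
```

The camera would then report the exposure time it chose, rather than being told one up front, which is exactly the ETTR-by-construction behaviour described in the post.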
Title: Re: Nick Devlin's article
Post by: ErikKaffehr on September 17, 2011, 09:27:52 am
Hi,

Not very feasible! There are some 20-80 megapixels to be scanned, several thousand times a second, on battery power. It would be feasible to design circuitry for the feature, but you would perhaps prefer to use the silicon area to collect photons?

Don't want to be negative, just trying to put things in another perspective...
Best regards
Erik

Whilst shooting over the weekend, it occurred to me that ETTR is rather tricky in situations where the light is constantly changing.

At both sunrise and sunset, it would seem that exposure times are better thought of as calculus problems, because the light level at the start of a (say) 30 second capture can be different than at the end.

But rather than try and build into cameras a method to simulate pixel exposure, maybe the pixels themselves should drive the exposure.

That is, when a pixel on the sensor reaches a given "fullness", let's say 95%, the sensor itself ends the exposure.

So rather than the user telling the camera how long to keep the shutter open for, the camera tells the user how long the shutter was open for in order for the brightest pixel to fill to a given percentage.

Being ignorant of the physics involved at a microscopic level, I have no idea if the above is at all feasible/possible. But as a photographer that's keen on the ETTR idea, it sounds nice :)
Title: Re: Nick Devlin's article
Post by: dreed on September 17, 2011, 12:42:33 pm
Hi,

Not very feasible! There are some 20-80 megapixels to be scanned, several thousand times a second, on battery power. It would be feasible to design circuitry for the feature, but you would perhaps prefer to use the silicon area to collect photons?

Don't want to be negative, just trying to put things in another perspective...

I wasn't thinking of scanning the pixels; rather, the pixel itself could signal the sensor to end the exposure when its photon well reaches a certain level of "fullness".

To put it in more ordinary terms, many bathroom sinks have an "overflow hole" to prevent the sink from overflowing. If water running down that overflow hole could cause the sink to unblock and empty (and for all of that to be automatic), then that's closer to what I'm thinking. Now you would just need to arrange 20 million sinks and be able to empty (and measure the amount of water in) each one of them when any one starts to drip water through its overflow hole. As an analogy it isn't perfect; it's just meant to illustrate the idea.

And the idea being that the sensor itself decides when to close the shutter, based on when any of its pixels reaches a certain threshold of its photon capacity.

Undoubtedly this requires a completely different pixel and grid design than what is used today. But then what we use today is more or less designed to fit in with how we've used film rather than starting afresh...