Another point I've made here before, that underlines what I wrote above, is that I really doubt that had I been born into the era of digital photography, I would have made it a career. I have neither a natural aptitude for thinking in digital science, nor any urge to know more about it; I have learned enough to do what I do, and only do that because of the love for photography nurtured long ago.
In essence, jumping in now has worked for me because of my early and very long experience with film and darkrooms. Those years let me carry forward a set of personal conceptions of what images can look like and still appear "natural"- or, if you prefer, convincingly un-effed to death with PS or whatever- even if some recent ones have stretched my imagination quite far into manipulation. But I believe they have also looked realistic enough to work.
But none of that is thanks to a camera's set of tricks.
I guess I have the opposite experience, having started photography and computing at about the same time: the late 1970s, as a child. My parents were academics- my father an Electrical Engineer, my mother a Pure Mathematician who made a living solving other university researchers' crazy hard-to-solve computer program bugs. Both also photographed as a hobby. I played with my first cameras around the same time I started playing with logic gates and oscilloscopes, in the university film development labs and the electronics labs.
I became a professional photographer the same year Canon introduced their first dSLR. I realised on a photographic trip to LA that if I'd bought the D30 before the shoot, it would have paid for itself in film processing costs alone. And it would have saved the god-awful flog of getting film scanned.
My business model relies on the low cost and relatively high volumes that digital facilitates. Digital and the web are the reason I could go pro at all.
In the interim, I was a professional scientist- a particle physicist, working in experiments where digital data were everything and everywhere. It's probably not surprising that I just see the camera as the first step in the data processing chain.
I was also a hobbyist landscape and people photographer who was never really satisfied with the results. (Film's over-rated. I hate grain, and even Provia 100 had too much for my tastes. Heresy, but for my particular photographic style and business, true).
It's great that our current cameras support traditional working methods.
But what excites me more is that they've also democratised and opened up whole areas of photographic art that were previously prohibitively slow and painstaking for all but a few obsessives to really contemplate. And some which were flat out impossible.
A great example of this is multi-image capture. It was always possible to do image stitching, even in the darkroom. But making big panoramas was tough- matching the exposure to compensate for lens vignetting, for example. People used to do HDR by selective exposure of prints from different negatives: on relatively easy-to-cut-out subjects like the full moon and a moonlit landscape, say. They did the sort of tone mapping that selective dodging and burning represents to reduce the dynamic range to that which a print can comfortably hold.
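That selective dodge-and-burn trick is, conceptually, what a simple digital exposure blend does. Here's a toy sketch in pure Python- hypothetical 8-bit luminance values and a crude mid-grey weighting of my own invention, not any real camera's or editor's actual algorithm:

```python
# Toy exposure blend: combine two bracketed "frames" by weighting each
# pixel towards whichever exposure sits closest to mid-grey (128).
# This mimics the selective dodging/burning described above; the
# values and the triangle weight are illustrative, not a real pipeline.

def weight(v, mid=128.0):
    """Triangle weight: 1.0 at mid-grey, falling towards 0 at 0 and 255."""
    return max(1e-6, 1.0 - abs(v - mid) / mid)

def blend(frame_dark, frame_bright):
    """Per-pixel weighted average of two bracketed frames."""
    out = []
    for d, b in zip(frame_dark, frame_bright):
        wd, wb = weight(d), weight(b)
        out.append((wd * d + wb * b) / (wd + wb))
    return out

dark   = [10, 60, 120, 200]   # underexposed frame holds the highlights
bright = [80, 140, 220, 250]  # overexposed frame holds the shadows
print([round(v) for v in blend(dark, bright)])  # → [72, 113, 143, 205]
```

The point isn't the maths- it's that the whole dynamic-range-compression idea the darkroom obsessives pioneered reduces to a few lines once the data are digital.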
Cameras have facilitated this sort of set-up for years. Marking the nodal point and film plane, for example. Auto-exposure bracketing. These have been around so long that you don't tend to hear people complain about them. They just use them or not, depending on whether they are useful for them.
Now we have genuinely new fields of photography which permit us to make images which were nigh-on impossible to get with film cameras before the digital post-processing era. Like focus stacking for macro shots.
You probably could have done it in an analogue way, combining a small number of exposures for a landscape, or using split dioptres. But now it is possible to stack tens or hundreds of images to render front-to-back sharpness of extreme macro shots. That's new.
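For the curious, the merge step in focus stacking is conceptually simple: for every region of the frame, keep the pixel from whichever shot is locally sharpest there. A minimal 1-D sketch in Python- a toy contrast measure on made-up scanlines, not what any real stacking software does:

```python
# Toy 1-D focus stack merge: for each pixel, keep the value from the
# frame with the highest local contrast (absolute second difference,
# a crude sharpness measure). Real stackers work on 2-D images with
# far better measures; this just illustrates the selection idea.

def sharpness(frame, i):
    """Absolute discrete Laplacian at pixel i (0 at the edges)."""
    if i == 0 or i == len(frame) - 1:
        return 0.0
    return abs(frame[i - 1] - 2 * frame[i] + frame[i + 1])

def stack(frames):
    """Per-pixel: take the value from the locally sharpest frame."""
    n = len(frames[0])
    return [max(frames, key=lambda f: sharpness(f, i))[i] for i in range(n)]

near = [10, 10, 90, 10, 10, 10, 10]   # sharp detail on the left
far  = [10, 10, 10, 10, 10, 90, 10]   # sharp detail on the right
print(stack([near, far]))  # → [10, 10, 90, 10, 10, 90, 10]
```

Each region ends up drawn from whichever exposure had it in focus- which is exactly what you can't do on a single frame of film.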
That may not be the sort of photography that you are interested in.
But it is a great example of a place where a digital camera can make the whole thing a hell of a lot easier to do. This doesn't involve any crazy AI or anything else that people seem to be fretting about. It's a purely mechanical thing- automate the process of taking 20 or 30 or 50 shots making small changes to the focus point for each shot. This in no way compromises the photographer's skill or vision. It just automates a really dull and fiddly mechanical process.
And if you're going to do that, why not tag the shots in the metadata to facilitate the post-processing as well? And if the processing power is there in camera, why not build a JPEG preview of the focus stacked shot while you are at it?
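The automation I'm describing really is that mechanical- a loop over evenly spaced focus positions with a metadata tag on each frame. A sketch, where `Camera`, `set_focus` and `capture` are hypothetical stand-ins (no real SDK implied); the step logic is the point:

```python
# Sketch of in-camera focus bracketing: step the focus through evenly
# spaced positions, fire a frame at each, and tag every shot so the
# post-processing software can group the stack automatically.
# The camera object and its methods are hypothetical, not a real API.

def focus_positions(near, far, shots):
    """Evenly spaced focus positions from near to far, inclusive."""
    if shots == 1:
        return [near]
    step = (far - near) / (shots - 1)
    return [near + i * step for i in range(shots)]

def bracket(camera, near, far, shots, stack_id="stack-001"):
    for n, pos in enumerate(focus_positions(near, far, shots)):
        camera.set_focus(pos)                        # hypothetical call
        camera.capture(metadata={"stack": stack_id,  # tag for post
                                 "frame": n,
                                 "focus": pos})

print(focus_positions(0.0, 1.0, 5))  # → [0.0, 0.25, 0.5, 0.75, 1.0]
```

No AI, no judgement calls- just the dull fiddly bit taken off the photographer's hands.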
I remember the complaints about being able to look at shots on the back of the camera, that it would erode the traditional skill of waiting with panic for the shots to come back from the lab to make sure you'd not made an unwitting technical cock-up on an expensive shoot. That was nonsense; we're now rightly expecting better and better screens, the better to judge the shots as soon as we've taken them. For the vast majority of photographers, instant review is a significantly better way to work.
I remember the complaints about video features on dSLRs, when video is a feature you'd be hard pushed NOT to implement the moment you have live view (which lots of trad manual focus photographers were calling out for).
Sure, there's an argument that going too heavy on the video side might compromise the ergonomics on the stills side. Ergonomics is the thread that always runs parallel to new functionality. It takes time to get right, and it can be intrusive in the meantime.
But I'd have to say that my Panasonic GH4 is pretty nice ergonomically for stills and video. For sure not as nice as a full-blown digital cine camera, but also a whole lot nicer to carry up a mountain. The Sony video ergonomics are fine, and don't really impinge much on the stills ergonomics either.
Plenty of people said OIS and IBIS were gimmicks- who needed them when you had a good solid tripod, after all? The answer is anyone who doesn't like shooting with a tripod, or whose photographic subject makes that difficult. A photojournalist moving fast, or a wedding photographer moving with the bride and groom, say.
The algorithms behind IBIS are pretty fearsome- but they don't impinge on the photo-taking experience for most photographers. At least not most of the time. There are a few edge cases, like panning in video or using lenses which don't transmit focal length data, for which it is easy to turn IBIS off or control it more manually. Many photographers, myself included, can just leave the IBIS on for 95% of shots... and get sharper results handheld as a result. And with each generation the algorithms improve to the point where they now deal acceptably well with panning shots for most practical purposes, and it doesn't even seem to do much harm leaving IBIS on for shortish exposures on a tripod these days. It can help with residual vibrations or wind, I find. Especially if it means I can take a 1 kg travel tripod up the mountain rather than a 5 kg monster.
It's the same with autofocus, especially eye AF. Ferociously complicated algorithms to implement a basic photographic task- focus on the closer eye. It sounds like a gimmick until you've experienced how well it works. But conceptually it's dead easy, and actually using it is so simple (push a button) that my regular assistants prefer using the Sonys to the Hasselblad these days.
So I guess for me I like the idea of the camera offering us higher level abstractions, facilitating manual control over the things that matter (like getting a complete focus stack with all the necessary shots) rather than insisting on the old abstractions as the only way to go (forcing you to adjust focus manually by the correct amount for each of those 30-ish shots in a focus stack).
I think it is really exciting that stuff which was out of reach or plain impossible 20 years ago is now within the realm of jobbing photographers and amateurs. Astro-landscape, for example. Sure, it gets trendy, then overused, then stale, then naff. But 90% of everything is crap, and the 10% can be AWESOME.
So honestly- I think you're mistaken about a CAMERA's bag of tricks. The camera's bag of tricks is there to facilitate getting the result that the photographer wants. I just don't see why that's a bad thing, so long as the ergonomics are acceptable. Finding the right control metaphors and streamlining the UI takes time, and this stuff is new. But the "tricks" themselves are facilitators, not obstacles.
You can't walk onto a sports field with a new dSLR and a 400mm lens and expect to outsell the 20 year veteran next to you. The photographer is still key, and they probably COULD shoot with a manual focus lens and still get more saleable shots than Joe Newbie.
But you can also bet that the 20 year veteran is, in fact, using a top-flight camera with the best autofocus that money can buy. Because that automation is the right facilitator for their shots.
Like eye AF is the right facilitator for many people photographers, and focus stacking is for focus-stacking-extreme-macro photographers. Your first focus stacked macro shot will still be shit. And yet when you figure out what you are doing, having the camera automate some of the drudgery will become a feature that might decide your whole choice of camera system.
Cheers, Hywel Phillips