Missing the forest for the trees.
There's a pretty standard spiel that says metadata is somehow going to change everything. The most laughable variation on this I ever saw was in Wired, about shipping containers: "One day the information ABOUT a container will be worth more than the contents OF the container." Think about that a little bit. How much would you pay for the location of a $100 bill? A thousand dollars? I wouldn't.
Anyways. I seem to recall this same basic thesis, that somehow the interlinking of photos with other data is going to change things. The GPS information, the time and date information, blah blah blah.
It's partly right, but it's missing the forest for the trees. A photo with a bunch of metadata attached is super aweZOM and COOL but it's not new. What's new is a billion photos (with, optionally, GPS information attached).
The challenge we face today is what it means to have a billion new photos an hour, with or without GPS data, time and date stamps, facial recognition. How do we deal with this in our own lives? What does it mean for us? Even simple things like "I want to make a photo book out of the 2,000 photos I have taken of my family over the last five years" are murderously difficult. Solving that problem seamlessly and easily, rendering it into a one-click solution, is definitely going to require all that metadata, all that inter-app linkage. Pull the children's birth dates out of the calendar, correlate them with the time and date in the EXIF, do some object recognition to find the cake and the piñata, and pull together a nice little photo essay of the kid's 3rd birthday. There's chapter 3.
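The calendar-to-EXIF correlation step can be sketched in a few lines. This is a toy illustration, not a real implementation: the `birthdays` and `photos` structures are invented stand-ins for what would really come from a calendar app's API and from each photo's EXIF `DateTimeOriginal` tag.

```python
from datetime import date, datetime

# Hypothetical stand-ins: in practice, birthdays come from the calendar
# and each photo's timestamp comes from its EXIF DateTimeOriginal tag.
birthdays = {"Alice": date(2009, 6, 14)}

photos = [
    {"file": "IMG_0001.jpg", "taken": datetime(2012, 6, 14, 15, 2)},
    {"file": "IMG_0002.jpg", "taken": datetime(2012, 6, 14, 15, 30)},
    {"file": "IMG_0003.jpg", "taken": datetime(2012, 7, 4, 12, 0)},
]

def photos_from_birthday(name, age, birthdays, photos):
    """Return the photos shot on the day the named child turned `age`."""
    b = birthdays[name]
    target = date(b.year + age, b.month, b.day)
    return [p["file"] for p in photos if p["taken"].date() == target]

# The kid's 3rd birthday falls on 2012-06-14.
print(photos_from_birthday("Alice", 3, birthdays, photos))
# → ['IMG_0001.jpg', 'IMG_0002.jpg']
```

From there, the object-recognition pass (find the cake, find the piñata) would filter and order that candidate list into the photo essay.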
And that's just the tip of the iceberg.
The essential point about revolutions is that you can't predict them. It'll be a year before we even notice that the new thing has taken over, and it'll be totally unexpected. Lytro is on to something, trying to make a new thing, but they missed the mark. Nobody wants an interactive still image that they can fiddle the DoF in. That's stupid. But there's *something* there, something new. And maybe it's a piece of the puzzle.
My thought is that the Lytro image is actually a meta-photo, used to generate contextually appropriate photos, as we struggle with the mass of pictures.
"Find me all the pictures of grandma" searches your archives using facial recognition. When it finds a Lytro file with grandma in it, it renders a picture with grandma in focus. In the context of another search, another large-scale multiple-image operation, it might render a totally different image.
The point the prognosticators are missing, though, is that it's about large-scale, multi-image operations. Searches. Books. Albums. Slideshows. It's about sifting down a huge archive of picture data and generating the right images, used in the right way.
See http://photothunk.blogspot.com/2015/05/the-future-of-imaging.html, http://photothunk.blogspot.com/2015/06/the-future-of-imaging-ii.html, and http://photothunk.blogspot.com/2015/06/future-of-imaging-iii.html for some disconnected and inconsistent thoughts along these lines.