Luminous Landscape Forum
Equipment & Techniques => Medium Format / Film / Digital Backs – and Large Sensor Photography => Topic started by: asf on August 23, 2011, 11:21:50 am
-
http://www.aphotoeditor.com/2011/08/23/mitchell-feinbergs-8x10-digital-capture-back/
-
Interesting that it appears he is using the back to save the cost of Polaroids (to check lighting etc.) but is still using sheet film for final capture. I would like to know more about the back, e.g. what sensor was used, the software, etc.
-
I'd love to see a sample 'polaroid' from that device. Very interesting!
-
I would also love to know more technical details about this sensor: pixel size (12 micron?), resolution, colour (which pattern?), etc.
Interesting.
Thierry
-
Next up: How about a digital back for the Polaroid 20x24" camera? That could have real mass-market appeal. ;D
-
From the man himself:
"A number of people have emailed me regarding the back’s specs. The device creates images a bit over 10MP; when cropped to correspond to 8×10 the final image size is 3285 x 2611. The image is 16bit, in RGB. Quality is excellent, due in part to the large pixel pitch. I am currently on vacation, so I can not post any examples. The image quality is not exemplary, but similar to a very high quality amateur camera of similar resolution. I do not use the back for final art; it simply does not have enough pixels to go to print."
-
a pixel size of 77.3 microns, wow!
"when cropped to correspond to 8×10 the final image size is 3285 x 2611."
-
I obviously get that developing this back made sense for the owner from an economic point of view, but still... 10 MP on a 10x8 sensor... damn, such a waste.
-
Wow - only 10mp. But maybe fantastic looking images? I'll bet the sensor was originally developed for astronomy use.
-
Wow - only 10mp. But maybe fantastic looking images? I'll bet the sensor was originally developed for astronomy use.
I very much doubt it. We astronomers like relatively big pixels, but that would be say 16 micron, not 77 micron! And there's no way a research instrument would use a Bayer array - it is described in the article as a "color capture back". Amateur astrophotographers (and I love to do that too) get wonderful pretty pictures using DSLRs with Bayer sensors, but for research we always want the full spectral response at every pixel, so that we can choose the colour filtration to be whatever the science requires; Sloan broadband filters, Stromgren intermediate band, nebular narrowband, or whatever.
Ray
-
Good info - thanks. The only large sensors I've seen listed were for astronomy, so that's why I made that bet. What other scientific applications would need such large sensors? Hopefully when Mr. Feinberg returns he'll provide more info.
-
He can afford to go on vacation after buying 2 of those?! Go Mitch!
From the man himself:
"... I am currently on vacation, so I can not post any examples"
-
He can afford to go on vacation after buying 2 of those?! Go Mitch!
I like it.
-
"The image quality is not exemplary"
Shame. But it goes to show it can be done, which is the best part, I think.
-
... I'll bet the sensor was originally developed for astronomy use.
Much more likely military surveillance.
I've long wondered how and when such a back might be created. There would never be a mass market for such a product. Not even a micro-market, really. So this thing may be the only sample we see.
I'd like to see a file from this back. But I bet that it's a real mess to use in any practical terms.
-
I'm absolutely not a tech guy, but shouldn't there be a way to stitch smaller sized sensors to get whatever sensor size you want? What technical difficulties would such a solution introduce? Again, don't scold me for my lack of technical expertise :)
-
That is absolutely possible, and it has been done since the beginning of digital backs. It needs a bit of experience, the right lens(es) to allow enough movement within the image circle (or a pano head), a computer with a reasonable amount of memory and stitching software, and it's done.
However, subjects with movement are one of the limitations, obviously.
Thierry
-
Hi Thierry,
No no, I'm fully aware of stitching as a photo technique :) What I meant was physically stitching smaller sensors together to create a bigger sensor, and I was wondering why that task is impossible. Some bigger MFDB sensors, when you look at them at the right angle, seem to be stitched from smaller sensors.
-
Hi Thierry,
No no, I'm fully aware of stitching as a photo technique :) What I meant was physically stitching smaller sensors together to create a bigger sensor, and I was wondering why that task is impossible. Some bigger MFDB sensors, when you look at them at the right angle, seem to be stitched from smaller sensors.
Indeed, you're not a technical guy...
-
Citing someone from this forum:
Aside: Dalsa does make some large CMOS sensors for X-rays and maybe telescopes, by "mosaicing" smaller sensor chips, but that adds visible join lines, unacceptable in high-end MF photography. This mosaicing is NOT the same as "stitching", which produces a single large sensor chip. Canon has also designed some large sensors, but again suitable for uses like telescopes, not MF cameras.
So, there's no way to stitch sensors physically? And if it is impossible, can someone enlighten me as to why?
-
Indeed, you're not a technical guy...
Are you a technical guy, design_freak? If so, why don't you answer Mr. Rib's question?
-
Are you a technical guy, design_freak? If so, why don't you answer Mr. Rib's question?
I'm so technical that I know how to use Google. Unfortunately, people are lazy and ask questions instead of searching for answers. Short answer: it is too expensive.
-
Actually, mosaicing would work for what is essentially a test strip that can have lacunae. I never understood why wafer-scale tech isn't used more; I think it has more to do with common limitations of manufacturing equipment and intellectual laziness than with real difficulty or expense. A direct-write-on-wafer (ion implantation) system could probably have done it even back in my day, but these were little used, except by the Chinese, who built their own to circumvent the embargo on steppers. Disclaimer: I only wrote one CAD system and designed only one chip in my student days, so I know nothing whatsoever about *today's* technology.
Edmund
-
I'm so technical that I know how to use Google. Unfortunately, people are lazy and ask questions instead of searching for answers. Short answer: it is too expensive.
And there I was, actually spending time to teach my students, when all along I should have been telling them: "Don't be so lazy! Just use google and leave me alone!"
You know, with this approach, we could eliminate entire expensive educational systems. No-one with knowledge, skills and experience need ever be employed again to impart them. ::)
Ray
-
And there I was, actually spending time to teach my students, when all along I should have been telling them: "Don't be so lazy! Just use google and leave me alone!"
You know, with this approach, we could eliminate entire expensive educational systems. No-one with knowledge, skills and experience need ever be employed again to impart them. ::)
Ray
Quietly, maybe no one will hear :-)
Seriously though, teaching students is another matter. Their knowledge must be thorough, and that cannot be replaced by the Internet. But it would be good if your students knew how to use Google. Unfortunately, according to recent studies, even engineering students have trouble finding information via Google...
I don't suppose Mr. Rib wanted to become a student, explore the mysteries of CCD sensor technology and construction, and build them himself one day. He is too old ::) But he can still learn the trick of using Google.
-
It's not laziness, it's relying on your best information source, which for me is Lula when it comes to technical matters. I'm not sure what's wrong with you, but I hope you'll get over it.
-
It's not laziness, it's relying on your best information source, which for me is Lula when it comes to technical matters. I'm not sure what's wrong with you, but I hope you'll get over it.
Don't worry, it's his MO, just take a glimpse at his commenting history.
-
Don't worry, it's his MO, just take a glimpse at his commenting history.
a man who thinks that Nokia is still the market leader ...
I know it becomes annoying when someone knows better. But what can I do? You have to be born with this ;D
-
a man who thinks that Nokia is still the market leader ...
I know it becomes annoying when someone knows better. But what can I do? You have to be born with this ;D
I know they are: IDC CQ1 (http://www.dailytech.com/IDC+Nokia+Remains+Top+Smartphone+Vendor+Worldwide/article21565.htm), Gartner CQ2 (http://www.macrumors.com/2011/08/11/gartner-nokia-held-off-apple-in-smartphone-sales-in-2q-2011/).
I don't know what your second sentence means. Nor do I care.
-
It's not laziness, it's relying on your best information source, which for me is Lula when it comes to technical matters. I'm not sure what's wrong with you, but I hope you'll get over it.
If you feel offended, I apologize; it's just my way of being. Such chips are made for the military, hence they are very expensive toys, so you will not find such sensors in MF for a long time. Contrary to appearances, it is not built from several ready-made sensors.
-
If you feel offended, I apologize; it's just my way of being. Such chips are made for the military, hence they are very expensive toys, so you will not find such sensors in MF for a long time. Contrary to appearances, it is not built from several ready-made sensors.
Off the top of my head, I'd expect $50-100K in mask creation costs for an old process, and then each chip costs you a wafer. For a current process you need $300K or so for masks, but who needs a current process to make HUGE sensor cells?
On a related note, I would speculate that the fab lines and processes used to make the iPad3 display could easily be modded to make a large sensor with very large cells.
I can keep making up fake numbers and smart sentences and general BS, while you people talk about Google. But in fact, the marginal costs per sensor for *LO-REZ* 8x10 these days are probably in line with making them for what an H4D60 is sold for today, apart from the initial design and masking costs.
Actually, all of that is engineering, it's not as hard as photography :) once someone has already done it the first time, any idiot like me can do it again by following the recipe with a bit of work. Look up the circuits, get process details, spend a night on the simulator, iterate until exhausted ... the chip topography is very repetitive (just like memory, in fact in a way it is memory) and so you probably only have a handful of base cells to deal with.
I'd be surprised if they have more than a couple of people doing the layout on each sensor they push out at DALSA; of course they have the house design libraries to back them up, but so did I when I was young.
Edmund
-
Here (http://golembewski.awardspace.com/photographyGallery/portraits/index.html)'s another way to make digital 8x10 (or so) sized pictures on the cheap.
-
Off the top of my head, I'd expect $50-100K in mask creation costs for an old process, and then each chip costs you a wafer. For a current process you need $300K or so for masks, but who needs a current process to make HUGE sensor cells?
On a related note, I would speculate that the fab lines and processes used to make the iPad3 display could easily be modded to make a large sensor with very large cells.
I can keep making up fake numbers and smart sentences and general BS, while you people talk about Google. But in fact, the marginal costs per sensor for *LO-REZ* 8x10 these days are probably in line with making them for what an H4D60 is sold for today, apart from the initial design and masking costs.
Actually, all of that is engineering, it's not as hard as photography :) once someone has already done it the first time, any idiot like me can do it again by following the recipe with a bit of work. Look up the circuits, get process details, spend a night on the simulator, iterate until exhausted ...
I'd be surprised if they have more than a couple of people doing the layout on each sensor they push out at DALSA; of course they have the house design libraries to back them up, but so did I when I was young.
Edmund
The question is, who really needs it? Is this the right way forward? Will the investment pay for itself? And isn't it better to squeeze as much as possible out of current technology? A chip costs some $500-2000, and the camera $20k-40k. If I were a manufacturer, I would squeeze as much as possible from what I have.
-
Here (http://golembewski.awardspace.com/photographyGallery/portraits/index.html)'s another way to make digital 8x10 (or so) sized pictures on the cheap.
You can...
but wouldn't it be simpler just to use film? Nicer, more pleasant...
-
You can...
but wouldn't it be simpler just to use film? Nicer, more pleasant...
Indeed, you are not a tinkerer.
-
The question is, who really needs it? Is this the right way forward? Will the investment pay for itself? And isn't it better to squeeze as much as possible out of current technology? A chip costs some $500-2000, and the camera $20k-40k. If I were a manufacturer, I would squeeze as much as possible from what I have.
It's a question of perspective - I mean, as the OP notes, it just takes one guy who wants one for the base investment to be amortized; after that you could imagine a cottage industry. If someone offered me a high-quality 8x10 back for $10K I think I'd go for it, even at the cost of selling a lot of my lenses.
Actually, I'm quite astonished that the guy who had the 8x10 "Polaroid" made didn't have one made that was good enough for final photographic purposes.
Edmund
-
Yeah, it kind of blows my mind that he'd go to all that trouble and expense knowing he would not be replacing both the Polaroids and the film costs by getting something higher-res. Admittedly, I've never shot 8x10 film so can't say firsthand how wonderful it is, but it must be...
As an aside, I am wondering how much capital went into the 'Impossible Project' to bring back Polaroids anyhow?
-
Of course one wonders: if you could get about the same level of quality from a chip that was 4x5 cm, gave 80 MP resolution, was portable, and was already battle-tested, would you still rather carry around an 8x10 view camera?
-
Of course one wonders: if you could get about the same level of quality from a chip that was 4x5 cm, gave 80 MP resolution, was portable, and was already battle-tested, would you still rather carry around an 8x10 view camera?
Either would be nice :)
Edmund
-
Of course it would be great to have 80-150 megapixels at 8x10 or 6x7, but only if someone is aware of what it costs, because unfortunately the sensor itself is not the end of it. What is needed is a body, lenses, and pretty damn accurate AF systems (in this format that is quite a big problem, as depth of field is very small). I am professionally involved in determining what gets put on the market, what does not, and why. My intuition tells me we will see an MF camera that shoots video, and it will be sooner rather than later. The reason is quite simple: those are customers' expectations. The film industry is attractive; it gives the manufacturer security against market failure as a result of technological transformation.
-
Indeed, you are not a tinkerer.
Not at all; I like to build and construct, but I have to see the sense in it. Perhaps if I had free time with nothing to do...
-
My intuition tells me we will see an MF camera that shoots video, and it will be sooner rather than later.
What would be the point of that? HD video can be made with pretty much any current camera. Absolutely nobody has 2k/4k projecting capability outside of high-end screening rooms and a handful of movie theaters. Even when they become more popular, 4k has only about 7 megapixels AFAIK, which could be pulled for a fraction of the cost from 35mm form factor cameras of the near future.
Perhaps more importantly, MF would mean even shallower DOF, which is already a major problem when shooting with a FF camera. Can you imagine pulling focus on paper-thin DOF of MF, or the floodlights required to light a scene properly with the poor high ISO performance of MF sensors?
Not to mention the ergonomics of still cameras, which are not suited for motion.
-
What would be the point of that? HD video can be made with pretty much any current camera. Absolutely nobody has 2k/4k projecting capability outside of high-end screening rooms and a handful of movie theaters. Even when they become more popular, 4k has only about 7 megapixels AFAIK, which could be pulled for a fraction of the cost from 35mm form factor cameras of the near future.
Perhaps more importantly, MF would mean even shallower DOF, which is already a major problem when shooting with a FF camera. Can you imagine pulling focus on paper-thin DOF of MF, or the floodlights required to light a scene properly with the poor high ISO performance of MF sensors?
Not to mention the ergonomics of still cameras, which are not suited for motion.
I think we will not need 100 MP in the future, especially since you'll be watching it on an iPad. By "MF" I mean the manufacturers and the concept of the camera (modularity). As for ergonomics - smart you are :-) - the answer is simple: you mount the DB on another body, one made for the moviemaker, if you want to shoot video. Why? To enter the more professional market, where Arri and RED are.
-
Not to regress...
So, there's no way to stitch sensors physically? And if it is impossible, can someone enlighten me as to why?
The vast majority of sensors have a surrounding housing (typically 1/8-1/4" around the outside of the sensor itself) in order to collect and distribute all the signals needed; this makes them flatter, rather than having a dense backing structure, which means thinner cameras. So combining existing sensors (Dalsa/Kodak) means 1/4-1/2" gaps between them. There is no good workaround for this problem until chips specifically designed to be combined are developed.
I also would expect that the difficulty of combining the data captured in a meaningful way (many individual high resolution chips - combined in firmware or software, into one cohesive image) would take some serious horsepower.
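To make the seam problem concrete, here is a toy sketch (all numbers hypothetical: four 100x100-pixel chips, each contributing a 10-pixel dead border to the seams) of what a 2x2 mosaic of packaged chips looks like as data:

```python
import numpy as np

CHIP, GAP = 100, 20  # hypothetical: 20 px seam = two 10 px packaging borders

# Assemble four chips into one frame; the seams have no photosites at all.
canvas = np.full((2 * CHIP + GAP, 2 * CHIP + GAP), np.nan)
for row in range(2):
    for col in range(2):
        tile = np.random.rand(CHIP, CHIP)  # stand-in for one chip's data
        y, x = row * (CHIP + GAP), col * (CHIP + GAP)
        canvas[y:y + CHIP, x:x + CHIP] = tile

missing = np.isnan(canvas).mean()
print(f"fraction of frame with no data: {missing:.1%}")  # ~17.4%
```

Even with these made-up but optimistic numbers, the seams eat a double-digit percentage of the frame - fine for an X-ray plate or a lighting check, but a dead stripe through a final photograph.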
-
but again suitable for uses like telescopes but not MF cameras.
So, there's no way to stitch sensors physically? And if it is impossible, can someone enlighten me as to why?
This is a bit like placing televisions side by side to get a bigger picture...
It might be possible to get it to work for some applications, if you design a chip specifically for the job, with pixels right up to the edge of the device.
Fiber optics might work.
-
A lot of video was made with groundglass adapters, one could use the same trick for stills.
Edmund
-
I think we will not need 100 MP in the future, especially since you'll be watching it on an iPad.
I guess you've never heard of UHDTV. When even TVs are 33 megapixel, our medium format shots will at last have a worthy screen.
http://en.wikipedia.org/wiki/Ultra_High_Definition_Television
-
I think we will not need 100 MP in the future, especially since you'll be watching it on an iPad.
You mean the iPad X with the 100MP screen?
-
I guess you've never heard of UHDTV. When even TVs are 33 megapixel, our medium format shots will at last have a worthy screen.
http://en.wikipedia.org/wiki/Ultra_High_Definition_Television
HDTV penetration is 50%+ in only a few Western countries now. UHDTVs are years and years away - and they require much more than just the display.
-
HDTV penetration is 50%+ in only a few Western countries now. UHDTVs are years and years away - and they require much more than just the display.
Yes, but while we are waiting for the broadcasters to catch up, it would be a good format for a stills projector or specialist video... but isn't the format similar to IMAX? Why a different standard?
-
You mean the iPad X with the 100MP screen?
No :)
The key is to read with understanding... You do not need a 100-megapixel sensor to deliver images for content published on an iPad. Paper newspapers will disappear, that much is certain, and it will happen sooner than we think.
-
The quality of a well-shot Blu-ray image from a good player on a good plasma screen (typically a Panasonic) is nothing short of breathtaking at 46-50 inches.
I mean pure fall-off-your-chair amazing, even for someone used to high-quality large prints.
4K has close to zero value for household applications.
Cheers,
Bernard
-
4K has close to zero value for household applications.
Hi Bernard,
The requirements for typical household viewing distances, combined with moving images instead of stationary ones, and lack of side by side comparison explains it all.
Cheers,
Bart
-
The requirements for typical household viewing distances, combined with moving images instead of stationary ones, and lack of side by side comparison explains it all.
That, combined with people's reluctance to drop €€€ on a new TV just so they can get their football fix in HD is not exactly a winning proposition in today's economy. Convincing the rest of us who did drop the €€€ on a 1080p setup to move up to 2k/4k is not going to happen in the foreseeable future.
-
The requirements for typical household viewing distances, combined with moving images instead of stationary ones, and lack of side by side comparison explains it all.
I agree, but since movie theatres are not doing that great... the market for 4K as a whole is tiny at best and not likely to grow for many years. Realistically, most people would benefit a lot more from correctly optimized 1080p with less compression.
So the value/need to have larger sensor cameras support 4K is very low IMHO but I understand that some people (think Red) will try to convince us that we need 4K at least in the processing pipe to generate 1080p down the road... while Canon is a lot more realistic with their pure RGB 1080p offering.
MFDBs are focused on surviving in the tiny niche they have buried themselves in with crazy pricing; I don't expect them to ever be able to produce video coming close to Panasonic G2 quality, even at... 100 times the price.
Cheers,
Bernard
-
I agree, but since movie theatres are not doing that great... the market for 4K as a whole is tiny at best and not likely to grow for many years. Realistically, most people would benefit a lot more from correctly optimized 1080p with less compression.
Even that is ignoring the realities of the marketplace. 1080p content is still not near 100% penetration of "HD" content delivered to homes via cable, satellite or online - so it's either 1080i or 720p. And when it is 1080p, it's compressed to oblivion to fit more channels in the fiber, less bw cost to deliver the video, etc. But you don't hear many people complain, it's only the movie buffs and early adopters.
What I'm saying is that we're stuck with HD (and "HD") into the foreseeable future - 5 years, more likely 10 or even more.
Oh, and let's not forget that you won't get benefit from higher-than-1080p resolutions in a home setting until you get a BIG screen - and that's even more costly, as it requires not only the expensive set/projector, but also a big living room (costly).
Remember, these are the people who think 128kbps MP3 is just fine.
48 fps on the other hand is another matter. Not sure how many current HD TVs are compatible with those streams, but it might be the next big thing rather than more resolution.
-
Even that is ignoring the realities of the marketplace. 1080p content is still not near 100% penetration of "HD" content delivered to homes via cable, satellite or online - so it's either 1080i or 720p. And when it is 1080p, it's compressed to oblivion to fit more channels in the fiber, less bw cost to deliver the video, etc. But you don't hear many people complain, it's only the movie buffs and early adopters.
What I'm saying is that we're stuck with HD (and "HD") into the foreseeable future - 5 years, more likely 10 or even more.
We are saying exactly the same thing.
Cheers,
Bernard
-
There are X-ray detectors of the required sizes:
http://www.diraxray.com/en/structure/x-ray-detectors/ccd-x-ray-detectors/
Now you develop the colour mask, add decoding chips, and you have a low-resolution, large-area detector.
or even this
http://www.rayonix.com/products/mx-he-series/
Mind you, these have between 10 and 36 MP resolution on a plate of around 40x40 cm, so you can think of 15"x15" Polaroids!
-
Yes, Bernard... I have a 50-inch Pioneer (2 megapixel), and some broadcast HD pictures look as good as FF 35mm... but if you are using a Bayer-interpolated sensor with an anti-aliasing filter, you need to down-sample from 30 or 40 MPx to get an optimum 2 MPx file.
The quality of a well-shot Blu-ray image from a good player on a good plasma screen (typically a Panasonic) is nothing short of breathtaking at 46-50 inches.
I mean pure fall-off-your-chair amazing, even for someone used to high-quality large prints.
4K has close to zero value for household applications.
Cheers,
Bernard
-
Yes, Bernard... I have a 50-inch Pioneer (2 megapixel), and some broadcast HD pictures look as good as FF 35mm... but if you are using a Bayer-interpolated sensor with an anti-aliasing filter, you need to down-sample from 30 or 40 MPx to get an optimum 2 MPx file.
I am not sure where you got this 30-40mp figure from.
A 12 MP image that is critically sharp and well sharpened already looks very good at pixel level; it looks amazing after downsizing to 1080p.
There might be a tiny difference between 30mp and 12mp downsize, but I doubt anyone would be able to see it on screen at 1080p.
Cheers,
Bernard
-
Imagine now that you will read the newspaper to be displayed on your contact lenses 8)
-
if you are using a Bayer-interpolated sensor with an anti-aliasing filter you need to down-sample from 30 or 40 Mpx to get an optimum 2Mpx file.
An anti-aliasing filter spreads the light destined for each pixel over the adjacent 8 pixels, so you could argue that you need a 10-times down-sample to compensate, and Bayer interpolation interpolates one pixel from 4 real pixels, so, theoretically, if these two factors effectively multiplied, you would need 40 pixels to get one optimal pixel.
So, according to that theory, a 1MPx crop from a 4 shot MF picture would be as good as a 40 Mpx AA Bayer picture, which is clearly not the case.
addition P.S.:
When I re-sampled for my wife's website:
http://rosalindcaplisacademy.co.uk/
I expected there to be no perceivable difference between the 15-ish MPx GH2 pictures and the 60 MPx H4D-60 picture (which did not make it to the website, as it was the wrong shape)... but the difference was easy to see at the resolution I uploaded, though they were further down-sampled for the website.
I am not sure where you got this 30-40mp figure from.
A 12 MP image that is critically sharp and well sharpened already looks very good at pixel level; it looks amazing after downsizing to 1080p.
There might be a tiny difference between 30mp and 12mp downsize, but I doubt anyone would be able to see it on screen at 1080p.
Cheers,
Bernard
-
An anti-aliasing filter spreads the light destined for each pixel over the adjacent 8 pixels, so you could argue that you need a 10-times down-sample to compensate, and Bayer interpolation interpolates one pixel from 4 real pixels, so, theoretically, if these two factors effectively multiplied, you would need 40 pixels to get one optimal pixel.
So, according to that theory, a 1MPx crop from a 4 shot MF picture would be as good as a 40 Mpx AA Bayer picture, which is clearly not the case.
Dick, I think I can explain this. The Bayer interpolation happens afterwards and is decoupled from the convolution of the AA filter; interpolation (informed guessing of a missing value) is not the same type of process as convolution (spreading out of signal). So the area affected, when the two processes act on any given pixel location, is not the product of the number of adjoining pixels that they each individually operate on.
Also, the AA filter acts as a convolution kernel which is tapered, not a top-hat block; so the way that it distributes light is not nearly as severe as dividing it equally over 8 pixels. The vast majority of the light ends up in the central 4 pixels.
So if you took each block of 2x2 pixels as a "super-pixel", and instead of de-Bayering them by interpolation, directly assigned a real RGB colour from their R/G/B/G information, you'd have no Bayer guesswork artefacts; and the same 2x2 superpixel would also "suck in" almost all the AA-distributed light at its location.
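To put a rough number on "the vast majority": modelling the filter, purely for illustration, as a Gaussian of sigma 0.5 pixel centred on the corner shared by a 2x2 block of unit pixels (real AA filters are birefringent beam-splitters, and the sigma here is an assumption):

```python
import math

def energy_in_square(half_width, sigma):
    """Fraction of a centred 2-D Gaussian falling inside a square of the
    given half-width; separable, so it is a 1-D erf term squared."""
    return math.erf(half_width / (sigma * math.sqrt(2))) ** 2

SIGMA = 0.5  # assumed blur strength, in pixel widths

# The central 2x2 block of unit pixels spans +/-1 pixel from the corner.
print(f"energy in central 2x2:  {energy_in_square(1.0, SIGMA):.1%}")  # ~91%
print(f"energy within +/-1.5px: {energy_in_square(1.5, SIGMA):.1%}")  # ~99%
```

With that assumed strength, roughly 91% of the light stays in the central four pixels; nothing like the even 9-pixel spread the 10-times argument assumes.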
Ray
-
Mr. Rib,
I think that was me you quoted, so here are some of the problems I know of.
1. The sensor chips have to output along at least one edge, so even when you stick several together, each one has to have an outside edge ... Else there has to be a substantial gap between chips for the output wiring. So some X-ray sensors use 2x2 arrays of chips, but that is a natural limit.
2. For most photographic purposes, the pixels have to be about 10 microns or less, which is 1/2500 inch, so getting two sensor chips to fit that snugly is tricky, and failing that snug fit there will be lines. (No big problem for X-rays though, or for this guy's "Polaroid")
By the way, these huge 77 micron pixels are about the size of the pixels on the iPhone's so-called Retina display, and so I guess that this sensor was manufactured using the same equipment used to make this new generation of high-resolution display panels. This panel-making gear is clearly designed to handle far larger sizes than the gear used to make ICs and normal camera sensors. So maybe soon the same tools used to make the 8K-resolution TV screens being aimed at in Japan could indeed make that 20" x 24" "digital Polaroid" sensor!
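The pitch comparison is easy to check, taking the published 326 ppi figure for the iPhone 4 Retina display:

```python
# Pixel pitch of a 326 ppi display vs the back's ~77.3 um pixels.
MICRONS_PER_INCH = 25400
retina_pitch = MICRONS_PER_INCH / 326
print(f"326 ppi pitch: {retina_pitch:.1f} um")  # ~77.9 um, remarkably close
```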
-
As for the device in the original post, I emailed the owner again about a sample, but got no reply. Last time he replied that he was on holiday. Either he really doesn't want to share, or perhaps the whole thing is a hoax?
-
if you are using a Bayer-interpolated sensor with an anti-aliasing filter you need to down-sample from 30 or 40 Mpx to get an optimum 2Mpx file.
An anti-aliasing filter spreads the light destined for each pixel over the adjacent 8 pixels, so you could argue that you need a 10-times down-sample to compensate, and Bayer interpolation interpolates one pixel from 4 real pixels, so, theoretically, if these two factors effectively multiplied, you would need 40 pixels to get one optimal pixel.
So, according to that theory, a 1MPx crop from a 4 shot MF picture would be as good as a 40 Mpx AA Bayer picture, which is clearly not the case.
What is "an optimum 2Mpx file", and why would I want it? I want "an optimum image", either hanging on my wall or shown on my computer display.
I think that in order to state "theoretically....", you should have a clearly stated, widely accepted theory. I dont think that you have.
An AA-filter acts to smooth/blur the image optically/continously prior to sampling, not totally unlike diffraction blurring. The exact kernel (smoothing function) is somewhat different from that of diffraction, and it is (hopefully) not dependent on camera/lense settings.
If the scene was flat spectrum, and the AA filter convolved with sensel coverage was a "perfect" sin(x)/x function (and the sensel itself was a point-sampler) and the sensor had no CFA, I believe that we could apply Shannon-Nyquist theory rather easily. In that case, an AA-filtered sensor could accurately capture any pattern of light that was bandlimited to N/2 maxima and N/2 minima either vertically or horizontally, if the sensor had N sensels in that dimension. Any light patterns that changed quicker than that (such as stepped edges) would be band limited.
What happens if we, say, change that sin(x)/x function with a rectangular integration corresponding to the sensel spacing (i.e. simulating a AA-filter less idealized sensor)? The Fourier transform of a rectangular function is a sin(x)/x function, so you would get some attenuation of "passband" (desired signal) and bleed-through of aliasing-causing frequencies. This can be easily seen by letting those rectangular integrators slide by an image of hard edges/impulses: the output can have relatively large changes for small changes in sensel/image alignement. For other spots, the expected output image could change exactly zero, even though the camera/scene have changed alignmnt by 1/2 sensel. In other words, it is not possible to recreate accurately the original scene (not even a bandlimited version). That is not to say that an inaccurate representation cannot be visually pleasing (or even more pleasing than the accurate version).
So what happens if we replace the AA filter with a realistic filter like what Canikon use? I dont know. Anyone know their spatial function?
So what happens if we allow the scene to actually have colors, and the sensor to have a CFA and demanding the use of demosaicing? Demosaicing is application specific, and usually proprietary, so I wont comment on that. But scene colors and CFA is interesting. If we assume an improbably narrow spectrally scene that only gets sensed by one of the CFA primaries, I believe that we can use the same analysis as for the color-less case, only that the sensels will be reduced to 1/4 (r,b) or 1/2 (g) while the AA filter stays the same. Clearly, this would make the (up until now) perfekt AA-filter less perfect (too high cutoff frequency), and we would have more spatial aliasing.
So what happens if we allow the scene to realistic spectrally? (most of the information/variation in the luminance)? I believe that this reduce the influence of the CFA spectral selectivity on spatial capture, and that the "monochrome" analysis iturns out to be quite relevant. Quite but not perfect. There will always be corner cases or nitty-grittys where the trade-offs present in Bayer-type sensors are made visible. I think that those trade-offs tend to be good ones for most applications
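The alignment sensitivity is easy to demonstrate numerically. A 1-D sketch, assuming unit-width, 100%-fill-factor box sensels with no AA filter (the edge/line positions and widths are made up for illustration):

```python
import numpy as np

def box_sample_edge(edge_pos, n=8):
    """Signal from a step edge (dark left of edge_pos, bright right of it)
    integrated over unit-width sensels [i, i+1): pure box sampling."""
    i = np.arange(n)
    return np.clip((i + 1) - edge_pos, 0.0, 1.0)

def box_sample_line(line_pos, width=0.4, n=8):
    """Signal from a bright line [line_pos, line_pos + width) on black."""
    i = np.arange(n)
    overlap = np.minimum(i + 1, line_pos + width) - np.maximum(i, line_pos)
    return np.clip(overlap, 0.0, 1.0)

# Case 1: an edge straddling sensel 3. A half-sensel shift changes that
# sensel's output by half of full scale -- a large jump for a small move.
a, b = box_sample_edge(3.25), box_sample_edge(3.75)
print(np.abs(a - b).max())  # 0.5

# Case 2: a thin line shifted half a sensel *within* sensel 3 produces
# exactly the same samples -- the sensor cannot see that anything moved.
c, d = box_sample_line(3.05), box_sample_line(3.55)
print(np.allclose(c, d))  # True
```

Both behaviours from the same sampler is exactly the point: the box integrator neither band-limits the edge nor resolves the line's position, so the original scene cannot be reconstructed accurately from the samples.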
-h
Edit: most of my post considers 1-D versions of the problem.
-
What is "an optimum 2Mpx file", and why would I want it? I want "an optimum image", either hanging on my wall or shown on my computer display.
I think that in order to state "theoretically....", you should have a clearly stated, widely accepted theory.
An optimal file is one that contains the maximum data per pixel, with minimum sampling- or interpolation-created noise... to a point... the more you downsample, the better the per-pixel quality of the file; when you get no more increase in quality with more downsizing, the per-pixel quality is as good as you can get... or optimal. This does, of course, depend on the down-sampling software... and how do non-Bayer-interpolated files from cameras without AA filters compare (pixel to pixel) to down-sampled CaNikon files?
Optimal per-pixel quality is particularly important for web images, where the pixel dimensions are the limitation... where the limitation is file size or download time, the problem is different, and the solution can be compression.
My theory uses simple arithmetic, which I have clearly explained. I explained the (false) assumptions I made, and these have been clarified above by Ray.
-
Hi. Just editing my post.