Pages: « 1 2 3 [4] 5 »
Author Topic: A true 6x7 CMOS low light sensor camera, can it exist?  (Read 22084 times)
Kolor-Pikker
« Reply #60 on: January 09, 2013, 04:57:02 AM »

Quote
Audio sampling and photon capture are not directly commensurable.  The best that audio has managed to do is approximately 21 bits (according to Dan Lavry) while advertising 24.  But they are using an electron stream, rather than converting photoelectrons.  And there is no parity in signal levels between these two phenomena.
Ah, well I didn't know that. The extent of my knowledge on this subject is that a voltage generated from either a photosite or a microphone membrane gets digitized, and that's it, lol.

Quote
If your point is that A-D converters can convert 21 bits very well, that is true.  But in photographic applications, there are not that many electrons to go around.  And there is read error, and shot noise in addition.  Erik or Emil would know better, but it otherwise seems Red is claiming a sensor that uses or exceeds single electron ADUs!
There are problems audio faces too: dynamic range capture, even assuming perfect equipment performance, is limited by the room noise of even an extremely quiet studio.

Quote
But gain does not multiply out the amount of information.  And it introduces noise.  And you can't do HDR with gain, only pseudo-HDR.  

The pseudo-step wedge is suggestive, but not genuinely informative.  I'd like to see a frame from the Dragon that has that much DR.  I'd really like a detailed technical explanation.  Perhaps there is some innovation going on here, but it needs an explanation.  
Honestly, I'm not sure exactly how it works myself; I'm just trying to figure it out by deduction, since this is a technology previously limited to labs. If anything, here it is from the horse's mouth: http://www.arri.com/camera/digital_cameras/technology/arri_imaging_technology/alexas_sensor.html
Edit: It looks like I forgot the specifics; it says the exact opposite: the highlights are derived from the lower-gain signal, and the shadows from the high-gain one. Sorry 'bout that, I'll change my previous post.

But as I've said before, it's only pseudo-HDR if the different gain levels are derived from one converter, not two converters calibrated to different gain levels. The native ISO of cinema cameras is around 800-1250, yet they still manage to get such extreme amounts of DR, which suggests that DR is not tied in any way to a camera's gain.
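The dual-gain scheme described above can be sketched in a few lines: the same photosite signal is digitized by two converters at different gains, and the results are merged, highlights from the low-gain path and shadows from the high-gain path. This is a rough illustrative simulation, not ARRI's or Red's actual pipeline; the gain values, bit depth, and full-well figure are all assumptions.

```python
# Illustrative dual-gain readout: one photosite value is digitized by two
# ADCs at different gains and merged - highlights from the low-gain path,
# shadows from the high-gain path.  Gains and bit depth are assumptions.

def quantize(signal_e, gain, bits=14, full_scale_e=65536):
    """Digitize an electron count through an ADC at the given gain."""
    levels = 2 ** bits
    code = round(signal_e * gain / full_scale_e * levels)
    return min(code, levels - 1)          # clip at ADC full scale

def merge(signal_e, low_gain=1.0, high_gain=16.0, bits=14, full_scale_e=65536):
    hi = quantize(signal_e, high_gain, bits, full_scale_e)
    lo = quantize(signal_e, low_gain, bits, full_scale_e)
    # If the high-gain path clipped, fall back to the low-gain reading,
    # rescaled into the same units.
    if hi >= 2 ** bits - 1:
        return lo / low_gain
    return hi / high_gain

# Deep shadow: resolved by the high-gain path with fine granularity.
shadow = merge(10)        # -> 2.5
# Bright highlight: the high-gain ADC clips, so the low-gain path is used.
highlight = merge(60000)  # -> 15000.0
```

The point the simulation makes is that neither converter alone spans the whole range; the extra stops come from the merge.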

As for HDRx, the Red team says that the Dragon makes HDRx obsolete, and it likely won't be supported by Dragon. There are some members who still want the feature in because it makes still extraction easier, though.
« Last Edit: January 09, 2013, 05:53:51 AM by Kolor-Pikker »
LKaven
« Reply #61 on: January 09, 2013, 09:21:44 AM »

Hi Kolor-Pikker,

If you see my exchange with Erik just before this, we figured out that there are two separate exposures being made to produce HDRx.  And that solves the puzzle.  The sensor doesn't deliver that many bits in a single exposure, but in a combination of two.  And the added dynamic range comes from the highlight end, not the shadow end.

As I said, it's easier to expand dynamic range into the highlights by effectively expanding the full-well capacity of the sensor than it is to expand dynamic range in the shadows by increasing quantum efficiency and reducing read noise.  Even in the Nikon D4, the physical full well capacity is doubled over its predecessor in a single exposure, making for a base of ISO100 and a wider dynamic range.  Multiple exposures are another way of doing this.
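The full-well point can be checked with a one-line calculation: doubling full-well capacity at fixed read noise adds exactly one stop of highlight headroom. The 120,000 e- figure for the D4 appears later in this thread; the 3 e- read noise is an assumed illustrative value.

```python
import math

# Dynamic range in stops is log2(full-well / read noise), so doubling the
# full-well capacity at fixed read noise adds exactly one stop.
def stops(full_well_e, read_noise_e):
    return math.log2(full_well_e / read_noise_e)

# Assumed numbers: 60k e- doubled to 120k e-, read noise 3 e- in both cases.
gain_in_stops = stops(120_000, 3.0) - stops(60_000, 3.0)
print(gain_in_stops)   # ~1.0 stop of added headroom
```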

Kolor-Pikker
« Reply #62 on: January 09, 2013, 11:28:54 AM »

And if you read the last line of my post, you'll see that the puzzle isn't solved, because the Dragon is not blending two exposures via HDRx, which is being dropped from the camera entirely as a feature. HDRx already exists on the Epic, but it has its own problems: since the shutter speed differs between the two exposures, it may create ghosting during motion. It was a neat work-around while it lasted.
This sensor is claimed to capture 20 stops natively, which I don't particularly dismiss, but the real question is how they're reading that data off of the sensor. With a 16-bit ADC you're technically limited to 16 stops of dynamic range, so how are they getting another 4?
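The bit-depth arithmetic behind this question can be made explicit: with a linear ADC, each bit is one doubling, so an N-bit converter spans at most N stops in a single readout, while the sensor's own range is set by the ratio of full-well capacity to read noise. A sketch of the counting, assuming an ideal noise-free converter:

```python
import math

# A linear N-bit ADC spans at most N stops: each extra bit is one more
# doubling of the maximum representable signal.
def adc_stops(bits):
    return math.log2(2 ** bits)      # identically equal to `bits`

# What the sensor itself can deliver: the ratio of full-well capacity to
# the read-noise floor, expressed in doublings (stops).
def sensor_stops(full_well_e, read_noise_e):
    return math.log2(full_well_e / read_noise_e)

print(adc_stops(16))                 # 16.0 - the single-readout ceiling
print(sensor_stops(2 ** 16, 1.0))    # 16.0 - a 65536 e- well at 1 e- noise
print(sensor_stops(10 ** 6, 1.0))    # ~19.9 - a 1e6 e- well needs ~20 stops
```

Getting more stops out than the converter has bits therefore requires combining readouts (dual gain, multiple exposures) or non-linear encoding.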
« Last Edit: January 09, 2013, 11:31:33 AM by Kolor-Pikker »
LKaven
« Reply #63 on: January 09, 2013, 04:03:34 PM »

Thanks for the correction.  And I apologize if you weren't referring to shadow DR in the first place, but highlight DR.

I would still guess that any additional dynamic range is being added at the highlight end through effective increase in well capacity.  With several sensors yielding over 50% quantum efficiency, there isn't more than a theoretical stop to be gained at the low end.  And with the noise floor as low as it is, we aren't /that/ far from counting photons singly. 

But the additional headroom would still be great news for filmmakers.  As they say, like "film DR."  Lots of room at the top.

ErikKaffehr
« Reply #64 on: January 09, 2013, 04:25:46 PM »

Hi,

They probably use on-chip converters, like Sony.

The main problem I see with 20 stops is that it would need very large pixels, having a full well capacity of about 1e6 electron charges. A normal camera sensor pixel is usually in the 30000 - 60000 range, so the pixels would need to be much larger than still-camera pixels, something like 20 microns. Would they fit on the chip?

Or could they have extra pixels with ND filters?
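Erik's back-of-the-envelope argument can be put into numbers: if full-well capacity scales roughly with photosite area, then pitch grows with the square root of the capacity ratio. The 6 µm / 50,000 e- baseline below is an assumed typical still-camera pixel, not a figure from the thread.

```python
import math

# Rough scaling: full-well capacity grows roughly with photosite area,
# so pixel pitch grows with the square root of the capacity ratio.
def required_pitch(base_pitch_um, base_fwc_e, target_fwc_e):
    return base_pitch_um * math.sqrt(target_fwc_e / base_fwc_e)

# Assumed baseline: a 6 um still-camera pixel holding ~50,000 electrons.
pitch = required_pitch(6.0, 50_000, 1_000_000)
print(f"{pitch:.1f} um")   # ~26.8 um - in line with the "like 20 microns" guess
```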

Best regards
Erik

LKaven
« Reply #65 on: January 09, 2013, 04:42:38 PM »

Quote
They probably use on-chip converters, like Sony.

The main problem I see with 20 stops is that it would need very large pixels, having a full well capacity of about 1e6 electron charges. A normal camera sensor pixel is usually in the 30000 - 60000 range, so the pixels would need to be much larger than still-camera pixels, something like 20 microns. Would they fit on the chip?

The D4 captures 120k photoelectrons at ISO100, which gives one more stop of headroom.  But there might be other ways to increase "effective capacity." 

I'm interested to see if they use on-chip converters and how well that works.  These things run very hot.  Using live view on the D800 almost doubles the amount of thermal noise to my eye. 

Kolor-Pikker
« Reply #66 on: January 09, 2013, 05:05:06 PM »

Don't on-chip converters reduce pixel fill-factor?

The Aaton Delta uses full-frame CCDs for just this purpose, and as such has massive highlight headroom.
ErikKaffehr
« Reply #67 on: January 09, 2013, 11:50:09 PM »

Hi,

They say so. Here are real-world SEM pictures of a pair of CMOS sensels.

The main problem with CCD seems to be readout noise. To get 20 stops of DR you need a Full Well Capacity (FWC) of 1,000,000 and a readout noise of 1 electron charge.

CCDs used in MFDBs used to have readout noise of around 12-17 electron charges.

I'm somewhat skeptical of the FWC figures given by "sensorgen", as they give different values for cameras using the same chip. The Sony Alpha and Nikon D3X both use a very similar sensor by Sony. Sensorgen gives FWC = 48975 for the Nikon and FWC = 26843 for the Sony, but the chip geometry is the same. The Nikon D3X makes much better use of the Exmor sensor, but I'm pretty sure the FWC is the same on both.

Best regards
Erik

Kolor-Pikker
« Reply #68 on: January 10, 2013, 06:39:17 AM »

If memory serves, the CCDs used on many MFDBs are interline, which is why some backs use microlenses; if they had used full-frame photogates instead, microlenses would make no sense, as the fill-factor would already be 100%.

In any case, I'm downloading some Raw files from the Aaton to see how the claimed DR holds up on my own computer; at $90k for just the camera, it had better be good  Grin
unlearny
« Reply #69 on: March 08, 2015, 12:01:32 AM »

These people are all morons  Angry

Look, for 100k a guy got two 4x5 sensors custom made, so the answer is yes.

Ask these guys:

http://www.specinst.com/
Phil Indeblanc
« Reply #70 on: March 08, 2015, 01:29:33 AM »

I didn't read all the replies, but I remember a photographer who had a custom sensor made. I remember it being very large, but maybe B&W. I forget the details. Maybe it was mentioned already; I'll have to read this thread later :-)

If you buy a camera, you're a photographer...
ErikKaffehr
« Reply #71 on: March 08, 2015, 01:59:17 AM »

Hi,

One thing to keep in mind is that features essentially come free; the expensive thing is sensor surface area. Megapixels are free, square inches are expensive. Designing a 20-40 MP sensor at 6x7 would be much more expensive than just upscaling an existing sensor. A sensor based on the one used in the Sony A7s would fill the bill for you; that would come in at around 50 MP.

Keep in mind that such a sensor would really need an OLP filter: the larger the pixels, the more artefacts they will produce.

Sensor costs scale much faster than sensor area: doubling sensor area may raise cost 4-8 times (I guess), and producing in small series is more expensive than producing in large series.
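Erik's steep area-cost guess has a standard explanation: for a fixed defect density, the fraction of good dies falls roughly exponentially with die area (the simple Poisson yield model), so per-sensor cost rises much faster than area. A sketch under assumed numbers; the defect density and areas below are illustrative values, not foundry data.

```python
import math

# Poisson yield model: the probability that a die of area A (cm^2) has zero
# fatal defects at defect density D (defects/cm^2) is exp(-D * A), so the
# cost of one *good* die rises much faster than its area.
def yield_fraction(area_cm2, defects_per_cm2):
    return math.exp(-area_cm2 * defects_per_cm2)

def relative_cost(area_cm2, defects_per_cm2):
    # Silicon consumed is proportional to area; divide by the good-die fraction.
    return area_cm2 / yield_fraction(area_cm2, defects_per_cm2)

D = 0.1                          # assumed defect density (illustrative only)
small = relative_cost(8.6, D)    # roughly a 36x24 mm "full frame" sensor
large = relative_cost(17.2, D)   # doubled area
print(large / small)             # ~4.7x the cost for 2x the area here
```

At these assumed numbers the doubled-area sensor costs about 4.7x as much, inside Erik's guessed 4-8x range; a higher defect density pushes the ratio up quickly.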

Another expensive item is the signal-processing chip (Bionz, Expeed, Digic) and its programming. Deactivating the motion stuff is in all probability just changing a byte from true to false, but it may or may not reduce licensing costs.

Well, it seems that Sony can make large sensors at a reasonable cost, so it may just happen that your dream comes true. But in all honesty, I wouldn't bet on it.

Best regards
Erik


Quote
Alright, I'm just a photographer, I'm not a pixel peeper or techie so don't jump down my throat for my simple-minded question.

If Canon/Nikon can make low light cameras and sensors (or be it Sony's), why can they (any manufacturer) not make a medium format version that is a full frame sensor?
This is what I'm thinking; a low light CMOS, 6x7 sensor that is around 20-40mp for under $20k, live view would be nice but it doesn't have to do video.
My logic is that if Canon/Nikon can build a body with all the extra goodies in it (mount, titanium body, extra electronics etc etc), for under $5k why can they not simply cut a larger sensor out of the original wafer and throw it in a digital back for $20k?
Why are we locked into this 36x48 format or even 40x54?

thanks
be gentle please
R

unlearny
« Reply #72 on: March 08, 2015, 04:16:42 AM »

Who would have guessed a brand-new 50 MP medium format sensor was going to sell for $8k, attached to a quite wonderful MF body with a storied photographic heritage no less?  I mean, the Pentax 645D is now $4,800... and that is not a lot of money for a weather-sealed MF digital camera loaded with features for making life easier.  I wouldn't have guessed a 65% drop in price for the 645D in 3 years.  The 5D Mark II hasn't dropped that far yet!  So long as you have competition in the marketplace, and we as consumers support game-changing technologies, it will happen sooner than you think.  Sony, Sigma and Pentax prove that it is a pretty cool thing that there's no monopoly in the camera game.

When the D800 came out, did you think, "I'll just wait until Sony makes a mirrorless full frame camera using the same sensor in two years"?  I didn't.  You would think at least Hasselblad would have guessed it and used those as the basis for their rich man's NEX line, and not the 7n or whatever.

Hang on to your hats, people, it's going to be a bumpy ride!  That spectravision company can make seamless CCD combination sensors, and in 2008 some LF shooter had a 4x5 sensor custom made for 50k.  These poo-poo-ers have a very consumer-based idea of where the tech is.  I'm sure there are a number of players in the sensor game who wouldn't kick you out if you offered 20k.  You may have to use a color wheel to make color photographs, but they would look awesome.


BJL
« Reply #73 on: March 08, 2015, 08:13:37 PM »

Quote
I didn't read all the replies, but I remember a photographer who had a custom sensor made. I remember it being very large, but maybe B&W. I forget the details.
It was a pair of 10" x 8" sensors of very low resolution, using LCD-panel fabrication technology, which works at these large sizes. The buyer uses these for test shots in lieu of Polaroids, before taking the final images on 10" x 8" film.

But cost is the only barrier: wafer-sized CMOS sensors are already offered on a custom-order basis.  The new Pentax 645 with its 44x33mm CMOS sensor has a "sensor cost increment" of about $6,000 (on the basis that the rest of the body is comparable to a $2,000 Pentax 645 AF film camera), compared to a sensor cost increment of about $1,000-$1,200 for the least expensive 36x24mm bodies and roughly $200 or less for APS-C format.
Petrus
« Reply #74 on: March 09, 2015, 03:58:27 AM »


Quote
The main problem I see with 20 stops is that it would need very large pixels,

A bigger problem is that no lens can resolve more than about 14 stops of DR; hugely complicated cinema lenses probably even less, due to internal reflections.
hjulenissen
« Reply #75 on: March 09, 2015, 04:45:56 AM »

Quote
Bigger problem is the fact that no lens can resolve more than about 14 stops of DR
What are you basing this claim on?

-h
Petrus
« Reply #76 on: March 09, 2015, 04:57:43 AM »

Quote
What are you basing this claim on?

-h

Read it on the Internet, of course!  Grin

I believe it myself.
BJL
« Reply #77 on: March 09, 2015, 10:13:42 AM »

Quote
Bigger problem is the fact that no lens can resolve more than about 14 stops of DR, hugely complicated cinema lenses probably even less due to internal reflections.
Quote
What are you basing this claim on?
Quote
Read it on the Internet, of course!  Grin

I believe it myself.

Here is something that I read on the internet about dynamic range limits due to veiling flare (or glare) from lenses, with many references to earlier data.  It suggests that even 14 stops is optimistic with typical scenes, but that with a completely stationary camera and subject there might be techniques to overcome it, like one involving taking multiple images through a "mesh" mask that is carefully moved between frames, and then deconvolution processing based on analysis of the "glare spread function".

https://graphics.stanford.edu/papers/glare_removal/glare_removal.pdf

P. S. An earlier paper with more flare/glare quantification:
http://www.mccannimaging.com/Retinex/Publications_files/07HDR2Exp.pdf
« Last Edit: March 09, 2015, 10:21:30 AM by BJL »
yaya
« Reply #78 on: March 09, 2015, 01:38:47 PM »

Quote
If I am correct in memory, the CCDs used on many MFDBs are interline, which is why some backs use microlenses; if they had used full-frame photogates instead, microlenses would make no sense as the fill-factor would already be 100%

Think you've got this backwards... I'm not aware of ANY digital back ever made using an interline CCD... those were popular in compact cameras and are still popular in small industrial cameras...
DBs use full-frame chips.

BR

Yair

Yair Shahar | Phase One - Mamiya Leaf
e: ysh@mamiyaleaf.com | m: +44(0)77 8992 8199 | yaya's blog
hjulenissen
« Reply #79 on: March 09, 2015, 05:03:55 PM »

Quote
Here is something that I read on the internet about dynamic range limits due to veiling flare (or glare) from lenses, with many references to earlier data.
Thanks. I also found this:
http://www.dpreview.com/forums/thread/3737215
http://www.dpreview.com/forums/post/39269250
http://www.dpreview.com/forums/post/55288242
"Based on simulations, the Canon 20D can record nearly 20 stops of dynamic range using HDR imaging if only a point light source is present. If half of the field of view is covered by an extended source, then only 9 stops of dynamic range can be recorded by the 20D..."

-h