BJL,
Yes the glass is half full
The debate about the maximum stepper field size is an old one, and in fact Canon has in the past publicly acknowledged having some humongous stepper of its own manufacture, though usually for an older process feature size.
My feeling is that there are some larger custom steppers floating around, self-built at Canon or made to order for someone like Sony (maybe by Nikon?), but that this is a non-problem because by now the industry so frequently needs large chips that there are accepted workarounds.
As regards research costs etc., I had a talk some years ago with the CEO of Aptina, when that company was still around. I guess he had an understanding of the industry. He told me that basically everything is now driven by cellphones, and the research is done for that purpose and amortized there across hundreds of millions of sensors; in his view it is the innovation made in cellphones that then gets recycled upstream.
As we are having a tech discussion, I guess I'm allowed to say that sensors are a bit like (maybe even derived from) the RAM I was taught about in school: a type of IC that can be made by a few good designers and very, very careful process control. The design is really all about the basic cell plus some control circuits on the periphery, and a small standout design team of a few people can create good cells in comparatively little time; I guess when it works, it really works (no hard bugs). Process control is where you get your yield, and where the real battle is fought.
I don't think the incremental design cost of a new sensor that iterates an existing cell structure on a tested process is much more than three people for 3-4 months. Call it a million dollars.
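As a back-of-envelope check on that "3 people, 3-4 months, call it $1M" figure, here is a small sketch. All the numbers (loaded engineer cost, tooling/test lump sum) are my own assumptions for illustration, not sourced data:

```python
# Rough sanity check of the "3 people, 3-4 months ~ $1M" estimate.
# All cost figures below are assumptions for illustration only.

engineers = 3
months = 4
loaded_cost_per_engineer_year = 350_000  # assumed salary + overhead, USD

# Labor: 3 people for 4 months is one engineer-year of effort.
labor = engineers * months / 12 * loaded_cost_per_engineer_year

# Assumed lump sum for EDA licenses, simulation time, and test-wafer runs.
tools_and_test = 500_000

total = labor + tools_and_test
print(f"labor ~ ${labor:,.0f}, total ~ ${total:,.0f}")
# -> labor ~ $350,000, total ~ $850,000
```

With these assumed inputs the estimate lands in the right order of magnitude, which is all the original claim requires.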
Assume this is about "Sony" sensors, with Sony control logic, from the "Sony" internal cell library (not a Nikon special or such). We've all been there, at least in the hacker movies, right? You have your cell library, you lay out the core with the sensor cell array, and you decide what readouts and control logic to fit where on the edges; as long as you're careful to stay within the functions with which your colleagues populated the library, I guess you're OK and you get your base functionality. Then -I'm guessing- you have to figure out clocking and frame-rate issues, and change the layout for better clock distribution, cleaner signal readouts, noise immunity, and all that analog stuff (I sure don't understand all those buzzwords, but they sound good). I guess the simulator will tell you how well the chip is working, and testing will tell you more, but by the time your team has done this a few times you don't get many surprises - after all, this is what your company does for a living! If something isn't working as well as it should, and it's a bottleneck, you try to locate the engineer who designed that cell in the library, and maybe ask for a budget and tune the cell. There are probably regular test wafers run through a live process, so the designers can get fairly quick ground truth on what they're doing.
I think that large run-of-the-mill in-house sensors at Sony are not now any more painful or expensive to design than any other type of ASIC, at least for their employees who do this on a regular basis. The "research cost" is really the time of the engineers who use the cell library, plus engineering workstations, simulators, and testing facilities. I believe that the cell library and process are already almost completely determined by the cellphone trade. There is little R&D going on here, just a commercial decision on what products can be economically offered to best capitalise on base development costs that have already been written off.
In summary, as I see it -possibly wrongly- a new large chip is *today* a spec sheet fleshed out by a few months' work by a small team. The cost of the chip is basically "just" fab and testing cost. It didn't use to be that way: before cellphones came along, the camera industry had to pay for basic camera-chip R&D in those days, and even worse, camera designers had to deal with a host of analog design issues which couldn't be resolved quickly. Sony's drop-in sensor technology has done away with those days when engineers really needed to engineer.
Edmund
PS. I don't know why the 40MP cameras are more expensive than the 20MP class, but my feeling is that this is partly because they are lower-volume parts, partly because a newer process needs to be used, partly because of yield issues involving the CFA and the more recent equipment the CFA requires, but above all because of a historical industry mantra that cameras are priced by the megapixels, just as computers used to be priced by clock rate, and now by the number of cores. A price signal has been adopted because the consumer is willing to accept it. Companies also need a high-priced model made of unobtainium to get consumers to value the midrange, and to demonstrate and market-test new features - think of the motor drive in film-camera days. This is my feeling; maybe someone in the know will speak.
About the cost of the 100MP 44x33 sensor to Fujifilm, I think Edmund is being very optimistic (as usual).
One point is that a few years ago Canon introduced a stepper/scanner that can make sensors up to just over 36x24mm without on-wafer stitching, so the big cost barrier that used to separate APS-C (and Canon’s intermediate “APS-H” format) sensors from 36x24 is now instead between 24x36 and 33x44.
Update: a reference: https://global.canon/en/product/indtech/semicon/fpa6300esw.html
The size limit is 33x42.2mm; was it wickedly chosen to exclude 33x44, which in turn was perhaps chosen as the biggest 4:3 format that can be made on the former largest field size of 26x33mm with a single stitch, avoiding the 2D stitching needed for 53.4x40 sensors? Will one-stitch 42x56 sensors (true 645 full frame) be the next big thing!?
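The single-stitch geometry in that question can be checked with a few lines of arithmetic. The 26x33mm field size is taken from the post itself, not independently verified; seam overlap is ignored for simplicity:

```python
# Sanity check: is 33x44 the biggest 4:3 sensor that fits in one stitch
# of a 26x33 mm exposure field? (Field size per the post; overlap ignored.)
field_h, field_w = 33.0, 26.0   # single exposure field, mm

# One stitch joins two fields edge-to-edge along the 26 mm side,
# giving a maximum stitched area of 33 x 52 mm.
stitched_w = 2 * field_w        # 52.0 mm

# Largest 4:3 sensor whose short side fills the 33 mm field height:
short_side = field_h             # 33.0 mm
long_side = short_side * 4 / 3   # 44.0 mm
print(short_side, long_side, long_side <= stitched_w)
# -> 33.0 44.0 True
```

So a 33x44 sensor does fit in two stitched 26x33 fields with room to spare, consistent with the speculation above, while a 53.4mm-long side would exceed the 52mm single-stitch width.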
Also, the volume will optimistically be in the tens of thousands, whereas entry-level 24MP 24x36 sensors could sell about a million units. If R&D costs are a few tens of millions, the cost-recovery part of the factory-door sensor price could be in the tens of dollars for the 24MP 24x36 but in the thousands for any new 33x44 (and many thousands for any new 54x40, in an even lower-volume market).
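The amortization behind those "tens of dollars" versus "thousands" figures is simple division; here is the calculation with illustrative numbers ($30M R&D, 1M units versus 20k units, both assumed for the sake of the example):

```python
# Illustrative R&D amortization per sensor; all inputs are assumptions.
def rnd_per_unit(rnd_cost: float, units: int) -> float:
    """R&D cost that must be recovered per sensor sold."""
    return rnd_cost / units

# Assumed: $30M R&D spread over ~1M entry-level 24x36 sensors,
# versus the same R&D over ~20k units of a 33x44 sensor.
print(rnd_per_unit(30e6, 1_000_000))  # -> 30.0   (tens of dollars)
print(rnd_per_unit(30e6, 20_000))     # -> 1500.0 (thousands of dollars)
```

The fifty-fold volume gap alone accounts for the per-unit cost gap, before any difference in wafer or yield costs.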
For a hint of how factors like sales volume affect price, the big price premium for 24x36 cameras with higher pixel counts over those in the higher-volume 24MP sector is suggestive. Edmund might suggest that cameras like the Nikon Z7 are lower-volume sellers only due to "incomprehensible and absurd overpricing", but on price versus volume, I think he gets it back-to-front.