I just worked an example for you where I took the samples as acquired by whatever "ADC" and was still able to accomplish the linear approximation in the L2 sense. Please note that I worked out this example very quickly; it is not the best, there are certain hacks in there, and it can be made better, but it is just to illustrate that you don't need to change ADCs.
The L2-approximated one looks sharper than straight linear interpolation, though with a lot of ringing artifacts. But as I said, it was done in a "shortcut" way and there is hope to make it better. The point is that I did not change any ADCs.
OK, I'll bite!
So, let's forget the L2-norm and say, instead, that we assume the (continuous) image on the sensor (yeah, Lenna was scanned, work with me here) is piecewise linear and a remarkable piece of luck would have it that the pieces are all between the nearest pixel centers :-)
So, what would that look like if we assume that the acquired pixels represent the averages of the function? That is, we assume pixels are noise-free, the fill factor is 100%, etc.?
As attached I would think. I hope you can see which is which ;-)
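For the record, under the pixels-are-averages assumption each sample is the integral of the piecewise-linear function over a unit-wide pixel, which works out to weights (1, 6, 1)/8 on the three nearest node values: the left half-pixel averages to (1/4)v[k-1] + (3/4)v[k], the right half to (3/4)v[k] + (1/4)v[k+1], and the mean of the two halves gives (v[k-1] + 6v[k] + v[k+1])/8. That is the kernel the code below inverts. A quick numeric check (Python here as a stand-in, with made-up node values):

```python
import numpy as np

# Node values of a random piecewise-linear function at integer pixel centers.
rng = np.random.default_rng(0)
v = rng.uniform(0.0, 255.0, size=64)

def pixel_average(v, k, n=20001):
    # Average of the piecewise-linear interpolant over pixel [k-1/2, k+1/2],
    # via trapezoidal quadrature on a fine grid (exact here, since the grid
    # hits the kink at x = k and the function is linear between samples).
    x = np.linspace(k - 0.5, k + 0.5, n)
    f = np.interp(x, np.arange(len(v)), v)
    return np.sum((f[:-1] + f[1:]) * 0.5 * (x[1] - x[0]))

# Closed form from integrating the two linear pieces:
# average = (v[k-1] + 6*v[k] + v[k+1]) / 8.
for k in range(1, len(v) - 1):
    closed = (v[k - 1] + 6.0 * v[k] + v[k + 1]) / 8.0
    assert abs(pixel_average(v, k) - closed) < 1e-3
print("pixel averages match the (1, 6, 1)/8 weighting")
```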
Here is the code, which should work in Matlab or any halfway decent clone (the above was done in an ancient Octave):
% Load image.
img1 = double(imread('lena512.png'));
% Truncate an inverse (use a window to suck less).
F = 1024;
t = [zeros(1,100), [1, 6, 1]/8, zeros(1,F - 103)];
T = real(ifft(1./fft(t)));
m = F - 100;
f = T(m-5:m+5);
% Apply inverse.
img2 = conv2(img1, f, 'same');
img2 = conv2(img2, f', 'same');
% Upscale N times (original image in the example).
N = 4;
up = img1;
[H,W] = size(up);
up = reshape(up, [1, H, 1, W]);
up(N,1,N,1) = 0;
up = reshape(up, [H*N, W*N]);
% Filter using desired kernel (bilinear kernel in example).
k = [1, 2, 3, 4, 3, 2, 1]/4; % Bilinear
%k = [1, 1, 1, 1]; % Nearest neighbour
up = conv2(up, k'*k, 'same');
% Write result back.
imwrite(uint8(up), 'lena-out1.png');
[attachment=19971:lena_test.png]
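For anyone without Octave handy, here is a Python/NumPy/SciPy sketch of the same pipeline, with a random array standing in for lena512.png so it runs by itself. One deliberate difference: it zero-stuffs the deconvolved img2 rather than the original (swap in img1 to reproduce the plain-bilinear reference), and np.clip plays the role of Matlab's saturating uint8 cast.

```python
import numpy as np
from scipy.signal import convolve2d

# Stand-in for lena512.png so the sketch runs without the file.
rng = np.random.default_rng(1)
img1 = rng.uniform(0.0, 255.0, size=(64, 64))

# Truncate an inverse of the (1, 6, 1)/8 averaging kernel, as in the
# Octave code: invert it on a long FFT grid and keep 11 taps around the
# peak (a window instead of plain truncation would behave better).
F = 1024
t = np.zeros(F)
t[100:103] = np.array([1.0, 6.0, 1.0]) / 8.0
T = np.real(np.fft.ifft(1.0 / np.fft.fft(t)))
m = F - 101                        # peak of the inverse (0-based)
f = T[m - 5:m + 6]

# Apply the inverse separably (rows, then columns).
img2 = convolve2d(img1, f[np.newaxis, :], mode='same')
img2 = convolve2d(img2, f[:, np.newaxis], mode='same')

# Upscale N times by zero-stuffing, then filter with a bilinear kernel.
N = 4
H, W = img2.shape
up = np.zeros((H * N, W * N))
up[::N, ::N] = img2                # deconvolved image; use img1 for reference

k = np.array([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0]) / 4.0   # bilinear, N = 4
up = convolve2d(up, np.outer(k, k), mode='same')

# Saturate to 8 bits, as uint8() does in Matlab.
out = np.clip(up, 0.0, 255.0).astype(np.uint8)
print(out.shape)
```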