I've been suspicious of these computational imaging systems since dabbling with the Lytro Illum.
Anyone who's tried focus stacking or HDR knows how subtle and involved the process of combining multiple exposures into one image can be, and anything that tries to do it automatically is going to run into problems.
On the Illum, the auto-generated depth map worked OK some of the time, and the computationally generated blur on out-of-focus areas looked OK but nothing like actual lens bokeh. But when it went wrong, which was often, it could be an absolute black hole of time-consuming fiddling to try to fix. Stray hairs, fur, see-through fabrics: anything even slightly subtle and it all went to pot.
Now the L16 is not trying to do light-field imaging, but it's still trying to stitch together a hell of a lot of different lenses and sensors, and I just don't believe it will be able to do so reliably in real-world shooting situations. The best that's likely to happen is something like iPhone panoramas: great when they work, but if they screw up, you're done. I wouldn't use such a new-fangled thing for any critical paid work for several generations, at least.
So it might be interesting, but I'll confidently predict it will be no DSLR killer. A few generations on, with multi-lens phones and the whole software-development weight of Apple and Google behind them, this route might become viable; even then, how many of us pro photographers would rely SOLELY on an iPhone for any mission-critical work? I dunno about you, but I always have more than one system with me on any critical shoot (at the very least a spare body!).
Hywel