This could well be accurate - from a certain point of view. It all depends on how you count pixels.
A 36MP sensor has around 36 million photosites, each one reading a single R, G or B value. But a single value doesn't give you a pixel in the final image. For that, you need information from at least 3 photosites, and the classic Bayer array uses information from 4 photosites (RGGB) because it fits nicely into a square grid and because human eyes are more sensitive to green wavelengths. So, a 36MP sensor has 18 million green, 9 million red and 9 million blue photosites. But how do you get 36 million distinct pixels out of that, if each pixel requires data from four photosites? Simple - each photosite actually belongs to four different Bayer neighbourhoods (or 9, or 16, depending on the interpolation being used), so the sensor carries roughly 36 million overlapping 2x2 Bayer arrays. This is the 'traditional' way to count pixels.
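To make the overlap concrete, here's a minimal bilinear demosaicing sketch in Python (numpy + scipy; the RGGB layout and the plain 3x3 averaging window are simplifying assumptions - real demosaicers use smarter kernels). Every photosite borrows its two missing channels from windows that overlap its neighbours', which is exactly why N photosites get sold as N pixels:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic.

    Each photosite records one channel, but borrows its two missing
    channels from overlapping 3x3 neighbourhoods, so an N-photosite
    mosaic still yields N full-colour output pixels.
    """
    h, w = raw.shape
    # Masks marking which channel each photosite samples (RGGB tiling)
    r = np.zeros((h, w)); r[0::2, 0::2] = 1.0
    b = np.zeros((h, w)); b[1::2, 1::2] = 1.0
    g = 1.0 - r - b
    window = np.ones((3, 3))  # the neighbourhoods overlap by design
    out = np.empty((h, w, 3))
    for i, mask in enumerate((r, g, b)):
        # Average only this channel's samples inside each 3x3 window
        out[..., i] = convolve(raw * mask, window) / convolve(mask, window)
    return out

# Usage: a fake 6x6 mosaic of sensor readings
rgb = demosaic_bilinear(np.random.rand(6, 6))
print(rgb.shape)  # (6, 6, 3) - one full-colour pixel per photosite
```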
But it's not the only possible way. Monochrome sensors are simple - one photosite equals one pixel. Sigma's Foveon sensors are counted differently: each of the three stacked layers is counted separately, giving a 45MP pixel count (15 million each of R, G and B) even though the final image resolution is 15MP.
There are other ways to reach a nominal 150MP resolution without actually producing a final 150MP image. For instance, Canon is known to have been working on its dynamic range problem. Are they using a more complex interpolation array that combines photosites of different sensitivities to produce a higher-DR final image? Say, 75 million photosites at ISO 100 and another 75 million at ISO 800, or 50 million each at ISO 25/200/1600: 150 million nominal sensor pixels, interpolated down to a final 75 or 50 million image pixels. (Despite its dual pixel AF, and the future potential of a quad-pixel setup turning every pixel into a cross-type AF sensor, Canon has never counted each half of a dual-pixel pixel as a separate pixel in the final pixel count.)
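Purely as a sketch of how such interleaved readings might be merged - the layout, the three-stop gap and every name below are my assumptions, not anything Canon has published; the idea is similar in spirit to Magic Lantern's Dual ISO hack:

```python
import numpy as np

def merge_dual_iso(low_iso, high_iso, gain=8.0, clip=0.95):
    """Combine co-sited ISO 100 / ISO 800 readings into one higher-DR value.

    Hypothetical layout: both arrays are already interpolated onto the same
    grid and normalised to [0, 1]; gain=8 assumes exactly three stops
    between the two sensitivities.
    """
    # Bring the high-ISO reading back onto the low-ISO exposure scale
    high_rescaled = high_iso / gain
    # Where the high-ISO site clipped, only the low-ISO reading survives;
    # elsewhere the high-ISO reading has less visible read noise in shadows
    return np.where(high_iso >= clip, low_iso, high_rescaled)

# Usage: shadows keep the cleaner high-ISO data, highlights fall back to ISO 100
hdr = merge_dual_iso(np.random.rand(4, 4), np.random.rand(4, 4))
```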
In any case, even if you had a 150MP full-frame sensor, there are probably better ways to use all those photosites than simply increasing the raw spatial resolution of a standard Bayer array. You can capture an image at multiple ISOs, increasing dynamic range. You can use colour arrays containing more than just RGB, increasing colour accuracy and providing a way to distinguish, say, pure green light from light that's actually a mixture of yellow and blue (see the toy example at the end of this post), although that would mostly be of technical/scientific interest. You can have pixels set at different polarisations - again, mostly of technical interest, although potentially useful for waterfall photography.

You could even use a setup similar to the defunct Lytro Light Field camera, allowing post-exposure manipulation of the focal plane (or planes), which would be immensely useful in photography. Lytro was certainly ahead of its time in the days of 10MP sensors, since the final image would have been far too low-resolution and noisy, but with 150MP sensors available, one could end up with a very useful 36MP-or-so final image, which you could then refocus as needed in post-processing (see the refocusing sketch below) - think landscapes at 400mm with foreground and background details all in perfect focus, or group portraits with every subject in perfect focus but the background and other distracting elements all blurred to oblivion, as if the lens were wide open.

Canon certainly has the ability to manufacture such a high-resolution sensor - they demonstrated a 120MP APS-H sensor years ago. A full-frame sensor with 150 million photosites (or, rather, 300 million, if each sensor pixel actually contains two photosites in a dual pixel setup) isn't out of the question. But there are better ways to use all those photosites than simply producing gigabyte-sized files (post RAW conversion) and capturing mushy detail through lenses that can't actually resolve 150MP.
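To illustrate the extra-colour-channel point: every sensitivity number below is invented for illustration, but the arithmetic shows how a narrowband green light and a blue-plus-yellow mixture can produce identical RGB readings while a hypothetical fourth band tells them apart:

```python
import numpy as np

# Toy channel sensitivities sampled at three wavelengths:
# 450 nm (blue), 520 nm (green), 570 nm (yellow). Illustrative numbers
# only, not real camera data.
RGB = np.array([
    [0.00, 0.100, 0.40],   # R
    [0.10, 0.175, 0.60],   # G
    [0.80, 0.200, 0.00],   # B
])
EXTRA = np.array([0.20, 0.90, 0.30])  # hypothetical 4th band peaking at green

pure_green       = np.array([0.00, 1.00, 0.00])  # narrowband 520 nm light
blue_plus_yellow = np.array([0.25, 0.00, 0.25])  # metameric mixture

print(RGB @ pure_green)          # -> [0.1, 0.175, 0.2]
print(RGB @ blue_plus_yellow)    # -> [0.1, 0.175, 0.2]: identical RGB
print(EXTRA @ pure_green)        # -> 0.9
print(EXTRA @ blue_plus_yellow)  # -> 0.125: the 4th channel separates them
```

This is metamerism in action: three channels project an entire spectrum onto just three numbers, so different spectra inevitably collide, and each extra band eliminates some of those collisions.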
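And here's the promised refocusing sketch. Shift-and-add over sub-aperture views is the textbook light-field refocusing method, though the function name and parameters are illustrative choices of mine, not Lytro's actual pipeline:

```python
import numpy as np
from scipy.ndimage import shift

def refocus(subapertures, offsets, alpha):
    """Shift-and-add synthetic refocus.

    subapertures: list of 2D images, one per position on the lens aperture.
    offsets:      (u, v) aperture coordinates of each sub-aperture view.
    alpha:        chosen focal plane; alpha = 0 keeps focus at capture depth,
                  other values shift each view in proportion to its offset.
    """
    acc = np.zeros_like(subapertures[0], dtype=float)
    for img, (u, v) in zip(subapertures, offsets):
        # Views from opposite sides of the aperture shift in opposite
        # directions, so their sum is sharp only at the chosen plane
        acc += shift(img, (alpha * v, alpha * u), order=1, mode="nearest")
    return acc / len(subapertures)

# Usage: a 3x3 grid of fake 64x64 sub-aperture views
views = [np.random.rand(64, 64) for _ in range(9)]
offs = [(u, v) for v in (-1, 0, 1) for u in (-1, 0, 1)]
sharp_far = refocus(views, offs, alpha=0.5)
```

Sweeping alpha and keeping the locally sharpest result per pixel gets you the everything-in-focus landscape; a single alpha gets you the group portrait with every off-plane distraction averaged into blur.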