Other than verifying that the profile you created is in fact loaded and 'in play', verifying a calibration w/ the same instrument you used to do the original calibration is a waste of time. It's like having a too small measuring cup verify that you just poured out a cup of water from the same cup.
While that logic certainly makes sense, the analogy isn't applicable. Validation tells you whether the profile was able to reliably hit the targets, and there is nothing wrong with using the same instrument to do so. Just because you profile your screen doesn't mean the screen is capable of hitting the specified targets, and if that's the case, validation will tell you where your monitor has failed to perform. In addition, validation tracks performance over time. Through repeated validations you should expect to see similar performance every time. If things start changing (a shift toward blue, luminance dropping faster than normal, the blues getting a little hot), that's indicative of a failure. You don't need a separate instrument to do that kind of validation.
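That drift-tracking idea can be sketched as a simple check of a new validation run against a baseline. Everything here is illustrative: the field names, the white-point/luminance representation, and the tolerance values are all made up for the example, not taken from any real calibration tool.

```python
# Sketch: flag drift by comparing a new validation run to a baseline run.
# Field names and thresholds are illustrative assumptions, not a real tool's API.

def check_drift(baseline, current, xy_tol=0.003, lum_tol=0.05):
    """Return a list of human-readable drift warnings."""
    warnings = []
    # A shift toward blue shows up as the white point's x and y falling.
    if current["white_x"] - baseline["white_x"] < -xy_tol:
        warnings.append("white point shifting toward blue (x)")
    if current["white_y"] - baseline["white_y"] < -xy_tol:
        warnings.append("white point shifting toward blue (y)")
    # Luminance dropping faster than normal backlight aging would explain.
    lum_drop = (baseline["peak_cd_m2"] - current["peak_cd_m2"]) / baseline["peak_cd_m2"]
    if lum_drop > lum_tol:
        warnings.append("peak luminance dropping faster than normal")
    return warnings

baseline = {"white_x": 0.3127, "white_y": 0.3290, "peak_cd_m2": 120.0}
current  = {"white_x": 0.3085, "white_y": 0.3290, "peak_cd_m2": 118.0}
print(check_drift(baseline, current))
```

Run against the same instrument every time, the comparison stays internally consistent even if the instrument has an absolute bias, which is exactly the point being made above.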
It's like having a too small measuring cup verify that you just poured out a cup of water from the same cup.
That's actually quite useful. If you can't really see what you're pouring the water into, you pour a cup out, and pouring it back gets you only 3/4 of a cup, that provides extremely useful information about your target cup. If you run the validation again and still get only 3/4 of a cup, you know the target cup somehow shrank. If you immediately take another measurement (to make sure there wasn't a problem with your validation) and now get only 1/2 a cup back, you can deduce that there is most likely a hole in the target cup.
Given the relatively poor device-to-device tolerances indicated by Ethan's test, I'd say that unless you have a Konica Minolta CA 310 or a PR 730/735 sitting on the shelf, you're better off using the same instrument to do your validation, as that will give you better consistency and a better indication of when something on your monitor is starting to drift out of spec. Now there is one flaw there, and that is: how do you know your test instrument isn't falling out of spec? Actually, you'd have the same problem even if you used a different instrument. The way you can tell a single instrument is out of spec is if you start seeing the same types of errors popping up across multiple screens, as it's unlikely all of them would fail in the exact same way at the exact same time.
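That cross-screen sanity check can be sketched the same way: an error pattern that appears on every screen at once points at the instrument, while an error unique to one screen points at that display. The screen names and error labels below are invented for the example:

```python
# Sketch: distinguish a drifting instrument from failing displays.
# If the same validation error shows up on every screen at the same time,
# the shared instrument is the more likely culprit. Names are illustrative.

def suspect_instrument(errors_by_screen):
    """errors_by_screen maps screen name -> set of error labels.
    Returns the errors common to ALL screens (likely instrument drift)."""
    error_sets = list(errors_by_screen.values())
    if not error_sets:
        return set()
    return set.intersection(*error_sets)

errors = {
    "studio_monitor": {"blue shift", "hot blues"},
    "laptop_panel":   {"blue shift"},
    "client_display": {"blue shift", "low luminance"},
}
print(suspect_instrument(errors))
```

Here all three screens report the same blue shift, so the sketch flags it as a probable instrument problem rather than three simultaneous, identical panel failures.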
Cheers, Joe