I looked at the Scientific American blog article. It is well written by an expert in digital imaging. The author does not adjust the monitor to match the print, [...]
Not only does the author adjust the monitor to match the print, he repeatedly states that this is the purpose of the calibration he is discussing.
The author says exactly what I've said: contrast (gamma) and color temperature recommendations are for "If you do mostly print work". The brightness of the monitor is a matter of ambient light, but the recommendations for that are also geared towards grading displayed images for printing.
The article repeatedly makes the point that calibration is specific to matching printer output.
[...] but rather uses hardware calibration just like Mr. Schewe and most other experts.
The hardware calibration equipment does not determine what brightness, contrast, and color temperature settings should be. Those are set manually and the hardware measures how closely they can be matched.
The OP has not indicated being equipped to do a proper hardware calibration. It's wonderful that you and I and everyone who claims expertise can and do, but that doesn't help the OP. Whether he needs such equipment is a different discussion.
And lacking hardware to measure how close the printer is to standard and how close the monitor is to the selected parameters, the OP has little choice but to obtain a standard print to use for manual comparisons. He absolutely will not get the more accurate results that proper hardware can produce, but he can at least get close enough to get on with his work.
The SciAm author uses gamma 1.8 and a white point of D50 (5000K), whereas Schewe uses D65 and gamma 2.2. Otherwise their approaches are similar.
Rather clearly those settings calibrate the monitor to match the way a print will look! That is, he is doing exactly what I described. And what he has said is exactly what I said.
The hardware device used to calibrate the monitor doesn't choose the parameters for those characteristics; they are set manually. The hardware device then measures the linearity, and therefore the color accuracy, and provides a lookup table to maintain correct colors. (And I'm sure you are aware of that, as is Schewe.)
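To make that concrete, here is a rough sketch of what such a lookup table does. This is purely illustrative, not any vendor's actual calibration code, and the measured/target gamma values are assumptions chosen for the example:

```python
# Hypothetical sketch of a 1-D calibration lookup table (LUT).
# Assumes the instrument measured the monitor's native response as
# gamma 2.4, while the manually selected target is gamma 2.2.

MEASURED_GAMMA = 2.4   # what the hardware measured
TARGET_GAMMA = 2.2     # what was set manually

def build_lut(size=256):
    """Build an 8-bit LUT that remaps input values so the monitor's
    measured response produces the target gamma curve."""
    lut = []
    for i in range(size):
        x = i / (size - 1)                    # normalized input, 0..1
        desired = x ** TARGET_GAMMA          # light output we want
        corrected = desired ** (1.0 / MEASURED_GAMMA)  # input needed
        lut.append(round(corrected * (size - 1)))
    return lut

lut = build_lut()
# Mid-gray inputs get nudged upward, since the monitor's native
# gamma 2.4 would otherwise render them darker than the 2.2 target.
print(lut[128])
```

The key point stands: the device doesn't decide the target gamma; it only measures how far the monitor is from whatever target was chosen manually, and writes a table to close the gap.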
The disagreement isn't on what is being done, it's on what perspective to put it in so that someone can best learn how it works and how to manipulate it for effect. Note for example that I recommended the OP start with a gamma of 2.4, which would darken the monitor display. His complaint is that the print is darker than his monitor, and the higher gamma compensates for that. He might well find that with a lower brightness, something like the gamma 1.8 recommended in the cited article will be more appropriate for his needs.
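To see why a higher gamma darkens the display, here is a back-of-the-envelope sketch using the simple power-law model (relative light output = signal ** gamma); the specific signal value is chosen only for illustration:

```python
# How the same mid-gray signal renders under different gamma settings.
# Relative light output = signal ** gamma, for a normalized signal 0..1.

signal = 0.5  # mid-gray input

for gamma in (1.8, 2.2, 2.4):
    output = signal ** gamma
    # Higher gamma -> lower output at the same signal, i.e. darker
    # midtones, which is the compensation described above.
    print(f"gamma {gamma}: relative output {output:.3f}")
```

Gamma 2.4 renders that mid-gray noticeably dimmer than gamma 1.8 does, which is why raising the gamma setting pulls the monitor's midtones down toward a dark print.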
The image is almost always edited, either by the rendering software or by the artist. See Karl Lang.
But not the "standard test image". That is accepted as correct to start with (and is usually generated on the fly by the hardware calibration equipment), and the question is how to get the display device to correctly display a correct image. We don't need to look at it on a monitor when calibrating a printer, nor do we need to print it when calibrating a monitor... except that, as noted in the article I cited (and as you clearly agreed above), when calibrating a monitor it is necessary to set the brightness, the contrast, and the color temperature manually to match the desired output characteristics, which in this case is specifically to match prints.
What Schewe specified is very commonly used to view web pages targeted at sRGB. Note that the cited article specifically says that is not the correct choice when calibrating for print editing. Note also that I discussed exactly that earlier too, and while the values I suggested to the OP are different from those specified in the article, they are very close (different because of what the OP stated, and because they were intended as a starting point to see if that helped his situation).