AFAIK, there are only two people posting here who had any experience, under NDA, speaking with Adobe engineers like Simon Chen and Eric Chan about the development of Enhanced Detail last year as it was being developed and tested. Here are some facts — as much as I can provide under NDA — and the facts can be corroborated by other beta testers.
Adobe didn't wake up one day and think, "let's produce a new rendering process that forces people to take the time to convert raw files to linear DNG and use extra disk space" because they felt those two attributes were good for customers' workflows per se. They did it that way because that's the way, for now, the feature had to be implemented. The "why did Adobe do this" question was asked last year. What happens in the future may change. Asking today why this was implemented as it was — especially by people who don't have a clue about the ACR processing code, or by marketing people who work for Adobe's competitors and may or may not know how their own software actually works — is as silly as me asking why my 2017 Mazda CX5 doesn't run on electricity or why my iPhone X doesn't receive 5G.
Adobe didn't claim this feature is for everyone or works well on every image. Their testing of Enhanced Detail with Siemens star resolution charts showed a 30% increase in resolution (NOT an increase in pixel count!). They claimed no more than that, and they didn't state it was true for all images or for every camera that captures this data.
There is of course a before-and-after preview, at the ideal zoom ratio, for examining whether it's worthwhile converting the data to a linear DNG, so users can decide on a case-by-case basis whether they wish to convert and use Enhanced Detail.
Some of the same people asking "why is this?" are the ones who spent post after post, page after page, defending Topaz Labs' claim that they convert JPEG to raw and can edit a JPEG as if it were a raw, without any evidence to back those claims up. Adobe can defend the claim that in some captures, Enhanced Detail will enhance the detail. As can users. They cannot "defend" why one must convert the data to a linear DNG beyond the degree to which they explained their processing to those under NDA. Nor can they "defend" the fact that if you use LR or Photoshop to convert a wide-gamut image to sRGB, you'll clip colors. That's how it works, kids. Unlike Topaz, they will not use marketing shills to claim they can produce stuff that doesn't exist, with processing that can't be backed up.
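On the gamut-clipping point: here's a minimal sketch in plain Python (using the standard CIE XYZ → linear sRGB matrix; the sample color values are hypothetical, chosen to sit inside a wide gamut like ProPhoto but outside sRGB) showing why converting a wide-gamut color to sRGB simply clips it:

```python
# Gamut clipping demo: a saturated color (CIE XYZ, D65) outside the sRGB
# gamut yields an out-of-range channel value after conversion, which a
# convert-and-clip pipeline truncates to [0, 1] -- losing color information.

# Standard XYZ -> linear sRGB matrix (IEC 61966-2-1, D65 white point).
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def xyz_to_linear_srgb(xyz):
    """Matrix-multiply an XYZ triple into linear sRGB channel values."""
    return [sum(m * c for m, c in zip(row, xyz)) for row in XYZ_TO_SRGB]

def clip(rgb):
    """Truncate each channel to the displayable [0, 1] range."""
    return [min(1.0, max(0.0, c)) for c in rgb]

# A highly saturated green (hypothetical sample values).
xyz = [0.20, 0.60, 0.10]
rgb = xyz_to_linear_srgb(xyz)
print(rgb)        # red channel comes out negative: outside the sRGB gamut
print(clip(rgb))  # after clipping, that out-of-gamut saturation is gone
```

Once the negative channel is clipped to 0, distinct out-of-gamut greens collapse onto the same sRGB value — that information is gone for good, which is exactly why the clipping behavior isn't something Adobe needs to "defend."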
Now, this is a new feature and it will evolve. One issue is the amount of processing power and OS support needed today (simply examine the OS requirements for Enhanced Detail to work — one qualifying OS was released only a few months ago). Few here have any experience producing software. I have a little. Yes, it's possible that ED (Enhanced Detail for short) could eventually be produced directly from the raw without a DNG intermediate — but what if only 8% of the user base had hardware support for it? Wouldn't fly well, now would it? What happens in the future happens. TODAY, if you want to use ED, you convert to a linear DNG, and if you're smart, you view the preview first instead of doing a batch convert. How you'll handle this in a year is your guess. Some of us will know before then.