Foreshadowing back when..
This thread, dedicated to 32-bit float recording in itself but not addressing the details of how it is implemented in specific recorders, won't really be of much practical interest to tapers.. other than being useful to dispel some academic misunderstandings about what it can and can't do. In other words, it will be mostly academic, because what really matters is how it's implemented in each specific recorder in question.
That's where the rubber meets the road and where all the current confusion lies!
jerryfreak, nice experiment. Thank you for it.
I don't agree with the last part of the final statement (in boldface); it even seems propagandistic to me--an attempt to make the shifting digital noise floor seem like a virtue when it isn't one. I don't mean that it's necessarily a defect, either, because if it's low enough at all times, no one will hear it shifting. But "it successfully evades detection" is the best that can be said about it if so.
--best regards
(Bolding above is my emphasis) Of course, back then we were speculating on where potential issues could arise. The current hullabaloo about noise-floor windowing artifacts during the hand-off between multiple ADCs is a specific aspect of this; DSatz called out the noise-shift issue quite specifically. Going back and reading his comment about
successfully evading detection brings to mind another thought on a somewhat deeper level..
There is a deeper, fundamental change going on in this shift to multi-ADC designs: a move away from a recording system that is fully agnostic/isotropic with respect to whatever content fits within its bandwidth, toward one that gives that up to achieve some alternate quality deemed more valuable for the intended use case, even though doing so introduces problems for less-common uses. If evading detection of the ADC switching artifacts can be successfully arranged, then for recordists who value not having to set levels far more than they value a truly isotropic data set, such a trade-off is likely to be a welcome one - it's good enough for their purposes by definition, and they gain a quite welcome new feature. The scheme may still not be good enough for outlier uses where a more fully isotropic data set is required - more esoteric applications such as recording ultrasonic signals, dramatically pitch-shifting content, or whatever.
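To make that hand-off artifact concrete, here's a minimal toy simulation - not any manufacturer's actual algorithm; the gain split, noise floors, switching threshold, and hard switch are all hypothetical assumptions - showing how the recorded noise floor steps up when a dual-ADC front end hands off from its sensitive path to its padded path:

```python
import numpy as np

# Toy model of a dual-ADC "32-bit float" front end: a high-gain path for quiet
# passages and a padded low-gain path for loud ones, merged by level.
# All values here are hypothetical; real recorders crossfade and differ in
# their specifics.

fs = 48_000
t = np.arange(fs) / fs
amp = np.where(t < 0.5, 0.001, 0.5)            # quiet first half, loud second half
signal = amp * np.sin(2 * np.pi * 1000 * t)

rng = np.random.default_rng(0)
hi_path = signal + rng.normal(0.0, 1e-6, fs)   # sensitive path: low noise floor
lo_path = signal + rng.normal(0.0, 1e-4, fs)   # padded path: ~40 dB higher noise

# Hand-off on the (known) signal envelope; the quiet path is used until the
# level approaches its clip point, then the padded path takes over.
threshold = 0.1
merged = np.where(amp < threshold, hi_path, lo_path)

# The float file never clips, but the residual noise floor shifts with level.
for name, sl in (("quiet half", slice(0, fs // 2)), ("loud half", slice(fs // 2, fs))):
    residual = merged[sl] - signal[sl]
    print(f"{name}: noise floor ≈ {20 * np.log10(np.std(residual)):.0f} dBFS")
```

The exact numbers aren't the point; the point is that the recorded noise floor becomes a function of program level, which is exactly the kind of shifting behavior DSatz was describing.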
The machine has become specially optimized for certain uses, at the cost of being less optimized for others. Of course, many other features of any recorder are optimized for its intended use, but the digital recording itself has not previously been.. unless recording to a lossy format.
I'm reminded of the development of things like noise-shaped dither and psychoacoustically tuned lossy data compression. Those are useful tools, the successful implementation of which required careful determination of how much fidelity to the source is needed before sacrificing what lies beyond the limits of perception. It's an optimization for certain use cases, which makes for a tricky game of determining where those limits might be, how they might differ for different uses, and how close one is willing to get to them in seeking to leverage human hearing perception to advantage. Noise-shaped dither moves the bulk of the noise to where it is perceptually less obvious. Lossy codecs minimize storage requirements in part by discarding data deemed perceptually irrelevant. The fundamental shift is from a complete data set that contains extraneous information to one that is perceptually equivalent yet not fully complete. Fundamentally this is the same philosophical calculus, based upon making a decision about what matters and what doesn't. If well implemented, it's not going to be a problem for most, but it may be for some. It's a sacrifice of true fidelity for all use cases, in exchange for easier use in the intended target use cases.
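As an illustration of the dither side of that analogy, here's a minimal first-order error-feedback quantizer sketch - illustrative only; production noise shapers use higher-order, psychoacoustically weighted filters, and the function name and parameters here are my own assumptions:

```python
import numpy as np

def quantize_16bit(x, shaped=True, seed=0):
    """Quantize a float signal (full scale ±1.0) to 16-bit steps with TPDF
    dither and optional first-order error-feedback noise shaping, which
    pushes the quantization error toward high frequencies where hearing is
    less sensitive."""
    rng = np.random.default_rng(seed)
    q = 1.0 / 32768.0                  # one 16-bit LSB
    out = np.empty_like(x)
    err = 0.0
    for n, sample in enumerate(x):
        # first-order shaping: subtract the previous quantization error
        target = sample - (err if shaped else 0.0)
        dither = (rng.random() - rng.random()) * q   # TPDF dither, ±1 LSB peak
        y = np.round((target + dither) / q) * q
        err = y - target
        out[n] = y
    return out

# Usage: quantize a quiet 1 kHz tone both ways and compare the error spectra.
fs = 48_000
t = np.arange(fs) / fs
tone = 0.001 * np.sin(2 * np.pi * 1000 * t)

flat = quantize_16bit(tone, shaped=False)
shaped = quantize_16bit(tone, shaped=True)
# An FFT of (flat - tone) versus (shaped - tone) shows the shaped error rising
# toward Nyquist and dropping in the midband where ears are most sensitive:
# the total noise power isn't lower, it has just been moved somewhere less
# audible - the same "perceptually equivalent, not fully complete" trade.
```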