I don't think this is how linear quantization works - it's 1 bit per 6 dB of dynamic range, and that's fixed. So by going to 24-bit you're not dividing the same dynamic range into more bits, you're adding more dynamic range by (theoretically, anyways) decreasing the noise floor of the A/D chip. At least, that's how I understand it...
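For what it's worth, here's a quick Python sketch of that picture for an ideal linear quantizer (real converter noise is ignored, so treat it as the theoretical best case):

```python
import math

FS = 1.0  # normalize full scale to 1.0

for bits in (16, 24):
    lsb = FS / 2 ** bits                # size of one quantization step
    floor_db = 20 * math.log10(lsb)     # where that step sits relative to full scale
    print(f"{bits}-bit: LSB = {lsb:.2e} x FS, at {floor_db:.1f} dBFS")

# 16-bit puts the smallest step near -96 dBFS; 24-bit puts it near -144 dBFS.
# Same full scale, finer steps: the extra bits lower the theoretical floor
# rather than stretching the top.
```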
Nah. There IS MORE RESOLUTION as far as I understand this. How about some math...
So, can we at least agree on this to start? 16-bit has 96 dB of dynamic range and 24-bit has 144 dB of dynamic range?
If so, then here's how I do the math, but maybe I'm missing something here?
16-bit: 2^16 = 65,536 unique values possible; 96 dB of range / 65,536 values = .0015 dB per step
24-bit: 2^24 = 16,777,216 unique values possible; 144 dB of range / 16,777,216 values = .0000086 dB per step
So, whether you're using the whole range or not, the fact is that there is roughly 170 times the resolution happening at 24- vs. 16-bit, which is why when you normalize (aka zoom in) it holds up much better (see SD example above). It's also one of the reasons there's not nearly as much need to run hot: at 24-bit, I can run 10 dB under you running at 16-bit, then normalize, and STILL end up having more resolution than you had running your peaks perfectly hot up to 0 dB at 16-bit. This example of course ignores the quality of the ADC, but that's a different subject.
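Here's a rough Python sketch of that level-counting argument (ideal converters assumed, converter noise ignored):

```python
FULL_16 = 2 ** 16   # levels available at 16-bit full scale
FULL_24 = 2 ** 24   # levels available at 24-bit full scale

# Peaks 10 dB below full scale only reach 10^(-10/20), about 31.6% of the range.
used_24 = FULL_24 * 10 ** (-10 / 20)

print(f"16-bit, peaks at 0 dBFS:   {FULL_16:>10,} levels")
print(f"24-bit, peaks at -10 dBFS: {used_24:>10,.0f} levels")

# The -10 dB 24-bit take still spans about 5.3 million levels, far more
# than a 16-bit take with peaks exactly at 0 dBFS can ever use.
```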
Am I understanding this correctly? I'm not trying to represent myself as someone who really knows this stuff, but I'm trying awfully hard, and I think I get it...
We've been through this exercise before*** and it's meaningless to divide the number of dBs of dynamic range by the number of levels. They both represent the same thing. One of the numbers is on a linear scale and one of the numbers is on a logarithmic scale. Each additional bit used in a binary number adds exactly 20 log 2 dB of dynamic range and it doubles the number of possible levels that can be expressed. That's all there is to the math. Try this:
Let's start out with 2, raise it to the 16th power, take its log, and multiply by 20.
20 log (2^16) = 20 log 65536 ≈ 96.33
(the 16 is the bits, the 65,536 is the levels, and the 96.33 is the dBs)
That's all there is to the math, and dividing dBs by bits, or bits by levels, or levels by dBs doesn't mean anything. They are all ways of representing the same concept, and that is what kind of resolution you get. The more bits, the more possible levels you get. The more bits, the more dBs of dynamic range you get. (In fact, you get approximately 6.0206 dB of dynamic range per additional bit, so just multiply the number of bits by 6.0206 to get the possible dynamic range. We usually round that off to 6 dB per bit because it's easier to do the arithmetic.)
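A minimal sketch of that same identity, with nothing in it beyond 20 log:

```python
import math

DB_PER_BIT = 20 * math.log10(2)    # ~6.0206 dB per additional bit

for bits in (8, 16, 24):
    levels = 2 ** bits                  # the linear-scale view
    dynamic_range = bits * DB_PER_BIT   # the log-scale view: 20*log10(levels)
    print(f"{bits:>2} bits = {levels:>10,} levels = {dynamic_range:6.2f} dB")

# Each extra bit doubles the levels and adds the same ~6.02 dB; levels and
# dB are one quantity viewed on linear vs. logarithmic scales.
```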
*** Look here for where this same topic was discussed previously:
http://taperssection.com/index.php/topic,77804.msg1037087.html#msg1037087

Nobody is arguing that 24-bit converters have the same resolution as an ideal 18-, 19-, or 20-bit converter. (Obviously a 24-bit converter has more resolution than any converter with a smaller number of bits in the encoded values it produces.) What we are arguing is that most 24-bit converters have no more accuracy than an ideal 18-, 19-, or 20-bit converter. The reason they have no more accuracy is that the lower 4, 5, or 6 bits are indistinguishable from the results you'd get by flipping a coin and calling heads a 1 and tails a 0. It's as if noise is being added to the signal you are encoding, and the amplitude of the noise occupies the lower 24, 30, or 36 dB of the dynamic range.
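Here's a small simulation sketch of that coin-flip picture, using the 5-noisy-bit case as the example (the noise model is a simplification: uniform converter self-noise spanning the bottom 5 bits' worth of the range):

```python
import math, random

random.seed(1)
BITS  = 24    # nominal converter width
NOISY = 5     # assume the bottom 5 bits' worth of range is noise

def rms_error(noise_lsbs, trials=100_000):
    """RMS encoding error when noise spanning `noise_lsbs` LSBs rides on the input."""
    scale = 2 ** (BITS - 1)
    total = 0.0
    for _ in range(trials):
        x = random.uniform(-1.0, 1.0)                    # "true" input value
        noise = random.uniform(-0.5, 0.5) * noise_lsbs   # converter self-noise, in LSBs
        code = round(x * scale + noise)                  # quantize signal + noise
        total += (code / scale - x) ** 2
    return math.sqrt(total / trials)

ideal = rms_error(0)
noisy = rms_error(2 ** NOISY)
print(f"extra error from converter noise: {20 * math.log10(noisy / ideal):.1f} dB")

# Prints roughly 30 dB: the noise eats the bottom 5 bits x ~6 dB of the
# range, so the 24-bit encoding delivers about 19 bits' worth of accuracy.
```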
Bottom line: Resolution does not equal Accuracy, and it is Accuracy that determines the S/N of the recordings we make and ultimately sets the achievable dynamic range of the recording. Notice I'm talking about the dynamic range of the recording, not the dynamic range of the encoding scheme. There's a difference, and in the case of today's 24-bit A/D converters, it's a big difference.
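That gap between resolution and accuracy is what the standard ENOB (effective number of bits) figure captures: ENOB = (SINAD - 1.76) / 6.02 for a full-scale sine. A quick sketch, where the 115 dB SINAD is just an assumed spec for illustration (check your converter's datasheet for the real number):

```python
sinad_db = 115.0                  # hypothetical measured SINAD, in dB
enob = (sinad_db - 1.76) / 6.02   # ideal N-bit SNR is 6.02*N + 1.76 dB, inverted
print(f"ENOB ~ {enob:.1f} bits")  # ~18.8 bits of real accuracy from a 24-bit box
```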