As we're listening, there is one more thing to consider. Dithering (here again I'm no authority), in my understanding, not only uses little bits of noise to soften the noise created by some forms of processing, but it also brings out subtle nuances in the source that would otherwise be lost. By adding a chunk of noise around these nuances, they become audible.
Ok JT, or Matt (or anyone else who knows this stuff)... now say it right for me, but I think I'm close! Point being that as we listen for less noise, we should also listen for more music. Different versions of the test may have strengths at one end but not the other, or there may be one best way to achieve both benefits of dithering. I guess we will find out!!!
I think I'm going to post a link to this on another thread. I think many non-lappy tapers (like me) don't often make it to this side of the site.
Matt
Seems I'll need a little "dither strategy" paper anyway, so this is my take on it:
The MS Excel graph below attempts to show an analog sine wave (in green). The maximum is at "17", which I think is suitable to demo the performance of an ADC with 32 discrete levels (a 5-bit converter). The frequency of the analog signal is about 1% of the sample frequency. The sine wave contains no noise.
This analog signal is input to an ADC implemented in an MS Excel spreadsheet.
The ADC in question is assumed to be perfect:
- It has a totally stable voltage reference against which it measures the signal.
- It does not inject RF into the analog part.
- It does not mess with the power supply for the analog stage.
- It has a totally isolated digital ground.
- The clock has no jitter, and the ADC adds no jitter to the clock signal.
- There is no leakage of the clock signal into the analog signal, and the analog signal does not modulate the clock.
- The sample-and-hold holds a perfect representation of the analog amplitude until that amplitude is sampled.
- The conversion does not deviate from linear for small or large signals.
- The input gate adds no noise to the signal, there is no capacitive modulation,... ad infinitum....
The result of this perfect ADC processing the green analog signal is represented by the red dataset. The output is a sequential series of integers in the range -15 to +16, sampled at a fixed interval. Each sample is valid for that one instant in time.
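Since I can't attach the spreadsheet itself, here's a minimal sketch of the same experiment in Python/NumPy. The amplitude (15.7 instead of 17) and the truncation rule are my choices to keep the integer output inside a 5-bit range, not necessarily the exact ones in the graph:

```python
import numpy as np

N = 200                                          # two periods of the sine
t = np.arange(N)
analog = 15.7 * np.sin(2 * np.pi * t / 100.0)    # clean sine, ~1% of sample rate

# "Perfect" ADC: hold each instantaneous value and truncate it to an
# integer level (the red dataset in the graph).
digital = np.floor(analog).astype(int)

error = analog - digital                  # the quantization error
print(digital.min(), digital.max())       # integer levels only
print(error.min(), error.max())           # always less than one LSB
```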
We see that there is a difference in the amplitude of the two representations. Truncating the data causes this discrepancy. This is the quantization error. Eyeballing the graph, we see that it is less than one bit in amplitude. In fact, it's for all practical purposes triangular, and so it eats up 50% of the least significant bit.
Ooops, we just lost 3 dB dynamic range right there.
But we also see that the quantization error itself stems from changes in the analog data. That is, the quantization error is *correlated* with the input signal. Our ADC, when processing analog data, introduces this correlated error. That's bad. Not only do we lose 3 dB of dynamic range, but we find scads of new harmonics in our digital representation of the original analog signal. So our perfect ADC alters the very signal it measures!
What is worth noting is that the exact same thing happens if you do *any* floating-point processing on already digitized data and then store the result as an integer in any form. The severity of the effect depends on the bit depth and scales with the relative size of the least significant bit.
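A hedged sketch of those new harmonics, again assuming NumPy (the tone frequency and FFT details are my illustration choices): quantize a pure sine to integer levels without dither and look at its spectrum. The correlated error shows up as discrete spurs at multiples of the tone, not as a flat noise floor.

```python
import numpy as np

fs = 48000
f0 = 997.0                               # a tone that doesn't divide fs evenly
n = np.arange(fs)                        # one second: 1 Hz per FFT bin
x = 15.7 * np.sin(2 * np.pi * f0 * n / fs)

truncated = np.floor(x)                  # store the float result as integers
spectrum = np.abs(np.fft.rfft(truncated * np.hanning(len(truncated))))
spectrum_db = 20 * np.log10(spectrum / spectrum.max() + 1e-12)

# Bins near multiples of f0 typically stand well above the noise floor:
for k in (2, 3, 4, 5):
    b = int(round(k * f0))
    print(f"{k}x harmonic region: {spectrum_db[b-2:b+3].max():.1f} dB")
```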
Dithering means adding noise to the analog signal. The quantization error will now correlate with the signal plus the dithering noise! Carefully selecting the dither amplitude to ±1/2 bit (this can be "explained" using the graph as a basis) derails the quantization error quite effectively. The error is now correlated with the noise! I.e., correct dithering manages to break the correlation between the desired signal and the quantization noise. Our original signal is now better represented in the digital domain, but the drawback is that more noise is present in the digitized dataset.
For dithering to work, the noise must not be correlated between samples (it must not be "signal-like"). This means it must contain high-frequency components, causing a different noise value at every sample.
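Here's a minimal sketch of that recipe, assuming NumPy: ±1/2-bit white (rectangular) dither added before rounding, with a fresh, independent value drawn per sample. The frequencies and random seed are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0 = 48000, 997.0
n = np.arange(fs)
x = 15.7 * np.sin(2 * np.pi * f0 * n / fs)

plain = np.round(x)                                   # no dither
dithered = np.round(x + rng.uniform(-0.5, 0.5, fs))   # +-1/2 LSB white dither

def db_at(sig, freq):
    """Level of one spectral bin relative to the fundamental, in dB."""
    s = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    return 20 * np.log10(s[int(round(freq))] / s[int(round(f0))] + 1e-12)

# The 3rd-harmonic spur collapses into the (now slightly higher) noise floor:
print("3rd harmonic, plain:   ", db_at(plain, 3 * f0))
print("3rd harmonic, dithered:", db_at(dithered, 3 * f0))
print("error power:", np.var(plain - x), "vs", np.var(dithered - x))
```

The spur drops into the raised noise floor, which is exactly the trade described above: a better-represented signal, at the cost of a bit more noise.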
White noise is something that the ear and brain are used to. It's also harmless in subsequent processing, and it is easy to generate in both the analog and digital domains. So white noise is the basic form of noise used with dithering.
A strategy for reducing the perceived level of noise is to put it in a region of the spectrum where the ear is less sensitive to acoustic energy. By shifting the noise energy up in frequency, one attempts to move the quantization error out of the audible range. This is the idea behind noise shaping.
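One common way to do this is first-order error feedback. The sketch below is my own construction (assuming NumPy, and skipping the dither a real shaper would normally be combined with): each sample's quantization error is subtracted from the next input, which pushes the error energy up toward half the sample rate.

```python
import numpy as np

def quantize_noise_shaped(x):
    """Round to integers with first-order error feedback."""
    out = np.empty_like(x)
    carried_error = 0.0
    for i, sample in enumerate(x):
        v = sample - carried_error       # feed back the previous error
        out[i] = np.round(v)
        carried_error = out[i] - v       # error made on this sample
    return out

fs = 48000
n = np.arange(fs)
x = 15.7 * np.sin(2 * np.pi * 997.0 * n / fs)

err = quantize_noise_shaped(x) - x
spec = np.abs(np.fft.rfft(err * np.hanning(fs)))
low = spec[:2000].mean()                 # error energy below ~2 kHz
high = spec[-2000:].mean()               # error energy near fs/2
print(f"error spectrum, high/low ratio: {high / low:.1f}")  # well above 1
```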
PS. The MS Excel graph contains some oddities. The sine wave is clearly distorted, and the vertical bars are offset by half a time step to the right. The first problem is odd in that all the data is generated by an Excel formula. The latter is corrected by some option setting I'll find one day...
Edited: reduced the image to a suitable size.