
Author Topic: 24Bit dither to 16bit vs. recording just 16bit  (Read 9694 times)


Offline MattD

  • Taper Emeritus
  • Trade Count: (1)
  • Needs to get out more...
  • *****
  • Posts: 4634
  • Gender: Male
Re: 24Bit dither to 16bit vs. recording just 16bit
« Reply #15 on: April 27, 2006, 07:02:23 PM »
SparkE, others have replied before I saw this, and I think their explanations are pretty good and close to what I would have said. Big +T to Brian for going above and beyond with the proof of concept.
Out of the game … for now?

Offline BayTaynt3d

  • Trade Count: (4)
  • Taperssection All-Star
  • ****
  • Posts: 1816
  • Gender: Male
  • Live from San Francisco
    • BayTaper.com
Re: 24Bit dither to 16bit vs. recording just 16bit
« Reply #16 on: April 27, 2006, 07:59:15 PM »
OMG, Brian just ended that conversation real quick. GAME OVER.  :P
BayTaper.com | One Man’s Multimedia Journey Through the San Francisco Jazz & Creative Music Scene

Offline SparkE!

  • Trade Count: (0)
  • Taperssection Member
  • ***
  • Posts: 773
Re: 24Bit dither to 16bit vs. recording just 16bit
« Reply #17 on: April 27, 2006, 08:37:32 PM »
Quote from: BayTaynt3d
OMG, Brian just ended that conversation real quick. GAME OVER.  :P

Yes, Brian's post ended the game, but it was my own point that he proved.  The link you posted showed harmonics that were only down by about 50 dB from the fundamental.  Brian's post shows harmonics that are down 96 dB.  You should NEVER expect a 16 bit representation to do better than that, because that's the total dynamic range available to you for representing a signal.  The reason that the quantization noise concentrates itself into discrete harmonics is that the chosen waveform is periodic with an integer number of sample intervals per cycle, so exactly the same samples occur every cycle.  Brian's analysis tools are better than the ones available to me, so I'd like to ask him to re-do the experiment, but this time with a tone that is not harmonically related to the sample rate.  I'm going to go out on a limb here and suggest a frequency for the tone that I have no ability to test in the same manner, since I don't have a copy of Adobe Audition, but I strongly suspect that it will show a general increase in the noise floor, and that the harmonics will not rise as far above the noise floor as in the highly contrived example where exactly the same samples repeat every 48 samples.  The tone frequency that I propose is 1105.0455 Hz.
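For anyone who wants to try this experiment without Adobe Audition, here is a rough NumPy sketch. The 1 kHz (48 samples per cycle) case and the 1105.0455 Hz tone come from the thread; the truncation scheme, the half-scale amplitude, and the one-second length are my own assumptions.

```python
import numpy as np

fs = 48000                       # sample rate (Hz)
t = np.arange(fs) / fs           # one second of audio

def truncate_to_16bit(x):
    """Reduce a float signal in [-1, 1) to 16-bit steps by truncation (no dither)."""
    return np.floor(x * 32768) / 32768

def peak_error_db(x):
    """Loudest single component of the quantization error, in dB re full scale."""
    err = truncate_to_16bit(x) - x
    w = np.hanning(len(err))
    spec = 2 * np.abs(np.fft.rfft(err * w)) / w.sum()
    return 20 * np.log10(spec.max())

for freq in (1000.0, 1105.0455):       # locked to the sample rate vs. not
    tone = 0.5 * np.sin(2 * np.pi * freq * t)
    print(f"{freq} Hz: loudest error component {peak_error_db(tone):.1f} dBFS")
```

With the locked 1 kHz tone the error repeats every 48 samples, so its energy piles up into a handful of discrete spurs; with the unrelated frequency the same kind of error is smeared across many bins, so the loudest single component should come out noticeably lower, which is the prediction being made here.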
How'm I supposed to read your lips when you're talkin' out your ass? - Lern Tilton

Ignorance in audio is exceeded only by our collective willingness to embrace and foster it. -  Srajan Ebaen

Offline BC

  • Trade Count: (1)
  • Needs to get out more...
  • *****
  • Posts: 2269
  • Gender: Male
  • Bongo Bongo
Re: 24Bit dither to 16bit vs. recording just 16bit
« Reply #18 on: April 28, 2006, 01:50:47 AM »
Quote from: SparkE!
Brian's post shows harmonics that are down 96 dB.  You should NEVER expect a 16 bit representation to do better than that, because that's the total dynamic range available to you for representing a signal.

I think it is important to note that we are not just concerned with one single frequency when recording music. We will be recording a wide range of frequencies, many of which will be harmonically related to the sample rate. So dithering, which prevents these harmonics from occurring, seems like it would be beneficial.

As far as the level of the harmonics goes, they are buried below -96 dB, as are the highest levels of noise for even the most severely noise-shaped spectra. So wouldn't this imply that truncation vs. dither should be inaudible? I would guess the differences ARE audible, so there must be something going on with dither comparisons besides simply the noise floor.  ???  Just thinking out loud here.

In: DPA4022>V3>Microtracker/D8

Out: Morrison ELAD>Adcom GFA555mkII>Martin Logan Aerius i

RebelRebel

  • Guest
  • Trade Count: (0)
Re: 24Bit dither to 16bit vs. recording just 16bit
« Reply #19 on: April 28, 2006, 03:13:01 AM »
You can not fix things with dither afterwards. The dither is added to the signal BEFORE the truncation. Also, plain dither is noise that IS audible. It is less audible than the distortion and noise modulation you get when you do not add dither. That is because with dither, the noise is spread evenly over the audio spectrum. Without dither, the truncation errors concentrate their energy at much higher amplitude, though only at some specific frequencies, and that is more audible. Then there is noise modulation... There is a way to add the noise so it is concentrated more at, say, 15-22 kHz, where the ear hears it less. It is called noise shaping.

http://www.lavryengineering.com/white_papers/dnf.pdf
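To make the order of operations concrete (dither added to the signal before the word-length reduction, as described above), here is a minimal NumPy sketch. The ±1 LSB triangular (TPDF) dither is the textbook choice and is my assumption, not something specified in this thread.

```python
import numpy as np

LSB = 1 / 32768                              # one 16-bit step, full scale = 1.0

def truncate_to_16bit(x):
    """Word-length reduction by truncation alone (what dither is meant to fix)."""
    return np.floor(x / LSB) * LSB

def dither_then_truncate(x, rng=None):
    """Add +/-1 LSB triangular (TPDF) dither BEFORE truncating to 16 bits."""
    rng = rng or np.random.default_rng(0)
    tpdf = (rng.random(x.shape) - rng.random(x.shape)) * LSB
    return truncate_to_16bit(x + tpdf)

# Example: a 1 kHz tone at 48 kHz (the 'locked' case from earlier in the thread).
fs = 48000
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
plain = truncate_to_16bit(tone)        # error repeats every cycle -> harmonic spurs
dithered = dither_then_truncate(tone)  # error decorrelated -> flat, steady noise floor
```

The dithered version carries a slightly higher but evenly spread noise floor, while the plain version concentrates its error into the discrete harmonics discussed above; that is exactly the trade-off being described here.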

Offline SparkE!

  • Trade Count: (0)
  • Taperssection Member
  • ***
  • Posts: 773
Re: 24Bit dither to 16bit vs. recording just 16bit
« Reply #20 on: April 28, 2006, 12:56:05 PM »
Quote from: BC
Quote from: SparkE!
Brian's post shows harmonics that are down 96 dB.  You should NEVER expect a 16 bit representation to do better than that, because that's the total dynamic range available to you for representing a signal.

I think it is important to note that we are not just concerned with one single frequency when recording music. We will be recording a wide range of frequencies, many of which will be harmonically related to the sample rate. So dithering, which prevents these harmonics from occurring, seems like it would be beneficial.

As far as the level of the harmonics goes, they are buried below -96 dB, as are the highest levels of noise for even the most severely noise-shaped spectra. So wouldn't this imply that truncation vs. dither should be inaudible? I would guess the differences ARE audible, so there must be something going on with dither comparisons besides simply the noise floor.  ???  Just thinking out loud here.



Wait a minute... We need to be careful to understand that these harmonics are the result of the repeated appearance of exactly the same quantized samples, cycle after cycle.  That is to say, it only happens when exactly the same waveform appears repetitively and the period of that waveform is exactly an integer number of sample intervals.  Show me where that ever happens in live recorded music.  Even when a band uses a synthesizer to produce a single, continuous tone, there is undoubtedly enough ambient noise that exactly the same samples do not appear on each cycle of the recorded waveform.  So in real life, these harmonics caused by quantization will never be seen.  In real life, other real sounds prevent the repetitive occurrence of a unique sequence of samples.  (This is also the reason that you don't get much compression when you zip a .wav file.  The algorithm used by PKZIP and WinZip was developed by Phil Katz, and it takes advantage of the repetitive patterns that exist in most files in order to store them more efficiently.  There just aren't that many repetitive sequences in a .wav file with any reasonable bit depth.)  In real life, other sounds perform the function that dithering does on pure synthesized tones, but without adding noise.  My premise is that if the problem doesn't exist in real life, why add noise in order to cure a non-existent problem?
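One rough way to test the "real sounds act as dither" idea numerically is to add a small amount of broadband noise, standing in for ambient room sound, to the locked 1 kHz tone before truncating, and see whether the discrete spurs survive. This is only a sketch; the roughly -80 dBFS noise level is an arbitrary assumption, not a measurement of any real venue.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)

def truncate_to_16bit(x):
    return np.floor(x * 32768) / 32768

tone = 0.5 * np.sin(2 * np.pi * 1000 * t)        # exactly 48 samples per cycle
ambient = 1e-4 * rng.standard_normal(fs)         # roughly -80 dBFS broadband 'room' noise

err_pure = truncate_to_16bit(tone) - tone                        # periodic -> discrete spurs
err_live = truncate_to_16bit(tone + ambient) - (tone + ambient)  # aperiodic -> spread-out noise
```

Comparing the spectra of err_pure and err_live shows whether even a faint, realistic noise floor is enough to break up the repeating error pattern, which is the crux of the argument above.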

Again, I'm willing to keep an open mind here, but it's going to take some compelling evidence to sway me.  I'd like to see real world examples and evidence that you are thinking about what's really going on.  I'm not easily swayed by marketing hype written by someone whose intent is to get you convinced that you need the latest equipment unless I can also verify for myself that what they say is true.  Unfortunately, I think that the taper community needs a good, healthy dose of scepticism when it comes to new technology.  We seem to have lost the ability or the desire to understand the subtle nuances of the technology we use.  Without careful scrutiny, we are unable to tell the difference between snake oil and miracle cures. Worse, it appears to me that dithering is an area where we're willing to buy a cure for a disease that does not in fact exist in the wild.
How'm I supposed to read your lips when you're talkin' out your ass? - Lern Tilton

Ignorance in audio is exceeded only by our collective willingness to embrace and foster it. -  Srajan Ebaen

Offline mhibbs

  • Trade Count: (0)
  • Taperssection Member
  • ***
  • Posts: 284
  • Gender: Male
  • it's all about the GA preamps
Re: 24Bit dither to 16bit vs. recording just 16bit
« Reply #21 on: April 28, 2006, 04:05:19 PM »
My personal preference would be for them to round a 24 bit sample to the nearest 16 bit quantity because that would result in the least degradation to S/N, but so many people believe in dithering that it seems to be the most commonly accepted way of doing things.  To me it doesn't make sense to intentionally add high frequency noise to a perfectly good recording, but that seems to be what most people want to do.  (That's essentially what dithering does.)

Well, the idea is that the noise is added in a range less sensitive to human hearing (or, in simple terms, less audible).  So in theory, you gain resolution in the more audible ranges by dithering a 24bit signal down to 16bit.  That's why the old Apogee 16bit units that everyone loved used 18bit (ad500e) and 20bit (ad1000) chips and dithered to 16bit using UV22.

Wait a minute... in both cases I'm ending up with a 16 bit recording and neither is subject to missing codes.  I have the whole dynamic range to work with in both cases.  All dithering does is add noise that I supposedly can't hear.  How does that give you an increase in resolution?

I can believe that this was the marketing approach for selling the concept of dithering, but I still fail to see why adding supposedly inaudible noise to a recording is supposed to help it.  That's like suggesting that I should go back and mess up the least significant bit of all of my old 16 bit recordings by altering its value according to some probability density function.  And that makes my recordings better?  I doubt it.  We'll spend hours and hours on our recordings to try to make sure that we copy them in a bit-perfect fashion, yet we're supposed to believe that adding high frequency noise during the conversion to a lower bit depth makes things better?  If someone can tell me what's wrong with my logic, I'd love to hear it.  Something in the back of my mind tells me that this many people can't be wrong... but then again I know how gullible the American public can be.

And please guys, I'm not saying that you ARE wrong.  I'm just saying that I don't get it, and I'd really like for someone to explain to me in a technical manner why dithering is desirable.  At best, it seems to me that it could be argued that dithering is inconsequential (which seems unlikely, since some of the golden ear set claim to be able to hear 20 kHz sounds).


Check this out...

    Quote: One method of increasing the resolution is by dithering or noise shaping the audio information stored in the 16 bits available. A simplified explanation of dithering is that it adds specialized noise to the lowest bit (way down below where your meters read) in a way that allows you to hear information that is below the threshold of this one bit. Usually you can get about a bit and a half of extra resolution with this method.

    Another method of increasing the resolution is by employing a very sophisticated method of noise shaping to the digital data. This noise shaping, besides increasing the resolution of the smallest bits, mathematically moves the noise to a range of the audio spectrum that is less apparent to the human ear. Noise shaping processes like Sony Super Bit Mapping and Apogee UV22 can make you think that you are listening to 18 bit or 18 1/2 bit recordings with only 16 bits of data coming off of the CD.

    Both of these processes need to be performed on data that is more than 16 bits to start with. That means that if you are recording live performances to DAT and you are not going to do anything else to the recording, then you should use a converter that has more than 16 bits of resolution and perform the dithering or noise shaping before you store the 16 bit data. Once the data has been stored as 16 bits, you can not get the extra information out of the sound to provide what is necessary for correct dithering or noise shaping.


http://www.rogernichols.com/EQ/EQ_96-04.html

and here for some info specific to uv22

http://www.users.qwest.net/~volt42/cadenzarecording/DitherExplained.pdf
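The quoted claim that dither lets you "hear information that is below the threshold of this one bit" can be checked numerically. The sketch below quantizes a tone that sits below half a 16-bit step, once by plain rounding and once with TPDF dither added first; the roughly -98 dBFS test level and the dither choice are my assumptions, not anything from the linked articles.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
LSB = 1 / 32768                                   # one 16-bit step

tiny = 0.4 * LSB * np.sin(2 * np.pi * 1000 * t)   # about -98 dBFS peak, under half a step

def round_to_16bit(x):
    return np.round(x / LSB) * LSB

rng = np.random.default_rng(0)
tpdf = (rng.random(fs) - rng.random(fs)) * LSB    # +/-1 LSB triangular dither

plain = round_to_16bit(tiny)                      # every sample rounds to zero: tone is gone
dithered = round_to_16bit(tiny + tpdf)            # tone survives, buried in dither noise

# Correlating each result against the original tone recovers it only from the dithered path.
print("plain:   ", np.dot(plain, tiny) / np.dot(tiny, tiny))     # ~0.0
print("dithered:", np.dot(dithered, tiny) / np.dot(tiny, tiny))  # ~1.0
```

This is the "extra resolution" the articles describe: dither doesn't remove the 16-bit noise floor, it just converts sub-LSB detail from being deleted outright into being carried along underneath a steady noise floor.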


mitch
Oade preamp museum curator

Offline BayTaynt3d

  • Trade Count: (4)
  • Taperssection All-Star
  • ****
  • Posts: 1816
  • Gender: Male
  • Live from San Francisco
    • BayTaper.com
Re: 24Bit dither to 16bit vs. recording just 16bit
« Reply #22 on: April 28, 2006, 04:27:25 PM »
Not that I'd jump off a bridge if someone told me to, but common sense tells me that sound engineers, software designers, and hardware makers wouldn't bother plowing millions of dollars into noise-shaping and dither algorithms if they weren't useful for something. I also find it reassuring that Bob Katz, whose Mastering Audio I just finished reading, is all over the dithering thing.

Considering some of the conversation in this thread, I also found this quote of his to be interesting:

The maximum signal-to-noise ratio of a dithered 16-bit recording is about 96 dB. But the dynamic range is far greater, as much as 115 dB, because we can hear music below the noise. Usually, manufacturer's spec sheets don't reflect these important specifications, often mixing up dynamic range and signal-to-noise ratio.

He has some excerpts from the book regarding dithering and noise shaping here:
http://www.digido.com/portal/pmodule_id=11/pmdmode=fullscreen/pageadder_page_id=27/
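For reference, the "about 96 dB" figure in the quote above is the usual rule of thumb for 16 bits; the standard formula for an ideal N-bit quantizer driven by a full-scale sine (a textbook result, not something from the book excerpt) works out to:

```latex
% Maximum SNR of an ideal N-bit quantizer with a full-scale sine input
\mathrm{SNR}_{\max} \approx 6.02\,N + 1.76~\mathrm{dB} = 6.02 \times 16 + 1.76 \approx 98~\mathrm{dB}
```

Flat TPDF dither raises that noise floor by a few dB, which is where the commonly quoted ~93-96 dB figures for dithered 16-bit audio come from; Katz's larger 115 dB number is about the audibility of tones below that noise floor, not about the noise floor itself.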
« Last Edit: April 28, 2006, 04:28:56 PM by Tainted »
BayTaper.com | One Man’s Multimedia Journey Through the San Francisco Jazz & Creative Music Scene

 
