
Author Topic: 24 bit > 16 bit  (Read 26416 times)


Offline Petrus

  • Trade Count: (0)
  • Taperssection Regular
  • **
  • Posts: 126
Re: 24 bit > 16 bit
« Reply #60 on: September 09, 2007, 11:09:31 AM »
Mathematical answer: a higher sampling rate (from 44.1 to 96 kHz, say) adds more detail to the recorded wave. All of the added detail lies above 22.05 kHz. It cannot be heard by humans. There is NO detail added within the audible range.

A sound wave consists of many frequencies mixed together. To accurately describe the highest-frequency signal component (the smallest detail), we need to sample the signal at at least twice that frequency. That sampling rate can describe all the lower frequencies PERFECTLY. If there were some detail that was not perfectly described, it would mean some even higher frequencies were present. But since we cannot hear signals above 20 kHz, there is no practical reason to record them.

Using 96 kHz sampling is useful only if the file will be slowed down a lot for effects, and the mic used has usable response to about 40 kHz.
« Last Edit: September 09, 2007, 11:16:46 AM by Petrus »

Offline JasonSobel

  • Trade Count: (8)
  • Needs to get out more...
  • *****
  • Posts: 3327
  • Gender: Male
    • My show list
Re: 24 bit > 16 bit
« Reply #61 on: September 09, 2007, 11:34:04 AM »
I am still not sure that I got a direct answer to what I was trying to ask, so I will try asking another way.  Assume we are recording a single guitar in 24-bit, with all of its frequencies well within the 20 Hz - 20 kHz range.  If we record at a 96K sample rate, will there be more data points representing the guitar within our audible range than if we recorded at 48K?  Or is all of the additional data made up of points above and below the audible range?  I am just trying to discern whether we are truly getting more data points on the audible portion of the waveform, rather than just recording a lot of additional inaudible data at higher and lower frequencies.

a very difficult question. However, I think you can say (theoretically) you'll get more available dynamics (since it has 24-bit resolution) and better stereo imaging (because of the 96,000 samples per second).

regards
nicola

This should be a straightforward scientific/mathematical answer, IMVHO.  It is the crux of the question to which I have been trying to get an answer.  There have to be either more, fewer, or the same number of sample points describing the same exact musical note.  I am just trying to determine, one way or the other, whether we are really getting more data within the audible realm, as opposed to adding additional data above and below that range.

You are getting more samples per second, and those extra samples don't just sit idly by unused, even if all the sound is within 20-20k.  So, in that sense, yes, at 96 kHz, relative to 48 or 44.1 kHz, there are more samples being recorded defining the music.  That is the "easy" answer to your "easy" question.  However, the real question is whether the analog waveform, recreated from the digital recording, is any different if you record at 48 kHz vs. 96 kHz.  In theory, all of the analog frequencies within 20-20k can be reproduced with a sampling rate of 48 kHz.  So, at 96 kHz, you are getting more samples of the same music, but are they just redundant?  Obviously, lots of people have lots of different opinions, as evidenced by this thread and many others.

As mentioned, there is more to it than frequency response; there are also the time and spatial aspects of a recording.  It's not just what notes are played, but precisely *when* they are played in time.  It's been said (in other threads, with links to articles on the subject) that the human ear can differentiate between two audible events occurring less than 1/48000th of a second apart.  So, while recording at 48 kHz is enough to capture all the audible frequencies, it may not be enough to accurately define the exact moment when a note is played, and these very minor timing errors can throw off things like soundstage and stereo imaging.  For these reasons, 96 kHz is probably a good idea.  Of course, all this is my opinion, based on my own unscientific comparisons (recording the same band at the same venue again and again and again, etc.).  You should probably do some of your own comparisons and decide for yourself which sample rate to record at.
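To put rough numbers on the "more samples" part of that answer, here is a small sketch (my own illustration; the 1 kHz note stands in for the guitar in the question):

```python
# Samples available to describe one cycle of a 1 kHz note, and the
# time between successive samples, at the common tapers' sample rates.
def samples_per_cycle(sample_rate_hz, note_hz):
    return sample_rate_hz / note_hz

def sample_period_us(sample_rate_hz):
    # Spacing between samples, in microseconds.
    return 1_000_000 / sample_rate_hz

for rate in (44_100, 48_000, 96_000):
    print(rate, samples_per_cycle(rate, 1_000), round(sample_period_us(rate), 1))
```

So 96 kHz literally records twice as many points per cycle as 48 kHz, and halves the spacing between samples; whether those extra points carry new audible information is the question the rest of the post debates.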

Offline jmz93

  • Trade Count: (1)
  • Taperssection Member
  • ***
  • Posts: 265
  • Gender: Male
Re: 24 bit > 16 bit
« Reply #62 on: September 09, 2007, 02:26:59 PM »
I record in 24-bit with my R-09 whether I need the dynamic range or not, because you can hear digital noise if you record in 16-bit. Try recording, say, a 1 kHz test tone at 50 dB or so down, first at 16-bit and then at 24-bit. Boost the volume of both files a lot so you can clearly hear the noise floor, and listen to the difference.

This is also a useful exercise for actually hearing the products of various dithering algorithms. Record a test tone 50 or 60 dB down in 24-bit, dither to 16-bit using various methods, and save the results in their own files. Boost all of them so the volume is high enough to clearly hear the resulting background noise. I recently did this with the various dither options in Sound Forge, and compared them to the POW-r 3 dithering routine in Sonar 6.21 Producer.
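For anyone who wants to see the effect without a recorder, here is a sketch of the same experiment in Python (my own illustration; the -50 dBFS level and 1 kHz tone follow the post, and the quantizer is an idealized rounding A/D, not any particular recorder):

```python
import math

def quantize(x, bits):
    # Round to the nearest step of a signed integer of the given bit depth.
    full_scale = 2 ** (bits - 1) - 1
    return round(x * full_scale) / full_scale

def noise_floor_dbfs(bits, level_db=-50.0, freq=1000.0, rate=44100, n=44100):
    # RMS quantization error, in dBFS, for a quiet sine tone.
    amp = 10 ** (level_db / 20.0)
    err_sq = 0.0
    for i in range(n):
        s = amp * math.sin(2 * math.pi * freq * i / rate)
        err_sq += (quantize(s, bits) - s) ** 2
    return 20 * math.log10(math.sqrt(err_sq / n))

print(noise_floor_dbfs(16))  # roughly -101 dBFS
print(noise_floor_dbfs(24))  # roughly -149 dBFS
```

Boosting both files by 50 dB brings the 16-bit error up to around -51 dBFS, clearly audible hiss, while the 24-bit error stays near -99 dBFS.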


Offline F.O.Bean

  • Team Schoeps Tapir that
  • Trade Count: (126)
  • Needs to get out more...
  • *****
  • Posts: 40690
  • Gender: Male
  • Taperus Maximus
    • MediaFire Recordings
Re: 24 bit > 16 bit
« Reply #63 on: September 09, 2007, 11:35:32 PM »
Quote from: jmz93
I record in 24-bit with my R-09 whether I need the dynamic range or not, because you can hear digital noise if you record in 16-bit. Try recording, say, a 1 kHz test tone at 50 dB or so down, first at 16-bit and then at 24-bit. Boost the volume of both files a lot so you can clearly hear the noise floor, and listen to the difference.

This is also a useful exercise for actually hearing the products of various dithering algorithms. Record a test tone 50 or 60 dB down in 24-bit, dither to 16-bit using various methods, and save the results in their own files. Boost all of them so the volume is high enough to clearly hear the resulting background noise. I recently did this with the various dither options in Sound Forge, and compared them to the POW-r 3 dithering routine in Sonar 6.21 Producer.



On the Sound Devices website, they have a great example of the difference between 16- and 24-bit. They record someone's voice at, say, -50 dB, like you stated, and then add gain up to 0 dB, and there is TONS of noise in the 16-bit example. In the 24-bit example, however, it sounds perfect and there is no audible noise added to the signal. That's why we all record peaking at or around -6 dB in 24-bit and add the extra few dB of gain in post: even though adding gain technically brings the noise floor up, the noise is so low that no one in their right mind can hear it when adding gain in 24-bit.

Just at All Good two months ago, I recorded a set while I was at my campsite and needed to add +12.5 dB of gain in post on my 24/48 signal. The end result is fantastic and no noise can be heard, not even in the dithered-down 16-bit version. I will continue to record in 24-bit for the rest of my life (or a HIGHER resolution like DSD). The 24-bit stuff just sounds more true and analog-ish IMO because of all the points of data when the sound is getting digitized. It doesn't sound 'digital' like 16-bit; it's MUCH more open and natural IMO.
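The arithmetic behind that practice can be sketched with the idealized 6 dB-per-bit rule (my illustration only; it ignores mic and preamp self-noise, which in real recordings usually dominates):

```python
# Idealized noise floor of an N-bit system, and where that floor lands
# after boosting the recording by `gain_db` in post.
def ideal_floor_db(bits):
    return -6.02 * bits  # ~ -96 dBFS for 16-bit, ~ -144 dBFS for 24-bit

def floor_after_gain(bits, gain_db):
    return ideal_floor_db(bits) + gain_db

# The +12.5 dB example from the post:
print(floor_after_gain(16, 12.5))  # about -83.8 dBFS: potentially audible
print(floor_after_gain(24, 12.5))  # about -132 dBFS: far below audibility
```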

Now the question for me is: if your end goal is 16-bit for CD-Rs anyway, is it better to record directly in 16-bit, or to record in 24-bit and dither down to 16-bit?
Schoeps MK 4V & MK 41V ->
Schoeps 250|0 KCY's (x2) ->
Naiant +60v|Low Noise PFA's (x2) ->
DarkTrain Right Angle Stubby XLR's (x3) ->
Sound Devices MixPre-6 & MixPre-3

http://www.archive.org/bookmarks/diskobean
http://www.archive.org/bookmarks/Bean420
http://bt.etree.org/mytorrents.php
http://www.mediafire.com/folder/j9eu80jpuaubz/Recordings

Offline Petrus

  • Trade Count: (0)
  • Taperssection Regular
  • **
  • Posts: 126
Re: 24 bit > 16 bit
« Reply #64 on: September 10, 2007, 02:15:22 AM »
In the examples given above, the samples were intentionally recorded at too low a level (-50 dB) and then normalized. Of course this brings the noise floor up: at 16 bits it is really bad, at 24 bits not bad at all. If we had 32-bit systems, we could record the sample at -120 dB, normalize it, and complain that a 24-bit system is unusable...  When using a 16-bit system, recording voice at -50 dB is a user error, not a system fault.

But of course 24 bits has its benefits, and I record everything at 24 bits, if only to get that 6-12 dB of safety headroom before normalizing and downconverting to 16 bits. There is a real workflow benefit there, which ensures I get the maximum 16-bit end quality, so why not use it?
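The headroom figures here follow directly from the step count of each bit depth; a quick sketch of the arithmetic (my own illustration):

```python
import math

def dynamic_range_db(bits):
    # Ratio of full scale to one quantization step, in dB.
    # This is where the familiar "~6 dB per bit" rule comes from.
    return 20 * math.log10(2 ** bits)

def bits_left_after_headroom(bits, headroom_db):
    # Effective bits of resolution remaining when you record with
    # the given amount of unused headroom below full scale.
    return bits - headroom_db / (20 * math.log10(2))

print(round(dynamic_range_db(16)))          # 96
print(round(dynamic_range_db(24)))          # 144
print(bits_left_after_headroom(24, 12))     # ~22 bits: still above a 16-bit target
```

So even with a full 12 dB of safety headroom, a 24-bit recording retains about 22 bits of resolution, comfortably more than the 16-bit final product needs.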

Offline Todd R

  • Over/Under on next gear purchase: 2 months
  • Trade Count: (29)
  • Needs to get out more...
  • *****
  • Posts: 4901
  • Gender: Male
Re: 24 bit > 16 bit
« Reply #65 on: September 10, 2007, 10:40:48 AM »
Quote from: F.O.Bean
Now the question for me is: if your end goal is 16-bit for CD-Rs anyway, is it better to record directly in 16-bit, or to record in 24-bit and dither down to 16-bit?


Quoting myself from earlier in the thread in answer to your question. :)  The quality of the dither routine used is one reason to record at 24-bit.

As people have said, recording at 24 bits is better if you want to do any post-processing.  But another reason to record at 24 bits rather than 16 bits, even if you will be dithering to 16 bits, is the quality of the dithering available to you in your field recording equipment.  At this point, most of the ICs used internally in our recorders/ADs are 24-bit, so to record at 16 bits the recorder has to dither the 24-bit data down to 16-bit data.

The quality of these dither routines varies, and different equipment manufacturers and software vendors use different routines: Sony's SBM process, Apogee's UV22HR process, Grace's ANSR method, etc.  The quality of, say, the UA-5's on-board dither process might not be as good (or as good-sounding to any particular individual's ears) as a dither process available via software.  Wavelab in particular has licensed Apogee's UV22HR dither process, so dithering from 24 > 16 in post using Wavelab might sound noticeably better than whatever dither routine your AD or recorder uses.
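Since the routines named above (SBM, UV22HR, ANSR) are proprietary, the sketch below shows only the generic idea they all build on, plain TPDF dither before truncation (my own illustration, not any vendor's algorithm):

```python
import math
import random

random.seed(0)     # reproducible noise for this sketch
LSB = 1.0 / 32767  # one 16-bit step, for signals in [-1.0, 1.0]

def truncate_to_16(x):
    # Naive 24 -> 16 conversion: just round to the nearest 16-bit step.
    return round(x / LSB) * LSB

def tpdf_dither_to_16(x):
    # Add triangular (TPDF) noise of up to +/- 1 LSB before rounding,
    # which decorrelates the rounding error from the signal.
    noise = (random.random() - random.random()) * LSB
    return round((x + noise) / LSB) * LSB

def rms_error(convert, n=10_000):
    # RMS conversion error over a quiet 1 kHz tone at 44.1 kHz.
    total = 0.0
    for i in range(n):
        s = 0.003 * math.sin(2 * math.pi * 1000 * i / 44100)
        total += (convert(s) - s) ** 2
    return (total / n) ** 0.5

print(rms_error(truncate_to_16))     # lower RMS, but the error tracks the signal
print(rms_error(tpdf_dither_to_16))  # slightly higher RMS, benign hiss instead
```

The dithered version measures slightly noisier, but the error is a steady, signal-independent hiss rather than quantization distortion; the commercial routines add noise shaping on top to push that hiss into less audible frequency regions.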
Mics: Microtech Gefell m20/m21 (nbob/pfa actives), Line Audio CM3, Church CA-11 cards
Preamp:  none <sniff>
Recorders:  Sound Devices MixPre-6, Sony PCM-M10, Zoom H4nPro

Offline SparkE!

  • Trade Count: (0)
  • Taperssection Member
  • ***
  • Posts: 773
Re: 24 bit > 16 bit
« Reply #66 on: September 10, 2007, 05:33:34 PM »
Quote from: Petrus
Mathematical answer: a higher sampling rate (from 44.1 to 96 kHz, say) adds more detail to the recorded wave. All of the added detail lies above 22.05 kHz. It cannot be heard by humans. There is NO detail added within the audible range.

A sound wave consists of many frequencies mixed together. To accurately describe the highest-frequency signal component (the smallest detail), we need to sample the signal at at least twice that frequency. That sampling rate can describe all the lower frequencies PERFECTLY. If there were some detail that was not perfectly described, it would mean some even higher frequencies were present. But since we cannot hear signals above 20 kHz, there is no practical reason to record them.

Using 96 kHz sampling is useful only if the file will be slowed down a lot for effects, and the mic used has usable response to about 40 kHz.

You are leaving out some important details.  Nyquist's theorem covers signals that have been band-limited to half of the sampling frequency.  Such signals, sampled at a rate at least twice the highest frequency they contain, can be reproduced exactly within the limits of the resolution of the sampling circuits (that is, the resolution of the A/D used).  However, we don't have a perfectly band-limited signal with no components above 22.05 kHz.  Components above that frequency are actually aliased back into the audible band when you sample at 44.1 kHz.  For example, a signal at 34.1 kHz will play back at 10 kHz if you use a 44.1 kHz sampling rate.  So you need two things in order for your analysis to be correct:

1) You need to record only signals whose frequency content is strictly limited to frequencies less than 22.05 kHz.
2) In order to exactly reproduce the original waveform, you must use Nyquist filters both to band-limit the original signal and to smooth the digitized signal on playback.  Nyquist filters are a mathematical fiction that can only be approximated in the real world.  In general, their 3 dB point occurs at 1/2 the sampling frequency, and their response is symmetric about that point through a transition band of a specified width.  The most commonly discussed Nyquist filter is the so-called "brick wall" filter that passes all frequencies below 1/2 the sampling rate and absolutely nothing above it.  Everyone knows that brick wall filters are not physically realizable.  The other type, which can be more easily approximated in the real world, is the so-called "raised cosine" lowpass filter, whose transfer function looks essentially like half a cycle of a cosine wave shifted upwards by its peak value.  In the real world, the stopband never goes completely to zero, nor is the stopband response symmetric about its 3 dB point.

Nyquist theory is helpful for understanding how fast we have to sample in order to get good results, but it doesn't tell the whole story, because it relies on math that doesn't translate well into the real world.  If you don't believe me, try recording a 34.1 kHz tone at a 44.1 kHz sampling rate and tell me that it doesn't sound EXACTLY like 10 kHz when you play back the recording.
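That 34.1 kHz claim is easy to check numerically; a small sketch (my own illustration):

```python
import math

RATE = 44100  # samples per second

def sampled_tone(freq, n_samples, rate=RATE):
    # The sample values a recorder would capture for a pure sine tone.
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n_samples)]

tone_34100 = sampled_tone(34100, 200)
tone_10000 = sampled_tone(10000, 200)

# 34.1 kHz folds down to 44.1 - 34.1 = 10 kHz: the captured data is the
# 10 kHz tone with inverted polarity, indistinguishable by ear.
worst = max(abs(a + b) for a, b in zip(tone_34100, tone_10000))
print(worst)  # effectively zero (floating-point rounding only)
```

Sample for sample, the 34.1 kHz tone and a polarity-flipped 10 kHz tone produce identical data, so without an analog anti-aliasing filter ahead of the converter, the recorder literally cannot tell them apart.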
How'm I supposed to read your lips when you're talkin' out your ass? - Lern Tilton

Ignorance in audio is exceeded only by our collective willingness to embrace and foster it. -  Srajan Ebaen

Offline datbrad

  • Trade Count: (1)
  • Needs to get out more...
  • *****
  • Posts: 2298
  • Gender: Male
Re: 24 bit > 16 bit
« Reply #67 on: September 10, 2007, 05:42:28 PM »
Been following this thread, and want to point out a couple of things. First, absolutely, mastering in 24-bit allows far more freedom from having to ride the level controls to run as close to 0 as possible during capture, allowing lower levels that can be boosted in post without introducing noise. However, a 16-bit recording with optimized levels will not sound much different from a 16-bit product produced from a 24-bit master using the same front end (mics > pre). The reason that UV22 and SBM sound better than straight 16-bit A/D is precisely that they quantize at 24 bits, and then use a noise-shaping filter to remove the digital noise that the very act of quantization creates. This makes the perceived dynamic range approach an 18-19 bit depth to the listener.

The dynamic range of a rock show through a PA is about 40 dB, and a jazz show maybe 60 dB. Watch your levels during a show. Do they sweep constantly from far below -12 up to -2 dB? I have rarely seen that, except for a single acoustic instrument recorded in a pin-drop-quiet setting. So, if your levels sit at -12 during the quiet portions and hit -2 during the loud portions, that's only 10 dB of dynamic range.

Using the example of the average listening space (cars driving by, lawn mowers, dogs barking and/or kids playing outside, and HVAC systems inside), it's hard to imagine the average Joe sitting in an acoustically dead lab setting listening for differences in dynamic range between 24 and 16 bit. To me, the real advantage of 24-bit is the ability to simply not be as concerned with managing the recorder in the field to optimize levels. I am not saying you don't have to be a "good" recordist with 24-bit, but you definitely do not have to be as good at setting and controlling gain live as you do with 16-bit.

Sampling is another very misunderstood thing. Whether it's PCM or DSD, digital sampling needs at least two samples per cycle of a frequency (per channel) in order to represent it. It goes back to basic electronic theory of hertz as a measurement of cycles per second. 48 kHz captures at least 2 samples per cycle of every frequency from DC all the way up to 24 kHz, at which point the anti-aliasing filter cuts off the analog input. 96 kHz does the same from DC all the way up to 48 kHz, far beyond what 99% of capture or playback systems can reproduce, and beyond what any human can hear. There are more "points," but these are not packed into the same audible range, unlike the difference between standard and high-def video, where the extra lines of resolution do fall within the same screen area.

DSD does a similar thing, but samples at a rate in the 2.8 MHz range using 1 bit per sample. Because the content from DC to 24 kHz is represented with less bit depth per sample than PCM, the industry has mixed opinions about the advantages of DSD, which is why PCM was not replaced by DSD outright; that would have happened if the opinions were not mixed.

The reason that sampling frequencies above 44.1 kHz sound better at the capture point is the analog filtering needed to prevent content crossing the Nyquist frequency. To prevent a signal higher than 22.05 kHz from hitting the A/D, a filter starts acting on the signal at 20 kHz, because there is no such thing as a perfect brick-wall low-pass filter, and it needs about 2 kHz of roll-off to kill the input by 22.05 kHz. This roll-off starting at 20 kHz is audibly noticeable. Recording at 48 kHz eases the task of the filter, as it does not kick in until 22 kHz to reach full attenuation by 24 kHz. Recording at 48 kHz or higher and resampling in post does not have the same impact, since that filtering is digital, so the theoretical upper frequency limit of 44.1 kHz, which is 22.05 kHz, can be achieved.
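The filter-width argument in numbers, assuming a 20 kHz passband edge as described above (my own illustration):

```python
# Transition band available to the anti-aliasing filter: from the top of
# the audible band (20 kHz) up to the Nyquist frequency (rate / 2).
def transition_band_hz(sample_rate, passband_edge=20_000):
    return sample_rate / 2 - passband_edge

print(transition_band_hz(44_100))  # 2050 Hz: a very steep filter is needed
print(transition_band_hz(48_000))  # 4000 Hz: the filter's job is easier
print(transition_band_hz(96_000))  # 28000 Hz: a gentle filter well clear of the audio
```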

So, I would answer these questions this way:

Is 24-bit better than 16-bit? Well, it depends on the source, recording environment, capture front end, and how much attention you want to spend riding the gain controls of your recorder.

Are sample rates of 48 kHz and above better than 44.1 kHz at capture? Yes, but going above 48 kHz is probably unnecessary; it does no harm other than taking up more storage space.

Remember, digital recording captures two samples per cycle of each frequency, with PCM using more bits per sample to represent dynamic range, and that is all. Master at 24-bit/48 kHz or above, use Wavelab's UV22 to dither to 16/44.1, and that will sound better than a straight 16/44.1 master. Or use an AD1000 or Mini-Me, or SBM, in the field at 48 kHz and optimize your levels correctly, and you will end up with much the same result.

Sorry about the lengthy post!
AKG C460B w/CK61/CK63>Luminous Monarch XLRs>SD MP-1(x2)>Luminous Monarch XLRs>PMD661(Oade WMOD)

Beyer M201>Luminous Monarch XLRs>PMD561 (Oade CMOD)

Offline illconditioned

  • Trade Count: (9)
  • Needs to get out more...
  • *****
  • Posts: 2996
Re: 24 bit > 16 bit
« Reply #68 on: September 10, 2007, 06:13:42 PM »
Quote from: Petrus
Mathematical answer: a higher sampling rate (from 44.1 to 96 kHz, say) adds more detail to the recorded wave. All of the added detail lies above 22.05 kHz. It cannot be heard by humans. There is NO detail added within the audible range.

A sound wave consists of many frequencies mixed together. To accurately describe the highest-frequency signal component (the smallest detail), we need to sample the signal at at least twice that frequency. That sampling rate can describe all the lower frequencies PERFECTLY. If there were some detail that was not perfectly described, it would mean some even higher frequencies were present. But since we cannot hear signals above 20 kHz, there is no practical reason to record them.

Using 96 kHz sampling is useful only if the file will be slowed down a lot for effects, and the mic used has usable response to about 40 kHz.

Quote from: SparkE!
You are leaving out some important details.  Nyquist's theorem covers signals that have been band-limited to half of the sampling frequency.  Such signals, sampled at a rate at least twice the highest frequency they contain, can be reproduced exactly within the limits of the resolution of the sampling circuits (that is, the resolution of the A/D used).  However, we don't have a perfectly band-limited signal with no components above 22.05 kHz.  Components above that frequency are actually aliased back into the audible band when you sample at 44.1 kHz.  For example, a signal at 34.1 kHz will play back at 10 kHz if you use a 44.1 kHz sampling rate.  So you need two things in order for your analysis to be correct:

1) You need to record only signals whose frequency content is strictly limited to frequencies less than 22.05 kHz.
2) In order to exactly reproduce the original waveform, you must use Nyquist filters both to band-limit the original signal and to smooth the digitized signal on playback.  Nyquist filters are a mathematical fiction that can only be approximated in the real world.  In general, their 3 dB point occurs at 1/2 the sampling frequency, and their response is symmetric about that point through a transition band of a specified width.  The most commonly discussed Nyquist filter is the so-called "brick wall" filter that passes all frequencies below 1/2 the sampling rate and absolutely nothing above it.  Everyone knows that brick wall filters are not physically realizable.  The other type, which can be more easily approximated in the real world, is the so-called "raised cosine" lowpass filter, whose transfer function looks essentially like half a cycle of a cosine wave shifted upwards by its peak value.  In the real world, the stopband never goes completely to zero, nor is the stopband response symmetric about its 3 dB point.

Nyquist theory is helpful for understanding how fast we have to sample in order to get good results, but it doesn't tell the whole story, because it relies on math that doesn't translate well into the real world.  If you don't believe me, try recording a 34.1 kHz tone at a 44.1 kHz sampling rate and tell me that it doesn't sound EXACTLY like 10 kHz when you play back the recording.

Have you tried recording a 34.1 kHz tone?  I haven't done this, but I would *hope* there is a lowpass filter somewhere in my recorder.  Can you confirm or deny this?

  Richard
Please DO NOT mail me with tech questions.  I will try to answer in the forums when I get a chance.  Thanks.

Sample recordings at: http://www.soundmann.com.

cshepherd

  • Guest
  • Trade Count: (0)
Re: 24 bit > 16 bit
« Reply #69 on: September 10, 2007, 06:28:47 PM »
1011100001110011110000110101010101100011110010101011010111101010100101101

The sound of Music.

Offline SparkE!

  • Trade Count: (0)
  • Taperssection Member
  • ***
  • Posts: 773
Re: 24 bit > 16 bit
« Reply #70 on: September 10, 2007, 07:31:38 PM »
Quote from: Petrus
Mathematical answer: a higher sampling rate (from 44.1 to 96 kHz, say) adds more detail to the recorded wave. All of the added detail lies above 22.05 kHz. It cannot be heard by humans. There is NO detail added within the audible range.

A sound wave consists of many frequencies mixed together. To accurately describe the highest-frequency signal component (the smallest detail), we need to sample the signal at at least twice that frequency. That sampling rate can describe all the lower frequencies PERFECTLY. If there were some detail that was not perfectly described, it would mean some even higher frequencies were present. But since we cannot hear signals above 20 kHz, there is no practical reason to record them.

Using 96 kHz sampling is useful only if the file will be slowed down a lot for effects, and the mic used has usable response to about 40 kHz.

Quote from: SparkE!
You are leaving out some important details.  Nyquist's theorem covers signals that have been band-limited to half of the sampling frequency.  Such signals, sampled at a rate at least twice the highest frequency they contain, can be reproduced exactly within the limits of the resolution of the sampling circuits (that is, the resolution of the A/D used).  However, we don't have a perfectly band-limited signal with no components above 22.05 kHz.  Components above that frequency are actually aliased back into the audible band when you sample at 44.1 kHz.  For example, a signal at 34.1 kHz will play back at 10 kHz if you use a 44.1 kHz sampling rate.  So you need two things in order for your analysis to be correct:

1) You need to record only signals whose frequency content is strictly limited to frequencies less than 22.05 kHz.
2) In order to exactly reproduce the original waveform, you must use Nyquist filters both to band-limit the original signal and to smooth the digitized signal on playback.  Nyquist filters are a mathematical fiction that can only be approximated in the real world.  In general, their 3 dB point occurs at 1/2 the sampling frequency, and their response is symmetric about that point through a transition band of a specified width.  The most commonly discussed Nyquist filter is the so-called "brick wall" filter that passes all frequencies below 1/2 the sampling rate and absolutely nothing above it.  Everyone knows that brick wall filters are not physically realizable.  The other type, which can be more easily approximated in the real world, is the so-called "raised cosine" lowpass filter, whose transfer function looks essentially like half a cycle of a cosine wave shifted upwards by its peak value.  In the real world, the stopband never goes completely to zero, nor is the stopband response symmetric about its 3 dB point.

Nyquist theory is helpful for understanding how fast we have to sample in order to get good results, but it doesn't tell the whole story, because it relies on math that doesn't translate well into the real world.  If you don't believe me, try recording a 34.1 kHz tone at a 44.1 kHz sampling rate and tell me that it doesn't sound EXACTLY like 10 kHz when you play back the recording.

Quote from: illconditioned
Have you tried recording a 34.1 kHz tone?  I haven't done this, but I would *hope* there is a lowpass filter somewhere in my recorder.  Can you confirm or deny this?

  Richard

Most recorders at least make a token attempt at a lowpass filter ahead of their A/D, but most are not adequate, in my opinion.  In fact, the ones with adjustable sample rates usually use the same filter regardless of the selected sample rate.  :o  That's messed up, in my opinion.  Seriously, though: try recording a 34 kHz tone at a 44.1 kHz sample rate.  What you'll get is a 10 kHz tone that has probably been reduced in amplitude a little by the lowpass filter in your recorder.
How'm I supposed to read your lips when you're talkin' out your ass? - Lern Tilton

Ignorance in audio is exceeded only by our collective willingness to embrace and foster it. -  Srajan Ebaen

Offline Petrus

  • Trade Count: (0)
  • Taperssection Regular
  • **
  • Posts: 126
Re: 24 bit > 16 bit
« Reply #71 on: September 11, 2007, 05:31:14 AM »
Here is a good (and long) series of articles sensibly discussing the merits of 24/96 digital:

http://www.moultonlabs.com/more/taking_stock/P0/

The bottom line, more or less, is that 24/96 is better than the analog part of the recording chain, and even 16/44.1 is better than the acoustic part of the recording/listening chain.

And another long one: http://www.moultonlabs.com/weblog/more/24_bits_can_you_hear

Many thoughts about A/B double blind testing etc...
« Last Edit: September 11, 2007, 06:27:57 AM by Petrus »

Offline chunga1

  • Trade Count: (0)
  • Taperssection Newbie
  • *
  • Posts: 34
Re: 24 bit > 16 bit
« Reply #72 on: September 11, 2007, 09:44:23 PM »
the Zoom H2 records up to:
44.1 kHz, 16- and 24-bit
48 kHz, 16- and 24-bit
96 kHz, 16- and 24-bit

so is 96 kHz even needed??? would it be higher quality than doing a 48 kHz 24-bit recording???

all this talk is getting a little confusing for a new taper like me... is there a simple explanation for these recording settings listed above??

thanks,
tim
SP-CMC-2(AT-831>SP-SPSB-1>NJB3

Offline Petrus

  • Trade Count: (0)
  • Taperssection Regular
  • **
  • Posts: 126
Re: 24 bit > 16 bit
« Reply #73 on: September 12, 2007, 05:13:10 AM »

Quote from: chunga1
so is 96 kHz even needed??? would it be higher quality than doing a 48 kHz 24-bit recording???


In theory, yes, but since microphones, loudspeakers, and headphones do not reach much past 20 kHz, trying to record frequencies up to 44+ kHz makes no difference in the final product. And even if they could reproduce those frequencies above 20 kHz, we humans could not hear them. So it is a total waste of space.

Offline Arni99

  • Trade Count: (0)
  • Taperssection Member
  • ***
  • Posts: 770
  • Gender: Male
Re: 24 bit > 16 bit
« Reply #74 on: September 12, 2007, 07:13:05 AM »

Quote from: chunga1
so is 96 kHz even needed??? would it be higher quality than doing a 48 kHz 24-bit recording???

Quote from: Petrus
In theory, yes, but since microphones, loudspeakers, and headphones do not reach much past 20 kHz, trying to record frequencies up to 44+ kHz makes no difference in the final product. And even if they could reproduce those frequencies above 20 kHz, we humans could not hear them. So it is a total waste of space.
No,
you are mixing up two different things:
96 kHz as a recording-quality figure refers to 96,000 samples per second, each at 24-bit (or 16-bit) resolution, not to a mic frequency response of 96 kHz.
1st: SONY PCM-M10 + DPA 4060's + DPA MPS 6030 power supply (microdot)
2nd: iPhone 5 + "Rode iXY" microphone/"Zoom IQ5" microphone

 

© 2002-2024 Taperssection.com