
Author Topic: Taperssection - product listening comparisons *gold standard*


Offline it-goes-to-eleven

  • Trade Count: (58)
  • Needs to get out more...
  • *****
  • Posts: 6696
Re: Taperssection - product listening comparisons *gold standard*
« Reply #60 on: December 05, 2008, 10:55:31 AM »
I think matching RMS levels between samples should be the number one thing done before any audio comparison.

RMS levels should match, but peak levels *should* also. When doing comps I tend to match peak levels and then expect the RMS levels to also match. If they don't, then something's generally awry.

I always keep a written record when I master my shows. I have a very thick binder notebook. In addition to regular source info it includes seat location, pre-amp gain throughout the recording, and peak and rms levels for each section of the recording where different pre-amp gain was used.   And then, ultimately, any gain corrections applied.  Those notes are absolutely invaluable when I record a venue or performer in the future (how much gain and what will your final peak be with mk4's when you're 6 feet from Ravi and 20 feet from Roy's drum kit and the kick drum is facing you?).  Checking levels is so bush league ;-)
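
A minimal sketch of how those peak and RMS figures might be pulled from a file for that kind of notebook, assuming Python with the numpy and soundfile packages; the filename is a hypothetical example, not anything from the thread:

```python
# Sketch only: report peak and RMS level of a recording in dBFS,
# the two numbers logged per section in the notes described above.
# Assumes the third-party numpy and soundfile packages.
import numpy as np
import soundfile as sf

def peak_and_rms_dbfs(path):
    data, fs = sf.read(path)                  # float samples in [-1.0, 1.0]
    peak = np.max(np.abs(data))
    rms = np.sqrt(np.mean(np.square(data)))   # overall RMS across channels
    to_db = lambda x: 20.0 * np.log10(max(x, 1e-12))
    return to_db(peak), to_db(rms)

peak_db, rms_db = peak_and_rms_dbfs("set1_section2.wav")  # hypothetical file
print(f"peak {peak_db:.2f} dBFS, RMS {rms_db:.2f} dBFS")
```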

Offline SparkE!

  • Trade Count: (0)
  • Taperssection Member
  • ***
  • Posts: 773
Re: Taperssection - product listening comparisons *gold standard*
« Reply #61 on: December 05, 2008, 11:04:20 AM »
I think matching RMS levels between samples should be the number one thing done before any audio comparison.

RMS levels should match, but peak levels *should* also. When doing comps I tend to match peak levels and then expect the RMS levels to also match. If they don't, then something's generally awry.

Your ears respond more to the average signal power level than they do to peaks.  That's the reason you can run a limiter and not severely affect the way the audio sounds.  Matching the rms levels is the way to go, in my opinion.  So, I would suggest matching rms levels and expect the peak levels to match also.  It's harder to do and still avoid running one of your peaks into the rail, but it really does help to avoid the problem where one source is preferred to the other due to signal power differences between the two sources.  If you match rms voltage levels, you automatically match the signal power levels, assuming you are playing both sources back on the same system.
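
As a rough illustration of that approach (not anything posted in the thread), here is a minimal sketch that scales sample B so its RMS matches sample A's and checks that the scaled peaks stay off the rail, assuming numpy and soundfile; the filenames are hypothetical:

```python
import numpy as np
import soundfile as sf

a, fs_a = sf.read("sample_a.wav")      # hypothetical reference sample
b, fs_b = sf.read("sample_b.wav")      # hypothetical sample to be matched

rms = lambda x: np.sqrt(np.mean(np.square(x)))
gain = rms(a) / rms(b)                 # linear gain that equalizes RMS
b_matched = b * gain

peak = np.max(np.abs(b_matched))
if peak >= 1.0:
    # Matching RMS would run a peak into the rail; reduce both samples
    # by the same amount instead of letting one of them clip.
    print(f"would clip at {20 * np.log10(peak):.2f} dBFS, back off both levels")
else:
    sf.write("sample_b_rms_matched.wav", b_matched, fs_b)
```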
How'm I supposed to read your lips when you're talkin' out your ass? - Lern Tilton

Ignorance in audio is exceeded only by our collective willingness to embrace and foster it. -  Srajan Ebaen

Offline Church-Audio

  • Trade Count: (44)
  • Needs to get out more...
  • *****
  • Posts: 7571
  • Gender: Male
Re: Taperssection - product listening comparisons *gold standard*
« Reply #62 on: December 05, 2008, 11:19:47 AM »
I think matching RMS levels between samples should be the number one thing done before any audio comparison.

In most cases, if the recorder has a detented (digital) gain knob, it should be easy. If not, then I suggest using a calibrator with a 1 kHz tone, like the one I use for calibrating my microphones. That's the middle of the spectrum; all you are doing is calibrating the level settings of both devices under test, and a single tone will do that.

If the mics aren't perfectly matched, it does not matter, because both recordings will be made with the same microphones. And if both devices under test have detented attenuators, then again it does not matter as long as both are set to the same value. The problem arises with continuously variable gain controls: if they are present, then you MUST calibrate the input and output levels. Any differences in input and output levels relative to the gain settings, if the controls are detented, should be attributed to the modification.


If you use a complex waveform like music, it will be impossible to calibrate both levels unless you do an RMS level calculation on the spectrum. The only problem with that is, if there are any differences in the spectrum, you will have changed how they are perceived by changing the level (Fletcher-Munson curves), so a single tone must be used.

Remember, we are just trying to electronically calibrate the input sensitivities so that both sets of devices under test are the same. If both don't respond the same to 1 kHz, I say you have serious problems with one of the units and the test should be scrapped.
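
For what it's worth, a minimal sketch of generating that kind of single-tone calibration file, assuming numpy and soundfile; the sample rate, duration, and level are hypothetical choices:

```python
import numpy as np
import soundfile as sf

fs = 48000                      # sample rate (hypothetical choice)
dur = 10.0                      # seconds of tone
level_dbfs = -18.0              # target RMS level of the tone
amp = 10 ** (level_dbfs / 20.0) * np.sqrt(2.0)   # sine peak for that RMS

t = np.arange(int(fs * dur)) / fs
tone = amp * np.sin(2 * np.pi * 1000.0 * t)      # 1 kHz calibration tone
sf.write("cal_1khz.wav", tone.astype(np.float32), fs)
```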


Chris
for warranty returns email me at
EMAIL Sales@church-audio.com

Offline Church-Audio

  • Trade Count: (44)
  • Needs to get out more...
  • *****
  • Posts: 7571
  • Gender: Male
Re: Taperssection - product listening comparisons *gold standard*
« Reply #63 on: December 05, 2008, 11:24:17 AM »
I think matching RMS levels between samples should be the number one thing done before any audio comparison.

RMS levels should match, but peak levels *should* also. When doing comps I tend to match peak levels and then expect the RMS levels to also match. If they don't, then something's generally awry.

Your ears respond more to the average signal power level than they do to peaks.  That's the reason you can run a limiter and not severely affect the way the audio sounds.  Matching the rms levels is the way to go, in my opinion.  So, I would suggest matching rms levels and expect the peak levels to match also.  It's harder to do and still avoid running one of your peaks into the rail, but it really does help to avoid the problem where one source is preferred to the other due to signal power differences between the two sources.  If you match rms voltage levels, you automatically match the signal power levels, assuming you are playing both sources back on the same system.

No need to match levels if you run a 1 kHz tone through the gear before you run your tests. Then the signal chain is calibrated. Differences between left and right are a direct product of the mics and should be OK, since those discrepancies will be on both sets of recordings because the same pair of mics is being used.

Chris
for warranty returns email me at
EMAIL Sales@church-audio.com

Offline it-goes-to-eleven

  • Trade Count: (58)
  • Needs to get out more...
  • *****
  • Posts: 6696
Re: Taperssection - product listening comparisons *gold standard*
« Reply #64 on: December 05, 2008, 11:33:57 AM »
I think matching RMS levels between samples should be the number one thing done before any audio comparison.

RMS levels should match, but peak levels *should* also. When doing comps I tend to match peak levels and then expect the RMS levels to also match. If they don't, then something's generally awry.

Your ears respond more to the average signal power level than they do to peaks.  That's the reason you can run a limiter and not severely affect the way the audio sounds.  Matching the rms levels is the way to go, in my opinion.  So, I would suggest matching rms levels and expect the peak levels to match also.  It's harder to do and still avoid running one of your peaks into the rail, but it really does help to avoid the problem where one source is preferred to the other due to signal power differences between the two sources.  If you match rms voltage levels, you automatically match the signal power levels, assuming you are playing both sources back on the same system.

No need to match levels if you run a 1 kHz tone through the gear before you run your tests. Then the signal chain is calibrated. Differences between left and right are a direct product of the mics and should be OK, since those discrepancies will be on both sets of recordings because the same pair of mics is being used.

That incorrectly assumes the frequency response of the gear under comparison is identical, including during transients.  Just because the levels match at a constant 1 kHz doesn't ensure they'll match at 50 Hz.  And it isn't practical on live sources.  Even if you calibrate that way during the recording, final differences still need to be measured and tweaked.
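
A minimal sketch of the kind of per-band check that implies, assuming numpy, scipy, and soundfile; the filenames and band edges are hypothetical. A match in the 1 kHz band says nothing by itself about the 50 Hz band:

```python
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

def band_rms_db(x, fs, lo, hi):
    # RMS level inside a band, in dB (relative, not calibrated to dBFS)
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x, axis=0)
    return 20 * np.log10(np.sqrt(np.mean(np.square(y))) + 1e-12)

for name in ("device_a.wav", "device_b.wav"):   # hypothetical files
    x, fs = sf.read(name)
    print(name,
          f"around 1 kHz: {band_rms_db(x, fs, 900, 1100):.2f} dB,",
          f"around 50 Hz: {band_rms_db(x, fs, 40, 60):.2f} dB")
```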

Offline Church-Audio

  • Trade Count: (44)
  • Needs to get out more...
  • *****
  • Posts: 7571
  • Gender: Male
Re: Taperssection - product listening comparisons *gold standard*
« Reply #65 on: December 05, 2008, 11:51:21 AM »
I think matching RMS levels between samples should be the number one thing done before any audio comparison.

RMS levels should match, but peak levels *should* also. When doing comps I tend to match peak levels and then expect the RMS levels to also match. If they don't, then something's generally awry.

Your ears respond more to the average signal power level than they do to peaks.  That's the reason you can run a limiter and not severely affect the way the audio sounds.  Matching the rms levels is the way to go, in my opinion.  So, I would suggest matching rms levels and expect the peak levels to match also.  It's harder to do and still avoid running one of your peaks into the rail, but it really does help to avoid the problem where one source is preferred to the other due to signal power differences between the two sources.  If you match rms voltage levels, you automatically match the signal power levels, assuming you are playing both sources back on the same system.

No need to match levels if you run a 1 kHz tone through the gear before you run your tests. Then the signal chain is calibrated. Differences between left and right are a direct product of the mics and should be OK, since those discrepancies will be on both sets of recordings because the same pair of mics is being used.

That incorrectly assumes the frequency response of the gear under comparison is identical, including during transients.  Just because the levels match at a constant 1 kHz doesn't ensure they'll match at 50 Hz.  And it isn't practical on live sources.  Even if you calibrate that way during the recording, final differences still need to be measured and tweaked.


This is my take on it:

1- Put a 1 kHz tone in and measure the output of each device under test, or use three tones at 50 Hz, 1 kHz, and 10 kHz (see the measurement sketch after this list).
2- Play the source and record it.
3- Listen to the recordings.
4- Evaluate the differences.
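
The measurement sketch referenced in step 1, assuming numpy and soundfile; the tone frequencies follow the post, the filenames are hypothetical. It reports the level of each calibration tone as recorded by each device, so the gain settings can be confirmed equal before anyone listens:

```python
import numpy as np
import soundfile as sf

def tone_levels_db(path, freqs=(50.0, 1000.0, 10000.0)):
    x, fs = sf.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)                    # a mono mix is enough for level
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    bins = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # nearest FFT bin to each calibration tone, level in dB (relative)
    return {f: 20 * np.log10(spec[np.argmin(np.abs(bins - f))] + 1e-12)
            for f in freqs}

print(tone_levels_db("device_a_cal.wav"))   # hypothetical filenames
print(tone_levels_db("device_b_cal.wav"))
```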

If you go and change RMS levels between samples, you are fooling with the spectral differences between the samples, and now you might be erasing the differences and making them sound more alike.

If there are differences between the samples (say one device under test has a bump from 1 kHz to 5 kHz), then the level of that sample will read louder. If you go and change the RMS level so the other sample is the same amplitude, you lose your ability to hear the bump as well as you would have before. If you use a standard frequency to calibrate the inputs, then you have a level playing field, and you can now assume any differences are the mod itself and not a side effect of RMS level calculations and the resulting normalization applied to the WAV file.

This is my take on it. I agree that level influences quality, or perceived quality, but we must make sure that we use either a single 1 kHz tone or, say, 50 Hz, 1 kHz, and 10 kHz tones to make sure both decks or devices are even in basic level, so we can hear the differences, if there are any, in the audio spectrum.

I do believe that these tests should be backed up with FFT measurements to make sure there are no dips at the test frequencies, as that could be used to "fool" my test. But those results should be viewed only after the files are listened to and voted on.
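
A minimal sketch of that kind of after-the-vote FFT check, assuming numpy, scipy, soundfile, and matplotlib; the filenames are hypothetical. Overlaying the averaged spectra of the two samples makes a dip or bump at the test frequencies easy to spot:

```python
import numpy as np
import soundfile as sf
from scipy.signal import welch
import matplotlib.pyplot as plt

for name in ("device_a_music.wav", "device_b_music.wav"):  # hypothetical
    x, fs = sf.read(name)
    if x.ndim > 1:
        x = x.mean(axis=1)
    f, pxx = welch(x, fs=fs, nperseg=8192)        # averaged spectrum
    plt.semilogx(f, 10 * np.log10(pxx + 1e-20), label=name)

plt.xlabel("Frequency (Hz)")
plt.ylabel("Level (dB, relative)")
plt.legend()
plt.show()
```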


Maybe I am wrong, but we are just trying to make sure the gain settings are all equal; after that, we want to be able to hear the differences between the devices under test. If that is so, then by applying RMS matching and normalization you are changing the files, and any spectral differences between the two files could be changed. Not to mention the fact that you might also be increasing the harmonic distortion of the file being "tweaked" without even being able to see it, though you might be able to hear it. Now you have changed both the amplitude and the distortion!

Chris
« Last Edit: December 05, 2008, 11:56:42 AM by Church-Audio »
for warranty returns email me at
EMAIL Sales@church-audio.com

Offline Church-Audio

  • Trade Count: (44)
  • Needs to get out more...
  • *****
  • Posts: 7571
  • Gender: Male
Re: Taperssection - product listening comparisons *gold standard*
« Reply #66 on: December 05, 2008, 12:20:03 PM »
That incorrectly assumes the frequency response of the gear under comparison is identical, including during transients.  Just because the levels match at a constant 1 kHz doesn't ensure they'll match at 50 Hz.  And it isn't practical on live sources.  Even if you calibrate that way during the recording, final differences still need to be measured and tweaked.

Assuming we are talking about preamps or recorders here, anything that doesn't test as flat in frequency response from 20 Hz to 20 kHz* within ±1 dB should be disclosed as such in its specs, which is easily verifiable under test.  Transient response is not a huge issue here; normally you'd test frequency response with white noise, which is constantly transient.  You can also test with a frequency sweep, or several sine waves.  All of these should yield the same result in a reasonable-quality amp.

For microphones, of course, that is not true at all, except for calibrated measurement mics.



* Filter behavior of a digital converter at 44.1 kHz may result in slight attenuation of frequencies above 16 kHz.  That is interesting to test and note for a particular device, but if we are concerned with its analog performance up to 20 kHz, set the device at any higher sample rate.

Actually, white noise is perfect: most meters can read it correctly, and it solves all of the issues I had with using music as a means of calibrating the device under test. Good idea. You can use a CD player and a set of patch cables into the XLRs on the preamp, press play, and then set the levels on the meter. You can use compressed white noise so it's more constant and less likely to jump around on the meters. Again, we are not setting absolute level; we are just setting the overall balance between both sets of devices under test.
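
A minimal sketch of generating that sort of "compressed" (crest-factor-limited) white noise to burn to a disc for balancing the chains, assuming numpy and soundfile; the length, clip point, and output level are hypothetical choices, and the hard clipping is a deliberately crude stand-in for a limiter:

```python
import numpy as np
import soundfile as sf

fs = 44100                                   # CD sample rate
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs * 30)         # 30 s of white noise
noise = np.clip(noise, -2.0, 2.0)            # crude limiting, roughly 6 dB crest
noise = noise / 2.0 * 10 ** (-1.0 / 20.0)    # peaks at about -1 dBFS
sf.write("compressed_white_noise.wav", noise.astype(np.float32), fs)
```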


Chris
for warranty returns email me at
EMAIL Sales@church-audio.com

Offline Ozpeter

  • Trade Count: (0)
  • Taperssection All-Star
  • ****
  • Posts: 1401
Re: Taperssection - product listening comparisons *gold standard*
« Reply #67 on: December 06, 2008, 04:40:13 AM »
Quote
So long as the levels are set correctly, there is no difference to the amplifier whether its incoming analog signal is from a mic or previously recorded source
Which I think means that there would indeed be no major objection to my simple suggestion of using a mic-level pre-recorded source (e.g. CD player into Mackie mixer with mic-level output into recorders under test) as a test bed.  Of course you'd have the CD player's imperfections and the Mackie's imperfections being heard, but they'd be heard on both recordings.  Must get around to that simple R-44 vs H2 test I promised several pages back!

The only time I did any kind of on-site comparison of kit was when I recorded one half of a classical chamber concert using a Sennheiser MS pair into a Motu Traveler (whose preamps seem to be rated as quite good) into a laptop via firewire, and in the second half I substituted a Phonic firewire mixer whose preamps have no street cred at all - but personally I can't hear any great problem with them.

Listening afterwards, I couldn't pin down any significant difference between the two concert halves - in fact now I think I've forgotten which was which and certainly can't say that one is better than the other (and is therefore presumably the Traveler).  That included listening to the quality of background noise between movements as well as the music and the applause.  Of course they performed different works in each half, and so you can't compare like with like, but if there was a significant difference, you'd still hear it on such a test.

Of course it could be that there were differences and that I'm too deaf to hear them.  But that particular test left me quite happy with the humble Phonic mixer and with no desire to purchase the Traveler instead.

Maybe that would be a reasonable and practical way of testing under real world conditions.  Simply record one half with one set of kit, and the second with the comparison set, but use the same mics and don't move them.  One could then listen to bits of each and see if there was anything that would enable one to say which bits came from which kit set, A or B.  Then say whether you preferred A or B.  But saying that A or B was the best (= the most accurate) would be another whole layer of complication, unless there was a gross inadequacy in one of them.  How would you judge?  One might sound very sexy but might not actually be true to the original experience through your ears.

Which reminds me of another bit of testing I did - this involved some tests of a Sennheiser MKH-series mic, a Naiant mic, an LSD2, and a Gefell studio mic.  They were each put up in front of a pair of Genelec monitors in a pro recording studio control room, recording the same prerecorded piece onto a multitrack via a Grace 8-channel preamp, so you could play back switching rapidly between the different versions by hitting each solo button in turn.  What I now recall was that the LSD2 sounded preferable to the others until one compared it with the original recording - the LSD2 added something nice, but not something accurate.  The Sennheiser sounded rather dark even though it's what I use for all my recordings in a main pair - but actually quite accurate, as you would expect, when compared to the source.  There was remarkably little to choose between the Naiant and the Gefell - I mean, a difference, but not much when you compared the price.

However, I'd still not base a purchasing decision on such a test unless I had no choice.   For whatever reason, there's no substitute for a real sound  in a real acoustic for evaluating mics.   But for preamps and recorders, using a pre-recorded source fed into the items under test at mic level seems to me to be a reasonable method.

 
