AES Paper: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Gutbucket:

--- Quote from: aaronji on July 19, 2016, 11:08:33 AM ---I wish more of the papers were open access, though...
--- End quote ---

^This.

The only reason I ever considered joining was access to the papers, but at some point they raised the access fees even for members, and I lost interest.

Until seeing this, I was not aware of any open access AES papers (aside from copies hosted by the authors themselves, or by some other organization on their behalf outside of the AES), so I look forward to finding out whether there is anything interesting among the other open access papers available through their site.

Gutbucket:
AES search results for all "open access" papers: http://www.aes.org/e-lib/online/search.cfm?type=elib&title=&oa=yes
(organized by date of publication)

aaronji:
^ Interesting pattern discernible in those open access publications:  once you get past the first couple of pages (which are from the early fifties), the more modern ones are mostly from outside of the US.  A lot of overseas public funding sources stipulate open access publication, which is funded by fees paid by the authors.  The US does not generally require this (to my knowledge), so the same pattern shows up in a lot of journals.

Thanks for the list, by the way.  There are some interesting papers available.

voltronic:

I may regret asking  ::)  but I'm curious as to why you found the methodology used by this researcher so poor.  It sounds like you work in a field where you do this type of thing a lot, so I'd like to know what I, as a layman, am missing.  My research experience is limited to some papers and studies done as part of my Master's in music, nothing nearly this involved.

aaronji:
Well, there are quite a number of little things, but here are a few of the big-ticket items (so to speak).  The first is that the independent variable (exposure, stimulus) and the dependent variable (outcome, response) should be relatively homogeneous.  For example, you could do a meta-analysis looking at the effect of statin use on LDL cholesterol levels.  Ideally, you would want all included studies to use the same statin (say, simvastatin) at the same dosage in all participants, with LDL measured in exactly the same way.  Outside of clinical trials, it is generally impossible to obtain that kind of data, though, so I think most reviewers would accept statin use versus LDL, even if there were several different medications and a couple of ways of measuring LDL (although they might very well ask for sub-group analyses stratified on those).  That is still quite homogeneous, since statins all work via the same pathway and LDL measurement variability is fairly consistent across methods.  Reiss, in contrast, takes disparate outcomes and forces them into the same box, and does the same with the exposures.  In some cases, he allows his own bias to influence how he does this ("for each trial, it was treated as a correct discrimination if the highest sample rate, 192 kHz, was ranked closer to “live” than the lowest sample rate, 44.1 kHz, and an incorrect discrimination if 44.1 kHz was ranked closer to “live” than 192 kHz").
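
To make the sub-group idea concrete, here is a minimal sketch using the freely available R package "meta" (which comes up again below), with entirely invented numbers in the spirit of the statin example; depending on your version of the package, the stratification argument may be called byvar rather than subgroup:

--- Code: ---
# Invented example: mean LDL change (mg/dL) in treated vs. control arms
# of five hypothetical trials using two different statins
library(meta)

trials <- data.frame(
  study  = paste("Trial", 1:5),
  drug   = c("simvastatin", "simvastatin", "simvastatin",
             "atorvastatin", "atorvastatin"),
  n.e    = c(120, 90, 150, 110, 95),    # treated arm sizes
  mean.e = c(-38, -41, -36, -45, -48),  # mean LDL change, treated
  sd.e   = c(12, 14, 11, 13, 15),
  n.c    = c(118, 92, 148, 108, 97),    # control arm sizes
  mean.c = c(-4, -6, -3, -5, -4),       # mean LDL change, control
  sd.c   = c(10, 13, 12, 11, 14)
)

# Pooled mean difference, stratified by drug (the sort of sub-group
# analysis a reviewer might request when the exposure is not homogeneous)
m.ldl <- metacont(n.e, mean.e, sd.e, n.c, mean.c, sd.c,
                  studlab = study, data = trials,
                  sm = "MD", subgroup = drug)
summary(m.ldl)
--- End code ---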

Another major problem is that he treats proportions as means and analyzes them as such (this gets into the statistical weeds a bit, but I will keep it brief).  In doing so, the methods he implements make a lot of distributional assumptions, particularly assumptions of normality.  The actual trial data, though, is not at all normal; it is binomially distributed (like coin-flip data).  Any analysis needs to account for that explicitly, and there are a number of methods for doing so (such as Stuart-Ord).  The R package "meta", which is freely available (as is R itself), implements several approaches.  Incidentally, this criticism applies not only to the meta-analysis, but also to the binomial test panel in Table 2; the test is appropriate at the individual study level, but not for the aggregate.  Really, the issue is even more complicated than that, because the trials themselves are not independent.  There is correlation between a given subject's choices, so that 1 trial in 1000 people, 10 trials in 100 people, and 1000 trials in 1 person are not the same thing, statistically.
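
Just to make that concrete, here is a minimal sketch of how "meta" can pool this kind of data while respecting its binomial nature (the counts are completely made up for illustration, not taken from any of the actual studies):

--- Code: ---
# Made-up data: correct discriminations (event) out of total trials (n)
# for three hypothetical listening studies
library(meta)

dat <- data.frame(
  study = c("Study A", "Study B", "Study C"),
  event = c(52, 110, 61),   # correct responses
  n     = c(100, 200, 120)  # total forced-choice trials
)

# Exact binomial test against chance (p = 0.5) for a single study;
# as noted above, this is fine per study but not for the aggregate
binom.test(dat$event[1], dat$n[1], p = 0.5)

# Pooled proportion using a logit transform and a binomial (GLMM) model,
# rather than treating the observed proportions as normal means
m <- metaprop(event = event, n = n, studlab = study, data = dat,
              sm = "PLOGIT", method = "GLMM")
summary(m)
--- End code ---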

Then there is the obvious publication bias problem seen in the funnel plot.  He explains this away, but omits a very plausible explanation: supporters of higher sampling rates conducting these studies may shelve a study that does not fall in line with their expectations (i.e. fails to reject the null hypothesis).  This might not even be fully conscious on the researcher's part ("this is ongoing work, for which my sample size is currently insufficient").  Sensitivity analysis could help here (assuming he had used an appropriate set of studies and statistical approach in the first place).
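
For what it is worth, once you have a pooled object like the hypothetical m from the sketch above, eyeballing the funnel plot and testing its asymmetry takes two lines (argument names vary a little between versions of "meta", and with only a handful of studies these tests have essentially no power, so this is purely illustrative):

--- Code: ---
# Funnel plot for the hypothetical pooled object from the earlier sketch
funnel(m)

# Egger-type regression test for funnel plot asymmetry; a significant
# result is consistent with publication bias, though it cannot separate
# bias from genuine heterogeneity (k.min lowered only because the toy
# example has just three studies)
metabias(m, method.bias = "Egger", k.min = 3)
--- End code ---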

There are many other things, too, such as his curious, and curiously inconsistent, approach to multiple testing corrections, and his unfortunate tendency to cite P-values as percentages.  Honestly, if I had to do a formal review of this, it would take hours.  The paper is poorly structured, and Reiss' bias is all too obvious in many places.  Anyway, that should be enough (or more than enough!), but let me know if you are interested in more detail...
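
P.S. On the multiple testing point, at least, a consistent correction is literally a one-liner in base R; a quick illustration with made-up raw P-values:

--- Code: ---
# Hypothetical raw P-values from several tests in the same family
p_raw <- c(0.012, 0.034, 0.049, 0.21)

# Family-wise corrections applied consistently across all of the tests;
# Holm controls the same error rate as Bonferroni but is more powerful
p.adjust(p_raw, method = "bonferroni")
p.adjust(p_raw, method = "holm")
--- End code ---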
