I would like to point out something well in advance of the A/B/X testing, should it occur. In order to be "successful" at identifying which cable is in use, there needs to be some criterion in place that defines exactly what constitutes "success". I think it's reasonable to insist that the test subject be able to identify which cable is in use with significantly more certainty than could be achieved by merely guessing. A pure guesser should average a 50% success rate, but merely choosing correctly more than half the time is not sufficient to declare him a winner. He should be expected to choose correctly 50% of the time and incorrectly 50% of the time, yet for any given set of trials it's very likely that the counts of correct and incorrect choices won't be identical (nearly certain if the number of trials is large and even, and certain if the number of trials is odd).

Also, remember that if I run many trials of flipping a fair coin 100 times each, I'd expect that in about 99% of those trials the number of heads will fall between 37 and 63. (This is based on the binomial cumulative distribution function with a 50% probability of heads and 100 flips.) So if someone flips 64 or more heads out of 100, we can say with 99% confidence that the coin is biased towards heads. But the question we have to ask is, "is it a strong bias?" A coin that averages 63% heads would give 64 or more heads out of 100 flips about half the time. Being right 63% of the time is not a strong bias, so picking the right cable 64 times out of 100 tells us we're 99% sure he can pick the right one more than 50% of the time, but it doesn't let us claim his true success rate is any better than about 63%. That's not very good in my opinion.

I'd like to see a true success rate more like 95%. In that case, we'd expect him to pick the right cable 95 or more times out of 100 tries. The chances of doing that by pure 50/50 guessing are vanishingly small. You could spend all your time flipping a coin and never get 95 heads out of 100 flips; in fact, you almost certainly would not have time in your entire lifetime to accomplish that feat. Or 100 lifetimes.
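If you want to check those numbers yourself, here's a quick pure-Python sketch using the exact binomial distribution (the tail_prob helper is just a name I made up for this post, not anything standard):

    # Sanity-check the coin-flip numbers above with the exact binomial distribution.
    from math import comb

    def tail_prob(n, k, p):
        """P(X >= k) for X ~ Binomial(n, p): chance of k or more successes."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    n = 100
    # Chance a fair coin lands between 37 and 63 heads (inclusive) out of 100:
    between = sum(comb(n, i) * 0.5**n for i in range(37, 64))
    print(f"P(37..63 heads | fair coin) = {between:.4f}")                # ~0.993
    # Chance a fair coin clears the 99%-confidence cutoff of 64 heads:
    print(f"P(>=64 heads | fair coin)   = {tail_prob(n, 64, 0.5):.4f}")  # ~0.0033
    # A coin biased to 63% heads clears that same cutoff about half the time:
    print(f"P(>=64 heads | 63% coin)    = {tail_prob(n, 64, 0.63):.4f}") # ~0.46
    # And 95 or more heads from a fair coin is astronomically unlikely:
    print(f"P(>=95 heads | fair coin)   = {tail_prob(n, 95, 0.5):.2e}")  # ~6e-23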
But guess what? They aren't going to do 100 trials. That would be way too fatiguing for the test subject. I bet they do 15 or fewer. With only 15 trials, 12 correct answers only gets us to about 98% confidence that the results couldn't have been obtained by 50% guessing; he has to get 13 of 15 right before we clear the 99% bar. Meanwhile, if his true success rate were 95%, we'd expect him to get 14 or more right over 80% of the time, and 13 or more about 96% of the time. So, by dropping the number of trials from 100 to 15, we require a higher percentage of successes to achieve the same level of confidence that we've reached the correct conclusion about the test subject's ability to tell the difference between cables.
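Run the same arithmetic for 15 trials and you can see how the bar moves (same homemade helper as above):

    # The 15-trial version of the same calculation.
    from math import comb

    def tail_prob(n, k, p):
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    n = 15
    # How surprising is each score if he's purely guessing (p = 0.5)?
    print(f"P(>=12 of 15 | guessing) = {tail_prob(n, 12, 0.5):.4f}")  # ~0.0176 -> ~98% confidence
    print(f"P(>=13 of 15 | guessing) = {tail_prob(n, 13, 0.5):.4f}")  # ~0.0037 -> >99% confidence
    # How often would a genuinely 95%-accurate listener hit those scores?
    print(f"P(>=14 of 15 | 95% accurate) = {tail_prob(n, 14, 0.95):.3f}")  # ~0.83
    print(f"P(>=13 of 15 | 95% accurate) = {tail_prob(n, 13, 0.95):.3f}")  # ~0.96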
Of course, it's not up to us to set the criteria for these trials; those will be set by someone else. We just need to look at what the math actually says about the results they publish. It's pretty easy to make persuasive-sounding arguments from data that only weakly supports your assertions. I'll be watching their results with a critical eye and I hope the rest of you will too. Don't be fooled by their words. Their numbers will tell the real story.