The 3.5mm port is dying – at least when it comes to smartphones. If the persistent Lightning headphone rumor wasn’t enough to persuade you, the fact that Motorola beat Apple to the punch should be. Motorola’s new Moto Z and Moto Z Force don’t have that familiar circular hole for your cans to plug into, and it now seems inevitable that almost every phone within a few years will forgo the port in favor of a single socket for both charging and using headphones.
This is a change that few people actually want. It’s driven entirely by the makers of our phones and their desire to ditch what they view as an unnecessary port.
It’s all about control. You can’t put DRM on a 3.5mm jack, but you can do so on a digital port or wireless connection. Imagine only Beats headphones being certified to pull the best quality audio out of an iPhone, protected through Apple DRM.
You know it’s going to happen.
That’d be the biggest waste ever. Have you tried Beats headphones? Apple could pipe out the highest bitrate, best mastered PCM audio file ever and it’d still sound like veiled mud on a Beats headphone!
Meh, whatever. It’s about time I got myself a dedicated DAP anyway. You’ll have to pry my Oyaides from my cold dead hands.
Hope Apple are prepared for a f–kton more warranty claims than usual. Sure, 3.5mm jacks fail, but I’m certain they have had at least five Lightning port failures for every 3.5mm jack failure. The Lightning port is far less robust. People have had Lightning port snafus from just using their iPhones while charging.
The sound quality is decent but nowhere near worth the price you pay for Beats headphones (just like any other Apple product, in fact).
You still have to take into account the packaging, the marketing, the “special” development done for Beats (including dedicated radio streams) and so on.
It’s kind of sad to see what companies have ended up doing. Any rumor that comes out of Apple’s labs is taken as a given.
It seems that companies will go to any lengths to try out whatever these rumors suggest, because there is a general idea that whatever Apple comes up with will sell well.
beats are really overpriced crap, but they are also based on the lie that everything else in the chain is fine – just get better headphones.
that’s plain stupid, and repeated daily by perfectly smart people all over the world.
signal chain says you have to have the best possible source before rendering it, or why bother. you’re just shining turds if you listen to AAC or MP3 through expensive headphones.
in the visual realm, speakers are akin to eyeglasses. you wouldn’t tell people who can see the pixels in your low-resolution output to just get better glasses, would you?
use lossless audio, 24bit if available, then a good DAC and amp and almost any speakers will sound good. they will at least sound the best they are capable of, aka operating at peak efficiency.
Speakers/headphones are by far the most important component determining the quality of sound from an audio system. Obviously it’s better to have a good all round system, but it makes much more sense to connect expensive speakers to a cheap MP3 player & basic amp than connect cheap speakers to an expensive source playing 24bit lossless audio. Those cheap speakers would be incapable of delivering any of the subtle benefits of the higher quality source.
A cheap amp can ruin a great source and plugging expensive speakers into it won’t fix it. If you have to focus on just one piece of the puzzle then choose between the amp and the speaker, but do so with the understanding that both are equally as important. A sacrifice with one can’t be reversed or overcome by the other.
Every component may be important, but that doesn’t mean that their impact on sound quality is equal. Looking online, there’s a lot of disagreement about how much things like amps and cables really matter in an audio system.
The blind testing of amplifiers that I’ve seen generally finds little evidence of difference in sound quality between different amps (if they aren’t overdriven and clipping). That includes testing of cheap amps against expensive ones.
In contrast, people can consistently tell the difference between different speakers in double-blind ABX tests.
You can find some examples listed here: http://www.head-fi.org/t/486598/testing-audiophile-claims-and-myths
That says nothing about any other kind of test.
I’ll share this with you… The people who really know & understand this subject don’t debate or argue about it, because their real world knowledge & experience always trumps someone’s flawed logic or opinion. A lot of the people trying to find answers aren’t going about it correctly. According to the internet, the world is filled with audio professionals, or “audiophiles” thinking they’re as good or better. In reality the only people who truly know their stuff are the guys who’ve got years & years of hands-on experience – not the people who read some technical paper or conducted some ill-thought-out test.
It looks like you’ve never participated in a double blind listening test…
I’ve run into professionals with years of experience who buy into some of the worst audiophile snake oil, e.g. ridiculously expensive cables. Common sense and conventional wisdom quite often end up debunked when put to the test. I’ll certainly take actual testing over anyone’s personal opinion.
It’s true that not all professionals are equal, no matter what field you’re talking about. I’ve come across countless individuals working professionally in audio & video production who lacked knowledge or a greater understanding of how to do their job. Those people are everywhere and they aren’t highly regarded. I know from your descriptions that you’ve never run into anyone in the upper echelon – people who really know their sh.t.
As far as trusting tests over opinions… You can absolutely do that if you wish, but you could easily be placing your trust in something that was flawed in its very design, and the test results you believe in so much would then be meaningless rather than having any real value.
If audio is a truly interesting subject to you, I strongly recommend you don’t discard any sources. Read everything you can and take it all with a grain of salt. Some of it is subjective and some of it isn’t.
OK, now, I do believe that you may have long, meaningful and skilled experience with sound processing, but the thing is, if you buy or download music produced by good professionals, or even if you rip it with a good ripper and from a good source, the weakest point will be the mechanical reproducer most of the time. I have yet to see a bad DAC in the last 10 years, but I’ve seen a lot of bad amplifiers and a lot more bad speakers and headphones than anything else. Sure, regular people may introduce a lot of distortion when adjusting the sound to their liking, but this is not what the other guy was talking about, I think.
And there is also the scam pushing people to buy cables with gold-plated contacts, super amplifiers whose distortion curve is only observable when the sound is almost ruining your cochlea (or impairing it for some time), and speakers with unbelievable price tags whose performance over just a good set is only minimally better when analyzed with professional equipment and indistinguishable to most human ears. Of course, this phenomenon is not exclusive to the sound industry; you have it also in cars that can reach more than 200 mph when speed limits across the globe rarely reach 75 mph, and in many other fields like screen density, camera resolution and so on, all of them reaching a point where they simply surpass the capabilities of our biological senses.
So, yes, I agree with the other guy: if you buy good music done right, or even bad music done right, don’t turn the volume to the max, and if you must decide between a top device with the original headphones or a merely good device with good headphones, most of the time – almost invariably, in my own sampling – your experience will be better if you buy good headphones.
Better headphones are a benefit when the source is of higher quality than the lesser headphones are able to produce. There’s no advantage to better headphones when the source is of lower quality than the lesser headphones. That’s such an easy concept to grasp I truly don’t get why people have it confused. There’s no such thing as miracle headphones that magically turn bad quality into good, or decent quality into fantastic. That is simply not how it works. But, sadly, that doesn’t stop people from believing they’ve got magical equipment performing miracles.
You need to work on your dismal reading comprehension if you think that’s what I said.
The point I made is that high-bitrate lossy files are so close in quality to their lossless source that any difference is almost always imperceptible. With modern encoders, and the use of high-bitrates, the file format is unlikely to ever be a significant quality bottleneck, regardless of the hardware playing it. This is backed up by numerous double blind ABX listening tests comparing different formats.
There’d be little real benefit in replacing those lossy files with anything higher quality, not even the lossless original they were created from. In contrast, changing headphones can make a significant and easily heard difference to the sound quality.
I’m not sure why you’re finding that confusing, or what blathering on about “miracles” is meant to prove…
The above amounts to nothing more than an exercise in showing how little you actually know about encoders and how they work. These kinds of silly comments (not points) you’re making are a clear sign you’re not interested in learning anything.
And this kind of smug non-response, failing to address the points I made or deal with the evidence supporting them, is a clear sign that you don’t have anything to teach me.
speakers and wires and amps can all do much better than play mp3 files.
lossy coding is “perceptual coding”, meaning they try to remove things they think you can’t hear. it works. get us to agree ‘scientifically’ on what is missing from a piece of lossy music. you can’t. it’s music, it’s the art form most tied into our emotional state.
i’m a broken record and you won’t change me. i’m not an audiophile, i’m poor and have worked in and around recording studios most of my life.
the mainstream consumer and tech person is usually confused about audio. nothing new there. but there’s more ignorance now than ever.
you need a pure source file, not a degraded one, to start with.
i’d take a pure source file + good DAC + good amp playing through any set of speakers over a phone playing through expensive speakers. i love music, and i love real instruments and real reverb and real voices. lossy compression kills all of that.
I’ve often heard this kind of thing when testing disagrees with someone’s opinion. I personally find it implausible that all these tests, many conducted by experienced and qualified people, could invariably be so badly flawed.
But even if that was the case, to me it indicates what tiny differences we’re really talking about here. If there were significant differences (as there are with different speakers and headphones) they’d easily be identified even in a supposedly flawed test (as they are with different speakers and headphones).
If none of the tests devised are perfectly sensitive enough to allow people to hear any differences that do exist, I find it unlikely that those differences will ever be heard in real world listening by the vast majority of people.
Dave_K, rather than going in circles until the end of time, I’m going to opt not to pick apart your entire post. You lack the experience I have in this field, and I lack the ability to ignore what that experience has taught me, so we will always be at odds… That’s ok and not meant as a swipe. I will comment on the following though:
Most people aren’t using more than mediocre-to-decent equipment. They naturally use and are used to consumer grade stuff, and in some cases “pro-sumer”, which is often more marketing than anything else. For that reason alone most people are already listening to audio at a deficit. Additionally, many people don’t even notice differences until you point them out. It’s not that they can’t hear them, it’s that they don’t pay enough attention as a listener to actually hear the audio in detail. I’m the opposite – I only hear audio in its detail. I wish I could revert to a state where, like you, I think everything rests on the quality of your headphones or speakers. Everything is real simple that way.
I’ve heard that before from “golden eared” audiophiles confident that they could hear the differences that thousands of others had missed. A few ABX test failures later and they were a lot less arrogant about their superior listening abilities…
It doesn’t matter what you’ve heard from these so-called “audiophiles”. People stupid enough to label themselves as such are very obviously not who you should be listening to. Now, ….
When I’m the audience, I know what I’m listening to and I know what to listen for. People like me who have an extensive background in this field will all tell you that they deconstruct audio as they listen to it. It’s a matter of knowledge & experience, not magic or gold. That happens to everyone who is a great student, with great teachers, and decades of hands-on training. It’s absolutely laughable, to put it mildly, that you think an `average Joe` is in any way comparable to what I just described.
This exchange is why most of us never bother responding to the silliness from “audiophiles”, couch experts, and average Joes. It almost always winds up being a total waste of time because people don’t actually want to learn anything. You guys just want to argue subjects you know little-to-nothing about.
Here’s a tip: if you want to convince people of your claims, try providing evidence to back them up.
I’m happy to learn, but I’m not going to believe every assertion made by some random person on the internet. Simply claiming to be an expert, while spouting off things that are contradicted by years of testing, really isn’t very convincing.
Dave_K, I didn’t participate so I could school you. I’ve already stated several facts that anyone can verify for themselves, and a few posts back that’s essentially what I advised. If you really want to learn, you will put in the energy and effort to do so, but I won’t waste my time; my experience is that people like you, who would rather debate everything than listen, aren’t all that interested to begin with.
At the end of the day I don’t care what you believe. It’s obvious this isn’t your area of expertise so we really have nothing in common and this becomes less interesting to me with each post. I don’t know what you do for a living but I’m sure the reverse would be true were we talking about your area of expertise.
Continue to believe in those “audiophiles” you hang around with, but just know there’s a plethora of more accurate information and facts available should you ever truly decide to elevate your understanding of the subject. If/when you do, you’ll realize how backwards and wrong you’ve been.
Here you are acting like the worst of those audiophiles – making claims without backing them up with evidence. Do you really expect people to just accept what you say because you call yourself an expert?
I’ve taken the time to look into your “facts”, examined listening tests designed specifically to test them, and found compelling evidence that you don’t know what you’re talking about.
If you had “accurate information” to challenge what I’d said then I’m sure you’d have presented it. Instead you’ve just evaded the points I’ve made, blathered on about your unproven expertise (as if that’s meant to impress people), and tried to twist what I’ve said into a strawman you can knock down.
Try putting aside your arrogant assumptions about your own expertise for a moment, actually examine the evidence with an open mind, and maybe you’d learn something yourself.
Things murdered by lossy compression:
Hi-hats.
Splash cymbals.
Electric bass string noise.
Analog synth squeals.
Air in the room.
Reverbs.
Vocalists’ breath and lip sounds.
Kick drums.
Bowed instruments’ timbre.
i could go on. no speaker will recover what’s already gone.
you can’t shine a turd with expensive headphones.
why listen to tech nerds about audio, when you know they are clueless. listen to musicians and producers and engineers in the field. listen to experts. 24bit audio has been the professional standard for 20+ years now.
the internet becoming the distro platform set audio quality back 3 decades.
Then it should be absolutely trivial for you, and other people who make similar claims, to identify lossy from lossless in ABX listening tests.
Why isn’t that the case?
it’s very easy, almost always, to tell the difference between lossy and the master recording. you are a strange creature – the internet guy who claims 72dpi = 1000dpi and that our ears suck, based on some listening test you’ve only heard of.
Inevitably you link me to xiph.org. I just don’t get this virus of low quality when it comes to audio. it’s an internet-era invention. making fun of audiophiles has been around for decades, but this willful ignorance of our ears and hearing, hiding behind mp3 convenience and modernity, is really infuriating.
imagine someone told you the best OS ever EVER was written in 1978, and will never be improved, and can’t be, because no one could possibly notice anyway. OS programmers screamed that there could be better, but there’s fellas like you to let them know that they are crazy, low quality is the new norm.
1978 refers to redbook, the standard you based your church on. it’s about 1000k effective bitrate for a stereo file. that was a lot of data to move in the ’80s when CDs came out. now it’s nothing. in the ’90s when the internet took over they couldn’t push 1000k around in real-time, so data compression was invented. research perceptual coding and then google “ghost in the mp3” for more information about how they got it down to 200k and below and sold it to consumers.
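For anyone who wants to check the arithmetic, the raw Red Book rate works out to:

$$
44{,}100\ \tfrac{\text{samples}}{\text{s}} \times 16\ \text{bits} \times 2\ \text{channels} = 1{,}411{,}200\ \tfrac{\text{bits}}{\text{s}} \approx 1411\ \text{kbps}
$$

(the ~1000k figure above is closer to what a FLAC-compressed rip of a CD typically averages – a rule of thumb, not part of the spec).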
stupid youtube clips stream at 4000k these days. we have plenty of bandwidth for lossless 24bit master-quality audio but your ignorance and mockery keeps it from the masses.
i suspect you also don’t mind working for 10% of your normal rate? who would notice?
It’s because ABX tests don’t work for judging musical quality. They are highly flawed but no test formats have been accepted to replace them.
The test gives garbage data. Which is why you can find all kinds of proof based on that test. The test gives bad results so you have to throw it all out.
Go back to people who get paid for this. People who curate catalogs of valuable music. 24bit or bust, baby!
Provide evidence that those tests are highly flawed, or at least explain the reasoning behind that claim, otherwise there’s no reason to take it seriously.
what exactly is being claimed A/B listening tests can’t discriminate between??
(with the big caveat that the source needs to have been well recorded, mastered etc.):
—I say it’s quite “easy” to tell the difference between, say, 192kbps and 320kbps fixed bitrate material (e.g. with LAME encoded MP3s),
and SOMETIMES it’s possible to discriminate between, say, 320kbps MP3s and raw 44.1kHz PCM WAVs (but only on very nice hi-fis).
Then CD quality vs 24-bit/96kHz (or god forbid, 192kHz) – it’s really massively diminishing returns. I wouldn’t claim to have ever heard a noticeable difference I could vouch for (not hand on heart in blind listening, for instance).
but < 320kbps fixed bitrate or < 220kbps VBR stuff – you certainly don’t need “magic ears” to hear the quality loss if you listen carefully. Of COURSE you still need a 20%-plus quality drop to really pick it out, mind.
Yet you take the results of those tests seriously? The burden of proof is on them, it’s their test.
Here’s the flaw —- If the only result of an ABX test is that there is no such thing as high quality, does that make it truth?
If higher and lower quality audio exists – why can’t an ABX prove it? Because the test is garbage in this use.
You want some information on what’s wrong with ABX tests?
http://www.stereophile.com/content/2011-richard-c-heyser-memorial-l…
http://www.positive-feedback.com/Issue56/abx.htm
http://tapeop.com/blog/2015/05/04/problem-bing-and-why-neil-young-r…
http://wfnk.com/blog/2015/06/new-listening-test-a-proposal/
http://wfnk.com/blog/2014/11/how-to-spot-internet-pseudo-science/
Or just think about it, try to do one yourself. You can fool yourself with an ABX test, you can get null results on your own self, which proves the math behind it is flawed.
I have a test for you — stand and lift one leg up. Now lift the other for 3 seconds; I bet you can’t. Your second leg doesn’t work anymore – it failed the test.
I have done a number of ABX tests. In fact, that’s what convinced me to stop filling up my player with FLAC and use MP3 files instead.
What it proved to me was that my biases and assumptions had an impact on what I thought I was hearing. It made me much more sceptical about subjective claims about audio quality that aren’t backed up by testing.
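For what it’s worth, the statistics behind those tests aren’t mysterious. Here’s a minimal sketch (my own illustration, not any standard ABX tool) of how a run of trials gets scored: if you were purely guessing, your correct answers would follow a binomial distribution, so you can compute the odds of doing at least that well by chance.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` answers right out of
    `trials` ABX trials by pure guessing (each trial is a 50/50 choice)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example: 12 right out of 16 trials gives p ~= 0.038 -- under a 5% chance
# of doing that well by luck, so the listener can probably hear a difference.
# 8 out of 16 gives p ~= 0.60: indistinguishable from coin flipping.
print(f"12/16: p = {abx_p_value(12, 16):.3f}")
print(f" 8/16: p = {abx_p_value(8, 16):.3f}")
```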
and your decision, based on ABX tests, led you to purposely remove 90% of the audio data from your files.
that’s 10% of what the artist intended you to get.
that’s what the ABX test has done – it has convinced you that a xerox copy is good enough, even when you have the original. that’s sad.
256k < 1000k < 5000k
mp3 < CD < 24bit audio
these are truths. the master is the best version, any degradation you apply in the name of convenience is just degradation. enjoy your degraded music. know that you’d enjoy it far more if it was the real version.
Want to hear real experts talk about the sound quality of music delivery formats?
http://wfnk.com/blog/2014/11/lecture-quality-sound-matters/
mastering engineers, record company catalog managers, and even a guy from spotify talk about quality.
In case you haven’t argued with ezraz on this topic before, to save you time, here is his defence:
There isn’t a scientifically reliable way of conducting listening tests. So any use of those to determine quality is flawed; therefore just listen to what audiophiles, and the companies that sell to them, say, and take it as gospel. No need for science in audio!
if you’ve read any physics or electrical engineering textbook covering the Nyquist sampling theorem, you will know that high bitrate music is nothing but a pseudo-scientific marketing gimmick.
“Unfortunately, there is no point to distributing music in 24-bit/192kHz format. Its playback fidelity is slightly inferior to 16/44.1 or 16/48, and it takes up 6 times the space.”
https://xiph.org/~xiphmont/demo/neil-young.html
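The theorem itself is one line: a signal containing no frequencies above a bandwidth B can be reconstructed exactly from samples taken at any rate above 2B. Plugging in the CD rate:

$$
f_s > 2B \quad\Longrightarrow\quad B_{\max} = \frac{f_s}{2} = \frac{44.1\ \text{kHz}}{2} = 22.05\ \text{kHz}
$$

which already sits above the roughly 20 kHz upper limit of human hearing.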
unclefester,
Yeah at some point you’ve gotta conclude that storing higher frequencies is futile. Mind you, I personally don’t know where this limit is, and since I’m middle aged I wouldn’t be a good person to judge this anyways.
As far as the argument about poor fidelity at higher frequencies goes, that could be true, especially with the cheap equipment the majority of us are using. But in principle, the ~22 kHz cutoff is a human limitation. For electronic equipment, and even for other species, it’s merely an arbitrary cut-off point.
http://www.lsu.edu/deafness/HearingRange.html
Someone’s got to think about the cats and dogs, and what about those beluga whales damn it. If we up the rate from 192khz to 256khz, then our audio recordings will finally be worthy of all species on earth (assuming we team up and eat all the remaining porpoises, of course).
You have to remember that 56 kbps/44.1 kHz assumes perfect hearing. I’ve seen claims that rates as low as 40 kbps/32 kHz are enough for real world use.
unclefester,
To be fair, we haven’t held any hearing olympics so the absolute best is not readily known, it’s just an assumption.
I tested myself with an app on my phone and can hear up to 15.98 kHz at the volume levels my phone is capable of outputting. It’s a fairly noisy environment though, with no headphones, so maybe I could hear more somewhere quiet. Generally though, I’d agree a 32 kHz sample rate is enough *for me*. Maybe a bit more to give the lowpass filter some headroom to perform a cutoff and prevent aliasing.
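If anyone wants to try the same self-test without trusting an app, here’s a rough stdlib-only sketch that writes test tones you can play back; the frequency steps and file names are just my choices, and of course your speakers or headphones have to actually reproduce the tone for the result to mean anything.

```python
import math
import struct
import wave

def write_tone(path: str, freq_hz: float, seconds: float = 2.0,
               rate: int = 44100, amplitude: float = 0.5) -> None:
    """Write a mono 16-bit PCM sine tone at freq_hz to a WAV file."""
    frames = b"".join(
        struct.pack("<h", int(amplitude * 32767 *
                              math.sin(2 * math.pi * freq_hz * t / rate)))
        for t in range(int(rate * seconds))
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(frames)

# Step through the upper range of human hearing; the highest file
# you can still hear is a rough estimate of your personal cutoff.
for khz in (12, 14, 15, 16, 17, 18):
    write_tone(f"tone_{khz}khz.wav", khz * 1000.0)
```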
I stopped collecting music after MP3s, so I have no reference point with AAC bitrates. It obviously depends on codec properties and the number of channels and whether those channels are compressed jointly, etc. In general I was content with 128kbps, but that’s not to say there wasn’t a clear improvement with 192kbps; I personally just didn’t care that much.
This may not be true of audiophiles, but I suspect the perception for most people would be that tuning the frequency response for a richer mix sounds better than increasing the rates to increase fidelity. In other words, tuning can make it sound even better than the real thing.
If you are past 11, there is a very good chance that you will be incapable of hearing anything beyond 16 kHz; likewise, not a lot of people can hear all the way down near 20 Hz either.
Want a good sound experience? Don’t use cheap headphones, don’t turn the sound up to the max (the amplifier’s distortion curve gets very noticeable close to its limits) and don’t get music ripped by random blokes.
Unfortunately, there have been “incentives” running for some time to insert “enhancements” into the frequency response curve of records, and some speaker vendors do it too. I’m not sure, but it may have started when equipment manufacturers noticed that even people who bought expensive equipment liked to boost the bass, midrange and treble.
Not quite. Working with higher bitrates during processing has benefits. If you’re referring to audio that has already been processed and is ready for consumption, then you’re correct.
Totally agree.
and of course you can’t have an online audio discussion without the ignorance of “yeah, but you can’t hear it anyway” coming out, along with xiph.org.
xiph.org knows nothing about professional audio. they only know about compression codecs. ignore that bad information. that’s all about agenda, and they know nothing about quality.
it doesn’t matter if you “think” you can hear it or not – 256k does not move the speakers, and thus the air, nearly as much as 1400k or 5000k of bandwidth.
music compression formats are a poison. they were necessary long ago – even i used lossy to fit crap on my early devices.
but i started to hate music, or at least find it grating and lacking in punch. then i went back to lossless and WOW there it is again!
honestly — please, drop the ‘lossy is ok’ and ‘get new headphones’.
it’s very backwards and wrong. it goes against the point of making music and recording music in the first place — it’s not about convenience, it’s about the emotion encoded into the music.
do you want 10% from your lover? 10% from your paycheck? 10% from your parents? why would you possibly try to justify 10% of your music, because that’s all you need to hum along.
effective bitrate of a 256k MP3/AAC stereo file = 256k
avg effective bitrate of a CD stereo file = 1000k
avg effective bitrate of a 24/96 stereo file = 3000k
avg effective bitrate of a 24/192 stereo file = 5000k
compare with
avg effective bitrate of 720p youtube stream = 4000k
we have plenty of bandwidth for hi-res. the faux science of xiph.org and perceptual coding has you fooled if you think you don’t need more than 10%.
That sounds more like a psychological issue than anything grounded in reality. The often massive impact of placebo effect is the reason why double blind tests are necessary. Are there any such tests that back up your claims?
The sound quality is exactly the same as my $20 Skullcandy cans. You’re paying $200+ for chunks of metal they hide in the ear pieces to make them feel heavy and “expensive”.
I have a $50 set of Sony studio monitor headphones, still nowhere near audiophile or really even studio quality, and they make Beats cans sound like $3 no-name earbuds. You’re literally paying for the brand and nothing more, like a $300 pair of designer jeans when a $50 pair of Levi’s (also overpriced) looks just as good and lasts longer.
Buy that, gan17. The sharp [shark?] tooth of digital always ends up damaging hearing.
HTC tried this concept with the G1 and it failed miserably. It forever pissed me off that I could not use my favorite headphones with this device without a USB adapter.
Why do hardware guys always feel the need to reinvent the wheel? If it ain’t broke don’t fix it.
Funny that from now on a USB soundcard will be known as the “headphone adapter”.
This is an absolutely terrible idea, and Thom is bang on with respect to phone makers (Apple) using this as a way to lock customers into their accessories, but I thought at least part of the problem with old school headphone jacks is that they don’t fit into the new thinner phones all that well.
Who the fuck cares? The phones are already thin enough; in fact they’re so damned thin they’re uncomfortable to hold for any length of time.
exactly. size is an important factor. by modern mobile standards it is a big port.
it also can’t be extended any further with features. it’s maxed out – what, 4 rings already?
another analog piece of kit bites the dust.
That’s it. I’m taking no more audiophile-related s#!t. Anything I keep from now on, media or hardware, is analog or nothing… Silence is golden, also.
So thin you have to strap auxiliary batteries around them. Thin is overrated… I don’t need to hold a razor blade that can’t make it through the day without an outlet.
Honestly, this is a tinfoil hat argument, especially as the bulk of music sold today is DRM-free and there are plenty of other ways to get digital audio out of your phone if you want to copy it.
Most device vendors have been reducing ports on devices for years, to facilitate smaller/thinner size. The modern USB-C and lightning ports are smaller, more robust, and have ample bandwidth to carry audio in addition to power.
Technology marches on, Thom. The 3.5mm audio port is giving way to something more modern. There is no conspiracy.
DRM will come back in the next compressed file format. Thom is right. also, this new lossy format (based on Meridian’s MQA) will force the sale of new DACs.
USB-C, more robust than a 3.5mm audio jack? Nice joke.
Don’t laugh, your typical 3.5 mm audio jack is only rated for 5,000 mating cycles. Cheaper models are sometimes significantly below that.
USB Type-C connectors are rated for 10,000 cycles as required by the spec.
What’s the tension/torsion rating for USB Type-C vs 3.5 mm headphone jack? That’s the important metric to compare. Especially on the cable end of things.
I’ve had to replace a lot of MicroUSB cables over the past two years because they no longer make a solid connection to the post in the USB port unless you press on them in a certain direction with just enough force to *not* break the centre post.
And, I have a handful of devices where the centre post has been cracked/bent/broken and no longer makes a USB connection of any kind.
All because there’s just enough slip in the socket to make up/down pressure on the connector bend the centre post. With headphones, the jack is the post, so there’s nothing to worry about; there’s nothing to put pressure on!
USB Type-C is better than MicroUSB, but I don’t see how a mm-wide post can be stronger than a solid 3.5 mm plug.
Test setups can be carefully built to withstand specific specs.
Still, that doesn’t alter basic physics realities.
So far, so much for industry specs frozen by hegemonic pressure, instead of iteratively allowed to mature.
That goes for API surfaces, also.
Technology marches on, but in which direction?
The 3.5mm audio port is giving way to something post-modern: change for the sake of change.
Music sales are declining rapidly in favour of subscription-based models like Spotify, which do use DRM.
Also there is precedent. When Blu-Ray players first shipped, they were allowed an “analog hole” for HD output in component video, but today any players that still have component output can be forced by the content producer to output only in SD.
https://en.wikipedia.org/wiki/Image_Constraint_Token
Agreed. We already have USB headphones for PCs and nothing has changed. No one is preventing a manufacturer from using a 3.5mm port either.
But every computer can take a USB headset, although driver support isn’t always created equal (*cough* *cough* Linux). How likely is it, do you think, that I’d be able to connect a USB-C headset to an iPhone, or a Lightning headset to a USB-C-equipped Motorola, and still enjoy 100% of my content? Something tells me it ain’t gonna be happening any time soon.
darknexus,
I agree. The thing is future tech is always at risk of being manipulated by the pro-DRM associations like RIAA/MPAA. In a way we’re extremely fortunate that they did not get their hands on peripheral buses like USB in the early years of development. DRM can’t be added now without breaking compatibility with USB peripherals already on the market, which would make any DRM extremely noticeable and unpopular. So I doubt DRM enabled USB peripherals could achieve critical mass when so many USB devices are already grandfathered in as DRM-free.
However with other tech like HDMI where they got involved from the start, they were successful. If they got another opportunity to add DRM to some new standard or markets from the get-go, it’s hard to think they wouldn’t try to take it.
If you can connect peripherals made for one Android device to Android devices made by other manufacturers, then they are interoperable. Forget about Apple; even in the PC world they tried to be as incompatible with the rest of the world as possible.
This is basically forcing the customer to purchase a quality DAC/amp pair on his or her own.
That part, due to its analogue nature, doesn’t follow Moore’s law and is probably sticking out of the BOM more and more. It’s no accident that Chinese designers championed the idea first.
Manufacturers can now get away with putting the cheapest ICs, even in the flagships (tiny speakers won’t show any difference).
I guess the majority of smartphone users won’t really care about sound quality anyway.
First, I have USB-C on my Nexus, (which I seriously like, especially when I can charge my phone from flat to 95% in 15 minutes). I also have a headphone jack, but it’s never been used.
Bluetooth has been a better option (for me) than a wired headset for years.
… and please don’t break out that audiophile angle. You’re listening to MP3s on a freakin’ phone, not a vinyl disc (or even a good quality CD) on a professional amplifier system, and then you’re ramming it down a 1/4″ speaker.
As for the whole Beats/DRM thing, I’m pretty sure that UEFI Secure Boot meant the end of Linux 10 years ago… If Apple wants to commit market suicide by forcing people to only use Beats with their magic USB DRM, then that’s Apple’s problem, not mine.
Bluetooth makes the headset more expensive; you could of course use a separate receiver which has a 3.5mm audio jack.
Would you buy new headphones when the batteries die, or would you try to replace them? What if spare parts are not available?
To be honest, Bluetooth audio receivers are great (if you disregard the latency). But I do prefer wired headphones, because of the choice I have in size and type of headsets (earbuds or headphones, over the ear or on the ear, open, closed, with noise cancelling or not).
That’s why getting a separate Bluetooth receiver/dongle is the optimal choice.
BT specs or audio codecs are upgraded? Just replace the receiver and keep using the wired headphones that you prefer.
Headphones die? Get another pair and plug them into the receiver.
Want ear buds today, over-the-ear cans tomorrow, and a giant speaker on your backpack the next day? Just plug in the ones you want to use.
Battery dies in your dongle? Unplug the headphones from the dongle, plug them directly into the phone, and charge the dongle.
Battery in the dongle won’t keep a charge? Replace it.
I have a couple of pairs of wired headphones, yet they are never plugged directly into the phone (even though it has a nicer DAC/amp/equaliser for the headphone port); they’re always plugged into a Sony SBH20 BT receiver/dongle. Why? Convenience. It’s a royal pain to deal with cables between the phone and head. This way, the cable gets tied up and goes from the ears to the shoulders where the dongle is clipped. Doesn’t matter where the phone is, I get audio. No cables to get snagged on anything without smacking myself upside the head first.
nothing ‘optimal’ about bluetooth audio. yet another degradation in the name of quality.
lossy (bluetooth is another layer of lossy) is like a plastic violin – why bother?
Sigh, it’s almost like you didn’t bother to read anything I wrote.
I don’t mind a digital audio standard to eventually replace analog connectors. Something like bluetooth but for wired components.
Consumers are very fortunate that analog jacks just work. What I hate is when peripherals are made to be incompatible with one another. I don’t want to have a different pair of headphones for every device, it’s wasteful, it’s complicated, it’s stupid…but I don’t have much faith in manufacturers resisting the temptation to break compatibility just to sell more accessories.
This video sums it up:
Why Every New Macbook Needs A Different Goddamn Charger
https://www.youtube.com/watch?v=jyTA33HQZLA
If Apple limited headphone usage, they would sell fewer phones. It’s not a monopoly, so that will NOT happen. What will happen is they will sell you an additional $29 Lightning-to-3.5mm adapter, and that’s it.
Plus, the DRM era is gone. It’s all about subscriptions.
I think Apple is going to license MQA from Meridian, rename it AQA and make it their new format for their streaming store, while at the same time closing their download store (still stuck on 256k AAC).
This AQA will have DRM and vendor lock in and be backed by what’s left of the modern music industry.
The archive types — all of the 20th century’s music — will hate it and hopefully stick with hi-res PCM. 24/192 PCM FLAC sounds amazing and no new format is needed.
While USB headphones would enable DRM, Bluetooth does as well, and people like Bluetooth headphones and headsets. This is probably a reaction to Bluetooth being popular more than a move to implement DRM on headphones. They haven’t done it with Bluetooth devices, so I don’t see them doing it with USB devices.
Companies will make crappy, incompatible accessories all on their own without any help from Apple or Samsung.
Anyway, I really want this.
I have a USB headset, but I can’t use it with my phone. If you’re thinking “why not just get a regular 3.5mm headset?”: USB headsets show up as a second sound endpoint, and specific applications can use them without everything being piped into them or having access to the mic. This means my softphone will use the headset while other applications send output to the speakers and/or use other mics.
Let’s complain about the VoIP phone manufacturers who haven’t put a USB port or anything else remotely standard on their handsets. Good god that’s a mess.
Flatland Spider,
My house phone has a standard audio headset, but it’s an old-school analog phone. I can only imagine what accessory connections look like these days.
OK, I am really curious: what would be the benefit of going digital if it is already digital until the last minute, when you really need to generate the analog signals our biological sensors were designed to deal with?
The only usefulness I can think of would be if you wanted to meddle with the stream quickly and at hand, like skipping songs or altering the volume level using a control on the cord, but that can already be done with 4-wire headphones (OK, there are incompatible wiring methods right now, thanks again to Apple, but that is easy to fix).
acrobar,
They probably have ways to communicate this info over a stereo plug, but a digital standard (like CEC is to HDMI) is probably going to be better in all respects. Maybe a car stereo that prominently displays a title of what’s playing, etc. It’s small things of course, but they’re nice touches.
When it comes to mobile peripherals, there’s a lot of overlap with bluetooth. When I got my first phone, I strongly wanted a wired headset, but I gave the bluetooth headset a try because the wired headset was proprietary and I wanted to avoid that. I hated the bluetooth headset as much as I thought I would. This is the weird sort of balance I’m forced to play out as a consumer, yea I’m an unusual specimen
OK, fair reply. I was kind of thinking about the headphones or speakers being the last device on the chain, because I already use the USB connection when transmitting sound to intermediate devices, but I can see how some people may be obliged to use the phone jack to connect to old or unhelpful equipment. Also, I did not think too much about cable lengths, but I have seen first hand how they affect the modern multimedia rooms people are building in their houses.
Thom is right. I love typing that!
The 3.5mm jack cannot be feature extended further and is actually very large compared to modern connectors. It’s had a good run of 120+ years.
But that’s for the nerds. The real reason it’s being killed is for DRM. That’s the money talking.
Previous attempts at DRM from ’96-’06 were all outside of the file, outside of the conversion, and put on as an added layer. They relied on OS’s, 3rd parties, and very slow internet to even work, and most didn’t work well. Paying customers hated it so badly it was killed.
MQA changes that. It’s a new encoding method to replace PCM (used in MP3/FLAC/OGG/WAV) and DSD. It’s tricky because it can be stored in existing file containers and should thoroughly and convincingly confuse all but the nerdiest digital audio type. Perfect for selling more lossiness. Already loaded with nonsense terms before Apple buys it.
Why another new encoding method? MQA has lossy compression concepts in the encoding itself along with DRM hooks.
It’s like taking MP3 compression ideas, rewriting them (in perhaps a more refined way), and putting them into the actual encoding itself, not as a 2nd pass after encoding. Pretty cool idea but also pretty unnecessary since we have enough bandwidth now to use lossless FLAC.
The hidden part of MQA is the DRM hooks in the encode itself, to only play a legal version of the song and only play on MQA approved DAC’s. Meridian (creators of MQA) aren’t saying anything here.
Bottom line — spend a few hundred bucks now and get a dedicated DAP. Re-rip all your CD’s lossless and start buying lossless music again. Support the artists you love in tangible ways more than streaming 10% versions of their songs for $0.00000003
Be pono – there’s a Father’s Day sale right now — $300 for a PonoPlayer, a 64GB card, and $25 of music. That’s a great deal.
Cheap and effective way to make your CD content available on various devices: use Exact Audio Copy (for Windows, but apparently works under Wine too) to rip CDs. Set it up to *not* throw away wav files once it’s created compressed files. Doing this, I can play wav files on the computer, and mp3 on the phone.
As for buying download-only, thankfully many of my favourite artists use Bandcamp, CD Baby, or some other site allowing wav and/or flac as well as mp3. As far as I’m concerned, the mp3-only Google Play is the last resort.
I’d recommend EAC too – it does a good job even with scratched CDs.
One thing that’s possible with EAC is to have it simultaneously create both lossy files like MP3s and lossless compressed files like FLAC. No need to keep uncompressed WAV files for playing on the computer – you can save space without sacrificing any quality.
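If you’d rather script that dual-output workflow than click through ripper settings, here’s a rough sketch of the same idea – it assumes ffmpeg is on your PATH, the folder names are just placeholders, and LAME V0 VBR is one reasonable high-bitrate choice among several:

```python
import subprocess
from pathlib import Path

SRC = Path("rips")       # ripped WAV files (placeholder path)
OUT_FLAC = Path("flac")  # lossless archive copies
OUT_MP3 = Path("mp3")    # lossy copies for the phone

for d in (OUT_FLAC, OUT_MP3):
    d.mkdir(exist_ok=True)

for wav in sorted(SRC.glob("*.wav")):
    # Lossless archive copy (FLAC compression level 8, smallest files).
    subprocess.run(["ffmpeg", "-y", "-i", str(wav), "-compression_level", "8",
                    str(OUT_FLAC / (wav.stem + ".flac"))], check=True)
    # High-bitrate lossy copy (LAME V0 VBR) for portable use.
    subprocess.run(["ffmpeg", "-y", "-i", str(wav), "-codec:a", "libmp3lame",
                    "-q:a", "0", str(OUT_MP3 / (wav.stem + ".mp3"))], check=True)
```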
The Linux equivalent would be Morituri
https://github.com/thomasvs/morituri
Classic false simplicity. You make the product photos look great, but you have to carry a bag of adapters just to make the thing work.
A couple of folks are trialing Surface Pros here at work. They never detach the keyboards, and have display adapters for screens, network adapters and hubs attached. They are so much *simpler* than an unsexy business laptop that does it all in one thing.
Chrispynutt,
Personally I’d rather have the ports built in; these dongles are reminiscent of the days of PCMCIA!
I went on a business trip to an area that had no WiFi; we needed to connect physically inside their LAN. My coworker had a MacBook Pro but forgot his Ethernet dongle. Granted, he probably doesn’t need it often, but we did need it that time. Fortunately my laptop had an Ethernet port though.
Oh yeah……I’m sure it’s about “audio quality”. Lol
If it was about audio quality, then the brand name “Beats” wouldn’t be anywhere in the discussion.
“You know it’s going to happen.”
Shoo! Shoo!
Expensively playing Tom&Jerry with every user?
If anyone wished to, taking the analog signal from the actuators would be child’s play.
Are children making those ‘tactical’ decisions?
They are extremely fragile. And I’ve seen lots of otherwise perfectly functional mobiles go to the repair shop or e-waste just because of this [issue].
1. Why there are always people claiming that the removal of X or Y is purely bad is beyond me. Yes, we have seen the end of the floppy disk, the Centronics port, ADB, mechanical keyboards, etc., etc. Guess what: the world has not stopped turning, and the end of the notoriously unreliable floppy disk was a good thing (and in some cases, like mechanical keyboards, there is even a revival because there is a market for higher quality keyboards)!
2. Claiming that change is done purely for commercial reasons (or some other conspiracy theory about DRM or whatever) is silly. If you don’t like a product, don’t buy it. There is no way a single company can force a change. This is a buyers market and there is no monopoly at all. If the headphone jack on mobile phones is gone in 5 years, then it’s because the majority of people favor and buy those products. Deal with it.
3. Some people on this site (including Thom) claim that Apple is the sole entity responsible for the removal of the headphone jack on mobile phones. WTF?! There is not a single iPhone without a headphone jack today and there is none announced. What are you talking about? If you have visions, go see a psychiatrist.
There are leaked pictures of the iPhone 7 without a minijack.
Chinese manufacturers started to remove the minijack from their phones right after the first rumors about this.
“If you don’t like a product, don’t buy it.”
Who is buying from here? So, why the suggestion? Maybe your true suggestion is just ‘suggested’.
“There is no way a single company can force a change.” At least today’s Apple can’t.
“This is a buyers market and there is no monopoly at all.”
That PR really was over the board ;D
Government should be busy at more urgent things, then
“IMAGINE only…” Invitation couldn’t be clearer. You’re also invited to imagine whatever you wish, about this chatting.
Seriously Thom, most of the time you seem like a smart guy, then you come out with something like this.
Why the 3.5mm headphone jack is bad for phones.
A. Mechanical
A1. Size. It forces the phone to be at least 5mm thick.
A2. Mechanical vulnerability. Catch your headphone cable on something: if you’re lucky, it just pulls loose from your handset. Less lucky, the jack is damaged. Having a bad day? The socket is torqued enough to wreck the PCB in your handset.
USB connectors don’t do that.
B. Electrical.
B1.The sound sucks (part 1). The 3.5mm jack was never designed for good fidelity. That was what the big 6.35mm audio jacks were for. 3.5mm jacks were used initially because the sockets are cheap, and later, as electronics shrank, because they are small. At no time have they been chosen because they are best.
B2. The sound sucks (part 2). Transmitting undistorted analog along a cable is hard, and most of the solutions are expensive. Transmitting uncorrupted digital is easy and cheap. A well conceived audio system will therefore transmit digital along any cables and place the DAC right next to the transducer.
Now I’m sure that Apple would love to force customers to buy only approved headphones, and they would retain a decent proportion of their customer base. These are the people who put their thin metal phone in their jeans back pocket, sit on it, complain that it’s now bent, convince themselves that Apple sold them an inferior product, and when the new model comes out they buy it anyway. The rest of us will take, or have already taken, our custom elsewhere.
I’m not a great believer in market forces as a rule, but I’m fairly comfortable predicting that the only people who get hurt by this are those fools who were in any case soon to be parted from their money.
hardcode57,
A. You made several valid points
B. Not a single one of your points refutes Thom’s points
C. It would seem like both you and he can be correct simultaneously.
Yea. The point I meant to make, but didn’t because I was, frankly, very drunk*, and also because I got distracted by the beauty and truth of my description of an apple customer**, was this: I’ve been wanting the 3.5mm jack to go away for a long time, for the reasons I gave. DRM never occurred to me. It probably isn’t the driving force behind these changes.
*As a Dutch guy, Thom shares some collective responsibility for this. It was the Heineken that did it. I don’t feel at all well.
** I should have pointed out that apple customers don’t just ‘buy it anyway’. They will queue all night to be the first to do so. Such people should not be allowed out unsupervised.
Getting to that: haiku descriptions amount to a kind of ‘express’ art. The sober one is better. In defense of real, decent fans, I should say it describes the ‘groupies’ kind.
I can’t tell if this is trolling or not.
A1) There is no practical reason a phone should be thinner than 5mm.
A2) Heh. I’ve never ever heard of a broken minijack.
MicroUSB is a very unreliable connector. I’ve had 2 broken connectors – an Android phone and a PS4 joystick.
B1) I’m happy with my $300 earbuds and the oh-my-god-shameful 3.5mm jack.
So far, sound sucks badly when you use digital Bluetooth headphones.
B2) Fine, so you need to buy separate headphones for each kind of device. Brilliant.
I need both audio output and charging for use in my car.
Do I need to buy a USB-C splitter and a USB-C audio card and connect them all together? No thanks.
…So do I. So do many, perhaps most, cellphone users. Therefore expect good solutions to be made available by those manufacturers that want to remain in business. I’d expect wireless charging car-cradles to become common, but if there is a better solution, that’s what will be offered.
hardcode57,
While we’re being optimistic, let’s standardize on lithium batteries and ink cartridges too!
The simple stereo jack lasted several decades and may have longer to go still. These days, we really tend to burn through “standards” much faster. USB is a logical choice, but even USB connectors themselves keep changing enough to keep things confusing. I already find the plethora of USB cables I have to keep around annoying, and now there’s another one.
For all the benefits of digital that I can appreciate, the one big pro that undeniably goes to analog right now is the ubiquitous nature of the analog jack. Want to use your headset across your laptop, phone, mp3 player? Unless they all use an analog audio jack, you’ll very likely need to carry adapters to do that. While it’s obviously possible to standardize all digital accessories down the line, that’s by no means a given. Manufacturers like to change things intentionally to make them less reusable and to sell more accessories. Whether you or I support this is irrelevant if consumers at large play along, and they often do.
This is exactly why, as I’m sure some have noticed, I’ve been playing both sides of the field in these comments. I like the improvements that digital can offer, but we don’t really know where this ends up. A decade down the line we’ll need to come back and revisit this
Standard-size USB was really good at the plug/socket level [realizing now it used 2x the real estate of 3.5mm].
But even big Intel never got to enforce pin standards; some of them were so badly implemented that it was common for that first [and second] generation to cause severe electrical damage.
This lack of enforcement drove motherboard OEMs to install costly protective circuits. [They still do].
Good standards are the natural product of long industry iterations.
A1: No, it doesn’t. There are plenty of phones out there that are sub-5 mm with headphone jacks. The absolute minimum size will be around 4 mm, I’d guess.
A2: You have a bigger chance of breaking a MicroUSB port than a headphone jack when snagging the cable on something. That centre post in the MicroUSB port isn’t that strong. Which is stronger: a mm-thick post, or a 3.5 mm jack? A USB Type-C port may be stronger than a MicroUSB port, but it won’t be as strong as a headphone jack.
Ripping the socket off the PCB is an issue with both ports. USB ports have an extra centre post to worry about. Multiple points of failure here.
Don’t buy it.
Technically, nothing prevents transmitting digital data, plus enough power for noise-cancelling headphones, over a jack, while maintaining a compatibility mode with analog signals.
And there are smaller jack formats and adapters.
In all this discussion it seems that almost everyone is assuming that these devices will only deliver digital audio.
I haven’t managed to find any evidence either way, but I would be surprised if they didn’t support analogue audio.
Intel have stated their intention to promote audio over USB-C. To this end they’ve been working on a new standard for this which supports both digital and analogue. Their reasoning is that existing headphones can be used with a simple adapter, and in the future a gradual transition may occur to analogue headphones using the new connector while allowing high end headphones to be developed that utilise the digital audio while adding “extra features”.
This could well be the first implementation of Intel’s plan.
As for motivation, I could believe that there may be evil DRM thoughts behind the managerial support, but as a hardware developer I think the overwhelming reason is to simply shift to a single port. Not only do they get rid of the physical port they may well be able to replace two separate sets of electronic components for audio and USB with a single set (at least if this does take off I can imagine chip manufacturers making combined chips).
USB-C can carry the same three analog lines that go to a headphones jack, so the lack of a 3.5mm jack will only mean you will need a passive adapter.
Adapters are a pain in the ass, but these would be _standard_, unlike those for the HTC-only USB-mini analog audio output that some of us suffered a few years ago. Being standard means they will come with new analog headphones, and that you can expect them to be good quality if those headphones are any good.
So, no convenience really, and the thickness these phones shed would’ve been better kept and stuffed with battery; but I don’t think that this time it is a conspiracy to restrict our rights and funnel our money and our data into Their Systems.
Does it make sense to have these analog signals on the microscopic USB contacts, in a part of the device full of noisy circuits?
I think Apple can pull it off and output good analog audio, and cheap phones will have no worse audio than they do now. The rest, I don’t know.
No matter how badly it is done, it will be better than Bluetooth audio, until the day aptX or some other good BT codec becomes common.
If implemented, it would be only for little emergency earbuds.
1 – good speakers are most important – false, unless everything up the chain is high quality already. chances are your existing speakers can drive much more signal than they are being given.
2 – expensive cables are necessary – false, unless you have a very high end system with a perfectly tuned listening room. this is just a way to attack audio people, the expensive cable thing. i use $5 cables from monoprice or whoever.
3 – lossy is not really lossy, because no one can tell – false, everyone can tell, given the proper musical material and listening skills. it has nothing to do with the age or accuracy of your ears. it has nothing to do with the quality of speakers. it has everything to do with the accuracy of the music being rendered before it even gets to the speaker.
the only truth to audio playback is signal chain theory:
nothing can improve what came before it. each step can only degrade (analog) or reproduce exactly (digital).
this shows us that starting the chain with a lossy file is like starting your poster project with a 72dpi image. have fun printing that – maybe no one will notice! maybe xiph.org can prove to us that 72dpi = 1200dpi. that’s basically what they are claiming.
xiph.org claims that your ears – no wait – entire body – can only hear ~256k of bitrate. please. utter nonsense. those people are fools, no matter how long their formulas are. fools.
kill lossy now. it's hurting us. people are going freaking crazy because there's not even good music anymore, and even if you have good music, you play it back on crap 10% versions pretending to be full versions. it's placebo. we expect the full meal but we are given 10%. it makes us crazy.
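for what it's worth, the arithmetic behind that figure is easy to check (the exact ratio depends on the source format, and note this measures data volume, not audible quality directly):

```python
# Raw PCM bitrate vs. a 256 kbps MP3: a sanity check of the "10%" claim.
# Lossy codecs discard data perceptually, so this is about data volume only.

def pcm_kbps(sample_rate_hz: int, bit_depth: int, channels: int) -> float:
    return sample_rate_hz * bit_depth * channels / 1000

cd_audio = pcm_kbps(44_100, 16, 2)   # CD quality: 1411.2 kbps
hi_res   = pcm_kbps(96_000, 24, 2)   # 24bit/96kHz: 4608 kbps

print(f"256k MP3 vs CD PCM:    {256 / cd_audio:.0%}")  # about 18%
print(f"256k MP3 vs 24/96 PCM: {256 / hi_res:.0%}")    # about 6%
```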
I am wondering: are you a troll, sir?
This is strange because 1 is false, 2 is true and 3 is false.
you can disagree, but there's no need to call names.
i've been on OSNews from the beginning. that makes you a troll, not me.
nothing i said was false. the idea that better speakers make things better is bull. biggest lie in audio.
speakers haven’t changed much in 100 years. the real degradation is elsewhere in the chain these days.
About time for audiophile products, NOW.
here’s the easiest audio test —
play the drums (real, not machine). hit them and listen. hit each drum and listen. hit each cymbal and listen. listen to the decay as it fades out. listen to the attack – how fast it rises, what character it has. roll on the snare. roll on the hi-hat as you work the pedal – listen to thousands of variations.
if you can record it, do so. set your DAW to 24bit and put three mics around the drums Botnick-style (over the snare, in front of the kick, and off the side of the toms, in an equilateral triangle).
play it back in that same room at 24bit. play it, then hit the real drums. you will hear a degradation between the live sound and the 24bit recording. even though you are capturing 5000k/sec of signal, microphones aren't as amazing as our ears and the rest of our body feeling that vibration.
then downsample and dither the file to 16/44. play that back and you will probably hear it as smaller, thinner, and slightly less accurate than the 24bit file.
then take the downsampled file and lossy-compress it to a 256k mp3. when you play that, it will practically be a different drum set, in a different room, with all kinds of crispy sounds and artifacts that weren't there originally.
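if you'd rather script that comparison than click through a DAW, here's a minimal sketch using sox and lame (assuming both are installed; drums.24.wav is a placeholder for your 24bit master):

```python
# Produce the two degraded versions of a 24bit master for A/B listening:
# a dithered 16bit/44.1kHz downconversion and a 256 kbps MP3 of it.
# Assumes sox and lame are on PATH; filenames are placeholders.
import subprocess

SRC = "drums.24.wav"  # your 24bit recording

# 24bit -> 16bit/44.1kHz with dither (sox's dither effect is TPDF by default).
subprocess.run(
    ["sox", SRC, "-b", "16", "drums.16-44.wav", "rate", "44100", "dither"],
    check=True,
)

# 16/44.1 -> 256 kbps constant-bitrate MP3.
subprocess.run(
    ["lame", "-b", "256", "drums.16-44.wav", "drums.256k.mp3"],
    check=True,
)
```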
honestly, if you don’t hear anything different after this test you are a mental patient. ears do get damaged and degraded but never in a digital lossy way. we can all detect the lossy. some of us just don’t care and like to call names.
some of us do care. i think it's critical to understand what we as a society continue to do to our own music – the very thing that keeps us sane – and yet we attack it and degrade it for the sake of convenience. it's not convenient to get 10% when you expect 100%.
Thousands of people care enough to have participated in numerous listening tests. Yet the participants in those tests consistently fail to detect the lossy files once they reach a decent bitrate.
How do you explain those results if there’s really such a huge difference between the lossy and lossless files?
Very easy to explain, but hard for some to accept: ABX does not work for music quality. It has more than one fatal flaw, yet its results are still pointed to.
Note that only those who deny high-quality audio exists use ABX tests as proof. That's because an ABX test for music quality only returns NOISE: no real usable data, because the test is a disaster.
People who build sound circuits take hours, if not days, over listening tests. No known ABX test you can point to lets listeners live with the sound and review it in multiple listening environments, blind or not.
Actually, most of the online tests provide the samples for people to analyse at their leisure. They can listen to them and compare them as many times as they like, on the equipment of their choice, before actually completing the test.
Of course people can conduct their own ABX tests using their own music and equipment, things they’ve listened to a thousand times, and still get the same result.
What I find interesting is that you’ve claimed that lossy files are easy to detect – that anyone can do it. You’ve talked about various specific sounds being “murdered by lossy compression”. You’ve made it sound like there’s a night and day difference between lossy and lossless, not a difference so subtle that it requires people to “live with the sound and review it in multiple listening environments” before it’s detectable.
If the differences between high-bitrate lossy and lossless are so small that the conditions present in pretty much every listening test have been enough to erase them, it really can’t be that big of a deal.
the second you decide to care and feel the music, it's relatively easy to spot lossy. sometimes it's immediate, sometimes it takes a few minutes to let the fatigue kick in.
listen, i've been fooled too. a 256k MP3 or AAC, especially from a modern artist, is hard to tell apart from lossless.
but i don’t give a crap about the bad modern artists that are making fake music using fake instruments. i care about the beautiful stuff – the analog stuff – the real stuff – that humans have created over the last 100 years.
side note — part of the reason modern music sounds like shite is this restrictive, damaged, horrible distribution format. why mix and master properly, why use real instruments, when it's going to come out of the paper bag of lossy compression anyway?
“but i don’t give a crap about the bad modern artists that are making fake music using fake instruments. i care about the beautiful stuff – the analog stuff – the real stuff – that humans have created over the last 100 years. ”
For some unknown reason, my brain finds the sound of the interminable copy-and-pastes – and the "plastic surgery" anonymity – of contemporary production insufferable.
When every violin sounds like a Stradivarius, then a Stradivarius is not such a good sound. When every performer sounds like Paganini, then Paganini is not such a good performer.
To the brain, digital is rotten at many edges.
First off, you owe it to yourself to get on a drum or 5 and explore how your body picks up sound. It’s the most relevant thing imaginable for this discussion. Your headphones won’t sound the same after, that’s for sure.
ABX listening tests are highly flawed, and the only people pushing them are those who claim there is no such thing as quality sound.
Show me the ABX test that says a Fender guitar sounds better than a generic guitar; the ABX test that tells me which orchestra is better; the ABX test that tells me which mix of a Beatles song is better.
You can't, because there are none. ABX cannot prove quality; it can only muddy the waters, which makes it very useful for the forces against quality, such as xiph dot org.
So, as best I understand what you just said… there's such a thing as "quality audio" but there's no way to measure it, so we shouldn't try and should instead just take the word of "audiophiles", who usually have an interest in selling their equipment, their music, their recording services, or… all three? Do I get that about right?
no, you aren't right.
you should use your own ears, and stop listening to people on the internet who aren't in pro audio or who have other agendas.
sure that can include me. don’t take my word. follow your own ears. i’m trying to give you factual information to help you become a critical listener.
As Lapointe once said, "Violin: either you play it in tune, or you play it Tzigane. I didn't choose; I play Tzigane."
It's a somewhat derogatory joke, but there is some truth in it too.
Some music will sound best on a Stradivarius.
Other music sounds even better on quirky instruments such as a worn-out fiddle.
This is utterly irrelevant as ABX tests aren’t designed to do any of those things.
ABX tests are used to blindly compare ‘A’ to ‘B’ and determine whether there’s any audible difference between them.
If someone can hear a difference then which they prefer is simply a matter of personal preference. However, if no difference can be identified, and that result is repeated by numerous people, it is evidence that audible differences don’t exist, or at least that they’re so subtle they’re unlikely to be heard.
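The statistics are simple enough to check yourself: if A and B really are indistinguishable, every trial is a coin flip, so the odds of a given score arising from pure guessing can be computed directly (a small sketch, with illustrative numbers):

```python
# Chance of getting k or more trials right out of n in an ABX test by pure
# guessing (each trial is 50/50 when A and B are truly indistinguishable).
from math import comb

def p_value(k: int, n: int) -> float:
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Someone who genuinely hears a difference should beat chance comfortably:
print(f"12 of 16 by luck: p = {p_value(12, 16):.3f}")  # ~0.038 (significant)
print(f" 8 of 16 by luck: p = {p_value(8, 16):.3f}")   # ~0.598 (pure chance)
```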
When ABX tests consistently fail to detect a difference between two audio formats, wild claims about the sound of one being "murdered" and "crap" start to look implausible.