In 2014, many of you – millions, in fact – helped make Chromecast one of the most popular streaming media devices globally. It’s been exciting to bring Chromecast from one country to now 27 countries, with more to come in 2015. Chromecast usage per device has increased by 60% since launch due to the growing roster of new apps and features.
And today, we’re announcing Google Cast for audio, which embeds the same technology behind Chromecast into speakers, sound bars, and A/V receivers. Just like Chromecast, simply tap the cast button in your favorite music or radio app on Android, iOS, or the web, and select a Google Cast Ready speaker to get the party started.
So, at this point I’m the only one who just uses DLNA to play music and video stored on my workstation through my sound system/TV, right?
There is already Bluetooth – a widely used standard that works with your phone, tablet, or PC and lets you stream to portable speakers. So why, WHY, do we need this?
1. Range. Bluetooth theoretically works at distances up to about 40 ft. Practically, once you account for signal interference, weak transmitters with poor antennas, walls, etc., it is more like 15 ft. Wifi is at least double, often quadruple that, and most people already use wifi extenders to get full coverage throughout their home.
Chromecast doesn’t work this way at all – there is no direct device-to-device connection. You don’t send audio over a connection; you tell the Chromecast where to get the audio. If you play a song from Pandora, the Chromecast fetches the song directly from Pandora – not from the device that is controlling it. This is great for many things, not so hot for others. It depends, really. But the point is it works entirely differently.
2. Manual pairing. You have to pair each and every device… Most devices only remember a single pairing, sometimes 2 or 3, but rarely more than that. If you are like most people and have a speaker that only pairs with one device, you have to keep re-pairing it constantly when you switch devices.
Chromecast doesn’t require pairing if the devices are all on the same wifi network. And even if they are not, it supports automatic pairing using ultrasonic sound in a pinch. At worst you have to enter a PIN code, but only on the sending device (you know, the one you actually have in your hand)… Bluetooth requires you to do something on the receiving device too (pain in the ass).
3. Sound quality. It is hit or miss. Some BT speakers support directly streaming native AAC/MP3 or maybe aptX, but the rest of the time the sound has to be converted to SBC, which is pretty damn awful. Even with aptX there is still a lossy re-encode, which does affect sound quality, and some people notice it (I don’t, but I’m old). Anyway, both devices have to understand the same profiles for any of this to work right, or you end up defaulting to SBC (like I said, I’m old, but I can hear how bad SBC is).
Chromecast supports AAC/MP3/Opus/Vorbis/WAV formats. It also does AC-3 pass-through natively, which will let you do things like connect your HTPC or Android TV to a multi-channel audio receiver without any wires.
I do get your overall point, but this is how I see it…
Bluetooth is great when you own a pair of devices you want to work together mostly exclusively, you want to use as little power as possible (BT has extremely low power requirements), and they are in fairly close proximity of each other. For that it is great.
Chromecast (similarly to Airplay) is great when you have a bunch of devices (and often have outside devices coming in and out) and you want them all to be able to work together with as little friction as possible.
And then of course Bluetooth simply doesn’t do video at all currently. There is a VDP profile for it, but no one seems to support it, as far as I can see. With video being arguably more important to most people, Bluetooth loses out, since Chromecast and AirPlay both handle audio and video.
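The architectural difference between the two models can be sketched as a toy program. Everything here is hypothetical – these classes are made-up stand-ins for illustration, not any real Bluetooth or Cast API:

```python
# Toy illustration of the two streaming models discussed above.
# All class and function names are hypothetical, not a real SDK.

class BluetoothSpeaker:
    """Data-plane model: the phone pushes every audio frame itself."""
    def __init__(self):
        self.frames = []

    def receive_frame(self, frame):
        self.frames.append(frame)


class CastReceiver:
    """Control-plane model: the sender only passes a URL; the
    receiver fetches the stream from the source on its own."""
    def __init__(self, fetch):
        self.fetch = fetch          # the receiver's own network access
        self.now_playing = None

    def load(self, url):
        # One small control message; the phone is no longer in the path.
        self.now_playing = self.fetch(url)


# A stand-in for the music service actually hosting the audio.
CATALOG = {"https://example.com/song": ["frame1", "frame2", "frame3"]}

# Bluetooth: the phone relays every frame over its own radio link.
speaker = BluetoothSpeaker()
for frame in CATALOG["https://example.com/song"]:
    speaker.receive_frame(frame)

# Cast: the phone sends one URL and the receiver pulls the stream.
tv = CastReceiver(fetch=lambda url: CATALOG[url])
tv.load("https://example.com/song")

print(speaker.frames == tv.now_playing)  # True: same audio, different path
```

The practical upshot of the second model is the one described above: the controlling device can leave the room, sleep, or even drop off the network once playback has started.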
ps. In regards to DLNA…
https://www.youtube.com/watch?v=syf1qEgPGXg
Point being, with Chromecast, due to its design, you can write applications that essentially act as adapters between pretty much any type of source and the Chromecast receiver. It’s capable of it, anyway – not saying the particular app mentioned in the video is great (never used it myself).
On top of that, it requires apps to support it. AirPlay, at least where audio is involved, can just send any app’s audio. But no, oh no, Cast needs a separate API and apps that support it. Some of the apps I want to use do not support Cast, but those same apps on iOS will send audio via AirPlay transparently. Google needs to get on the ball here, because AirPlay (or an open standard that’d work just as well) is something I miss on Android. Yes, there’s Bluetooth, but that requires your device to stay within range of the speaker at all times, rather than just being in Wifi range.
when you can share what you listen to/watch with 3rd parties? And be blessed with early planned obsolescence. /s
Hopefully they will push the Opus codec, which truly is the best streaming codec around. You don’t want to stream MP3 in 2015 and onward!
Maybe it was just my experience, but not much ever worked with DLNA. Chromecast, on the other hand, seems to be supported by pretty much every embedded HTML video player and every Android media app that I use. DLNA was a good idea, just not widely implemented. IMHO Google did an Apple here and took an existing idea and made it popular.
My TV doesn’t have DLNA, so I added a Chromecast for 35 Euro. It was an impulse buy and I am not very impressed. Streaming from Chrome on Windows 8.1 works fine, but sharing the entire screen (so I can play from VLC) only gives a few frames per second, so it is useless. Playing local files by dragging them into the browser just gives a download.
Chromecast for audio might make sense for the same reasons (no DLNA in your stereo), but I expect that Bluetooth takes care of most people’s needs.
So Chromecast for casting from Chrome: Okay
Otherwise…mostly useless
You are using it wrong. Install the Videostream extension for Chrome and cast local videos to your heart’s content. Also, you can cast directly from many sites like YouTube, Twitch and Netflix. You might also want to check PopcornTime.io
Also on Android there are many apps for video/audio casting (BubbleUPnP, Localcast, Allcast, Shuttle etc) + Youtube, Twitch, Netflix, TED, TuneIn, 1.FM etc.
I am not using it wrong! I am using the official product with the official software on an officially supported platform, and that only offers a beta option that performs poorly.
I tried that plugin (videocast) before and it didn’t work with several of my own videos that VLC plays just fine. Subtitles are also a problem sometimes, and so are the needed permissions.
No, not Videocast, Videostream – https://chrome.google.com/webstore/detail/videostream-for-google-ch/…
It casts everything I’ve thrown at it, it supports SRT subtitles, and if you have a good Wifi network you can even cast 1080p videos. Although you can cast the screen or a tab, Chromecast is supposed to work with apps. If you can’t be bothered to learn how to use a product, just return it and be done with it; just don’t whine on the internet.
A nice resource for information: http://www.reddit.com/r/chromecast/
If you can’t find the answer just ask away – there are plenty of helpful people there, although you can find everything you need to know from Google in a few minutes.
Also check these chrome apps: https://chrome.google.com/webstore/search/chromecast?hl=en&gl=US&_ca…
And if you have an Android device, the experience is superb. Here’s my modus operandi – I track my TV shows with Series Guide, download the episodes with tTorrent using tTorrent’s Series Guide extension, I download the subtitles automatically with SubLoader and then cast the video using BubbleUPnP.
Sorry for the misunderstanding. I did use videostream (not videocast as I wrote earlier) and it did have the problems that I listed before.
Chromecast might support extensions, but it shouldn’t need them for basic functionality. I understand you are very enthusiastic about Chromecast, but I only use it for playing web videos now. For everything else I just use an HDMI cable.
Well, to be fair, I’m only downloading stuff that I know will play properly on my Chromecast – 480p & 720p x264-encoded videos in MP4 containers with SRT subtitles. I’ve tried 1080p a couple of times and it works all right, but I don’t think it’s worth waiting longer for the download.
I’ve also cast a few SD XviD AVIs from Videostream and it worked all right.
Also, I have a friend who had a pretty bad Wifi connection, and his Chromecast was useless for casting local content until he got a better router.
That’s it – when standards already exist but aren’t widespread, all it takes is for an alternative version to be made accessible to the Average Joe, and suddenly it’s everywhere. Look at Viber vs Skype, for example. I have Skype on my phone and it’s great, but nobody uses it any more, because Viber shook off the techy impression of things like Skype and suddenly everyone’s using it, despite being nowhere near as flexible and no more open than Skype.
Viber? What’s that?!? Over here, EVERYBODY, and I do mean EVERYBODY uses Skype. Use anything else and you’ll just be left all alone. You might as well talk to yourself!
Well, my LG web TV has DLNA. It “works” some of the time, but is rather slow. I am tempted to get a Chromecast and see if it works any better.
All this bluetooth speaker streaming and streaming in general is just another march towards totally crap audio that has been going on for over 30 years.
First cut: CD/redbook reducing the sound to 16/44 digital. 24/44 would have been perfect, but it was too much data for the 1980s. The pushback was better DACs/ADCs and better digital mastering, then…
Second cut: CD to low-quality MP3 (under 300k). The pushback was better encoders and a move to selling 256k or Ogg files, then…
Third cut: MP3 files to MP3 streaming & YouTube (low bitrates return). The pushback was artificially boosted headphones and lots of nonsense branding like “lossless”, then…
Fourth cut: going wireless with the speakers. Bluetooth can’t move enough synced data to properly replace a speaker wire, so they reduce the data stream even more.
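The bandwidth gap behind the fourth cut is easy to put numbers on: CD-quality PCM is 44,100 samples/s at 16 bits across 2 channels, while SBC at a common high-quality setting carries roughly 345 kbit/s (the exact SBC figure varies with the bitpool setting, so treat these as ballpark values):

```python
# Quick arithmetic: uncompressed CD-quality PCM vs. Bluetooth codec budgets.
sample_rate = 44_100   # samples per second (CD / redbook)
bit_depth = 16         # bits per sample
channels = 2           # stereo

pcm_bps = sample_rate * bit_depth * channels
print(pcm_bps)         # ~1.4 Mbit/s for CD-quality PCM

sbc_kbps = 345   # common "high quality" SBC setting (ballpark figure)
print(round(pcm_bps / 1000 / sbc_kbps, 1))  # SBC carries ~1/4 of the CD rate
```

So even before any arguments about encoder quality, the SBC link is moving around a quarter of the data a speaker wire would.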
Each of these built on the mistakes of the past and compounded them. If you stream something like Spotify to Bluetooth speakers and you think you are anywhere near hi-fidelity, or even close to what the average consumer heard in the 1970s, you are mistaken. There is lots of trickery and marketing and convenience going on, but very little accurate fidelity.
It’s gotten so bad that the big pushback is occurring, and Pono, Fiio, and Sony are finally selling affordable and capable 24bit DAPs. If we could just get the record labels to re-digitize all their analog masters @ 24bit we might have a chance of saving ourselves from destroying recorded music.
Sure your modern setup sounds OK at first, but music is emotional nourishment and it is starving you. You owe it to yourself to hear 24bit digital for yourself, and you will see that all this streaming and wireless is in some ways worse than a transistor radio from 50 years ago. At least those people knew they had a bandwidth limitation. Internet/corporate pseudoscience has convinced a whole lot of people that nothing has been removed from the music.
If you don’t believe me, or want to get an opinion on 24bit audio from a regular guy perspective (not an audiophile) check out this review:
http://wfnk.com/blog/ponoplayer-review/
Have you listened to a 1980’s quality setup recently? It was terrible. Despite all the audiophile complaints, I really feel like the average listening experience today is much better than it was then.
No offense, but unless you are an acoustic engineer or physicist, I really don’t trust your opinion when it comes to sound. There are too many people that just throw around terms without really understanding them.
Actually, yes, I have several playback systems from several eras around my house and studio. I have speakers from the ’60s, ’70s, ’80s, ’90s, and modern ones; vintage turntables and modern DJ turntables; a few CD players of various vintages; and the usual mainstream phone, laptop, living room, and car stereos. I also have a PonoPlayer and some 24-bit albums, and they sound amazing – the best of the bunch.
Signal chain is the key here — the signal chain is Source, then DAC (if digital), then Amp, then Speakers. Improving the source is the best way to improve the playback and most modern folks get that backwards and spend on everything but quality source files.
I totally agree that the internet is crawling with fake experts (most of them referencing xiph.org), so all I ask is that you use your own ears and decide for yourself. Bad math, incomplete theories, and pictures of waveforms mean nothing if it doesn’t sound good or right.
Hard to say what “average” is but I’d think a 1970’s living room stereo playing vinyl sounds truer to the original than streaming spotify to bluetooth speakers any day. I’d also say that a walkman playing a good cassette will sound more natural than an iPod with Beats headphones. There is a lot of trickery, boosting, and marketing trying to convince you that less is more. Music has a lot more “data” in it than the DSP industry wants you to believe.
I do find it amazing that legendary artist after artist – producers, mixers, people with walls full of Grammys and the respect of their peers – are somehow considered equivalent to a post by random Bob on a website. Everyone has an opinion, but the people that work with music all day (especially if they go back 2+ decades) understand this way better than internet Joe.
Finally – regarding thinking that this is finished science, know that it is not. Far from it. Science is probably about 20% into understanding the ear, being able to sort out vibration sensing throughout the whole body, and our auditory system’s functioning within the brain.
That’s why when you hear it for yourself, you just know, and no amount of math or laboratory results can convince you otherwise. Every month some scientist is announcing new findings about the human senses, and they are always upping our human abilities, not diminishing them.
Also there’s the general viewpoint of science being “done” which is crazy. Here’s a guy smarter than both of us:
“The whole point of science is that most of it is uncertain. That’s why science is exciting–because we don’t know. Science is all about things we don’t understand. The public, of course, imagines science is just a set of facts. But it’s not. Science is a process of exploring, which is always partial. We explore, and we find out things that we understand. We find out things we thought we understood were wrong. That’s how it makes progress.” – Freeman Dyson, 90, Mathematical Physicist
I don’t trust my own ears alone to determine science, anymore than I’d trust my personal reaction to a treatment to determine medicine.
If a post sounds more like a pitch for homeopathy than a dry math paper, its a good clue that its exhortations should be taken with a grain of salt.
Well, there’s a difference between audio and medicine. I would argue that your own ears are the only thing that actually matters in audio reproduction. You build a music system for yourself and if it sounds good to you and you are happy with it, then it’s the perfect setup for you.
It’s quite a shame that the perfect setup for the majority of consumers has little to do with how it sounds to their ears, and more to do with how convenient it is for them, though.
exactly. thanks for the backup. sometimes i feel like i’m the only person typing into a website that trusts their own senses over things they read on the internet. mostly every musician, mostly every great producer, every classical music fan hears it, but since the internet says that “science” says its not true, all those people must be idiots. the arrogance behind ignorance is hard to take sometimes.
here’s your physicist’s take:
“The whole point of science is that most of it is uncertain. That’s why science is exciting–because we don’t know. Science is all about things we don’t understand. The public, of course, imagines science is just a set of facts. But it’s not. Science is a process of exploring, which is always partial. We explore, and we find out things that we understand. We find out things we thought we understood were wrong. That’s how it makes progress.” – Freeman Dyson, 90, Mathematical Physicist
Right, which is what every naturopath, creationist, and climate change denier is trying to tell you.
You can’t just say that science is “in progress” and “might be wrong” and use that to invalidate the results of well accepted studies.
I know Dyson from his work; he’d assuredly be appalled at the misuse of his words.
Sure, keep lumping me in the with crazies if it makes you feel better.
Your “well established science” was done 70 years ago for telegraphs and telephones, by a man who died of old age before the CD even came out. He had nothing at all to do with professional audio recording.
16/44 audio is equivalent to 720p visuals – it looks pretty good but there is more. Especially on large screen or with very detailed material.
Don’t bother going to the symphony either, your CD of the symphony should represent the exact same thing, correct?
And Einstein died long before the flying-clocks experiments took place, but that doesn’t mean he was wrong.
Believe what you want. If 16/44 isn’t good enough for you, it doesn’t matter what some schmo on the Internet tells you.
Nope… All I’m arguing is that 16/44 is CAPABLE of holding a perfect recording of the symphony. Not that the microphones, amps, signal processing, and mixing/mastering in a recording is 100% perfect. Nor am I arguing that my HiFi system is actually capable of reproducing the symphony at full-scale. (It’s not.)
Please don’t use logical fallacies or rhetorical flourishes to try to push your argument.
Even if we ignore the experience of actually being physically present somewhere, your room doesn’t have acoustics even remotely like the symphony hall, you don’t have one speaker for each instrument, so spatial positioning is lost, and there’s always room for quality degradation to be introduced by microphone placement, microphone quality, mixing equipment quality, mixing techniques, and the quality of your speakers, among other things.
Plus, of course, the ability to listen to the same thing multiple times allows you to notice tiny flaws you overlook in a live performance.
Outside of this stupid discussion about recording technology, I’d like to introduce one about the reason for attending live music.
Recorded music is static; it never changes (unless your recorded media deteriorates or something). Live performances allow the musicians to present different interpretations of music influenced by the audience, the concert hall, and their own life experiences. You, the rest of the audience, the place, the musician, and the music all create a unique experience that can’t be replicated by yourself with any kind of static recording. Suggesting that they serve the same purpose is kind of crazy.
i can’t argue with that, i attend as much live music as possible. but the rest of the time i’m listening to recorded music, and i want to hear the closest thing to the original mix as possible.
i don’t like things removed in the name of commerce or convenience or someone thinking that i won’t be able to hear it anyway.
i stopped buying mp3 about 6 years ago because of this frustration, and really won’t buy 16/44 anymore unless it’s the only choice. if there’s a 24bit version available i buy that every time because it sounds better. that simple.
but to contradict myself above, i don’t necessarily get the 192 version, i find 96k plenty enough sampling. to my ears and on my rigs, the real upgrade that i can hear every time is the 24bit.
24/44 sounds pretty amazing to me, much better than 16/44.
But it’s not that simple. I’ve spent plenty of money on “high res” tracks that are awful. Even if you believe that 24/192 is inherently better than 16/44 for music, you still have to deal with the fact that not every recording is the same. Putting a terrible master out as a 24/192 release doesn’t magically make it better. There are CDs that sound better than their hi res releases, purely because the guy doing the mastering on the high res version was a putz.
That’s not to say that there aren’t plenty of high res releases that sound fantastic and outshine the CD releases. That’s why I want the ability to play back the high res stuff. Sometimes, it’s the best master/mix. Always best to be able to play as many formats as possible.
ok we are back in agreement, feels good, we need the hugs app 😉
of course it comes down to the mix and the mastering, i’ve heard cassettes done on a tascam 4-track i like better than high-res mumbo jumbo. but listening to classical, and well-produced rock & soul, 24bit on a ponoplayer is the first digital i’ve heard that surpasses vinyl in pure emotion. there’s no formula for this stuff.
it also comes down to the song, which is why convincing people that their music sounds like shite is tough going. they think i’m picking on their music taste or whatever. maybe sometimes i am, i just can’t believe how quickly people online trust signal processing people and theorems over actual musicians and producers, or their own ears.
i’m not an “audiophile” either, i have all kinds of cheap, mainstream gear. i get stuff from garage sales and ebay. the big draw to these new DAPS is that they combine the most important stuff at the top of the chain – good source, good DAC, good amp in one handy, affordable package.
Thank god we don’t use tape cassettes any more, the ’80s standard. Utter garbage compared to CDs, the standard of the 90s.
It does seem like now we’re taking a step backwards. Most of the streaming audio I’m exposed to sounds like junk when compared to an actual CD. Sure you can make ok-sounding music files using mp3 or other formats, but that’s not what people are doing and it’s not what seems to be mostly streaming online.
Depends on what you mean. If you mean a cheap all-in-one unit, or a boombox, yeah. They were horrible.
If you mean a discrete component system (with discrete amplifiers) feeding into a set of well-balanced speakers that had been tuned for their environment– You’re completely wrong.
Find someone with a quality tube-based quad-amp, a good turntable with the right cartridge and needle, and it will blow your mind (I assume anyone with a setup like that also has the matched speakers).
Today, I admit, it’s more difficult to buy a “bad” speaker– Speaker technology has improved dramatically, and we can get room-filling sound out of seemingly microscopic speakers– although, we now need subwoofers, as the days of the 15″ woofer and three-way speakers appear to be gone for good (I miss the old Mach One speakers from Radio Shack– no, seriously! Best boom for buck at the time). And, to be fair, modern DSP’s do a nice job of sorting all the bits into waveforms that my ears will appreciate.
I enjoy being able to pick audio tracks from my tablet without getting up to hunt down the record, or trying to drop the needle in a dark room, but just like we would have problems building a steam engine today, it’s hard to fully appreciate just how advanced analog audio had become by the early ’80s.
I should also note, I’m not an audiophile – I love music, but my system is thoroughly modern: 7.1 AV receiver, matched front/center/surround speakers, calibrated for the room. My bedroom setup (terrifyingly) is a Raspberry Pi plugged into an old Cambridge SoundWorks sub/amp driving a pair of KLH satellites, which is a COMPLETE hack (but sounds very nice). That doesn’t mean I don’t appreciate the old analog stuff from when “Hi-Fi” meant something.
According to science, 16/44 is perfect because its 22.05 kHz limit is slightly above the hearing range of the best ears.
Experts say you could hear the difference with old MP3 encoders, but these days you can’t hear it anymore at 128k and up, as long as you use a good encoder for the format.
With higher-bitrate videos, AAC is used for the audio, so at 720p and higher you won’t be able to hear it.
Some do. Agreed.
For those guys you can just buy your music lossless. FLAC is a really great choice for that, because it holds exactly the CD format.
I don’t get that weird Pono thing that that famous music artist is pimping. Of course better DACs and better speakers give you better sound than what people are accustomed to. They usually spend at most $100 total on audio.
Well, I can hear the difference in quality from about 160 and lower (how bad depends on the encoder). Right about the 192 mark it becomes transparent to my ears. And yes this is purely personal. It’s what I am able to notice. With most services using 256, I don’t care anymore.
And it’s good for me emotionally too, because if I’m having a rough day I can turn to whatever music might calm me. No worrying about carrying media with me or extra devices. Google Music’s right there. Much more emotionally healthy than being annoyed I didn’t bring the cd, or that it’s scratched. I think most of these “audiophiles” are just people longing for the “good old days” but have forgotten just what went along with those days of old. Scratched cds, eaten cassettes, crackly records and oh so many wires.
sorry friend, either your ears are a mess or you are having other mental issues blocking you from hearing it.
i suggest a regimen of vinyl played quietly in a non-square room with your focus on the music. ears can be trained to hear better detail musically, and restored to a certain extent.
also, learn an instrument, acoustic is best. piano, guitar, cello, a horn, etc. — your ears will be able to start to determine the difference between real acoustics and the distorted paperbox that is mp3 compression.
Your understanding of the science here is incorrect and based on misleading marketing, not to mention they are discovering finer details about our auditory system all the time.
If you can’t hear the difference between 128k MP3 and something better this discussion is over, your listening ability is a disaster. I recommend learning how to listen or going to the doctor, or maybe just checking out music with more detail and real instruments.
AAC has nothing to do with high def audio. It’s just another name for lossy compression.
16/44 is not “lossless”, unless it was recorded and mixed at 16/44. Some music is, but not much. If it’s analog (released before 2000) that 2″ tape holds more data than 16/44 can accurately capture.
Look up dithering to find out how they get 16 million data points into 65 thousand slots in a way the ear can stand. That’s the reduction from 24bit to 16bit. That’s the Y axis in digital audio.
The X axis is sample rate per second, and in many cases 44k is enough. The Beatles catalog at 24/44 sounds amazing and as good as the vinyl. The Cars also released their stuff at 24/44.
Also I’m sure you enjoy watching 512×400 video on your HD TV, because that’s what you are proposing all we need with “says science”. The ears can detect more resolution than eyes, and our auditory system is more intertwined with our emotional centers in the brain than the visual cortex. So we actually need the audible resolution far more than the visual, but we tend to get that backwards.
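The dithering arithmetic mentioned above (16 million levels squeezed into 65 thousand) can be demonstrated in a few lines. This is a minimal sketch of TPDF dither, not production audio code: a signal smaller than half a 16-bit step vanishes under plain rounding, but survives on average when dither is added before quantization.

```python
import random

BITS = 16
STEP = 2.0 / (2 ** BITS)   # one quantization step for the range [-1.0, 1.0)

def quantize(x):
    """Round to the nearest 16-bit level (no dither)."""
    return STEP * round(x / STEP)

def quantize_dithered(x, rng):
    """Add +/- 1 step of triangular (TPDF) dither before rounding."""
    d = (rng.random() - rng.random()) * STEP
    return STEP * round((x + d) / STEP)

v = 0.3 * STEP                 # a signal below half a step
print(quantize(v))             # 0.0 -- plain rounding erases it entirely

rng = random.Random(0)         # fixed seed so the demo is repeatable
n = 20_000
mean = sum(quantize_dithered(v, rng) for _ in range(n)) / n
print(abs(mean - v) < 0.05 * STEP)  # True: on average the signal survives
```

The dithered output is noisier sample by sample, but the low-level signal is preserved instead of being truncated away, which is the whole point of dithering a 24-bit master down to 16 bits.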
I don’t remember all the other sources I have read but this was the last one: http://xiph.org/~xiphmont/demo/neil-young.html
Lossy compression is meant to hide the fact that you are not storing all the data that was in the original. MP3 is pretty old and AAC is based on superior algorithms.
Studios like to record at higher bit depths and sample rates because information is lost during mixing due to floating point math. When you release it to the public you convert it to 16/44, of course, because people are not supposed to mix your songs and it just makes the files larger.
I don’t know how many bits there are in a vinyl or a 2″ tape, though I doubt there are more than a CD contains, but the big problem with those analog sources is noise and wear. So I get that a lot of people like the background sound of vinyl. And maybe because of the background noise your brain makes the sound seem better?
More like the PPI craze in smartphones right now, where you can barely see a difference between 300 PPI and 400 PPI.
I agree that they should put way more effort into sound than they do now, especially in games and movies, though songs could also be much better. And lowering the dynamic range of songs to make them louder is just horrible.
no surprise you cite xiph.org, they are behind most of the misleading information on the web these days. almost every thread i’m in has someone telling me to read xiph.org to be set straight. they are the major problem here, not the solution. no one in professional audio believes xiph.org, that’s the DSP and lossy compression people’s propaganda. monty helped design ogg vorbis lossy format and now works for firefox browser, i think.
AAC – superior algorithms or not, it’s not as good as the original. a digital violin isn’t as good as a real one, and a robot can’t convince you it’s your child. it’s a digital replication and you should really understand the difference between that and the real thing. they are admitting it by calling it “lossy”, i’d accept their description and try understand what you are losing.
analog doesn’t have bits and sample rates. it does have dynamic range and other real-world measurements, but it has none of this digital mumbo jumbo. resolution does not translate directly between analog and digital, which is why it is left to our human senses and not math to detect the difference.
your doubt that 2″ tape holds less “bits” than a CD shows your ignorance in this area. it has no bits, and if you could actually listen to 2″ analog tape then CD you would feel stupid for typing that.
you say “they should put more effort into sound than they do now” but you are telling me that i am imagining anything better than what you have. that’s crazy and counterproductive. your thinking and your linking to xiph.org is the problem here, i’m trying to show you a whole world out there not only beyond lossy, but beyond 16bit audio.
you are using the same arguments that said no one needs 1080p, because no one can see it anyway. but they can, and people love the boob tube. 4K and 5K is around the corner, yet you believe that 20% of the information from a 1970’s vinyl album is good enough for your music? i just don’t understand.
I don’t believe in magic.
i don’t know what you mean by that. i see nothing magical about it. real is real, digital is a re-creation or reconstruction. are the letters you are typing real? can you bite them? nope. bytes not bites, haha
if science/math knew everything about how we hear and how sound worked they could easily produce a voice program that would convince you it’s your relative or friend, like T2. but they can’t because humans are more amazing than computers. if you don’t believe that, sorry for you.
if you don’t care to understand how it works you shouldn’t argue with those that do. i’ve been studying digital audio since the late 80’s and have heard almost every recording medium made in the last 50 years.
plz re-read my original post and think about it for a bit. if you think i believe in magic so be it. there’s nothing magical about recording sound in grooves and playing them back.
Monty went into detail on the fidelity of tape in this video:
https://video.xiph.org/vid2.shtml
“The very best professional open-reel tape used in studios could barely hit… any guesses? …13 bits with advanced noise reduction and that’s why seeing DDD on a compact disc used to be such a big high-end deal.”
…and I missed the bit about not trusting Xiph people.
Well, all I can say is that I trust Monty’s word more than anyone claiming that “tape/vinyl is more accurate” that I’ve run into so far. None of them seem big on proper test methodologies like double-blind testing and they also tend to conflate “sounds better” with “is more accurate”.
(It may very well be that “vinyl sounds better” or “tube amps sound ‘warmer'”, but that just means that they’re inducing some kind of appealing distortion which should be possible to replicate in code more cheaply without risking progressive degradation at each step in the pipeline from the singer’s mouth to your ears.)
Every piece of evidence I’ve seen leads me to believe that what people are hearing isn’t the “better than 16/44”-ness, but some correlated factor like better mastering or playback equipment that could be done just as well for 16/44 recordings. (Or it could just be confirmation bias)
+1 This! Just because you have a 24/192 recording that sounds phenomenal, doesn’t mean that it had to be 24/192 to sound that good. You just got a good recording/mastering job that happened to be high res.
It took me a LONG time to come to this line of thinking. For many years I was in the more=better camp and believed in the stair-step representation of a digital signal and figured more steps would always equal better fidelity. I didn’t understand Nyquist (and still don’t get all of the math, which isn’t my strong point, obviously from my error earlier in the thread.)
Look, either the Nyquist-Shannon sampling theorem is correct, or it isn’t. If it’s correct, 16/44 is enough for perfect recreation of everything your ears can hear, even if it seems counter-intuitive. (Whether or not your playback system is up to the task of recreating it is a different issue.)
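If it helps build intuition, the flip side of the theorem is easy to check in a few lines of numpy (my own sketch, nothing to do with the linked videos): any tone above Nyquist produces exactly the same samples as some tone below it, which is precisely why everything below fs/2 is uniquely determined.

```python
import numpy as np

fs = 44100                 # CD sample rate
n = np.arange(1024)        # sample indices

# A 5 kHz tone, and the same tone shifted up by exactly fs (49.1 kHz).
# After sampling, the two are indistinguishable (aliasing) -- the flip
# side of Nyquist-Shannon: below fs/2, every frequency is unique.
tone = np.sin(2 * np.pi * 5000 * n / fs)
alias = np.sin(2 * np.pi * (5000 + fs) * n / fs)

print(np.max(np.abs(tone - alias)))  # ~0: identical sample sequences
```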
Thanks! Those 2 videos are great.
Yes, thanks! I hadn’t seen those videos before. I happen to have most of that equipment here (aside from a proper spectrum analyzer – just the not-quite-adequate one in my scope) and am thinking of grabbing the code that was posted with the second video and playing a bit this weekend.
The one thing I wanted him to show, but he didn’t, was a bandwidth limited signal in analog. So maybe I’ll try it. It would be great to show the square wave with the Gibbs phenomenon after having been run through an analog filter, just to prove that the not-perfect square wave is indeed from the bandwidth limitation, and not from the ADC->DAC conversions. (I know it isn’t mathematically, but there’s nothing like empirical evidence.)
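Before breaking out the scope, the same thing can be previewed in software. Here's a rough numpy sketch (my own, not Monty's posted code) that builds a square wave from only the harmonics below CD Nyquist; the Gibbs ripple is right there in the math before any ADC or DAC is involved:

```python
import numpy as np

def bandlimited_square(f0, fs, duration, f_limit):
    """Build a square wave from only the odd harmonics below f_limit,
    which is all a band-limited channel can deliver."""
    t = np.arange(0, duration, 1.0 / fs)
    s = np.zeros_like(t)
    k = 1
    while k * f0 < f_limit:
        # Fourier series of a square wave: odd harmonics at 4/(pi*k)
        s += (4 / np.pi) * np.sin(2 * np.pi * k * f0 * t) / k
        k += 2
    return t, s

# 1 kHz square wave limited to 22.05 kHz (CD Nyquist): harmonics 1..21
t, s = bandlimited_square(1000, 44100, 0.01, 22050)
print(f"peak: {s.max():.3f}")  # overshoots 1.0 -- the Gibbs ripple
```

Running the same harmonic sum through an analog filter, as proposed above, should show the identical ripple.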
I completely agree with that. I think we differ on whether or not Red Book audio is sufficient and I don’t for a minute believe that there is a recording of ANYTHING that needs more than 16 bits of dynamic range, but for the most part we’ve been moving more toward convenience and away from fidelity for the better part of 4 decades. Although, I think it probably started with the 8-track or cassettes.
That being said, I would be very interested in one of these devices if it wasn’t married to a speaker and amp. I already have very good playback hardware; it just doesn’t do the “cloud” thing. If I could get a dongle that had an S/PDIF output that I could plug into my DAC, I’d buy it in a heartbeat. (I’d probably even settle for analog line-out.) I’m not always critically listening, and sometimes I just want NPR or talk radio, or sometimes I’m working on something else and just want some noise in the house, so I could just bring up Pandora or something like it so I don’t have to bother with turning the LP over to keep the music going.
For critical listening, I still have my Vinyl and FLAC collections, and still want to have the proper playback equipment for it. I just don’t want to have to have a separate system for the non-critical duties. So, please Google, give me a Google Cast for audio device that I can hook up to my existing playback hardware!
cool, listen to a ponoplayer if you get a chance. sony is making one too, and fiio has a few out, but i’m reading different things about their DAC’s. my issue with 16bit isn’t total dynamic range, it’s detail and accuracy. it’s depth, width, timbre, delays, and room sound.
only in consumer audio do smart people think that 65,000 is more than 16,000,000, or 44,000 is more than 96,000. only in consumer audio do they believe ancient & out of context science telling them what they can and can’t hear. only in consumer audio do they ignore the real experts – musicians, mixers, producers – for internet joe’s.
i have my ponoplayer on now and most of these people have no idea what real audio sounds like. they can type fast and cite links and bad science, but they have no idea what their ears are telling them. it could be due to any number of reasons, bad taste in music, bad taste in producers, just not knowing any better being raised on mp3 or cd, but people who know music and make music hear it immediately. it’s called accuracy.
sad thing is almost everyone that argues with me sits in front of an “HD” TV and upgrades their “HD” cameras as soon as possible, but the actual thing that really needs HD – our music – our medicine – and they not only doubt it, they claim that the case is closed on this debate.
16/44 was a compromise in 1978 because those chips could not handle 24bit data, and it’s a pointless standard now.
I don’t think bit depth means what you think it does. 🙂 Bit depth is ONLY about dynamic range.
16-bits equates to about 96dB of dynamic range. If we assume a background noise level of 40dB (which is a really quiet listening room), you have to have the volume turned up to the point where the loud parts are at 136dB in order to have the quietest parts in the recording audible. 136dB is absolutely in “pain” territory.
24-bits gets you 144dB of dynamic range, but since you can’t make use of the 96dB you get from 16-bits, it’s largely wasted.
Also, an LP has a theoretical dynamic range of less than 80dB, so for dynamic range a CD already beats an LP. Not to mention that even the best classical recordings only have about 20dB of dynamic range, so there’s no program material that can make use of more than 16-bits.
Oh, and most pop music has a dynamic range of less than 6dB…
So, where do we stop? Nyquist-Shannon tells us that 48Khz is enough samples to recreate all of the frequencies that the human ear can hear. If we increase sample rate, we only increase storage consumed. The added high frequencies are ultrasonic, and quite probably beyond your tweeters’ ability to reproduce. (And might even hamper your tweeters’ ability to reproduce the frequencies that you CAN hear.)
I have 2 DACs, both can handle 24 bit 192Khz audio. I even have some material, like the HDTracks 96Khz Rumours release, that sounds better than its CD release. Is that because the CD was incapable of recording what’s on the HDTracks release? Absolutely not. It just means that someone took the time to do it right when they mastered for that release. The release is only 96Khz so they can charge $20 for it. 🙂
I have the high sample rate DACs purely so I can listen to the good releases that only come out in high res. I don’t for a moment believe that the high res is required for good sound.
Look, I’m an audiophile. I care about my audio. I care about fidelity, and I wish more people did. But let’s not throw science out the window here.
(Sorry everyone for the off-topic discussion about HiFi on the Google Cast thread.)
Gotcha, but I’m pretty sure I understand what i’m talking about here and I don’t know that i’m explaining it correctly, or my thoughts don’t line up with popular beliefs.
I’m talking detail and accuracy, not headroom or range or extremes. Walk with me…. 😉
A) – Instrument is played in a room and you are in that room. Your ears and body sense it and process it. That is full accuracy and relies fully on vibration through air to your body. No recording tech.
B) – Instrument is played in a room and you are not in the room, but a microphone is. The vibration is recorded to either an analog medium or it is digitized and stored as data. To hear it you need to play the recording back.
I’m sure you are still with me. This is where I’m keeping the focus. Accuracy. How close can we get to making it feel like you are in that room instead of the mic? That’s all I really care about. So perhaps the scope of our discussion is where we are losing each other.
My argument is that 16bit word length is not enough space to accurately store the recording. It is “close enough” perhaps (I own some really good stuff @ 16/44), but it is not the full amount of data that our ears/joints/skin/hair would pick up.
The number of bits per sample in turn depends on the number of quantization levels used during analog-to-digital conversion. The more quantization levels there are, the finer the quantization step, and hence the better the information is preserved in digitized form. But more levels also require more bits per sample, so it is a trade-off between the number of bits and the fidelity of the representation.
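That quantization-step trade-off is easy to see numerically. Here is a hypothetical numpy sketch of a plain uniform quantizer (illustrative only; real converters also dither and noise-shape):

```python
import numpy as np

def quantize(x, bits):
    """Uniform quantizer for signals in [-1, 1): round each sample
    to the nearest of 2**bits evenly spaced levels."""
    step = 2.0 / (2 ** bits)
    return np.round(x / step) * step

# One cycle of a sine wave, then quantize at several word lengths.
x = np.sin(2 * np.pi * np.linspace(0, 1, 1000, endpoint=False))
for bits in (8, 16, 24):
    err = np.max(np.abs(x - quantize(x, bits)))
    print(f"{bits:2d}-bit: worst-case error {err:.2e} "
          f"(step/2 = {1.0 / 2 ** bits:.2e})")
```

Each extra bit halves the worst-case quantization error, which is exactly the finer-step behavior described above.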
Per second @ 16bit/44k, the ADC can use about 65,000 x 44,100 data points = a grid of 2,866,500,000 possible readings.
That’s a big number but I don’t believe it’s enough to accurately convey what our ears and bodies detect naturally.
Per second @ 24bit/44k, the ADC can use about 16,000,000 x 44,100 data points = a grid of 705,600,000,000 possible readings.
This is the total resolution available. These readings have to carry every bit of audio data, not just frequency or volume. They carry timbre, depth, pan, room, player, note, style, attack, decay, reverb, stereo delays, every bit of audio personality made in that second.
So 24/44 sounds noticeably better (more accurate) to me than 16/44, assuming it was mixed and digitized properly.
That’s my entire point, everything else is secondary to me.
This is not the right forum for this debate. There have to be 100 forums devoted entirely to this digital/analog discussion. I guess it just comes down to whether or not you believe Nyquist-Shannon. But:
Ah, and this leads me to the psychology of being an audiophile. I sort of touched on it earlier in this thread. When you get right down to it, does it really matter?
If 24/44 sounds better to you, who cares if it is measurably different from 16/44? Even if the difference is entirely in your head (and I’m not necessarily saying that it is) it still sounds better to you, and you listen to music because of what it sounds like to you.
I don’t believe in cryogenically frozen unobtanium cables; wire is wire… But lots of people do, and if they want to buy $5000 speaker cables and $1000 power cables and it makes them believe their HiFi has more fidelity thanks to confirmation bias, how is that any different than a speaker upgrade? It sounds better to them either way.
(Yes, audiophile POWER cables are a thing. – For those of you who are reading this and aren’t into high-end audio.)
Even if it’s an illusion, audio is all about illusion. You play a CD to create the illusion of performers in your room. How is that illusion any different? If 24-bit source material seems to make the system sound better to you, it made the system sound better to you whether it sounds any different to anybody else or not, and that is what matters.
But what I think we both came here to say is that people should actually pay attention to how a music reproduction system sounds instead of just looking for bells and whistles.
First, I have to ask… did you really just do linear addition of two logarithmic numbers? According to the first dB calculator I could find:
https://www.noisemeters.com/apps/db-calculator.asp
40 dB + 96 dB = 96 dB.
Because really, 40 dB is pretty insignificant (hence “background” noise), although personally, I consider that to be pretty high for a “quiet room”. My bedroom, with ceiling fan running, is ~30 dB. In order to get to 40 dB, I have to be sitting next to an Intel i5 mid-tower with 6 fans running in it.
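For what it's worth, what those calculators do is a power sum, and it's easy to check in a few lines (my own sketch, not the calculator's actual code):

```python
import math

def combine_db(*levels):
    # Incoherent (power) sum of sound levels in dB: convert each level
    # to linear power, add, and convert back. This is what online
    # "dB calculators" compute for independent noise sources.
    return 10 * math.log10(sum(10 ** (lvl / 10) for lvl in levels))

print(f"{combine_db(40, 96):.4f} dB")  # the 96 dB source dominates
print(f"{combine_db(60, 60):.4f} dB")  # two equal sources: about +3 dB
```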
Secondly, ignoring a basic misunderstanding of decibel values, 16 bits does not equal 96 dB of dynamic range. The dynamic range is a function of the recording and/or playback system. 16 bits means there are 65,536 discrete steps available. Period. For any given frequency (up to the 22 KHz Nyquist limit), I can express 65,536 values. Turn your monitor to 16 bit color (65k colors). Notice a difference?
24 bits gives you 16 million discrete steps. Can I distinguish 16 million different sound levels? Probably not. Can I audibly notice a “dithered” sound (two sequential sounds that shouldn’t be the same, but are, because of 24 -> 16 bit compression)? Possibly. Can I tell a distorted instrument sound in an MP3, even at 192k vbr? Definitely.
Depending on the instrument, and the psychoacoustic algorithm used to convert it, I may hear that distortion up to 320k as well, although with a full-on VBR encoding that’s smart enough not to compress sounds that won’t un-compress well, I’m unlikely to notice. Personally, I like FLAC.
These are excellent examples of bad math and poorer science that people use to “prove” things like “You can’t breathe if you’re driving faster than 35 mph”, or that the stars revolved around the earth or that I can’t hear the difference in high def music.
Yes I did. And I should know better.
I guess the Internet has taught me some bad things. LOL. There are hundreds of web pages that make the same calculations, such as this one (not a definitive source by any means, just the first hit from Google):
http://sound.westhost.com/dynamic-range.htm
So, I’ll write that off as a “whoops.” but it still doesn’t change the fact that you get 96dB of dynamic range from 16-bit audio, and there aren’t any recordings with that much dynamic range.
Well, you called me out on decibel math, so I’m calling you out on this one. Yes, there are 65,536 possible “steps” but they have a relationship between one another: 2 is twice as loud as 1, 4 is twice as loud as 2, etc. With 16 bits, we get 16 doublings of the output level. 16 x 6dB (SPL) is 96dB (and I’m fairly confident in that math.) 16-bits doesn’t mean there are 65,536 steps between arbitrary volume levels, it means there are 96dB of dynamic range between quietest and loudest.
You set the “start” position with the volume knob. But you will get 96dB of range between the quietest signal the recording can capture and the loudest.
24 bits gets you 24 doublings of the output. 24 x 6dB is 144dB. It just means that the loudest possible recording is that much louder than the quietest.
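The 6 dB per doubling is itself a rounding of 20·log10(2) ≈ 6.02 dB, so the exact figures come out slightly higher. A quick sketch of my own to verify:

```python
import math

def dynamic_range_db(bits):
    # Theoretical dynamic range of n-bit linear PCM: 20*log10(2**n),
    # i.e. about 6.02 dB per bit (ignoring dither, which buys a little more).
    return 20 * math.log10(2 ** bits)

for bits in (16, 24):
    print(f"{bits}-bit: {dynamic_range_db(bits):.2f} dB")
```

This prints roughly 96.33 dB for 16-bit and 144.49 dB for 24-bit, matching the rounded numbers above.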
So do I. No arguments here.
Wait, I can breathe if I drive over 35? Seriously? Man this is going to save me SO much time.
Now that I think about it… I might not actually be wrong with my poor addition. I’m pretty sure what your calculator is doing is saying that if I have a 40dB noise in a room and a 96dB noise, the total output of both of them will be ~96dB, but that’s not what we are doing here. The volume knob is not the same thing…
But, I could still be wrong on this one. Good thing it’s not critical to my argument.
(I say this as I’m about to put a 24/96 recording on the headphones.)
Wow, now I’m replying to myself, but I’m positive that my math was correct, and I wouldn’t want the Internet to think I was wrong.
Let’s use audio terminology. We can all agree that the loudest possible sound on a CD is -0dBfs, right? If we take the 96dB number for 16-bit dynamic range at face value, then the quietest we can represent is -96dBfs, right? If I set my volume so the quietest passage is at 40dB, then -96dBfs is 40dB. What’s -0dBfs? 40 + 96 = 136.
Or to put it the other way, if the volume is set so full scale is 136dB, the minimum representable signal is -96dBfs: 136 – 96 = 40dB.
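Put as code, the whole dBFS argument is just an offset (helper name is mine, purely hypothetical):

```python
def spl_from_dbfs(dbfs, full_scale_spl):
    # dBFS is measured relative to full scale (0 dBFS = loudest possible
    # sample), so the volume knob just picks where full scale lands in
    # SPL and every other level follows by simple offset.
    return full_scale_spl + dbfs

# Full scale set to 136 dB SPL: the quietest 16-bit level (-96 dBFS)
# comes out at 40 dB SPL, exactly the scenario described above.
print(spl_from_dbfs(0, 136), spl_from_dbfs(-96, 136))  # 136 40
```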
You have now entered the realm where I, as a not-sound-engineer, can no longer argue coherently (A new meaning to “out of my depth”), but after reading up on 16 bit PCM, I’m pretty sure that’s incorrect.
The bit depth is absolutely resolution, as I said earlier– and the fewer steps you have, the more chances for distortion (or mis-representation) of the original signal there is– therefore, 24 bit will almost certainly produce a cleaner, “more true”, signal than 16 bit. Read through on audio bit depth (and again, I’m trusting the wiki, as it seems to correspond to what I remember from a long time ago) for more detail.
It also controls the signal-to-noise ratio, which isn’t quite dynamic range (and this is where my technical knowledge breaks down).
Regardless, 40 dB, for example, is a fairly small number compared with 96 dB– Using the amplitude scale on the wiki page for decibel, 40 dB is “100”, and 96 dB is somewhere between 31,000 and 100,000 (but closer to 100k). Adding 100 to either of those numbers makes no significant difference.
But that’s the thing. We’re not “adding” the two numbers. We’re merely setting the scale of the dynamic range of the recording.
Let’s say we have 10 dB of dynamic range in the recording. The volume knob changes the output SPL of the playback equipment. It could be 40-50dB or it could be 100-110dB depending on where we set the knob, but it’s always 10dB. Make sense?
This is the problem with decibels. They’re not absolute values– they’re relative values, and they have no default units. Decibels can be sound pressure, voltage, optical output power, light sensitivity– and in each application, they have a different meaning.
Yes, 96 dB is 65 thousand times “more” than 1 dB. But that’s kind of meaningless unless you know what 1 dB is equal to. In terms of PCM, “1” is “noise level” (hence, signal to noise). You’ve been equating the dynamic range of the signal, to the sound pressure produced by the speaker system, and there’s very little guarantee of a direct correlation.
Yes, decibels are a ratio, and you have to have a known starting place for them to make sense. That’s why we use dBfs in digital audio. That’s decibels in relation to full scale. -0dBfs is full scale. There is no louder sound on a digital recording. If we have the volume pot set so -0dBfs is 100dB (SPL), we can then say that -6dBfs is 6dB less than our 100dB (SPL): 94dB (SPL) or half as loud. If full scale equates to 110dB (SPL) -6dBfs is half as loud, so it’s 104dB (SPL).
This all actually works out precisely because decibels are a relative unit. It doesn’t matter what full scale is (as far as SPL) in the system. -6dBfs is half of that power, period.
If the source signal doesn’t equate to SPL in the listening room, what exactly does the audio reproduction system do? I’m pretty sure the entire point of a HiFi system is to turn a signal into SPL.
Thanks, but I think I’ll stick with Monty from Xiph’s “16/44 is all we need” post as has already been linked to by a previous commenter.
I trust his insistence on double-blind testing more.
I use DLNA too! BubbleUPNP works fine as a controller and host. XBMC/Kodi works fine as a receiver and host. Unfortunately consumer electronics products are obtuse and apparently often don’t support it properly.
DLNA, not so much– I spent years trying to get it, and UPNP, and all my devices to play nicely with each other, so that I could access my media library from any room of the house.
I gave up. mediatomb is nice, but didn’t quite get there.
$50 worth of Raspberry Pi, openElec/XBMC (Kodi), and a 6 TB NFS share, with Android phone/tablet remote control (or a real remote control on my media center), allows me to listen to anything I want throughout the house.
Good old AC3 audio – embedded in most of the video downloads I come across on the Net – and FLAC audio aren’t supported by Chromecast, making that device 90% less useful than it could be. I just hope that Google Cast for audio isn’t going to be similarly crippled in the audio formats it supports.
No, transcoding permanently or on-the-fly (e.g. Plex) is *not* an acceptable solution to missing audio codec support.