I don’t think I’ve ever seen this before, but please correct me if I’m wrong. Samsung and Google were supposed to unveil the Samsung Nexus Prime with Android Ice Cream Sandwich next week, but in a surprise announcement, the companies said that the press event is cancelled – out of respect for Steve Jobs. In the meantime, leaked specifications reveal that the Nexus Prime could be a real doozy.
The press event was supposed to take place Tuesday next week, but in a joint statement to the press, Samsung and Google announced the event would be postponed to a later, as yet unknown date. The announcement was not posted online – it was sent to members of the press (most likely those that were invited to the event).
“Samsung and Google decide to postpone the new product announcement at CTIA Fall,” the statement reads, “We agree that it is just not the right time to announce a new product. New date and venue will be shortly announced.”
While many already suspected this had to do with Jobs’ passing, it was the follow-up response to AllThingsD and CNET which confirmed it. “We believe this is not the right time to announce a new product as the world expresses tribute to Steve Jobs’s passing,” Samsung said, “We do not have information on the rescheduling of the event.”
Of course, the fact that Jobs’ death would most likely overshadow the Nexus Prime/Ice Cream Sandwich unveiling has absolutely nothing to do with this at all. Hey, want to buy a unicorn?
So, why do we pay attention to a new, unreleased Android phone at all? Well, where the iPhone 4S was received as a letdown, it looks like the Nexus Prime will drop some serious jaws. The display is probably the foremost reason why: a 4.65″ Super AMOLED with a resolution of 1280×720 – that’s an HD screen in your pocket. The display is curved as well; whether just the glass is curved, or the actual display, remains to be seen. For the rest, we’re looking at a dual-core 1.5GHz processor, probably supported by the PowerVR SGX543MP2 graphics chip; the same chip found in the iPad 2 and iPhone 4S.
Other than the display, all these specifications are rumours, and no matter how staunch, rumours are just rumours. Still, the way Samsung and Google are hyping this thing, I wouldn’t be surprised if most of it is actually true. We’ll have to wait and see a little longer than Tuesday to find out what’s really inside that beautiful curved shell.
Somewhat ironic, given the relationship between Apple and Samsung recently.
Not really, good PR is good PR.
Business is business, life is life. The relationship between Samsung and Apple resembles a marriage about to break up.
Not really, the Google and Apple execs knew each other long before iOS and Android came to life.
I believe this gesture from Google/Samsung can make Apple look bad. They could make it even worse by holding a minute of silence at the Nexus event and then, in memory of Jobs, donating $10,000,000 to cancer research.
This, as Bruce Lee would put it, is “the art of fighting without fighting”. Or maybe more correctly “the art of doing evil without being evil”
OK… so this will wait for fall.
So probably the most anticipated phone to be released this month (26 & 27 Oct at Nokia World) is the Nokia Sun.
Some specs on the Nokia Sun, via TechRadar:
http://www.techradar.com/news/phone-and-communications/mobile-phone…
Wonder what the specs on the other three codenames are.
Did the ENTIRE world really have that much of a hard-on for Steve Jobs? In what world is it insulting to hold a conference about some pretty sweet upcoming tech because some guy, whose company just released a shitty phone, died? OMG HE’D TURN OVER IN HIS GRAVE!! Gimme a break. Whatever, as long as this doesn’t delay the launch of the new NEXUS, then who cares, right?
The cynic in me wants to say the delay is more about having clear tech headlines. I hope it’s more genuine than that, though.
Jobs was a major influence in the industry for many years. Larry Page was barely a tot when Jobs was at his peak the first time round.
The tops of these companies all know one another personally as well as professionally.
You do realize that some Google execs will attend Jobs’ funeral? I mean, they can’t be at the event and the funeral at the same time.
His funeral is scheduled for Friday. The unveiling was supposed to be Tuesday.
“Some guy”? “Shitty phone”? Either live on this planet or don’t.
As much as I don’t like the iPhone for its closed nature, I do respect Steve Jobs, and I admire all his ideas that made Apple a successful company again.
That’s the reason why I modded you up
Don’t believe the bullshit press statement. This was held up by some kind of last-minute technical snag.
So why not release a new product? It wouldn’t be classy to reveal a device that beats the iPhone 4S in every respect just a few days after Steve’s death.
Good move IMO.
Haha, it’s BGR… are you really surprised? The same website that reported earlier this week that Sprint would have an exclusivity deal for the upcoming iPhone. Does anybody really take these guys seriously anymore?
Anyway, I’ve been REALLY tempted to start up a website called ‘techrumorbullshit.com’, which would highlight, on a ‘wall of shame’, tech blogs that keep posting BS rumors just to increase their page hits.
As for postponing the event, this could be a bad idea for those who were on the fence about the next iPhone and waiting for this announcement; it’s great to cancel out of respect for your rival’s CEO who just passed away, but not when that rival is about to release a phone in a week or two that will compete with the flagship you were about to announce.
I don’t know that setting up a website which helps increase those sites’ visitor hits and page ranking through inbound links really punishes a website for posting content designed to increase page hit counts, but all the power to you.
Press release:
“Out of respect for Steve Jobs’ passing…”
Interpretation:
“We don’t really want our product launch’s PR campaign drowned out by another event. That, together with the iPhone 4S not being seen as disruptive, is why we decided to delay this product launch until fall.”
Businesses at that level don’t operate on personal feelings.
Of course it has to do with personal feelings. Not Google’s and Samsung’s, but yours, the consumer’s. Apple has just released a phone to a giant collective MEH. The Nexus Prime is almost certainly going to be a more exciting piece of hardware, and the software will most likely bring along some newsworthy features. Taking a jab at Apple by releasing it a week after the new iPhone might be cool, but with Jobs’ passing, it’s just not the right time. Bad taste. Bad PR.
There is no ‘meh’ from consumers, apparently, as iPhone 4S preorders are beating all previous phones. Remember, the consensus amongst tech commentators was that the original iPad was going to be a flop. The iPhone 4S will replace the iPhone 4 as the world’s top-selling handset.
About that collective “meh”: http://allthingsd.com/20111007/att-says-seen-200000-pre-orders-for-…
Most successful iPhone launch ever.
Watching Apple’s event, the main difference between Jobs and Cook as presenter is that Jobs showed his enthusiasm. He dropped his trademark “awesome” and “insanely great” while he talked about a product. The so-called “reality distortion field”, it turns out, was Steve Jobs telling clueless tech journalists how impressed they should be, because Lord knows they can’t figure it out on their own.
Double the performance with better battery life, “4G” throughput without LTE hardware, massively upgraded photocell with optics to do it justice, and a “beta” voice assistant that blows anything short of IBM’s Watson out of the water… but they didn’t bump the number, and the average tech journalist is an idiot, so “meh.”
It must have been hard on those presenters at the iPhone event knowing that Steve Jobs was literally at death’s door, they did very well under the circumstances. Did you notice the ‘reserved’ empty chair where Jobs would have normally sat and the way the camera lingered over it a couple times?
Yes, tech journalists are generally idiots. This makes sense, since they work for large companies, and need to appease not only their parent company, but also their advertisers, and Apple itself, of course (to ensure an invitation to the next event).
All those large tech sites? Engadget? Ars Technica?
When it comes to Apple reviews, they are – by definition – not to be trusted. Sites like that MUST be nice and kind about Apple, because otherwise Apple cuts them off. They are the IGN and GameSpot of the tech world [when it comes to reviews – otherwise they’re relatively decent, esp. Ars].
If you want real reviews, go to the likes of AnandTech and Tom’s Hardware. Those are the real tech journalists. Major exception to the rule is Siracusa, of course. He’s a legend.
—
As for myself – my response to the iPhone 4S is a big resounding meh exactly because I’m a nerd. I never said it wouldn’t sell or wouldn’t be a success – because at this point, Apple could slap a logo on a turd, call it the iPhone X, and sell 34 million of them by quarter’s end.
And we see evidence of this where?
Benchmark sites. Appealingly objective in their informational content, and informative provided your question is specific enough.
This is exactly what I’m talking about. You recuse yourself from human taste and understanding by calling yourself a nerd, tacitly claiming that your expectations are special without giving any hint as to what they might actually have been, and go on to insult the general public with a flippant remark that they’re so uninformed and tasteless they’d buy Apple-branded feces, ignoring the fact that Apple’s products have the highest customer satisfaction ratings while selling in the tens of millions, something no amount of loyalty or marketing could achieve. Whom could a journalist who believes such thinking is insightful ever hope to inform?
The iPhone is disappointing because it is, but people will buy anything because they will, and that’s journalism. Who’s the one trying to sell shit?
Do you have any evidence to support the notion that if attendees at an Apple event write a critical review of a product, they will be disinvited from the next event? Any evidence at all?
What strikes me about tech reviewers and Apple products is how often they underestimate the potential of Apple products by a quite staggering degree. The way the iPad was greeted is a very good example of that. The way the iPhone 4S was greeted is another example in the same league, I think. Leaving aside the fact that by this time next month the iPhone 4S will be the world’s top-selling handset, the way that the tech press, by and large, has utterly failed to understand the significance or potential of Siri is astonishing. What was demoed seems to be a functional AI system, in a phone, that actually works in the real world and does useful stuff. And it’s not even at version 1 yet. The potential of this is enormous.
You say ‘because at this point, Apple could slap a logo on a turd’, but the point is not that Apple could sell a turd; the point is that the buying public knows from long experience that Apple won’t sell them a turd. What was the last Apple product of any significance that could be described as a turd? Premium brands are partially made by advertising and market positioning, but only partially. If a premium brand is actually crap, the brand’s reputation will sooner or later (usually sooner) collapse. And the fact is that what Apple sells is a premium-brand product experience at non-premium prices. The buying public knows this, but tech commentators often miss the point, because they obsess about specs and stuff that often doesn’t matter that much, while the stuff that does matter a lot is often invisible to them. Hence Apple’s, from their point of view, baffling success.
Raise your hand if you have extensive experience dealing with Apple’s PR dept.
*raises hand*
Can’t say more.
I think you’re confusing things. Very few tech reviewers underestimate Apple. In fact, they overestimate them. The comparisons between Jobs and people like Edison and Einstein are proof of that. The superlatives about how Apple invents this and Apple invents that, even though Apple hasn’t actually invented anything new in close to 20 years – Apple doesn’t invent, Apple perfects and polishes, and they do it very well. However, that’s not inventing. They take existing functionality, and polish it. Heck, even Siri wasn’t developed in-house. Apple bought it.
Only a few so-called “analysts” underestimate Apple because it brings them pageviews. Those are the people you are referring to, and those people are idiots.
I am a nerd, and hence, that’s how I look at Apple’s products. Just look at my reaction when the iPad was unveiled – I wrote something along the lines that I personally didn’t see the point, but that Apple would most likely sell boatloads of them. I never conflate my nerdiness with Apple’s ability to sell stuff.
BAM. This is exactly what I mean when I say tech reviewers and the press vastly overestimate Apple. What I, as a nerd, saw was this: a voice recognition system with a few more commands in its database than other systems – which might well become a horrible problem (many times, simplicity > complexity).
The voice recognition system in, say, Windows Phone 7 works very well exactly because it’s simple – “open Facebook”, “text Renate”, “check mail”. Everybody can come up with those commands, since they make sense. Siri’s system works exactly the same; it just has a few more commands to do the same thing. This could actually become very, very confusing.
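To make that concrete, here’s a rough sketch of the kind of fixed phrase-to-action table such a simple system boils down to (Python, with made-up app and contact names – not any vendor’s actual API):

```python
# Minimal sketch of a fixed-grammar voice command dispatcher.
# The verbs and targets are hypothetical examples, not a real phone API.

COMMANDS = {
    "open":  lambda arg: print(f"Launching {arg}..."),
    "text":  lambda arg: print(f"New message to {arg}..."),
    "check": lambda arg: print(f"Checking {arg}..."),
}

def handle(utterance: str) -> None:
    # Match the first word against a known verb; everything after it is the argument.
    verb, _, arg = utterance.strip().lower().partition(" ")
    action = COMMANDS.get(verb)
    if action and arg:
        action(arg)
    else:
        print("Sorry, I didn't understand that.")  # anything off-grammar simply fails

handle("open Facebook")              # Launching facebook...
handle("text Renate")                # New message to renate...
handle("move it to the next day")    # not in the grammar, so it fails
```

The whole point of such a system is that the grammar is small enough to memorise; anything outside it falls flat.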
In other words, unlike you or just about everyone in the technology press, I don’t blindly believe a controlled Apple demo and proclaim hallelujah – I want to test it out myself, see how well it works, and compare it to simpler (and therefore possibly easier to use) systems. I don’t just take Apple’s word for it like you and the tech press do, easily impressed by a shiny object on a screen.
In other words, those “analysts” who write click-whoring pieces about supposed Apple failures are idiots, most of the technology press are gullible idiots who will believe whatever Apple shoves down their eyeballs, and I am a sceptic who refuses to believe anything until I’ve seen/used it myself.
The iPad was a good example – I didn’t see the point when it was launched, but did say they’d most likely sell lots of them. When I got one myself, I finally got it, and admitted it easily in the review.
A reverse example: most of the tech press just parrots Apple’s nonsense about “post-PC” and iOS being post-PC – even though iOS is just a desktop interface with enlarged buttons crammed onto a mobile screen (and not a very good one at that).
I guess that means you don’t have any evidence.
It’s painful to watch people who claim to be interested in technology not get Siri. To use the term ‘voice recognition system’ in relation to Siri just underlines how deeply you have missed the point. Voice recognition is not trivial, but it’s not new. Siri is not a voice recognition system; it’s an AI system.
Eventually you will catch on, and then, when it changes everything, you can start writing articles about how Apple didn’t invent it.
I’m sorry, more than my word I cannot offer. It’s pretty much a given though – you can’t criticise Apple and still get early access to their stuff for reviews, or press invites. That’s how Apple keeps a tight grip on the press, and ensures all early reviews are positive. I have dealt with Apple about this a lot, but of course, it’s all confidential. I can understand you won’t believe me; that’s fine.
As for Siri – you just proved my point. You haven’t used it, have no idea how it works, yet you automatically believe it’s perfect and will change the world. You’re a believer in the church of Apple.
I’m not. I’m a sceptic, with everything (except Fiona Apple). I want to actually use it first in a real-world environment. Then I’ll judge.
And you’re right, Apple did not invent Siri. They bought it.
So you are backtracking. You said people were threatened with being cut off; now you say Apple gives product previews to preferred reviewers – duh, who doesn’t?
If you are confused by Siri you might want to have a look at this – the pedigree is extremely impressive.
http://9to5mac.com/2011/10/03/co-founder-of-siri-assistant-is-a-wor…
Forget your prejudices and embrace the new, wherever it comes from. Phobias are so limiting.
Do you actually believe the crap you come out with? Because those last two sentences were the most narrow-minded and up-your-own-arse sentences I’ve read in a /long/ time.
Believe what you want in private, but don’t start preaching to us that blind faith is better than practical experiment, like you are with Thom and his “use before reviewing” mentality.
“Siri is not a voice recognition system; it’s an AI system.”
(Disclaimer: Although I believe I have the required knowledge of physics, signal theory, and programming, I have never worked directly on a voice recognition system. So anyone who has, please correct me if you detect some bullshit in the upcoming post.)
So you believe that it is possible to make a decent voice recognition system without AI? I don’t think so, and am going to explain why.
What is voice recognition? Basically, speech-to-text translation. The basic theory is that you take an audio file or stream of someone saying something, you isolate words and detect punctuation based on the pauses and intonations of the speech, then you take each word separately and try to slice it into phonemes, which are pretty close to syllables but not quite the same thing. From the phonemes, you can get the written word. (to be continued, stupid 1000 char phone browser limit)
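To picture the pipeline I’m describing, here is a toy sketch of the idea (Python; the phoneme data is invented, and a real system works on noisy audio features rather than neat strings):

```python
# Toy illustration of the pipeline: audio -> word segments -> phonemes -> text.
# The hard steps (segmentation and phoneme detection) are assumed to be done,
# which is exactly the part a real recogniser has to solve.

PHONEME_DICT = {                      # hypothetical phoneme-sequence -> word lookup
    ("HH", "EH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"):   "world",
}

def phonemes_to_word(phonemes):
    # Map a phoneme sequence to its written form.
    return PHONEME_DICT.get(tuple(phonemes), "<unknown>")

def transcribe(segmented_phonemes):
    # One already-segmented word per inner list.
    return " ".join(phonemes_to_word(seg) for seg in segmented_phonemes)

print(transcribe([["HH", "EH", "L", "OW"], ["W", "ER", "L", "D"]]))   # hello world
```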
Honestly – the lengths some people will go to, people who claim to be genuinely interested in technology, to argue absurdities just so they can belittle something Apple is doing. Do you really believe any of that tosh you just wrote?
Clearly, speech recognition software recognises words. It may have attached to it a programme that can recognise set phrases and connect those set phrases to an object. That is impressive, but you know as well as I do that such software is very limited and that it is a very stupid system.
What Siri does is listen to what you are saying and then infer from the context of the conversation what phrases might mean. It seems to do this an order of magnitude better than anything else out there, let alone anything on a phone. So if you are having a conversation with Siri about two appointments clashing, you seem to be able to say something like ‘move it to the next day’ and (like a human could) Siri will know what ‘it’ is, what the next day is, and what moving ‘it’ means, all from the context of the conversation you are having with it. If it works as claimed, and those commentators with hands-on experience say it does indeed seem to work as claimed, then Siri is very, very impressive and might well represent a true step forward in the way humans interact with technology.
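Nobody outside Apple knows how Siri does this internally, but the kind of contextual resolution being described can at least be pictured with a crude sketch: track the thing currently under discussion and resolve “it” against that (a toy illustration only, not a claim about Siri’s internals):

```python
# Crude sketch of resolving "it" against dialogue state.
from datetime import date, timedelta

class Dialogue:
    def __init__(self):
        self.focus = None                          # the appointment being discussed

    def mention_appointment(self, name, when):
        self.focus = {"name": name, "when": when}
        print(f"'{name}' clashes with another appointment on {when}.")

    def handle(self, utterance):
        if "move it to the next day" in utterance.lower() and self.focus:
            self.focus["when"] += timedelta(days=1)    # 'it' resolves to the focus
            print(f"Moved '{self.focus['name']}' to {self.focus['when']}.")
        else:
            print("I don't know what 'it' refers to.")

d = Dialogue()
d.mention_appointment("Dentist", date(2011, 10, 11))
d.handle("Move it to the next day")                # Moved 'Dentist' to 2011-10-12.
```

The open question is how far beyond hand-coded cases like this the real system can generalise.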
So, as I said, if people who claim to be interested in technology want to argue that it is trivial just because it is attached to Apple, well, more fool them. The only way to lose a limiting phobia is to stop being afraid of the phobic object.
Nothing in what you describe even *remotely* resembles an AI – it’s just a speech recognition system. Deriving things from the context of data on your phone (“hey this new appointment you’re making is clashing with this one”) is not AI.
This is my worry about the system. So far, it just looks like a speech recognition system with more commands and the ability to parse some contextual data – which has so many possible error vectors it’s crazy. Language parsing is VERY difficult even without contextual parsing – let alone with.
You’re making it seem as if you can just say whatever you want, with Siri figuring it all out. This is highly misleading; just as with any other speech recognition system, you’ll have to learn and find out which commands it supports, and which it doesn’t. Programming in some default en-US sentences is all fine and dandy, but what about all the various dialects? Heck, even my friends from Amsterdam (60km from here) have issues understanding me when I go full-on local dialect on their ass.
So far, it seems Siri will suffer from all the usual pitfalls every other speech recognition system suffers from, and not even ten bucketloads of contextual data can change that.
That voice recognition needs some learning AI algorithms to work well? I hope I gave enough examples of AI use cases in voice recognition throughout the continuation of the post you’re quoting to prove this.
You’re missing a great deal of complexity in the “recognises words” part, as I mentioned above…
Anyway, I want to be sure that you understand that making a computer understand textual commands is a separate problem from voice recognition, or “listening” as you call it. Good speech recognition does not have to understand what you’re saying, only to find out how it is written. Conversely, understanding written sentences does not require you to recognize spoken language; as an example, modern search engines do some amount of natural language processing without asking you to talk to them first.
Recognizing simple, well-defined sentences is only one example of what can be done with text translated from spoken language. It happens to be simple and reliable, which is why many devices do it. But natural language processing algorithms can go beyond that. As an example, they can work with synonyms and different noun declensions, correct your spelling and grammar, or locate the keywords in a sentence and ignore the rest.
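A trivial sketch of that kind of text-side processing (keyword extraction with a stop-word list and a hand-made synonym table; all the word lists are invented):

```python
# Tiny sketch of post-recognition text processing: drop filler words,
# normalise a couple of synonyms, keep the keywords. Word lists are invented.

STOP_WORDS = {"please", "could", "you", "the", "a", "me", "for", "give"}
SYNONYMS   = {"forecast": "weather"}

def keywords(text: str) -> list[str]:
    words = [SYNONYMS.get(w, w) for w in text.lower().split()]
    return [w for w in words if w not in STOP_WORDS]

print(keywords("Could you give me the forecast for tomorrow please"))
# -> ['weather', 'tomorrow']
```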
Listening to what you are saying IS voice recognition, so what would be new there is contextual commands. But is it really new? All of the phones I have ever owned have this “back” button. Its behavior changes depending on what I’m currently doing. As an example, if I press “back” while I’m writing a text message, my phone will ask me to confirm, because it is likely that I did it by mistake. This is a typical example of a contextual command which everyone knows and loves, available on every phone sold today.
I fail to see what’s so outstanding. When I press the red button of my phone to close a running program, the message I send to the OS is no more detailed than “close that”. The OS has to find out which software is currently shown on screen before it may close it.
OSs already have to deal with context-based orders in their current incarnation, you just don’t see it because they are sufficiently well designed not to be mistaken.
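Put differently, the “same input, different meaning depending on state” behaviour in those two examples is nothing more exotic than this kind of dispatch (a generic sketch, not any particular phone OS):

```python
# Sketch of a context-sensitive command: the same "back" event does different
# things depending on what the user is currently doing.

class Phone:
    def __init__(self):
        self.current_screen = "home"
        self.draft_message = ""

    def press_back(self):
        if self.current_screen == "compose" and self.draft_message:
            print("Discard this message? [yes/no]")     # probably a mistake: confirm
        elif self.current_screen != "home":
            self.current_screen = "home"
            print("Returning to the home screen.")
        else:
            print("Already on the home screen; doing nothing.")

p = Phone()
p.current_screen = "compose"
p.draft_message = "Hi R"
p.press_back()    # asks for confirmation instead of silently discarding the draft
```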
I agree that it may be a step forward in the integration of various existing concepts (voice recognition, textual order processing, context-sensitive commands, web searches). I don’t agree that it is a big advance in any of these domains. The technology was already there. What Apple has done is put it together in a package that may or may not be more pleasant to use than other solutions. Real-world testing will tell.
The main thing I’m afraid of when it comes to Apple products is the reaction of other members of my beloved species.
Take touchscreens, as an example. Many people love them, probably because they’re nice-looking, feel simple, and allow for more screen real estate per cubic centimeter of phone. Thus, the market for keyboard-based phones is declining. Now, when you look beyond the shiny coating, touchscreens have many drawbacks, and for many common phone use cases their usability is perfectly awful. I understand that they have advantages, but good physical keys are the thing that works best for me. Thus, I wish I still had the choice to buy a good touchscreen-less phone. I don’t. I hope voice command won’t set a new fashion in a similar way, and reduce my phone usage efficiency like touchscreens have.
Good grief, man, how can one person write so much and say so little? Speech recognition is complex. AI is more complex. Practical and working AI is more complex again.
Siri is not speech recognition (though it does use speech recognition as an input tool); Siri is the most advanced AI on any consumer device. And it’s still only a beta. The world tilted, things shifted, and you didn’t notice. Phobic thinking is very limiting.
Again, what do you call AI? I had some issues defining it myself, so I asked Wikipedia for a definition. What I found is that it calls AI any computer program which uses the mechanisms of human intelligence (such as reasoning, learning…) to maximize a performance criterion, which is even broader than the broadest definition I used earlier.
You say that a speech recognition system is not AI. It is, because it uses learning algorithms on the inside to maximize its performance in the long run. It needs to, because it has to adapt itself to a very wide range of situations, far beyond what its developers can think of. In my opinion, good speech recognition also qualifies as “practical and working AI”, because in its rawest form it already provides a useful service: a means of dictation and command input for situations where using a keyboard and buttons is not a practical option.
What I find interesting is your claim that Siri is the “most advanced AI available on any consumer device”. There comes a new statement, with actual information inside it. And what I say is: put some meat on it. What does “advanced” mean in objective terms? Is it a measurement of internal complexity? Amount of features? Or, perhaps what matters more, performance at a given task?
If it is about performance, which criteria do you use to evaluate the performance of every single AI in this world? And if it’s not every AI, which group of AIs is it?
I may sound pedantic, but precision is important when you make big claims. Saying that you have a cure for cancer is one thing. But when you think about it a bit, we already have tons of those. They just work badly. A revolution would be a cure for cancer that has a high success rate, is fast, and has few side effects. You say you have a revolution; say why.
I guess it will be just as pivotal and game-changing as Facetime was.
Oh wait.
Now, what are the problems which explain why computers took so much time to get this speech-to-text translation relatively right?
First, there is the [word sliced into phonemes] -> [written word] translation. It is not as simple as it looks, because many European languages have this “feature” that there are several ways to write a given phoneme. If you go to Asia, things are even worse: words are not commonly spelled using syllables, but using more complex characters which are often also words in their own right.
For all these reasons, voice recognition systems need an internal dictionary to associate a bunch of phonemes with a written word.
As a starting point, someone who wants to create such a dictionary can use a regular dictionary, take the phonetic expression of each written word, and create a phonetic-to-written dictionary from that. But if you stop at this stage, you’ll miss all the everyday familiar vocabulary that is not officially recognized by national dictionaries, such as slang and filler words. These words, along with other things which are not found in dictionaries (such as the names of numbers, letters, and mathematical symbols), must be added manually.
Manually adding words that are not in the dictionary takes a lot of time and effort, and developers cannot think of everything, so some words will always end up missing – especially taking into account that our vocabulary is in constant evolution. For this reason, good voice recognition systems must be able to learn new words. Which is a first form of AI.
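A minimal sketch of that “learn new words” idea (the phoneme strings are invented, and a real system would also have to learn the pronunciation from audio rather than be handed it):

```python
# Sketch of a phonetic dictionary that can be extended at runtime when the
# recogniser meets a word it doesn't know yet.

class PhoneticDictionary:
    def __init__(self):
        # Seeded from a base dictionary; phoneme strings here are invented.
        self.entries = {("L", "OW", "L"): "lol"}

    def lookup(self, phonemes):
        return self.entries.get(tuple(phonemes))

    def learn(self, phonemes, spelling):
        # Called when the user types out the spelling of an unrecognised word.
        self.entries[tuple(phonemes)] = spelling

d = PhoneticDictionary()
print(d.lookup(("S", "EH", "L", "F", "IY")))     # None: not in the dictionary yet
d.learn(("S", "EH", "L", "F", "IY"), "selfie")   # the user teaches it a new word
print(d.lookup(("S", "EH", "L", "F", "IY")))     # 'selfie'
```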
Second, there are homophones: two different words which are pronounced the same way. These are very frequent in basic French vocabulary; I don’t know what the situation is in English.
How does a voice recognition system discriminate between them? It can use two tools: the frequency at which a word is used (when in doubt, the most frequently used word is the safest bet), and structural analysis of the sentence to check which word it is most likely to be.
As an example of the second form of discrimination, in French we have “a”, which is the present form of the “avoir” (to have) verb, and “à”, which is used to introduce location complements in a sentence. Both are extremely common. To discriminate between both, the voice recognition system could check the sentence for the presence of a verb. If there is none, then we are most likely talking about “a”.
I hope it is obvious that both word frequency analysis and sentence analysis are operations that are best adapted to each individual user, who has a different way of speaking. So we need learning, so we need AI here too.
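Those two tools can be sketched in a few lines – toy frequencies and a deliberately naive “is there another verb?” check (illustration only):

```python
# Toy homophone disambiguation for French "a" (verb 'avoir') vs "à" (preposition).

FREQUENCY = {"a": 0.52, "à": 0.48}          # invented corpus frequencies
VERBS = {"habite", "mange", "va", "est"}    # tiny, hopelessly incomplete verb list

def disambiguate(words, position):
    others = words[:position] + words[position + 1:]
    if any(w in VERBS for w in others):
        return "à"     # the sentence already has a verb, so this is the preposition
    if others:
        return "a"     # no other verb found: probably the verb 'avoir' itself
    return max(FREQUENCY, key=FREQUENCY.get)     # no context at all: frequency wins

print(disambiguate(["il", "?", "un", "chat"], 1))        # 'a'  ("il a un chat")
print(disambiguate(["il", "habite", "?", "paris"], 2))   # 'à'  ("il habite à Paris")
```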
So far, I have assumed that there is a unique way to pronounce a given word across all countries which speak a given language. This is, of course, perfectly false. Regional differences are strong, to the point where even humans sometimes have a hard time understanding each other.
As an example, having mostly learned British and American English, I have a hard time communicating with Indian people. I know the words, just not the pronunciation. In French, some people pronounce “é”s the way I pronounce “è”s, some people pronounce letters in words which I don’t pronounce, and vice versa. Words have different meanings and are used in different contexts. In fact, even the way punctuation is introduced into a spoken sentence can subtly vary.
A voice recognition system must adapt itself to this. Since we generally only specify which language we’re speaking, and not the regional variant, it has to guess which regional variant we are using, and remember that. If it doesn’t know about our regional variant, it must also get used to it. Again, our voice recognition system learns, so that’s AI.
What about individual differences in pronunciation? Even in a given region, people talk in different ways, depending on the lives they have lived. Some people speak slowly, others go very fast. Some people use a very formal vocabulary, while others are very informal in their everyday speech. The voice recognition system must adapt itself to these different behaviors if it wants to have optimal performance.
Then, even for a single individual, pronunciation varies depending on the circumstances. You speak differently when you’re tired, when you’re running, when you’re in a meeting, when you’re troubled, when you’re in shock… Again, a voice recognition system must adapt itself to that. Thus, more AI.
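One crude way to picture that adaptation: keep per-user counts of which written word an ambiguous phoneme sequence usually turns out to be, and let those counts override the generic model over time (structure and data entirely invented):

```python
# Sketch of per-user adaptation: the recogniser remembers, for this user,
# which word a given (ambiguous) phoneme sequence usually turns out to be.
from collections import defaultdict

GENERIC_GUESS = {("T", "OH", "M"): "Tom"}      # what the stock model would output

class UserModel:
    def __init__(self):
        # phoneme sequence -> {written word -> how often this user confirmed it}
        self.counts = defaultdict(lambda: defaultdict(int))

    def confirm(self, phonemes, word):
        # Called whenever the user accepts or corrects a transcription.
        self.counts[tuple(phonemes)][word] += 1

    def recognise(self, phonemes):
        seen = self.counts.get(tuple(phonemes))
        if seen:
            return max(seen, key=seen.get)     # this user's usual choice wins
        return GENERIC_GUESS.get(tuple(phonemes), "<unknown>")

u = UserModel()
print(u.recognise(("T", "OH", "M")))   # 'Tom'  (generic model)
u.confirm(("T", "OH", "M"), "Thom")    # the user keeps correcting it...
u.confirm(("T", "OH", "M"), "Thom")
print(u.recognise(("T", "OH", "M")))   # 'Thom' (adapted to this particular user)
```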
I could go on and on about detecting phonemes in noisy environments, people who “eat” phonemes when they speak too quickly, neologisms, context sensitivity and the languages that are heavily based on it such as Japanese, and so on, but I hope that at this stage you see my point.
Many people, among whom I believe you are included, think that voice recognition is simple. This feeling comes from the fact that we do it every day, in a relatively painless fashion, only infrequently asking people to repeat what they just said. The truth is, it is not simple, and there is a reason why children take so much time to acquire a rich vocabulary.
Voice recognition is a fantastically complex problem, whose complexity probably borders on that of translating one language into another. It is not only a problem of processing power, but also of gathering the required knowledge in a way that is accessible to a computer program. AI gathers knowledge from where it is most useful – the user – and makes use of it to improve recognition quality, so it is obviously a vital part and has been there for ages. In academia, I am ready to bet that voice recognition is mostly studied in AI labs, in the same kind of teams that work on automated translation.
Saying that “Siri is different from voice recognition because it is an AI” is thus deeply, totally wrong. Voice recognition IS AI. Slapping stuff on after it that processes the extracted text, like a WolframAlpha backend that can find answers to an oral question, is certainly a nice touch, and could qualify as an interesting integration effort, but it is by no means the revolution you want it to be.
What can I say, guys – watching otherwise intelligent people twisting themselves into absurd knots just so they can claim that something important that Apple has done is trivial is embarrassing and unnecessary. You just have to let go of a phobia and your freedom of action and thought is immediately increased. Just try it.
As I said before, if, as is likely, Siri turns out to have been a major inflection point in tech development, it is sadly predictable that you guys will be the first trumpeting about how Apple didn’t invent it. It’s all so tiresome.
What part of “they bought it” don’t you understand?
Good to start early
So Siri, which only appears on Apple devices, is trivial and if not then it has nothing to do with Apple. Keep twisting those knots.
Define what you call AI in this post. I can think of two definitions right now. One has official status in the field, but is very broad (a computer program with an ability to learn), and every phone with an annoying T9 feature qualifies. The other is Hollywood’s view of things, which you may argue best fits the average guy’s expectations: to look, behave, etc. like a human being. I believe current phones, and computers as a whole, remain stupidly far from this: can iOS 5 understand metaphors and sarcasm now? Write music? Do poetry? Drive a car? Code its next release? Or even, to pick something which some modern computers can actually do, win a TV game show without an internet connection?
Not to, like, rub it in, Thom, but you gleefully put down other journalists and trumpet your platform of choice using totally random metrics and standards, and you also tend to flip-flop as and when it suits you.
I am not sure that really makes you a better tech journalist.
]{
Platform of choice? And what would that be? Looking around my living room, I see 4 different platforms I use every day (Linux, Windows, iOS, and WP7; my first Android device will be delivered tomorrow)….
I am just saying you certainly have a platform ‘bias’, and you rail about how everyone who has a different ‘bias’ is an idiot.
You made a big deal of the fact that the Galaxy S II was better/newer than the iPhone 4S because of the CPU and the screen, yet AnandTech is showing that the iPhone 4S beats the pants off a 1.5 GHz Galaxy S II and will almost certainly do the same with a Nexus Prime (since it’s basically the same CPU).
You complain about how the screen is too small, whereas you used to rail about how the screen was too big for your thumbs.
To some, you sound like an ‘idiot’ sometimes. Recognize that and be mature enough not to vilify everyone who disagrees with you.
I’m not denying it will be the next #1 selling phone on the market, it just doesn’t bring anything interesting to the table. The last one at least had an exceptional screen and promised a great camera (which it failed to deliver on, and will fail to deliver on yet again), this one has nothing. Or a “voice assistant that blows anything short of IBM’s Watson out of the water” if you will (RDF much?). So meh. It’s just an upgrade on last year’s fashion accessory.
I can accept that they’ve delayed the launch a couple of weeks, as long as they don’t delay the actual release, which I believe is supposed to start rolling out in November.
My beloved N1 is starting to feel a little dated, and my wife just picked up an Infuse 4G that is giving me a serious case of screenis envy. The leaked specs on the Prime would easily take care of that.
That has got to be the most idiotic fracking name out there. The marketing department must be filled with 4-year-olds.
Never heard of product code names?
Windows 7: “Blackcomb/Vienna”.
Firefox 4: “Tumucumaque”
Mac OS X 10.6: “Snow Leopard”
Adobe Photoshop CS5: “White Rabbit”
Ubuntu 12.04: “Precise Pangolin”
It’s not uncommon, and most have themes (Apple uses big cats, Microsoft uses locations near Redmond). Google just chose desserts for theirs.
Would you have preferred “Candy String”? I’m sure this dessert-based naming scheme has some perversion potential…
Based on some of the comments I’ve seen around the web, I’m sure Jobs decided, as a last evil deed, that he would die just to usurp the press conference with his funeral.
What if they changed the image sensor to this one? http://www.samsung.com/global/business/semiconductor/newsView.do?ne…
It’s not that I think the Google founders didn’t respect SJ. I am certain that they did.
However, I can’t help but think this is more of a strategic move. The iPhone 4S is unveiled and SJ dies the next day. There is no chance at all that an event by Samsung would get any press. Moreover, it would be seen as totally insensitive if they were to pick on Apple at a time when their home page features a memorial to SJ.
Finally, you have to think that the best time to launch the Nexus Prime (which is a great-looking phone) would be when Apple is sold out of iPhone 4S’s and is running a big backlog. Everyone is already looking at a 1-2 week delay. A few days after launch it will be 2-3 weeks. That would be the best time to unveil a ‘better smartphone’.
It took a great many years to finally get rid of curved glass on TV sets and computer monitors, and now they want to reintroduce it on mobile phones… Ugh.
Hi,
“Super AMOLED with a resolution of 1280×720 – that’s an HD screen in your pocket”
Can somebody explain how Samsung counts pixels on Super AMOLED?
Super AMOLED does not use the standard red-green-blue pixel; instead it uses green-blue-green-red. So how do you count one pixel – green-blue, or do you take four elements for one pixel?
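For what it’s worth, a back-of-the-envelope comparison assuming the two-subpixels-per-pixel layout described above versus a standard three-subpixel RGB stripe:

```python
# Subpixel count at 1280x720 for a standard RGB stripe versus a layout with
# only two subpixels per logical pixel (as described in the comment above).
width, height = 1280, 720
pixels = width * height                  # 921,600 logical pixels

rgb_subpixels     = pixels * 3           # 2,764,800 (one R, G and B per pixel)
two_per_pixel     = pixels * 2           # 1,843,200 (alternating GB / GR pairs)

print(pixels, rgb_subpixels, two_per_pixel)
print(f"Two-per-pixel layout has {two_per_pixel / rgb_subpixels:.0%} "
      "of the subpixels of an RGB stripe at the same nominal resolution.")
```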
Outdated information. We’re already a few generations ahead. Even the Galaxy SII – released in April – no longer uses that system.
good to know !
Makes me wonder if I shouldn’t have waited on that Galaxy S2.
I think between the Nexus Prime and the Galaxy Note, I would prefer the Note. Oh well…
I’m getting my Galaxy SII Wednesday. I’m open to questions.
I got mine last week. That’s why I was wondering if I shouldn’t have waited and gotten either the Nexus Prime or the Galaxy Note. Then again, who knows when the Galaxy Note will come to the US, considering we just got the S2.
The S2 is pretty nice. I’ve been using an original first-gen iPhone for the past 4 years. I find Android to be a little unintuitive at times, but the customizability is insane. It was overwhelming at first and made Android seem overly complicated in comparison to iOS, but now that I have everything set up the way I want, it’s great!
AnandTech just posted some performance comparisons and guess what: the iPhone 4S beats the Galaxy S II in browser performance by a wide margin (it’s twice as fast).
In fact, the iPhone 4 with iOS 5 is only marginally slower than the Galaxy S II.
Admittedly, if you’re into ‘bigger is better’, the S II’s screen will make you happy.
Just goes to show that comparing CPU speeds on phones is a bit dumb.