In the last three weeks, we also launched the new AI-powered Bing into preview for more than 1 million people in 169 countries, and expanded the new Bing to the Bing and Edge mobile apps as well as introduced it into Skype. It is a new era in Search, Chat and Creation and with the new Bing and Edge you now have your own copilot for the web.
Today, we take the next major step forward adding to the incredible breadth and ease of use of the Windows PC by implementing a typable Windows search box and the amazing capability of the new AI-powered Bing directly into the taskbar. Putting all your search needs for Windows in one easy to find location.
I feel like adding a glorified autocomplete that tricks people into thinking it’s a real artificial intelligence to the Windows taskbar is not exactly what Windows users are looking for, but I guess that services revenue cash register has to keep dinging. I’m getting major cryptocurrency, NFT, and web3 vibes from all this, and I can’t wait for the whole thing to come crashing down once again.
Since they are partnering with OpenAI to give the finger salute to Google, I’m wondering what will happen when Google flexes its muscles even a tiny quantum bit.
I’m not sure how it’s implemented in Windows Search, but the Bing AI is much more than a glorified autocomplete. However, the NPR story I was listening to this morning portrayed it as rather concerning. This looks like it is only going to Bing Preview users and not the wider population of Windows 11 users, though, so the alarming behavior experienced by AP reporter Matt O’Brien might not surface, since his experience was with the chat bot side and not the search. But still, it was concerning, and I’d be wary of the search results.
Yikes.
There are some great stories about people figuring out how to abuse the AIs. They’re pretty funny. 🙂
I thought ChatGPT was deleting all the conversations once they were over to avoid that problem. Maybe it’ll just take more than 24 hours this time.
dark2,
Several years ago Microsoft had a bot that echoed its collective users. The result was lots of trolling, as one can imagine:
https://www.techrepublic.com/article/why-microsofts-tay-ai-bot-went-wrong/
However, it got shut down, and I don’t believe Microsoft includes a similar feature this time around. I assume that every conversation is independent and no state information is shared between conversations. Am I wrong about this? There are likely still lots of inappropriate responses in the training data, though.
I’m not sure. People are taking screenshots of the conversations.
I also thought getting training data was part of the reason for making ChatGPT public.
Good lord no! Windows search is already so bad that it hurts.
Why can’t all this indexing and constant tearing at your flash drive just end? Microsoft can just buy FileserchX from GoFF and have the best search that shitty file systems like NTFS can provide.
Finding content in a file with Windows search is almost impossible without graphical solutions like FileserchX. And I am using Windows so little nowadays that I do not even bother to use the terminal.
AI may be over-hyped these days, but I really don’t think it will “crash down”. They will slowly keep improving and be useful in certain situations, as they already are.
But if there’s an AI on Windows, I want it to be Clippy for the GUI!
Yeah, no kidding. If anything, I’m finding the overblown anti-hype around AI even more tiresome than the overblown positive hype that preceded it – and while I realize it’s fashionable to ride said anti-hype train at the moment, it is still disappointing to see that sort of thing from tech news sites that really ought to know better. E.g. there was a big nothing-burger article on Ars Technica recently, making a big deal out of the fact that someone was able to get ChatGPT to “write” “malware” intended to accompany a phishing email… while neglecting to mention that the malware it wrote was a PHP script. So I guess it’s a real risk to all of those people who also run web servers on the same computer they use to check email, and are insane enough to think to themselves, “Hey look, a random PHP script I wasn’t expecting… seems legit, I guess I’d better copy it to the htdocs folder of my web server and execute it.”
The stuff I find really bizarre is people making a big deal out of things that ChatGPT & the like do/can be used for – when it’s stuff that people have already been using existing tools for, for literally decades. E.g. the sensationalist stuff about how all programmers are going to be replaced by AI, and/or we’re going to be deluged with AI-powered malware, etc… based on the fact that people have been able to use ChatGPT to “write” code. My response being along the lines of “Yes, and…? You mean the way people were already effectively using Google to ‘write’ code? The only difference is that something like ChatGPT significantly streamlines the old process of ‘run google search for a coding question/issue, spend 5-10 minutes skimming StackOverflow threads, then do some trial and error with copy-pasted code examples, and tweak until you get the results you want’.”
And while the technical specifics are way over my head, the first thought I had when reading/hearing descriptions of how ChatGPT was trained/its models were created was along the lines of “Oh, so it’s basically using the same techniques/principles as every spam filter of the last 10-15 years, which use Bayesian classification to ‘learn’ to distinguish spam from legit messages – just at a much larger scale, a few evolutionary leaps more advanced, and combined with a frontend interface that uses the same kind of ‘conversational’ system as Google Assistant/Alexa/Siri/et al.” It strikes me as much more evolutionary than revolutionary, but it seems as if the majority of people opining on it are determined to treat it/pretend as if it’s wholly novel and unprecedented. That, or act as if “AI” is an absolute, all-or-nothing term that can only refer to an infallible, self-aware replication of human consciousness – in other words, “It’s not AI unless it matches what I’ve seen in movies and TV.” I’m sure the people who have been working on AI research/development since the 1960s would be very interested (or more likely confused/amused) to learn that what they’ve been working on is not actually “AI,” because they haven’t been able to create HAL 9000/GLaDOS yet.
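(For anyone who hasn’t run into that spam-filter technique: the Bayesian-classifier idea is just counting word statistics in labelled examples and predicting from those counts. Here’s a toy Python sketch, using scikit-learn and made-up messages; purely illustrative, and obviously nothing like the scale or architecture of ChatGPT’s actual training.)

# Toy sketch of the Bayesian spam-filter idea: count word occurrences in
# labelled messages, then classify new ones from those statistics.
# The example messages are made up; scikit-learn's MultinomialNB does the math.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_messages = [
    "win a free prize, claim now",        # spam
    "cheap pills, limited time offer",    # spam
    "meeting moved to 3pm tomorrow",      # legit
    "please review the attached report",  # legit
]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()
classifier = MultinomialNB()
classifier.fit(vectorizer.fit_transform(train_messages), labels)

# Predict the label of an unseen message from the learned word statistics.
print(classifier.predict(vectorizer.transform(["claim your free prize"])))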
StephenBeDoper,
I mostly agree with your overall post; things do get overhyped. But I think this specific point is worth talking about.
Conceivably AI might well replace traditional programming quite soon. Granted, there’s a lot to the job, especially on the human/world interaction front, but when you boil it down to basics, as programmers we’re just a very high level abstraction/interface for our users. Either formally or informally, users request a change in diagrams and English words. We take this input and actually implement it as computer code. I actually think that maybe we should be giving more credence to the threat that we might be replaceable by AI. The question may shift over time from one of “if” AI could replace us to “how much does an AI developer cost next to a human one?”
Any thoughts?
Most of my experience is in web dev, so I’m not sure how applicable that is to other types of coding/programming. From that perspective, from playing with ChatGPT a little so far, I’ve found it works OK for dealing with the trees, but not so much the forest – in other words, it produced surprisingly good results when I asked it to write jQuery code for specific purposes, but it’s not at the point where someone without any web dev experience could tell it “build me a website” and have it know when it would need that code and take the initiative to build it.
I can see purpose-specific variants/equivalents being able to do that sort of thing, at least within limited, specific domains. E.g. train something like ChatGPT on (or to prioritize) info from documentation & developer forum threads for a specific CMS platform, and have it act as a more sophisticated version of an “install wizard” to more intelligently guide someone through the setup & post-install config – or as a “virtual assistant” for doing development on that particular platform. But I don’t think it’s there yet.
That said, I do agree that programming in particular is probably one of the easier targets for AI, at least in terms of work that requires a reasonably high degree of skill (as opposed to, say, churning out SEO bait for a content mill). I think that’s because of the nature of programming languages – the much stricter syntax compared to natural language, plus the availability of both large amounts of documentation and examples of working code – which means there’s much less subjectivity in terms of what is or isn’t a “correct” answer.
As it happens, a few days ago I attended a workshop on making use of AI & ChatGPT, geared towards writers & other media professionals. One point the presenter made (which mirrors my own experience), which I think is relevant here, is that things like ChatGPT can be useful tools, but they really require human oversight and it’s a really bad idea to just use the output from those systems as-is without doing any proofreading/QA. Which, by extension, means that in order to use it effectively, you need to have the knowledge/skill/experience to both word your prompt in a way that will get the desired result, and to evaluate the quality of the output & correct errors, etc.
Which I’d summarize as: in order to use it effectively, you first need to know how to produce the desired results on your own. And I think that applies to coding/programming as well; I can definitely see AI replacing humans for a large amount of tedious “code monkey” work, and/or changing the nature of a large amount of programming work from “writing code” to “managing/directing an AI to write code & reviewing the output.” But I don’t see it outright replacing human programmers, beyond the entry-level interns.
That’s a good point – and in the same vein, it’s occurred to me that the jobs that are most at “risk” from AI are things that computers could probably already do as well as/better than a human, but are still currently being done by humans because of economic realities. E.g. I mentioned jobs writing for “content mills” earlier; I’m also thinking of things like the front-line support “techs” for large service providers (whose job involves little more than replying to tickets with “I’m sorry to hear that you’re having a problem with [copy-paste the customer’s description of the issue], have you tried [list of links that were automatically retrieved from our knowledge base based on the presence of key words in the ticket]”), or things like “poornets” in general (such as people in developing/economically disadvantaged parts of the world being paid dollars a day to do advertising click-fraud). I can see the availability of systems like ChatGPT pushing us past the tipping point where it’s no longer cheaper to have people do that stuff.
StephenBeDoper,
Yes, once again I agree 🙂
Obviously things will improve in the coming years. Still, though, even within the limits of the technology used to make ChatGPT today, I think that simply providing more focused training on a specific domain, such as programming a specific CMS system, without any distractions could yield far deeper and more advanced capabilities. “Change the header to XYZ”, “Make this text bold”, “Add a new entry to the nav bar”, “Create a new page to show these products”, “Apply changes to production”…
I think we’re extremely close to this being technically possible, and it could be more than just cheap labor who are affected.
Bring back Bob and Rover!
Anyone else think there is a market for a minimalist, small, fast, unchanging desktop operating system that focuses solely on serving the user, and only gets security updates? Imagine the amount of gigabytes that would be saved. Bonus is that you wouldn’t have to throw your computer out every six years!
I expected Linux to fill this void, but all of the out-of-the-box, desktop-oriented versions come with Gnome, which is… not that. “Gnome has updated, and all of your third-party, unsanctioned extensions to make the software behave the way you want it to, and not adhere to the design of its creators, have quit working, because they have not been updated to be compatible with the new version. Have a nice day!”
I’ve been thinking about a CLI/TUI only Gentoo version. Is that what you mean, or were you thinking Debian?
There is really no enthusiasm whatsoever involved when it comes to Microsoft and their products. It’s like some upper management decided this would make sense, and then some engineer who doesn’t really want to do it needs to integrate it. And afterwards a huge PR campaign follows about just how great they all are. Look, AI in the taskbar search field, just to throw the term AI into the PR. Who cares.
There is a buzzword. They needed to cover it.
I’m getting major cryptocurrency, NFT, and web3 vibes from all this, and I can’t wait for the whole thing to come crashing down once again.
The difference here is that AI has the potential to actually be useful, unlike your examples. It can be, and is, useful in some situations.
It can be. The Natural Language processing capability is impressive.
Now, OpenAI’s ethics and the ethics of the people involved are concerning.
I am sure this will cause no issues whatsoever, as Windows search is such a reliable, well-oiled machine. Adding more to it can only help.
kepta,
+1 I don’t know what this means, but it made me laugh 🙂
Feeding everyone’s data to Elon Musk and Peter Thiel at OpenAI isn’t a problem, right? Alright, well I’m going to go ahead and feed everyone’s data to OpenAI anyway. 😀
I may be a hayseed, so my question is: what’s the purpose of this so-called AI? As far as the MSFT story goes, it just tells me that it may be another method to blast so-called smart advertising at the user that interrupts productivity, while better cementing in the inability to uninstall or disable it. In that case, it should be called smart malware.
It can process natural language.
Instead of having to know a very specific and very obscure query language, people can feed the search mechanism a sentence.
“Hey Bill, find my Word files related to pirates.” versus “file=[*.doc, *.docx,] contains=regex(i, [Pp]irate) c:\* +network_shares -exclude=[*\cache]” (or whatever the query syntax is).
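To make the contrast concrete, here’s a rough Python sketch of roughly what that made-up “obscure query” would have to do under the hood. It’s illustrative only: the drive path is assumed, and a real search tool would index ahead of time and actually parse the .doc/.docx formats instead of scanning raw bytes.

# Walk the drive looking for Word documents that mention "pirate",
# skipping cache directories, as the made-up query above intends.
import re
from pathlib import Path

pattern = re.compile(r"pirate", re.IGNORECASE)

for path in Path("C:/").rglob("*.doc*"):
    if any(part.lower() == "cache" for part in path.parts):
        continue  # the "-exclude=[*\cache]" part
    try:
        raw = path.read_bytes()
    except OSError:
        continue  # unreadable or locked file; skip it
    # Naive content match: .docx is really zipped XML, so a real tool
    # would unpack and index it rather than scan the raw bytes.
    if pattern.search(raw.decode("utf-8", errors="ignore")):
        print(path)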
@Flatland_Spider
For every feature, the user has to make sacrifices to a publicly traded company. My question is: what would be the user’s sacrifice? My guess is privacy and more ads.