On Wednesday, Midjourney banned all employees from image synthesis rival Stability AI from its service indefinitely after it detected “botnet-like” activity suspected to be a Stability employee attempting to scrape prompt and image pairs in bulk. Midjourney advocate Nick St. Pierre tweeted about the announcement, which came via Midjourney’s official Discord channel.
↫ Benj Edwards
So “AI” companies are allowed to ingest whatever data they want, but as soon as someone ingests their data, it’s suddenly a problem?
Seems like a sound business model.
Thom Holwerda,
Obviously we’ve had big differences of opinion about AI, but I agree with you here that the hypocrisy is on full display.
This doesn’t necessarily rise to the level of unlawful, but still very hypocritical.
A difference of opinion? I’m pretty sure Thom is not going to discuss his opinions on AI, and he’s also not writing an article explaining why he feels putting AI in quotes makes sense to him.
I for one am sick of seeing them, and to me the quotes, the “spicy autocomplete,” the belief that AI can only reproduce other people’s work and is therefore an infringement machine, are just signs of deep denial (and of course the sweet ignorance). This is because, you see, none of these things are threatening. An autocomplete infringement “AI” will not take our jobs and change society in ways we cannot plan for, but an AI just might.
drstorm,
He’s shared his opinions on AI quite a lot lately. Putting it in quotes is a form of sarcasm.
I also disagree, although I don’t want to make a big deal of it. The reason we’ve made language models work this way is that straight linear output is very practical and efficient to work with. But it isn’t a limitation of neural nets and AI; if it made more sense to, we could train them to work non-linearly, similar to how we can tell an AI to replace a portion of an image that we selected. The technology is extremely flexible and powerful.
I think it’s inevitable that AI will take over many jobs in the coming years as costs come down. Profits are like drugs and employers won’t be able to help themselves.
Well, I guess there really is no honor among thieves after all.
And how would you train a digital mind?
I’m not even talking about current AI systems, I’m just asking in general. Imagine you created a digital mind that needs to learn. Go!
>”And why would you train a digital algorithm masquerading as a below average human mind?”
Fixed it for you.
andyprough,
Those are the words of someone with strong preconceived notions. If you were more open minded however, you’d have to concede that AI is actually making very strong gains.
https://www.reuters.com/technology/bar-exam-score-shows-ai-can-keep-up-with-human-lawyers-researchers-say-2023-03-15/
https://www.technologyreview.com/2023/09/14/1079465/ai-just-beat-a-human-test-for-creativity-what-does-that-even-mean/
https://www.windowscentral.com/microsofts-neural-language-model-surpasses-human-performance-superglue-test
Obviously we don’t have “general AI”, but the truth is there are many intellectual domains that don’t need general AI for computers to succeed at them. Tasks that used to be limited to human intelligence are coming within reach. Workers who don’t plan for this as AI becomes more affordable and accessible are at risk of being betrayed by employers who care only about profits.
AI scraping other AIs further muddies the legal waters. They can now claim they didn’t steal the image; they used a legitimate source, which in turn needs to justify sharing the copyrighted material.
This will end up as a chain of “owners” and make legal battles ever more complicated, difficult and expensive.
E.g., if I take content from Wikipedia and reshare it via my AI, who is the one that gets sued?
The big AI lawsuits have begun. This will cost billions, if not hundreds of billions, if you try to calculate the value of all the stolen data. I’d say it’s likely that some of the multi-trillion dollar big tech firms of today won’t even exist 15 years from now due to the tsunami of AI litigation headed their way. Nearly every person on earth has a potential legal claim against them. This will be unlike anything we’ve ever seen.
andyprough,
“Copied” would be a better word there since nothing is being “stolen” in any literal sense. Also not everyone is in agreement that using public data for training purposes constitutes copyright infringement, especially when the NN doesn’t contain the original.
I realize there are pros and cons with AI, but I think we really need to tread carefully before dictating that computers should not be allowed to learn from public information sources as humans do.
1) That would clearly hinder beneficial applications of AI
2) It won’t stop companies from pursuing AI in unregulated countries
3) It will put local companies and workers at a major disadvantage.
IMHO the problem with AI is not that it learns from public works like we do. The problem is the potential for AI to bring even more economic inequality. This is what we should be focusing on rather than a futile fight to stop AI altogether. It’s happening whether we like it or not.