Warning: this article is disturbing. Companies shouldn't be able to cause people psychological damage to get funding.
"The problem with this description isn't just that it's wrong. It's that the AI is eliding an important reality about many loans: that if you pay them down faster, you end up paying less interest in the future. In other words, it's feeding terrible financial advice directly to people trying to improve their grasp of it." Our unevenly distributed AI future is terrible already! Update (1/23): And who cares about copyright?
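The interest mechanics the quote refers to can be sketched with a toy amortization loop. This is a simplified model assuming a fixed annual rate and monthly compounding, and all figures are hypothetical:

```python
def total_interest(principal, annual_rate, monthly_payment):
    """Total interest paid over the life of a fixed-rate loan.

    Simplified model: monthly compounding, fixed payment, no fees.
    All inputs here are illustrative, not real loan terms.
    """
    monthly_rate = annual_rate / 12
    balance = principal
    interest_paid = 0.0
    while balance > 0:
        interest = balance * monthly_rate
        interest_paid += interest
        # The part of the payment that reduces the balance; the final
        # payment is capped so we never overshoot zero.
        principal_portion = min(monthly_payment - interest, balance)
        if principal_portion <= 0:
            raise ValueError("payment does not cover accruing interest")
        balance -= principal_portion
    return interest_paid

# Paying $100/month extra on a hypothetical $10,000 loan at 6% APR
# pays it off sooner and cuts lifetime interest.
base = total_interest(10_000, 0.06, 200)
faster = total_interest(10_000, 0.06, 300)
```

Running both cases shows `faster` is well below `base`, which is exactly the point the AI in the article elides.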
"This academic-to-commercial pipeline abstracts away ownership of data models from their practical applications, a kind of data laundering where vast amounts of information are ingested, manipulated, and frequently relicensed under an open-source license for commercial use." Andy explains the research-to-corporate-profit pipeline. Seems like there should be a way to handle consent even at this scale. Many CC licenses require attribution; I wonder how these image models handle that.
"At minimum, Stable Diffusion’s ability to flood the market with an essentially unlimited number of infringing images will inflict permanent damage on the market for art and artists." Describing image models as sophisticated collage tools takes some of the mystery out of AI and makes it clear work is being used without consent. This essay has a clear description of the diffusion process.
"This isn’t just a problem for Stack Overflow. In pretty much every other example where you see ChatGPT screwing up basic facts, it does so with absolute self-assurance. It does not admit a smidgen of doubt about what it’s saying. Whatever question you ask, it’ll merrily Dunning-Kruger its way along, pouring out a stream of text. It is, in other words, bullshitting." Effortless bullshit. At scale. What could possibly go wrong?
"The statistics contrast starkly with the confidence in AI presented by Facebook’s top executives, including CEO Mark Zuckerberg, who previously said he expected Facebook would use AI to detect “the vast majority of problematic content” by the end of 2019." If your answer to an extremely difficult problem is "AI will solve it," you are not really interested in solving that problem. Facebook leadership knows this, but they say it anyway while human moderators suffer without resources because AI will solve the problem any minute now.
"A sense of awe is almost exclusively predicated on our limitations as human beings. It is entirely to do with our audacity as humans to reach beyond our potential." "AI lacks nerve" is a fantastic way to put it. [via om]