I think a better understanding of how generative AI products are produced would help clear up some of this magical thinking that’s happening.
LAION-5B is an open-source foundation dataset used to train AI models such as Stable Diffusion. It contains 5.8 billion image and text pairs—a size too large to make sense of. In this visual investigation, we follow the construction of the dataset to better understand its contents, implications and entanglements.

An exercise in (and advocacy for) AI dataset transparency. Excellent information and presentation here.
Calculating the energy cost of generative AI can be complicated — but even the rough estimates that researchers have put together are sobering.

Generative AI tools can be fun and can help productivity, but are those gains worth their higher resource cost?
Hippocratic promotes how it can undercut real human nurses, who can cost $90 an hour, with its cheap AI agents that offer medical advice to patients over video calls in real-time.

First, do no harm. Tech culture is going just great.
Most really unacceptable losses don’t happen because of a single bad event or what we might call a component failure, but because two subsystems, both operating correctly according to their designers’ intent, interact in a way that their designers didn’t foresee.

I would like to make this quote into a cross stitch and hang it on the wall in every IT department everywhere. Lots of great thinking here about how we keep systems operating safely, especially with AI chaos engines being integrated everywhere.
Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios.

Not much to this summary, but interesting to hear AI skepticism is on the rise even as it's being built into every technology product.
Pitch work is basically when a director, writer, producer, or any combination of those get together with an artist and say, “We want to pitch to studios and we need imagery.” All of that has now been given to generative AI.

Fascinating interview with concept artist Karla Ortiz about the impact of generative AI on her industry.
To even entertain the idea of building AI-powered search engines means, in some sense, that you are comfortable with eventually being the reason those creators no longer exist. It is an undeniably apocalyptic project, but not just for the web as we know it, but also your own product. Unless you plan on subsidizing an entire internet’s worth of constantly new content with the revenue from your AI chatbot, the information it’s spitting out will get worse as people stop contributing to the network.

Speaking of newsletters that recently moved away from Substack, Garbage Day made the jump to Beehiiv. Go read about AI search nihilism and a bunch of other stuff.
AIs are not people; they don’t have agency. They are built by, trained by, and controlled by people. Mostly for-profit corporations. Any AI regulations should place restrictions on those people and corporations. Otherwise the regulations are making the same category error I’ve been talking about. At the end of the day, there is always a human responsible for whatever the AI’s behavior is. And it’s the human who needs to be responsible for what they do—and what their companies do.

This talk is exactly how we should be thinking about AI, algorithms, and technology in general. Technology doesn’t spring from the Earth fully formed; it’s the result of people designing it and making decisions that they should be responsible for.
UnitedHealthcare, the largest health insurance company in the US, is allegedly using a deeply flawed AI algorithm to override doctors' judgments and wrongfully deny critical health coverage to elderly patients. This has resulted in patients being kicked out of rehabilitation programs and care facilities far too early, forcing them to drain their life savings to obtain needed care that should be covered under their government-funded Medicare Advantage Plan.

A current "benefit" of AI: providing cover for inhumane policies. Policy creators can blame the algorithm.