Ben continues in his thread, "[The moderator crackdown is] just a reminder that anything you post on any of these platforms can and will be used for profit. It's just a matter of time until all your messages on Discord, Twitter etc. are scraped, fed into a model and sold back to you."
These user conflicts highlight how site owners extract monetary value from the community without sharing it back. Now some third party will be making money from users' time and energy. It's disappointing to see companies being such bad stewards of good impulses like the desire to pitch in and share knowledge.
404 Media
Google Books is indexing low-quality, AI-generated books that will turn up in search results and could impact the Google Ngram Viewer, an important tool researchers use to track language use throughout history.
AI is why we can’t have nice things. Also, maybe having a private, for-profit company organize the world’s information was a terrible idea. They make decisions to maximize their profits, not to preserve a data heritage for humanity.
Mystery AI Hype Theater 3000
A clearer-eyed view of what has happened in the last two years is that a few companies have amassed enormous amounts of data (mostly taken non-consensually) and the capital to buy the computing resources to train enormous models which, in turn, can be used to output facsimiles of the kinds of interactions represented in the data.
I think a better understanding of how generative AI products are produced would help clear up some of this magical thinking that’s happening.
Knowing Machines
LAION-5B is an open-source foundation dataset used to train AI models such as Stable Diffusion. It contains 5.8 billion image and text pairs—a size too large to make sense of. In this visual investigation, we follow the construction of the dataset to better understand its contents, implications and entanglements.
An exercise in (and advocacy for) AI dataset transparency. Excellent information and presentation here.
The Verge
Calculating the energy cost of generative AI can be complicated — but even the rough estimates that researchers have put together are sobering.
Generative AI tools can be fun and can help productivity, but are those gains worth their higher resource cost?
Hippocratic promotes how it can undercut real human nurses, who can cost $90 an hour, with its cheap AI agents that offer medical advice to patients over video calls in real-time.
First, do no harm. Tech culture is going just great.
Most really unacceptable losses don’t happen because of a single bad event or what we might call a component failure, but because two subsystems, both operating correctly according to their designers’ intent, interact in a way that their designers didn’t foresee.
I would like to make this quote into a cross stitch and hang it on the wall in every IT department everywhere. Lots of great thinking here about how we keep systems operating safely, especially with AI chaos engines being integrated everywhere.
Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios.
Not much to this summary, but interesting to hear AI skepticism is on the rise even as it's being built into every technology product.
Pitch work is basically when a director, writer, producer, or any combination of those get together with an artist and say, “We want to pitch to studios and we need imagery.” All of that has now been given to generative AI.
Fascinating interview with concept artist Karla Ortiz about the impact of generative AI on her industry.
To even entertain the idea of building AI-powered search engines means, in some sense, that you are comfortable with eventually being the reason those creators no longer exist. It is an undeniably apocalyptic project, but not just for the web as we know it, but also your own product. Unless you plan on subsidizing an entire internet’s worth of constantly new content with the revenue from your AI chatbot, the information it’s spitting out will get worse as people stop contributing to the network.
Speaking of newsletters that recently moved away from Substack, Garbage Day made the jump to Beehiiv. Go read about AI search nihilism and a bunch of other stuff.
In this 5-minute video, linguistics professor Emily M. Bender talks about AI as a marketing term. She proposes replacing the term “AI” with “automation” to avoid muddling questions of responsibility.
AIs are not people; they don’t have agency. They are built by, trained by, and controlled by people. Mostly for-profit corporations. Any AI regulations should place restrictions on those people and corporations. Otherwise the regulations are making the same category error I’ve been talking about. At the end of the day, there is always a human responsible for whatever the AI’s behavior is. And it’s the human who needs to be responsible for what they do—and what their companies do.
This talk is exactly how we should be thinking about AI, algorithms, and technology in general. Technology doesn’t spring from the Earth fully formed; it’s the result of people designing it and making decisions that they should be responsible for.