ai

404 Media
Generative AI could “distort collective understanding of socio-political reality or scientific consensus,” and in many cases is already doing that, according to a new research paper from Google, one of the biggest companies in the world building, deploying, and promoting generative AI.
Thanks for the warning while you keep destroying reality, I guess?
stackoverflow.blog
The present wave of generative AI tools has done a lot to help us generate lots of code, very fast. The easy parts are becoming even easier, at a truly remarkable pace. But it has not done a thing to aid in the work of managing, understanding, or operating that code. If anything, it has only made the hard jobs harder.
If AI starts doing the work of junior developers, how will you get senior developers in the future?
doublepulsar.com
I think they are probably going to set fire to the entire Copilot brand due to how poorly this has been implemented and rolled out. It’s an act of self-harm at Microsoft in the name of AI, and by proxy real customer harm.
AI has really obliterated the idea of getting consent from users. Big companies are just enabling data theft on a grand scale now. It's like people who build houses working for thieves rather than homeowners.
Medium
What is AI? In fact, this is a marketing term. It’s a way to make certain kinds of automation sound sophisticated, powerful, or magical, and as such it’s a way to dodge accountability by making the machines sound like autonomous thinking entities rather than tools that are created and used by people and companies.
Emily Bender has a great clarifying way of thinking about AI. I found her breakdown of the kinds of systems that are being called AI today very helpful.
helmut-schmidt.de
By narrating their products and services as the apex of “human progress” and “scientific advancement,” these companies and their boosters are extending their reach and control into nearly all sectors of life, across nearly every region on earth. Providing the infrastructure for governments, corporations, media, and militaries. They are selling the derivatives of the toxic surveillance business model as the product of scientific innovation.
Interesting talk by Meredith Whittaker (President of the Signal Foundation) looking at big tech's surveillance business model and how we might imagine a different way.
tomshardware.com
Ben continues in his thread, "[The moderator crackdown is] just a reminder that anything you post on any of these platforms can and will be used for profit. It's just a matter of time until all your messages on Discord, Twitter etc. are scraped, fed into a model and sold back to you."
These user conflicts highlight the way site owners extract monetary value from the community in ways that aren’t shared back. Now some third party will be making money from their time and energy. It’s disappointing to see companies being bad stewards of good impulses like the desire to pitch in and help share knowledge.
404 Media
Google Books is indexing low-quality, AI-generated books that will turn up in search results, and could possibly impact the Google Ngram Viewer, an important tool used by researchers to track language use throughout history.
AI is why we can’t have nice things. Also maybe having a private for profit company organize the world’s information was a terrible idea. They make decisions to maximize their profits, not provide a data heritage for humanity.
Mystery AI Hype Theater 3000
A clearer-eyed view of what has happened in the last two years is that a few companies have amassed enormous amounts of data (mostly taken non-consensually) and the capital to buy the computing resources to train enormous models which, in turn, can be used to output facsimiles of the kinds of interactions represented in the data.
I think a better understanding of how generative AI products are produced would help clear up some of this magical thinking that’s happening.
Knowing Machines
LAION-5B is an open-source foundation dataset used to train AI models such as Stable Diffusion. It contains 5.8 billion image and text pairs—a size too large to make sense of. In this visual investigation, we follow the construction of the dataset to better understand its contents, implications and entanglements.
An exercise in (and advocacy for) AI dataset transparency. Excellent information and presentation here.
The Verge
Calculating the energy cost of generative AI can be complicated — but even the rough estimates that researchers have put together are sobering.
Generative AI tools can be fun and can help productivity but are those gains worth their higher resource cost?
Gizmodo
Hippocratic promotes how it can undercut real human nurses, who can cost $90 an hour, with its cheap AI agents that offer medical advice to patients over video calls in real time.
First, do no harm. Tech culture is going just great.
free-dissociation.com
Most really unacceptable losses don’t happen because of a single bad event or what we might call a component failure, but because two subsystems, both operating correctly according to their designers’ intent, interact in a way that their designers didn’t foresee.
I would like to make this quote into a cross stitch and hang it on the wall in every IT department everywhere. Lots of great thinking here about how we keep systems operating safely, especially with AI chaos engines being integrated everywhere.