Samuel R. Bowman
36% of another sample of 480 researchers (in a survey targeting the language-specific venue ACL) agreed that “It is plausible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war” (Michael et al., 2022).
So we've got that going for us. LLMs strategically manipulating people into acquiring power sure sounds like a serious flaw in the software. There's a bit more information and context at the unfortunately named NYU Alignment Research Group. (ARG? Seriously?!)