Anthropomorphizing language influences how people perceive a system on multiple levels. It oversells a system that is likely to under-deliver, and it portrays a worldview in which the people responsible for developing these systems are not held accountable for their inaccurate, inappropriate, and sometimes deadly output. It promotes misplaced trust, over-reliance, and dehumanization.
This article addresses a major pet peeve of mine in AI discourse. These simulation technologies do not have emotions! The technology can’t feel regret, can’t apologize for bad behavior, can’t feel pain. The owners and operators of these technologies are human, and transferring their real-world responsibility onto a non-entity is causing a lot of problems.
See also: Journalistic Malpractice: No LLM Ever ‘Admits’ To Anything, And Reporting Otherwise Is A Lie