Philip Agre takes the institutional vantage point of a company and posits that the human-replacement agent, an Artificial General Intelligence, is not a human as we understand ourselves, just as the car salesperson wasn't a full human within the world of a car dealership. Agre says, "These companies are projecting their institutional understanding of what a human is, and we can see these projections clearly in ChatGPT, even in such basic facets as its format. First and foremost, human beings are, to capitalist firms, employees. They exist to be told what to do, then politely and unquestioningly do it. ChatGPT only speaks when spoken to. When it does speak, it does so in the most hollow and saccharine way, completely bereft of emotion save for that professional cheeriness. For a certain kind of employer, the LLM is so tantalizingly close to being not just a full person, as these companies understand people, but a perfected one."