
Does anyone need an AI-powered social network?

Photo illustration: Intelligencer; Screenshot: SocialAI

Here’s the idea: a social network where every other user is an AI. You can be the “main character” and have “an infinite number of followers.” You post, and a crowd of bots powered by generative AI responds. You choose what kinds of followers you want (supporters, fans, trolls, skeptics, “curious cats”) and, if you’re interested in what they’re posting, keep the conversation going.

SocialAI is not a joke or an artistic critique of the AI era. According to its founder and sole employee, Michael Sayman, a young entrepreneur who has worked at Facebook, Google, Roblox, and Twitter, it is “the culmination of everything I’ve thought and cared about and dreamed about for years,” made possible now that “technology has finally caught up with my vision.”

It’s not hard to imagine what kinds of reactions the announcement of such a social network might provoke, but here are a few from real people:

• “Consider therapy.”
• “Is this the most embarrassing thing I’ve ever seen?”
• “Dystopian and anti-human”
• “This really makes me sad.”

Early reviewers weren’t particularly impressed. “The bots’ responses lacked nutrients and human mess,” Lauren Goode wrote in Wired; she understandably had difficulty “giving value or meaning” to the AI-generated responses.

Sayman may not have intended SocialAI to be a work of sharp technological critique, but it functions as one. Nominally human social networks are already filled with bots and with people who behave like bots; just beneath the surface of their feeds, automated systems decide what users see, and much of the human content is already shaped with algorithmic recommendation in mind. How different, really, is an app that simply goes ahead and fills that algorithmic void itself? Isn’t that where we’re headed?

If SocialAI’s unintentional critique doesn’t quite land, though, it’s because the app is too boring: if this is the direction Instagram and TikTok are headed, everyone will give up before they get there. People on social media may be systematically dehumanized, their interactions mediated and sanitized by systems designed to manipulate them into meaningless engagement, but sharing a feed with them is still better, or at least more stimulating, than what happens when AI tries to recreate social-media content from statistical scraps:

Photo: Screenshot, SocialAI

The app’s founder took the early feedback in stride, gently suggesting that most critics are missing the point. “The basic premise of SocialAI, to me, is that the LLM broadcast model of interaction offers a ton of use cases that the chat interface simply can’t,” he wrote after the app’s release. “I truly believe that SocialAI is the interface model of the future that many people around the world will use to interact with LLMs.”

It’s an interesting argument! Popular chatbots have operated primarily by simulating interaction with a single persona, producing exchanges that feel direct, intimate, or transactional; they are typically designed to play one-on-one roles, from confidante to intern to, most commonly, the loosely defined “assistant.” Many people find these simulations compelling or useful enough that it’s plausible a broader range of simulated social interactions might work well for some people, too.

As it is, SocialAI doesn’t convincingly replicate the feeling of having an audience or the utility of crowd-sourced advice, and its automated followers produce content that’s too boring to read for any length of time, let alone respond to. (The most frustrating thing, for me, is that it isn’t even fun when you’re deliberately trying to play with it.) Its founder suggests that improvements will come, and the product, like many apps built on OpenAI’s models, and like much of AI in general, exists in a kind of contingent, speculative state: if the underlying models improve in the right ways, then the product might start to make sense.

If you’re a forward-thinking AI founder, in other words, this might all seem less like a joke and more like a design or engineering challenge, a matter of improving the illusion with better responses and a more subtle user experience, or something that people just haven’t gotten used to yet—either way, it’s just a matter of time. And you might be right!

In the meantime, though, the app is most valuable as a slightly different and more specific critique: not of social media or overly optimistic AI advocates, but of existing AI tools that have already gained acceptance and are widely used.

A social network full of fake followers trained on real people is just absurd, and collecting automated responses from characters generated on the fly with names like @IdeaGoddess and @TrollMaster3000 borders on the offensive. In the context of a feed, it’s impossible not to notice that you’re interacting with a bunch of generated personas whose purpose is to create the illusion of social interaction with different types of people, and the performances aren’t good enough to convince you to join in the fun.

But back to Sayman’s point: the difference between a chat interface, which is a single chatbot, and a feed interface, which is basically an onslaught of chatbots, is not as big as it might seem at first. One is designed to simulate a single character (a willing, positive, helpful assistant) in a narrow social context and does it well enough not to break the illusion; the other is designed to simulate many characters in a slightly different and broader simulated social context and doesn’t quite pull it off. You could read this as a case of the chat interface simply being better. But it’s worth considering whether the worse interface, the one whose failure calls attention to how central characters, fantasy, and performed sociability are to these AI products, is also slightly more honest.

What is the fundamental difference between a simulated conversation with a single synthetic character and a simulated conversation with a thousand of them? In other words, if performing for a machine that performs in response, socializing through a software interface, and having your expectations set by carefully constructed fictional characters make SocialAI seem so patently stupid, perhaps the more interesting and valuable question is: Why not ChatGPT, too?