AI can now create a replica of your personality


Led by Joon Sung Park, a Stanford PhD student in computer science, the team recruited 1,000 people who varied by age, gender, race, region, education, and political ideology. They were paid up to $100 for their participation. From interviews with them, the team created agent replicas of those individuals. As a test of how well the agents mimicked their human counterparts, participants completed a series of personality tests, social surveys, and logic games, twice each, two weeks apart; then the agents completed the same exercises. The agents' responses were 85% similar to those of their human counterparts.

“If you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made—that, I think, is ultimately the future,” Park says. 

In the paper the replicas are called simulation agents, and the impetus for creating them is to make it easier for researchers in social sciences and other fields to conduct studies that would be expensive, impractical, or unethical to do with real human subjects. If you can create AI models that behave like real people, the thinking goes, you can use them to test everything from how well interventions on social media combat misinformation to what behaviors cause traffic jams. 

Such simulation agents are slightly different from the agents that are dominating the work of leading AI companies today. Called tool-based agents, those are models built to do things for you, not converse with you. For example, they might enter data, retrieve information you have stored somewhere, or—someday—book travel for you and schedule appointments. Salesforce announced its own tool-based agents in September, followed by Anthropic in October, and OpenAI is planning to release some in January, according to Bloomberg.

The two types of agents are different but share common ground. Research on simulation agents, like the ones in this paper, is likely to lead to stronger AI agents overall, says John Horton, an associate professor of information technologies at the MIT Sloan School of Management, who founded a company to conduct research using AI-simulated participants. 

“This paper is showing how you can do a kind of hybrid: use real humans to generate personas which can then be used programmatically/in-simulation in ways you could not with real humans,” he told MIT Technology Review in an email. 

The research comes with caveats, not the least of which is the danger that it points to. Just as image generation technology has made it easy to create harmful deepfakes of people without their consent, any agent generation technology raises questions about the ease with which people can build tools to personify others online, saying or authorizing things they didn’t intend to say. 

The evaluation methods the team used to test how well the AI agents replicated their corresponding humans were also fairly basic. These included the General Social Survey—which collects information on one’s demographics, happiness, behaviors, and more—and assessments of the Big Five personality traits: openness to experience, conscientiousness, extroversion, agreeableness, and neuroticism. Such tests are commonly used in social science research but don’t pretend to capture all the unique details that make us ourselves. The AI agents were also worse at replicating the humans in behavioral tests like the “dictator game,” which is meant to illuminate how participants consider values such as fairness. 
