Imagine yourself 30 years in the future....
How vividly can you imagine your future self?
How similar does your future self feel to your present self? How positively do you regard your future self?
Now AI can help you explore these questions: MIT's new AI chatbot lets you talk to your future self!
Titled "Future You," the system is a brief, single-session, interactive digital chat intervention designed to improve future self-continuity—the degree of connection an individual feels with a temporally distant future self—a characteristic that is positively related to mental health and wellbeing. It allows users to chat with a relatable, AI-powered virtual version of their future selves, tuned to their future goals and personal qualities. To make the conversation realistic, the system generates a "synthetic memory"—a unique backstory for each user—that creates a throughline between the user's present age (between 18 and 30) and their life at 60.
The "Future You" character is also represented by an age-progressed image of the user's present self. After a brief interaction with the "Future You" character, users reported decreased anxiety and increased future self-continuity. This is the first study to successfully demonstrate the use of personalized AI-generated characters to improve users' future self-continuity and wellbeing.
By letting users chat in real time with a relatable virtual version of their future selves via a large language model (GPT-3.5), the research develops an accessible and effective future self-continuity intervention: a realistic, believable future self as a conversational partner. To make the conversation realistic, the language model uses data from a pre-intervention survey to create a backstory of the user's personal history at age 60—a synthetic memory—which grounds highly personalized answers to the user's questions during the session. To increase the credibility of the future self-character, the system asks the user to upload a portrait and applies a generative model that age-progresses it into a realistic visual representation of the future self.
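At a high level, this kind of real-time chat amounts to a persona-conditioned system prompt followed by alternating user and assistant turns. A minimal sketch of that conversation loop is below; this is an illustrative outline, not the published Future You implementation, and `model_reply_fn` stands in for an actual call to a chat model such as GPT-3.5:

```python
def make_chat_session(system_prompt):
    """Maintain the running message list for a persona-conditioned chat."""
    history = [{"role": "system", "content": system_prompt}]

    def send(user_message, model_reply_fn):
        # model_reply_fn is a placeholder for a chat-model API call;
        # it receives the full history and returns the assistant's reply.
        history.append({"role": "user", "content": user_message})
        reply = model_reply_fn(history)
        history.append({"role": "assistant", "content": reply})
        return reply

    return history, send
```

Keeping the whole history in the message list is what lets each new reply stay consistent with the persona and with earlier turns in the session.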
First, the user answers demographic, life-narrative, and goal-oriented questions about themselves; this information is used to generate an accurate future self-simulation. The questions fall into two main categories. The first set focuses on the user's present and asks for basic information—name, age, pronouns, place of living, essential people—as well as the past experiences that made the person who they are today, such as turning points, high points, and low points. The second phase of the questionnaire then probes their vision of their ideal future.
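The two-phase questionnaire above could be represented as a simple pair of records, one for the present and one for the envisioned future. This is a hypothetical sketch—the field names are illustrative, not taken from the published system:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PresentProfile:
    """Phase 1: the user's present-day information and life narrative."""
    name: str
    age: int
    pronouns: str
    location: str
    essential_people: List[str]
    turning_points: List[str]
    high_points: List[str]
    low_points: List[str]

@dataclass
class FutureVision:
    """Phase 2: the user's vision of their ideal future."""
    goals: List[str]
    ideal_life_description: str

@dataclass
class SurveyResponse:
    """Complete pre-intervention survey: present profile plus future vision."""
    present: PresentProfile
    future: FutureVision
```

Separating the two phases in the data model mirrors the survey flow and makes it easy to feed both halves into the backstory-generation step.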
After submitting this information, the user is directed to an interface to upload a portrait from local storage. To generate an accurate and realistic future self, the language model uses the survey data to create the user's future backstory from the present to age 60. This synthetic memory gives the future self a continuous past and present to draw from, ensuring that its generated responses form a cohesive narrative.
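One plausible way to turn the survey answers into a synthetic memory is to weave them into the system prompt that conditions the chat model. The sketch below is an assumption about the general shape of such a step—the function name and template wording are invented for illustration, not drawn from the paper:

```python
def build_synthetic_memory_prompt(name, current_age, goals, turning_points):
    """Assemble a system prompt casting the model as the user's
    60-year-old future self with a continuous backstory."""
    backstory = (
        f"You are {name} at age 60, looking back on your life. "
        f"At {current_age}, you cared most about: {', '.join(goals)}. "
        f"Formative experiences included: {', '.join(turning_points)}. "
        "Construct a plausible, coherent life story connecting that past "
        "to the present, and answer every question in the first person "
        "as this future self."
    )
    # Return a message list ready to seed a chat-model conversation.
    return [{"role": "system", "content": backstory}]
```

Because the backstory bridges the user's stated goals and formative experiences to an imagined age 60, every answer the model gives can draw on the same continuous narrative.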
Though this work opens new possibilities for AI-powered, interactive future self-interventions, there are limitations to address. Future research should directly compare the Future You intervention with other validated interventions, examine the longitudinal effects of using the Future You platform, leverage more sophisticated ML models to potentially increase realism, and consider whether interacting with a future self reframes personal decisions as interpersonal ones between present and future selves—a psychological mechanism that could explain the treatment effects.
Potential misuses of AI-generated future selves to be mindful of include inaccurately depicting the future in ways that harmfully influence present behaviour, endorsing negative behaviours, and hyper-personalization that displaces real human relationships and adversely impacts health. These challenges are part of a broader conversation on the ethics of human-AI interaction and AI-generated media at both personal and policy levels.