Woman talks to past self in ‘trippy’ conversation after feeding childhood journals to AI

Michelle Huang (L) chatted to her younger self (R) using childhood journals fed into OpenAI’s GPT-3 natural language model (Michelle Huang)

If you’ve ever wondered what it might be like to have a conversation with a younger version of yourself, Michelle Huang may have found a way. All you need is a data source from your past – for her it was her rediscovered childhood journals – and the most powerful artificial intelligence tool on the market.

The 26-year-old New York-based artist fed entries from her diaries, which she started at the age of seven, into an AI language model called GPT-3. Developed by leading AI research lab OpenAI, GPT-3 uses deep learning to produce text that is virtually indistinguishable from that of a human.
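In rough outline, simulating a past self this way amounts to supplying the journal entries as context and asking the model to answer in that voice. The sketch below is a hypothetical illustration of that idea, not Huang's actual code: the entries, dates, and the `build_persona_prompt` function are invented, and the resulting prompt would be sent to a text-completion model such as GPT-3.

```python
# Hypothetical sketch of persona prompting from journal entries.
# All entry text, dates, and names here are invented for illustration;
# this is not Michelle Huang's actual method or data.

def build_persona_prompt(entries, question):
    """Assemble journal entries into context, then pose a question
    to the simulated younger self."""
    context = "\n\n".join(
        f"Journal entry, {date}:\n{text}" for date, text in entries
    )
    return (
        "The following are childhood journal entries written by Michelle.\n\n"
        f"{context}\n\n"
        "Based on these entries, respond in the voice of young Michelle.\n"
        f"Present Michelle: {question}\n"
        "Young Michelle:"
    )

entries = [
    ("2003-05-12", "Talked to my crush today and felt so giddy!"),
    ("2003-09-01", "I'm nervous about starting at a new school."),
]
prompt = build_persona_prompt(entries, "Are you happy with where you are?")
# `prompt` would then be passed to a completion model, which continues
# the text after "Young Michelle:" in the persona established above.
```

The trailing `Young Michelle:` cue is what steers a completion model to answer as the simulated past self rather than as a neutral narrator.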

With her daily journal entries containing everything from secrets and fears to the “giddiness” of talking to a crush, Huang says she was able to accurately simulate what it would be like to talk to her childhood self.

“It felt very trippy,” she tells The Independent. “It felt like I was reaching into the past and hacking the temporal paradox: the chat box window felt like a time portal that was happening in real-time, as if my 14-year-old self was sitting on the other side typing responses.”

She began by asking questions about her younger virtual self’s worldview, before allowing the AI to ask her questions. The answers given were “eerily similar” to how she thinks she would have responded during that time, while the questions asked proved emotive.

The bot asked “Are you happy with where you are in life?”, and offered supportive words like “I’m honestly proud of you for everything you’ve accomplished”.

Present Michelle: There’s gonna be some hard stuff you experience in the next couple of years but I promise that you’ll get through it.

Young Michelle: What do you mean? What kind of hard stuff?

Present Michelle: Like experiences that make you feel sad, or times when you feel like the world is collapsing down on you.

Young Michelle: Oh. Yeah, I’ve been feeling that way a lot lately.

After trading questions, Huang asked the journal-generated AI to write a letter to her in the present day. “I hope you’re doing well,” the AI wrote. “I hope you’ve found your passion and are doing something you love. I hope you’re happy and content with your life. I also hope you’ve been able to stay true to yourself and haven’t let anything or anyone change who you are.”

Huang now hopes to use the experiment to test the ability of artificial intelligence to transform and encourage human connection.

“These interactions really elucidated the healing potential of this medium: Of being able to send love back into the past, as well as receive love back from a younger self,” she says. “It felt like I was reaching into the past and giving her a giant hug, and I felt it ripple back into the present.”


Fears have been raised about the potential to misuse tools like GPT-3, as well as image-based AI capable of creating deepfakes. Such tools can mimic or impersonate real people in a highly realistic manner, as well as write stories, produce content (GPT-3 wrote our IndyTech newsletter last week), and now reincarnate former selves.

The next iteration of OpenAI’s market-leading artificial intelligence, GPT-4, is expected to be released in the coming months. It is rumoured to be vastly more powerful than its predecessor, and potentially more dangerous. But Huang is hopeful that such tools will be used “for awareness and empathy” rather than harm.

“Technology is not inherently good or bad. It’s up to humans to use it in intentional ways,” she says.

“I hope that this project is a vote for a future where technology and AI is softer, kinder, more human, and has more therapeutic applications.”