Have you ever come across someone after a long time, and to your surprise, they mention something about you that you yourself don't remember?
Maybe it's a childhood friend mentioning something you did at school or at the park, or an ex-colleague recalling something that happened at the old workplace, but you have no idea what they're talking about?
Or maybe you do remember, eventually. Maybe, as they keep telling you what happened, the memories rush back. Yes, you were there! Yes, that really did happen! But as the realisation dawns on you that you did forget something, something significant that someone else chose to remember, a painful question emerges:
What else did you forget?
This question is painful mainly because it serves as a timely reminder that perhaps we're not as good at remembering things as we thought we were. And that's okay, because we usually remember just enough of what's necessary.
You don't remember how to take the derivative of a tangent anymore, because you don't need calculus in your day-to-day life. An engineer probably remembers it, but they don't remember the quote from that one movie shown to you in class, the one which really resonated with you.
I think it's entirely plausible that we only remember information that's necessary. This doesn't mean we only remember facts and formulas, but even with the events of our lives- we remember only the information that serves us. We remember the events that we consider important to us, important enough to build a narrative to tell ourselves the story of "who we are", which ends up shaping who we go on to be.
While this sounds like a good generalised framework of memory, it certainly doesn't work the same way for everyone. Two people with the same memory of an event may interpret it profoundly differently. Of two similar kids who get bullied, one may channel that hate into more hate for everyone and grow up to be a bully themselves, while the other may channel it into strength and compassion, and later stand up for others who get bullied.
It's also scientifically well established that our memories aren't accurate. Every time we recall something, the memory is slightly altered, and this can keep happening to the point where what we remember is nowhere close to what actually happened in the past.
Given that our memories are so flawed, shaped by evolutionary pressures to be as efficient and useful as possible, it's a miracle that at the end of life, we can reliably narrate the story of our lives to anyone willing to listen to us.
But we're now growing up in times when AIs exist that are willing to listen to everything we say to them, and it increasingly looks like they'll all have infinite memory of us. Just as the fallibility of human memory has its issues, AIs remembering everything we tell them has its own implications.
Does that have to change the way we interact with AIs?
That is what this essay is about, and it’s timely because ChatGPT with infinite memory rolls out today.
But before we go there, let’s talk about what AI memory means.
II.
Ever since AI models started getting popular, frameworks have been launched to enhance the experience with memory.
The most popular framework was called RAG (Retrieval-Augmented Generation). In simple terms, it gave AIs memory by storing information about you in a database (a vector database, so that memories can be fetched by meaning).
This was done because AI models usually had a much smaller context window when they launched, and RAG was essential to help AIs remember information about you, across multiple sessions.
As context windows grew bigger, with current context windows of ChatGPT and others ranging anywhere from 32,000 to 128,000 tokens, RAG was often declared dead. 128,000 tokens is enough for a big novel's worth of information, so surely, if you keep talking to an AI in a single session or thread, it's going to remember almost everything in that conversation till the end, right?
Unfortunately, this still seems wrong, and it may continue to be wrong for quite some time. As a single thread of messages gets longer, each new message carries the history of all the previous messages, and thus every subsequent message costs more than the last.
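To see why, consider some hypothetical numbers: if each message is around 500 tokens and every turn resends the whole history, the total tokens processed grow quadratically with the length of the conversation. A rough sketch:

```python
# Hypothetical numbers for illustration, not real ChatGPT pricing or sizes.
TOKENS_PER_MESSAGE = 500

def total_tokens_processed(num_messages: int) -> int:
    # Turn n has to resend all n-1 previous messages plus the new one,
    # so the model processes roughly n * TOKENS_PER_MESSAGE tokens on turn n.
    return sum(n * TOKENS_PER_MESSAGE for n in range(1, num_messages + 1))

print(total_tokens_processed(10))   # 27,500 tokens over a short chat
print(total_tokens_processed(100))  # 2,525,000 tokens over a long one
```

A conversation ten times longer costs roughly a hundred times more tokens in total, which is why simply stuffing everything into context doesn't scale.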
RAG, by contrast, fills your context with only the relevant information: information that doesn't take up your whole context window, but helps the AI understand exactly what it needs to remember about you to answer that specific query.
Almost every "AI memory" framework today uses some version of RAG, storing data in a database and querying it to prepare a useful context to send along with your message. (It's unclear if OpenAI uses RAG for their current implementation of memory, but some version of retrieval is almost certainly involved.)
Some day, as AIs get smarter and cheaper and start having 10M or 100M token context windows, it may be that sending millions of tokens costs the same as sending one token today, at which point fractions of a fraction of a penny won't matter.
Someday, RAG will indeed die as AIs come to remember everything, but for that to happen, context windows need to scale towards infinity and prices need to come down exponentially.
Even then, RAG may stick around simply as a lower-latency way to retrieve information, rather than sending everything in context each time. Bigger context windows also suffer from degraded performance even today, though that will hopefully improve as well.
So hopefully, now you understand that memory is essentially retrieval.
That's what's been happening across chatbots that use memory, and now ChatGPT is fully rolling out its memory feature to hundreds of millions of users, who are going to experience an AI that remembers everything about them, all the time. A version of memory existed previously, but it neither stored everything nor referenced all your chats; the new version of ChatGPT does both.
This has some interesting implications.
III.
I won't focus on the obvious pessimistic implication that you're going to hear from the mainstream outlets: privacy.
AIs remembering everything about you also means all your important data is being stored, and if the AI isn't self-hosted and belongs to a private company, they can share your data with advertisers or use it to target you with ads. Hopefully, ChatGPT won't do that, but we can't be 100% sure.
However, there are other concerns like the one I started this essay with.
Human memory is fallible. We've always fantasised about what infinitely accurate memory might look like, but those rare people gifted with near-perfect recall clearly don't appreciate it. What feels like it might be a superpower actually ends up making people miserable, because there will always be things we want to forget.
Because features like AI memory aren't rolling out on 10-year time horizons (and we should be thankful for that), nobody really knows what the long-term implications of having entities that remember everything about us will be.
But the implications don't have to be bad. Nobody misses the time before calculators, even though I'm sure pessimists raised their tired voices against them back then. For AIs to integrate across our lives, they need some version of memory to be truly personalised, so memory can definitely have large positive effects.
Perhaps really smart AIs with access to all our memories, even the things we forget with the passage of time, will help us make better sense of who we are and of our lives. They would certainly know more about us than we do now, and with this information they could help us spot weaknesses and opportunities that we'd never have figured out on our own.
Perhaps, with access to such superior memory, humans themselves will end up remembering more, forgetting less of what matters, because AIs will keep reminding them of what's important when prompted to.
Perhaps, and more importantly, it can lead to truly better responses because the AIs will remember what you want.
Thus, with infinite data and infinite memory, it suddenly becomes clear what you should be doing differently with AIs that have memory.
IV.
The rest of this section assumes you're among the millions of people with a ChatGPT subscription, but if by the time you're reading this, your preferred AI, or hopefully your local LLM, has memory features similar to what's rolling out now, it should be applicable too.
Here are some ways to use it:
1. Journal by talking to AIs every day.
At the end of every day, tell an AI that you want to talk about your day and talk to it. Do this every day and soon, AIs will have a better understanding of who you are, and they'll be able to provide more personalised advice to all of your queries in the future.
Here's a prompt you could use:
"Hi, you are an AI that is tasked with knowing and understanding me. I'm going to talk about my day, and your job is to ask me thoughtful questions that help us both process the events of today and help you get a clearer picture of who I am. You can ask follow-up questions, but remember that this whole interaction should feel like a friendly conversation between us, and not like an interview. If at any point you remember something relevant about me from our past conversations, you can gently bring it up if that helps you keep the conversation going and understand me better. Finally, if you feel like I'm missing something, or if there's any advice you can give me regarding any event that occurred today, please mention it, but again, remember to keep your tone as warm and friendly as possible."
You can modify the above admittedly verbose prompt, but it should serve as a good first message to send your AI every day to help it understand you better. The rest of the prompts in this essay are simpler, but it's your job to instruct the AI as specifically as the prompt above does.
2. Remember that everything is remembered, so use anonymous messages.
When ChatGPT first introduced incognito mode, it seemed like it wouldn't have much of a use case. Now it's essential that you use it.
There are going to be several occasions in your life when you decide that something you're asking ChatGPT definitely shouldn't be remembered. Maybe it's opinions on a razor you're looking to buy for a friend, and you don't want it suggesting razors as gift ideas later. Maybe it's just something very personal that you don't want the AI to remember.
For all such occasions, be very deliberate about what you let the AI remember, because there's a good chance that not only will it remember, but the memory feature as a whole will get so good that in 20 years, ChatGPT could write a 100,000-word biography of your life more accurately than you could yourself.
3. Ask ChatGPT explicitly to remember.
Just because ChatGPT says it can reference your chats doesn't necessarily mean it's going to remember everything. If you mention that your favourite flavour of ice cream is chocolate, it may choose not to store that fact if it's not used to storing such information for you.
But you can explicitly ask ChatGPT to remember the fact. Once instructed, you can be fairly confident it has committed that to memory.
You can also do the opposite: delete chats and remove explicit memories. OpenAI, to their credit, has made it pretty clear that you'll always be able to control what the AI remembers, to the point of telling it to forget something about you, and it'll just work.
V.
Finally, you should leverage infinite memory as a tool. The following are simplified versions of prompts you can use once ChatGPT knows a lot about you, and the results may just be magical enough that you'll understand why memory is such a useful feature:
"Based on our interactions so far, what can you tell me about myself that I might not know?"
"Based on our interactions so far, who do you think cares the most about me, in ways I may not be aware of?"
"Based on our interactions so far, what lies am I telling myself and how do I fix them?"
"Based on our interactions so far, how am I different from other people and what does that say about me?"
"Based on our interactions so far, what company do you think I should start?"
"Based on our interactions so far, what book do you think I should write?"
"Based on our interactions so far, what lesser known movie do you think I should watch which will really resonate with me?"
"Based on our interactions so far, how far away am I from reaching my true potential and what do I have to do to get there?"
"Based on our interactions so far, how is my worldview incorrect?"
"Based on our interactions so far, estimate when I graduated and from that, list some new discoveries that have happened which are different from what I studied and knew growing up."
"Based on our interactions so far, make me an optimal daily routine that prioritises things that you think I most need"
"Based on our interactions so far, if I were to radically improve my life, what are the three most important things I should be doing?"
"Based on our interactions so far, what is one place I should visit in my lifetime? It can be any place that you think will resonate most with me"
You can enhance each of the prompts above based on your preferences, or even come up with entirely new ones, but hopefully the answers, or at least portions of them, will surprise you.
Like an old friend that remembers details about your life that you've forgotten, you're now going to have access to an advanced form of intelligence that not only remembers everything, but can integrate everything to give you the perfect, personalised answers.
For decades, we could only dream of having this power. As of today, all of us possess it.