An interesting subject, sometimes called a "brain fart." These are not my own answers, but I thought they would be interesting, at least until you can't recall them.
While it is not known for sure what is happening, this is how current models of memory recall would explain it:
Memory recall in the brain is not like retrieving a file from disk on a computer. In the brain, memories are reconstructed rather than retrieved. The brain is constantly augmenting what is in “working memory” with related information from the past. This is why stream of consciousness and memory recall often work by free association: The information association process is already there and we just make use of it.
When attempting to recall something specific, like a name, we “trick” the name into appearing in working memory by thinking about concepts related to it: the person’s identity, when we saw them last, what they look like. Normally this process automatically brings the information into working memory as a side-effect of filling in related facts.
When a word is missing but you “think you know it,” what is probably happening is that a lot of information about that word has been reconstructed in working memory, but not enough to trigger the production of the word itself. The presence of related information signals that you’ve “almost recalled it,” but the failure to produce the word shows that the recall is incomplete.
Often when people can’t recall a word, someone else can fill it in for them. But sometimes the “tip of the tongue” word does not actually exist. Related words may come to mind and it may seem like there “should be a word” for whatever it is. Thus the tip of the tongue feeling is not infallible.
Or you can use this one:
A neural network (NN, the computer-software kind) is only a simple model of the brain; it is not clear how much the real brain actually resembles it. An NN is composed of interconnected neurons joined by synapses (both artifacts of the software model).
Each neuron is an adder with a threshold, and each synapse has a weight. Both the threshold and the weights hold a small unit of information (which could be digital or analog). The entire NN therefore has a certain information capacity, and used wisely, as in voice-to-text or OCR (optical character recognition), these networks do quite a job!
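The "adder with a threshold" idea above can be sketched as a single artificial neuron. This is a minimal illustration; the weights and threshold values here are arbitrary, chosen only so the neuron behaves like a logical AND:

```python
# A single artificial neuron: a weighted sum of inputs compared to a threshold.
def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of the inputs exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Example: with these (illustrative) values the neuron fires only when
# both inputs are active, i.e. it computes a logical AND.
weights = [0.6, 0.6]   # one weight per synapse
threshold = 1.0        # the neuron's firing threshold
print(neuron([1, 1], weights, threshold))  # 0.6 + 0.6 = 1.2 > 1.0, so it fires: 1
print(neuron([1, 0], weights, threshold))  # 0.6 <= 1.0, so it stays silent: 0
```

The information the text mentions lives entirely in those weights and the threshold: change them and the same neuron computes a different function.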
However, NN theory (and practice) shows, if I recall correctly, that once this capacity has been filled beyond roughly 11% (or something like that) during "learning," the network starts "forgetting"!
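The capacity result alluded to here is usually stated for Hopfield-style associative memories, where storing much more than roughly 0.14·N patterns in an N-neuron network makes recall break down. Below is a minimal sketch of such a memory, well under capacity, so that a stored pattern can be reconstructed from a corrupted copy; the network size and noise level are made up for illustration:

```python
import numpy as np

def store(patterns):
    """Hebbian learning: superimpose the outer products of the stored patterns."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0)          # no self-connections
    return w

def recall(w, state, steps=5):
    """Synchronous updates: each neuron adds up its weighted inputs
    and thresholds the sum at zero (outputs are +1 or -1)."""
    for _ in range(steps):
        state = np.where(w @ state >= 0, 1, -1)
    return state

# Store one +/-1 pattern in a 100-neuron network, then recall it from a noisy copy.
rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=(1, 100))
w = store(pattern)
noisy = pattern[0].copy()
noisy[:10] *= -1                    # flip 10 of the 100 bits
print(np.array_equal(recall(w, noisy), pattern[0]))  # True: the memory is reconstructed
```

This also mirrors the earlier point about reconstruction rather than retrieval: the pattern is not looked up anywhere, it re-emerges from the web of pairwise associations. Pile too many patterns into the same weights, though, and the superimposed associations interfere, which is the "forgetting" the text describes.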
I want to stress again that I'm not aware of any evidence that the real brain works like a computer neural network; if anything, a computer NN is to the brain what a dog house is to New York City. Still, it is something to think about…