The human brain retains both long-term and short-term memories. We can mentally rewind to recall the sequence of events that has already occurred and use it to predict what will happen next. Recurrent Neural Networks (RNNs) are designed to imitate this ability.
Consider the following situation: when reading a book, you make sense of a chapter’s events by referring back to those of earlier chapters. You almost turn back time in your mind, consulting the earlier sequence of events to clarify the current ones. Events from two or three chapters back blend with those of the previous chapter, and each is stored in memory with a sense of time, a marker of whether it happened recently or long ago.
Recent events are easier to recall because they are still “fresh” in your memory, while older ones have faded. In this way, past events shape your understanding of the present situation and help you “predict” what will happen next.
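To make the analogy concrete, here is a minimal sketch of the core recurrent idea, using NumPy with hypothetical sizes and randomly initialized (untrained) weights: at each time step the network folds the current input into a hidden state that also depends on the previous hidden state, so earlier inputs keep influencing how later ones are interpreted.

```python
import numpy as np

# Hypothetical dimensions for illustration: 4-dimensional inputs, 8-dimensional hidden state.
input_size, hidden_size = 4, 8
rng = np.random.default_rng(0)

# Randomly initialized weights (in a real RNN these are learned from data).
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden ("memory")
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One time step: the new hidden state mixes the current input with the
    previous hidden state, so earlier events keep influencing the present."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# A toy sequence of 5 "events".
sequence = rng.normal(size=(5, input_size))

h = np.zeros(hidden_size)  # empty memory before anything has been "read"
for x_t in sequence:
    h = rnn_step(x_t, h)   # h now summarizes everything seen so far

print(h)  # final hidden state: a compressed memory of the whole sequence
```

The hidden state plays the role of the reader’s memory in the analogy: it is updated at every step rather than recomputed from scratch, which is what lets information from earlier in the sequence affect later predictions.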