                               _     _
                        _ __ | |__ | | ___   __ _ 
                       | '_ \| '_ \| |/ _ \ / _` |
                       | |_) | | | | | (_) | (_| |
                       | .__/|_| |_|_|\___/ \__, |
                       |_|    ...2023-07-22 |___/ 

Project Perfect Recall
 

A proposal to solve the problem of information retrieval by combining
existing technologies in a novel way.

It's currently possible, within a reasonable budget, to record in audio
and video everything that takes place around a person. Furthermore, we're
able to mark (by time, for instance) every digital piece of information
that a person accesses.

Such exhaustive data collection is nearly practical; efficient retrieval
by that person, however, is not. They may be able to recall roughly by
time, but that is inefficient and cumbersome.

The most practical and intuitive way to recall such information would be
to try to think about it, that is, to attempt to remember it. Creating
hardware and software to allow this might not be very far-fetched.

The processes that take place in a brain having experiences are
manifested in, among other phenomena, tiny electrical charges which can
be picked up using scalp electrodes. Similar electrical activity takes
place during memory recall.

Imagine now collecting all the audiovisual and digital content in real
time, along with relatively high-resolution EEG from a cap like the
Braincap.
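
In code, the capture side could be as simple as stamping every chunk that
arrives with the same wall clock. A rough sketch in Python follows; the
device hookups are imaginary and only the timestamping idea matters.

  import time

  stream = []  # list of (timestamp, kind, payload) records

  def capture(kind, payload):
      # Everything gets stamped with the same wall clock so the separate
      # streams can be lined up later.
      stream.append((time.time(), kind, payload))

  # In a real recorder these calls would sit in the read loops of the
  # Braincap, microphone, camera and activity-logging drivers; the
  # payloads below are dummies.
  capture("eeg", [0.12, -0.03, 0.07])             # one window of samples
  capture("audio", b"\x00" * 1024)                # one audio chunk
  capture("digital", "opened holiday-plans.txt")  # one digital event

  print(len(stream), "records, first at", stream[0][0])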

This allows the user to activate the "recall" functionality of the system
and attempt to remember the thing they want more detailed information
about. For example, the user remembers that a time and place were
mentioned, but has forgotten the actual time. They also remember that it
was their friend telling them, in the park, with music playing; that part
they recall more vividly. The system uses the EEG data from this act of
recall to find the relevant place in the audiovisual stream, which can
then be played back by conventional means to replay that part of the
conversation.

Another example may be thinking over a disagreement and wanting to
remember exactly which words were said, and how. That is very difficult
for some people to recall; the feeling they had at the moment, however,
may be easier to remember. By thinking back on the argument and reliving
that feeling, the user lets the system find the relevant part of the
conversation and make it available for playback, not by some magic, but
more or less by using the EEG data as an indexing key into the stream of
data.

The stream of data is simply the EEG data along with whatever else is
recorded at that same time.
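
In other words, once a timestamp is known, "whatever else is recorded at
that same time" is just a window query over the records. A small sketch,
assuming a simple (timestamp, kind, payload) record shape and an
arbitrary 30-second window:

  def around(stream, t, window=30.0):
      """All records within `window` seconds of time `t`."""
      return [r for r in stream if abs(r[0] - t) <= window]

  stream = [
      (1000.0, "eeg", "..."),
      (1000.2, "audio", "..."),
      (1000.5, "digital", "opened holiday-plans.txt"),
      (2000.0, "audio", "..."),
  ]

  # Once recall has produced a timestamp, the audio/video around it can
  # be handed to an ordinary player.
  print(around(stream, t=1000.0))   # the first three records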

Finding the right place in the stream happens by training a neural
network on that stream of EEG data and then presenting it with new EEG
data, which it must correlate with the previous data so as to recall
whatever else was recorded at that time. In this model, the network only
needs to correlate EEG data and won't need training on the actual
audiovisual stream.
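
A rough sketch of that retrieval step, with a plain cosine similarity
standing in for whatever correlation a trained network would actually
learn, and with each stored EEG window pretended to be a flattened
feature vector keyed by its timestamp:

  import numpy as np

  def cosine(a, b):
      return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

  def recall(query, index):
      """Return the timestamp whose stored EEG window best matches `query`."""
      return max(index, key=lambda item: cosine(query, item[1]))[0]

  rng = np.random.default_rng(0)
  # Pretend each recorded second of EEG has been flattened to a vector,
  # keyed by its timestamp.
  index = [(float(t), rng.normal(size=256)) for t in range(60)]
  # The recall attempt produces a noisy version of what was recorded at t=42.
  query = index[42][1] + 0.1 * rng.normal(size=256)
  print("best matching timestamp:", recall(query, index))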

The only actual research to be done is finding out whether it's possible
to train a neural network to recognize distinct-enough EEG patterns (for
instance, to tell apart the pattern of a happy user thinking about a bird
from their childhood and the pattern of a sad user thinking about a bird
from yesterday).
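
Such an experiment might look roughly like this: record labelled recall
episodes, train a simple classifier on the EEG features, and check
whether held-out accuracy is meaningfully above chance. A sketch on
made-up data (scikit-learn assumed; the synthetic features merely stand
in for real EEG recordings):

  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.metrics import accuracy_score
  from sklearn.model_selection import train_test_split

  rng = np.random.default_rng(0)

  # Two fake "memory" classes, each a noisy cloud around its own mean pattern.
  n_per_class, n_features = 200, 64
  means = rng.normal(size=(2, n_features))
  X = np.vstack([m + rng.normal(scale=2.0, size=(n_per_class, n_features))
                 for m in means])
  y = np.repeat([0, 1], n_per_class)

  X_train, X_test, y_train, y_test = train_test_split(
      X, y, test_size=0.25, random_state=0, stratify=y)

  clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
  print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
  # Well above 0.5 would suggest the patterns are distinct enough to serve
  # as an index; near 0.5 would mean they are not separable.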