Computational Models of Memory Search
Give participants a list of words to learn.
Chaining model: learn associations between each item and its neighbours; the strength of association decays monotonically with increasing distance between the items' presentations.
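A minimal sketch of the chaining assumption, with a hypothetical decay parameter `alpha` (the notes do not specify a decay function; exponential decay is an illustrative choice):

```python
import numpy as np

def chaining_associations(n_items: int, alpha: float = 0.5) -> np.ndarray:
    """Association strengths between list items under a chaining model."""
    strengths = np.zeros((n_items, n_items))
    for i in range(n_items):
        for j in range(n_items):
            if i != j:
                # strength decays monotonically with presentation distance
                strengths[i, j] = alpha ** abs(i - j)
    return strengths

A = chaining_associations(5)
print(A[0])  # item 0 is most strongly associated with its neighbour, item 1
```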
Positional coding model: learn a representation of each item's position in the list (its index); the position cues the item: house -> position 1 -> position 2 -> shoe.
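A minimal sketch of positional coding, using a plain dictionary to stand in for the position-item bindings (the word list is illustrative):

```python
study_list = ["house", "shoe", "tree", "clock"]

# encoding: bind each positional index to the item presented there
position_to_item = {pos: item for pos, item in enumerate(study_list, start=1)}

# recall: step through the positional cues in order; each position cues its item
for pos in sorted(position_to_item):
    print(f"position {pos} cues {position_to_item[pos]}")
```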
Memories can be arranged in a two-dimensional matrix in which each column is a memory vector. These vectors are static.
Although one can model such memories as a vector function of time, theorists usually eschew this added complexity, adopting a unitization assumption that underlies nearly all modern memory models.
Localist representation: each item vector has a single, unique, nonzero element, and each element corresponds to a unique item in memory.
Distributed representation: the features representing an item are spread across many or all of the elements.
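A minimal sketch contrasting the two schemes, with illustrative dimensions; each column of the resulting matrix is one static memory vector:

```python
import numpy as np

n_features, n_items = 8, 3

# localist: each item vector has a single, unique, nonzero element
localist = np.eye(n_features)[:, :n_items]

# distributed: an item's features are spread across many or all elements
rng = np.random.default_rng(0)
distributed = rng.standard_normal((n_features, n_items))

print(localist)              # columns are one-hot memory vectors
print(distributed.round(2))  # columns are distributed memory vectors
```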
The unitization assumption dovetails nicely with the classic list recall method in which the presentation of known items constitutes the miniexperiences to be stored and retrieved. But one can also create sequences out of unitary items, and by recalling and reactivating these sequences of items, one can model memories that include temporal dynamics.
This model assumes that each item vector (memory) occupies its own “address,” much like memory stored on a computer is indexed by an address in the computer’s random-access memory. Repeating an item does not strengthen its existing entry but rather lays down a new memory trace.
Retrieval of an encoded item also creates a new memory trace.
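A minimal sketch of the multitrace assumption, in which both presentation and retrieval lay down new traces rather than strengthening old ones (function names are illustrative):

```python
memory_traces = []  # each entry is one stored trace; repetitions are not merged

def encode(item):
    memory_traces.append(item)

def retrieve(probe):
    # serial search over all traces; a successful retrieval adds a new trace
    match = next((trace for trace in memory_traces if trace == probe), None)
    if match is not None:
        memory_traces.append(match)
    return match

for word in ["house", "shoe", "house"]:  # a repetition lays down a new trace
    encode(word)
retrieve("house")
print(len(memory_traces))  # 4: three presentations plus one retrieval
```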
This model implies that the number of traces can grow without bound, yet the brain's storage capacity is finite. Moreover, if the search through traces were serial, retrieval would take implausibly long; if it were parallel, it would place extreme demands on the nervous system.
For recognition memory, the simplest scheme stores a study list by summing the item vectors into a single composite. Storage equation: $\mathbf{m} = \sum_{i=1}^{N} \mathbf{f}_i$, where $\mathbf{f}_i$ is the vector representing the $i$th studied item.
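A minimal sketch of recognition on the summed composite, assuming random feature vectors and an illustrative decision criterion; the probe's match to $\mathbf{m}$ (its dot product) is compared against the criterion:

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_items = 50, 10

items = rng.standard_normal((n_items, n_features))
m = items.sum(axis=0)  # storage: m = sum_i f_i

def recognize(probe, criterion=25.0):
    # strength-based old/new decision on the probe's match to the composite
    return probe @ m > criterion

print(recognize(items[0]))                         # studied item: likely "old"
print(recognize(rng.standard_normal(n_features)))  # lure: likely "new"
```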
Rather than summing item vectors directly, which results in substantial loss of information, we can first expand an item’s representation into a matrix form, and then sum the resultant matrices.
Context and item serve as retrieval cues for each other: cueing with an item retrieves its study context, and cueing with context retrieves the item.
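A minimal sketch of the outer-product scheme, here instantiated as context-item bindings, $M = \sum_i \mathbf{c}_i \mathbf{f}_i^{\top}$, with random vectors of illustrative dimension; cueing $M$ with an item approximately reinstates its context, and cueing with a context approximately reinstates the item:

```python
import numpy as np

rng = np.random.default_rng(2)
n_features, n_items = 100, 5

contexts = rng.standard_normal((n_items, n_features))
items = rng.standard_normal((n_items, n_features))

# storage: sum of outer products binds each context to its item
M = sum(np.outer(c, f) for c, f in zip(contexts, items))

# item as cue -> noisy context; context as cue -> noisy item
retrieved_context = M @ items[2] / (items[2] @ items[2])
retrieved_item = M.T @ contexts[2] / (contexts[2] @ contexts[2])

print(np.corrcoef(retrieved_context, contexts[2])[0, 1])  # near 1
print(np.corrcoef(retrieved_item, items[2])[0, 1])        # near 1
```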