
Computational Models of Memory Search


Memory models

Serial learning

give participants a list of words to learn

Associative chaining theory: forward bias

  1. learn associations between an item and its neighbours

  2. strength of association decays monotonically with increasing distance between item presentations

Positional encoding theory: no bias

  1. learn representation of the item's position in the list (index)

  2. position cues item: house -> position 1 -> position 2 -> shoe
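
A minimal NumPy sketch (not from the paper) contrasting the two accounts; the study list, the decay rate, and the helper functions are made-up illustrations.

```python
import numpy as np

study_list = ["house", "shoe", "tree", "river"]   # made-up study list

# Associative chaining: item-to-item association strength decays
# monotonically with the distance between presentation positions.
def chaining_strengths(n_items, decay=0.5):       # decay rate is illustrative
    strengths = np.zeros((n_items, n_items))
    for i in range(n_items):
        for j in range(n_items):
            if i != j:
                strengths[i, j] = decay ** abs(i - j)   # neighbours get the strongest links
    return strengths

# Positional encoding: each item is bound to its list position, and recall
# steps through position cues rather than item-to-item links.
def positional_recall(items):
    position_to_item = dict(enumerate(items))
    return [position_to_item[pos] for pos in range(len(items))]

print(chaining_strengths(len(study_list)))
# recall by cueing with successive positions: position 0 -> house, position 1 -> shoe, ...
print(positional_recall(study_list))
```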

Representational assumptions

Memory matrix

a two-dimensional matrix in which each column is a memory vector

static

Although one can model such memories as a vector function of time, theorists usually eschew this added complexity, adopting a unitization assumption that underlies nearly all modern memory models.

Localist models

each item vector has a single, unique, nonzero element

each element corresponds to a unique item in memory

Distributed models

features representing an item are distributed across many or all of the elements

The unitization assumption dovetails nicely with the classic list recall method in which the presentation of known items constitutes the miniexperiences to be stored and retrieved. But one can also create sequences out of unitary items, and by recalling and reactivating these sequences of items, one can model memories that include temporal dynamics.
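
A small NumPy sketch of these representational assumptions; the vector sizes and the random feature values are arbitrary stand-ins, not values from the paper.

```python
import numpy as np

n_items, n_features = 4, 8                      # illustrative sizes
rng = np.random.default_rng(0)

# Localist code: each item vector has a single, unique, nonzero element,
# so every element of the vector stands for exactly one item.
localist_items = np.eye(n_items)

# Distributed code: the features representing an item are spread across
# many or all elements; random vectors stand in for real feature patterns.
distributed_items = rng.normal(size=(n_items, n_features))

# Memory matrix: a static two-dimensional array whose columns are the
# stored memory (item) vectors.
memory_matrix = distributed_items.T
print(localist_items)
print(memory_matrix.shape)                      # (n_features, n_items)
```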

Multi-trace theory

This model assumes that each item vector (memory) occupies its own “address,” much like memory stored on a computer is indexed by an address in the computer’s random-access memory. Repeating an item does not strengthen its existing entry but rather lays down a new memory trace.

retrieval of an encoded item also creates a new memory trace

this model implies the number of traces can increase without bound, even though brain capacity is finite

if the search for an item were serial it would take far too long; if parallel, it would place a high demand on the nervous system
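
A toy sketch of multi-trace storage, assuming a simple dot-product match at retrieval; the class and its methods are my own illustration, not the paper's model.

```python
import numpy as np

class MultiTraceMemory:
    """Every encoding (or retrieval) event appends a new trace;
    repetition does not strengthen an existing entry."""

    def __init__(self):
        self.traces = []

    def encode(self, item_vector):
        self.traces.append(np.asarray(item_vector, dtype=float))

    def retrieve(self, cue_vector):
        cue = np.asarray(cue_vector, dtype=float)
        # Match the cue against every stored trace in parallel
        # (the computationally demanding step noted above).
        sims = np.array([trace @ cue for trace in self.traces])
        retrieved = self.traces[int(np.argmax(sims))]
        self.encode(retrieved)      # retrieving an item lays down yet another trace
        return retrieved

mem = MultiTraceMemory()
mem.encode([1.0, 0.0, 0.0])
mem.encode([0.0, 1.0, 0.0])
mem.encode([1.0, 0.0, 0.0])         # a repetition adds a second copy, not a stronger one
print(len(mem.traces))              # 3
mem.retrieve([0.9, 0.1, 0.0])
print(len(mem.traces))              # 4: the trace store grows without bound
```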

Composite memories

composite storage model

for recognition memory

storage equation: $m_t = \alpha m_{t-1} + B_t f_t$

Rather than summing item vectors directly, which results in substantial loss of information, we can first expand an item’s representation into a matrix form, and then sum the resultant matrices.
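
A rough NumPy sketch of composite storage, assuming fixed, illustrative values for $\alpha$ and $B_t$ and using an item's outer product with itself as the matrix expansion; these specific choices are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features = 16
items = rng.normal(size=(5, n_features))        # stand-ins for item vectors f_t

alpha, B = 0.9, 1.0                             # illustrative forgetting / encoding weights

# Composite vector storage: m_t = alpha * m_{t-1} + B * f_t
m = np.zeros(n_features)
for f in items:
    m = alpha * m + B * f

# Expanding each item into a matrix (here an outer product of the item with
# itself) before summing preserves more information than summing vectors.
M = np.zeros((n_features, n_features))
for f in items:
    M = alpha * M + B * np.outer(f, f)

# Recognition-style probe: compare a studied item and a novel item to each composite.
probe_old, probe_new = items[0], rng.normal(size=n_features)
print(m @ probe_old, m @ probe_new)                           # vector composite
print(probe_old @ M @ probe_old, probe_new @ M @ probe_new)   # matrix composite
```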

Summed similarity

Pattern completion

Contextual coding

Associative models

Recognition and recall

Serial learning

Recall phenomena

Serial position effects

Contiguity and similarity effects

Recall errors

Inter-response times

Memory search models

Dual-store theory

Retrieved context theory

context and item serve as retrieval cues for each other
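
A toy sketch of this mutual-cueing idea, assuming a slowly drifting context vector and outer-product bindings between items and contexts; the drift rate, dimensions, and binding scheme are my own illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(2)
n_dim, n_items = 8, 5
items = rng.normal(size=(n_items, n_dim))        # stand-ins for item vectors

drift = 0.3                                      # illustrative context-drift rate
context = rng.normal(size=n_dim)
item_to_context = np.zeros((n_dim, n_dim))       # item -> context associations
context_to_item = np.zeros((n_dim, n_dim))       # context -> item associations

for f in items:
    # Context drifts slowly, so items studied close together share similar contexts.
    context = (1 - drift) * context + drift * rng.normal(size=n_dim)
    item_to_context += np.outer(context, f)
    context_to_item += np.outer(f, context)

# Mutual cueing: the first item retrieves its study context, and that context
# then cues items whose study contexts were similar (its temporal neighbours).
retrieved_context = item_to_context @ items[0]
item_support = items @ (context_to_item @ retrieved_context)
print(item_support)
```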

Figure from the paper: a. associative chaining and b. positional encoding