Spatial Cognitive Map

Real-world navigation requires movement of the body through space, producing a continuous stream of visual and self-motion signals, including proprioceptive, vestibular, and motor efference cues. These multimodal cues are integrated to form a spatial cognitive map.
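One common way to formalize this multimodal integration is inverse-variance-weighted (Bayesian) cue combination, where the more reliable cue gets more weight. A minimal sketch, assuming Gaussian noise on each cue (the weighting scheme and function names are illustrative, not from the note):

```python
def fuse_cues(est_visual, var_visual, est_idiothetic, var_idiothetic):
    """Combine two position estimates by inverse-variance weighting.

    Each cue contributes in proportion to its reliability (1 / variance);
    the fused estimate has lower variance than either cue alone.
    """
    w_v = 1.0 / var_visual
    w_i = 1.0 / var_idiothetic
    fused = (w_v * est_visual + w_i * est_idiothetic) / (w_v + w_i)
    fused_var = 1.0 / (w_v + w_i)
    return fused, fused_var

# A reliable visual landmark (low variance) dominates a noisy
# self-motion estimate (high variance).
pos, var = fuse_cues(est_visual=2.0, var_visual=0.1,
                     est_idiothetic=3.0, var_idiothetic=0.9)
```

Here the fused position lands close to the visual estimate (2.1) because the visual cue is nine times more reliable than the idiothetic one.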

Visual vs. idiothetic cues (body-based / self-motion / internal cues vs. external landmarks)

Vestibular cues in rodents are required for:

  1. place cells

  2. grid cells

  3. head direction cells
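Head direction cells fire maximally when the animal faces a cell's preferred direction and fall silent away from it. A minimal sketch using a cosine tuning curve, a common simplification (the parametrization is an assumption for illustration, not from the note):

```python
import math

def hd_cell_rate(heading_deg, preferred_deg, peak_rate=10.0):
    """Cosine-tuned head direction cell.

    Firing peaks at the preferred direction and drops to zero for
    headings more than 90 degrees away (rates cannot be negative).
    """
    delta = math.radians(heading_deg - preferred_deg)
    return peak_rate * max(0.0, math.cos(delta))

hd_cell_rate(90, 90)   # facing the preferred direction: peak firing
hd_cell_rate(180, 90)  # orthogonal heading: near zero
hd_cell_rate(270, 90)  # opposite heading: silent
```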


P.S. Orientation tasks (not moving the body through space) vs. navigation tasks (moving the body through space)

Tasks should be designed to encourage the use of body-based cues so that the two task types can be dissociated at the behavioural level.

The neural representation of the map can only be inferred during recall, not encoding, as participants are immobile during the fMRI scan -> how do body-based cues affect the formation of the cognitive map?

Are Maps Modality Dependent or Independent?

Huffman and Ekstrom

Modality-independent spatial representation during the judgments of relative direction (JRD) task

Spatial navigation tasks

  1. perceived spatial orientation

  2. spatial manipulation of 3D objects

  3. distance estimation

  4. navigation

Spatial navigation process:

  1. perception of one's spatial orientation relative to the surrounding environment

  2. computation of a route to a goal

  3. implementation of that route based on one's current location and directional heading
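The three-stage process above can be sketched on a grid world: orientation is taken as a given start location, route computation uses breadth-first search, and route implementation translates the path into heading commands. The planner and the heading vocabulary are illustrative assumptions, not claims about the neural implementation:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Stage 2: compute a shortest route to the goal with BFS.

    grid is a 2D list where 0 = open cell and 1 = wall;
    start and goal are (row, col) tuples.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            # Walk back through predecessors to recover the path.
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in prev):
                prev[nxt] = (r, c)
                queue.append(nxt)
    return None  # goal unreachable

def execute_route(path):
    """Stage 3: turn the route into heading commands, step by step."""
    names = {(1, 0): "south", (-1, 0): "north",
             (0, 1): "east", (0, -1): "west"}
    return [names[(b[0] - a[0], b[1] - a[1])]
            for a, b in zip(path, path[1:])]

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
route = plan_route(grid, start=(0, 0), goal=(2, 0))
headings = execute_route(route)
```

Because the wall forces a detour, the only route loops east, then south, then back west, which is what `execute_route` reports.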

Remaining Questions

  1. how are body-based cues integrated with visual cues?

  2. which cue is encoded when the two conflict?

  3. how do body-based and visual cues update to correct errors, and which brain regions signal that an error (e.g., misorientation) has occurred?

  4. how can head direction cell activation be measured in the scanner?

  5. how are multiple reference frames maintained using VR vs. real-world cues (e.g., the task room)?
