Context Maintenance and Retrieval (CMR)

A computational model of episodic memory search that uses temporal context to guide free recall.

The Context Maintenance and Retrieval model implements retrieved-context theory, which posits that a slowly evolving context representation is bound to each item during encoding, and that these item-context associations are later accessed to guide retrieval.

CMR explains several benchmark phenomena in free recall, including the recency effect, the temporal contiguity effect, and the forward asymmetry of recall transitions.

This notebook walks through a simulated free recall trial, introducing each CMR method as it becomes relevant to the unfolding dynamics.

Code
import matplotlib.pyplot as plt
from jaxcmr.models.cmr import CMR
from jaxcmr.state_analysis import plot_model_state

Creating a Model

CMR has a compositional architecture: the model is assembled from four pluggable components that can be swapped to create model variants.

  • Context — the temporal context representation that drifts during encoding and retrieval
  • \(M^{FC}\) (item-to-context memory) — associates items with their encoding contexts, enabling context reinstatement at recall
  • \(M^{CF}\) (context-to-item memory) — associates contexts with items, enabling context-cued retrieval
  • Termination policy — governs when recall stops

The default implementations use linear associative memories and an exponential stopping rule, but alternative implementations (e.g., instance-based memory, different termination dynamics) can be substituted via factory functions.

CMR.__init__

CMR(
    list_length: int,
    parameters: Mapping[str, float],
    mfc_create_fn = init_mfc,       # item-to-context memory factory
    mcf_create_fn = init_mcf,       # context-to-item memory factory
    context_create_fn = init,       # temporal context factory
    termination_policy_create_fn = PositionalTermination,
)

The parameters dictionary configures the model’s cognitive dynamics:

Symbol Parameter Description
\(\beta_{\text{enc}}\) encoding_drift_rate Rate of context drift during item encoding
\(\beta_{\text{start}}\) start_drift_rate Amount of start-of-list context reinstated before recall
\(\beta_{\text{rec}}\) recall_drift_rate Rate of context drift during recall
\(\gamma\) learning_rate Learning rate for updates to \(M^{FC}\)
\(\phi_s\) primacy_scale Scale of the primacy gradient for \(M^{CF}\) learning
\(\phi_d\) primacy_decay Decay rate of the primacy gradient
\(\delta\) item_support Pre-experimental self-association in \(M^{CF}\)
\(\alpha\) shared_support Uniform pre-experimental cross-item support in \(M^{CF}\)
\(\tau\) choice_sensitivity Exponential scaling during the recall competition
\(\theta_s\) stop_probability_scale Baseline of the stopping probability
\(\theta_r\) stop_probability_growth Growth rate of stopping probability over outputs
Code
params = {
    "encoding_drift_rate": 0.7,
    "start_drift_rate": 0.6,
    "recall_drift_rate": 0.85,
    "learning_rate": 0.4,
    "primacy_scale": 8.0,
    "primacy_decay": 1.0,
    "item_support": 6.0,
    "shared_support": 0.02,
    "choice_sensitivity": 1.5,
    "stop_probability_scale": 0.005,
    "stop_probability_growth": 0.4,
    "learn_after_context_update": True,
    "allow_repeated_recalls": False,
}

Memory Initialization

At initialization, context is a vector of length list_length + 1 with only the start-of-list unit active. The memory components contain pre-experimental associations:

Item-to-context memory (\(M^{FC}\)):

\[M^{FC}_{\text{pre}(ij)} = \begin{cases} 1 - \gamma & \text{if } i=j \\ 0 & \text{otherwise} \end{cases}\]

Each item links to a unique context unit. The parameter \(\gamma\) determines how much room remains for new associations.

Context-to-item memory (\(M^{CF}\)):

\[M^{CF}_{\text{pre}(ij)} = \begin{cases} \delta & \text{if } i=j \\ \alpha & \text{otherwise} \end{cases}\]

Here \(\delta\) (item support) gives each item a baseline association with its context unit, while \(\alpha\) (shared support) provides uniform associations across all items. No items are recallable until they have been studied.
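
As a minimal sketch of these two formulas (plain NumPy for illustration, not the jaxcmr internals), the pre-experimental matrices can be built directly from \(\gamma\), \(\delta\), and \(\alpha\):

```python
import numpy as np

def preexperimental_memories(n_items, gamma, delta, alpha):
    """Build pre-experimental M_FC and M_CF over n_items units.

    M_FC links each item to its own context unit at strength 1 - gamma;
    M_CF gives each item self-support delta and uniform cross-support alpha.
    """
    m_fc = (1.0 - gamma) * np.eye(n_items)
    m_cf = np.full((n_items, n_items), alpha)
    np.fill_diagonal(m_cf, delta)
    return m_fc, m_cf

m_fc, m_cf = preexperimental_memories(16, gamma=0.4, delta=6.0, alpha=0.02)
```

With the example parameters, each item's context link starts at \(1 - 0.4 = 0.6\), leaving \(\gamma = 0.4\) of its capacity for experimental learning.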

Code
model = CMR(list_length=16, parameters=params)
plot_model_state(model, "Initialized")

Encoding Phase

With our model initialized, we can simulate the study phase of a free recall experiment. Each item presentation triggers two key processes: the temporal context drifts to incorporate the new item, and bidirectional associations form between the item and its encoding context.

CMR.experience_item

model.experience_item(item_index: int) -> CMR

Simulate encoding of an item during study, updating context and memories. Takes a 0-indexed item index and returns the updated model state.

When we call experience_item(0), the item’s feature representation probes \(M^{FC}\) to retrieve associated context, which is then integrated into the current state:

\[c_i = \rho_i \, c_{i-1} + \beta_{\text{enc}} \, c_i^{\text{IN}}\]

where \(\rho_i = \sqrt{1 + \beta_{\text{enc}}^2 \left[ (c_{i-1} \cdot c_i^{\text{IN}})^2 - 1 \right]} - \beta_{\text{enc}} (c_{i-1} \cdot c_i^{\text{IN}})\) maintains unit length. Simultaneously, the model strengthens bidirectional associations: \(M^{FC}\) learns with rate \(\gamma\), while \(M^{CF}\) learns with a primacy-scaled rate \(\phi_i = \phi_s e^{-\phi_d(i-1)} + 1\) that gives early items stronger context-to-item associations.
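
The drift rule above can be sketched in a few lines (plain NumPy; in jaxcmr this logic lives inside the Context component). The \(\rho_i\) term is chosen so the updated context stays on the unit sphere:

```python
import numpy as np

def integrate_context(c_prev, c_in, beta):
    """Drift context toward the input while preserving unit length."""
    dot = float(c_prev @ c_in)
    rho = np.sqrt(1.0 + beta**2 * (dot**2 - 1.0)) - beta * dot
    return rho * c_prev + beta * c_in

# Orthogonal input (a brand-new item): the input enters at weight beta
# while the previous state shrinks by rho = sqrt(1 - beta^2).
c = np.eye(4)[0]
c = integrate_context(c, np.eye(4)[1], beta=0.7)
print(np.linalg.norm(c))  # norm stays (numerically) at 1.0
```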

Let’s encode the first item and observe the changes:

Code
model = model.experience_item(0)
plot_model_state(model, "After encoding item 1")

As encoding proceeds, context accumulates a recency gradient—recent items have stronger activation than early items. Meanwhile, the primacy gradient in \(M^{CF}\) ensures early items retain strong associations despite their fading contextual activation.
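
The primacy gradient \(\phi_i = \phi_s e^{-\phi_d(i-1)} + 1\) decays toward a floor of 1 across serial positions. A quick numeric illustration with the example parameters (not library code):

```python
import math

phi_s, phi_d = 8.0, 1.0  # primacy_scale, primacy_decay from params above
phi = [phi_s * math.exp(-phi_d * (i - 1)) + 1.0 for i in range(1, 17)]

print(round(phi[0], 2))   # first item: 9.0
print(round(phi[-1], 3))  # late items: ~1.0, the unscaled learning rate
```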

Code
for i in range(1, 16):
    model = model.experience_item(i)
plot_model_state(model, "After encoding all items")

Retrieval Phase

With all items encoded, the model is ready for retrieval. Before recall begins, CMR partially reinstates the start-of-list context.

CMR.start_retrieving

model.start_retrieving() -> CMR

Transition from study to retrieval mode by reinstating start context.

This operation integrates the initial context vector \(c_0\) with the current context at rate \(\beta_{\text{start}}\):

\[c_{\text{start}} = \rho \, c_N + \beta_{\text{start}} \, c_0\]

The resulting retrieval cue reflects two competing influences: recency from the end-of-list context, and primacy from stronger \(M^{CF}\) associations. Together these produce the characteristic U-shaped serial position curve:

Code
model = model.start_retrieving()
plot_model_state(model, "Retrieval onset")

Now we can simulate recall. When an item is retrieved, its encoding context is reinstated via \(M^{FC}\), the recall is recorded, and the item is marked as no longer recallable.

CMR.retrieve_item

model.retrieve_item(item_index: int) -> CMR

Simulate retrieval of a specific item, updating context and records. Takes a 0-indexed item index.

Code
model = model.retrieve_item(15)
plot_model_state(model, "After recalling item 16")

Recalling item 16 reinstated a context similar to the one active when item 16 was studied. Because item 15 was encoded just before item 16, that reinstated context also overlaps with item 15's encoding context, boosting its activation. This is the contiguity effect in action.

Code
model = model.retrieve_item(14)
model = model.retrieve_item(13)
model = model.retrieve_item(12)
plot_model_state(model, "After recalling 16 \u2192 15 \u2192 14 \u2192 13")

Each recall shifted probability toward the next earlier item, producing the backward recall pattern 16 → 15 → 14 → 13.


Probability Computations

The visualizations above show outcome probabilities, but how are these computed? CMR uses a Luce choice rule over item activations, combined with a termination probability.

CMR.activations

model.activations() -> Array

Compute retrieval activations for all items from context-to-item memory. Probes \(M^{CF}\) with the current context to get raw support for each item.

Code
activations = model.activations()
colors = ["#e74c3c" if not model.recallable[i] else "#3498db" for i in range(16)]

fig, ax = plt.subplots(figsize=(10, 3))
ax.bar(range(1, 17), activations, color=colors)
ax.set_xlabel("Item")
ax.set_ylabel("Activation")
ax.set_xticks(range(1, 17))
plt.tight_layout()
plt.show()

CMR.stop_probability

model.stop_probability() -> Array

Compute probability of terminating recall. The default termination policy uses an exponential rule:

\[P(\text{stop}, j) = \theta_s e^{j \theta_r}\]

where \(j\) is the output position.
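
As a quick numeric check of this exponential rule (using the example parameters from above, purely for illustration):

```python
import math

theta_s, theta_r = 0.005, 0.4  # stop_probability_scale, stop_probability_growth
p_stop = [theta_s * math.exp(j * theta_r) for j in range(1, 9)]

# Stop probability grows with each successive output position:
print([round(p, 3) for p in p_stop])
```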

CMR.outcome_probabilities

model.outcome_probabilities() -> Array

Compute probabilities for all possible retrieval outcomes. Combines activations and stop probability via the Luce choice rule:

\[P(i) = (1 - P(\text{stop})) \frac{A_i^\tau}{\sum_k A_k^\tau}\]

The returned vector has list_length + 1 elements: index 0 is \(P(\text{stop})\), indices 1–N are item probabilities:

Code
probs = model.outcome_probabilities()
labels = ["Stop"] + [str(i) for i in range(1, 17)]
colors = ["#95a5a6"] + ["#e74c3c" if not model.recallable[i] else "#3498db" for i in range(16)]

fig, ax = plt.subplots(figsize=(10, 3))
ax.bar(labels, probs, color=colors)
ax.set_xlabel("Outcome")
ax.set_ylabel("P(outcome)")
plt.tight_layout()
plt.show()
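
A minimal sketch of that combination (plain NumPy; the `recallable` flag here mimics how already-recalled items are excluded from the competition, though jaxcmr's actual implementation may differ):

```python
import numpy as np

def luce_outcome_probabilities(activations, recallable, p_stop, tau):
    """Stop probability first, then Luce-rule shares of the remainder."""
    a = np.where(recallable, np.asarray(activations, dtype=float) ** tau, 0.0)
    item_probs = (1.0 - p_stop) * a / a.sum()
    return np.concatenate([[p_stop], item_probs])

probs = luce_outcome_probabilities(
    activations=[0.2, 1.5, 0.9, 0.4],
    recallable=[True, False, True, True],  # item 2 already recalled
    p_stop=0.05,
    tau=1.5,
)
print(probs.sum())  # the full outcome vector sums to 1 (up to float error)
```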

How Context Creates Memory Effects

Recency Effect

At the end of study, context most closely matches the final items. The drift equation ensures that recent context inputs have the largest influence—earlier items contribute exponentially less to the final context.

Contiguity Effect

When you recall item 8, its context is reinstated: \(\mathbf{c}^{\text{IN}} = M^{FC} \mathbf{f}_8\). This retrieved context contains traces from when item 8 was studied—particularly items 7 and 9, which shared similar contexts. When this reinstated context cues \(M^{CF}\), items 7 and 9 receive high activation.

Forward Asymmetry

Forward transitions (e.g., 8→9) are more likely than backward transitions (8→7) because during study, context drifts forward. The reinstated context for item 8 resembles the context after item 8 was encoded, not before—so item 9’s stored context has higher similarity than item 7’s.
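
This asymmetry can be demonstrated with a small standalone simulation (orthogonal localist context units and illustrative \(\beta\), \(\gamma\) values; not the jaxcmr code path). A cue blending item 8's pre-experimental unit with its stored study context overlaps more with item 9's stored context than with item 7's:

```python
import numpy as np

beta, gamma, n = 0.7, 0.4, 16
rho = np.sqrt(1.0 - beta**2)  # drift coefficient for orthogonal inputs

# Study-phase contexts: unit 0 is start-of-list, unit i carries item i.
contexts = np.zeros((n + 1, n + 1))
contexts[0, 0] = 1.0
for i in range(1, n + 1):
    contexts[i] = rho * contexts[i - 1] + beta * np.eye(n + 1)[i]

# Reinstated cue for item 8: pre-experimental unit blended with the
# stored (post-drift) study context, as M_FC retrieval would produce.
cue = (1 - gamma) * np.eye(n + 1)[8] + gamma * contexts[8]
cue /= np.linalg.norm(cue)

forward = cue @ contexts[9]   # similarity to item 9's stored context
backward = cue @ contexts[7]  # similarity to item 7's stored context
print(forward > backward)     # True: forward transitions are favored
```

The advantage comes entirely from the pre-experimental component of the cue: item 9's context contains a trace of item 8's unit (context drifted forward through it), while item 7's context, formed before item 8 appeared, does not.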


Additional Methods

Convenience interfaces

retrieve(choice) and experience(choice) provide 1-indexed interfaces matching experimental data formats, where 0 means “stop” or “no event”:

model.retrieve(choice: int) -> CMR   # 1-indexed retrieval
model.experience(choice: int) -> CMR  # 1-indexed encoding

Probability queries

item_probability and outcome_probability return individual probabilities rather than the full vector:

model.item_probability(item_index: int) -> float  # 0-indexed, assumes recall continues
model.outcome_probability(choice: int) -> float    # 1-indexed, includes stop probability

Model factories

make_factory creates model factories with custom component implementations—useful for fitting models to experimental data:

make_factory(
    mfc_create_fn,                    # item-to-context memory factory
    mcf_create_fn,                    # context-to-item memory factory
    context_create_fn,                # temporal context factory
    termination_policy_create_fn,     # recall termination factory
) -> Type[CMR]

References

  • Howard, M. W., & Kahana, M. J. (2002). A distributed representation of temporal context. Journal of Mathematical Psychology, 46, 269-299.
  • Polyn, S. M., Norman, K. A., & Kahana, M. J. (2009). A context maintenance and retrieval model of organizational processes in free recall. Psychological Review, 116(1), 129-156.
  • Morton, N. W., & Polyn, S. M. (2016). A predictive framework for evaluating models of semantic organization in free recall. Journal of Memory and Language, 86, 119-140.