```python
import matplotlib.pyplot as plt

from jaxcmr.models.cmr import CMR
from jaxcmr.state_analysis import plot_model_state
```

A computational model of episodic memory search that uses temporal context to guide free recall.
The Context Maintenance and Retrieval model implements retrieved-context theory, which posits that episodic retrieval is driven by a representation that evolves over time, tying each item to the contextual features present during encoding and later accessing those associations to guide retrieval.
CMR explains several benchmark phenomena in free recall, including the primacy and recency effects in the serial position curve and the temporal contiguity effect, in which items studied close together in time tend to be recalled together.
This notebook walks through a simulated free recall trial, introducing each CMR method as it becomes relevant to the unfolding dynamics.
CMR has a compositional architecture: the model is assembled from four pluggable components that can be swapped to create model variants.
The default implementations use linear associative memories and an exponential stopping rule, but alternative implementations (e.g., instance-based memory, different termination dynamics) can be substituted via factory functions.
`CMR.__init__`

The parameters dictionary configures the model’s cognitive dynamics:
| Symbol | Parameter | Description |
|---|---|---|
| \(\beta_{\text{enc}}\) | encoding_drift_rate | Rate of context drift during item encoding |
| \(\beta_{\text{start}}\) | start_drift_rate | Amount of start-of-list context reinstated before recall |
| \(\beta_{\text{rec}}\) | recall_drift_rate | Rate of context drift during recall |
| \(\gamma\) | learning_rate | Learning rate for updates to \(M^{FC}\) |
| \(\phi_s\) | primacy_scale | Scale of the primacy gradient for \(M^{CF}\) learning |
| \(\phi_d\) | primacy_decay | Decay rate of the primacy gradient |
| \(\delta\) | item_support | Pre-experimental self-association in \(M^{CF}\) |
| \(\alpha\) | shared_support | Uniform pre-experimental cross-item support in \(M^{CF}\) |
| \(\tau\) | choice_sensitivity | Exponential scaling during the recall competition |
| \(\theta_s\) | stop_probability_scale | Baseline of the stopping probability |
| \(\theta_r\) | stop_probability_growth | Growth rate of stopping probability over outputs |
```python
params = {
    "encoding_drift_rate": 0.7,
    "start_drift_rate": 0.6,
    "recall_drift_rate": 0.85,
    "learning_rate": 0.4,
    "primacy_scale": 8.0,
    "primacy_decay": 1.0,
    "item_support": 6.0,
    "shared_support": 0.02,
    "choice_sensitivity": 1.5,
    "stop_probability_scale": 0.005,
    "stop_probability_growth": 0.4,
    "learn_after_context_update": True,
    "allow_repeated_recalls": False,
}
```

At initialization, context is a vector of length list_length + 1 with only the start-of-list unit active. The memory components contain pre-experimental associations:
Item-to-context memory (\(M^{FC}\)):
\[M^{FC}_{\text{pre}(ij)} = \begin{cases} 1 - \gamma & \text{if } i=j \\ 0 & \text{otherwise} \end{cases}\]
Each item links to a unique context unit. The parameter \(\gamma\) determines how much room remains for new associations.
Context-to-item memory (\(M^{CF}\)):
\[M^{CF}_{\text{pre}(ij)} = \begin{cases} \delta & \text{if } i=j \\ \alpha & \text{otherwise} \end{cases}\]
Here \(\delta\) (item support) gives each item a baseline association with its context unit, while \(\alpha\) (shared support) provides uniform associations across all items. No items are recallable until they have been studied.
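These pre-experimental matrices can be sketched directly in NumPy (a standalone illustration using the parameter values from above, not the jaxcmr internals; the extra start-of-list context unit is omitted for brevity):

```python
import numpy as np

n = 16        # list length (illustrative)
gamma = 0.4   # learning_rate
delta = 6.0   # item_support
alpha = 0.02  # shared_support

# M^FC: each item linked only to its own context unit, scaled by 1 - gamma,
# leaving room (gamma) for new experimental associations.
M_FC = (1 - gamma) * np.eye(n)

# M^CF: delta on the diagonal (item support), alpha off-diagonal (shared support).
M_CF = np.full((n, n), alpha)
np.fill_diagonal(M_CF, delta)
```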
With our model initialized, we can simulate the study phase of a free recall experiment. Each item presentation triggers two key processes: the temporal context drifts to incorporate the new item, and bidirectional associations form between the item and its encoding context.
`CMR.experience_item`

Simulate encoding of an item during study, updating context and memories. Takes a 0-indexed item index and returns the updated model state.
When we call experience_item(0), the item’s feature representation probes \(M^{FC}\) to retrieve associated context, which is then integrated into the current state:
\[c_i = \rho_i \, c_{i-1} + \beta_{\text{enc}} \, c_i^{\text{IN}}\]
where \(\rho_i = \sqrt{1 + \beta^2 \left[ (c_{i-1} \cdot c_i^{\text{IN}})^2 - 1 \right]} - \beta (c_{i-1} \cdot c_i^{\text{IN}})\) maintains unit length. Simultaneously, the model strengthens bidirectional associations: \(M^{FC}\) learns with rate \(\gamma\), while \(M^{CF}\) learns with a primacy-scaled rate \(\phi_i = \phi_s e^{-\phi_d(i-1)} + 1\) that gives early items stronger context-to-item associations.
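The drift update can be written as a small standalone function (an illustration of the equation, not the library's implementation; the same update, applied with \(\beta_{\text{start}}\), also serves start-of-list reinstatement):

```python
import numpy as np

def drift(c, c_in, beta):
    """Integrate input context c_in into c at rate beta, keeping unit length."""
    c_in = c_in / np.linalg.norm(c_in)
    dot = float(c @ c_in)
    rho = np.sqrt(1 + beta**2 * (dot**2 - 1)) - beta * dot
    return rho * c + beta * c_in

c = np.array([1.0, 0.0, 0.0])      # current context
c_in = np.array([0.0, 1.0, 0.0])   # retrieved input context
c_new = drift(c, c_in, beta=0.7)   # still unit length after the update
```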
Let’s encode the first item and observe the changes:
As encoding proceeds, context accumulates a recency gradient—recent items have stronger activation than early items. Meanwhile, the primacy gradient in \(M^{CF}\) ensures early items retain strong associations despite their fading contextual activation.
With all items encoded, the model is ready for retrieval. Before recall begins, CMR partially reinstates the start-of-list context.
`CMR.start_retrieving`

Transition from study to retrieval mode by reinstating start context.
This operation integrates the initial context vector \(c_0\) with the current context at rate \(\beta_{\text{start}}\):
\[c_{\text{start}} = \rho \, c_N + \beta_{\text{start}} \, c_0\]
The resulting retrieval cue reflects two competing influences: recency from the end-of-list context, and primacy from stronger \(M^{CF}\) associations. Together these produce the characteristic U-shaped serial position curve:
Now we can simulate recall. When an item is retrieved, its encoding context is reinstated via \(M^{FC}\), the recall is recorded, and the item is marked as no longer recallable.
`CMR.retrieve_item`

Simulate retrieval of a specific item, updating context and records. Takes a 0-indexed item index.
Recalling item 16 reinstated a context similar to the one present when item 16 was studied, which was temporally close to item 15's encoding context. This is the contiguity effect in action.
Each recall shifted probability toward the next earlier item, producing the backward recall pattern 16 → 15 → 14 → 13.
The visualizations above show outcome probabilities, but how are these computed? CMR uses a Luce choice rule over item activations, combined with a termination probability.
`CMR.activations`

Compute retrieval activations for all items from context-to-item memory. Probes \(M^{CF}\) with the current context to get raw support for each item.
```python
activations = model.activations()
colors = ["#e74c3c" if not model.recallable[i] else "#3498db" for i in range(16)]
fig, ax = plt.subplots(figsize=(10, 3))
ax.bar(range(1, 17), activations, color=colors)
ax.set_xlabel("Item")
ax.set_ylabel("Activation")
ax.set_xticks(range(1, 17))
plt.tight_layout()
plt.show()
```

`CMR.stop_probability`

Compute probability of terminating recall. The default termination policy uses an exponential rule:
\[P(\text{stop}, j) = \theta_s e^{j \theta_r}\]
where \(j\) is the output position.
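With the parameter values used above (\(\theta_s = 0.005\), \(\theta_r = 0.4\)), a quick sketch (not the library internals) shows how sharply the stop probability grows across output positions:

```python
import math

theta_s = 0.005  # stop_probability_scale
theta_r = 0.4    # stop_probability_growth

def stop_probability(j):
    """Exponential stopping rule P(stop, j), capped at 1."""
    return min(1.0, theta_s * math.exp(j * theta_r))

early = stop_probability(1)   # near-negligible at the first output
late = stop_probability(12)   # dominant late in the recall sequence
```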
`CMR.outcome_probabilities`

Compute probabilities for all possible retrieval outcomes. Combines activations and stop probability via the Luce choice rule:
\[P(i) = (1 - P(\text{stop})) \frac{A_i^\tau}{\sum_k A_k^\tau}\]
The returned vector has list_length + 1 elements: index 0 is \(P(\text{stop})\), indices 1–N are item probabilities:
```python
probs = model.outcome_probabilities()
labels = ["Stop"] + [str(i) for i in range(1, 17)]
colors = ["#95a5a6"] + ["#e74c3c" if not model.recallable[i] else "#3498db" for i in range(16)]
fig, ax = plt.subplots(figsize=(10, 3))
ax.bar(labels, probs, color=colors)
ax.set_xlabel("Outcome")
ax.set_ylabel("P(outcome)")
plt.tight_layout()
plt.show()
```

At the end of study, context most closely matches the final items. The drift equation ensures that recent context inputs have the largest influence: earlier items contribute exponentially less to the final context.
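The Luce-rule combination itself can be sketched standalone (made-up activation values, not the library internals; index 0 of the result is \(P(\text{stop})\), matching the layout described above):

```python
import numpy as np

def outcome_probabilities(activations, recallable, tau, p_stop):
    """Combine item activations and a stop probability via the Luce choice rule."""
    a = np.where(recallable, np.asarray(activations, dtype=float) ** tau, 0.0)
    item_probs = (1 - p_stop) * a / a.sum()
    return np.concatenate([[p_stop], item_probs])  # index 0 = stop

acts = np.array([0.5, 2.0, 1.0, 0.25])
recallable = np.array([True, True, False, True])  # third item already recalled
probs = outcome_probabilities(acts, recallable, tau=1.5, p_stop=0.1)
```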
When you recall item 8, its context is reinstated: \(\mathbf{c}^{\text{IN}} = M^{FC} \mathbf{f}_8\). This retrieved context contains traces from when item 8 was studied—particularly items 7 and 9, which shared similar contexts. When this reinstated context cues \(M^{CF}\), items 7 and 9 receive high activation.
Forward transitions (e.g., 8→9) are more likely than backward transitions (8→7) because during study, context drifts forward. The reinstated context for item 8 resembles the context after item 8 was encoded, not before—so item 9’s stored context has higher similarity than item 7’s.
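A toy simulation makes the contiguity mechanism concrete (illustrative only: pre-experimental associations and the primacy gradient are omitted, so it reproduces contiguity but not the forward asymmetry):

```python
import numpy as np

def drift(c, c_in, beta):
    """Integrate unit vector c_in into c at rate beta, keeping unit length."""
    dot = float(c @ c_in)
    rho = np.sqrt(1 + beta**2 * (dot**2 - 1)) - beta * dot
    return rho * c + beta * c_in

n, beta = 8, 0.7
c = np.zeros(n + 1)
c[0] = 1.0                   # start-of-list unit active
contexts = []                # encoding context of each item
M_CF = np.zeros((n + 1, n))  # context-to-item associations
for i in range(n):
    unit = np.zeros(n + 1)
    unit[i + 1] = 1.0        # item i's dedicated context unit
    c = drift(c, unit, beta)
    contexts.append(c.copy())
    M_CF[:, i] += c          # Hebbian outer-product learning

# Reinstate the encoding context of the middle item (0-indexed item 4)
# and probe M^CF: temporally adjacent items receive the most activation.
activation = contexts[4] @ M_CF
```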
`retrieve(choice)` and `experience(choice)` provide 1-indexed interfaces matching experimental data formats, where 0 means “stop” or “no event”:
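The convention can be captured in a small helper (hypothetical, not part of the library API):

```python
def choice_to_item_index(choice: int):
    """Map a 1-indexed recall choice to a 0-indexed item index; 0 means stop."""
    if choice < 0:
        raise ValueError("choice must be non-negative")
    return None if choice == 0 else choice - 1
```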
`item_probability` and `outcome_probability` return individual probabilities rather than the full vector:
`make_factory` creates model factories with custom component implementations, useful for fitting models to experimental data:
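The general shape of such a factory can be sketched as follows (illustrative only; jaxcmr's actual `make_factory` signature and component names may differ):

```python
from functools import partial

class ToyCMR:
    """Stand-in for a CMR-style model assembled from pluggable components."""
    def __init__(self, list_length, parameters,
                 memory="linear", stop_rule="exponential"):
        self.list_length = list_length
        self.parameters = parameters
        self.memory = memory
        self.stop_rule = stop_rule

def make_factory(model_cls, **components):
    """Bind component choices, returning a factory over (list_length, parameters)."""
    return partial(model_cls, **components)

# A variant with an instance-based memory, built from the same factory machinery.
instance_cmr = make_factory(ToyCMR, memory="instance")
model = instance_cmr(16, {"encoding_drift_rate": 0.7})
```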