Distinct Contexts CMR

Positional encoding with item-level recall

Distinct Contexts CMR combines positional encoding (each presentation gets a distinct trace) with item-level recallability (once you recall an item, all its presentations become non-recallable).

The Hybrid Approach

Aspect                 | Positional CMR        | Distinct Contexts CMR
---------------------- | --------------------- | ---------------------
Encoding               | Position-based        | Position-based
MCF structure          | Position → Position   | Position → Position
Recallability          | Per-position          | Per-item
Context reinstatement  | Weighted by position  | All positions pooled

This is a middle ground: distinct traces during encoding, but unified recall behavior.

Why Item-Level Recallability?

In Positional CMR, recalling any presentation of an item makes all presentations non-recallable. But the context reinstated depends on which trace “won” the competition.

In Distinct Contexts CMR:

  • Recalling an item marks the item as recalled (not individual positions)
  • Context reinstatement pools across all the item’s positions equally

This simplifies the retrieval process while maintaining distinct encoding.
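As a concrete illustration of this bookkeeping, here is a minimal NumPy stand-in for the library’s JAX arrays (names and values are illustrative, not the actual API):

```python
import numpy as np

# Study list A, B, A, C encoded as item indices 0, 1, 0, 2
studied = np.array([0, 1, 0, 2])
n_items = 3

# Item-level recallability: one flag per item, not per position
recallable = np.ones(n_items, dtype=bool)

# Recalling item A clears a single item-level flag...
recallable[0] = False

# ...which removes every presentation of A in one step
available_positions = recallable[studied]  # [False, True, False, True]
```

Because `studied` maps positions back to items, no per-position masking is needed at recall time.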

Mathematical Specification

Encoding

Same position-based scheme as Positional CMR:

\[\mathbf{c}^{IN}_i = M^{FC} \mathbf{p}_i\]

\[\Delta M^{FC} = \gamma\, \mathbf{c}_j \mathbf{p}_i^\top\]

\[\Delta M^{CF} = \phi_i\, \mathbf{p}_i \mathbf{c}_j^\top\]
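One encoding step can be sketched with plain NumPy outer products (γ, φ, and the vectors here are illustrative placeholders; the library wraps these updates in its MFC/MCF classes):

```python
import numpy as np

n_positions = 4
M_fc = np.zeros((n_positions, n_positions))  # maps position codes -> context
M_cf = np.zeros((n_positions, n_positions))  # maps context -> position codes

gamma, phi = 0.5, 2.0               # learning rate and primacy gain (illustrative)
c = np.array([1.0, 0.0, 0.0, 0.0])  # current context state (unit length)
p = np.eye(n_positions)[1]          # one-hot code for study position 1

# Hebbian outer-product learning: associate this position with the context
M_fc += gamma * np.outer(c, p)  # Delta M^FC proportional to c p^T
M_cf += phi * np.outer(p, c)    # Delta M^CF proportional to p c^T

# Probing M^FC with the position later retrieves the stored context
c_in = M_fc @ p  # equals gamma * c
```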

Recallability

Tracked at the item level, not position level:

Code
# After recall of item k:
self.recallable = self.recallable.at[item_index].set(False)
# NOT: self.recallable * (self.studied != item_k)

Retrieval: Pooled Context Reinstatement

When item \(k\) is recalled:

  1. Find all positions where \(k\) was studied: \[\text{positions}_k = \{i : \text{studied}[i] = k\}\]

  2. Pool as a binary mask (not weighted): \[\mathbf{p}_{cue} = \sum_{i \in \text{positions}_k} \mathbf{p}_i\]

  3. Probe MFC with pooled cue: \[\mathbf{c}^{IN} = M^{FC} \mathbf{p}_{cue}\]

This equally weights all presentations, unlike Positional CMR which weights by activation.
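The three steps above reduce to a binary position mask in a NumPy sketch (the identity MFC is a stand-in for whatever associations encoding built):

```python
import numpy as np

studied = np.array([0, 1, 0, 2])      # A, B, A, C as item indices
k = 0                                 # recalled item: A

# Steps 1-2: binary mask over every position where item k was studied
p_cue = (studied == k).astype(float)  # [1., 0., 1., 0.]

# Step 3: probe MFC with the pooled cue (identity matrix for illustration)
M_fc = np.eye(4)
c_in = M_fc @ p_cue  # equal-weight blend of both presentation contexts
```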

Item Activations

Pool position activations by item: \[a_k = \sum_{i : \text{studied}[i] = k} (M^{CF} \mathbf{c})_i\]

Then apply sensitivity: \[a'_k = a_k^\tau \cdot \mathbf{1}[\text{recallable}[k]]\]
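Both formulas in a NumPy sketch (activation values and τ are made up for illustration):

```python
import numpy as np

studied = np.array([0, 1, 0, 2])           # A, B, A, C
pos_act = np.array([0.4, 0.3, 0.2, 0.1])   # (M^CF c) per position, illustrative
recallable = np.array([True, True, True])  # per-item flags
tau = 2.0                                  # choice sensitivity

n_items = 3
# a_k: sum the position activations belonging to each item
item_act = np.array([pos_act[studied == k].sum() for k in range(n_items)])

# a'_k: apply sensitivity, then zero out already-recalled items
scaled = item_act**tau * recallable  # [0.36, 0.09, 0.01]
```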

Comparison: Position vs Item Recallability

Positional CMR

Code
Studied:     [A, B, A, C]  (A at positions 0, 2)
Recallable:  [1, 1, 1, 1]  (per position)

After recalling A:
Recallable:  [0, 1, 0, 1]  (positions 0 and 2 gone)

But if allow_repeated_recalls=True:
Recallable:  [1, 1, 1, 1]  (still all available)

Distinct Contexts CMR

Code
Studied:     [A, B, A, C]  (A at positions 0, 2)
Recallable:  [1, 1, 0, 1]  (item-indexed: one slot per item; the repeated A gets no extra slot, so index 2 stays 0)

After recalling A:
Recallable:  [0, 1, 0, 1]  (item A gone, positions tracked via `studied`)

Usage

Code
from jaxcmr.models.distinct_contexts_cmr import CMR

params = {
    "encoding_drift_rate": 0.5,
    "start_drift_rate": 0.5,
    "recall_drift_rate": 0.5,
    "learning_rate": 0.5,
    "primacy_scale": 2.0,
    "primacy_decay": 0.8,
    "shared_support": 0.05,
    "item_support": 0.25,
    "choice_sensitivity": 0.6,
    "stop_probability_scale": 0.05,
    "stop_probability_growth": 0.2,
    "learn_after_context_update": True,
    "allow_repeated_recalls": False,
}

model = CMR(list_length=16, parameters=params)

Predictions

Context After Recall

When recalling a repeated item:

  • Positional CMR: Strongly reinstates the “winning” presentation’s context
  • Distinct Contexts CMR: Reinstates a blend of all presentations equally

This affects which neighbors are cued next.
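A toy contrast of the two reinstatement rules, assuming the two presentations of A stored orthogonal contexts (the competition weights are made up):

```python
import numpy as np

c0 = np.array([1.0, 0.0])  # context stored at A's first presentation
c2 = np.array([0.0, 1.0])  # context stored at A's second presentation

# Positional CMR: reinstatement weighted by each trace's activation
w = np.array([0.8, 0.2])            # illustrative competition weights
positional = w[0] * c0 + w[1] * c2  # leans toward the winning trace

# Distinct Contexts CMR: every presentation pooled with equal weight
pooled = c0 + c2  # normalized downstream by context integration
```

A cue built from `pooled` favors the neighbors of both presentations equally, while `positional` favors the neighbors of the winning one.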

First vs Second Presentation Advantage

Both models can show first-presentation advantages (due to primacy). But:

  • Positional CMR: Explicit competition between traces
  • Distinct Contexts CMR: Pooled activation, less explicit competition

Implementation

Key differences from Positional CMR:

Code
def experience_item(self, item_index):
    # ... positional encoding as before ...

    # Recallability is tracked at the item level
    return self.replace(
        recallable=self.recallable.at[item_index].set(True),  # item_index, not study_index
        ...
    )

def retrieve_item(self, item_index):
    # Pool across all positions of this item (binary, not weighted)
    positions_mask = self.studied == item_index + 1
    new_context = self.context.integrate(
        self.mfc.probe(positions_mask),  # binary mask, no activation weighting
        self.recall_drift_rate,
    )

    return self.replace(
        context=new_context,
        # Item-level toggle, not position-level
        recallable=self.recallable.at[item_index].set(self.allow_repeated_recalls),
        ...
    )

When To Use

Use Distinct Contexts CMR when:

  • You want distinct encoding but simpler retrieval
  • The weighted reinstatement of Positional CMR seems too complex
  • You’re modeling item-level recognition (was this item recalled?) not source memory (which presentation?)

Theoretical Position

This model represents a view that:

  1. Encoding is event-specific: Each presentation creates its own trace
  2. Retrieval aggregates: Accessing an item pulls from all its traces
  3. Recallability is about items: You recall items, not specific presentations