```python
# After recall of item k:
self.recallable = self.recallable.at[item_index].set(False)
# NOT: self.recallable * (self.studied != item_k)
```

## Positional encoding with item-level recall
Distinct Contexts CMR combines positional encoding (each presentation gets a distinct trace) with item-level recallability (once you recall an item, all its presentations become non-recallable).
| Aspect | Positional CMR | Distinct Contexts CMR |
|---|---|---|
| Encoding | Position-based | Position-based |
| MCF structure | Position → Position | Position → Position |
| Recallability | Per-position | Per-item |
| Context reinstatement | Weighted by position | All positions pooled |
This is a middle ground: distinct traces during encoding, but unified recall behavior.
In Positional CMR, recalling any presentation of an item makes all presentations non-recallable. But the context reinstated depends on which trace “won” the competition.
In Distinct Contexts CMR:

- Recalling an item marks the item as recalled (not individual positions)
- Context reinstatement pools across all the item's positions equally
This simplifies the retrieval process while maintaining distinct encoding.
Same as Positional CMR, position-based:

\[\mathbf{c}^{IN}_i = M^{FC} \mathbf{p}_i\]

\[\Delta M^{FC} = \gamma \, \mathbf{c}_j \mathbf{p}_i^\top\]

\[\Delta M^{CF} = \phi_i \, \mathbf{p}_i \mathbf{c}_j^\top\]
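These updates can be sketched in plain NumPy (this is illustrative, not the jaxcmr implementation; the dimensions, names, and values are made up):

```python
import numpy as np

# Illustrative sketch of the position-based encoding updates.
n = 4
M_fc = np.zeros((n, n))  # position -> context associations
M_cf = np.zeros((n, n))  # context -> position associations

def encode(M_fc, M_cf, p_i, c_j, gamma=0.5, phi_i=1.0):
    """One study event: probe, then Hebbian outer-product updates."""
    c_in = M_fc @ p_i                          # c_in = M_FC p_i
    M_fc = M_fc + gamma * np.outer(c_j, p_i)   # dM_FC = gamma * c_j p_i^T
    M_cf = M_cf + phi_i * np.outer(p_i, c_j)   # dM_CF = phi_i * p_i c_j^T
    return M_fc, M_cf, c_in

p0 = np.eye(n)[0]                # one-hot position vector p_0
c0 = np.full(n, 1 / np.sqrt(n))  # current context (unit length)
M_fc, M_cf, _ = encode(M_fc, M_cf, p0, c0)
```

After one event, column 0 of `M_fc` holds the scaled context, and row 0 of `M_cf` holds it back-projected to that position.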
Recallability is tracked at the item level, not the position level:
When item \(k\) is recalled:
1. Find all positions where \(k\) was studied: \[\text{positions}_k = \{i : \text{studied}[i] = k\}\]
2. Pool as a binary mask (not weighted): \[\mathbf{p}_{cue} = \sum_{i \in \text{positions}_k} \mathbf{p}_i\]
3. Probe \(M^{FC}\) with the pooled cue: \[\mathbf{c}^{IN} = M^{FC} \mathbf{p}_{cue}\]
This equally weights all presentations, unlike Positional CMR which weights by activation.
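A minimal NumPy sketch of the pooled, binary cue (variable names and values are illustrative, not jaxcmr's internals):

```python
import numpy as np

# Pooled binary cue for a repeated item.
n = 6
studied = np.array([1, 2, 3, 1, 4, 1])  # item at each position; item 1 appears 3 times
P = np.eye(n)                           # row i is the one-hot position vector p_i
rng = np.random.default_rng(0)
M_fc = rng.random((n, n))               # arbitrary position -> context associations

k = 1
positions_k = np.flatnonzero(studied == k)  # positions_k = {i : studied[i] = k}
p_cue = P[positions_k].sum(axis=0)          # binary mask: each presentation weighted 1
c_in = M_fc @ p_cue                         # c_in = M_FC p_cue
```

Probing with the summed one-hot vectors just adds up the corresponding columns of `M_fc`, so every presentation contributes equally.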
Pool position activations by item: \[a_k = \sum_{i : \text{studied}[i] = k} (M^{CF} \mathbf{c})_i\]
Then apply sensitivity: \[a'_k = a_k^\tau \cdot \mathbf{1}[\text{recallable}[k]]\]
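The pooling-plus-sensitivity step can be sketched as follows (the activations and study list here are invented for illustration):

```python
import numpy as np

# Item-level activation pooling, then choice sensitivity and the recallability mask.
studied = np.array([1, 2, 1, 3])                # item at each study position
position_acts = np.array([0.4, 0.2, 0.6, 0.3])  # (M_CF c)_i for each position i
tau = 2.0                                       # choice sensitivity
recallable = np.array([True, True, False])      # item 3 was already recalled

# a_k: summed activation over all positions where item k was studied
a = np.array([position_acts[studied == k + 1].sum() for k in range(recallable.size)])
a_prime = a ** tau * recallable                 # a'_k = a_k^tau * 1[recallable[k]]
```

Item 1's two presentations pool to 1.0, while item 3 is zeroed out by the mask despite a nonzero activation.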
```python
from jaxcmr.models.distinct_contexts_cmr import CMR

params = {
    "encoding_drift_rate": 0.5,
    "start_drift_rate": 0.5,
    "recall_drift_rate": 0.5,
    "learning_rate": 0.5,
    "primacy_scale": 2.0,
    "primacy_decay": 0.8,
    "shared_support": 0.05,
    "item_support": 0.25,
    "choice_sensitivity": 0.6,
    "stop_probability_scale": 0.05,
    "stop_probability_growth": 0.2,
    "learn_after_context_update": True,
    "allow_repeated_recalls": False,
}

model = CMR(list_length=16, parameters=params)
```

When recalling a repeated item:

- Positional CMR: strongly reinstates the “winning” presentation’s context
- Distinct Contexts CMR: reinstates a blend of all presentations equally
This affects which neighbors are cued next.
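The contrast can be sketched numerically; neither line is the exact jaxcmr computation, and the trace activations are invented:

```python
import numpy as np

# An item studied at positions 0 and 3; identity M_FC so the reinstated
# context directly mirrors the cued positions (illustrative setup).
n = 4
M_fc = np.eye(n)
P = np.eye(n)                 # row i is the one-hot position vector p_i
positions = np.array([0, 3])
acts = np.array([0.9, 0.1])   # trace activations at retrieval (made up)

weighted = M_fc @ (acts @ P[positions])   # Positional CMR: activation-weighted blend
pooled = M_fc @ P[positions].sum(axis=0)  # Distinct Contexts CMR: equal, binary pooling
```

The weighted cue is dominated by the winning trace, so the next recall is biased toward that presentation's neighbors; the pooled cue spreads support across the neighbors of every presentation.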
Both models can show first-presentation advantages (due to primacy). But:

- Positional CMR: explicit competition between traces
- Distinct Contexts CMR: pooled activation, less explicit competition
Key differences from Positional CMR:
```python
def experience_item(self, item_index):
    # ... positional encoding as before ...
    return self.replace(
        # Recallability is at item level
        recallable=self.recallable.at[item_index].set(True),  # item_index, not study_index
        ...
    )

def retrieve_item(self, item_index):
    # Pool across all positions of this item (binary, not weighted)
    positions_mask = self.studied == item_index + 1
    new_context = self.context.integrate(
        self.mfc.probe(positions_mask),  # Binary mask
        self.recall_drift_rate,
    )
    return self.replace(
        context=new_context,
        recallable=self.recallable.at[item_index].set(self.allow_repeated_recalls),
        # Item-level toggle, not position-level
        ...
    )
```

Use Distinct Contexts CMR when:
This model represents a view that: