Multiplicative Semantic CMR

Semantic similarity as multiplicative factor in retrieval

Multiplicative Semantic CMR incorporates semantic associations through multiplication rather than addition. Semantic similarity acts as a gating factor that modulates temporal context support, inspired by optimal foraging models of memory search.

The Mechanism

In Additive Semantic CMR: \[a_i = (a^{temp}_i + s \cdot a^{sem}_i)^\tau\]

In Multiplicative Semantic CMR: \[a_i = (a^{temp}_i)^\tau \times (a^{sem}_i)^s\]

The semantic term multiplies the temporal activation, acting as a gate or filter.
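The two rules can be sketched as standalone functions (an illustration of the equations above, not the library's implementation; they work elementwise on scalars or arrays):

```python
def additive_activation(temporal, semantic, s, tau):
    """Additive rule: sum the supports, then sharpen the combination once."""
    return (temporal + s * semantic) ** tau

def multiplicative_activation(temporal, semantic, s, tau):
    """Multiplicative rule: sharpen each factor separately, then multiply."""
    return (temporal ** tau) * (semantic ** s)
```

Note that under the multiplicative rule, zero semantic support zeroes the whole activation, no matter how strong the temporal support is.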

Why Multiplicative?

The multiplicative model reflects a foraging-inspired view of memory search:

  1. Patch quality (temporal context) determines baseline support
  2. Local similarity (semantic) modulates exploration within a patch
  3. High semantic similarity keeps you in a “semantic neighborhood”
  4. Low similarity encourages jumping to new regions

This captures the idea that you search within semantic clusters before switching.

Mathematical Specification

Retrieval Activations

Code
def activations(self):
    # Temporal support from MCF
    base_support = self.mcf.probe(self.context.state) * self.recallable

    # Semantic support from last recalled item
    if self.recall_total == 0:
        semantic_support = jnp.ones_like(base_support)  # Neutral multiplier
    else:
        last_item = self.recalls[self.recall_total - 1] - 1
        semantic_support = self.msem[last_item] * self.recallable

    # Scale each separately, then multiply
    scaled_temporal = power_scale(base_support, self.mcf_sensitivity)
    scaled_semantic = power_scale(semantic_support, self.semantic_scale)

    return (scaled_temporal * scaled_semantic) * self.recallable

The key is to scale each factor separately, then multiply. This means:

  • Each factor undergoes its own winner-take-all sharpening
  • An item needs support from both sources to have high activation
  • Zero in either factor → zero combined activation
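The `power_scale` helper is not reproduced in this document; here is a minimal sketch, assuming it is a plain elementwise power transform:

```python
def power_scale(support, exponent):
    """Sketch of a power transform (assumed behavior, not jaxcmr's source).

    Exponents above 1 sharpen contrasts between items (approaching
    winner-take-all), while an exponent of 0 flattens all items to
    equal weight.
    """
    return support ** exponent
```

Some implementations normalize supports before exponentiating for numerical stability; consult the library source for the exact definition.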

Parameters

Parameter            Symbol     Description
-------------------  ---------  --------------------------------
semantic_scale       \(s\)      Exponent for semantic similarity
choice_sensitivity   \(\tau\)   Exponent for temporal support

The Semantic Scale (as Exponent)

In the multiplicative model, semantic_scale acts as an exponent:

Value   Effect
------  ----------------------------------------
0.0     Semantic term = 1 (pure temporal)
0.5     Gentle semantic gating
1.0     Linear semantic influence
>1.0    Strong semantic gating (winner-take-all)

Higher values make the model more sensitive to semantic similarity differences.
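A quick illustration of how the exponent reshapes differences in similarity (plain arithmetic, not library code):

```python
# Ratio of semantic multipliers for a strong vs. weak associate
# under different values of the semantic_scale exponent s.
high, low = 0.9, 0.3
for s in (0.0, 0.5, 1.0, 2.0):
    ratio = (high ** s) / (low ** s)
    print(f"s = {s}: strong associate favored by {ratio:.2f}x")
```

At s = 0 the two items are indistinguishable (ratio 1), while at s = 2 a 3:1 similarity advantage becomes a 9:1 activation advantage.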

Gating Behavior

The multiplicative interaction creates gating:

Code
Temporal: [0.8, 0.3, 0.5, 0.2]  (item activations)
Semantic: [0.9, 0.1, 0.6, 0.4]  (similarity to last recall)

Additive:       [1.7, 0.4, 1.1, 0.6]  → Item 1 wins
Multiplicative: [0.72, 0.03, 0.30, 0.08]  → Item 1 wins more strongly

Items need both temporal and semantic support. This prevents transitions to semantically unrelated items even if they’re temporally close.
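The numbers above can be reproduced with plain Python, taking s = τ = 1 for simplicity:

```python
temporal = [0.8, 0.3, 0.5, 0.2]
semantic = [0.9, 0.1, 0.6, 0.4]

additive = [t + m for t, m in zip(temporal, semantic)]
multiplicative = [t * m for t, m in zip(temporal, semantic)]

# The winner's margin over the runner-up widens under multiplication:
# additive 1.7 vs 1.1 (about 1.5x), multiplicative 0.72 vs 0.30 (2.4x).
```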

Usage

Code
from jaxcmr.models.multiplicative_semantic_cmr import CMR, make_factory
import jaxcmr.components.linear_memory as LinearMemory
import jaxcmr.components.context as TemporalContext
from jaxcmr.components.termination import PositionalTermination

# Create factory with semantic features
Factory = make_factory(
    LinearMemory.init_mfc,
    LinearMemory.init_mcf,
    TemporalContext.init,
    PositionalTermination,
)

factory = Factory(dataset, word_embeddings)

params = {
    "encoding_drift_rate": 0.5,
    "start_drift_rate": 0.5,
    "recall_drift_rate": 0.5,
    "learning_rate": 0.5,
    "primacy_scale": 2.0,
    "primacy_decay": 0.8,
    "shared_support": 0.05,
    "item_support": 0.25,
    "choice_sensitivity": 0.6,  # Temporal exponent
    "semantic_scale": 0.8,      # Semantic exponent
    "stop_probability_scale": 0.05,
    "stop_probability_growth": 0.2,
    "learn_after_context_update": True,
    "allow_repeated_recalls": False,
}

model = factory.create_trial_model(trial_index=0, parameters=params)

Predictions

Strong Category Clustering

The multiplicative model predicts:

  • Strong clustering within semantic categories
  • Abrupt transitions between categories (when semantic support depletes)
  • Temporal transitions gated by semantic relevance

Foraging Patterns

Following Hills et al. (2012), the model predicts behavior similar to animal foraging:

  • Exploit: Stay in a semantic “patch” while resources remain
  • Explore: Switch patches when local resources deplete
  • Transitions follow semantic gradients within patches

Comparison: Additive vs Multiplicative

Behavior              Additive              Multiplicative
--------------------  --------------------  ---------------
Zero semantic         Temporal alone        Zero activation
High temp, low sem    Moderate activation   Low activation
Low temp, high sem    Moderate activation   Low activation
Category transitions  Gradual               Abrupt
Semantic clustering   Moderate              Strong

Key Difference

Additive: Semantic and temporal provide independent “votes”—each can succeed alone.

Multiplicative: Semantic gates temporal—you need both to have high activation.

First Recall (No Prior Item)

At the first recall, there’s no “last recalled item” to provide semantic similarity. The model handles this by setting semantic support to a vector of ones (a neutral multiplier):

Code
semantic_support = lax.cond(
    self.recall_total == 0,
    lambda: jnp.ones_like(base_support),  # First recall: no semantic filter
    lambda: self.msem[self.recalls[self.recall_total - 1] - 1],
)

This means the first recall is purely temporal (recency/primacy driven), and semantic effects emerge only after the first item.
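Since \(1^s = 1\) for any exponent, the neutral multiplier leaves the temporal term untouched; a one-line check (illustrative values):

```python
s, tau = 0.8, 0.6
temporal = 0.75

# First recall: the semantic factor is 1.0, so 1.0 ** s == 1.0 and the
# combined activation equals the sharpened temporal support alone.
combined = (temporal ** tau) * (1.0 ** s)
```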

Theoretical Background

This model draws from:

  • Hills, Jones & Todd (2012): Optimal foraging in semantic memory
  • Search of Associative Memory (SAM): Multiplicative cue combination
  • Random walk models: Semantic space as a landscape

The foraging metaphor suggests that memory search optimizes a tradeoff between exploitation (staying in a productive area) and exploration (moving to new areas).

References

  • Hills, T. T., Jones, M. N., & Todd, P. M. (2012). Optimal foraging in semantic memory. Psychological Review, 119(2), 431-440.
  • Raaijmakers, J. G., & Shiffrin, R. M. (1981). Search of associative memory. Psychological Review, 88(2), 93-134.