arXiv: 2405.15523

The Mosaic Memory of Large Language Models


This paper introduces the concept of 'mosaic memory' in Large Language Models. Through experiments on canaries and real-world datasets like SlimPajama, it demonstrates that LLMs memorize training data from fuzzy duplicates with only partial token overlap, and that this memorization is predominantly syntactic. The findings challenge existing deduplication practices and raise concerns for privacy, model utility, and benchmark fairness.

Large Language Model, Pre-training, Privacy-Preserving Machine Learning, Robustness, Reasoning

Igor Shilov, Matthieu Meeus, Yves-Alexandre de Montjoye

Imperial College London

Generated by grok-3

Background Problem

Large Language Models (LLMs) are pivotal in automating tasks and extracting insights from data, but their memorization of training data poses risks such as privacy breaches, copyright violations, and inflated benchmark performance. Traditionally, memorization has been understood as verbatim repetition of sequences, motivating mitigation strategies built around removing exact duplicates. The paper challenges this view by introducing 'mosaic memory': LLMs also memorize information from partially overlapping, fuzzy duplicates, revealing a gap in current understanding and in mitigation practices for privacy, confidentiality, and fair evaluation.

Method

The core idea is to demonstrate that LLMs exhibit ‘mosaic memory,’ memorizing training data not just through exact duplicates but also via fuzzy duplicates with partial overlaps. The methodology involves a framework using artificially crafted sequences (canaries) injected into training data to measure memorization through Membership Inference Attacks (MIAs). Key steps include:

  1. Generating reference canaries and their fuzzy duplicates by modifying tokens through replacement (A_replace), insertion (A_insert), and shuffling (A_shuffle), as well as through semantic paraphrasing (A_paraphrase).
  2. Injecting these sequences into training datasets and further training target LLMs (e.g., Llama-3.2, Phi-2, Gemma-2, GPT-Neo).
  3. Quantifying memorization using the exact duplicate equivalent (ρ), which measures the contribution of fuzzy duplicates to memorization relative to exact duplicates via MIA performance (ROC AUC).
  4. Analyzing real-world datasets like SlimPajama to assess the prevalence of fuzzy duplicates using metrics such as Levenshtein and Hamming distances.

This approach highlights the syntactic rather than semantic nature of memorization: token overlap drives memorization more than shared meaning. Minimal sketches of the canary generation, the MIA measurement, and the ρ estimation follow below.
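To make step 1 concrete, here is a minimal Python sketch of how token-level fuzzy duplicates might be generated from a reference canary. The helper names (`replace_tokens`, `insert_tokens`, `shuffle_tokens`) and the random-token canary are illustrative assumptions, not the paper's exact procedure.

```python
import random

def replace_tokens(tokens, frac, vocab_size, rng):
    """A_replace: overwrite a random fraction of positions with random token ids."""
    out = list(tokens)
    for i in rng.sample(range(len(out)), k=int(frac * len(out))):
        out[i] = rng.randrange(vocab_size)
    return out

def insert_tokens(tokens, frac, vocab_size, rng):
    """A_insert: splice random token ids between the original ones."""
    out = list(tokens)
    for _ in range(int(frac * len(tokens))):
        out.insert(rng.randrange(len(out) + 1), rng.randrange(vocab_size))
    return out

def shuffle_tokens(tokens, frac, rng):
    """A_shuffle: permute a random subset of positions among themselves."""
    out = list(tokens)
    idx = rng.sample(range(len(out)), k=int(frac * len(out)))
    vals = [out[i] for i in idx]
    rng.shuffle(vals)
    for i, v in zip(idx, vals):
        out[i] = v
    return out

rng = random.Random(0)
canary = rng.choices(range(50_000), k=100)  # reference canary: 100 random token ids
fuzzy_duplicates = [replace_tokens(canary, 0.10, 50_000, rng)  # 10% token replacement
                    for _ in range(1_000)]
```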
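Step 3 relies on a Membership Inference Attack. The paper's exact attack is not reproduced here; as an assumption, the sketch below uses the common loss-based MIA, where lower model loss on a sequence is taken as evidence of membership. `member_texts` and `nonmember_texts` are hypothetical placeholders for injected and held-out canaries.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.metrics import roc_auc_score

model_name = "EleutherAI/gpt-neo-1.3B"  # one of the model families the paper targets
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def sequence_loss(text: str) -> float:
    """Mean token-level cross-entropy of `text` under the target model."""
    ids = tok(text, return_tensors="pt").input_ids
    return model(ids, labels=ids).loss.item()

# member_texts: canaries injected into training; nonmember_texts: held-out canaries.
scores = [-sequence_loss(t) for t in member_texts + nonmember_texts]
labels = [1] * len(member_texts) + [0] * len(nonmember_texts)
auc = roc_auc_score(labels, scores)  # AUC of 0.5 means no memorization signal
```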
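The exact duplicate equivalent ρ compares the MIA performance obtained from fuzzy duplicates against a calibration curve of MIA performance versus the number of exact duplicates. The sketch below shows one plausible way to operationalize this via interpolation; the calibration values are made-up placeholders, and the paper's precise estimator may differ.

```python
import numpy as np

# Calibration: MIA AUC measured after injecting k exact duplicates of a canary.
# (Placeholder values for illustration only.)
n_exact   = np.array([1, 2, 5, 10, 20, 50, 100])
auc_exact = np.array([0.52, 0.55, 0.62, 0.70, 0.80, 0.91, 0.97])

def exact_duplicate_equivalent(auc_fuzzy: float, n_fuzzy: int) -> float:
    """rho: fraction of an exact duplicate each fuzzy duplicate is 'worth'.

    Invert the calibration curve to find how many exact duplicates would
    produce the observed AUC, then normalize by the fuzzy duplicate count.
    """
    n_equiv = np.interp(auc_fuzzy, auc_exact, n_exact)  # AUC -> equivalent count
    return n_equiv / n_fuzzy

rho = exact_duplicate_equivalent(auc_fuzzy=0.96, n_fuzzy=100)
# rho near 1 means fuzzy duplicates are memorized almost like exact ones.
```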

Experiment

Experiments were conducted on four LLM families (Llama-3.2, Phi-2, Gemma-2, GPT-Neo) using synthetic canaries to evaluate mosaic memory. Datasets included controlled training sets with injected canaries and the real-world SlimPajama dataset (627 billion tokens, deduplicated at the document level). The setup varied the fuzzy duplicate modifications (token replacement, insertion, shuffling, and paraphrasing) and measured memorization via MIA performance, reported as the exact duplicate equivalent (ρ).

Results showed significant memorization from fuzzy duplicates, with ρ as high as 0.8 for minor modifications (10% token replacement) and 0.15-0.19 for heavy modifications (50% replacement), consistent across models. Memorization was robust to noise (insertions) and partial shuffling, but predominantly syntactic: semantic similarity had minimal impact, with paraphrasing yielding low ρ of 0.11-0.30. In SlimPajama, fuzzy duplicates remained abundant despite deduplication: sequences with 1,000 exact duplicates also had 4,000-20,000 fuzzy duplicates at varying distances, contributing significantly to memorization (ρ > 0.2).

The setup is thorough for controlled settings but lacks diversity in model architectures and datasets, and the extrapolation to real-world data assumes fuzzy duplicates are uniformly distributed across the corpus, which may not hold. The results support the paper's challenge to verbatim-memorization assumptions and expose limitations in current deduplication practices. A brute-force sketch of the distance-based duplicate counting appears below.
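The SlimPajama analysis counts near-duplicates at fixed token distances. As a simplifying assumption, the sketch below uses token-level Hamming distance over sliding windows; the actual large-scale analysis would require efficient approximate matching (e.g., sketching or suffix-array-based search) rather than this linear scan.

```python
def hamming(a: list[int], b: list[int]) -> int:
    """Token-level Hamming distance between equal-length token sequences."""
    return sum(x != y for x, y in zip(a, b))

def count_fuzzy_duplicates(target: list[int], corpus: list[int], max_dist: int) -> int:
    """Count sliding windows of `corpus` within `max_dist` substitutions of `target`.

    O(len(corpus) * len(target)) brute force, for illustration only.
    """
    n, hits = len(target), 0
    for i in range(len(corpus) - n + 1):
        if hamming(target, corpus[i:i + n]) <= max_dist:
            hits += 1
    return hits
```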

Further Thoughts

The concept of mosaic memory opens up critical discussions on how LLMs process and retain information, particularly the surprising dominance of syntactic over semantic memorization. This finding could be linked to the attention mechanisms in Transformer architectures, which may prioritize token-level patterns over deeper contextual understanding during training. It would be insightful to explore whether newer architectures or training paradigms, such as those emphasizing semantic embeddings or contrastive learning, could shift this balance towards semantic memorization, potentially improving model generalization while reducing privacy risks.

Additionally, the prevalence of fuzzy duplicates in deduplicated datasets like SlimPajama suggests a need for hybrid deduplication strategies combining syntactic and semantic approaches, perhaps inspired by techniques in natural language processing for paraphrase detection.

I am also intrigued by the ethical implications of designing deduplication-resistant canaries, as mentioned in the paper. While useful for copyright protection, this could be exploited to embed harmful content or misinformation in models, necessitating robust alignment and safety mechanisms in LLM development. Finally, connecting this work to federated learning contexts, where data privacy is paramount, could reveal whether mosaic memory exacerbates privacy leakage in distributed training scenarios, prompting novel mitigation strategies.


