Research

SEDIM Architecture Papers

Empirical results and open research from Nage AI and Autorite.

Results

What the numbers show

Val Loss: 0.0265, beats FFT (0.0281) and LoRA (0.0278)
Convergence: 120× improvement from CLAST initialization over random initialization
Routing: 94%+ STEMMA accuracy (94.5% Turkish, 94.1% English)
SEDIM-Bench: 80.0 overall benchmark score (5 dimensions, perfect isolation)
Transparency Note: Initial results (N=1, single domain). Multi-seed, multi-domain validation is in progress under a rigorous experimental protocol.
SEDIM Paper

Sedimentary Intelligence Model

SEDIM: Sedimentary Intelligence Model — VARVE, FACIES, CLAST, STRIA, and STEMMA for Source-Attributed Compositional LLMs


CENTO = FACIES + Σ_i STEMMA_i(x) · VARVE_i

SEDIM decomposes any final weight matrix into a permanent base (FACIES) and a set of source-attributed deltas (VARVEs), each traceable to a specific training phase and knowledge origin. At inference, the STEMMA routing function dynamically weights each VARVE's contribution based on query semantics.
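
The composition rule above can be sketched in a few lines. This is a minimal NumPy illustration of the stated equation, not the project's actual implementation; the function name `cento_forward` and the toy shapes are assumptions for the example.

```python
import numpy as np

def cento_forward(x, facies, varves, stemma_weights):
    """Apply the CENTO composition: CENTO = FACIES + sum_i STEMMA_i(x) * VARVE_i.
    facies: (d_out, d_in) permanent base weights.
    varves: list of (d_out, d_in) source-attributed delta matrices.
    stemma_weights: per-query routing weights, one scalar per VARVE."""
    w = facies.copy()
    for weight, varve in zip(stemma_weights, varves):
        w += weight * varve          # each delta contributes proportionally to its routing weight
    return w @ x

# Toy example: a base matrix plus two source-attributed deltas.
rng = np.random.default_rng(0)
facies = rng.normal(size=(4, 3))
varves = [rng.normal(size=(4, 3)) for _ in range(2)]
x = rng.normal(size=3)
y = cento_forward(x, facies, varves, stemma_weights=[0.7, 0.3])
```

Dropping a VARVE's weight to zero removes that knowledge source from the output entirely, which is what makes per-source attribution and isolation possible at inference time.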


Key Contributions
Source attribution at the architecture level
Knowledge isolation across VARVE layers
CONFLUX cross-architecture transfer protocol
4 inference modes (single-VARVE, multi-VARVE, STEMMA-routed, full CENTO)
SEDIM-Bench evaluation standard
Paper Status
Status: Published on arXiv
arXiv: April 2026
Venue: arXiv → NeurIPS 2026
Citation (ArXiv)
Öztürk, Ö.A. (2026). SEDIM: Sedimentary Intelligence Model — VARVE, FACIES, CLAST, STRIA, and STEMMA for Source-Attributed Compositional LLMs. arXiv preprint. github.com/NageAI/sedim
CONFLUX

Cross-Architecture SVD Transfer

CONFLUX is a cross-architecture SVD transfer framework developed by Autorite. It uses Centered Kernel Alignment (CKA) as the primary quality gate for knowledge transfer between models trained on different domain corpora.


Core finding: CKA > 0.8 indicates strong transfer potential. CKA < 0.3 yields minimal benefit. This threshold relationship is the primary quality gate for SEDIM training.
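
Linear CKA between two activation matrices can be computed directly, and the thresholds above applied as a gate. This is a minimal sketch of the standard linear-CKA formula; the helper names `linear_cka` and `transfer_gate` are illustrative, not CONFLUX's API.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between activation matrices
    X (n, d1) and Y (n, d2) collected over the same n probe inputs."""
    X = X - X.mean(axis=0)           # center features before comparison
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

def transfer_gate(cka):
    """Apply the quoted thresholds: > 0.8 strong, < 0.3 minimal."""
    if cka > 0.8:
        return "strong"
    if cka < 0.3:
        return "minimal"
    return "uncertain"

# Identical representations yield CKA = 1.0, so a layer compared with itself passes the gate.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 16))
print(transfer_gate(linear_cka(X, X)))  # prints "strong"
```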


The asymmetric SVD initialization approach initializes only the A-matrix at scale 0.01 while setting B to zeros — a deliberate asymmetry that preserves the decomposed structure of source weights without over-constraining the target model.
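
A sketch of that asymmetric initialization, assuming a LoRA-style delta of the form B @ A: A is seeded from the top right-singular vectors of the source weights at scale 0.01, while B starts at zero, so the initial delta is exactly zero and the target model is unchanged at step 0. The function name and the choice of right-singular vectors for A are assumptions for illustration.

```python
import numpy as np

def asymmetric_svd_init(source_w, rank):
    """Initialize a rank-r delta B @ A from source weights.
    A carries decomposed source structure at scale 0.01; B is all zeros."""
    _, _, vt = np.linalg.svd(source_w, full_matrices=False)
    A = 0.01 * vt[:rank]                      # (rank, d_in): top singular directions
    B = np.zeros((source_w.shape[0], rank))   # (d_out, rank): learned from zero
    return A, B

source = np.random.default_rng(1).normal(size=(8, 6))
A, B = asymmetric_svd_init(source, rank=2)
```

Because B is zero, gradients flowing into B decide how much of the pre-seeded source structure in A is actually used, which is the sense in which the target model is not over-constrained.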

Transfer Threshold
CKA > 0.8: strong transfer
CKA < 0.3: minimal benefit
Open Source
github.com/NageAI/conflux — Apache 2.0
SEDIM-Bench

Evaluation Standard

SEDIM-Bench is the five-dimensional evaluation standard for source-attributed models. It measures attribution accuracy, knowledge isolation, compositional coherence, transfer efficiency, and inference routing quality.


Benchmark Dimensions
Attribution accuracy (STEMMA routing precision)
Knowledge isolation (cross-VARVE contamination)
Compositional coherence (CENTO output quality)
Transfer efficiency (CKA-guided CONFLUX scores)
Inference routing quality (mode-specific benchmarks)
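
A composite score over the five dimensions above might be aggregated as follows. The unweighted mean used here is an assumption; the published 80.0 score may use different weights, and the example per-dimension values are illustrative, not measured results.

```python
def sedim_bench_overall(scores):
    """Aggregate five per-dimension scores (0-100 each) into an overall score.
    Unweighted mean is an assumed aggregation, not the documented formula."""
    dims = ("attribution", "isolation", "coherence", "transfer", "routing")
    missing = [d for d in dims if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return sum(scores[d] for d in dims) / len(dims)

# Hypothetical per-dimension scores that happen to average to 80.0.
example = {"attribution": 94, "isolation": 100, "coherence": 72,
           "transfer": 65, "routing": 69}
print(sedim_bench_overall(example))  # prints 80.0
```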
Release Plan
Status: Released (open source)
Score: 80.0 / 100 (nm/fehm)
Timeline

Research roadmap

April 2026 ✓
Paper on ArXiv + SEDIM production model
Val loss 0.0265 (beats FFT and LoRA). STEMMA routing 94%+ accuracy. Model on HuggingFace. SEDIM-Bench released (80/100). Platform live beta.
May 2026
Multi-seed validation + community benchmarks
N≥5 seed validation with statistical significance tests. HuggingFace leaderboard for source-attributed model comparison. Domain-separated training.
June 2026
Model portfolio + NeurIPS submission
Ming (code generation), Cortex (reasoning) SEDIM models. NeurIPS 2026 paper submission with full multi-domain results.
July 2026
LAUNCH — Public platform + marketplace
Full platform public access. VARVE marketplace for third-party knowledge layers. Dynamic VARVE insertion via SEP protocol.