Research imprint

Outcome-first research for the age of deployed AI.

Everwhy Research publishes framework papers and applied studies on what AI systems actually change in the world: outcomes, structural magnitude, and evidence strength.

What the surface is for

Benchmarks count potential under controlled tasks.

Analytics count interaction and workflow activity.

Research here classifies realized outcomes with explicit evidence.

Measurement stack

Benchmarks

Measure capability under designed tasks and constrained prompts.

Analytics

Measure retention, interaction volume, and workflow behavior.

Research here

Classify outcome type, structural scale, and evidence strength.

Publishing posture

Web-native summaries stay fast to scan; companion PDFs are linked rather than duplicated where the canonical paper already lives elsewhere.

Featured publication

Framework paper · March 2026 · Live PDF

VCF: A Canonical Framework for Classifying Realized AI Value

VCF (Value Classification Framework) defines the value episode as the atomic unit of AI value measurement and represents each episode through Outcome Primitive, Outcome Intent, Outcome Magnitude, and dual-axis evidence semantics. The framework is designed to make realized outcomes comparable across systems and time while staying honest about uncertainty.

Atomic unit

Value episode

Canonical components

OP + OI + OM + evidence

Public projection

3 visible axes

Why this paper matters

Benchmarks measure potential and analytics measure interaction, but neither classifies the realized outcome itself.

The minimum public reporting projection is `OP × OM_goal × claim_strength_tier`, preserving outcome type, structural scale, and evidence strength.

The framework is explicitly source-grounded: outcome theory, validity, quasi-experimental design, and causal language constraints each map to concrete VCF design decisions.
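The episode representation and its public projection can be sketched as a simple record type. This is an illustrative sketch only: the field names, example values, and tier labels are hypothetical, not taken from the VCF paper; what it shows is the shape of the idea, a full episode (OP + OI + OM + evidence) projected down to the three public axes `OP × OM_goal × claim_strength_tier`.

```python
from dataclasses import dataclass

@dataclass
class ValueEpisode:
    """Hypothetical sketch of a VCF value episode (names are illustrative)."""
    outcome_primitive: str    # OP: the type of realized outcome
    outcome_intent: str       # OI: what the person was trying to accomplish
    om_goal: str              # OM_goal: structural scale of the goal
    claim_strength_tier: str  # evidence strength behind the outcome claim

    def public_projection(self) -> tuple[str, str, str]:
        # Minimum public reporting projection: OP x OM_goal x claim_strength_tier,
        # preserving outcome type, structural scale, and evidence strength.
        return (self.outcome_primitive, self.om_goal, self.claim_strength_tier)

episode = ValueEpisode(
    outcome_primitive="create",
    outcome_intent="draft a project report",
    om_goal="task",
    claim_strength_tier="self-report",
)
projection = episode.public_projection()  # ('create', 'task', 'self-report')
```

The projection deliberately drops Outcome Intent: the public axes stay comparable across systems and time while the fuller episode record retains the context needed for honest uncertainty.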

Publication list

Current papers

The research surface stays intentionally compact: one framework paper, one companion application, and a reusable publication layer that can grow without turning into a catalog wall.

Framework paper · March 2026

VCF: A Canonical Framework for Classifying Realized AI Value

A canonical framework for classifying what people actually accomplish through deployed AI systems, rather than only what models can do on tests.

Status

Live PDF

Authors

Vitaliy Soultan

Open publication

Companion application · March 2026

Outcome Primitives: A Framework for Measuring AGI Value in the World

The first large-scale application of a public VCF projection to in-vivo AI usage, measuring 17,921 value episodes across 1,305 participants.

Status

Live PDF

Authors

Hamudi Naanaa, Vitaliy Soultan, Volodymyr Panchenko

Open publication