The Commitment
If this succeeds, we'll scale it.
If it fails, we'll say so.
These are the gates that decide.
Validation Gates
We've defined specific, measurable criteria at three checkpoints. Failure on any criterion triggers the halt path. No exceptions, no reinterpretation.
Can we detect meaningful signal in the data? Does DLI correlate with anything real?
- DLI Variance: standard deviation > 10 points across the cohort (the metric differentiates participants)
- Completion Rate: > 40% of participants complete their weekly check-ins
- Self-Correlation: DLI correlates with self-reported overwhelm at r > 0.5
If failed: DLI doesn't measure anything real. Publish findings. Halt experiment.
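The checks behind this first gate are simple to compute from cohort data. Below is a minimal Python sketch, assuming one DLI score, one check-in completion count, and one self-reported overwhelm rating per participant, paired by index; the function and variable names are illustrative, not part of the published methodology, and we read the correlation threshold as Pearson's r.

```python
import statistics

def gate_one(dli_scores, checkins_done, checkins_expected, overwhelm_ratings):
    """Evaluate the three first-gate criteria from raw cohort data (illustrative)."""
    # Criterion 1: the metric differentiates participants (std dev > 10 points).
    variance_ok = statistics.stdev(dli_scores) > 10

    # Criterion 2: > 40% of participants complete their weekly check-ins.
    completers = sum(1 for done, expected in zip(checkins_done, checkins_expected)
                     if done >= expected)
    completion_ok = completers / len(checkins_done) > 0.40

    # Criterion 3: DLI correlates with self-reported overwhelm at r > 0.5
    # (Pearson's r, via statistics.correlation, available in Python 3.10+).
    correlation_ok = statistics.correlation(dli_scores, overwhelm_ratings) > 0.5

    # All three must pass; any single failure triggers the halt path.
    return variance_ok and completion_ok and correlation_ok
```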
Does awareness plus tooling change actual behavior? Can participants observe differences in how they work?
- Retention: > 50% of Day-30 participants still active
- Observable Change: at least 3 participants report measurable changes in how they work
- Test-Retest Reliability: DLI stability > 0.7 under the same conditions
If failed: Awareness doesn't drive change. The tool doesn't work. Publish findings. Halt experiment.
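These criteria are also mechanical to check. Here is a minimal sketch, assuming participant counts for retention, a list of change reports, and two paired DLI measurement passes taken under the same conditions; estimating test-retest reliability as the Pearson correlation between the two passes is a common choice but is our assumption, not something the methodology paper prescribes.

```python
import statistics

def gate_two(active_day30, enrolled_day30, change_reports,
             dli_first_pass, dli_second_pass):
    """Evaluate the second-gate criteria (illustrative names and data shapes)."""
    # Criterion 1: > 50% of Day-30 participants still active.
    retention_ok = active_day30 / enrolled_day30 > 0.50

    # Criterion 2: at least 3 participants report measurable work changes.
    change_ok = len(change_reports) >= 3

    # Criterion 3: test-retest reliability > 0.7, estimated here as the Pearson
    # correlation between two paired measurement passes under the same conditions.
    reliability_ok = statistics.correlation(dli_first_pass, dli_second_pass) > 0.7

    return retention_ok and change_ok and reliability_ok
```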
Is there external willingness to pay for this signal? Can this become sustainable?
- Retention: > 50% of the original cohort still active at Day 90
- Willingness to Pay: at least 20% of participants indicate they'd pay to continue
- Sponsor Interest: at least 3 sponsor conversations initiated
If failed: No viable business model exists. Publish aggregate findings. Halt experiment.
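The decision rule at every gate is the same and deliberately mechanical: every criterion must pass, and any single failure triggers the halt path. A minimal sketch of that rule, using this third gate's thresholds as the worked example (the criterion names and numbers are illustrative):

```python
def decide(criteria: dict[str, bool]) -> str:
    """All criteria must be met; any failure means halt and publish."""
    if all(criteria.values()):
        return "pass: proceed (or scale, after the final gate)"
    failed = ", ".join(name for name, met in criteria.items() if not met)
    return f"halt: publish findings; failed criteria: {failed}"

# Example: 55% retention, 15% willingness to pay, 4 sponsor conversations.
print(decide({
    "retention": 0.55 > 0.50,
    "willingness_to_pay": 0.15 >= 0.20,
    "sponsor_interest": 4 >= 3,
}))
# -> halt: publish findings; failed criteria: willingness_to_pay
```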
What Success Looks Like
If we pass all gates:
- DLI is a validated signal that predicts cognitive load
- Participants report measurable changes in how they work
- There's demonstrated willingness to pay for the tool
- We have evidence to support scaling responsibly
- Published findings contribute to productivity research
What Failure Looks Like
If we fail any gate:
- We publish exactly what we learned (including why it failed)
- We return any unused funds to participants
- We halt the experiment publicly and transparently
- We do NOT pivot to a different model or reframe the failure
- The data becomes public research for others to build on
Why We're Publishing This
Most productivity tools launch with bold claims and vague success metrics. If they don't work, they quietly pivot or shut down. Nobody learns anything.
We think that's backwards.
By publishing our validation gates in advance, we're committing to a specific, falsifiable hypothesis. If we're wrong, the world learns something. If we're right, the evidence is credible because it was defined before we knew the outcome.
This is what research-first actually means.
Published Framework
The Decision Load Index methodology is publicly documented and citable:
Saleme, M. K. (2026). Decision Load Index: A conceptual framework for measuring cognitive burden in knowledge work. Zenodo. https://doi.org/10.5281/zenodo.18207848
This preprint establishes our theoretical foundations, component definitions, and planned validation approach. We publish the methodology before validation so our framework can be scrutinized independently.
Understand the stakes. Ready to participate in the experiment?
Apply to the Founding Cohort
Learn the Method