Why Am I Building ?

Causal inference is now expected from modern data teams—but the tools, datasets, and workflows have not kept up. exists to close that gap.

Built for data scientists and product teams who want real cause-and-effect, not just correlations.

The reality: running experiments is easy, getting causal answers is hard

Over the last decade, experimentation platforms, dashboards, and feature flags have become standard. Most product teams can run an A/B test, log a metric, and ship a winner. But when it comes to causal inference—understanding the true impact of changes across complex products and markets—the tooling is still fragmented and fragile.

Data scientists end up stitching together one-off notebooks, custom regressions, and half-documented scripts for each new question. Assumptions are rarely formalized. Diagnostics are buried. Reuse is low. The result: the process consumes most of the effort, and data scientists have little time left to think about the more interesting questions.

Common failure modes in real teams

A/B tests used where they don't fit

Many questions cannot be answered by a clean randomized test: geo launches, policy changes, pricing shifts, infra migrations, or long-run retention questions. Teams still run "A/B-like" analyses anyway, ignoring the assumptions being violated, which leads to overconfident decisions.

Misapplied methods

Difference-in-Differences, Propensity Score Matching, Synthetic Control, and uplift models are powerful—but easy to misuse. Parallel trends are assumed but not checked, matching settings are guessed, and panel structure is an afterthought.
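
To make the parallel-trends point concrete, here is a minimal sketch of the kind of pre-period check that gets skipped. It assumes a pandas DataFrame with hypothetical columns unit, period, treated (1 for units that will eventually be treated), and y; none of these names come from the text.

```python
import pandas as pd
import statsmodels.formula.api as smf

def pre_trend_check(df: pd.DataFrame, treatment_start: int) -> float:
    """Fit y ~ treated * period on pre-treatment rows only.
    A clearly non-zero treated:period interaction means treated and control
    units were already diverging, so parallel trends is doubtful."""
    pre = df[df["period"] < treatment_start]
    model = smf.ols("y ~ treated * period", data=pre).fit(
        cov_type="cluster", cov_kwds={"groups": pre["unit"]}
    )
    # p-value on the group-by-time interaction in the pre-period
    return model.pvalues["treated:period"]
```

If that interaction is clearly non-zero, the groups were drifting apart before anything launched, and a DiD estimate on the same data deserves extra scrutiny.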

Boilerplate instead of insight

Data scientists spend hours on tasks that should be mechanical: reshaping data into panels, wiring up models, formatting plots, hand-crafting significance tests, and rewriting the same diagnostics over and over.
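
As a small illustration of that boilerplate, the panel-reshaping step alone is a handful of pandas lines that get rewritten in almost every analysis; the column names user_id, date, and metric below are placeholders, not anything defined here.

```python
import pandas as pd

def to_panel(events: pd.DataFrame) -> pd.DataFrame:
    """Collapse a long event log (one row per user_id/date observation)
    into a unit-by-period panel: one row per user, one column per week."""
    # Assumes `date` is already a datetime64 column.
    weekly = events.assign(week=events["date"].dt.to_period("W"))
    return (
        weekly.pivot_table(index="user_id", columns="week",
                           values="metric", aggfunc="sum", fill_value=0)
              .sort_index(axis=1)
    )
```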

Knowledge locked in notebooks

Critical decisions live in scattered notebooks: no consistent interface, no shared assumptions, no way for non-technical partners to interrogate the logic. Re-running an analysis a few months later becomes a mini archaeology project.

What data scientists actually want

If you ask experienced data scientists how causal workflows should feel, the answers are consistent: assumptions should be explicit and checked by default, diagnostics should be built in rather than rediscovered each time, the same interface should work across methods, and the results should be legible to non-technical partners.

The gap between that ideal and the status quo is exactly where lives.

What makes different

Dataset-aware, not just model-aware

Most libraries assume you already know how to prepare your data. starts earlier. It inspects the structure: cross-sectional, time series, panel, fixed treatment, staggered treatment. From there it suggests appropriate methods and transformations instead of forcing you into a single template.
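
The structural inspection described above could look roughly like the sketch below. It is not the library's actual interface, just an illustration of how panel vs. cross-sectional shape and fixed vs. staggered adoption can be inferred from unit, time, and treatment columns supplied by the user.

```python
import pandas as pd

def describe_structure(df: pd.DataFrame, unit: str, time, treatment: str) -> dict:
    """Guess the causal-design shape of a dataset from its columns:
    cross-sectional vs. time series vs. panel, and whether treatment
    adoption is fixed (everyone switches at once) or staggered."""
    info = {}
    n_units = df[unit].nunique()
    n_periods = df[time].nunique() if time is not None else 1

    if time is None or n_periods == 1:
        info["shape"] = "cross-sectional"
    elif n_units == 1:
        info["shape"] = "time series"
    else:
        info["shape"] = "panel"

    if info["shape"] == "panel":
        # First period in which each unit is observed as treated.
        first_treated = df.loc[df[treatment] == 1].groupby(unit)[time].min()
        info["adoption"] = "fixed" if first_treated.nunique() <= 1 else "staggered"

    return info

# Example call with hypothetical column names:
# describe_structure(df, unit="store_id", time="week", treatment="treated")
```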

Assumption-aware by design

Every causal method lives or dies by its assumptions. explicitly encodes and checks them where the raw data allows it—parallel trends where there is pre-period history, overlap for PSM, pre-treatment fit for Synthetic Control—and makes those checks part of the core workflow, not buried in optional cells.
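
As one example of such a check, overlap for propensity score matching takes only a few lines; the column names and the logistic propensity model below are illustrative assumptions, not the library's implementation.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def overlap_check(df: pd.DataFrame, covariates: list, treatment: str) -> pd.DataFrame:
    """Fit a simple propensity model and count how many units fall outside
    the region where treated and control propensity scores overlap."""
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treatment])
    ps = model.predict_proba(df[covariates])[:, 1]

    treated = ps[df[treatment] == 1]
    control = ps[df[treatment] == 0]

    # Common support: the propensity range covered by both groups.
    lo = max(treated.min(), control.min())
    hi = min(treated.max(), control.max())

    return pd.DataFrame({
        "group": ["treated", "control"],
        "outside_common_support": [
            int(((treated < lo) | (treated > hi)).sum()),
            int(((control < lo) | (control > hi)).sum()),
        ],
    })
```

Units far outside the common support cannot be matched credibly, and a matching estimate that quietly includes them is exactly the kind of silent failure these checks are meant to surface.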

One interface, many methods

Product questions rarely map neatly to a single method. makes it easy to run more than one design on the same dataset—A/B test where possible, DiD where needed, PSM as a robustness check, Synthetic Control for special geos—and see how results compare.
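
In practice that comparison can be as simple as estimating the same effect under two designs and putting the numbers side by side; the sketch below uses a naive difference in means and a two-way fixed-effects DiD on a hypothetical panel, not the library's own interface.

```python
import pandas as pd
import statsmodels.formula.api as smf

def compare_designs(df: pd.DataFrame) -> pd.DataFrame:
    """Estimate the same effect two ways on one panel and return the
    estimates side by side. Assumes columns: unit, period, treated
    (1 for treated unit-periods, else 0), and outcome y."""
    # Naive difference in means: ignores trends and selection entirely.
    treated_mean = df.loc[df["treated"] == 1, "y"].mean()
    control_mean = df.loc[df["treated"] == 0, "y"].mean()
    naive = treated_mean - control_mean

    # Two-way fixed effects DiD: unit and period dummies absorb level
    # differences between units and shocks common to all units.
    did = smf.ols("y ~ treated + C(unit) + C(period)", data=df).fit()

    return pd.DataFrame({
        "design": ["difference in means", "two-way FE DiD"],
        "estimate": [naive, did.params["treated"]],
    })
```

When the two numbers disagree, that disagreement is itself informative: it tells you how much of the naive gap is trend and selection rather than treatment.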

Fast for experts, approachable for partners

Data scientists get control over roles, covariates, and configuration. Product and business partners get outputs that are readable, explainable, and consistent: effect sizes, uncertainty, assumptions, and caveats in the same place every time.
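
One way to picture that consistency is a single result object that every method fills in the same way; the dataclass below is an illustration of the idea, not a type defined anywhere in this document.

```python
from dataclasses import dataclass, field

@dataclass
class CausalResult:
    """A method-agnostic summary that reads the same regardless of design."""
    method: str                    # e.g. "DiD", "PSM", "Synthetic Control"
    effect: float                  # point estimate on the chosen metric
    ci_low: float                  # lower bound of the confidence interval
    ci_high: float                 # upper bound of the confidence interval
    assumptions: dict = field(default_factory=dict)  # check name -> passed?
    caveats: list = field(default_factory=list)      # free-text warnings
```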

Where fits in the stack

is not a replacement for your experimentation platform, data warehouse, or notebook environment. It sits on top of them as a causal inference layer: it works with the data those systems already produce and returns effect estimates, diagnostics, and assumption checks in a consistent form.

The opportunity

As more organizations move from "run experiments" to "make causal decisions," the gap between what teams need and what their current tools provide will only grow. The winners will be the teams that standardize how causal questions are asked and answered, and make that reasoning legible across the organization.

exists to make that standardization and legibility practical.