Research Tool Design · Data Visualization · University Project

synth.

building a tool that makes causal inference feel like a conversation, not a statistics exam

UX Design · Research Tools · Data Visualization · Shiny · R
Project: Synth — Research App
Context: CS130 University Project
Role: Designer & Developer
Year: 2025

"Policy researchers know what happened. Synth helps them figure out what would have happened instead — without writing a single line of R."

Synth — Synthetic Control Analysis App

Synth is a research web application built for social scientists, policy analysts, and economists who need to measure whether an intervention — a law, a program, a reform — actually changed anything. It wraps a rigorous statistical method called synthetic control analysis in an interface that feels more like a guided conversation than a statistics package.

01 — Why it exists

you can't run a controlled experiment on a country.

the problem researchers actually have

Imagine you're studying whether a new minimum wage policy reduced poverty in a particular state. The only way to be certain would be to apply the policy to some states and withhold it from identical ones — a controlled experiment. In the real world, that's impossible. You can't un-enact a law for a control group.

The counterfactual problem

To know if a policy worked, you need to compare what happened to what would have happened without the policy. But "what would have happened" is by definition unobservable. You need to construct it.

Synthetic control is the answer

The synthetic control method builds a weighted combination of similar units — other states, countries, or regions — to construct a plausible "what if." The gap between reality and the synthetic version is the estimated effect of the intervention.
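In symbols (a standard sketch of the method, not notation taken from the app itself): with $Y_{1t}$ the treated unit's outcome and $Y_{jt}$ the outcomes of $J$ donor units, the synthetic counterfactual and the estimated effect are

```latex
\hat{Y}^{\text{synth}}_{1t} = \sum_{j=2}^{J+1} w_j^{*}\, Y_{jt},
\qquad
\hat{\alpha}_{1t} = Y_{1t} - \hat{Y}^{\text{synth}}_{1t},
\qquad
w_j^{*} \ge 0,\ \ \sum_{j} w_j^{*} = 1
```

The weights are chosen so the synthetic unit tracks the treated unit's pre-intervention characteristics as closely as possible; the post-intervention gap $\hat{\alpha}_{1t}$ is the line-to-line distance you see on the chart.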

The access problem: Running a synthetic control analysis requires the Synth package in R — a statistical environment with a steep learning curve. Policy researchers who are domain experts, not programmers, were locked out of a method designed precisely for their questions.

Synth analysis output — the synthetic control plotted against the treated unit
What the output looks like — the treated unit (solid) versus its synthetic counterpart (dashed). The gap between them is the effect.
why a visual tool changes things

When you can see the synthetic control plotted alongside the real data, the method becomes intuitive. The counterfactual isn't an abstract statistical concept anymore — it's a line on a chart, and the story is immediately legible.

Design principle: Don't explain the method in the interface. Let the output explain itself. If researchers understand what they're looking at, the tool disappears.

02 — What we built

a six-step research companion.

the whole flow, no R required

Synth is a Shiny web application — meaning it runs in a browser, no installation required. A researcher uploads their panel data and walks through six steps: upload, review, configure, analyze, validate with placebo tests, and export. Every decision has a default. Every input is a dropdown or a slider, not a function call.

1. Upload: Drop in panel data
2. Review: Confirm columns & types
3. Configure: Set unit, time, outcome
4. Analyze: Run the synthetic control
5. Validate: Placebo tests in-app
6. Export: Save results & weights

Upload screen — the entry point into Synth
The upload screen — clean entry point. Drop a CSV file with panel data to begin
Data uploaded and previewed
After upload — the app previews your data so you can confirm it looks right before doing anything
03 — How it works

each step removes a barrier.

Step 01
Tell it what you're studying — no syntax, just dropdowns

The hardest part of running a synthetic control in R is getting the function call right — specifying the treatment unit, the time of intervention, the outcome variable, the predictors, and the donor pool in exactly the right format. Synth replaces all of that with a set of clearly labeled dropdowns. You pick your variables, the app builds the function call in the background.
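For context, this is roughly the shape of the `dataprep()` call from the Synth package that the dropdowns assemble behind the scenes. The data frame `panel`, its column names, and the identifiers are all illustrative, not the app's actual internals:

```r
library(Synth)  # the CRAN package the app wraps

# Roughly the call the app builds from the dropdown selections.
# `panel` and all names/identifiers below are illustrative.
dp <- dataprep(
  foo                   = panel,             # long-format panel data frame
  predictors            = c("gdp", "unemployment"),
  predictors.op         = "mean",            # how predictors are aggregated
  dependent             = "poverty_rate",    # the outcome dropdown
  unit.variable         = "state_id",        # numeric unit identifier
  unit.names.variable   = "state_name",
  time.variable         = "year",
  treatment.identifier  = 5,                 # the treated unit
  controls.identifier   = c(1:4, 6:20),      # the donor pool
  time.predictors.prior = 2000:2009,         # pre-intervention window
  time.optimize.ssr     = 2000:2009,
  time.plot             = 2000:2019
)
```

Every argument here maps to one labeled control in the UI, which is exactly where the learning-curve problem lived.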

Setting parameters — treatment unit, intervention year, outcome variable
Parameter setup — treatment unit, year of intervention, and outcome variable. All dropdowns, no syntax
Selecting predictor variables for the synthetic control
Variable selection — choose which columns to use as predictors. The app explains what each choice means
Step 02
Run the analysis — watch the counterfactual appear

Once parameters are set, a single button runs the full synthetic control algorithm. The output is a time-series chart overlaying the real treated unit with its synthetic counterpart. The pre-intervention period shows how well the synthetic control matches the real unit — if the lines are close, the counterfactual is credible. The post-intervention gap is the effect estimate.
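Under the hood, that one button corresponds to roughly the following Synth-package calls (a sketch, assuming `dp` is the prepared-data object produced by the package's `dataprep()` step):

```r
library(Synth)

# Solve for the donor weights that best reproduce the treated unit
# over the pre-intervention period
so <- synth(data.prep.obj = dp)

# Treated unit vs. its synthetic counterpart over time
path.plot(synth.res = so, dataprep.res = dp,
          Ylab = "Outcome", Xlab = "Year")

# The gap between the two lines — the effect estimate
gaps.plot(synth.res = so, dataprep.res = dp)
```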

The main synthetic control output chart
Synthetic control output — treated unit (solid) vs. synthetic counterpart (dashed). The gap after the intervention line is the estimated effect
Run synth and save parameters panel
Run & save — one button to execute, another to save your parameter set for later
Step 03
Validate with placebo tests — does your finding stand out?

A synthetic control result only means something if it's unusual compared to what you'd find by chance. Placebo tests run the same analysis on every other unit in the dataset — states or countries that weren't actually treated. If the real finding is genuine, the treated unit's gap should be substantially larger than any of the placebos. If it looks similar to the placebos, the result is unreliable.

Synth runs both types of placebo test: in time (apply the intervention date to the pre-intervention period — if you still see an effect, your result is spurious) and in space (apply the method to each donor unit — your unit's gap should stand out).
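An in-space placebo loop might be sketched like this. `panel`, the identifiers, and the helper `make_dataprep()` are hypothetical stand-ins for the app's internal plumbing, not its real code:

```r
library(Synth)

# Re-run the same analysis treating each donor unit as if it
# had received the intervention. make_dataprep() is a hypothetical
# wrapper around dataprep() with the same settings as the main run.
donor_ids <- c(1:4, 6:20)
placebo_gaps <- lapply(donor_ids, function(id) {
  dp <- make_dataprep(panel, treated = id,
                      donors = setdiff(1:20, id))
  so <- synth(data.prep.obj = dp)
  # gap = actual outcome minus synthetic outcome at each time point
  dp$Y1plot - (dp$Y0plot %*% so$solution.w)
})
# A credible finding: the real treated unit's post-intervention gap
# dwarfs every gap in this placebo distribution.
```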

Running placebo tests
Placebo test runner — choose in-time or in-space, then compare your result against the distribution
Placebo plots panel
Placebo plots panel — all donor units overlaid. Your treated unit should show the largest gap to be credible
Placebo in time — testing at a false intervention date
In-time placebo — running the analysis as if the intervention happened earlier. No effect at the false date makes the real result more credible
Placebo in space — applying to donor units
In-space placebo — applying the same method to every other unit. Your unit should show the largest estimated gap
Step 04
Save results — everything a research appendix needs

Synth exports what researchers actually need: the donor weights (which units were used to construct the synthetic counterpart, and how much weight each received), the placebo test plots, and the full results table. The exports are formatted to drop directly into an academic paper appendix.
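In Synth-package terms, the exported tables come from roughly the following (a sketch, assuming `dp` and `so` are the `dataprep()` and `synth()` objects from the main run):

```r
library(Synth)

# Summary tables from the fitted synthetic control
tabs <- synth.tab(dataprep.res = dp, synth.res = so)

tabs$tab.w     # donor weights: which units built the counterfactual
tabs$tab.pred  # predictor balance: treated vs. synthetic

# Export the weights table for a paper appendix
write.csv(tabs$tab.w, "donor_weights.csv", row.names = FALSE)
```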

Donor unit weights — which units contributed to the synthetic control
Donor weights — a table showing which units were used to construct the synthetic control and how much each contributed
Save results screen
Save screen — export charts, weights table, and parameter log. Everything organized for a research appendix
Save parameters panel
Parameter log — save your configuration so you can reproduce the exact analysis later or share it with collaborators
04 — The outputs

what researchers walk away with.

findings you can actually defend

Synth doesn't just produce a chart — it produces a defensible research finding. The combination of the primary analysis, the in-time and in-space placebo tests, and the donor weight table gives researchers everything they need to include a synthetic control analysis in a paper, a report, or a presentation.

The main synthetic control chart — the core output
Primary output — synthetic control chart. The pre-intervention fit validates the counterfactual; the post-intervention gap is the effect estimate
Placebo test results
Placebo results — all donor units shown for comparison. A meaningful result stands clearly apart from the noise
Output 01

A synthetic control you can explain

The main chart is legible to anyone who's seen a time-series plot. Pre-intervention alignment shows the model is credible; post-intervention gap shows the effect.

Output 02

Placebo tests, built in

Both in-time and in-space placebo tests are available in-app, not as separate analyses requiring additional code. Run them in two clicks after the primary analysis.

Output 03

Export-ready results

Donor weight tables, parameter logs, and charts are formatted for academic papers and policy reports — not just for internal analysis.

05 — What I learned

designing for experts who aren't experts in everything.

Variable selection — the key design decision
The variable selection step — the interface had to feel like guidance, not a form to fill out
the sequence is the design

The biggest design challenge wasn't making the interface look clean — it was deciding what comes first. If you ask researchers to specify the donor pool before they understand what a donor pool is, you've already lost them. Getting the order right meant thinking about what each step builds on, not just what information the algorithm needs.

Lesson 01

Research methods are design problems

A statistical method that lives only in a paper or a package reaches only the people who can read the paper or run the package. Designing access is as important as designing accuracy.

Lesson 02

Experts have different blind spots

Policy researchers are deeply expert in their domain and completely non-expert in R. Designing for that split — domain expertise plus tool inexperience — requires very different decisions than designing for general audiences.

Lesson 03

Validation belongs in the tool

Placebo tests are what separate a credible synthetic control from a spurious one. Putting them in the same interface as the analysis — not as an optional extra — changed the tool from a convenience into a research partner.

"The most interesting design constraint wasn't the interface — it was the researcher. They know everything about their data and nothing about the algorithm. The tool had to be fluent in both languages: the language of causal inference and the language of 'I just need to answer this question about my policy.'"
— Synth, 2025