
Add dCDH estimator LinkedIn carousel#305

Merged
igerber merged 4 commits into main from
docs/dcdh-carousel
Apr 15, 2026

Conversation

@igerber
Owner

@igerber igerber commented Apr 14, 2026

Summary

  • Add LinkedIn carousel PDF announcing the dCDH (de Chaisemartin-D'Haultfoeuille) estimator
  • 9-slide carousel with slate+teal palette: hook, problem statement with treatment timeline visual, real-world scenarios, DID_M equation, feature grid, dynamic event study plot, code example, R validation parity, and CTA
  • Generator script produces the PDF via fpdf2 + matplotlib
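For reference, the DID_M estimand named above is, in de Chaisemartin & D'Haultfoeuille's (2020) notation, roughly the following (a sketch from memory of the published formula, not necessarily the carousel's exact rendering):

```latex
\mathrm{DID}_M \;=\; \sum_{t=2}^{T}\left[\frac{N_{1,0,t}}{N_S}\,\mathrm{DID}_{+,t}
\;+\;\frac{N_{0,1,t}}{N_S}\,\mathrm{DID}_{-,t}\right]
```

where DID_{+,t} compares the period-(t-1)-to-t outcome change of groups switching into treatment ("joiners") against groups staying untreated, DID_{-,t} is the analogous "leaver" comparison against groups staying treated, and the weights are proportional to the number of switching units.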

Methodology references (required if estimator / math changes)

  • Method name(s): N/A - no methodology changes
  • Paper / source link(s): N/A
  • Any intentional deviations from the source (and why): N/A

Validation

  • Tests added/updated: No test changes
  • Backtest / simulation / notebook evidence (if applicable): N/A

Security / privacy

  • Confirm no secrets/PII in this PR: Yes

Generated with Claude Code

9-slide carousel (slate+teal palette) covering: hook, problem statement
with treatment timeline visual, real-world scenarios, DID_M method,
feature grid, dynamic event study plot, code example, R validation
parity, and CTA. A marketing-campaign hero example threads through the slides.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@github-actions

Overall Assessment

⚠️ Needs changes

Executive Summary

  • No estimator/runtime code changed in this PR; the review findings are about the new carousel’s public methodology/API claims.
  • Slide 8 overstates validation evidence: it claims 245 dCDH tests and blanket ATT/SE parity with R, but the cited files currently define 173 tests by source count (158 + 15), and the parity suite intentionally does not assert mixed-direction SE parity.
  • Slide 7’s code sample is not accurate as shown: with L_max=3, dCDH suppresses the joiner/leaver decomposition, and the repo’s plotting entrypoint is plot_event_study(results), not results.plot_event_study().
  • Slide 5/6’s placebo wording (“validate/validated”) is stronger than the project’s own documented guidance on parallel-trends diagnostics.
  • I did not find performance or security issues in the generator/PDF beyond a normal PDF open action.
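The joiner/leaver comparisons the review keeps referring to can be made concrete with a small self-contained sketch of the joiner-side building block, DID_{+,t}. The data and function names here are illustrative toys, not the repo's API:

```python
# Toy sketch of a DID_M building block (de Chaisemartin & D'Haultfoeuille,
# 2020): at period t, compare the outcome change of "joiners" (untreated at
# t-1, treated at t) against groups whose treatment status did not change.
# Illustrative only -- not the package's actual implementation or API.

# (group, period) -> (treated, outcome); two periods, four groups
data = {
    ("A", 0): (0, 1.0), ("A", 1): (1, 3.0),   # joiner: change +2.0
    ("B", 0): (0, 1.0), ("B", 1): (0, 1.5),   # stable control: +0.5
    ("C", 0): (0, 2.0), ("C", 1): (0, 2.5),   # stable control: +0.5
    ("D", 0): (1, 4.0), ("D", 1): (1, 4.2),   # stable treated (ignored here)
}

def did_plus(data, t):
    """DID_{+,t}: mean change of joiners minus mean change of stable controls."""
    joiners, controls = [], []
    for g in {g for g, _ in data}:
        d_prev, y_prev = data[(g, t - 1)]
        d_now, y_now = data[(g, t)]
        if d_prev == 0 and d_now == 1:
            joiners.append(y_now - y_prev)
        elif d_prev == 0 and d_now == 0:
            controls.append(y_now - y_prev)
    return sum(joiners) / len(joiners) - sum(controls) / len(controls)

print(did_plus(data, 1))  # joiner change 2.0 minus control change 0.5 -> 1.5
```

The leaver-side DID_{-,t} is symmetric (treated-to-untreated switchers versus stable treated groups), and DID_M averages the two with weights proportional to the switcher counts.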

Methodology

Code Quality

  • No findings.

Performance

  • No findings.

Maintainability

  • No findings.

Tech Debt

  • Severity: P3. Impact: The mixed-direction SE-parity limitation is already tracked in TODO.md:L59-L60, so I am not treating the estimator implementation difference itself as a blocker. Concrete fix: none required in estimator code for this PR; just align the new slide copy with the tracked limitation.

Security

  • No findings. I did not see PDF markers for JavaScript, launch actions, URIs, or embedded files; only a standard /OpenAction entry was present.

Documentation/Tests

Path to Approval

  1. Update slide 8 copy and regenerate the PDF so the validation claims match the committed evidence: correct the hardcoded test count and remove blanket SE-parity wording.
  2. Fix slide 7’s code sample/caption to match the current dCDH API and L_max behavior.
  3. Reword the placebo/pre-trend language to diagnostic phrasing rather than proof-style phrasing.
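The distinction in item 3 (placebos as diagnostics, not proof) can be illustrated with a toy placebo run on pre-treatment periods, where the true effect is zero by construction. Names and data are hypothetical, not the repo's API:

```python
# Toy placebo check: re-run the switcher-vs-control comparison on two
# pre-treatment periods, where nobody is treated yet, so the true effect
# is zero by construction. A near-zero placebo is *consistent with*
# parallel trends -- it fails to reject them; it does not prove them.
# Illustrative only -- not the package's actual API.

pre = {  # group -> (outcome at pre-period 1, outcome at pre-period 2)
    "eventual_joiner": (1.0, 1.4),
    "control_1": (1.0, 1.5),
    "control_2": (2.0, 2.4),
}

def placebo_did(pre, joiners):
    """Pretend the eventual joiners switched between the two pre-periods."""
    j = [b - a for g, (a, b) in pre.items() if g in joiners]
    c = [b - a for g, (a, b) in pre.items() if g not in joiners]
    return sum(j) / len(j) - sum(c) / len(c)

est = placebo_did(pre, {"eventual_joiner"})
print(round(est, 3))  # small relative to the ~0.4 outcome changes
```

A placebo estimate near zero only means the pre-period comparison shows no detectable divergence, which is exactly the diagnostic phrasing the review asks for.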

igerber and others added 2 commits April 14, 2026 19:12
…anguage

- Slide 8: "ATT and SE parity" -> "Point estimates match R" + scope SE
  parity to pure-direction scenarios (mixed-direction SE limitation is
  tracked in TODO.md)
- Slide 7: fix plot_event_study(results) API (not results.plot_event_study()),
  add import, update caption to match L_max=3 behavior (no joiners/leavers)
- Slides 5/6: "validates parallel trends" -> "consistent with parallel
  trends" (placebos are diagnostic, not proof)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@igerber
Owner Author

igerber commented Apr 14, 2026

/ai-review

@github-actions

🔁 AI review rerun (requested by @igerber)

Head SHA: 881f50b0d6c0566b2e01b5dd42a5eebc7f7ca868


Overall Assessment

⚠️ Needs changes

Executive Summary

  • Previous re-review items on placebo wording and the plotting API look addressed: the carousel now uses “consistent with parallel trends,” and the code sample uses plot_event_study(results).
  • Slide 8 still overstates R validation. It now scopes SE parity to pure-direction cases, but it still presents point-estimate parity as an unqualified “match,” which is broader than the registry and parity suite support.
  • Slide 7’s sample is still not runnable as shown: L_max is passed to the estimator constructor instead of fit(), and the caption promises HonestDiD from a call that does not enable honest_did=True.
  • No estimator/runtime code changed in this PR. I did not find performance or security issues in the generator or checked-in PDF artifact.

Methodology

Code Quality

  • No findings.

Performance

  • No findings.

Maintainability

  • No findings.

Tech Debt

  • Severity: P3. Impact: The estimator-side mixed-direction SE-comparison limitation is already tracked in TODO.md:L60-L60, so that limitation itself is not a blocker under the project’s deferred-work rules. Concrete fix: none in estimator code for this PR; keep the carousel copy inside the documented validation scope.

Security

  • No findings. The checked-in PDF appears to contain only a standard page-fit open action and no JavaScript, launch actions, URIs, or embedded files.

Documentation/Tests

Path to Approval

  1. Reword slide 8 so the R-validation claim matches the documented scope: either say “validated against committed R parity fixtures” or qualify the claim to the scenarios where exact parity is documented.
  2. Fix slide 7’s sample to use fit(..., L_max=3) and align the caption with the actual call; if HonestDiD stays in the caption, add honest_did=True.
  3. Regenerate carousel/diff-diff-dcdh-carousel.pdf after updating the slide source so the checked-in PDF matches the corrected copy.

- Slide 8: "ATT matches R across joiners, leavers, and controls" ->
  "ATT validated against R reference outputs"
- Slide 7: move L_max and placebo from constructor to fit() where they
  belong, drop HonestDiD from caption (covered on slide 5)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@igerber igerber added the ready-for-ci Triggers CI test workflows label Apr 15, 2026
@igerber igerber merged commit dbff8a5 into main Apr 15, 2026
1 of 2 checks passed
@igerber igerber deleted the docs/dcdh-carousel branch April 15, 2026 00:01
