statistics

An Illustrated Guide to TMLE, Part III: Properties, Theory, and Learning More

This is the third and final post in a three-part series to help beginners and/or visual learners understand Targeted Maximum Likelihood Estimation (TMLE). In this section, I discuss additional statistical properties of TMLE, offer a brief explanation of the theory behind it, and provide resources for learning more.

Properties of TMLE 📈

To reiterate a point from Parts I and II, a main motivation for TMLE is that it allows the use of machine learning algorithms while still yielding asymptotic properties for inference.
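For context on what those asymptotic properties buy you (my own one-line summary, not a quote from the post): under regularity conditions the TMLE point estimate is asymptotically normal, so a Wald-style 95% confidence interval can be built from the variance of the estimated influence curve:

\[\hat{\Psi} \pm 1.96\sqrt{\frac{\widehat{\mathrm{Var}}(\hat{IC})}{n}}\]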

An Illustrated Guide to TMLE, Part II: The Algorithm

The second post of a three-part series to help beginners and/or visual learners understand Targeted Maximum Likelihood Estimation (TMLE). This section walks through the TMLE algorithm for the mean difference in outcomes for a binary treatment and binary outcome. This post is an expansion of a printable “visual guide” available on my GitHub. I hope it helps analysts who feel out-of-practice reading mathematical notation follow along with the TMLE algorithm.
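Since the excerpt names the algorithm, here is a compact sketch of the targeting step that Part II walks through, for the ATE with a binary treatment and outcome. All object names here are mine: assume Q_A, Q_1, and Q_0 hold outcome-regression predictions under the observed treatment, A = 1, and A = 0, and g holds propensity scores P(A = 1 | W) for a data frame dat with binary Y and A.

# clever covariate for the ATE
H_A <- dat$A / g - (1 - dat$A) / (1 - g)

# estimate the fluctuation parameter epsilon with a logistic regression
# of Y on H_A, using the initial predictions as a fixed offset
eps <- coef(glm(dat$Y ~ -1 + H_A, offset = qlogis(Q_A), family = binomial()))

# update ("target") the initial predictions, then plug in
Q_1_star <- plogis(qlogis(Q_1) + eps / g)
Q_0_star <- plogis(qlogis(Q_0) - eps / (1 - g))
ate_tmle <- mean(Q_1_star - Q_0_star)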

An Illustrated Guide to TMLE, Part I: Introduction and Motivation

The introductory post of a three-part series to help beginners and/or visual learners understand Targeted Maximum Likelihood Estimation (TMLE). This section contains a brief overview of the targeted learning framework and motivation for semiparametric estimation methods for inference, including causal inference.

Table of Contents

This blog post series has three parts:

Part I: Motivation
TMLE in three sentences 🎯
An Analyst’s Motivation for Learning TMLE 👩🏼‍💻
Is TMLE Causal Inference?

Become a Superlearner! An Illustrated Guide to Superlearning

Why use one machine learning algorithm when you could use all of them?! This post contains a step-by-step walkthrough of how to build a superlearner prediction algorithm in R. Over the winter, I read Targeted Learning by Mark van der Laan and Sherri Rose. The “visual guide” I made for Chapter 3: Superlearning, by Rose, van der Laan, and Eric Polley, is a condensed version of the following tutorial.
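To give a feel for what a fitted superlearner looks like before the from-scratch walkthrough, here is a minimal sketch using the SuperLearner R package (the data below are simulated stand-ins; the post itself builds the ensemble by hand):

library(SuperLearner)

# simulated stand-in data: binary outcome y, two predictors
set.seed(7)
n <- 200
x <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
y <- rbinom(n, 1, plogis(x$x1 - x$x2))

# combine candidate learners; the ensemble weights are chosen
# by cross-validation to minimize prediction error
sl_fit <- SuperLearner(Y = y, X = x, family = binomial(),
                       SL.library = c("SL.mean", "SL.glm"))
sl_fit$coef  # cross-validated weight on each candidate learner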

Rethinking Conditional and Iterated Expectations with Linear Regression Models

An “aha!” moment: the day I realized I should rethink all the probability theorems using linear regressions.

TL;DR: you can regress an outcome on a grouping variable plus any other variable(s), and the unadjusted and adjusted group means will be identical. We can see this in a simple example using the palmerpenguins data:

# remotes::install_github("allisonhorst/palmerpenguins")
library(palmerpenguins)
library(tidyverse)
library(gt)

# use complete cases for simplicity
penguins <- drop_na(penguins)

penguins %>%
  # fit a linear regression for bill length given bill depth and species;
  # make a new column containing the fitted values for bill length
  mutate(preds = predict(lm(bill_length_mm ~ bill_depth_mm + species, data = .)))
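Continuing the snippet above with the comparison the TL;DR describes (the group_by/summarize step is my sketch of where the truncated excerpt is headed, not a verbatim quote): averaging the fitted values within species reproduces the raw species means exactly, because OLS residuals sum to zero within any group whose indicator is in the model.

# compare raw and model-based means by species; the two columns match
penguins %>%
  mutate(preds = predict(lm(bill_length_mm ~ bill_depth_mm + species, data = .))) %>%
  group_by(species) %>%
  summarize(unadjusted_mean = mean(bill_length_mm),
            adjusted_mean   = mean(preds))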

A Condensed Key for A Visual Guide to Targeted Maximum Likelihood Estimation (TMLE)

A condensed key for my corresponding TMLE tutorial blog post.

Initial set up

Estimand of interest:

\[ATE = \Psi = \mathrm{E}_\mathbf{W}[\mathrm{E}[Y|A=1,\mathbf{W}] - \mathrm{E}[Y|A=0,\mathbf{W}]]\]

Step 1: Estimate the Outcome

First, estimate the expected value of the outcome using treatment and confounders as predictors.

\[Q(A,\mathbf{W}) = \mathrm{E}[Y|A,\mathbf{W}]\]

Then use that fit to obtain estimates of the expected outcome under three different treatment conditions:
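In code, Step 1 might look like the following minimal sketch (dat, W1, and W2 are hypothetical names, and a simple logistic regression stands in for the flexible learner the tutorial actually uses):

# fit the outcome regression Q(A, W) on a data frame `dat`
Q_fit <- glm(Y ~ A + W1 + W2, data = dat, family = binomial())

# predictions under the three treatment conditions
Q_A <- predict(Q_fit, newdata = dat, type = "response")                    # observed A
Q_1 <- predict(Q_fit, newdata = transform(dat, A = 1), type = "response") # everyone treated
Q_0 <- predict(Q_fit, newdata = transform(dat, A = 0), type = "response") # no one treated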