
Using {sl3} for superlearning

Author: Katherine Hoffman

Published: September 12, 2019

Categories: R

A short and sweet guide to using the R package {sl3} for superlearning. This is part of a tutorial created for an R-Ladies NYC talk in 2019.

Update, January 10, 2020: In September 2019, I gave an R-Ladies NYC presentation about using the package sl3 to implement the superlearner algorithm for prediction. You can download the slides for it here. This post is a modification of the original demo I gave.

For a more in-depth background on what the superlearner algorithm is, please see my more recent blog post.

Step 0: Load your libraries, set a seed, and load the data

You’ll likely need to install sl3 from the tlverse GitHub page, as it was not yet on CRAN at the time of writing this post.

#devtools::install_github("tlverse/sl3")
library(sl3)
library(dplyr)

Attaching package: 'dplyr'
The following objects are masked from 'package:stats':

    filter, lag
The following objects are masked from 'package:base':

    intersect, setdiff, setequal, union
library(gt)
set.seed(7)

We will use the same WASH Benefits data set as the tlverse team does in their Hitchhiker’s Guide. We will be predicting the weight-for-height z-scores (whz) of children in rural Kenya and Bangladesh.

washb_data <- read.csv("https://raw.githubusercontent.com/tlverse/tlverse-data/master/wash-benefits/washb_data.csv")
gt(head(washb_data))
whz tr fracode month aged sex momage momedu momheight hfiacat Nlt18 Ncomp watmin elec floor walls roof asset_wardrobe asset_table asset_chair asset_khat asset_chouki asset_tv asset_refrig asset_bike asset_moto asset_sewmach asset_mobile
0.00 Control N05265 9 268 male 30 Primary (1-5y) 146.40 Food Secure 3 11 0 1 0 1 1 0 1 1 1 0 1 0 0 0 0 1
-1.16 Control N05265 9 286 male 25 Primary (1-5y) 148.75 Moderately Food Insecure 2 4 0 1 0 1 1 0 1 0 1 1 0 0 0 0 0 1
-1.05 Control N08002 9 264 male 25 Primary (1-5y) 152.15 Food Secure 1 10 0 0 0 1 1 0 0 1 0 1 0 0 0 0 0 1
-1.26 Control N08002 9 252 female 28 Primary (1-5y) 140.25 Food Secure 3 5 0 1 0 1 1 1 1 1 1 0 0 0 1 0 0 1
-0.59 Control N06531 9 336 female 19 Secondary (>5y) 150.95 Food Secure 2 7 0 1 0 1 1 1 1 1 1 1 0 0 0 0 0 1
-0.51 Control N06531 9 304 male 20 Secondary (>5y) 154.20 Severely Food Insecure 0 3 1 1 0 1 1 0 0 0 0 1 0 0 0 0 0 1

Step 1: Specify outcome and predictors

We need to specify the outcome column and the predictor columns as character strings.

outcome <- "whz"
covars <- washb_data %>%
  select(-whz) %>%
  names()

Step 2: Make an sl3 task

This is the object we’ll use whenever we want to fit a statistical model in sl3.

washb_task <- make_sl3_Task(
  data = washb_data,
  covariates = covars,
  outcome = outcome
)
Warning in process_data(data, nodes, column_names = column_names, flag = flag, : Character covariates found: tr, fracode, sex, momedu, hfiacat;
Converting these to factors
Warning in process_data(data, nodes, column_names = column_names, flag = flag, :
Missing covariate data detected: imputing covariates.

Note that most statistical learning algorithms can’t handle missing data, so sl3’s default pre-processing imputes missing covariate values at the median and adds an indicator column for missingness (in case the missingness is informative).
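For example, a quick base R check (not part of the original demo) shows which covariates contain missing values; these are the columns behind the delta_momage and delta_momheight indicators that appear in the task below.

# Which covariates contain missing values? (plain base R, independent of sl3)
colSums(is.na(washb_data))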

washb_task
An sl3 Task with 4695 obs and these nodes:
$covariates
 [1] "tr"              "fracode"         "month"           "aged"           
 [5] "sex"             "momage"          "momedu"          "momheight"      
 [9] "hfiacat"         "Nlt18"           "Ncomp"           "watmin"         
[13] "elec"            "floor"           "walls"           "roof"           
[17] "asset_wardrobe"  "asset_table"     "asset_chair"     "asset_khat"     
[21] "asset_chouki"    "asset_tv"        "asset_refrig"    "asset_bike"     
[25] "asset_moto"      "asset_sewmach"   "asset_mobile"    "delta_momage"   
[29] "delta_momheight"

$outcome
[1] "whz"

$id
NULL

$weights
NULL

$offset
NULL

$time
NULL

An aside: Exploring sl3’s many options

sl3 has the capability to address many different aspects of model fitting. For example, we can look at algorithms for when the outcome is binomial, categorical, or continuous. There are also options for when you have clustered data, or if you need to preprocess or screen your data before fitting base learners.

sl3_list_properties()
 [1] "binomial"      "categorical"   "continuous"    "cv"           
 [5] "density"       "h2o"           "ids"           "importance"   
 [9] "offset"        "preprocessing" "sampling"      "screener"     
[13] "timeseries"    "weights"       "wrapper"      

We can learn more about each of these properties on this reference page.

Another aside: looking at available “learners”

We’ll need to pick out base learners for our stack, as well as pick a metalearner. Since we are trying to predict z-scores, a continuous variable, let’s look at our potential learners for a continuous variable.

sl3_list_learners("continuous") 
 [1] "Lrnr_arima"                     "Lrnr_bartMachine"              
 [3] "Lrnr_bayesglm"                  "Lrnr_bilstm"                   
 [5] "Lrnr_bound"                     "Lrnr_caret"                    
 [7] "Lrnr_cv_selector"               "Lrnr_dbarts"                   
 [9] "Lrnr_earth"                     "Lrnr_expSmooth"                
[11] "Lrnr_ga"                        "Lrnr_gam"                      
[13] "Lrnr_gbm"                       "Lrnr_glm"                      
[15] "Lrnr_glm_fast"                  "Lrnr_glmnet"                   
[17] "Lrnr_grf"                       "Lrnr_gru_keras"                
[19] "Lrnr_gts"                       "Lrnr_h2o_glm"                  
[21] "Lrnr_h2o_grid"                  "Lrnr_hal9001"                  
[23] "Lrnr_HarmonicReg"               "Lrnr_hts"                      
[25] "Lrnr_lightgbm"                  "Lrnr_lstm_keras"               
[27] "Lrnr_mean"                      "Lrnr_multiple_ts"              
[29] "Lrnr_nnet"                      "Lrnr_nnls"                     
[31] "Lrnr_optim"                     "Lrnr_pkg_SuperLearner"         
[33] "Lrnr_pkg_SuperLearner_method"   "Lrnr_pkg_SuperLearner_screener"
[35] "Lrnr_polspline"                 "Lrnr_randomForest"             
[37] "Lrnr_ranger"                    "Lrnr_rpart"                    
[39] "Lrnr_rugarch"                   "Lrnr_screener_correlation"     
[41] "Lrnr_solnp"                     "Lrnr_stratified"               
[43] "Lrnr_svm"                       "Lrnr_tsDyn"                    
[45] "Lrnr_xgboost"                  

You’ll notice each learner starts with Lrnr and corresponds to an underlying R package.
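Each Lrnr_* is an R6 class with its own help page listing the tuning parameters the wrapped package exposes. For example (a quick check, not part of the original demo):

?Lrnr_randomForest  # documentation and tuning parameters for the randomForest-based learner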

Step 3: Choosing the base learners

Let’s pick just a few base learners to match the examples in my slides: a random forest, a generalized boosting model, and a generalized linear model. We’ll keep their default parameters for now.

make_learner_stack() is an easy way to create a stack of base learners with their default parameters. It takes the names of the learners as strings and you’re good to go!

stack <- make_learner_stack(
  "Lrnr_randomForest", 
  "Lrnr_gbm",
  "Lrnr_glm"
)
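Equivalently, you can build each learner yourself and combine them; a minimal sketch using make_learner() and sl3’s Stack learner, with the same defaults as above:

# Build the same stack learner-by-learner (sketch; default parameters)
lrnr_rf  <- make_learner(Lrnr_randomForest)
lrnr_gbm <- make_learner(Lrnr_gbm)
lrnr_glm <- make_learner(Lrnr_glm)
stack    <- make_learner(Stack, lrnr_rf, lrnr_gbm, lrnr_glm)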

Step 4: Choose a metalearner

There are many models we could choose from, but we’ll keep it simple and use a generalized linear model. This time we use the make_learner() function directly.

metalearner <- make_learner(Lrnr_glm)
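If you’d prefer the ensemble weights to be constrained to be non-negative, Lrnr_nnls (non-negative least squares) from the learner list above is one alternative; a hedged sketch:

# Optional alternative metalearner: non-negative least squares
# metalearner <- make_learner(Lrnr_nnls)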

Step 5: Make a superlearner object

Remember, under the hood Lrnr_sl takes the cross-validated predictions from the base models and uses them as inputs to the metalearner, which learns how best to combine them to predict the true outcome. Those metalearner weights are then applied to the base learners refit on the whole data set.

sl <- make_learner(Lrnr_sl, 
                   learners = stack,
                   metalearner = metalearner)

A superlearner object has different functions built into it, such as train(). We can train our superlearner shell model on the task we made earlier.

Step 6: Train your superlearner

sl_fit <- sl$train(washb_task)

Step 7: Examine the results of the superlearner

Examine coefficients and CV-risk

The default risk is MSE (Mean Squared Error). The coefficients show you how the metalearner decided to weight each base model for the final ensemble.

sl_fit$print() %>% gt()
[1] "SuperLearner:"
List of 3
 $ : chr "Lrnr_randomForest_500_TRUE_5"
 $ : chr "Lrnr_gbm_10000_2_0.001"
 $ : chr "Lrnr_glm_TRUE"
[1] "Lrnr_glm_TRUE"
$coefficients
                   intercept Lrnr_randomForest_500_TRUE_5 
                -0.037630882                  0.056299184 
      Lrnr_gbm_10000_2_0.001                Lrnr_glm_TRUE 
                 0.876353346                  0.005369642 

$R
                             intercept Lrnr_randomForest_500_TRUE_5
intercept                    -68.52007                     39.84692
Lrnr_randomForest_500_TRUE_5   0.00000                     21.59036
Lrnr_gbm_10000_2_0.001         0.00000                      0.00000
Lrnr_glm_TRUE                  0.00000                      0.00000
                             Lrnr_gbm_10000_2_0.001 Lrnr_glm_TRUE
intercept                                  40.07621     40.091631
Lrnr_randomForest_500_TRUE_5               14.60067     13.860606
Lrnr_gbm_10000_2_0.001                     10.05776      9.862373
Lrnr_glm_TRUE                               0.00000     -8.642721

$rank
[1] 4

$family

Family: gaussian 
Link function: identity 


$deviance
[1] 4723.324

$aic
[1] 13362.07

$null.deviance
[1] 5000.347

$iter
[1] 2

$df.residual
[1] 4691

$df.null
[1] 4694

$converged
[1] TRUE

$boundary
[1] FALSE

$linkinv_fun
function (eta) 
eta
<environment: namespace:stats>

$link_fun
function (mu) 
mu
<environment: namespace:stats>

$training_offset
[1] FALSE

[1] "Cross-validated risk:"
                        learner coefficients      MSE        se    fold_sd
1: Lrnr_randomForest_500_TRUE_5  0.056299184 1.034798 0.0237462 0.07053924
2:       Lrnr_gbm_10000_2_0.001  0.876353346 1.006393 0.0233679 0.07188694
3:                Lrnr_glm_TRUE  0.005369642 1.021995 0.0238539 0.06474016
4:                 SuperLearner           NA 1.006033 0.0233913 0.07107934
   fold_min_MSE fold_max_MSE
1:    0.9099614     1.165088
2:    0.8753148     1.121065
3:    0.8900577     1.116998
4:    0.8772851     1.119882
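If you’d rather work with the cross-validated risk table as a data object instead of printed text, trained Lrnr_sl fits expose a cv_risk() method; a minimal sketch, using the squared-error loss that corresponds to the MSE shown above:

# Extract the CV risk table programmatically (loss_squared_error corresponds to MSE)
cv_risk_table <- sl_fit$cv_risk(loss_squared_error)
cv_risk_table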

Look at the predictions

predict() allows you to see what the model predicts on any given task. Here we look at predictions on the same data we trained the superlearner on, so these are the predicted weight-for-height z-scores of the first six children in our data set.

sl_fit$predict(washb_task) %>% head()
[1] -0.6641946 -0.7508882 -0.7014290 -0.7542267 -0.6456398 -0.6791542
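Because these are predictions on the same data the superlearner was trained on, any accuracy summary will be optimistic, but as a rough check we can compare them to the observed outcomes. A minimal sketch, assuming whz is fully observed so the predictions line up row-for-row with washb_data:

# Rough (optimistic) training-set MSE of the ensemble
preds <- sl_fit$predict(washb_task)
mean((preds - washb_data$whz)^2)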

Extras

  • Cross validate your entire ensembled superlearner using the cross-validation package origami, written by the same authors as sl3, or just hold out a testing data set to evaluate performance (a rough sketch of the hold-out approach appears after this list).

  • Use make_learner() to customize the tuning parameters of your base learners or metalearner. Ex: lrnr_RF_200trees <- make_learner(Lrnr_randomForest, ntree = 200)

  • Add many layers to your superlearner and organize it into a “pipeline”

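As mentioned in the first bullet, one simple way to get an honest performance estimate is to hold out a test set. A minimal sketch (not part of the original demo; note that sl3’s automatic imputation and missingness-indicator columns need to end up consistent across the two tasks for prediction to work):

# Sketch: train on a random 80% of the data and evaluate on the held-out 20%
test_idx <- sample(nrow(washb_data), size = round(0.2 * nrow(washb_data)))
train_task <- make_sl3_Task(data = washb_data[-test_idx, ], covariates = covars, outcome = outcome)
test_task <- make_sl3_Task(data = washb_data[test_idx, ], covariates = covars, outcome = outcome)
sl_fit_train <- sl$train(train_task)
test_preds <- sl_fit_train$predict(test_task)
mean((test_preds - washb_data$whz[test_idx])^2)  # held-out MSE
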
For more demos, check out the following teaching materials from the authors of sl3. My tutorial uses one of their example data sets in case you’d like to extend your learning via their training resources.

  • https://tlverse.org/tlverse-handbook/ensemble-machine-learning.html

  • https://tlverse.org/acic2019-workshop/ensemble-machine-learning.html

  • https://github.com/tlverse/sl3_lecture