diff --git a/README-example-1.png b/README-example-1.png
index c9f9c55..afcf9ac 100644
Binary files a/README-example-1.png and b/README-example-1.png differ
diff --git a/docs/CONTRIBUTING.html b/docs/CONTRIBUTING.html
index 08fc68c..20b08f7 100644
--- a/docs/CONTRIBUTING.html
+++ b/docs/CONTRIBUTING.html
@@ -1,35 +1,42 @@
-/Users/nimahejazi/git/survtmle/CONTRIBUTING.md • survtmle
+Contributing to survtmle development • survtmle
@@ -95,26 +108,27 @@
Contributing to survtmle development


We, the authors of the survtmle R package, follow the same guide that is used for contributing to the development of the popular ggplot2 R package. This document is simply a formal restatement of that fact.

The goal of this guide is to help you get up and running contributing to survtmle as quickly as possible. The guide is divided into two main pieces:

  • Filing a bug report or feature request in an issue.
  • Suggesting a change via a pull request.

Issues

When filing an issue, the most important thing is to include a minimal reproducible example so that we can quickly verify the problem, and then figure out how to fix it. There are three things you need to include to make your example reproducible: required packages, data, code.

  1. Packages should be loaded at the top of the script, so it’s easy to see which ones the example needs.

  2. The easiest way to include data is to use dput() to generate the R code to recreate it.

  3. Spend a little bit of time ensuring that your code is easy for others to read:

      @@ -125,10 +139,11 @@

      Issues

You can check you have actually made a reproducible example by starting up a fresh R session and pasting your script in.
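For instance, a script along the following lines (the packages, simulated data, and call here are purely illustrative, not a real report) can be pasted into a fresh session in one go:

# packages at the top
library(survtmle)

# a small data set standing in for the output of dput() on your data
set.seed(1234)
n <- 100
ftime <- round(runif(n, 1, 4))
ftype <- round(runif(n, 0, 2))
trt <- rbinom(n, 1, 0.5)
adjustVars <- data.frame(W1 = rnorm(n), W2 = rnorm(n))

# the smallest call that triggers the behavior being reported
fit <- survtmle(ftime = ftime, ftype = ftype, trt = trt,
                adjustVars = adjustVars,
                glm.ftime = "trt + W1 + W2", glm.ctime = "trt + W1 + W2",
                method = "mean", t0 = 4)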

(Unless you’ve been specifically asked for it, please don’t include the output of sessionInfo().)


Pull requests

To contribute a change to survtmle, follow these steps:

  1. Create a branch in git and make your changes.
  2. @@ -150,7 +165,7 @@

    Pull requests

    Each PR corresponds to a git branch, so if you expect to submit multiple changes make sure to create multiple branches. If you have multiple changes that depend on each other, start with the first one and don’t submit any others until the first one has been processed.

  3. Use survtmle coding style. Please follow the official tidyverse style guide. Maintaining a consistent style across the whole code base makes it much easier to jump into the code. If you're modifying existing survtmle code that doesn't follow the style guide, a separate pull request to fix the style would be greatly appreciated. To lower the burden on contributors, we've included a recipe, make style, that will re-format code to follow these conventions, provided that you've installed the styler package.

  4. If you're adding new parameters or a new function, you'll also need to document them with roxygen. Make sure to re-run devtools::document() on the code before submitting.

This seems like a lot of work, but don't worry if your pull request isn't perfect: it's a learning process, and unless you've submitted a few in the past it's unlikely that your pull request will be accepted as is. Please don't submit pull requests that change existing behaviour. Instead, think about how you can add a new feature in a minimally invasive way.

@@ -167,12 +182,13 @@

Pull requests

-Site built with pkgdown.
+Site built with pkgdown 1.3.0.

diff --git a/docs/LICENSE-text.html b/docs/LICENSE-text.html
new file mode 100644
index 0000000..a207b77
--- /dev/null
+++ b/docs/LICENSE-text.html
@@ -0,0 +1,141 @@
+License • survtmle
+YEAR: 2016
+COPYRIGHT HOLDER: David C. Benkeser
+Site built with pkgdown 1.3.0.

diff --git a/docs/articles/index.html b/docs/articles/index.html
index 19b7837..baeba93 100644
--- a/docs/articles/index.html
+++ b/docs/articles/index.html
@@ -1,6 +1,6 @@
Articles • survtmle
@@ -95,12 +108,12 @@

All vignettes

@@ -118,12 +131,13 @@

All vignettes

-Site built with pkgdown.
+Site built with pkgdown 1.3.0.

diff --git a/docs/articles/survtmle_intro.html b/docs/articles/survtmle_intro.html
index 864e48a..48f17c9 100644
--- a/docs/articles/survtmle_intro.html
+++ b/docs/articles/survtmle_intro.html
@@ -1,34 +1,39 @@
Targeted Learning for Survival Analysis with Competing Risks • survtmle

2019-04-16

+Source: vignettes/survtmle_intro.Rmd
Introduction

@@ -96,32 +104,35 @@

Single failure type

We examine the use of survtmle in a variety of simple examples. The package can be loaded as follows:

library(survtmle)
## survtmle: Targeted Learning for Survival Analysis
## Version: 1.1.0

We simulate a simple data set with no censoring and a single cause of failure to illustrate the machinery of the survtmle package.
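The simulation code is as follows (this chunk is repeated verbatim in the timepoints section later in the vignette):

set.seed(1234)
n <- 200
t_0 <- 6
trt <- rbinom(n, 1, 0.5)
adjustVars <- data.frame(W1 = round(runif(n)), W2 = round(runif(n, 0, 2)))
ftime <- round(1 + runif(n, 1, 4) - trt + adjustVars$W1 + adjustVars$W2)
ftype <- round(runif(n, 0, 1))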

The simple data structure contains a set of baseline covariates (adjustVars), a binary treatment variable (trt), a failure time that is a function of the treatment, adjustment variables, and a random error (ftime), and a failure type (ftype), which denotes the cause of failure (0 means no failure, 1 means failure). The first few rows of data can be viewed as follows.
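A sketch of such a call (the vignette's exact code is not shown in this rendering; the use of the tibble package is suggested by the deprecation warning below):

# coerce to a tibble for compact printing of the first rows
tibble::as_data_frame(data.frame(ftype, ftime, trt, adjustVars))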

## Warning: `as_data_frame()` is deprecated, use `as_tibble()` (but mind the new semantics).
+## This warning is displayed once per session.
## # A tibble: 200 x 5
 ##    ftype ftime   trt    W1    W2
 ##    <dbl> <dbl> <int> <dbl> <dbl>
-##  1    0.    5.     0    1.    1.
-##  2    1.    4.     1    1.    1.
-##  3    1.    4.     1    0.    2.
-##  4    0.    6.     1    1.    2.
-##  5    0.    4.     1    1.    1.
-##  6    1.    3.     1    1.    0.
-##  7    0.    7.     0    0.    2.
-##  8    0.    3.     0    0.    0.
-##  9    1.    4.     1    0.    1.
-## 10    0.    4.     1    1.    1.
-## # ... with 190 more rows
-## # ... with 190 more rows
+##  1     0     5     0     1     1
+##  2     1     4     1     1     1
+##  3     1     4     1     0     2
+##  4     0     6     1     1     2
+##  5     0     4     1     1     1
+##  6     1     3     1     1     0
+##  7     0     7     0     0     2
+##  8     0     3     0     0     0
+##  9     1     4     1     0     1
+## 10     0     4     1     1     1
+## # … with 190 more rows

It is important to note that the current survtmle distribution only supports integer-valued failure times. If failure times are continuous-valued, then, unfortunately, we require the user to perform an additional pre-processing step to convert the observed failure times to ranked integers prior to applying the survtmle function. We hope to build support for this situation in future versions of the package.


@@ -130,19 +141,19 @@

Covariate adjustment via logistic regression

A common goal is to compare the incidence of failure at a fixed time between the two treatment groups. Covariate adjustment is often desirable in this comparison to improve efficiency (Moore and Laan 2009). This covariate adjustment may be facilitated by estimating a series of iterated covariate-conditional means (Robins 1999; Bang and Robins 2005; van der Laan and Gruber 2012). The final iterated covariate-conditional mean is marginalized over the empirical distribution of baseline covariates to obtain an estimate of the marginal cumulative incidence.

-

Here, we invoke the eponymous survtmle function to compute the iterated mean-based (method = "mean") covariate-adjusted estimates of the cumulative incidence at time six (t0 = 6) in each of the treatment groups using quasi-logistic regression (formula specified via glm.ftime) to estimate the iterated means. The glm.ftime argument should be a valid right-hand-side formula specification based on colnames(adjustVars) and "trt". Here we use a simple main terms regression.

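The call itself is not shown in this rendering; a minimal sketch consistent with the warnings and output below (the object name fit1 is referenced later as "Fit 1", while the exact main-terms formula is our assumption) is:

# Fit 1: mean-based TMLE, adjusting only the failure-time regressions
fit1 <- survtmle(ftime = ftime, ftype = ftype,
                 trt = trt, adjustVars = adjustVars,
                 glm.ftime = "trt + W1 + W2",
                 method = "mean", t0 = t_0)
fit1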
## Warning in checkInputs(ftime = ftime, ftype = ftype, trt = trt, t0 = t0, :
 ## glm.trt and SL.trt not specified. Proceeding with glm.trt = '1'
## Warning in checkInputs(ftime = ftime, ftype = ftype, trt = trt, t0 = t0, :
 ## glm.ctime and SL.ctime not specified. Computing Kaplan-Meier estimates.
## Warning in chol.default(B, pivot = TRUE): the matrix is either rank-
 ## deficient or indefinite
## $est
 ##          [,1]
 ## 0 1 0.5667312
@@ -151,19 +162,19 @@ 

## $var
##              0 1          1 1
## 0 1 0.0261215098 0.0002153689
-## 1 1 0.0002153689 0.0185303036
+## 1 1 0.0002153689 0.0185303037

Internally, survtmle estimates the covariate-conditional treatment probability (via glm.trt or SL.trt, see below) and covariate-conditional censoring distribution (via glm.ctime or SL.ctime, see below). In the above example, the treatment probability does not depend on covariates (as in e.g., a randomized trial) and so we did not specify a way to adjust for covariates in estimating the treatment probability. In this case, survtmle sets glm.trt = "1", which corresponds with empirical estimates of treatment probability, and sets glm.ctime to be equivalent to the Kaplan-Meier censoring distribution estimates.

In practice, we may wish to adjust for covariates when computing estimates of the covariate-conditional treatment and censoring probabilities. In observational studies, the distribution of treatment may differ by measured covariates, while in almost any study (including randomized trials) it is possible that censoring differs by covariates. Thus, we often wish to adjust for covariates to account for measured confounders of treatment receipt and censoring.

-

This adjustment may be accomplished using logistic regression through the glm.trt and glm.ctime arguments, respectively. The glm.trt argument should be a valid right-hand-side formula specification based on colnames(adjustVars). The glm.ctime argument should be a valid right-hand-side formula specification based on colnames(adjustVars), "trt", and "t" used to model the hazard function for censoring. By including "trt" and "t", the function allows censoring probabilities to depend on treatment assignment and time, respectively. Here we call survtmle again, now adjusting for covariates in the treatment and censoring fits.

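A sketch of the corresponding call (fit2 is the name the timepoints section refers back to; the right-hand-side formulas are assumptions consistent with the description above):

# Fit 2: also adjust for covariates in the treatment and censoring fits
fit2 <- survtmle(ftime = ftime, ftype = ftype,
                 trt = trt, adjustVars = adjustVars,
                 glm.trt = "W1 + W2",
                 glm.ftime = "trt + W1 + W2",
                 glm.ctime = "trt + W1 + W2 + t",
                 method = "mean", t0 = t_0)
fit2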
## $est
 ##          [,1]
 ## 0 1 0.5657950
@@ -179,15 +190,15 @@ 

Covariate adjustment via Super Learner

While we can certainly use logistic regression to model the treatment, censoring, and iterated means, a large benefit afforded by the survtmle package is how it leverages SuperLearner ensemble machine learning to estimate these quantities in a more flexible manner. The Super Learner method is a generalization of stacked regression (Breiman 1996) that uses cross-validation to select the best-performing estimator from a library of candidate estimators (Laan, Polley, and Hubbard 2007). Many popular machine learning algorithms have been implemented in the SuperLearner package.

To obtain SuperLearner estimates, we can use the options SL.trt, SL.ctime, and SL.ftime to estimate the conditional treatment, censoring, and iterated means, respectively. See ?SuperLearner for details on correctly specifying a super learner library and see listWrappers() to print the methods implemented in the SuperLearner package. Here we demonstrate a call to survtmle using a small library of simple algorithms included in base R.

# Fit 3: SuperLearner estimators for treatment, failure, and censoring.
fit3 <- survtmle(ftime = ftime, ftype = ftype,
                 trt = trt, adjustVars = adjustVars,
                 SL.trt = c("SL.glm", "SL.mean", "SL.step"),
                 SL.ftime = c("SL.glm", "SL.mean", "SL.step"),
                 SL.ctime = c("SL.glm", "SL.mean", "SL.step"),
                 method = "mean", t0 = t_0)
## Loading required package: nnls
## $est
 ##          [,1]
 ## 0 1 0.5541546
@@ -197,7 +208,7 @@ 

##              0 1          1 1
## 0 1 0.0027940998 0.0001872402
## 1 1 0.0001872402 0.0017399509

-

Remark: Invoking survtmle with method = "mean" and SL.ftime requires fitting a Super Learner for each time point from seq_len(t0). If there are many unique time points observed in the data, this can become a computationally intensive process. In such cases, we recommend either redefining the ftime variable to pool across time points or using method = "hazard" (see below).



@@ -205,15 +216,15 @@

Using the method of cause-specific hazards

An alternative method to the iterated mean-based TMLE for estimating cumulative incidence is based on estimating the (cause-specific) hazard function. This estimator is implemented by specifying method = "hazard" in a call to survtmle. Just as with method = "mean", we can use either glm. or SL. to adjust for covariates. However, now the glm.ftime formula may additionally include functions of time, as this formula is now being used in a pooled regression to estimate cause-specific hazards over time.

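A sketch of the call (the name fit4 anticipates tp.fit4 in the timepoints section; the formulas, including the time term, are assumptions):

# Fit 4: TMLE based on the cause-specific hazard, with glm adjustment
fit4 <- survtmle(ftime = ftime, ftype = ftype,
                 trt = trt, adjustVars = adjustVars,
                 glm.trt = "W1 + W2",
                 glm.ftime = "trt + W1 + W2 + t",
                 glm.ctime = "trt + W1 + W2 + t",
                 method = "hazard", t0 = t_0)
fit4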
## $est
 ##          [,1]
 ## 0 1 0.5864610
@@ -224,16 +235,16 @@ 

## 0 1 0.0028138563 0.0002638809
## 1 1 0.0002638809 0.0018133124

Here’s an example using Super Learner.

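A sketch using the same Super Learner library as Fit 3 (the object name is hypothetical):

# hazard-based TMLE with Super Learner estimators (hypothetical name fit5)
fit5 <- survtmle(ftime = ftime, ftype = ftype,
                 trt = trt, adjustVars = adjustVars,
                 SL.trt = c("SL.glm", "SL.mean", "SL.step"),
                 SL.ftime = c("SL.glm", "SL.mean", "SL.step"),
                 SL.ctime = c("SL.glm", "SL.mean", "SL.step"),
                 method = "hazard", t0 = t_0)
fit5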
## $est
 ##          [,1]
 ## 0 1 0.5836815
@@ -243,7 +254,7 @@ 

##              0 1          1 1
## 0 1 0.0026784034 0.0002269052
## 1 1 0.0002269052 0.0020455780

-

Remark: The TMLE algorithm for the hazard-based estimator differs from the iterated mean-based TMLE. In particular, the algorithm is iterative and has no guarantee of convergence. While we have not identified instances where convergence is a serious problem, we encourage users to submit any such situations as GitHub issues or to write directly to benkeser@emory.edu. The stopping criteria for the iteration may be adjusted via tol and maxIter options. Increasing tol or decreasing maxIter will lead to faster convergence; however, it is recommended that tol be set no larger than 1 / sqrt(length(ftime)). If maxIter is reached without convergence, one should check that fit$meanIC are all less than 1 / sqrt(length(ftime)).



@@ -251,27 +262,27 @@

Multiple failure types

In all of the preceding examples, we have restricted our attention to the case where there is only a single failure type of interest. Now we consider scenarios in which we observe multiple failure types. First, we simulate data with two types of failure.

set.seed(1234)
n <- 200
trt <- rbinom(n, 1, 0.5)
adjustVars <- data.frame(W1 = round(runif(n)), W2 = round(runif(n, 0, 2)))
ftime <- round(1 + runif(n, 1, 4) - trt + adjustVars$W1 + adjustVars$W2)
ftype <- round(runif(n, 0, 2))

This simulated data structure is similar to the single failure type data; however, the failure type variable (ftype) now contains two distinct types of failure (with 0 still reserved for no failure).

## # A tibble: 200 x 5
 ##    ftype ftime   trt    W1    W2
 ##    <dbl> <dbl> <int> <dbl> <dbl>
-##  1    0.    5.     0    1.    1.
-##  2    2.    4.     1    1.    1.
-##  3    1.    4.     1    0.    2.
-##  4    1.    6.     1    1.    2.
-##  5    1.    4.     1    1.    1.
-##  6    1.    3.     1    1.    0.
-##  7    0.    7.     0    0.    2.
-##  8    1.    3.     0    0.    0.
-##  9    1.    4.     1    0.    1.
-## 10    1.    4.     1    1.    1.
-## # ... with 190 more rows
-## # ... with 190 more rows
+##  1     0     5     0     1     1
+##  2     2     4     1     1     1
+##  3     1     4     1     0     2
+##  4     1     6     1     1     2
+##  5     1     4     1     1     1
+##  6     1     3     1     1     0
+##  7     0     7     0     0     2
+##  8     1     3     0     0     0
+##  9     1     4     1     0     1
+## 10     1     4     1     1     1
+## # … with 190 more rows

When multiple failure types are present, a common goal is to compare the cumulative incidence of a particular failure type at a fixed time between the two treatment groups, while accounting for the fact that participants may fail due to other failure types. Covariate adjustment is again desirable to improve efficiency and account for measured confounders of treatment and censoring.

@@ -280,14 +291,14 @@

Covariate adjustment via logistic regression

The call to invoke survtmle is exactly the same as in the single failure type case.

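A sketch of the call on the new data (the object name is hypothetical; the formulas mirror Fit 2):

# mean-based TMLE with two failure types (hypothetical name fit6)
fit6 <- survtmle(ftime = ftime, ftype = ftype,
                 trt = trt, adjustVars = adjustVars,
                 glm.trt = "W1 + W2",
                 glm.ftime = "trt + W1 + W2",
                 glm.ctime = "trt + W1 + W2 + t",
                 method = "mean", t0 = t_0)
fit6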
## $est
 ##          [,1]
 ## 0 1 0.4229114
@@ -303,15 +314,15 @@ 

## 1 2 5.957709e-05 -2.382514e-03 7.728650e-05 2.615476e-03

The output object contains cumulative incidence estimates for each of the four groups defined by the two failure types and treatments.

There are sometimes failure types that are not of direct interest to our study. Because survtmle invoked with method = "mean" computes an estimate of the cumulative incidence of each failure type separately, we can save on computation time by specifying which failure types we care about via the ftypeOfInterest option.

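For example (a sketch with a hypothetical object name):

# restrict estimation to type 1 failures via ftypeOfInterest
fit6_f1 <- survtmle(ftime = ftime, ftype = ftype,
                    trt = trt, adjustVars = adjustVars,
                    glm.trt = "W1 + W2",
                    glm.ftime = "trt + W1 + W2",
                    glm.ctime = "trt + W1 + W2 + t",
                    method = "mean", t0 = t_0,
                    ftypeOfInterest = 1)
fit6_f1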
## $est
 ##          [,1]
 ## 0 1 0.4229114
@@ -326,15 +337,15 @@ 

Covariate adjustment via Super Learner

As before, we can use the SuperLearner ensemble learning algorithm to adjust for covariates in multiple failure type settings as well.

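A sketch of the call (the name fit7 is implied by the comment on Fit 10 below; the library matches Fit 3):

# Fit 7: SuperLearner estimators with two failure types
fit7 <- survtmle(ftime = ftime, ftype = ftype,
                 trt = trt, adjustVars = adjustVars,
                 SL.trt = c("SL.glm", "SL.mean", "SL.step"),
                 SL.ftime = c("SL.glm", "SL.mean", "SL.step"),
                 SL.ctime = c("SL.glm", "SL.mean", "SL.step"),
                 method = "mean", t0 = t_0)
fit7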
## $est
 ##          [,1]
 ## 0 1 0.4347158
@@ -358,14 +369,14 @@ 

Covariate adjustment via logistic regression

The TMLE based on cause-specific hazards can also be used to compute cumulative incidence estimates in settings with multiple failure types. As above, the glm.ftime formula may additionally include functions of time, as this formula is now being used in a pooled regression to estimate the cause-specific hazard of each failure type over time.

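A sketch of the call (hypothetical object name; the time term in glm.ftime follows the description above):

# hazard-based TMLE with two failure types and glm adjustment
# (hypothetical name fit8)
fit8 <- survtmle(ftime = ftime, ftype = ftype,
                 trt = trt, adjustVars = adjustVars,
                 glm.trt = "W1 + W2",
                 glm.ftime = "trt + W1 + W2 + t",
                 glm.ctime = "trt + W1 + W2 + t",
                 method = "hazard", t0 = t_0)
fit8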
## $est
 ##          [,1]
 ## 0 1 0.4590612
@@ -380,14 +391,14 @@ 

## 0 2 5.883916e-05 -5.379741e-05 3.877006e-03 -3.037918e-03
## 1 2 -4.525266e-05 8.922855e-05 -3.037918e-03 3.410066e-03

We can also leverage the SuperLearner algorithm when using the method of cause-specific hazards with multiple failure types of interest.

# Fit 10: same as Fit 7 above, but using the "hazard" method
fit10 <- survtmle(ftime = ftime, ftype = ftype,
                  trt = trt, adjustVars = adjustVars,
                  SL.trt = c("SL.glm", "SL.mean", "SL.step"),
                  SL.ftime = c("SL.glm", "SL.mean", "SL.step"),
                  SL.ctime = c("SL.glm", "SL.mean", "SL.step"),
                  method = "hazard", t0 = t_0)
fit10
## $est
 ##          [,1]
 ## 0 1 0.4561936
@@ -401,7 +412,7 @@ 

## 1 1 -1.309696e-03 2.011762e-03 -1.172315e-05 6.302634e-05
## 0 2 4.811715e-05 -1.172315e-05 3.036662e-03 -2.161558e-03
## 1 2 5.362138e-05 6.302634e-05 -2.161558e-03 2.523458e-03

-

As with the iterated-mean based TMLE, we can obtain estimates of cumulative incidence of only certain failure types (via ftypeOfInterest); however, this does not necessarily result in faster computation, as it did in the case above. In situations where the convergence of the algorithm is an issue, it may be useful to invoke multiple calls to survtmle with singular ftypeOfInterest. If such convergence issues arise, please report them as GitHub issues or contact us at benkeser@emory.edu.


@@ -411,8 +422,8 @@

In certain situations, we have knowledge that the incidence of an event is bounded below/above for every stratum in the population. It is possible to incorporate these bounds into the TMLE estimation procedure to ensure that any resulting estimate of cumulative incidence is compatible with these bounds. Please refer to Benkeser, Carone, and Gilbert (2017) for more on bounded TMLEs and their potential benefits.

Bounds can be passed to survtmle by creating a data.frame that contains columns with specific names. In particular, there should be a column named "t". There should additionally be columns for the lower and upper bound for each type of failure. For example, if there is only one type of failure (ftype = 1 or ftype = 0), then the bounds data.frame can contain columns "l1" and "u1", denoting the lower and upper bounds, respectively, on the iterated conditional mean (for method = "mean") or the conditional hazard function (for method = "hazard"). If there are two types of failure (ftype = 1, ftype = 2, or ftype = 0), then there can additionally be columns "l2" and "u2", denoting the lower and upper bounds, respectively, on the iterated conditional mean for type two failures (for method = "mean") or the conditional cause-specific hazard function for type two failures (for method = "hazard").

Here is a simple example.

bf1 <- data.frame(t = seq_len(t_0), l1 = rep(0.01, t_0), u1 = rep(0.99, t_0))
bf1
##   t   l1   u1
 ## 1 1 0.01 0.99
 ## 2 2 0.01 0.99
@@ -421,15 +432,15 @@ 

## 5 5 0.01 0.99
## 6 6 0.01 0.99

Now that we have specified our bounds, we can invoke survtmle repeating our first example (“Fit 1”), but now restricting the iterated conditional means to follow the bounds specified above.

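A sketch of the bounded call (hypothetical object name; the bounds are supplied via the bounds argument):

# repeat the mean-based fit, now constrained by the bounds in bf1
fit_b1 <- survtmle(ftime = ftime, ftype = ftype,
                   trt = trt, adjustVars = adjustVars,
                   glm.trt = "W1 + W2",
                   glm.ftime = "trt + W1 + W2",
                   glm.ctime = "trt + W1 + W2 + t",
                   method = "mean", t0 = t_0,
                   bounds = bf1)
fit_b1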
## $est
 ##          [,1]
 ## 0 1 0.4230616
@@ -444,14 +455,14 @@ 

## 0 2 -1.688921e-03 -2.394680e-05 2.544824e-03 7.730107e-05
## 1 2 5.968034e-05 -2.382745e-03 7.730107e-05 2.615547e-03

When there are multiple failure types of interest, we can still provide bounds for the iterated conditional means (or the conditional hazard function, whichever is appropriate based on our specification of the method argument).

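The data.frame printed below can be built as follows (a sketch; the name bf2 is hypothetical and the values are read off the printed output):

bf2 <- data.frame(t = seq_len(t_0),
                  l1 = rep(0.01, t_0), u1 = rep(0.99, t_0),
                  l2 = rep(0.02, t_0), u2 = rep(0.99, t_0))
bf2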
##   t   l1   u1   l2   u2
 ## 1 1 0.01 0.99 0.02 0.99
 ## 2 2 0.01 0.99 0.02 0.99
@@ -460,15 +471,15 @@ 

## 5 5 0.01 0.99 0.02 0.99
## 6 6 0.01 0.99 0.02 0.99

Now, we invoke survtmle, passing in the specified bounds using the appropriate argument:

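A sketch of the call (hypothetical object name):

# bounded TMLE with bounds supplied for both failure types
fit_b2 <- survtmle(ftime = ftime, ftype = ftype,
                   trt = trt, adjustVars = adjustVars,
                   glm.trt = "W1 + W2",
                   glm.ftime = "trt + W1 + W2",
                   glm.ctime = "trt + W1 + W2 + t",
                   method = "mean", t0 = t_0,
                   bounds = bf2)
fit_b2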
## $est
 ##          [,1]
 ## 0 1 0.4230616
@@ -495,23 +506,23 @@ 

The survtmle package provides the function timepoints to compute the estimated cumulative incidence over multiple timepoints. This function is invoked after an initial call to survtmle with option returnModels = TRUE. By setting this option, the timepoints function is able to recycle fits for the conditional treatment probability, censoring distribution, and, in the case of method = "hazard", the hazard fits. Thus, invoking timepoints is faster than making repeated calls to survtmle with different t0.

There is some subtlety involved in properly leveraging this facility. Recall that the censoring distribution fit (and cause-specific hazard fit) pools over all time points. Thus, in order to most efficiently use timepoints, the initial call to survtmle should be made setting option t0 equal to the final time point at which one wants estimates of cumulative incidence. This allows these hazard fitting procedures to utilize all of the data to estimate the conditional hazard function.

We demonstrate the use of timepoints below based on the following simulated data.

set.seed(1234)
n <- 200
t_0 <- 6
trt <- rbinom(n, 1, 0.5)
adjustVars <- data.frame(W1 = round(runif(n)), W2 = round(runif(n, 0, 2)))
ftime <- round(1 + runif(n, 1, 4) - trt + adjustVars$W1 + adjustVars$W2)
ftype <- round(runif(n, 0, 1))

Imagine that we would like cumulative incidence estimates at times seq_len(t_0) based on fit2 above (mean-based TMLE using glm covariate adjustment). However, note that when we originally called fit2 the option returnModels was set to its default value FALSE. Thus, we must refit this object setting the function to return the model fits.

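A sketch of the refit (fit2_rm is the name used below; only returnModels differs from Fit 2):

# refit Fit 2, retaining the fitted models for reuse by timepoints()
fit2_rm <- survtmle(ftime = ftime, ftype = ftype,
                    trt = trt, adjustVars = adjustVars,
                    glm.trt = "W1 + W2",
                    glm.ftime = "trt + W1 + W2",
                    glm.ctime = "trt + W1 + W2 + t",
                    method = "mean", t0 = t_0,
                    returnModels = TRUE)
fit2_rm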
## $est
 ##          [,1]
 ## 0 1 0.5657950
@@ -521,10 +532,10 @@ 

##              0 1          1 1
## 0 1 0.0029470633 0.0002170549
## 1 1 0.0002170549 0.0016411250

-

Now we can call timepoints to return estimates of cumulative incidence at each time seq_len(t_0).

tp.fit2 <- timepoints(fit2_rm, times = seq_len(t_0))
# print the object
tp.fit2
## $est
 ##   t1         t2         t3        t4        t5        t6
 ## 1  0 0.01616033 0.08976902 0.1610393 0.3918143 0.5657950
@@ -535,18 +546,18 @@ 

## 1 NA 0.0001427635 0.0007369614 0.001131293 0.002291745 0.002947063
## 2 NA 0.0006741195 0.0015590438 0.002276863 0.002277642 0.001641125

Internally, timepoints is making calls to survtmle, but is passing in the fitted treatment and censoring fits from fit2_rm$trtMod and fit2_rm$ctimeMod. However, for method = "mean" the function is still fitting the iterated means separately for each time required by the call to timepoints. Thus, the call to timepoints may be quite slow if method = "mean", SL.ftime is specified (as opposed to glm.ftime), and/or many times are passed in via times. Future implementations may attempt to avoid this extra model fitting. For now, if many times are required, we recommend using method = "hazard", which is able to recycle all of the model fits. Below is an example of this.

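A sketch of this approach (fit4_rm is a hypothetical refit of Fit 4 with returnModels = TRUE; tp.fit4 is the name used by the plotting calls below):

# hazard-based fit that retains its models, then timepoints()
fit4_rm <- survtmle(ftime = ftime, ftype = ftype,
                    trt = trt, adjustVars = adjustVars,
                    glm.trt = "W1 + W2",
                    glm.ftime = "trt + W1 + W2 + t",
                    glm.ctime = "trt + W1 + W2 + t",
                    method = "hazard", t0 = t_0,
                    returnModels = TRUE)
tp.fit4 <- timepoints(fit4_rm, times = seq_len(t_0))
tp.fit4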
## $est
 ##   t1         t2         t3        t4        t5        t6
 ## 1  0 0.03457851 0.09883419 0.2215276 0.3966049 0.5864610
@@ -557,66 +568,57 @@ 

## 1 NA 0.0001481315 0.000799149 0.001273434 0.002852316 0.002813856
## 2 NA 0.0007267294 0.001906479 0.003232722 0.003374974 0.001813312

There is a plotting method available for timepoints to plot cumulative incidence over time in each treatment group and for each failure type.

# plot raw cumulative incidence
plot(tp.fit4, type = "raw")

Because the cumulative incidence function is being invoked pointwise, it is possible that the resulting curve is not monotone. However, it is possible to show that projecting this curve onto a monotone function via isotonic regression results in an estimate with identical asymptotic properties to the pointwise estimate. Therefore, we additionally provide an option type = "iso" (the default) that provides these smoothed curves.

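For example:

# plot the monotone, isotonic-regression-smoothed curves ("iso" is the default)
plot(tp.fit4, type = "iso")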

Session Information

-
## R version 3.4.4 (2018-03-15)
-## Platform: x86_64-apple-darwin17.3.0 (64-bit)
-## Running under: macOS High Sierra 10.13.4
+
## R version 3.5.3 (2019-03-11)
+## Platform: x86_64-apple-darwin15.6.0 (64-bit)
+## Running under: macOS Mojave 10.14
 ## 
 ## Matrix products: default
-## BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
-## LAPACK: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib
+## BLAS: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRblas.0.dylib
+## LAPACK: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib
 ## 
 ## locale:
 ## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
 ## 
 ## attached base packages:
-## [1] stats     graphics  grDevices utils     datasets  base     
+## [1] stats     graphics  grDevices utils     datasets  methods   base     
 ## 
 ## other attached packages:
-## [1] nnls_1.4       survtmle_1.1.0 tibble_1.4.2  
+## [1] nnls_1.4       survtmle_1.1.0 tibble_2.1.1  
 ## 
 ## loaded via a namespace (and not attached):
-##  [1] Rcpp_0.12.16             pillar_1.2.1            
-##  [3] compiler_3.4.4           plyr_1.8.4              
-##  [5] bindr_0.1.1              prettyunits_1.0.2       
-##  [7] progress_1.1.2.9003      methods_3.4.4           
-##  [9] tools_3.4.4              digest_0.6.15           
-## [11] evaluate_0.10.1          gtable_0.2.0            
-## [13] lattice_0.20-35          pkgconfig_2.0.1         
-## [15] rlang_0.2.0.9000         Matrix_1.2-13           
-## [17] cli_1.0.0.9002           ggsci_2.8               
-## [19] yaml_2.1.18              speedglm_0.3-2          
-## [21] bindrcpp_0.2.2           xml2_1.2.0              
-## [23] withr_2.1.2              SuperLearner_2.0-23-9000
-## [25] dplyr_0.7.4              stringr_1.3.0           
-## [27] knitr_1.20               hms_0.4.2               
-## [29] rprojroot_1.3-2          grid_3.4.4              
-## [31] glue_1.2.0               R6_2.2.2                
-## [33] rmarkdown_1.9.8          selectr_0.4-1           
-## [35] ggplot2_2.2.1            tidyr_0.8.0             
-## [37] purrr_0.2.4              magrittr_1.5            
-## [39] ansistrings_1.0.0.9000   backports_1.1.2         
-## [41] scales_0.5.0.9000        htmltools_0.3.6         
-## [43] MASS_7.3-49              assertthat_0.2.0        
-## [45] colorspace_1.3-2         labeling_0.3            
-## [47] utf8_1.1.3               stringi_1.1.7           
-## [49] lazyeval_0.2.1           munsell_0.4.3           
-## [51] crayon_1.3.4
+##  [1] Rcpp_1.0.1          plyr_1.8.4          compiler_3.5.3
+##  [4] pillar_1.3.1        tools_3.5.3         digest_0.6.18
+##  [7] evaluate_0.13       memoise_1.1.0       gtable_0.3.0
+## [10] lattice_0.20-38     pkgconfig_2.0.2     rlang_0.3.3
+## [13] Matrix_1.2-17       cli_1.1.0           ggsci_2.9
+## [16] commonmark_1.7      yaml_2.2.0          speedglm_0.3-2
+## [19] pkgdown_1.3.0       xfun_0.6            SuperLearner_2.0-24
+## [22] stringr_1.4.0       dplyr_0.8.0.1       roxygen2_6.1.1
+## [25] xml2_1.2.0          knitr_1.22          desc_1.2.0
+## [28] fs_1.2.7            rprojroot_1.3-2     grid_3.5.3
+## [31] tidyselect_0.2.5    glue_1.3.1          R6_2.4.0
+## [34] fansi_0.4.0         rmarkdown_1.12      tidyr_0.8.3
+## [37] purrr_0.3.2         ggplot2_3.1.0       magrittr_1.5
+## [40] backports_1.1.3     scales_1.0.0        htmltools_0.3.6
+## [43] MASS_7.3-51.4       assertthat_0.2.1    colorspace_1.4-1
+## [46] labeling_0.3        utf8_1.1.4          stringi_1.4.3
+## [49] lazyeval_0.2.2      munsell_0.5.0       crayon_1.3.4

@@ -624,26 +626,25 @@

References

Bang, Heejung, and James M Robins. 2005. “Doubly Robust Estimation in Missing Data and Causal Inference Models.” Biometrics 61 (4): 962–73. https://doi.org/10.1111/j.1541-0420.2005.00377.x.

Benkeser, David, Marco Carone, and Peter B Gilbert. 2017. “Improved Estimation of the Cumulative Incidence of Rare Outcomes.” Statistics in Medicine. https://doi.org/10.1002/sim.7337.

Breiman, Leo. 1996. “Stacked Regressions.” Machine Learning 24 (1): 49–64. https://doi.org/10.1007/BF00117832.

Laan, Mark J van der, and Susan Gruber. 2012. “Targeted Minimum Loss Based Estimation of Causal Effects of Multiple Time Point Interventions.” The International Journal of Biostatistics 8 (1): 1–34. https://doi.org/10.1515/1557-4679.1370.

Laan, Mark J van der, Eric C Polley, and Alan E Hubbard. 2007. “Super Learner.” Statistical Applications in Genetics and Molecular Biology 6 (1): 1–23. https://doi.org/10.2202/1544-6115.1309.

Moore, Kelly L, and Mark J van der Laan. 2009. “Increasing Power in Randomized Trials with Right Censored Outcomes Through Covariate Adjustment.” Journal of Biopharmaceutical Statistics 19 (6): 1099–1131. https://doi.org/10.1080/10543400903243017.

Robins, Jamie M. 1999. “Robust Estimation in Sequentially Ignorable Missing Data and Causal Inference Models.” Proceedings of the American Statistical Association Section on Bayesian Statistical Science. http://www.biostat.harvard.edu/robins/jsaprocpat1.pdf.

@@ -659,7 +660,7 @@

  • Multiple failure types
  • Estimation in bounded models
  • Utility functions
  • Session Information
  • References
@@ -674,11 +675,12 @@

-Site built with pkgdown.
+Site built with pkgdown 1.3.0.

diff --git a/docs/articles/survtmle_intro_files/figure-html/unnamed-chunk-5-1.png b/docs/articles/survtmle_intro_files/figure-html/unnamed-chunk-5-1.png
index 9de90b1..576c432 100644
Binary files a/docs/articles/survtmle_intro_files/figure-html/unnamed-chunk-5-1.png and b/docs/articles/survtmle_intro_files/figure-html/unnamed-chunk-5-1.png differ

diff --git a/docs/articles/survtmle_intro_files/figure-html/unnamed-chunk-6-1.png b/docs/articles/survtmle_intro_files/figure-html/unnamed-chunk-6-1.png
index 922d47d..ae82a25 100644
Binary files a/docs/articles/survtmle_intro_files/figure-html/unnamed-chunk-6-1.png and b/docs/articles/survtmle_intro_files/figure-html/unnamed-chunk-6-1.png differ

diff --git a/docs/authors.html b/docs/authors.html
index 3a579d7..053c271 100644
--- a/docs/authors.html
+++ b/docs/authors.html
@@ -1,6 +1,6 @@
Authors • survtmle
@@ -95,19 +108,19 @@
-David Benkeser. Author, maintainer, copyright holder.
+David Benkeser. Author, maintainer, copyright holder. ORCID

-Nima Hejazi. Author.
+Nima Hejazi. Author. ORCID

@@ -123,12 +136,13 @@

    Authors

-Site built with pkgdown.
+Site built with pkgdown 1.3.0.

diff --git a/docs/docsearch.css b/docs/docsearch.css
new file mode 100644
index 0000000..e5f1fe1
--- /dev/null
+++ b/docs/docsearch.css
@@ -0,0 +1,148 @@
+/* Docsearch -------------------------------------------------------------- */
+/*
+  Source: https://github.com/algolia/docsearch/
+  License: MIT
+*/
(vendored Algolia DocSearch styles for the new site search widget)

diff --git a/docs/docsearch.js b/docs/docsearch.js
new file mode 100644
index 0000000..b35504c
--- /dev/null
+++ b/docs/docsearch.js
@@ -0,0 +1,85 @@
(handlers that focus the search bar on "?" and highlight search terms passed via the ?q= query parameter)

diff --git a/docs/index.html b/docs/index.html
index 04fa085..53b0348 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -1,5 +1,5 @@
Compute Targeted Minimum Loss-Based Estimates in Right-Censored Survival Settings • survtmle


    @@ -228,28 +232,38 @@

    OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

    -
    @@ -271,11 +286,12 @@

    Dev status

-Site built with pkgdown.
+Site built with pkgdown 1.3.0.

diff --git a/docs/news/index.html b/docs/news/index.html
index f0c9a74..9934df1 100644
--- a/docs/news/index.html
+++ b/docs/news/index.html
@@ -1,35 +1,42 @@
-All news • survtmle
+Changelog • survtmle
@@ -95,37 +108,48 @@
-survtmle 1.1.0
+survtmle 1.1.1 (Unreleased)
+
+  • Minor bug fixes and documentation updates.
+
+survtmle 1.1.0 (2018-04-13)

    • Adds support for the use of speedglm to fit the numerous regressions fit in the estimation procedure. Users may see warnings when speedglm fails, in which case the code defaults back to standard glm.
  • Fixes problems with the plot.tp.survtmle method induced by changes in the inner workings of tidyr as of tidyr v0.8.0.
    • Adds a method confint.tp.survtmle that computes and provides output tables for statistical inference directly from objects of class tp.survtmle. This provides information equivalent to that output by confint.survtmle.
-survtmle 1.0.0
+survtmle 1.0.0 (2017-07-14)

    • The first public release made available on CRAN.
    -
    @@ -138,12 +162,13 @@

    Contents

-Site built with pkgdown.
+Site built with pkgdown 1.3.0.

diff --git a/docs/pkgdown.css b/docs/pkgdown.css
index 181fe63..c03fb08 100644
--- a/docs/pkgdown.css
+++ b/docs/pkgdown.css
(sticky-footer rewrite, typographic tweaks, reference-index icon widths, mark.js highlight styles, and htmlwidget spacing)

diff --git a/docs/pkgdown.js b/docs/pkgdown.js
index 64b20df..eb7e83d 100644
--- a/docs/pkgdown.js
+++ b/docs/pkgdown.js
(wraps the file in an IIFE, selects the navbar link with the longest matching path prefix, and migrates from Clipboard to ClipboardJS for the copy buttons)

diff --git a/docs/pkgdown.yml b/docs/pkgdown.yml
index d884ed6..bd742a3 100644
--- a/docs/pkgdown.yml
+++ b/docs/pkgdown.yml
@@ -1,7 +1,6 @@
-pandoc: 2.1.3
-pkgdown: 0.1.0.9000
-pkgdown_sha: 17f683717d0a1547f13a08a259b34c58927452d6
+pandoc: 2.3.1
+pkgdown: 1.3.0
+pkgdown_sha: ~
 articles:
-  refs.bib: refs.bib
   survtmle_intro: survtmle_intro.html

diff --git a/docs/reference/LogLikelihood.html b/docs/reference/LogLikelihood.html
index a331b5b..766691f 100644
--- a/docs/reference/LogLikelihood.html
+++ b/docs/reference/LogLikelihood.html
@@ -1,6 +1,6 @@
Log-Likelihood — LogLikelihood • survtmle
@@ -98,19 +111,23 @@

    Computes the log-likelihood for a model. Used by optim on occasion.

    +
    LogLikelihood(beta, X, Y)
Arguments

    @@ -150,12 +167,13 @@

    Contents

-Site built with pkgdown.
+Site built with pkgdown 1.3.0.

diff --git a/docs/reference/LogLikelihood_offset.html b/docs/reference/LogLikelihood_offset.html
index 58b3a3b..210dc90 100644
--- a/docs/reference/LogLikelihood_offset.html
+++ b/docs/reference/LogLikelihood_offset.html
@@ -1,6 +1,6 @@
Log-Likelihood Offset — LogLikelihood_offset • survtmle
@@ -99,20 +112,24 @@

    Computes the log-likelihood for a logistic regression model with an offset. Used by optim on occasion.

    +
    LogLikelihood_offset(beta, Y, H, offset)
Arguments

    @@ -156,12 +173,13 @@

    Contents

-Site built with pkgdown.
+Site built with pkgdown 1.3.0.

diff --git a/docs/reference/checkInputs.html b/docs/reference/checkInputs.html
index c635b0b..e9508bd 100644
--- a/docs/reference/checkInputs.html
+++ b/docs/reference/checkInputs.html
@@ -1,6 +1,6 @@
Check Function Inputs — checkInputs • survtmle
@@ -98,24 +111,30 @@

    Check the input values of function parameters for errors.

    +
-checkInputs(ftime, ftype, trt, adjustVars, t0 = max(ftime[ftype > 0]),
-  SL.ftime = NULL, SL.ctime = NULL, SL.trt = NULL, glm.ftime = NULL,
-  glm.ctime = NULL, glm.trt = "1", returnIC = TRUE, returnModels = TRUE,
-  ftypeOfInterest = unique(ftype[ftype != 0]), trtOfInterest = unique(trt),
-  method = "hazard", bounds = NULL, verbose = FALSE,
-  tol = 1/(length(ftime)), maxIter = 100, Gcomp = FALSE)
+checkInputs(ftime, ftype, trt, adjustVars, t0 = max(ftime[ftype > 0]),
+  SL.ftime = NULL, SL.ctime = NULL, SL.trt = NULL,
+  glm.ftime = NULL, glm.ctime = NULL, glm.trt = "1",
+  returnIC = TRUE, returnModels = TRUE,
+  ftypeOfInterest = unique(ftype[ftype != 0]),
+  trtOfInterest = unique(trt), method = "hazard", bounds = NULL,
+  verbose = FALSE, tol = 1/(length(ftime)), maxIter = 100,
+  Gcomp = FALSE)
Arguments

    @@ -145,7 +164,7 @@

    Ar

    +default this is set to max(ftime).

    @@ -155,7 +174,7 @@

    Ar See ?SuperLearner for more information on how to specify valid SuperLearner libraries. It is expected that the wrappers used in the library will play nicely with the input variables, which will -be called "trt" and names(adjustVars).

    +be called "trt" and names(adjustVars).

    @@ -164,7 +183,7 @@

    Ar estimate of the conditional hazard for censoring. It is expected that the wrappers used in the library will play nicely with the input variables, which will be called "trt" and -names(adjustVars).

    +names(adjustVars).

    @@ -172,7 +191,7 @@

    Ar SL.library argument in the call to SuperLearner for the estimate of the conditional probability of treatment. It is expected that the wrappers used in the library will play nicely with the input -variables, which will be names(adjustVars).

    +variables, which will be names(adjustVars).

    @@ -182,7 +201,7 @@

    Ar conditional mean). Ignored if SL.ftime != NULL. Use "trt" to specify the treatment in this formula (see examples). The formula can additionally include any variables found in -names(adjustVars).

    +names(adjustVars).

    @@ -191,7 +210,7 @@

    Ar for the estimate of the conditional hazard for censoring. Ignored if SL.ctime != NULL. Use "trt" to specify the treatment in this formula (see examples). The formula can additionally include any -variables found in names(adjustVars).

    +variables found in names(adjustVars).

    @@ -200,7 +219,7 @@

    Ar for the estimate of the conditional probability of treatment. Ignored if SL.trt != NULL. By default set to "1", corresponding to using empirical estimates of each value of trt. The formula can -include any variables found in names(adjustVars).

    +include any variables found in names(adjustVars).

    @@ -221,14 +240,14 @@

    Ar

    @@ -303,12 +322,13 @@

    Contents

-Site built with pkgdown.
+Site built with pkgdown 1.3.0.

diff --git a/docs/reference/cleanglm.html b/docs/reference/cleanglm.html
index 7b1cabe..5a4b27e 100644
--- a/docs/reference/cleanglm.html
+++ b/docs/reference/cleanglm.html
@@ -1,6 +1,6 @@
Clean up outputs from GLM — cleanglm • survtmle
@@ -99,20 +112,24 @@

    Removes superfluous output from the call to glm that is not needed to perform later predictions. It is applied as a space saving technique.

    +
    cleanglm(cm)
    -

    Arguments

    +

    Arguments

    t0

    The time at which to return cumulative incidence estimates. By -default this is set to max(ftime).

    SL.ftime
    SL.ctime
    SL.trt
    glm.ftime
    glm.ctime
    glm.trt
    returnICftypeOfInterest

    An input specifying what failure types to compute estimates of incidence for. The default value computes estimates for -values unique(ftype). Can alternatively be set to a vector of +values unique(ftype). Can alternatively be set to a vector of values found in ftype.

    trtOfInterest

    An input specifying which levels of trt are of interest. The default value computes estimates for values -unique(trt). Can alternatively be set to a vector of values +unique(trt). Can alternatively be set to a vector of values found in trt.

    @@ -145,12 +162,13 @@

    Contents

    -

    Site built with pkgdown.

    +

    Site built with pkgdown 1.3.0.

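The space-saving idea generalizes to any glm fit: drop components that predict() does not need once newdata is supplied explicitly. The components removed below are illustrative; they are not necessarily the exact set that cleanglm strips.

# sketch: strip a glm fit of components predict() can live without
cm <- glm(mpg > 20 ~ wt + hp, data = mtcars, family = binomial())
small <- cm
small$y <- NULL        # copy of the response
small$model <- NULL    # embedded model frame
small$data <- NULL     # embedded data set
print(object.size(cm))
print(object.size(small))
# predictions still work when newdata is supplied explicitly
head(predict(small, newdata = mtcars, type = "response"))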
diff --git a/docs/reference/confint.survtmle.html b/docs/reference/confint.survtmle.html
index b991b3d..b5f3ca4 100644
--- a/docs/reference/confint.survtmle.html
+++ b/docs/reference/confint.survtmle.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "confint.survtmle • survtmle"]

Computes confidence intervals for a fitted survtmle object.

# S3 method for survtmle
confint(object, parm = seq_along(object$est), level = 0.95, ...)

Examples (code unchanged; the rebuild adds syntax-highlighting markup):
# simulate data
set.seed(1234)
n <- 100
ftime <- round(runif(n, 1, 4))
ftype <- round(runif(n, 0, 2))
trt <- rbinom(n, 1, 0.5)
adjustVars <- data.frame(W1 = rnorm(n), W2 = rnorm(n))

# fit a survtmle object
fit <- survtmle(ftime = ftime, ftype = ftype, trt = trt,
                adjustVars = adjustVars,
                glm.ftime = "trt + W1 + W2", glm.ctime = "trt + W1 + W2",
                method = "mean", t0 = 4)
# get confidence intervals
ci <- confint(fit)
ci
#>          2.5 %    97.5 %
#> 0 1 0.5278856 0.8163684
#> 1 1 0.5115048 0.8154636

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
diff --git a/docs/reference/confint.tp.survtmle.html b/docs/reference/confint.tp.survtmle.html
index cc619ba..a3fa31b 100644
--- a/docs/reference/confint.tp.survtmle.html
+++ b/docs/reference/confint.tp.survtmle.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "confint.tp.survtmle • survtmle"]

Computes confidence intervals for a fitted tp.survtmle object.

# S3 method for tp.survtmle
confint(object, parm, level = 0.95, ...)

Examples (code unchanged; the rebuild adds syntax-highlighting markup):
# simulate data
set.seed(1234)
n <- 100
ftime <- round(runif(n, 1, 4))
ftype <- round(runif(n, 0, 2))
trt <- rbinom(n, 1, 0.5)
adjustVars <- data.frame(W1 = rnorm(n), W2 = rnorm(n))

# fit a survtmle object
fit <- survtmle(ftime = ftime, ftype = ftype, trt = trt,
                adjustVars = adjustVars,
                glm.ftime = "trt + W1 + W2", glm.ctime = "trt + W1 + W2",
                method = "mean", t0 = 4)
# extract cumulative incidence at each timepoint
tpfit <- timepoints(fit, times = seq_len(4))
# get confidence intervals
ci <- confint(tpfit)
ci
#> $`0 1`
#>           2.5 %    97.5 %
#> [1,] 0.02051147 0.2203985
#> …

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
diff --git a/docs/reference/estimateCensoring.html b/docs/reference/estimateCensoring.html
index 038f6e7..bc33b75 100644
--- a/docs/reference/estimateCensoring.html
+++ b/docs/reference/estimateCensoring.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Estimate Censoring Mechanisms — estimateCensoring • survtmle"]

Computes an estimate of the hazard for censoring using either glm or SuperLearner based on log-likelihood loss. The function then computes … equal to each value of trtOfInterest in turn. One of these columns must be named C that is a counting process for the right-censoring variable. The function will fit a regression with C as the outcome and functions of trt and names(adjustVars) as specified by glm.ctime or SL.ctime as predictors.

estimateCensoring(dataList, adjustVars, t0, SL.ctime = NULL,
  glm.ctime = NULL, glm.family, returnModels = FALSE, verbose = TRUE,
  gtol = 0.001, ...)

[argument-table hunks for SL.ctime and glm.ctime: wording unchanged; inline code markup added around R identifiers, as in checkInputs.html]

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
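The regression described above is a pooled (discrete-time) logistic regression on subject-timepoint rows. A hand-rolled sketch on hypothetical long-format data, not the package's dataList structure:

# toy long-format data: one row per subject-timepoint, C = censoring indicator
set.seed(1)
longDat <- data.frame(
  id  = rep(1:50, each = 3),
  t   = rep(1:3, times = 50),
  trt = rep(rbinom(50, 1, 0.5), each = 3),
  W1  = rep(rnorm(50), each = 3)
)
longDat$C <- rbinom(nrow(longDat), 1, plogis(-3 + 0.2 * longDat$W1))
censFit <- glm(C ~ trt + W1 + t, data = longDat, family = binomial())
# predicted discrete censoring hazard at each subject-timepoint
longDat$haz <- predict(censFit, type = "response")
# survivor function of censoring: within-subject cumulative product of (1 - hazard)
longDat$G <- ave(1 - longDat$haz, longDat$id, FUN = cumprod)
head(longDat)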
diff --git a/docs/reference/estimateHazards.html b/docs/reference/estimateHazards.html
index fa68e21..03941ee 100644
--- a/docs/reference/estimateHazards.html
+++ b/docs/reference/estimateHazards.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Estimation for the Method of Cause-Specific Hazards — estimateHazards • survtmle"]

This function computes an estimate of the cause-specific hazard functions over all times using either glm or SuperLearner. The structure … first entry in dataList to iteratively fit hazard regression models for each cause of failure. Thus, this data.frame needs to have a column called Nj for each value of j in J. The first fit estimates the hazard of min(J), while subsequent fits estimate the pseudo-hazard of all other values of j, where pseudo-hazard is used to mean the probability of a failure due to type j at a particular timepoint given no failure of any type at any previous timepoint AND no failure due to type … This structure ensures that no strata have estimated hazards that sum to more than one over all possible causes of failure at a particular timepoint.

Usage, re-wrapped by the rebuild:
-estimateHazards(dataList, J, adjustVars, SL.ftime = NULL, glm.ftime = NULL,
-  glm.family, returnModels, bounds, verbose, ...)
+estimateHazards(dataList, J, adjustVars, SL.ftime = NULL,
+  glm.ftime = NULL, glm.family, returnModels, bounds, verbose, ...)

[argument-table hunks for SL.ftime and glm.ftime: wording unchanged; inline code markup added around R identifiers, as in checkInputs.html]

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
diff --git a/docs/reference/estimateIteratedMean.html b/docs/reference/estimateIteratedMean.html
index 1379154..9200bb4 100644
--- a/docs/reference/estimateIteratedMean.html
+++ b/docs/reference/estimateIteratedMean.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Estimation for the Method of Iterated Means — estimateIteratedMean • survtmle"]

This function computes an estimate of the G-computation regression at a specified time t using either glm or SuperLearner. … observed value of trt. The remaining should in turn have all rows set to each value of trtOfInterest in the survtmle call. Currently the code requires each data.frame to have named columns for each name in names(adjustVars), as well as a column named trt. It must also have a column named Nj.Y where j corresponds with the numeric values input in allJ. These are the indicators of failure due to the various causes before time t and are necessary for determining who to … a column called C.Y where Y is again t - 1, so that right-censored observations are not included in the regressions. The function will fit a regression with Qj.star.t+1 (also needed as a column in wideDataList) on functions of trt and names(adjustVars) as specified by glm.ftime or SL.ftime.

estimateIteratedMean(wideDataList, t, whichJ, allJ, t0, adjustVars,
  SL.ftime = NULL, glm.ftime = NULL, verbose, returnModels = FALSE,
  bounds = NULL, ...)

[argument-table hunks for SL.ftime and glm.ftime: wording unchanged; inline code markup added around R identifiers, as in checkInputs.html]

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
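One step of this sequential ("iterated mean") regression can be sketched on toy data; the quasibinomial family accommodates a continuous outcome in (0, 1), and every name below is hypothetical rather than the package's wideDataList bookkeeping:

# toy sketch of one iterated-mean regression step
set.seed(2)
n <- 200
W <- rnorm(n)
trt <- rbinom(n, 1, 0.5)
Q3 <- plogis(-1 + W + trt)        # stand-in for the time t+1 iterated mean
# regress the t+1 quantity on treatment and covariates
step_fit <- glm(Q3 ~ trt + W, family = "quasibinomial")
# evaluate the fit with trt set to 1 for everyone: one G-computation step
Q2_1 <- predict(step_fit, newdata = data.frame(trt = 1, W = W),
                type = "response")
mean(Q2_1)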
diff --git a/docs/reference/estimateTreatment.html b/docs/reference/estimateTreatment.html
index 977ee3b..c17fadd 100644
--- a/docs/reference/estimateTreatment.html
+++ b/docs/reference/estimateTreatment.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Estimate Treatment Mechanisms — estimateTreatment • survtmle"]

This function computes the conditional probability of having trt for each specified level either using glm or SuperLearner. … trt == max(trt) and compute the probability of trt == min(trt) as one minus this probability.

estimateTreatment(dat, adjustVars, glm.trt = NULL, SL.trt = NULL,
  returnModels = FALSE, verbose = FALSE, gtol = 0.001, ...)

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
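On toy data the treatment mechanism reduces to a logistic regression of the indicator trt == max(trt) on the adjustment variables, as described above; this sketch is illustrative, not the package's internal bookkeeping:

# sketch: propensity-style fit for a binary treatment
set.seed(3)
adjustVars <- data.frame(W1 = rnorm(100), W2 = rnorm(100))
trt <- rbinom(100, 1, plogis(0.3 * adjustVars$W1))
df <- data.frame(trt, adjustVars)
trtFit <- glm(I(trt == max(trt)) ~ W1 + W2, data = df,
              family = binomial())
g1 <- predict(trtFit, type = "response")  # P(trt = max(trt) | W)
g0 <- 1 - g1                              # P(trt = min(trt) | W)
summary(g1)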
diff --git a/docs/reference/fast_glm.html b/docs/reference/fast_glm.html
index c156ed8..8dab0cf 100644
--- a/docs/reference/fast_glm.html
+++ b/docs/reference/fast_glm.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Wrapper for faster Generalized Linear Models — fast_glm • survtmle"]

A convenience utility to fit regression models more quickly in the main internal functions for estimation, which usually require logistic regression. Use of speedglm appears to provide roughly an order of magnitude improvement in speed when compared to glm in custom benchmarks.

fast_glm(reg_form, data, family, ...)

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
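A quick benchmark in the spirit of the claim above, assuming the speedglm package is installed; speedglm() takes a formula/data/family interface like glm(), and the timings you see will depend on your machine:

# compare glm() and speedglm() on a moderately large logistic regression
library(speedglm)
set.seed(4)
n <- 1e5
X <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
X$y <- rbinom(n, 1, plogis(X$x1 - X$x2))
system.time(fit_glm <- glm(y ~ x1 + x2, data = X, family = binomial()))
system.time(fit_speed <- speedglm(y ~ x1 + x2, data = X,
                                  family = binomial()))
cbind(coef(fit_glm), coef(fit_speed))  # same estimates, different runtimes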
diff --git a/docs/reference/fluctuateHazards.html b/docs/reference/fluctuateHazards.html
index 4a0a77b..4e04969 100644
--- a/docs/reference/fluctuateHazards.html
+++ b/docs/reference/fluctuateHazards.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Fluctuation for the Method of Cause-Specific Hazards — fluctuateHazards • survtmle"]

This function performs a fluctuation of an initial estimate of the cause-specific hazard functions using a call to glm (i.e., a logistic … then obtains predictions based on this fit on each of the data.frame objects in dataList.

Usage, re-wrapped by the rebuild:
-fluctuateHazards(dataList, allJ, ofInterestJ, nJ, uniqtrt, ntrt, t0, verbose,
-  ...)
+fluctuateHazards(dataList, allJ, ofInterestJ, nJ, uniqtrt, ntrt, t0,
+  verbose, ...)

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
diff --git a/docs/reference/fluctuateIteratedMean.html b/docs/reference/fluctuateIteratedMean.html
index 17d46ab..5383664 100644
--- a/docs/reference/fluctuateIteratedMean.html
+++ b/docs/reference/fluctuateIteratedMean.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Fluctuation for the Method of Iterated Means — fluctuateIteratedMean • survtmle"]

This function performs a fluctuation of an initial estimate of the G-computation regression at a specified time t using a call to … will be used to obtain predictions that are then mapped into the estimates of the cumulative incidence function at t0. Currently the code requires each data.frame to have named columns for each name in names(adjustVars), as well as a column named trt. It must also have a column named Nj.Y where j corresponds with the numeric values input in allJ. These are the indicators of failure due to the various causes before time t and are necessary for determining who to include … a column called C.Y where Y is again t-1, so that right-censored observations are not included in the regressions. The function will fit a logistic regression with Qj.star.t+1 as outcome (also needed as a column in wideDataList) with offset qlogis(Qj.star.t) and number of additional covariates given by length(trtOfInterest). These additional covariates should be columns in each data.frame in wideDataList called H.z.t where z corresponds to each unique value of trtOfInterest. The function returns the same … which is the fluctuated initial regression estimate evaluated at the observed data points.

fluctuateIteratedMean(wideDataList, t, uniqtrt, whichJ, allJ, t0,
  Gcomp = FALSE, bounds = NULL, ...)

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
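The fluctuation described above is, at its core, an intercept-free logistic regression of the outcome on a "clever covariate" with the initial estimate entering as a logit-scale offset. A stripped-down sketch with toy inputs (the package's bookkeeping across wideDataList is more involved):

# sketch of a logistic fluctuation step with an offset
set.seed(5)
n <- 500
Q <- runif(n, 0.2, 0.8)            # initial regression estimate
H <- rnorm(n)^2                    # stand-in clever covariate
Y <- rbinom(n, 1, plogis(qlogis(Q) + 0.1 * H))
fluc <- glm(Y ~ -1 + H + offset(qlogis(Q)), family = binomial())
# updated (fluctuated) estimate evaluated at the observed data
Q_star <- plogis(qlogis(Q) + coef(fluc)["H"] * H)
head(cbind(Q, Q_star))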
diff --git a/docs/reference/format.perc.html b/docs/reference/format.perc.html
index f3a1923..8c4f615 100644
--- a/docs/reference/format.perc.html
+++ b/docs/reference/format.perc.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "format.perc — format.perc • survtmle"]

Copied from package stats.

# S3 method for perc
format(probs, digits)

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
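Assuming it matches the unexported stats helper it was copied from, the behaviour can be reproduced in one line: rescale the probabilities, format them, and append a percent sign.

# sketch: probability vector to percent labels, as used for CI column names
probs <- c(0.025, 0.975)
paste(format(100 * probs, trim = TRUE, scientific = FALSE, digits = 3),
      "%")
#> [1] "2.5 %"  "97.5 %"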
diff --git a/docs/reference/getHazardInfluenceCurve.html b/docs/reference/getHazardInfluenceCurve.html
index 6508b5a..f382012 100644
--- a/docs/reference/getHazardInfluenceCurve.html
+++ b/docs/reference/getHazardInfluenceCurve.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Extract Influence Curve for Estimated Hazard Functions — getHazardInfluenceCurve • survtmle"]

This function computes the hazard-based efficient influence curve at the final estimate of the fluctuated cause-specific hazard functions and … added corresponding to the sum over all timepoints of the estimated efficient influence function evaluated at that observation.

getHazardInfluenceCurve(dataList, dat, allJ, ofInterestJ, nJ, uniqtrt, t0,
  verbose, ...)

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
diff --git a/docs/reference/grad.html b/docs/reference/grad.html
index d24cdce..a8d8f32 100644
--- a/docs/reference/grad.html
+++ b/docs/reference/grad.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Gradient for Logistic Regression — grad • survtmle"]

A function that computes the gradient of the log-likelihood for a logistic regression model. Used by optim on occasion.

grad(beta, Y, X)

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
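The gradient in question is the score of the logistic log-likelihood, t(X) %*% (Y - plogis(X %*% beta)); the companion grad_offset (documented next) differs only in adding a fixed offset term inside plogis(). A small check, using a hand-written version of the formula, that the score vanishes at the glm solution:

# logistic log-likelihood gradient, written out directly
set.seed(6)
n <- 200
X <- cbind(1, rnorm(n))
Y <- rbinom(n, 1, plogis(X %*% c(-0.5, 1)))
logit_grad <- function(beta, Y, X) {
  crossprod(X, Y - plogis(X %*% beta))  # score of the log-likelihood
}
beta_hat <- coef(glm(Y ~ X - 1, family = binomial()))
logit_grad(beta_hat, Y, X)              # approximately zero at the MLE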
diff --git a/docs/reference/grad_offset.html b/docs/reference/grad_offset.html
index d72f99a..dd6c82c 100644
--- a/docs/reference/grad_offset.html
+++ b/docs/reference/grad_offset.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Gradient for Logistic Regression with Offsets — grad_offset • survtmle"]

A function that computes the gradient of the log-likelihood for a logistic regression model with an offset term. Used by optim on occasion.

grad_offset(beta, Y, H, offset = NULL)

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
diff --git a/docs/reference/hazard_tmle.html b/docs/reference/hazard_tmle.html
index 1be38f7..43c5400 100644
--- a/docs/reference/hazard_tmle.html
+++ b/docs/reference/hazard_tmle.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "TMLE for Cause-Specific Hazard Functions — hazard_tmle • survtmle"]

This function estimates the marginal cumulative incidence for failures of specified types using targeted minimum loss-based estimation based on the … method = "hazard" is specified. However, power users could, in theory, make calls directly to this function.

Usage, re-wrapped by the rebuild:
-hazard_tmle(ftime, ftype, trt, t0 = max(ftime[ftype > 0]),
-  adjustVars = NULL, SL.ftime = NULL, SL.ctime = NULL, SL.trt = NULL,
-  glm.ftime = NULL, glm.ctime = NULL, glm.trt = "1",
-  glm.family = "binomial", returnIC = TRUE, returnModels = FALSE,
-  ftypeOfInterest = unique(ftype[ftype != 0]), trtOfInterest = unique(trt),
-  bounds = NULL, verbose = FALSE, tol = 1/(length(ftime)),
-  maxIter = 100, gtol = 0.001, ...)
+hazard_tmle(ftime, ftype, trt, t0 = max(ftime[ftype > 0]),
+  adjustVars = NULL, SL.ftime = NULL, SL.ctime = NULL,
+  SL.trt = NULL, glm.ftime = NULL, glm.ctime = NULL, glm.trt = "1",
+  glm.family = "binomial", returnIC = TRUE, returnModels = FALSE,
+  ftypeOfInterest = unique(ftype[ftype != 0]),
+  trtOfInterest = unique(trt), bounds = NULL, verbose = FALSE,
+  tol = 1/(length(ftime)), maxIter = 100, gtol = 0.001, ...)

Argument-table hunks (wording unchanged; inline code markup added around R identifiers):
  SL.ftime: … which will be called "trt", names(adjustVars), and "t" if method = "hazard".
  SL.ctime, SL.trt, glm.ftime, glm.ctime, glm.trt, ftypeOfInterest, trtOfInterest: as in checkInputs.html.
  bounds: A data.frame of bounds on the conditional hazard function. The data.frame should have a column named "t" that includes values seq_len(t0). The other columns should be names paste0("l",j) and paste0("u",j) for each unique failure type label j, denoting lower and upper bounds, respectively. See examples.

Value hunks (wording unchanged):
  ftimeMod: If returnModels = TRUE the fit object(s) for the call to glm or SuperLearner for the outcome regression models. If method = "mean" this will be a list of length length(ftypeOfInterest) each of length t0 (one regression for each failure type and for each timepoint). If method = "hazard" this will be a list of length length(ftypeOfInterest) with one fit corresponding to the hazard for each cause of failure. If returnModels = FALSE, this entry will be NULL.
  ctimeMod: If returnModels = TRUE the fit object for the call to …

Examples (code unchanged; syntax-highlighting markup added):
## Single failure type examples
# simulate data
set.seed(1234)
n <- 100
trt <- rbinom(n, 1, 0.5)
adjustVars <- data.frame(W1 = round(runif(n)), W2 = round(runif(n, 0, 2)))
ftime <- round(1 + runif(n, 1, 4) - trt + adjustVars$W1 + adjustVars$W2)
ftype <- round(runif(n, 0, 1))

# Fit 1 - fit hazard_tmle object with GLMs for treatment, censoring, failure
fit1 <- hazard_tmle(ftime = ftime, ftype = ftype, …)

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
diff --git a/docs/reference/index.html b/docs/reference/index.html
index ae7a7ac..b58f66c 100644
--- a/docs/reference/index.html
+++ b/docs/reference/index.html
[<head> and navigation hunks: regenerated pkgdown scaffolding for "Function reference • survtmle"; the rebuilt index renders each entry as a call, e.g. survtmle() in place of survtmle]

TMLE Procedures for Estimating Cumulative Incidence
Functions for computing TMLEs for cumulative incidence via the methods of cause-specific hazards and the method of iterated means.
  survtmle(): Compute Targeted Minimum Loss-Based Estimators in Survival Analysis Settings
  timepoints(): Evaluate Results over Time Points of Interest
  mean_tmle(): TMLE for G-Computation of Cumulative Incidence
  hazard_tmle(): TMLE for Cause-Specific Hazard Functions

Estimation and Fluctuation Procedures for Computing TMLEs
Functions for computing TMLEs for cumulative incidence via the methods of cause-specific hazards and the method of iterated means.
  estimateCensoring(): Estimate Censoring Mechanisms
  estimateTreatment(): Estimate Treatment Mechanisms
  estimateIteratedMean(): Estimation for the Method of Iterated Means
  fluctuateIteratedMean(): Fluctuation for the Method of Iterated Means
  estimateHazards(): Estimation for the Method of Cause-Specific Hazards
  fluctuateHazards(): Fluctuation for the Method of Cause-Specific Hazards
  getHazardInfluenceCurve(): Extract Influence Curve for Estimated Hazard Functions

Internal Package Utilities
Functions for internal convenience functions used by core functions.
  grad(): Gradient for Logistic Regression
  grad_offset(): Gradient for Logistic Regression with Offsets
  LogLikelihood(): Log-Likelihood
  LogLikelihood_offset(): Log-Likelihood Offset
  makeDataList(): Convert Short Form Data to List of Wide Form Data
  makeWideDataList(): Convert Long Form Data to List of Wide Form Data
  updateVariables(): Update TMLEs for Hazard to Cumulative Incidence

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
diff --git a/docs/reference/makeDataList.html b/docs/reference/makeDataList.html
index 58f38d3..fa9c4b5 100644
--- a/docs/reference/makeDataList.html
+++ b/docs/reference/makeDataList.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Convert Short Form Data to List of Wide Form Data — makeDataList • survtmle"]

The function takes a data.frame of short format right-censored failure times and reshapes the long format into the wide format needed for calls to … rows for each observation and will set trt column equal to each value of trtOfInterest in turn.

makeDataList(dat, J, ntrt, uniqtrt, t0, bounds = NULL, ...)

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
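The short-to-long reshape underlying this helper can be illustrated with a toy data.frame; the real output additionally carries counting-process and censoring columns, plus one copy of the data per treatment level of interest:

# toy reshape: one row per subject-timepoint up to min(ftime, t0)
dat <- data.frame(id = 1:3, ftime = c(2, 1, 3), ftype = c(1, 0, 1),
                  trt = c(0, 1, 1))
t0 <- 3
longDat <- do.call(rbind, lapply(seq_len(nrow(dat)), function(i) {
  times <- seq_len(min(dat$ftime[i], t0))
  data.frame(dat[i, ], t = times,
             # N1 jumps to one at the subject's failure time for cause 1
             N1 = as.numeric(dat$ftype[i] == 1 & times == dat$ftime[i]))
}))
longDat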
diff --git a/docs/reference/makeWideDataList.html b/docs/reference/makeWideDataList.html
index 4009596..5c8cbdc 100644
--- a/docs/reference/makeWideDataList.html
+++ b/docs/reference/makeWideDataList.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Convert Long Form Data to List of Wide Form Data — makeWideDataList • survtmle"]

The function takes a data.frame and list consisting of short and long format right-censored failure times. The function reshapes the long … trt equal to each level of trtOfInterest and set C.t to zero for everyone.

makeWideDataList(dat, allJ, uniqtrt, adjustVars, dataList, t0, ...)

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
diff --git a/docs/reference/mean_tmle.html b/docs/reference/mean_tmle.html
index 5033f7f..d7e260b 100644
--- a/docs/reference/mean_tmle.html
+++ b/docs/reference/mean_tmle.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "TMLE for G-Computation of Cumulative Incidence — mean_tmle • survtmle"]

This function estimates the marginal cumulative incidence for failures of specified types using targeted minimum loss-based estimation based on the … by survtmle whenever method = "mean" is specified. However, power users could, in theory, make calls directly to this function.

Usage, re-wrapped by the rebuild:
-mean_tmle(ftime, ftype, trt, t0 = max(ftime[ftype > 0]), adjustVars = NULL,
-  SL.ftime = NULL, SL.ctime = NULL, SL.trt = NULL, glm.ftime = NULL,
-  glm.ctime = NULL, glm.trt = "1", glm.family = "binomial",
-  returnIC = TRUE, returnModels = FALSE,
-  ftypeOfInterest = unique(ftype[ftype != 0]), trtOfInterest = unique(trt),
-  bounds = NULL, verbose = FALSE, Gcomp = FALSE, gtol = 0.001, ...)
+mean_tmle(ftime, ftype, trt, t0 = max(ftime[ftype > 0]),
+  adjustVars = NULL, SL.ftime = NULL, SL.ctime = NULL,
+  SL.trt = NULL, glm.ftime = NULL, glm.ctime = NULL, glm.trt = "1",
+  glm.family = "binomial", returnIC = TRUE, returnModels = FALSE,
+  ftypeOfInterest = unique(ftype[ftype != 0]),
+  trtOfInterest = unique(trt), bounds = NULL, verbose = FALSE,
+  Gcomp = FALSE, gtol = 0.001, ...)

Argument-table hunks (wording unchanged; inline code markup added around R identifiers):
  SL.ftime: … which will be called "trt", names(adjustVars), and "t" (if method = "hazard").
  SL.ctime, SL.trt, glm.ftime, glm.ctime, glm.trt, ftypeOfInterest, trtOfInterest: as in checkInputs.html.
  bounds: … function (if method = "hazard") or on the iterated conditional means (if method = "mean"). The data.frame should have a column named "t" that includes values 1:t0. The other columns should be names paste0("l",j) and paste0("u",j) for each unique failure type label j, denoting lower and upper bounds, respectively. See examples.

Value hunks (wording unchanged): ftimeMod and ctimeMod, as in hazard_tmle.html.

Examples (code unchanged; syntax-highlighting markup added):
## Single failure type examples
# simulate data
set.seed(1234)
n <- 100
trt <- rbinom(n, 1, 0.5)
adjustVars <- data.frame(W1 = round(runif(n)), W2 = round(runif(n, 0, 2)))
ftime <- round(1 + runif(n, 1, 4) - trt + adjustVars$W1 + adjustVars$W2)
ftype <- round(runif(n, 0, 1))

# Fit 1 - fit mean_tmle object with GLMs for treatment, censoring, failure
fit1 <- mean_tmle(ftime = ftime, ftype = ftype, …)

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
diff --git a/docs/reference/plot.tp.survtmle-1.png b/docs/reference/plot.tp.survtmle-1.png
index e984d5a..c2f8610 100644
Binary files a/docs/reference/plot.tp.survtmle-1.png and b/docs/reference/plot.tp.survtmle-1.png differ
diff --git a/docs/reference/plot.tp.survtmle.html b/docs/reference/plot.tp.survtmle.html
index 9a1b90c..d85e07d 100644
--- a/docs/reference/plot.tp.survtmle.html
+++ b/docs/reference/plot.tp.survtmle.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Plot Results of Cumulative Incidence Estimates — plot.tp.survtmle • survtmle"]

Step function plots for both raw and smoothed (monotonic) estimates, the latter by isotonic regression of the raw estimates, of cumulative incidence.

# S3 method for tp.survtmle
plot(x, ..., type = c("iso", "raw"),
  pal = ggsci::scale_color_lancet())

Argument-table hunks (wording unchanged; inline code markup added around R identifiers):
  type: character describing whether to provide a plot of raw ("raw") or monotonic ("iso") estimates in the resultant step function plot, with the latter being computed by a call to stats::isoreg

Examples (code unchanged; syntax-highlighting markup added):
library(survtmle)
set.seed(341796)
n <- 100
t_0 <- 10
W <- data.frame(W1 = runif(n), W2 = rbinom(n, 1, 0.5))
A <- rbinom(n, 1, 0.5)
T <- rgeom(n, plogis(-4 + W$W1 * W$W2 - A)) + 1
C <- rgeom(n, plogis(-6 + W$W1)) + 1
ftime <- pmin(T, C)
ftype <- as.numeric(ftime == T)
suppressWarnings(
  fit <- survtmle(ftime = ftime, ftype = ftype,
                  adjustVars = W, glm.ftime = "I(W1*W2) + trt + t",
                  trt = A, glm.ctime = "W1 + t", method = "hazard",
                  verbose = TRUE, t0 = t_0, maxIter = 2)
)
tpfit <- timepoints(fit, times = seq_len(t_0))
plot(tpfit)

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
diff --git a/docs/reference/print.survtmle.html b/docs/reference/print.survtmle.html
index 5303892..c6fa8ee 100644
--- a/docs/reference/print.survtmle.html
+++ b/docs/reference/print.survtmle.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "print.survtmle • survtmle"]

The print method for an object of class survtmle.

# S3 method for survtmle
print(x, ...)

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
diff --git a/docs/reference/print.tp.survtmle.html b/docs/reference/print.tp.survtmle.html
index 2a6545c..02f43ee 100644
--- a/docs/reference/print.tp.survtmle.html
+++ b/docs/reference/print.tp.survtmle.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "print.tp.survtmle • survtmle"]

The print method for a timepoints object of class tp.survtmle.

# S3 method for tp.survtmle
print(x, ...)

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
diff --git a/docs/reference/rtss.html b/docs/reference/rtss.html
index fa57c76..e1f6aa9 100644
--- a/docs/reference/rtss.html
+++ b/docs/reference/rtss.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Mock RTSS/AS01 data set — rtss • survtmle"]

A dataset containing data that is similar in structure to the RTSS/AS01 malaria vaccine trial. Privacy agreements prevent the sharing of the real … the ftype variable changes, simulating output data sets of multiply infected trial participants.

rtss

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
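To take a first look at the mock data after loading it, str() keeps the view compact regardless of how the object is organised:

# inspect the bundled mock data set
library(survtmle)
data(rtss)
str(rtss, max.level = 1)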
diff --git a/docs/reference/rv144.html b/docs/reference/rv144.html
index 3cae113..701c9b0 100644
--- a/docs/reference/rv144.html
+++ b/docs/reference/rv144.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Mock RV144 data set — rv144 • survtmle"]

A dataset containing data that is similar in structure to the RV144 "Thai trial" of the ALVAC/AIDSVAX vaccine. Privacy agreements prevent the sharing of the real data, so please note THAT THIS IS NOT THE REAL RV144 DATA.

rv144

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
diff --git a/docs/reference/survtmle.html b/docs/reference/survtmle.html
index d749979..c0cc473 100644
--- a/docs/reference/survtmle.html
+++ b/docs/reference/survtmle.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Compute Targeted Minimum Loss-Based Estimators in Survival Analysis Settings — survtmle"]

This function estimates the marginal cumulative incidence for failures of specified types using targeted minimum loss-based estimation.

Usage, re-wrapped by the rebuild:
-survtmle(ftime, ftype, trt, adjustVars, t0 = max(ftime[ftype > 0]),
-  SL.ftime = NULL, SL.ctime = NULL, SL.trt = NULL, glm.ftime = NULL,
-  glm.ctime = NULL, glm.trt = NULL, returnIC = TRUE,
-  returnModels = TRUE, ftypeOfInterest = unique(ftype[ftype != 0]),
-  trtOfInterest = unique(trt), method = "hazard", bounds = NULL,
-  verbose = FALSE, tol = 1/(sqrt(length(ftime))), maxIter = 10,
-  Gcomp = FALSE, gtol = 0.001)
+survtmle(ftime, ftype, trt, adjustVars, t0 = max(ftime[ftype > 0]),
+  SL.ftime = NULL, SL.ctime = NULL, SL.trt = NULL,
+  glm.ftime = NULL, glm.ctime = NULL, glm.trt = NULL,
+  returnIC = TRUE, returnModels = TRUE,
+  ftypeOfInterest = unique(ftype[ftype != 0]),
+  trtOfInterest = unique(trt), method = "hazard", bounds = NULL,
+  verbose = FALSE, tol = 1/(sqrt(length(ftime))), maxIter = 10,
+  Gcomp = FALSE, gtol = 0.001)

Argument-table hunks with substantive wording changes:
-ftime: A numeric vector of failure times. Right-censored observations should have corresponding ftype set to 0.
+ftime: An integer-valued vector of failure times. Right-censored observations should have corresponding ftype set to 0.
-ftype: A numeric vector indicating the type of failure. Observations with ftype=0 are treated as a right-censored observation. Each unique value besides zero is treated as a separate type of failure.
+ftype: An integer-valued vector indicating the type of failure. Observations with ftype=0 are treated as a right-censored observation. Each unique value besides zero is treated as a separate type of failure.
The remaining argument-table hunks (SL.ftime, SL.ctime, SL.trt, glm.ftime, glm.ctime, glm.trt, ftypeOfInterest, trtOfInterest, bounds with values seq_len(t0)) and the ftimeMod entry under Value are unchanged in wording; the rebuild adds inline code markup around R identifiers.

Examples (code unchanged; syntax-highlighting markup added):
# simulate data
set.seed(1234)
n <- 200
trt <- rbinom(n, 1, 0.5)
adjustVars <- data.frame(W1 = round(runif(n)), W2 = round(runif(n, 0, 2)))
ftime <- round(1 + runif(n, 1, 4) - trt + adjustVars$W1 + adjustVars$W2)
ftype <- round(runif(n, 0, 1))

# Fit 1
# fit a survtmle object with glm estimators for treatment, censoring, and
# … (call elided in this hunk)

# Fit 2 … censoring and empirical estimators for treatment using the "mean" method
fit2 <- survtmle(ftime = ftime, ftype = ftype,
                 trt = trt, adjustVars = adjustVars,
                 SL.ftime = c("SL.mean"),
                 SL.ctime = c("SL.mean"),
                 method = "mean", t0 = 6)
#> Warning: glm.trt and SL.trt not specified. Proceeding with glm.trt = '1'
#> Loading required package: nnls
fit2
#> $est
#>          [,1]
#> 0 1 0.5284170
#> …

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
diff --git a/docs/reference/timepoints.html b/docs/reference/timepoints.html
index 7475fcf..2775959 100644
--- a/docs/reference/timepoints.html
+++ b/docs/reference/timepoints.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Evaluate Results over Time Points of Interest — timepoints • survtmle"]

Wrapper function for survtmle that takes a fitted survtmle object and computes the TMLE estimated incidence for all times specified in … the original call to survtmle. This can be ensured by making the original call to survtmle with t0 = max(ftime).

timepoints(object, times, returnModels = FALSE)

Value: An object of class tp.survtmle with number of entries equal to length(times). Each entry is named "tX", where X denotes a single value of times.

Examples (code unchanged; syntax-highlighting markup added):
# simulate data
set.seed(1234)
n <- 100
ftime <- round(runif(n, 1, 4))
ftype <- round(runif(n, 0, 2))
trt <- rbinom(n, 1, 0.5)
adjustVars <- data.frame(W1 = rnorm(n), W2 = rnorm(n))

# fit an initial survtmle object with t0=max(ftime)
fm <- survtmle(ftime = ftime, ftype = ftype, …)

# extract cumulative incidence at each timepoint
allTimes <- timepoints(object = fm, times = 1:4, returnModels = FALSE)

# look at results for time 1
class(allTimes$t1)
#> [1] "survtmle"
allTimes$t1
#> $est
#>           [,1]
#> 0 1 0.11420933
#> 1 1 0.08893310
#> …

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
diff --git a/docs/reference/updateVariables.html b/docs/reference/updateVariables.html
index 11ea605..5770716 100644
--- a/docs/reference/updateVariables.html
+++ b/docs/reference/updateVariables.html
[<head> and navigation hunks: regenerated pkgdown metadata and asset links for "Update TMLEs for Hazard to Cumulative Incidence — updateVariables • survtmle"]

A helper function that maps hazard estimates into estimates of cumulative incidence and updates the "clever covariates" used by the targeted minimum loss-based estimation fluctuation step.

Usage, re-wrapped by the rebuild:
-updateVariables(dataList, allJ, ofInterestJ, nJ, uniqtrt, ntrt, t0, verbose,
-  ...)
+updateVariables(dataList, allJ, ofInterestJ, nJ, uniqtrt, ntrt, t0,
+  verbose, ...)

Footer:
-Site built with pkgdown.
+Site built with pkgdown 1.3.0.
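The map from discrete hazards to cumulative incidence that this helper applies can be written out directly for one stratum with two causes of failure: F_j(t) = sum over s <= t of h_j(s) * prod over r < s of (1 - h_1(r) - h_2(r)). A toy check with made-up hazards:

# hazard-to-cumulative-incidence map, one stratum, two causes
h1 <- c(0.10, 0.15, 0.20)  # cause-1 discrete hazard at t = 1, 2, 3
h2 <- c(0.05, 0.05, 0.10)  # cause-2 discrete hazard
# probability of being event-free just before each timepoint
surv_lag <- cumprod(c(1, head(1 - h1 - h2, -1)))
F1 <- cumsum(h1 * surv_lag)  # cumulative incidence, cause 1
F2 <- cumsum(h2 * surv_lag)  # cumulative incidence, cause 2
rbind(F1, F2)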