Using-SingleCaseES.Rmd
The `SingleCaseES` package provides R functions for calculating basic, within-case effect size indices for single-case designs, including several non-overlap measures and parametric effect size measures, as well as for estimating the gradual effects model (Swan & Pustejovsky, 2018). Standard errors and confidence intervals are provided for the subset of effect size indices with known sampling distributions.
The package also includes two graphical user interfaces for interactive use (designed using Shiny), both of which are also available as web apps hosted through shinyapps.io:

- `SCD_effect_sizes()` opens an interactive calculator for the basic non-overlap indices and parametric effect sizes. It is also available at https://jepusto.shinyapps.io/SCD-effect-sizes/
- `shine_gem_scd()` opens an interactive calculator for the gradual effects model. It is also available at https://jepusto.shinyapps.io/gem-scd/
In this vignette, we introduce the package's primary functions for carrying out effect size calculations. We demonstrate how to use the functions for calculating an effect size from a single data series, how to use the `calc_ES()` function for calculating multiple effect sizes from a single data series, and how to use `batch_calc_ES()` for calculating one or multiple effect sizes from multiple data series.
To start, be sure to load the package:
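```r
library(SingleCaseES)
```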
The `SingleCaseES` package includes functions for calculating the major non-overlap measures that have been proposed for use with single-case designs, as well as several parametric effect size measures. The following non-overlap measures are available (function names are listed in parentheses):
- Percentage of non-overlapping data (`PND()`)
- Percentage of all non-overlapping data (`PAND()`)
- Robust improvement rate difference (`IRD()`)
- Percentage exceeding the median (`PEM()`)
- Non-overlap of all pairs (`NAP()`)
- Tau non-overlap (`Tau()`)
- Baseline-corrected Tau (`Tau_BC()`)
- Tau-U (`Tau_U()`)

The following parametric effect sizes are available:
- Standardized mean difference (`SMD()`)
- Log response ratio (`LRRi()` and `LRRd()`)
- Log odds ratio (`LOR()`)
- Log ratio of medians (`LRM()`)

All of the functions for calculating individual effect sizes follow the same syntax. For demonstration purposes, let's take a look at the syntax for `NAP()`, which calculates the non-overlap of all pairs (Parker & Vannest, 2009):
args(NAP)
#> function (A_data, B_data, condition, outcome, baseline_phase = NULL,
#> intervention_phase = NULL, improvement = "increase", SE = "unbiased",
#> confidence = 0.95, trunc_const = FALSE)
#> NULL
We will first demonstrate two methods for inputting data from a single SCD series, then explain the further arguments of the function.
There are two formats in which data can be provided to the functions: the `A_data` and `B_data` inputs, or the `condition` and `outcome` inputs. Both formats can be used for any of the non-overlap or parametric measures.

### `A_data`, `B_data` inputs
The first input format involves providing separate vectors for the data from each phase, where A corresponds to the baseline phase and B corresponds to the treatment phase.
Here are some hypothetical data from the A and B phases of a single-case data series:
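```r
A <- c(20, 20, 26, 25, 22, 23)
B <- c(28, 25, 24, 27, 30, 30, 29)
```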
We can feed these data into the `NAP()` function as follows:
NAP(A_data = A, B_data = B)
#> ES Est SE CI_lower CI_upper
#> 1 NAP 0.9166667 0.06900656 0.5973406 0.9860176
The result reports the NAP effect size estimate for these hypothetical data, along with a standard error and a 95% confidence interval.
### `condition`, `outcome` inputs

The second input format involves providing a single vector containing all of the outcome data from the series, along with a vector that describes the phase of each observation in the data. For example, the hypothetical data above contain 6 baseline phase observations and 7 treatment phase observations. Therefore, the `condition` input should consist of six entries of `'A'` followed by seven entries of `'B'`:
phase <- c(rep("A", 6), rep("B", 7))
phase
#> [1] "A" "A" "A" "A" "A" "A" "B" "B" "B" "B" "B" "B" "B"
This format also requires providing a single vector containing all of the outcome data from the series. Here are the hypothetical data from above, reformatted to follow this structure:
outcome_dat <- c(A, B)
outcome_dat
#> [1] 20 20 26 25 22 23 28 25 24 27 30 30 29
We can feed these data into the `NAP()` function as follows:
NAP(condition = phase, outcome = outcome_dat)
#> ES Est SE CI_lower CI_upper
#> 1 NAP 0.9166667 0.06900656 0.5973406 0.9860176
It's important to note a few further distinctions that come into play when using the `condition` and `outcome` inputs. If the vector provided to `condition` has more than two unique values, the effect size function will assume that the first value of `condition` is the baseline phase and the second unique value of `condition` is the intervention phase:
phase2 <- c(rep("A", 5), rep("B", 5), rep("C",3))
NAP(condition = phase2, outcome = outcome_dat)
#> Warning in calc_ES(A_data = A_data, B_data = B_data, condition = condition, :
#> The 'condition' variable has more than two unique values. Treating 'B' as the
#> intervention phase.
#> ES Est SE CI_lower CI_upper
#> 1 NAP 0.78 0.155 0.4115567 0.9423658
In some single-case data series, the initial observation might not be in the baseline phase. For example, an SCD with four cases might use a cross-over treatment reversal design, where two of the cases follow an ABAB design and the other two cases follow a BABA design. To handle this situation, we will need to specify the baseline phase using the `baseline_phase` argument:
phase_rev <- c(rep("B", 7), rep("A", 6))
outcome_rev <- c(B, A)
NAP(condition = phase_rev, outcome = outcome_rev, baseline_phase = "A")
#> ES Est SE CI_lower CI_upper
#> 1 NAP 0.9166667 0.06900656 0.5973406 0.9860176
In data series that include more than two unique phases, it is also possible to specify which phase should be used as the intervention phase using the `intervention_phase` argument:
NAP(condition = phase2, outcome = outcome_dat,
baseline_phase = "A", intervention_phase = "C")
#> ES Est SE CI_lower CI_upper
#> 1 NAP 1 0.06346478 1 1
NAP(condition = phase2, outcome = outcome_dat,
baseline_phase = "B", intervention_phase = "C")
#> ES Est SE CI_lower CI_upper
#> 1 NAP 1 0.06346478 1 1
All of the effect size functions in `SingleCaseES` are defined based on some assumption about the direction of therapeutic improvement in the outcome (e.g., improvement would correspond to increases in on-task behavior but to decreases in aggressive behavior). For all of the effect size functions, it is important to specify the direction of therapeutic improvement for the data series by providing a value for the `improvement` argument, either `"increase"` or `"decrease"`:
NAP(A_data = A, B_data = B, improvement = "decrease")
#> ES Est SE CI_lower CI_upper
#> 1 NAP 0.08333333 0.06900656 0.01398242 0.4026594
The `NAP()` function and most of the other effect size functions default to assuming that increases in the outcome correspond to improvements.
The `NAP()`, `Tau()`, and `Tau_BC()` functions provide several possible methods for calculating the standard error. By default, the exactly unbiased standard errors are used. However, the functions can also produce standard errors using the Hanley-McNeil estimator, standard errors under the null hypothesis of no effect, or no standard errors at all:
NAP(A_data = A, B_data = B, SE = "unbiased")
#> ES Est SE CI_lower CI_upper
#> 1 NAP 0.9166667 0.06900656 0.5973406 0.9860176
NAP(A_data = A, B_data = B, SE = "Hanley")
#> ES Est SE CI_lower CI_upper
#> 1 NAP 0.9166667 0.07739185 0.5973406 0.9860176
NAP(A_data = A, B_data = B, SE = "null")
#> ES Est SE CI_lower CI_upper
#> 1 NAP 0.9166667 0.1666667 0.5973406 0.9860176
NAP(A_data = A, B_data = B, SE = "none")
#> ES Est
#> 1 NAP 0.9166667
The functions also produce confidence intervals for NAP, Tau, and Tau-BC. By default, a 95% CI is calculated. This can be adjusted by setting the `confidence` argument to a value between 0 and 1. To omit the confidence interval altogether, set the value to `NULL`:
NAP(A_data = A, B_data = B)
#> ES Est SE CI_lower CI_upper
#> 1 NAP 0.9166667 0.06900656 0.5973406 0.9860176
NAP(A_data = A, B_data = B, confidence = .99)
#> ES Est SE CI_lower CI_upper
#> 1 NAP 0.9166667 0.06900656 0.4875014 0.9907377
NAP(A_data = A, B_data = B, confidence = .90)
#> ES Est SE CI_lower CI_upper
#> 1 NAP 0.9166667 0.06900656 0.6591091 0.9822249
NAP(A_data = A, B_data = B, confidence = NULL)
#> ES Est SE
#> 1 NAP 0.9166667 0.06900656
The `SingleCaseES` package includes functions for calculating several other non-overlap indices in addition to NAP. All of the functions accept data in either the `A_data`, `B_data` format or the `condition`, `outcome` format with optional baseline specification, and all of the functions include an argument to specify the direction of improvement. Like the function for NAP, the functions for Tau (`Tau()`) and baseline-corrected Tau (`Tau_BC()`) can produce unbiased standard errors, Hanley-McNeil standard errors, standard errors under the null hypothesis of no effect, or no standard errors at all. Only `NAP()`, `Tau()`, and `Tau_BC()` return standard errors and confidence intervals. The remaining non-overlap measures return only a point estimate:
Tau(A_data = A, B_data = B)
#> ES Est SE CI_lower CI_upper
#> 1 Tau 0.8333333 0.1380131 0.1946812 0.9720352
Tau_BC(A_data = A, B_data = B)
#> ES Est SE CI_lower CI_upper
#> 1 Tau-BC 0.2857143 0.3595159 -0.3260702 0.7180613
PND(A_data = A, B_data = B)
#> ES Est
#> 1 PND 0.7142857
PEM(A_data = A, B_data = B)
#> ES Est
#> 1 PEM 1
PAND(A_data = A, B_data = B)
#> ES Est
#> 1 PAND 0.8461538
IRD(A_data = A, B_data = B)
#> ES Est
#> 1 IRD 0.6904762
Tau_U(A_data = A, B_data = B)
#> ES Est
#> 1 Tau-U 0.7380952
### `SMD()`

The standardized mean difference parameter is defined as the difference between the mean level of the outcome in phase B and the mean level of the outcome in phase A, scaled by the within-case standard deviation of the outcome in phase A.
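In symbols, letting $\bar{y}_A$ and $\bar{y}_B$ denote the sample means of the two phases and $s_A$ the baseline-phase standard deviation, this is simply a restatement of the definition above (prior to any bias correction):

$$d = \frac{\bar{y}_B - \bar{y}_A}{s_A}.$$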
As with all functions discussed so far, the `SMD()` function accepts data in either the `A_data`, `B_data` format or the `condition`, `outcome` format with optional baseline phase specification. In addition, the direction of improvement can be specified as discussed above, with `"increase"` being the default. Changing the direction of improvement does not change the magnitude of the effect size, but does change its sign:
SMD(A_data = A, B_data = B, improvement = "increase")
#> ES Est SE CI_lower CI_upper baseline_SD
#> 1 SMD 1.649932 0.6340935 0.4071314 2.892732 2.503331
SMD(A_data = A, B_data = B, improvement = "decrease")
#> ES Est SE CI_lower CI_upper baseline_SD
#> 1 SMD -1.649932 0.6340935 -2.892732 -0.4071314 2.503331
The `std_dev` argument controls whether the effect size estimate is based on the standard deviation of the baseline phase alone (the default, `std_dev = "baseline"`), or based on the standard deviation after pooling across both phases (`std_dev = "pool"`):
SMD(A_data = A, B_data = B, std_dev = "baseline")
#> ES Est SE CI_lower CI_upper baseline_SD
#> 1 SMD 1.649932 0.6340935 0.4071314 2.892732 2.503331
SMD(A_data = A, B_data = B, std_dev = "pool")
#> ES Est SE CI_lower CI_upper pooled_SD
#> 1 SMD 1.876247 0.6374216 0.6269241 3.125571 2.431752
By default, the `SMD()` function uses the Hedges' g bias correction for small sample sizes. The bias correction can be turned off by specifying the argument `bias_correct = FALSE`.

The `SMD()` function also produces a 95% confidence interval by default. This can be adjusted by setting the `confidence` argument to a value between 0 and 1. To omit the confidence interval altogether, set the argument to `confidence = NULL`.
### `LRRi()` and `LRRd()`
The response ratio parameter is the ratio of the mean level of the outcome during phase B to the mean level of the outcome during phase A. The log response ratio is the natural logarithm of the response ratio. This effect size is appropriate for outcomes measured on a ratio scale, such that zero corresponds to the true absence of the outcome.
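In symbols, letting $\bar{y}_A$ and $\bar{y}_B$ denote the phase means, the response ratio and log response ratio defined above are

$$RR = \frac{\bar{y}_B}{\bar{y}_A}, \qquad LRR = \ln(RR).$$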
The package includes two versions of the LRR:

- LRR-increasing (`LRRi()`) is defined so that positive values correspond to therapeutic improvements.
- LRR-decreasing (`LRRd()`) is defined so that negative values correspond to therapeutic improvements.
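For instance, with the hypothetical series used earlier (and the default count scale), the two versions return estimates of equal magnitude and opposite sign, matching the default `calc_ES()` output shown later in this vignette:

```r
LRRi(A_data = A, B_data = B)  # positive when the B-phase mean is higher
LRRd(A_data = A, B_data = B)  # same magnitude, opposite sign
```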
If you are estimating an effect size for a single series, pick the version of LRR that corresponds to the therapeutic improvement expected for your dependent variable. Similarly, if you are estimating effect sizes for a set of SCD series with the same therapeutic direction, pick the version that corresponds to your intervention’s expected change.
If you are estimating effect sizes for interventions where the direction of improvement depends upon the series or study, the choice between LRRi and LRRd is slightly more involved.
For example, imagine we have ten studies to meta-analyze. For eight studies, the outcomes are initiations of peer interaction, so therapeutic improvements correspond to increases in behavior. For the other two studies, the outcomes are episodes of verbal aggression towards peers, so the therapeutic direction is a decrease. In this context, it would be sensible to pick the `LRRi()` function, because most of the outcomes are positively valenced. For the final two studies, we would specify `improvement = "decrease"`, which would ensure that the sign and magnitude of the effect sizes were consistent with the direction of therapeutic improvement (i.e., a larger log-ratio represents a larger change in the desired direction). Conversely, if most of the outcomes had a negative valence and only a few had a positive valence, then we would use `LRRd()` and specify `improvement = "increase"` for the few series that had positive-valence outcomes.
LRR differs from other effect size indices for single-case designs in that calculating it involves some further information about how the outcome variable was measured. One important piece of information is the scale of the outcome measurements. For outcomes that are measured by frequency counting, the scale might be expressed as a raw count (`scale = "count"`) or as a standardized rate per minute (`scale = "rate"`). For outcomes that are measures of state behavior, where the main dimension of interest is the proportion of time that the behavior occurs, the scale might be expressed as a percentage (ranging from 0 to 100%; `scale = "percentage"`) or as a proportion (ranging from 0 to 1; `scale = "proportion"`). For outcomes that don't fit into any of these categories, set `scale = "other"`.
The scale of the outcome variable has two important implications for how log response ratios are estimated. First, outcomes measured as percentages or proportions need to be coded so that the direction of therapeutic improvement is consistent with the direction of the effect size. Consequently, changing the improvement direction will alter the magnitude, in addition to the sign, of the effect size (see Pustejovsky, 2018, pp. 16–18 for further details). Here is an example:
A <- c(20, 20, 26, 25, 22, 23)
B <- c(28, 25, 24, 27, 30, 30, 29)
LRRi(A_data = A, B_data = B, scale = "percentage")
#> ES Est SE CI_lower CI_upper
#> 1 LRRi 0.1953962 0.05557723 0.08646679 0.3043255
LRRi(A_data = A, B_data = B, improvement = "decrease", scale = "percentage")
#> ES Est SE CI_lower CI_upper
#> 1 LRRi -0.06553504 0.01810144 -0.1010132 -0.03005687
Assuming that improvements correspond to increases, the LRRi value is positive and equal to 0.2. Assuming that improvements correspond to decreases, the LRRi value is negative and smaller in magnitude, equal to -0.07.
Note that if the outcome is a count (the default for both LRR functions) or rate, changing the improvement direction merely changes the sign of the effect size:
A <- c(20, 20, 26, 25, 22, 23)
B <- c(28, 25, 24, 27, 30, 30, 29)
LRRi(A_data = A, B_data = B, scale = "count")
#> ES Est SE CI_lower CI_upper
#> 1 LRRi 0.1953962 0.05557723 0.08646679 0.3043255
LRRi(A_data = A, B_data = B, scale = "count", improvement = "decrease")
#> ES Est SE CI_lower CI_upper
#> 1 LRRi -0.1953962 0.05557723 -0.3043255 -0.08646679
The scale of the outcome has one further important implication. To account for the possibility of a sample mean of zero, the `LRRd()` and `LRRi()` functions use a truncated sample mean, where the truncation level is determined by the scale of the outcome and some further details of how the outcomes were measured. For rates, the truncated mean requires specifying the length of the observation session in minutes:
A <- c(0, 0, 0, 0)
B <- c(28, 25, 24, 27, 30, 30, 29)
LRRd(A_data = A, B_data = B, scale = "rate")
#> ES Est SE CI_lower CI_upper
#> 1 LRRd NaN NaN NaN NaN
LRRd(A_data = A, B_data = B, scale = "rate", observation_length = 30)
#> ES Est SE CI_lower CI_upper
#> 1 LRRd 8.672947 0.5010548 7.690897 9.654996
If no additional information is provided and there is a sample mean of 0, the function returns a value of `NaN`.

For outcomes specified as percentages or proportions, the argument `intervals` must be supplied. For interval recording methods such as partial interval recording or momentary time sampling, provide the number of intervals per session. For continuous recording, set `intervals` equal to 60 times the length of the observation session in minutes:
LRRd(A_data = A, B_data = B, scale = "percentage")
#> ES Est SE CI_lower CI_upper
#> 1 LRRd NaN NaN NaN NaN
LRRd(A_data = A, B_data = B, scale = "percentage", intervals = 180)
#> ES Est SE CI_lower CI_upper
#> 1 LRRd 5.859536 0.5010548 4.877487 6.841586
You can also specify your own value for the constant used to truncate the sample mean using the `D_const` argument. If a vector of values is supplied, their mean will be used as the truncation constant.
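For instance, here is a minimal sketch with an illustrative truncation constant (the value `0.05` is arbitrary, chosen only for demonstration):

```r
# Supply the truncation constant directly (illustrative value)
LRRd(A_data = A, B_data = B, scale = "rate", D_const = 0.05)
```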
Both LRR functions return an effect size that has been bias-corrected for small sample sizes by default. To omit the bias correction, set `bias_correct = FALSE`. Finally, as with the non-overlap measures, the `confidence` argument can be used to change the coverage of the default 95% confidence interval, or set to `NULL` to omit confidence interval calculations.
### `LOR()`

The odds ratio parameter is the ratio of the odds that the outcome occurs during phase B to the odds that the outcome occurs during phase A. The log-odds ratio (LOR) is the natural logarithm of the odds ratio. This effect size is appropriate for outcomes measured on a percentage or proportion scale. The `LOR()` function works almost identically to the `LRRi()` and `LRRd()` functions, with a few exceptions.

The `LOR()` function only works with outcomes that are on proportion or percentage scales:
A_pct <- c(20, 20, 25, 25, 20, 25)
B_pct <- c(30, 25, 25, 25, 35, 30, 25)
LOR(A_data = A_pct, B_data = B_pct, scale = "percentage")
#> ES Est SE CI_lower CI_upper
#> 1 LOR 0.2852854 0.09790282 0.09339935 0.4771713
LOR(A_data = A_pct/100, B_data = B_pct/100, scale = "proportion")
#> ES Est SE CI_lower CI_upper
#> 1 LOR 0.2852854 0.09790282 0.09339935 0.4771713
LOR(A_data = A_pct, B_data = B_pct, scale = "count")
#> Warning: LOR can only be calculated for proportions or percentages. It will
#> return NAs for other outcome scales.
#> ES Est SE CI_lower CI_upper
#> 1 LOR NA NA NA NA
LOR(A_data = A_pct, B_data = B_pct, scale = "proportion")
#> Error in `map()`:
#> ℹ In index: 1.
#> Caused by error in `calc_LOR()`:
#> ! Proportions must be between 0 and 1!
As with the LRR functions, `LOR()` includes an argument to specify the direction of therapeutic improvement, with the default assumption being that a therapeutic improvement is an increase in the behavior. In contrast to LRRi and LRRd, changing the direction of therapeutic improvement only reverses the sign of the LOR; it does not change its absolute magnitude:
LOR(A_data = A_pct, B_data = B_pct,
scale = "percentage", improvement = "increase")
#> ES Est SE CI_lower CI_upper
#> 1 LOR 0.2852854 0.09790282 0.09339935 0.4771713
LOR(A_data = A_pct, B_data = B_pct,
scale = "percentage", improvement = "decrease")
#> ES Est SE CI_lower CI_upper
#> 1 LOR -0.2852854 0.09790282 -0.4771713 -0.09339935
Similar to the LRR functions, `LOR()` will be calculated using truncated sample means for cases where phase means are close to the extremes of the scale. To use truncated means, the number of intervals per observation session must be specified using the `intervals` argument:
LOR(A_data = c(0,0,0), B_data = B_pct,
scale = "percentage")
#> ES Est SE CI_lower CI_upper
#> 1 LOR NaN NaN NaN NaN
LOR(A_data = c(0,0,0), B_data = B_pct,
scale = "percentage", intervals = 20)
#> ES Est SE CI_lower CI_upper
#> 1 LOR 3.60657 0.676328 2.280992 4.932149
For data measured using continuous recording, set the number of intervals equal to 60 times the length of the observation session in minutes. Just like the LRR functions, it is possible to specify your own truncation constant using the `D_const` argument. By default, the `LOR()` function uses a bias correction for small sample sizes, but this can be turned off by specifying the argument `bias_correct = FALSE`. The width of the confidence intervals is controlled via the `confidence` argument; set the argument to `confidence = NULL` to omit the confidence interval calculations.
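To illustrate, here is a sketch combining several of these options (the values are illustrative, and the output is omitted):

```r
LOR(A_data = A_pct, B_data = B_pct, scale = "percentage",
    intervals = 20, bias_correct = FALSE, confidence = 0.90)
```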
The `calc_ES()` function will calculate multiple effect size estimates for a single SCD series. Just as with the individual effect size functions, `calc_ES()` accepts data in either the `A_data`, `B_data` format or the `condition`, `outcome` format. Here we use the `A_data`, `B_data` format:
A <- c(20, 20, 26, 25, 22, 23)
B <- c(28, 25, 24, 27, 30, 30, 29)
calc_ES(A_data = A, B_data = B, ES = c("NAP","PND","Tau-U"))
#> ES Est SE CI_lower CI_upper
#> 1 NAP 0.9166667 0.06900656 0.5973406 0.9860176
#> 2 PND 0.7142857 NA NA NA
#> 3 Tau-U 0.7380952 NA NA NA
Here is the same calculation in the `condition`, `outcome` format:
phase <- c(rep("A", length(A)), rep("B", length(B)))
outcome <- c(A, B)
calc_ES(condition = phase, outcome = outcome, baseline_phase = "A",
ES = c("NAP","PND","Tau-U"))
#> ES Est SE CI_lower CI_upper
#> 1 NAP 0.9166667 0.06900656 0.5973406 0.9860176
#> 2 PND 0.7142857 NA NA NA
#> 3 Tau-U 0.7380952 NA NA NA
To specify which effect size to calculate, use the `ES` argument, which can include any of the following metrics: `"LRRd"`, `"LRRi"`, `"LOR"`, `"LRM"`, `"SMD"`, `"NAP"`, `"PND"`, `"PEM"`, `"PAND"`, `"IRD"`, `"Tau"`, `"Tau_BC"`, or `"Tau-U"`.
calc_ES(A_data = A, B_data = B, ES = "SMD")
#> ES Est SE CI_lower CI_upper baseline_SD
#> 1 SMD 1.649932 0.6340935 0.4071314 2.892732 2.503331
To calculate multiple effect size estimates, provide a character vector of effect size names to the `ES` argument:
calc_ES(A_data = A, B_data = B, ES = c("NAP", "PND", "Tau-U"))
#> ES Est SE CI_lower CI_upper
#> 1 NAP 0.9166667 0.06900656 0.5973406 0.9860176
#> 2 PND 0.7142857 NA NA NA
#> 3 Tau-U 0.7380952 NA NA NA
Setting `ES = "all"` will return all available effect sizes. Note that the full set includes the percent of goal obtained (PoGO) effect size, which requires specifying a goal level of the behavior; without a `goal` argument, the call produces an error:
calc_ES(A_data = A, B_data = B, ES = "all")
#> Error in `map()`:
#> ℹ In index: 11.
#> Caused by error in `calc_PoGO()`:
#> ! argument "goal" is missing, with no default
Setting `ES = "NOM"` will return all of the non-overlap measures:
calc_ES(A_data = A, B_data = B, ES = "NOM")
#> ES Est SE CI_lower CI_upper
#> 1 NAP 0.9166667 0.06900656 0.5973406 0.9860176
#> 2 IRD 0.6904762 NA NA NA
#> 3 PAND 0.8461538 NA NA NA
#> 4 PND 0.7142857 NA NA NA
#> 5 PEM 1.0000000 NA NA NA
#> 6 Tau 0.8333333 0.13801311 0.1946812 0.9720352
#> 7 Tau-U 0.7380952 NA NA NA
#> 8 Tau-BC 0.2857143 0.35951593 -0.3260702 0.7180613
Setting `ES = "parametric"` will return all of the parametric effect sizes. Again, because PoGO is among the parametric effect sizes, a `goal` must be supplied to avoid an error:
calc_ES(A_data = A, B_data = B, ES = "parametric")
#> Error in `map()`:
#> ℹ In index: 6.
#> Caused by error in `calc_PoGO()`:
#> ! argument "goal" is missing, with no default
If the `ES` argument is omitted, `calc_ES()` will return LRRd, LRRi, SMD, and Tau by default:
calc_ES(A_data = A, B_data = B)
#> ES Est SE CI_lower CI_upper baseline_SD
#> 1 LRRd -0.1953962 0.05557723 -0.30432554 -0.08646679 NA
#> 2 LRRi 0.1953962 0.05557723 0.08646679 0.30432554 NA
#> 3 SMD 1.6499319 0.63409351 0.40713144 2.89273232 2.503331
#> 4 Tau 0.8333333 0.13801311 0.19468122 0.97203517 NA
All of the individual effect size functions have the further argument `improvement`, and several of them also have further optional arguments. Include these arguments in `calc_ES()` in order to pass them on to the individual effect size calculation functions. Any additional arguments included in `calc_ES()` will be used in the calculation of the effect sizes for which they are relevant, but will be ignored if they are not relevant. For example, the direction of improvement can be changed from the default `"increase"` to `"decrease"`:
calc_ES(A_data = A, B_data = B, ES = "NOM", improvement = "decrease")
#> ES Est SE CI_lower CI_upper
#> 1 NAP 0.08333333 0.06900656 0.01398242 0.4026594
#> 2 IRD 0.07142857 NA NA NA
#> 3 PAND 0.53846154 NA NA NA
#> 4 PND 0.00000000 NA NA NA
#> 5 PEM 0.00000000 NA NA NA
#> 6 Tau -0.83333333 0.13801311 -0.97203517 -0.1946812
#> 7 Tau-U -0.73809524 NA NA NA
#> 8 Tau-BC -0.28571429 0.35951593 -0.71806125 0.3260702
It is also possible to change the method for calculating the standard errors for the `NAP`, `Tau`, and `Tau_BC` functions, as well as the coverage of the confidence intervals. For example, to omit the confidence interval calculations for NAP, Tau, and Tau-BC, we can include the argument `confidence = NULL`:
calc_ES(A_data = A, B_data = B, ES = "NOM", improvement = "decrease", confidence = NULL)
#> ES Est SE
#> 1 NAP 0.08333333 0.06900656
#> 2 IRD 0.07142857 NA
#> 3 PAND 0.53846154 NA
#> 4 PND 0.00000000 NA
#> 5 PEM 0.00000000 NA
#> 6 Tau -0.83333333 0.13801311
#> 7 Tau-U -0.73809524 NA
#> 8 Tau-BC -0.28571429 0.35951593
For `SMD()`, there are several other inputs, such as `std_dev`, `bias_correct`, and `confidence`, which control how the effect size estimate is calculated, whether the Hedges' g bias correction for small sample sizes is applied, and the coverage of the confidence interval. The log response ratio and log odds ratio functions also include arguments for the outcome scale on which the input scores are measured and optional entries for session lengths and intervals. All of these additional options are discussed in more depth in the first section of this vignette.
Finally, `calc_ES()` includes an option to change the format of the output. The function defaults to `format = "long"`; setting `format = "wide"` will return all of the results as a single line, rather than one line per effect size:
calc_ES(A_data = A, B_data = B, ES = c("NAP","PND","SMD"))
#> ES Est SE CI_lower CI_upper baseline_SD
#> 1 NAP 0.9166667 0.06900656 0.5973406 0.9860176 NA
#> 2 PND 0.7142857 NA NA NA NA
#> 3 SMD 1.6499319 0.63409351 0.4071314 2.8927323 2.503331
calc_ES(A_data = A, B_data = B, ES = c("NAP","PND","SMD"), format = "wide")
#> NAP_Est NAP_SE NAP_CI_lower NAP_CI_upper PND_Est SMD_Est SMD_SE
#> 1 0.9166667 0.06900656 0.5973406 0.9860176 0.7142857 1.649932 0.6340935
#> SMD_CI_lower SMD_CI_upper SMD_baseline_SD
#> 1 0.4071314 2.892732 2.503331
Most single-case studies include multiple cases, and many also include multiple dependent variables measured on each case. Thus, it will often be of interest to calculate effect size estimates for multiple data series from a study, or even from multiple studies. The `batch_calc_ES()` function does exactly this, calculating any of the previously detailed effect sizes for each of several data series. Its syntax is a bit more involved than that of the previous functions, and so we provide several examples here. In what follows, we will assume that you are already comfortable using the `calc_ES()` function as well as the other individual effect size functions in the package.
Unlike with the other functions in the package, the input data for `batch_calc_ES()` must be organized in a data frame, with one row corresponding to each observation within a series and columns corresponding to different variables (e.g., outcome, phase, session number). One or more variables must be included that uniquely identify every data series. Let's look at two examples.
The `McKissick` dataset is drawn from McKissick, Hawkins, Lentz, Hailley, & McGuire (2010), a single-case design study of a group contingency intervention. The study used a multiple baseline design across three classrooms. The outcome data are event counts of disruptive behaviors observed at the classroom level.
data(McKissick)
Here are the first few rows of the data:
Case_pseudonym | Session_number | Condition | Outcome | Session_length | Procedure |
---|---|---|---|---|---|
Period 1 | 1 | A | 13.62 | 20 | count |
Period 1 | 2 | A | 12.57 | 20 | count |
Period 1 | 3 | A | 15.76 | 20 | count |
Period 1 | 4 | B | 5.97 | 20 | count |
Period 1 | 5 | B | 4.63 | 20 | count |
Period 1 | 6 | B | 5.82 | 20 | count |
Period 1 | 7 | B | 3.72 | 20 | count |
Period 1 | 8 | B | 8.07 | 20 | count |
Period 1 | 9 | B | 2.95 | 20 | count |
Period 1 | 10 | B | 11.86 | 20 | count |
The `Schmidt2007` dataset is drawn from Schmidt (2007). This dataset is somewhat more complicated. It has two outcomes for each participant, and the outcomes differ in direction of therapeutic improvement and measurement scale. The study used an ABAB design, replicated across three participants. Each series therefore has four phases: a baseline phase, a treatment phase, a return-to-baseline phase, and a second treatment phase.
data(Schmidt2007)
Here are the first few rows of the data:
Case_pseudonym | Behavior_type | Session_number | Outcome | Condition | Phase_num | Metric | Session_length | direction | n_Intervals |
---|---|---|---|---|---|---|---|---|---|
Faith | Disruptive Behavior | 1 | 22.944463 | A | 1 | count | 10 | decrease | NA |
Faith | Disruptive Behavior | 2 | 22.431292 | A | 1 | count | 10 | decrease | NA |
Faith | Disruptive Behavior | 3 | 27.785380 | A | 1 | count | 10 | decrease | NA |
Faith | Disruptive Behavior | 4 | 16.928954 | A | 1 | count | 10 | decrease | NA |
Faith | Disruptive Behavior | 5 | 21.838294 | A | 1 | count | 10 | decrease | NA |
Faith | Disruptive Behavior | 6 | 3.780363 | A | 1 | count | 10 | decrease | NA |
Faith | Disruptive Behavior | 7 | 18.137758 | A | 1 | count | 10 | decrease | NA |
Faith | Disruptive Behavior | 8 | 11.774433 | A | 1 | count | 10 | decrease | NA |
Faith | Disruptive Behavior | 9 | 22.083476 | A | 1 | count | 10 | decrease | NA |
Faith | Disruptive Behavior | 10 | 4.986945 | B | 1 | count | 10 | decrease | NA |
The Schmidt (2007) dataset contains many variables, but for now let’s focus on the following:
- `Case_pseudonym` uniquely identifies each of the three participants
- `Behavior_type` specifies whether the outcome is disruptive behavior or on-task behavior
- `Session_number` specifies the order of the sessions within each data series
- `Outcome` contains the dependent variable measurements
- `Condition` specifies whether the outcome is in a baseline ("A") condition or a treatment ("B") condition
- `Phase_num` specifies whether the session is in the first or second pair of phases in the design
- `Metric` specifies whether the dependent variable is percentage or count data
- `Session_length` specifies the length of the observation session
- `direction` specifies the direction of therapeutic improvement
- `n_Intervals` specifies the number of intervals per session for the dependent variable measured using partial interval recording

### `batch_calc_ES()`
Here are the arguments for the batch calculator function:
args(batch_calc_ES)
#> function (dat, grouping, condition, outcome, aggregate = NULL,
#> weighting = "equal", session_number = NULL, baseline_phase = NULL,
#> intervention_phase = NULL, ES = c("LRRd", "LRRi", "SMD",
#> "Tau"), improvement = "increase", scale = "other", intervals = NA,
#> observation_length = NA, goal = NULL, confidence = 0.95,
#> format = "long", warn = TRUE, ...)
#> NULL
This function has a lot of arguments, but many of them are optional and only used for certain effect size metrics (these options are described in more detail in previous sections). For the moment, let’s focus on the first few arguments, which are all we need to get going.
The argument `dat` should be a data frame containing all of the observations for all of the data series of interest.

The `grouping` argument should specify the set of variables that uniquely identify each series. For a single study consisting of several series, like the McKissick dataset, this might simply be the name of a variable identifying the participant pseudonym. Specify grouping variables using bare variable names (i.e., without quotes).

The `condition` argument should be the variable that identifies the treatment condition for each observation in the series, again specified as a bare variable name. The values for the baseline and treatment phases should be uniform across all of the series within a dataset. That is, if some series are coded as "0" for baseline and "1" for treatment, whereas other series have "A" as baseline and "B" as treatment, you will first need to clean your data and standardize the coding.
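For instance, here is a minimal sketch of standardizing the phase coding in a hypothetical data frame `dat` that mixes 0/1 coding with "A"/"B" coding:

```r
# Hypothetical clean-up: recode 0/1 phase indicators to "A"/"B"
dat$Condition[dat$Condition == "0"] <- "A"
dat$Condition[dat$Condition == "1"] <- "B"
```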
The `outcome` argument should be the variable containing the outcomes of interest, also specified as a bare variable name.
The `ES` argument allows you to specify which effect sizes to calculate. By default, the batch calculator generates estimates of LRRd, LRRi, SMD, and Tau. However, you will probably want to specify your own effect sizes. Just as in `calc_ES()`, specify your desired effect sizes as a character vector, with the individual options of `"LRRd"`, `"LRRi"`, `"LOR"`, `"LRM"`, `"SMD"`, `"NAP"`, `"PND"`, `"PEM"`, `"PAND"`, `"IRD"`, `"Tau"`, `"Tau_BC"`, or `"Tau-U"`, in addition to `"all"` for all effect sizes, `"NOM"` for all non-overlap measures, and `"parametric"` for all parametric effect sizes.
All of the remaining arguments are truly optional, and we’ll introduce them as we go along.
Let's try applying the function to the McKissick data. Remember that these data contain an identifier for each case (`Case_pseudonym`), a variable (`Condition`) identifying the baseline ("A") and treatment ("B") phases, and an outcome variable containing the values of the outcomes. The outcomes are disruptive behaviors, so a decrease in the behavior corresponds to therapeutic improvement. Just as with the `calc_ES()` function, we'll need to specify the direction of therapeutic improvement using the `improvement` argument. In this example, we will calculate estimates of NAP and PND, to keep things simple:
mckissick_ES <- batch_calc_ES(dat = McKissick,
grouping = Case_pseudonym,
condition = Condition,
outcome = Outcome,
improvement = "decrease",
ES = c("NAP", "PND"))
Note that all of the inputs related to variable names are bare (i.e., no quotes). Let’s take a look at a table of the output.
Case_pseudonym | ES | Est | SE | CI_lower | CI_upper |
---|---|---|---|---|---|
Period 1 | NAP | 1.0000000 | 0.0440101 | 1.0000000 | 1.0000000 |
Period 1 | PND | 1.0000000 | NA | NA | NA |
Period 2 | NAP | 0.7714286 | 0.1538619 | 0.4305321 | 0.9322444 |
Period 2 | PND | 0.4285714 | NA | NA | NA |
Period 3 | NAP | 0.9166667 | 0.0833333 | 0.5676324 | 0.9874545 |
Period 3 | PND | 0.7500000 | NA | NA | NA |
The output will always start with one or more columns corresponding to each unique combination of values of the `grouping` variables, followed by a column describing the effect size reported in each row. The column called `Est` contains the effect size estimates. If any of the requested effect sizes have standard errors and confidence intervals, there will also be columns corresponding to the standard error and the lower and upper limits of the confidence interval. Here, PND has `NA` for each of those because it does not have a known standard error or confidence interval.
Now let's look at an example using the Schmidt data. Remember that these data contain a pseudonym that uniquely identifies each of the three participants (`Case_pseudonym`), as well as a variable that specifies whether the outcome is disruptive behavior or on-task behavior (`Behavior_type`). Furthermore, these data come from a treatment reversal design with two pairs of AB phases for each combination of case and behavior type. Each pair of AB phases is labeled in the variable `Phase_num`. We're going to want an effect size for each combination of pseudonym, behavior, and phase pair. The data also have an outcome variable (`Outcome`) and a variable identifying whether each observation was in the baseline ("A") or treatment ("B") phase (`Condition`). Finally, the two different behavior types have different directions of therapeutic improvement, so there is a variable called `direction` that specifies `"increase"` for on-task behavior and `"decrease"` for disruptive behavior.
Here’s an example of how to calculate NAP and LRRi for these data:
schmidt_ES <- batch_calc_ES(
dat = Schmidt2007,
grouping = c(Case_pseudonym, Behavior_type, Phase_num),
condition = Condition,
outcome = Outcome,
improvement = direction,
ES = c("NAP", "LRRi")
)
The syntax is similar to the example with the McKissick dataset, except for two things. First, we've provided a vector of variable names to `grouping` that together identify each series for which we want an effect size. Second, instead of providing a uniform direction of improvement to the `improvement` argument, we've provided a variable name, `direction`, which accounts for the fact that the two behavior types have different directions of therapeutic improvement. Here is a table of the output:
Case_pseudonym | Behavior_type | Phase_num | ES | Est | SE | CI_lower | CI_upper |
---|---|---|---|---|---|---|---|
Albert | Disruptive Behavior | 1 | NAP | 1.000 | 0.007 | 1.000 | 1.000 |
Albert | Disruptive Behavior | 1 | LRRi | 1.749 | 0.210 | 1.338 | 2.160 |
Albert | Disruptive Behavior | 2 | NAP | 0.861 | 0.144 | 0.443 | 0.977 |
Albert | Disruptive Behavior | 2 | LRRi | 0.947 | 0.538 | -0.108 | 2.001 |
Albert | On Task Behavior | 1 | NAP | 0.735 | 0.145 | 0.484 | 0.885 |
Albert | On Task Behavior | 1 | LRRi | 0.421 | 0.162 | 0.103 | 0.739 |
Albert | On Task Behavior | 2 | NAP | 0.444 | 0.294 | 0.153 | 0.783 |
Albert | On Task Behavior | 2 | LRRi | 0.052 | 0.117 | -0.177 | 0.282 |
Faith | Disruptive Behavior | 1 | NAP | 0.958 | 0.042 | 0.704 | 0.995 |
Faith | Disruptive Behavior | 1 | LRRi | 1.606 | 0.324 | 0.972 | 2.241 |
Faith | Disruptive Behavior | 2 | NAP | 1.000 | 0.063 | 1.000 | 1.000 |
Faith | Disruptive Behavior | 2 | LRRi | 1.651 | 0.376 | 0.914 | 2.388 |
Faith | On Task Behavior | 1 | NAP | 0.771 | 0.127 | 0.488 | 0.916 |
Faith | On Task Behavior | 1 | LRRi | 0.323 | 0.129 | 0.069 | 0.576 |
Faith | On Task Behavior | 2 | NAP | 0.933 | 0.067 | 0.495 | 0.994 |
Faith | On Task Behavior | 2 | LRRi | 0.241 | 0.201 | -0.152 | 0.635 |
Lilly | Disruptive Behavior | 1 | NAP | 0.777 | 0.147 | 0.521 | 0.912 |
Lilly | Disruptive Behavior | 1 | LRRi | 1.168 | 0.227 | 0.724 | 1.613 |
Lilly | Disruptive Behavior | 2 | NAP | 1.000 | 0.063 | 1.000 | 1.000 |
Lilly | Disruptive Behavior | 2 | LRRi | 1.427 | 0.310 | 0.819 | 2.035 |
Lilly | On Task Behavior | 1 | NAP | 0.580 | 0.135 | 0.340 | 0.784 |
Lilly | On Task Behavior | 1 | LRRi | 0.015 | 0.114 | -0.209 | 0.239 |
Lilly | On Task Behavior | 2 | NAP | 0.867 | 0.133 | 0.433 | 0.980 |
Lilly | On Task Behavior | 2 | LRRi | 0.604 | 0.672 | -0.712 | 1.921 |
The first three columns contain the unique values of the variables supplied to `grouping`, followed by the effect size information.
The Schmidt study used an ABAB design, and as a consequence we end up with not one but two effect size estimates for each case and each outcome. Under some circumstances, it may make sense to aggregate (that is, average together) the effect size estimates from the first and second AB pairs for each case. Doing so simplifies the structure of the resulting effect size dataset, so that there is just one effect size estimate per case per outcome. The `batch_calc_ES()` function includes an optional argument called `aggregate` that allows you to aggregate effect size estimates across one or more variables. To use it, specify the name of one or more variables across which to aggregate. These variables will then be treated as grouping variables for purposes of effect size calculation (just like those specified in the `grouping` argument), but the results will then be averaged over the unique values of the aggregation variables.
Here's an example of how to use `aggregate` with the Schmidt dataset (for simplicity, we will calculate only the NAP effect size). Rather than specifying `Phase_num` as a grouping variable, we specify it as an `aggregate` variable:
schmidt_ES_agg <-
batch_calc_ES(
dat = Schmidt2007,
grouping = c(Case_pseudonym, Behavior_type),
aggregate = Phase_num,
condition = Condition,
outcome = Outcome,
improvement = direction,
ES = "NAP"
)
Here is a table of the output, in which the effect size estimates from the two values of `Phase_num` have been averaged together:
Case_pseudonym | Behavior_type | ES | Est | SE | CI_lower | CI_upper |
---|---|---|---|---|---|---|
Albert | Disruptive Behavior | NAP | 0.9305556 | 0.0719780 | 0.7894812 | 1.0716299 |
Albert | On Task Behavior | NAP | 0.5897436 | 0.1639183 | 0.2684697 | 0.9110175 |
Faith | Disruptive Behavior | NAP | 0.9791667 | 0.0379601 | 0.9047662 | 1.0535672 |
Faith | On Task Behavior | NAP | 0.8520833 | 0.0717033 | 0.7115474 | 0.9926193 |
Lilly | Disruptive Behavior | NAP | 0.8883929 | 0.0799921 | 0.7316112 | 1.0451745 |
Lilly | On Task Behavior | NAP | 0.7235119 | 0.0950093 | 0.5372970 | 0.9097268 |
The package allows for several different weighting schemes:

- `"equal"` (the default) or `"Equal"`: Equal weighting takes the simple arithmetic average of the effect size estimates.
- `"1/V"`: Inverse variance weighting takes a weighted average of the effect size estimates, with weights that are inversely proportional to the sampling variances of the estimates (i.e., the square of the standard error). This weighting scheme is the most efficient approach if the components being averaged together are all estimating the same underlying parameter. However, inverse variance weighting will not work for effect size estimates that do not have a known standard error, such as PND or PAND.
- `"nA"` or `"n_A"`: uses the number of baseline phase observations as the weights for aggregating.
- `"nB"` or `"n_B"`: uses the number of treatment phase observations as the weights for aggregating.
- `"nAnB"`, `"nA*nB"`, `"nA * nB"`, `"n_A*n_B"`, or `"n_A * n_B"`: uses the product of the numbers of baseline and treatment phase observations as the weights for aggregating.
- `"1/nA+1/nB"`, `"1/nA + 1/nB"`, `"1/n_A+1/n_B"`, or `"1/n_A + 1/n_B"`: uses the sum of the inverse numbers of baseline and treatment phase observations as the weights for aggregating.

Here is an example of using equal weighting for calculating aggregated effect sizes across pairs of AB phases:
schmidt_ES_agg <-
batch_calc_ES(
dat = Schmidt2007,
grouping = c(Case_pseudonym, Behavior_type),
aggregate = Phase_num,
weighting = "equal",
condition = Condition,
outcome = Outcome,
improvement = direction,
ES = "NAP"
)
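With equal weighting, each aggregated estimate is simply the arithmetic mean of the two phase-pair estimates. More generally, for phase-pair estimates $\widehat{ES}_j$ with weights $w_j$, the aggregated estimate and standard error take the usual weighted-average form (treating the phase-pair estimates as independent):

$$\widehat{ES} = \frac{\sum_j w_j \widehat{ES}_j}{\sum_j w_j}, \qquad SE = \frac{\sqrt{\sum_j w_j^2 SE_j^2}}{\sum_j w_j}.$$

You can verify this against the output below: for Albert's disruptive behavior, the aggregated NAP of 0.931 is the average of the two phase-pair estimates reported earlier (1.000 and 0.861).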
Case_pseudonym | Behavior_type | ES | Est | SE | CI_lower | CI_upper |
---|---|---|---|---|---|---|
Albert | Disruptive Behavior | NAP | 0.931 | 0.072 | 0.789 | 1.072 |
Albert | On Task Behavior | NAP | 0.590 | 0.164 | 0.268 | 0.911 |
Faith | Disruptive Behavior | NAP | 0.979 | 0.038 | 0.905 | 1.054 |
Faith | On Task Behavior | NAP | 0.852 | 0.072 | 0.712 | 0.993 |
Lilly | Disruptive Behavior | NAP | 0.888 | 0.080 | 0.732 | 1.045 |
Lilly | On Task Behavior | NAP | 0.724 | 0.095 | 0.537 | 0.910 |
By default, the batch calculator assumes the outcome scale is `"other"`. Under this default assumption, the log odds ratio and the log response ratio will not be calculated if a phase mean is equal to zero. Just as with `calc_ES()`, you may need to specify the outcome scales, as well as details like the length of the observation session or the number of intervals in each observation session, in order to calculate parametric effect sizes. If these values are the same for all observations in the dataset, you can specify them as further arguments to `batch_calc_ES()`. Here is an example using the McKissick dataset, where we specify that all of the outcomes are measured as counts during 20-minute observation periods:
mckissick_ES <- batch_calc_ES(dat = McKissick,
grouping = Case_pseudonym,
condition = Condition,
outcome = Outcome,
improvement = "decrease",
scale = "count",
observation_length = 20,
ES = "parametric")
#> Error in batch_calc_ES(dat = McKissick, grouping = Case_pseudonym, condition = Condition, : You must provide the goal level of the behavior to calculate the PoGO effect size.
Note that this call produces an error: the `"parametric"` option includes the percent of goal obtained (PoGO) effect size, which requires specifying the goal level of the behavior via the `goal` argument. Requesting a specific set of parametric effect sizes (or supplying a `goal` value) avoids the error.
When the calculation succeeds, the output once again includes a column specifying the case to which the effect sizes correspond, as well as a column specifying the effect size metric. The log odds ratio returns all `NA`s, because the log odds ratio can't be estimated for count outcomes.
Let's suppose that we are interested in estimating effect sizes using data where the measurement scale, as well as perhaps measurement details like the observation length or the number of intervals, varies depending on the data series. The Schmidt data are one example of this. Remember that the Schmidt data have a variable specifying the measurement scale of the outcome (`Metric`), which is `"percentage"` for desirable behavior and `"count"` for disruptive behaviors. They also have a variable that specifies the length of the observation session (`Session_length`) and a variable that specifies the number of intervals per session for the dependent variable measured using partial interval recording (`n_Intervals`). The value of `Session_length` is `NA` for the percentage outcomes, and the value of `n_Intervals` is `NA` for the count outcomes, because those details are not relevant for those outcome measurement scales. Let's try it out:
schmidt_ES <- batch_calc_ES(dat = Schmidt2007,
grouping = c(Case_pseudonym, Behavior_type, Phase_num),
condition = Condition,
outcome = Outcome,
improvement = direction,
scale = Metric,
observation_length = Session_length,
intervals = n_Intervals,
ES = c("parametric"))
#> Error in batch_calc_ES(dat = Schmidt2007, grouping = c(Case_pseudonym, : You must provide the goal level of the behavior to calculate the PoGO effect size.
Unlike the previous example, where we specified uniform values for `scale` and `observation_length`, we now specify variable names for `scale`, `observation_length`, and `intervals`. Note that, once again, requesting all of the parametric effect sizes produces an error about the PoGO effect size, which requires a `goal` level of the behavior.
When the LOR is calculated for these data, it is `NA` for the outcomes that are disruptive behaviors, because those are counts, for which the LOR isn't an appropriate effect size. For the percentage of on-task behavior, however, the LOR can be estimated.
We can also request the effect sizes in a wide format:
mckissick_wide_ES <-
batch_calc_ES(
dat = McKissick,
grouping = Case_pseudonym,
condition = Condition,
outcome = Outcome,
improvement = "decrease",
ES = c("NAP", "PND"),
format = "wide"
)
The default for the batch calculator is `format = "long"`, but if you want each case on a single line, specifying `format = "wide"` will provide the output that way, just like `calc_ES()`. Here's the output:
Case_pseudonym | NAP_Est | NAP_SE | NAP_CI_lower | NAP_CI_upper | PND_Est |
---|---|---|---|---|---|
Period 1 | 1.0000000 | 0.0440101 | 1.0000000 | 1.0000000 | 1.0000000 |
Period 2 | 0.7714286 | 0.1538619 | 0.4305321 | 0.9322444 | 0.4285714 |
Period 3 | 0.9166667 | 0.0833333 | 0.5676324 | 0.9874545 | 0.7500000 |
In this case, there are columns for the NAP estimate, NAP's standard error, and the lower and upper bounds of its confidence interval. PND only has a column for the estimate, but remember that the values for its SE and confidence interval bounds were all `NA` in the long format. Columns that would have all `NA` values are removed when specifying `format = "wide"`.
Remember how, when we asked for the LOR for counts, the calculator gave us warning messages? If you're asking for the LOR and some of your outcomes are on a scale other than percentage or proportion, you can specify the argument `warn = FALSE` (by default it is set to `TRUE`) to suppress the warning messages. You will still get `NA` for any series with an inappropriate outcome scale:
batch_calc_ES(dat = McKissick,
grouping = Case_pseudonym,
condition = Condition,
outcome = Outcome,
improvement = "decrease",
scale = "count",
observation_length = 20,
ES = c("LRRi","LOR"),
warn = FALSE)
#> # A tibble: 6 × 6
#> Case_pseudonym ES Est SE CI_lower CI_upper
#> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 Period 1 LRRi 0.807 0.198 0.419 1.19
#> 2 Period 1 LOR NA NA NA NA
#> 3 Period 2 LRRi 0.610 0.349 -0.0736 1.29
#> 4 Period 2 LOR NA NA NA NA
#> 5 Period 3 LRRi 0.748 0.353 0.0550 1.44
#> 6 Period 3 LOR NA NA NA NA
The `...` argument allows you to specify arguments particular to an individual effect size function, such as `std_dev` for the `SMD()` function. For instance, compare the results of calculating a pooled SMD versus the default, baseline-phase-only SMD:
batch_calc_ES(dat = McKissick,
grouping = Case_pseudonym,
condition = Condition,
outcome = Outcome,
ES = "SMD",
improvement = "decrease")
#> # A tibble: 3 × 7
#> Case_pseudonym ES Est SE CI_lower CI_upper baseline_SD
#> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 Period 1 SMD 2.75 0.943 0.906 4.60 1.63
#> 2 Period 2 SMD 1.21 0.650 -0.0633 2.48 5.58
#> 3 Period 3 SMD 2.89 1.08 0.763 5.01 2.33
batch_calc_ES(dat = McKissick,
grouping = Case_pseudonym,
condition = Condition,
outcome = Outcome,
ES = "SMD",
improvement = "decrease",
std_dev = "pool")
#> # A tibble: 3 × 7
#> Case_pseudonym ES Est SE CI_lower CI_upper pooled_SD
#> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 Period 1 SMD 2.58 0.853 0.909 4.25 2.74
#> 2 Period 2 SMD 1.12 0.588 -0.0345 2.27 6.97
#> 3 Period 3 SMD 2.34 0.727 0.920 3.77 2.95
Arguments common to several functions will be used when calculating any of the effect sizes for which they are relevant. For example, the `bias_correct` argument applies to all of the parametric effect sizes (although note that, as above, requesting all of the parametric effect sizes triggers an error unless a `goal` is supplied for PoGO):
batch_calc_ES(dat = McKissick,
grouping = Case_pseudonym,
condition = Condition,
outcome = Outcome,
ES = "parametric",
improvement = "decrease",
scale = Procedure,
observation_length = Session_length,
bias_correct = FALSE,
warn = FALSE)
#> Error in batch_calc_ES(dat = McKissick, grouping = Case_pseudonym, condition = Condition, : You must provide the goal level of the behavior to calculate the PoGO effect size.
The `bias_correct` argument cannot be specified differently for different effect size functions. If you want to obtain bias-corrected values for the LRRd effect size but not for the SMD effect size, you will need to call `batch_calc_ES()` separately for the two effect sizes.
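For example, here is a sketch of obtaining a bias-corrected LRRd and an uncorrected SMD through two separate calls (output omitted):

```r
# Bias-corrected LRRd estimates
lrr_res <- batch_calc_ES(dat = McKissick,
                         grouping = Case_pseudonym,
                         condition = Condition, outcome = Outcome,
                         improvement = "decrease", scale = "count",
                         observation_length = 20,
                         ES = "LRRd", bias_correct = TRUE)

# Uncorrected SMD estimates
smd_res <- batch_calc_ES(dat = McKissick,
                         grouping = Case_pseudonym,
                         condition = Condition, outcome = Outcome,
                         improvement = "decrease",
                         ES = "SMD", bias_correct = FALSE)
```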
The `session_number` argument orders the data within each series by the specified variable. This argument matters only if baseline-corrected Tau or Tau-U is being calculated. For these effect sizes, the ordering of the baseline phase is important because they involve adjustments for trend in the baseline phase. The argument is irrelevant for all of the other effect sizes.
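For instance, here is a minimal sketch of requesting baseline-corrected Tau with an explicit session ordering (output omitted):

```r
batch_calc_ES(dat = Schmidt2007,
              grouping = c(Case_pseudonym, Behavior_type, Phase_num),
              session_number = Session_number,
              condition = Condition,
              outcome = Outcome,
              improvement = direction,
              ES = "Tau_BC")
```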
The `baseline_phase` argument works the same way as in the `calc_ES()` function. If nothing is specified, the first phase in each series will be treated as the baseline phase. However, if the baseline phase is not always the first phase in each series (as in an SCD with four cases that uses a cross-over treatment reversal design, where two of the cases follow an ABAB design and the other two cases follow a BABA design), you will need to specify the `baseline_phase` in the same way as in the `calc_ES()` function.
The `confidence` argument controls the confidence intervals in the same way as in all the other functions. To skip calculating confidence intervals, specify `confidence = NULL`:
batch_calc_ES(dat = McKissick,
grouping = Case_pseudonym,
condition = Condition,
outcome = Outcome,
ES = "parametric",
improvement = "decrease",
scale = Procedure,
observation_length = Session_length,
confidence = NULL,
warn = FALSE)
#> Error in batch_calc_ES(dat = McKissick, grouping = Case_pseudonym, condition = Condition, : You must provide the goal level of the behavior to calculate the PoGO effect size.