---
title: "replicateBE"
subtitle: "Comparative BA-calculation for Average Bioequivalence with Expanding Limits (ABEL)"
lang: "en"
output:
  rmarkdown::html_vignette:
    css: vignette.css
    toc: true
    toc_depth: 2
vignette: >
  %\VignetteIndexEntry{replicateBE}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#"
)
```
# Introduction{#intro}
The package provides datasets (internal `.rda` and in CSV-format in `/extdata/`) which support users in a [black-box](https://en.wikipedia.org/wiki/Black-box_testing) performance qualification (PQ) of their software installations.^[Schütz H, Tomashevskiy M, Labes D, Shitova A, González-de la Parra M, Fuglsang A. *Reference Datasets for Studies in a Replicate Design Intended for Average Bioequivalence with Expanding Limits.* AAPS J. 2020; 22(2): Article 44. [doi:10.1208/s12248-020-0427-6](https://doi.org/10.1208/s12248-020-0427-6).] Users can analyze data imported from CSV- and Excel-files.
The methods^[European Medicines Agency. *Annex I.* EMA/582648/2016. London. 21 September 2016. [Online](https://www.ema.europa.eu/en/documents/other/31-annex-i-statistical-analysis-methods-compatible-ema-bioequivalence-guideline_en.pdf)] given by the European Medicines Agency (EMA) for reference-scaling according to the Guideline on the Investigation of Bioequivalence^[European Medicines Agency. Committee for Medicinal Products for Human Use. *Guideline on the Investigation of Bioequivalence.* CPMP/EWP/QWP/1401/98 Rev. 1/Corr \*\*. London. 20 January 2010 [Online](https://www.ema.europa.eu/en/documents/scientific-guideline/guideline-investigation-bioequivalence-rev1_en.pdf).] are implemented. Health Canada’s approach^[Health Canada. *Guidance Document. Conduct and Analysis of Comparative Bioavailability Studies.* Ottawa. 2018/06/08. [Online](https://www.canada.ca/content/dam/hc-sc/documents/services/drugs-health-products/drug-products/applications-submissions/guidance-documents/bioavailability-bioequivalence/conduct-analysis-comparative.pdf).] -- requiring a mixed-effects model -- is approximated by intra-subject contrasts. Direct widening of the limits for *C*~max~ (as a special case of ABEL) recommended by the Gulf Cooperation Council^[Executive Board of the Health Ministers’ Council for GCC States. *The GCC Guidelines for Bioequivalence.* Version 3.0. May 2021. [Online](https://www.sfda.gov.sa/sites/default/files/2021-10/GCC_Guidelines_Bioequivalence.pdf).] (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, the United Arab Emirates) is supported as well.
Potential influence of outliers on the variability of the reference can be assessed by box plots of studentized and standardized residuals as suggested at a joint EGA/EMA symposium.^[European Generic Medicines Association. *Revised EMA Bioequivalence Guideline. Questions & Answers.* 3^rd^ EGA Symposium on Bioequivalence. London. 1 June 2010. [Online](https://www.medicinesforeurope.com/wp-content/uploads/2016/03/EGA_BEQ_QA_WEB_QA_1_32.pdf).]
# Functions{#methods}
## `method.A()`{#methodA}
A linear model of log-transformed PK responses and effects\
_sequence_, _subject_(_sequence_), _period_, _treatment_\
where all effects are fixed (*i.e.*, evaluated by an ANOVA). Estimated via function `lm()` of package `stats`.
```{r eval = FALSE}
modA <- lm(log(PK) ~ sequence + subject %in% sequence + period + treatment,
           data = data)
```
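For example, one of the internal reference datasets can be evaluated with all other arguments at their defaults (a minimal sketch; further arguments are shown in later sections):
```{r eval = FALSE}
# Minimal call: evaluate the internal reference dataset rds01 by Method A
method.A(data = rds01)
```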
## `method.B()`{#methodB}
A linear model of log-transformed PK responses and effects\
_sequence_, _subject_(_sequence_), _period_, _treatment_\
where _subject_(_sequence_) is a random effect and all others are fixed.\
Three options are provided (a usage sketch follows this list):
(@) Estimated via function `lmer()` of package `lmerTest`.
```{r show_mod1, eval = FALSE}
modB <- lmer(log(PK) ~ sequence + period + treatment + (1|subject),
             data = data)
```
* Employs Satterthwaite’s approximation^[Satterthwaite FE. *An Approximate Distribution of Estimates of Variance Components.* Biometr Bull. 1946; 2(6): 110--4. [doi:10.2307/3002019](https://doi.org/10.2307/3002019).] of the degrees of freedom (`method.B(..., option = 1)`), which is equivalent to SAS’ `DDFM=SATTERTHWAITE`, Phoenix WinNonlin’s `Degrees of Freedom Satterthwaite`, and Stata’s `dfm=Satterthwaite`.
* Note that this is the only available approximation in SPSS.
(@) Estimated via function `lme()` of package `nlme`.
```{r show_mod2, eval = FALSE}
modB <- lme(log(PK) ~ sequence + period + treatment, random = ~1|subject,
            data = data)
```
* Employs degrees of freedom equivalent to SAS’ `DDFM=CONTAIN` (implicitly preferred by the EMA), Phoenix WinNonlin’s `Degrees of Freedom Residual`, STATISTICA’s `GLM containment`, and Stata’s `dfm=anova`.
* To comply with the EMA’s Q&A document, `method.B(..., option = 2)` is the default (*i.e.*, if the argument `option` is missing).
(@) Estimated via function `lmer()` of package `lmerTest`.
```{r show_mod3, eval = FALSE}
modB <- lmer(log(PK) ~ sequence + period + treatment + (1|subject),
             data = data)
```
* Employs the Kenward-Roger approximation^[Kenward MG, Roger JH. *Small Sample Inference for Fixed Effects from Restricted Maximum Likelihood.* Biometrics. 1997; 53(3): 983--97. [doi:10.2307/2533558](https://doi.org/10.2307/2533558).] of the degrees of freedom (`method.B(..., option = 3)`), which is equivalent to Stata’s `dfm=Kenward Roger (EIM)` and SAS’ `DDFM=KENWARDROGER(FIRSTORDER)`, *i.e.*, based on the *expected* information matrix.
* Note that SAS with `DDFM=KENWARDROGER` and JMP calculate Satterthwaite’s [*sic*] degrees of freedom and apply the Kackar-Harville correction,^[Kackar RN, Harville DA. *Approximations for Standard Errors of Estimators of Fixed and Random Effects in Mixed Linear Models.* J Am Stat Assoc. 1984; 79(388): 853--62. [doi:10.1080/01621459.1984.10477102](https://doi.org/10.1080/01621459.1984.10477102).] *i.e.*, based on the *observed* information matrix.
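A usage sketch of the three options (one of the internal reference datasets; all other arguments at their defaults):
```{r eval = FALSE}
# Sketch: the three approximations of the degrees of freedom in Method B
method.B(data = rds01, option = 1) # Satterthwaite
method.B(data = rds01)             # default option = 2 (containment)
method.B(data = rds01, option = 3) # Kenward-Roger
```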
## `ABE()`{#ABE}
Average Bioequivalence, where the model is identical to the one of [`method.A()`](#methodA). By default the conventional acceptance range of 80.00 -- 125.00\% is used. Tighter limits (90.00 -- 111.11\%) for narrow therapeutic index drugs (EMA and others) or wider limits (75.00 -- 133.33\%) for highly variable *C*~max~ (South Africa^[Medicines Control Council. Registration of Medicines. *Biostudies.* Pretoria. June 2015. [Online](https://www.sahpra.org.za/wp-content/uploads/2020/01/61de452d2.06_Biostudies_Jun15_v6.pdf).], Kazakhstan^[Shohin LE, Rozhdestvenkiy DA, Medvedev VYu, Komarow TN, Grebenkin DYu. *Russia, Belarus & Kazakhstan.* In: Kanfer I, editor. *Bioequivalence Requirements in Various Global Jurisdictions.* Cham: Springer; 2017. p. 223.]) can be specified by the arguments `theta1` (lower limit) and/or `theta2` (upper limit).
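A sketch of both variants (assuming -- as in package `PowerTOST` -- that `theta1` and `theta2` are given as ratios, not percentages):
```{r eval = FALSE}
# Conventional acceptance range 80.00 - 125.00%
ABE(data = rds01)
# Tighter limits 90.00 - 111.11% for narrow therapeutic index drugs
ABE(data = rds01, theta1 = 0.90, theta2 = 1/0.90)
```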
# Hypotheses{#hypotheses}
The hypotheses in the confidence interval inclusion approach are $$\small{H_{0}: \frac{\mu_\textrm{T}}{\mu_\textrm{R}}\ni \left\{L,U\right\}\:vs\:H_{1}:L<\frac{\mu_\textrm{T}}{\mu_\textrm{R}}<U}$$ where $\small{\left\{L,U\right\}}$ are the lower and upper limits of the acceptance range. In [ABE](#ABE) the limits are fixed, whereas in ABEL they can be [*expanded*](#BElim) based on the within-subject variability of the reference treatment.
# Tested designs{#designs}
Details about the reference datasets and their designs:
```{r help, eval = FALSE}
help("data", package = "replicateBE")
?replicateBE::data
```
## Four period (full) replicates{#per4_full}
Both the test and the reference treatments are administered twice.
### Two sequences{#per4_full_2}
`TRTR | RTRT`
`TRRT | RTTR`
`TTRR | RRTT`
### Four sequences{#per4_full_4}
`TRTR | RTRT | TRRT | RTTR`
`TRRT | RTTR | TTRR | RRTT`
## Three period (full) replicates{#per3_full}
The test treatment is administered twice to ½ of the subjects and the reference treatment twice to the respective other ½ of the subjects.
`TRT | RTR`
`TRR | RTT`
## Two period (full) replicate{#per2_full}
The test and reference treatments are administered once to ½ of the subjects (for the estimation of the CI), *i.e.*, the first group of subjects follows a conventional 2×2×2 trial. In the second group the test and reference treatments are administered twice to ¼ of the subjects, respectively (for the estimation of *CV*~wT~ and *CV*~wR~).
`TR | RT | TT | RR`
Although supported, Balaam’s design^[Balaam LN. *A Two-Period Design with t^2^ Experimental Units.* Biometrics. 1968; 24(1): 61–73. [doi:10.2307/2528460](https://doi.org/10.2307/2528460).] is _not recommended_ due to its poor power characteristics.
## Three period (partial) replicates{#per3_part}
The test treatment is administered once and the reference treatment twice.
`TRR | RTR | RRT`
`TRR | RTR`
The latter is the so-called extra-reference design^[Chow SC, Shao J, Wang H. *Individual bioequivalence testing under 2×3 designs.* Stat Med. 2002; 21(5): 629--48. [doi:10.1002/sim.1056](https://doi.org/10.1002/sim.1056).] which is _not recommended_ since it is biased in the presence of period effects.
# Data structure{#data_struct}
Columns must have the headers `subject`, `period`, `sequence`, `treatment`, `PK`, and/or `logPK`.^[Napierian logarithm (base _e_). The decadic logarithm (base 10) is not supported.] Any order of columns is acceptable. Uppercase and mixed case headers will be internally converted to lowercase headers.
## Format{#fmt}
| Variable | Format |
|--|----------|
| `subject` | Integers or any combination of alphanumeric characters (`A-Z`, `a-z`, `-`, `_`, `#`, `0-9`) |
| `period` | Integers |
| `sequence` | The randomized sequence in literal form (*e.g.*, `TRTR`); numbers or sequences not listed in the [tested designs](#designs) are not accepted (*e.g.*, **`ABAB`**). |
| `treatment` | The Test treatment must be coded **`T`** and the Reference **`R`** (both uppercase). |
| `PK` | Real positive numbers of PK responses. |
| `logPK` | Real numbers of already log~e~-transformed PK responses (optional and rarely needed). |
Relevant data are used for the estimation of *CV*~wR~ (and *CV*~wT~ in full replicate designs) and BE, *i.e.*, the datasets might be different (see the [example below](#setexpl)). It is good practice to state that in the Statistical Analysis Plan.
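A minimal sketch of the required column layout (two hypothetical subjects of a `TRTR | RTRT` study; the values are invented and serve only to illustrate the structure):
```{r eval = FALSE}
# Hypothetical example of the required column layout
data <- data.frame(subject   = rep(1:2, each = 4),
                   period    = rep(1:4, 2),
                   sequence  = rep(c("TRTR", "RTRT"), each = 4),
                   treatment = c("T", "R", "T", "R", "R", "T", "R", "T"),
                   PK        = c(110, 105, 115, 100, 95, 102, 98, 104))
```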
## Incomplete data{#incomplete}
### Estimation of _CV_~w~{#setCW}
If a subject drops out from the study in a higher period, data of repeated administrations will still be used for the estimation of *CV*~w~, although data of the other treatment might be missing. Examples for the estimation of *CV*~wR~ (missing values are denoted by `·`):\
`RTRT` | `RTR·`\
`TRRT` | `TRR·`\
`RRTT` | `RRT·` or `RR··`\
`RRT` | `RR·`\
### Assessment of BE{#setBE}
If a subject drops out from the study in a higher period, data with at least one administration of the Test and Reference will be used in the assessment of BE. Examples (missing values are denoted by `·`):\
`TRTR` | `TRT·` or `TR··`\
`RTRT` | `RTR·` or `RT··`\
`TRRT` | `TRR·` or `TR··`\
`RTTR` | `RTT·` or `RT··`\
`TTRR` | `TTR·`\
`TRT` | `TR·`\
`RTR` | `RT·`\
`TRR` | `TR·`\
`RTT` | `RT·`\
### Example of different datasets{#setexpl}
16 subjects enrolled in the study. In sequence `RTRT` one dropout in the 2^nd^ period and one in the 4^th^ period. In sequence `TRTR` one dropout in the 3^rd^ period and one in the 4^th^.\
`1 RTR· 5 RTRT 9 TRTR 13 RTRT`\
`2 RTRT 6 TR·· 10 TRTR 14 TRT·`\
`3 RTRT 7 RTRT 11 RTRT 15 TRTR`\
`4 TRTR 8 R··· 12 TRTR 16 TRTR`
We obtain these datasets:
| Dataset | Purpose | included | excluded |
|:-:|-----|----------|:---:|
| #1 | Estimation of *CV*~wR~ | 13 who received 2 treatments `R` | 6, 8, 14 |
| #2 | Assessment of BE | 15 who received ≥1 treatment `T` _and_ ≥1 treatment `R` | 8 |
| #3 | Estimation of *CV*~wT~ | 13 who received 2 treatments `T` | 1, 6, 8 |
Datasets #1 and #2 are required for ABEL, whereas for ABE only dataset #2 is required.\
Formerly all three were required for the WHO’s reference-scaling of *AUC* (see [below](#app_out)).
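A sketch of how such subsets could be derived from data in the required structure (the helper below is hypothetical and not part of the package; eligibility is decided by the number of observed administrations of `R` and `T` per subject):
```{r eval = FALSE}
# Hypothetical helper (not part of the package): count observed administrations
# of R and T per subject and derive the three datasets described above
eligibility <- function(data) {
  obs <- !is.na(data$PK) # count only periods with an observed PK response
  nR  <- tapply(data$treatment == "R" & obs, data$subject, sum)
  nT  <- tapply(data$treatment == "T" & obs, data$subject, sum)
  list(CVwR = names(nR)[nR >= 2],           # dataset #1
       BE   = names(nR)[nR >= 1 & nT >= 1], # dataset #2
       CVwT = names(nT)[nT >= 2])           # dataset #3
}
eligibility(rds01)
```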
# Notes on the methods{#notes}
## Estimation of intra-subject variability{#cv}
The EMA proposed a linear model of log-transformed PK responses of the reference treatment\
_sequence_, _subject_(_sequence_), _period_\
where all effects are fixed. Estimated via function `lm()` of package `stats`:
```{r mod_CV, eval = FALSE}
modCV <- lm(log(PK) ~ sequence + subject %in% sequence + period,
            data = data[data$treatment == "R", ])
```
In full replicate designs the same model is run with `data = data[data$treatment == "T", ]` for informational purposes.
Special conditions for the sample size in three period full replicate designs:
> _The question raised asks if it is possible to use a design where subjects are randomised to receive treatments in the order of TRT or RTR._
>
> _The CHMP bioequivalence guideline requires that at least 12 patients are needed to provide data for a bioequivalence study to be considered valid, and to estimate all the key parameters. Therefore, if a 3-period replicate design, where treatments are given in the order TRT or RTR, is to be used to justify widening of a confidence interval for C~max~ then it is considered that at least 12 patients would need to provide data from the RTR arm. This implies a study with at least 24 patients in total would be required if equal number of subjects are allocated to the 2 treatment sequences._
>
> --- Q&A document^[European Medicines Agency. *Questions & Answers: positions on specific questions addressed to the Pharmacokinetics Working Party (PKWP).* EMA/618604/2008. London. June 2015 (and later revisions). [Online](https://www.ema.europa.eu/en/documents/scientific-guideline/questions-answers-positions-specific-questions-addressed-pharmacokinetics-working-party_en.pdf).]
If fewer than twelve subjects are eligible in sequence `RTR` of a `TRT | RTR` design (and in analogy in sequence `TRR` of a `TRR | RTT` design), the user is notified about the ‘uncertain’ estimate of *CV*~wR~. However, in a sufficiently powered study such a case is [extremely unlikely](https://forum.bebac.at/forum_entry.php?id=15139 "BEBA Forum, 23 July 2015").\
Let us explore the 95\% confidence interval of the _CV_:
```{r expl1}
# Estimate sample sizes of full replicate designs (theta0 0.90,
# target power 0.80) and CI of the CV with package PowerTOST
CV <- 0.30
design <- c("2x2x4", "2x2x3") # 4- and 3-period full replicate designs
res <- data.frame(design = rep(design, 2), n = c(rep(NA, 2), rep(12, 2)),
                  df = NA, CV = CV, lower = NA, upper = NA)
# 95% confidence interval of the CV
for (i in 1:nrow(res)) {
  if (is.na(res$n[i])) {
    res$n[i] <- PowerTOST::sampleN.scABEL(CV = CV,
                                          design = res$design[i],
                                          details = FALSE,
                                          print = FALSE)[["Sample size"]]
  }
  if (i > 2) {
    n.1 <- res$n[i-2] / 2 # no dropouts in one sequence
    n.2 <- 12             # only 12 eligible subjects in the other
    res$n[i] <- n.1 + n.2
  }
  if (res$design[i] == "2x2x4") {
    res$df[i] <- 3 * res$n[i] - 4
  } else {
    res$df[i] <- 2 * res$n[i] - 3
  }
  res[i, 5:6] <- round(PowerTOST::CVCL(CV = CV,
                                       df = res$df[i],
                                       side = "2-sided",
                                       alpha = 0.05), 3)
}
print(res, row.names = FALSE)
# Rows 1-2: Sample sizes for target power
# Rows 3-4: Only 12 eligible subjects in one sequence
```
It is incomprehensible why a four period full replicate design is considered by the EMA to give a more ‘reliable’ estimate of the _CV_ than a three period full replicate design.
## Model structure{#model_struc}
The EMA’s models assume equal [_sic_] intra-subject variances of Test and Reference (like in 2×2×2 trials) – even if proven false in one of the full replicate designs (where _both_ *CV*~wT~ and *CV*~wR~ can be estimated). Hence, among biostatisticians they are called ‘crippled models’ because the replicative nature of the study is ignored.
It should be noted that by default `anova()` produces a ‘Type I’ table, *i.e.*, all tests are performed against the residual error. Starting with version 1.1.0 ‘Type III’ is employed in order to obtain the correct test for carryover, where _sequence_ has to be tested against _subject_(_sequence_).^[Chow S-C, Liu J-p. *Design and Analysis of Bioavailability and Bioequivalence Studies.* Boca Raton: CRC Press; 3^rd^ ed. 2009. Chapter 3.2.]
```{r typeIII}
library(replicateBE) # attach the library to run example scripts
method.A(data = rds01, print = FALSE, verbose = TRUE)
```
In ‘Type I’ the _F_ value of _sequence_ would be $\small{\frac{0.007652}{0.159995}=0.04783}$ with $\small{p(F_{q=0.04783,\,df_1=1,\,df_2=217})\sim 0.827}$, whereas in ‘Type III’ it is $\small{\frac{0.007652}{2.855061}=0.00268}$ with $\small{p(F_{q=0.00268,\,df_1=1,\,df_2=75})\sim 0.959}$.
The nested structure _subject_(_sequence_) of the [methods](#methods) leads to an over-specified model.^[It contradicts the law of parsimony. Such a nesting is superfluous since in comparative bioavailability trials subjects are uniquely coded. If, say, subject `1` is allocated to sequence `TRTR` there is not yet ‘another’ subject `1` allocated to sequence `RTRT`. This explains the many lines in SAS `PROC GLM` given with `.` and in Phoenix WinNonlin as `not estimable`.] The simple model\
_sequence_, _subject_, _period_, _treatment_\
gives identical estimates of the residual variance and the treatment effect and hence, its confidence interval.
The same holds true for the [EMA’s model](#cv) to estimate *CV*~wR~. The simple model\
_subject_, _period_\
gives an identical estimate of the residual variance.
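This can be verified numerically, *e.g.*, for the estimation of *CV*~wR~ (a sketch; note that `subject`, `period`, and `sequence` have to be factors before calling `lm()` directly):
```{r eval = FALSE}
# Sketch: the over-specified and the simple model give the same residual
# variance (reference treatment only)
R <- rds01[rds01$treatment == "R", ]
R[c("subject", "period", "sequence")] <- lapply(R[c("subject", "period", "sequence")],
                                                factor)
mod.full   <- lm(log(PK) ~ sequence + subject %in% sequence + period, data = R)
mod.simple <- lm(log(PK) ~ subject + period, data = R)
c(full = summary(mod.full)$sigma, simple = summary(mod.simple)$sigma)
```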
Reference-scaling is acceptable for *C*~max~ (immediate release products: BE Guideline) and *C*~max~, *C*~max,ss~, *C*~τ,ss~, ~partial~*AUC* (modified release products^[European Medicines Agency, Committee for Medicinal Products for Human Use. *Guideline on the pharmacokinetic and clinical evaluation of modified release dosage forms.* EMA/CHMP/EWP/280/96. London. 20 November 2014. [Online](https://www.ema.europa.eu/en/documents/scientific-guideline/guideline-pharmacokinetic-clinical-evaluation-modified-release-dosage-forms_en.pdf).]). The intention to expand the limits has to be stated in the protocol and -- contrary to the FDA’s RSABE -- a clinical justification provided.
> _Those HVDP for which a wider difference in C~max~ is considered clinically irrelevant based on a sound clinical justification can be assessed with a widened acceptance range.
> The request for widened interval must be prospectively specified in the protocol._
>
> --- CHMP Bioequivalence Guideline
## BE limits, PE restriction, rounding issues{#BElim}
The limits can be expanded based on *CV*~wR~.
* $\small{CV_\textrm{wR}\leq 30\%}$\
Lower cap, *i.e.*, no scaling, conventional limits:\
$\small{\left\{{L,U}\right\} = 80.00 - 125.00\%}$
* $\small{30\%<CV_\textrm{wR}\leq 50\%}$\
Expanded limits:\
$\small{\left\{{L,U}\right\} = 100\exp(\mp 0.760\cdot s_\textrm{wR})}$
* $\small{CV_\textrm{wR}>50\%}$ (EMA), $\small{CV_\textrm{wR}>\sim57.4\%}$ (Health Canada)\
Upper cap, *i.e.*, applying $\small{s^*_\textrm{wR}=\sqrt{\log_{e}(CV_\textrm{cap}^{2}+1)}}$ in the expansion formula, giving\
$\small{\left\{L,U\right\} = 69.84 - 143.19\%}$ (EMA)\
$\small{\left\{L,U\right\} = 66.7 - 150.0\%}$ (Health Canada).
The special case recommended by the Gulf Cooperation Council (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, the United Arab Emirates) is:
* $\small{CV_\textrm{wR}\leq 30\%}$\
Lower cap, *i.e.*, no scaling, conventional limits:\
$\small{\left\{{L,U}\right\} = 80.00 - 125.00\%}$\
* $\small{CV_\textrm{wR}>30\%}$\
Widened limits:\
$\small{\left\{{L,U}\right\} = 75.00 - 133.33\%}$
In reference-scaling a so-called *mixed* (a.k.a. *aggregate*) criterion is applied. In order to pass ABEL,
* the 90\% confidence interval has to lie entirely within the acceptance range $\small{\left\{L,U\right\}}$ _and_
* the point estimate (PE) has to lie within 80.00 – 125.00\%.
To avoid discontinuities due to double rounding, expanded limits are calculated in full numeric precision and only the confidence interval is rounded according to the guidelines.
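For illustration, a sketch of the expansion applied directly to an assumed *CV*~wR~ (regulatory constant 0.760 and the EMA’s caps as given above); the chunk below cross-checks the limits of the three jurisdictions with `PowerTOST::scABEL()`:
```{r eval = FALSE}
# Sketch: EMA expanded limits for an assumed CVwR, caps applied
CVwR <- 0.41                                 # assumed example value
swR  <- sqrt(log(CVwR^2 + 1))
if (CVwR <= 0.30) {
  L <- 80; U <- 125                          # lower cap: no expansion
} else {
  swR.cap <- min(swR, sqrt(log(0.50^2 + 1))) # upper cap at CVwR 50%
  L <- 100 * exp(-0.760 * swR.cap)
  U <- 100 * exp(+0.760 * swR.cap)
}
round(c(L = L, U = U), 2)
```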
```{r expl2}
# Calculate limits with package PowerTOST
CV <- c(30, 50, 57.382)
df <- data.frame(CV = CV,
                 reg1 = "EMA", L1 = NA, U1 = NA, cap1 = "",
                 reg2 = "HC",  L2 = NA, U2 = NA, cap2 = "",
                 reg3 = "GCC", L3 = NA, U3 = NA, cap3 = "",
                 stringsAsFactors = FALSE)
for (i in seq_along(CV)) {
  df[i, 3:4]   <- sprintf("%.3f",
                          PowerTOST::scABEL(CV[i]/100,
                                            regulator = df$reg1[i])*100)
  df[i, 7:8]   <- sprintf("%.1f",
                          PowerTOST::scABEL(CV[i]/100,
                                            regulator = df$reg2[i])*100)
  df[i, 11:12] <- sprintf("%.3f",
                          PowerTOST::scABEL(CV[i]/100,
                                            regulator = df$reg3[i])*100)
}
df$cap3[df$CV <= 30] <- df$cap2[df$CV <= 30] <- df$cap1[df$CV <= 30] <- "lower"
df$cap1[df$CV >= 50 & df$reg1 == "EMA"] <- "upper"
df$cap2[df$CV >= 57.382 & df$reg2 == "HC"] <- "upper"
names(df) <- c("CV(%)", rep(c("reg", "L(%)", "U(%)", "cap"), 3))
print(df, row.names = FALSE)
```
{width=560px}
## Degrees of freedom, comparison of methods{#df_comp}
The SAS code provided by the EMA in the Q&A document does not specify how the degrees of freedom should be calculated in Method B. Hence, the default in `PROC MIXED`, namely `DDFM=CONTAIN`, is applied, *i.e.*, `method.B(..., option = 2)`. For incomplete data (missing periods) Satterthwaite’s approximation of the degrees of freedom, *i.e.*, `method.B(..., option = 1)`, or Kenward-Roger (`method.B(..., option = 3)`) might be a better choice -- if stated as such in the SAP.\
For background about approximations in different software packages see the electronic [Supplementary Material](https://static-content.springer.com/esm/art%3A10.1208%2Fs12248-020-0427-6/MediaObjects/12248_2020_427_MOESM1_ESM.docx) of Schütz *et al.*
The EMA seemingly prefers Method A:
> _A simple linear mixed model, which assumes identical within-subject variability (Method B), may be acceptable as long as results obtained with the two methods do not lead to different regulatory decisions. However, in borderline cases […] additional analysis using Method A might be required._
>
> --- Q&A document (January 2011 and later revisions)
The half-width of the confidence interval in log-scale allows a comparison of methods (B *vs.* A), where a higher value _might_ point towards a more conservative decision.^[Of course, only if point estimates are identical.] In the provided example datasets – with one exception – the conclusion of BE (based on the mixed criterion) agrees between Method A and Method B.\
However, for the highly incomplete dataset 14 Method A is _liberal_ (passing by ANOVA but failing by the random effects model):
```{r expl3}
# Compare Method B acc. to the GL with Method A for all reference data sets.
ds <- substr(grep("rds", unname(unlist(data(package = "replicateBE"))),
                  value = TRUE), start = 1, stop = 5)
for (i in seq_along(ds)) {
  A <- method.A(print = FALSE, details = TRUE, data = eval(parse(text = ds[i])))$BE
  B <- method.B(print = FALSE, details = TRUE, data = eval(parse(text = ds[i])))$BE
  r <- paste0("A ", A, ", B ", B, " - ")
  cat(paste0(ds[i], ":"), r)
  if (A == B) {
    cat("Methods agree.\n")
  } else {
    if (A == "fail" & B == "pass") {
      cat("Method A is conservative.\n")
    } else {
      cat("Method B is conservative.\n")
    }
  }
}
```
Exploring dataset 14:
```{r expl4}
A <- method.A(print = FALSE, details = TRUE, data = rds14)
B1 <- method.B(print = FALSE, details = TRUE, data = rds14, option = 1)
B2 <- method.B(print = FALSE, details = TRUE, data = rds14) # apply default option
B3 <- method.B(print = FALSE, details = TRUE, data = rds14, option = 3)
# Rounding of CI according to the GL
A[17:21] <- round(A[17:21], 2) # all effects fixed
B1[17:21] <- round(B1[17:21], 2) # Satterthwaite's df
B2[17:21] <- round(B2[17:21], 2) # df acc. to Q&A
B3[17:21] <- round(B3[17:21], 2) # Kenward-Roger df
cs <- c(2, 10, 17:25)
df <- rbind(A[cs], B1[cs], B2[cs], B3[cs])
names(df)[c(1, 3:6, 11)] <- c("Meth.", "L(%)", "U(%)",
                              "CL.lo(%)", "CL.hi(%)", "hw")
df[, c(2, 11)] <- signif(df[, c(2, 11)], 5)
print(df[order(df$hw, df$BE, decreasing = c(FALSE, TRUE),
               method = "radix"), ], row.names = FALSE)
```
All variants of Method B are more conservative than Method A. Before rounding the confidence interval, `option = 2` with 192 degrees of freedom would be more conservative (lower CL 69.21029) than `option = 1` with 197.44 degrees of freedom (lower CL 69.21286). Given the incompleteness of this dataset (four missing values in period 2, twelve in period 3, and 19 in period 4), Satterthwaite’s or Kenward-Roger degrees of freedom are probably the better choice.
For detailed comparisons between methods based on simulations see the electronic [Supplementary Material](https://static-content.springer.com/esm/art%3A10.1208%2Fs12248-020-0427-6/MediaObjects/12248_2020_427_MOESM1_ESM.docx) of Schütz *et al.*
## Outlier analysis{#ola}
It is an open issue how outliers should be handled.
> _The applicant should justify that the calculated intra-subject variability is a reliable estimate and that it is not the result of outliers._
>
> --- CHMP Bioequivalence Guideline
The author ‘suggested’ box plots as a mere joke [*sic*] at the EGA/EMA symposium, being aware of their nonparametric nature and the EMA’s reluctance towards robust methods. Alas, this _joke_ was included in the Q&A document.
> _[…] a study could be acceptable if the bioequivalence requirements are met both including the outlier subject (using the scaled average bioequivalence approach and the within-subject CV with this subject) and after exclusion of the outlier (using the within-subject CV without this subject)._
>
> _An outlier test is not an expectation of the medicines agencies but outliers could be shown by a box plot. This would allow the medicines agencies to compare the data between them._
>
> --- EGA/EMA Q&A document
With the additional argument `ola = TRUE` in `method.A()` and `method.B()` an outlier analysis is performed, where the default `fence = 2`.^[The fences are given by the lowest value still within _m_×IQR of the lower quartile, and the highest value still within _m_×IQR of the upper quartile, where IQR is the interquartile range (difference between the 3^rd^ and 1^st^ quartiles). Values outside the fences are considered outliers. Decreasing the multiplier *m* to *e.g.*, 1.5 might result in many outliers, whereas increasing it results in only a few.
Different methods exist to calculate quartiles (nine ‘types’ are available in R, where the default is `type = 7`). R’s default is used by S, MATLAB, Octave, and Excel. Phoenix WinNonlin, Minitab, and SPSS use `type = 6`, SciPy uses `type = 4`, whereas the default in SAS and Stata is `type = 2` (though others are available as well).]
Results differ slightly depending on the software’s algorithm for calculating the median and quartiles. An example with the nine types implemented in R:
```{r expl5}
# Compare different types with some random data
set.seed(123456)
x <- rnorm(48)
p <- c(25, 50, 75)/100
q <- matrix(data = "", nrow = 9, ncol = 4,
            dimnames = list(paste("type =", 1:9),
                            c("1st quart.", "median", "3rd quart.",
                              "software / default")))
for (i in 1:9) {
  q[i, 1:3] <- sprintf("%.5f", quantile(x, prob = p, type = i))
}
q[c(2, 4, 6:8), 4] <- c("SAS, Stata", "SciPy", "Phoenix, Minitab, SPSS",
                        "R, S, MATLAB, Octave, Excel", "Maple")
print(as.data.frame(q))
```
Note the differences even in the medians.
* Box plots of studentized^[*Externally* studentized: $\small{\widehat{\sigma}_{(i)}^2={1 \over n-m-1}\sum_{\begin{smallmatrix}j = 1\\j \ne i\end{smallmatrix}}^n \widehat{\varepsilon\,}_j^{\,2}}$] and standardized^[*Internally* studentized: $\small{\widehat{\sigma}^2={1 \over n-m}\sum_{j=1}^n \widehat{\varepsilon\,}_j^{\,2}}$] model residuals are constructed.^[Both are available in SAS and R, whereas only the latter in *e.g.*, Phoenix WinNonlin. In general the former are slightly more restrictive. Which one will be used has to be stated in the SAP.]
* Potential outliers are flagged based on the argument `fence` provided by the user.
* With the additional argument `verbose = TRUE` detailed information is shown in the console.
Example for the reference dataset 01:
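The output below can be produced by a call along these lines (the default `fence = 2` applies):
```
method.A(ola = TRUE, verbose = TRUE, data = rds01)
```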
```
Outlier analysis
(externally) studentized residuals
Limits (2×IQR whiskers): -1.717435, 1.877877
Outliers:
subject sequence stud.res
45 RTRT -6.656940
52 RTRT 3.453122
standarized (internally studentized) residuals
Limits (2×IQR whiskers): -1.69433, 1.845333
Outliers:
subject sequence stand.res
45 RTRT -5.246293
52 RTRT 3.214663
```
If outliers are detected based on *studentized* residuals, then -- in addition to the expanded limits based on the complete reference data -- tighter limits are calculated from *CV*~wR~ after exclusion of the outliers, and BE is assessed with the new limits as well. Note that *standardized* residuals are given for informational purposes only and are *not* used for exclusion of outliers.
Output for the reference dataset 01 (re-ordered for clarity):
```
CVwR : 46.96% (reference-scaling applicable)
swR : 0.44645
Expanded limits : 71.23% ... 140.40% [100exp(±0.760·swR)]
Assessment based on original CVwR 46.96%
────────────────────────────────────────
Confidence interval: 107.11% ... 124.89% pass
Point estimate : 115.66% pass
Mixed (CI & PE) : pass
╟────────┼─────────────────────┼───────■────────◊─────────■───────────────╢
Outlier fence : 2×IQR of studentized residuals.
Recalculation due to presence of 2 outliers (subj. 45|52)
─────────────────────────────────────────────────────────
CVwR (outl. excl.) : 32.16% (reference-scaling applicable)
swR (recalculated) : 0.31374
Expanded limits : 78.79% ... 126.93% [100exp(±0.760·swR)]
Assessment based on recalculated CVwR 32.16%
────────────────────────────────────────────
Confidence interval: pass
Point estimate : pass
Mixed (CI & PE) : pass
╟┼─────────────────────┼───────■────────◊─────────■─╢
```
Note that the PE and its CI are not affected, since the entire data are used; therefore, these values are not reported in the second analysis (only the conclusion of the assessment).\
The ‘ASCII line plot’ is given for informational purposes since its resolution is only \~0.5\%. The filled squares `■` are the lower and upper 90\% confidence limits, the rhombus `◊` the point estimate, the vertical lines `│` at 100\% and the PE restriction (80.00 – 125.00\%), and the double vertical lines `║` the expanded limits. The PE and CI take precedence over other symbols. In this case the upper limit of the PE restriction is not visible.\
Since both analyses arrive at the same conclusion, the study should be acceptable according to the Q&A document.
An assessment of outliers is *not* required for Brazil’s ANVISA and Chile’s ANAMED.
Health Canada^[Health Canada. *Guidance Document: Conduct and Analysis of Comparative Bioavailability Studies. 2.3.5. Outlier Consideration.* Ottawa. 2018/06/08. [Online](https://www.canada.ca/en/health-canada/services/drugs-health-products/drug-products/applications-submissions/guidance-documents/bioavailability-bioequivalence/conduct-analysis-comparative.html#a234).] accepts exclusion of outliers (irrespective of the treatment) if
1. the procedure is stated in the protocol and followed *before* the results of the analysis are summarised into confidence intervals (*i.e.*, regardless of whether results meet the standard),
2. no more than 5\% of the subjects may be considered to be outliers, unless there are 20 or fewer subjects, in which case only one subject may be removed,
3. it is recommended that a simple outlier test such as a studentised residual being greater than 3 be used, and
4. the subject in question should be identified as an outlier for all parameters, for either the test or reference product, upon which the bioequivalence decision is to be based.
Note that assessment of outliers of the test treatment is currently not supported. In order to comply with conditions #1 and #3 use (here for the internal dataset 01 with a fence narrower than the default):\
```
B1 <- method.B(ola = TRUE, verbose = TRUE, print = FALSE,
details = FALSE, fence = 1.5, data = rds01,
option = 1)
```
You will get
```
Outlier analysis
(externally) studentized residuals
Limits (1.5×IQR whiskers): -1.631514, 1.553557
Outliers:
subject sequence stud.res
41 RTRT 1.877877
45 RTRT -6.656940
46 TRTR -1.717435
52 RTRT 3.453122
standarized (internally studentized) residuals
Limits (1.5×IQR whiskers): -1.612749, 1.53832
Outliers:
subject sequence stand.res
41 RTRT 1.845333
45 RTRT -5.246293
46 TRTR -1.694330
52 RTRT 3.214663
```
There are 73 subjects in the dataset with two administrations of the reference. Hence, while observing condition #2, we are allowed to delete 73 × 5\% ≈ 3 outliers. Delete subjects 45 and 52, whose residuals are outside ±3, from the dataset and run the analysis on the reduced dataset as usual.
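A sketch of this final step (exclusion by hand, followed by the usual analysis of the reduced dataset):
```
rds01.red <- rds01[!rds01$subject %in% c(45, 52), ] # exclude subjects 45 and 52
method.B(data = rds01.red, option = 1)
```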
It must be mentioned that Health Canada’s approach might increase the Type I Error.^[Fuglsang A. *Increased Patient’s Risk Associated with the Canadian Bioequivalence Guidance Due to Outlier Removal.* Ther Innov Reg Sci. 2022; 56: 168--72. [doi:10.1007/s43441-021-00344-2](https://doi.org/10.1007/s43441-021-00344-2).]
# Applicability, caveats, outlook{#app_out}
The EMA’s approach of reference-scaling for highly variable drugs / drug products is currently recommended in other jurisdictions as well (*i.e.*, the WHO; the Association of Southeast Asian Nations, Australia, Belarus, Brazil, Chile, the East African Community, Egypt, the Eurasian Economic Union, New Zealand, and the Russian Federation). Health Canada accepts ABEL only for *AUC* with an upper cap of scaling at \~57.4\% (maximum expansion to 66.7 -- 150.0\%) and *might* require a true mixed-effects model. Whether Method B is acceptable is unclear.
If degrees of freedom are approximated (Satterthwaite, Kenward-Roger), the SAP and statistical report should always specify which method will be / was used (see [above](#methodB)) in order to allow recalculation in other software. This package uses the *expected* information matrix.^[For Satterthwaite only available in Phoenix WinNonlin, SPSS, and Stata; as an option in SAS by `SCORING=1`. For Kenward-Roger the default option `EIM` in Stata. The *observed* information matrix is the only one available in JMP and the default `SCORING=0` in SAS; option `OIM` in Stata.]
The estimated *CV*~wR~ is always uncertain (the degree of uncertainty depends on the *CV*~wR~ itself, the design, and – to a minor degree – the sample size), which might lead to an inflation of the type I error (*i.e.*, if ABEL is falsely applied although the *true* – but unknown – *CV*~wR~ is lower than its estimate).^[Wonnemann M, Frömke C, Koch A. *Inflation of the Type I Error: Investigations on Regulatory Recommendations for Bioequivalence of Highly Variable Drugs.* Pharm Res. 2015; 32(1): 135--43. [doi:10.1007/s11095-014-1450-z](https://doi.org/10.1007/s11095-014-1450-z).], ^[Muñoz J, Alcaide D, Ocaña J. *Consumer’s risk in the EMA and FDA regulatory approaches for bioequivalence in highly variable drugs.* Stat Med. 2016; 35(12): 1933--43. [doi:10.1002/sim.6834](https://doi.org/10.1002/sim.6834).]\
Use the optional argument `method.A(..., adjust = TRUE)` to iteratively adjust $\small{\alpha}$ to control the type I error.^[Labes D, Schütz H. *Inflation of Type I Error in the Evaluation of Scaled Average Bioequivalence, and a Method for its Control.* Pharm Res. 2016; 33(11): 2805--14. [doi:10.1007/s11095-016-2006-1](https://doi.org/10.1007/s11095-016-2006-1).]\
If you want to apply the most conservative approach of Molins *et al.*^[Molins E, Cobo E, Ocaña J. *Two-Stage Designs Versus European Scaled Average Designs in Bioequivalence Studies for Highly Variable Drugs: Which to Choose?* Stat Med. 2017: 36(30); 4777--88. [doi:10.1002/sim.7452](https://doi.org/10.1002/sim.7452).] (which corrects for *CV*~wR~ 30\% instead of the observed one), get the data frame of results with\
` x <- method.A(..., details = TRUE, print = FALSE)`.\
Adjust $\small{\alpha}$ in package [PowerTOST](https://cran.r-project.org/package=PowerTOST) and call `method.A()` again:\
` design <- "2x2x4" # your design`\
` n <- as.integer(strsplit(x[[6]], "|", fixed = TRUE)[[1]]) # subjects / sequence`\
` y <- PowerTOST::scABEL.ad(CV = 0.3, n = n, design = design, print = FALSE)`\
` method.A(..., alpha = y$alpha.adj)`
In a pilot phase the WHO accepted reference-scaling for *AUC* (four period full replicate studies were mandatory in order to assess the variability associated with each product).^[World Health Organization. *Application of reference-scaled criteria for AUC in bioequivalence studies conducted for submission to PQTm.* [Online](https://extranet.who.int/pqweb/sites/default/files/documents/AUC_criteria_November2018.pdf) Geneva. 22 November 2018.] It was not evident how this assessment should have been done.\
In PBE and IBE the *s*~wT~/*s*~wR~ ratio was assessed and ‘similar’ variability concluded for a ratio within 0.667 -- 1.500. However, the power of comparing variabilities in a study designed to compare means is low. This was one of the reasons why PBE and IBE were not implemented in regulatory practice. An alternative approach is given by the FDA where variabilities are considered ‘comparable’ if the upper confidence limit of *σ*~wT~/*σ*~wR~ is ≤2.5.^[U.S. Food and Drug Administration, Center for Drug Evaluation and Research. *Draft Guidance for Industry. Bioequivalence Studies with Pharmacokinetic Endpoints for Drugs Submitted Under an ANDA.* August 2021. [Download](https://www.fda.gov/media/87219/Download).]\
However, in 2021 the requirement of comparing variabilities was lifted by the WHO.^[World Health Organization. *Application of reference-scaled criteria for AUC in bioequivalence studies conducted for submission to PQT/MED.* Geneva. 02 July 2021. [Online](https://extranet.who.int/pqweb/sites/default/files/documents/AUC_criteria_July2021.pdf).]
# Cross-validation{#cross}
Results of all reference datasets agree with those obtained in SAS (9.4), Phoenix WinNonlin (6.4--8.3.4), STATISTICA (13), SPSS (22.0), Stata (15.0), and JMP (10.0.2).
# Contributors{#contr}
- Helmut Schütz (Author)
- Michael Tomashevskiy (Contributor)
- Detlew Labes (Contributor)
# License
[GPL-3](https://cran.r-project.org/web/licenses/GPL-3 "GNU General Public License, Version 3") `r format(Sys.time(), "%Y")` Helmut Schütz
# Disclaimer{#disclaimer}
Program offered for Use without any Guarantees and Absolutely No Warranty. No Liability is accepted for any Loss and Risk to Public Health Resulting from Use of this R-Code.