The {ordinalsimr} package combines a Shiny interface with simulation functions for two-group ordinal outcomes. In practice, the app is designed to help users compare test performance under user-defined assumptions and to summarize Type I error, Type II error, and power.
In the app, users can set the distributional and design assumptions that define each scenario. The app also provides progress tracking for longer simulation runs, optional Type I error runs for each group, plots, and downloadable outputs.
Data generation is handled by assign_groups() and orchestrated in repeated runs by run_simulations(). At each iteration:

- assign_groups() samples group membership (y) using the specified allocation probabilities.
- Ordinal outcomes (x) are sampled within each group using prob0 (group 0) and prob1 (group 1).
- run_simulations() repeats this process for each requested sample size and iteration count.

This design keeps the data-generating mechanism explicit and directly tied to user-entered assumptions.
For each simulated dataset, ordinal_tests() computes p-values for the selected methods. By default, all implemented methods are run:

- Wilcoxon rank-sum test (stats::wilcox.test)
- Fisher's exact test (stats::fisher.test)
- Chi-squared test without continuity correction (stats::chisq.test(correct = FALSE))
- Chi-squared test with continuity correction (stats::chisq.test(correct = TRUE))
- Proportional odds model (rms::lrm)
- Independence test (coin::independence_test to fit the test, then coin::pvalue to extract the p-value)

The test set can be restricted in both the app and function calls, which is useful for targeted method comparisons.
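Applied to one simulated dataset, the test step might look like the following. The column names `x` and `y` are assumptions carried over from the data-generation description; the exact interface for restricting the test set is not shown here because its argument name is not documented in this overview.

``` r
# Illustrative sketch: column names are assumptions based on the
# data-generation description (y = group, x = ordinal outcome).
library(ordinalsimr)

dat <- assign_groups(
  n = 100, prob0 = c(0.40, 0.30, 0.20, 0.10),
  prob1 = c(0.25, 0.30, 0.25, 0.20),
  sample_prob = c(0.5, 0.5)  # assumed argument names throughout
)

# Compute p-values for the implemented methods on this dataset
p_vals <- ordinal_tests(x = dat$x, y = dat$y)
```

Running this across many iterations, and tabulating how often each p-value falls below the significance threshold, is what yields the Type I error and power estimates.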
These methods provide complementary views of group differences for ordinal endpoints: rank-based tests focus on distributional shift, contingency-table tests assess association between group and category counts, and the proportional-odds model summarizes effects in an ordered logistic framework.
Using them side-by-side is helpful in simulation studies because Type I error and power can change with sample size, allocation imbalance, and outcome distribution shape.
An explanation of how to use the core simulation functions in a script-based workflow is provided in the Coding Simulations vignette. The same functions that power the app can be used in scripts for more customized analyses, batch runs, or integration with other workflows.
This workflow mirrors the app logic: generate group/outcome data repeatedly, compute test p-values, then summarize results across iterations.
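A minimal end-to-end script under the same assumptions might look like the sketch below; argument names (`n`, `n_iterations`) are illustrative placeholders, not a verified signature, and the Coding Simulations vignette is the authoritative reference.

``` r
# Minimal end-to-end sketch; argument names are assumptions, not a
# verified run_simulations() signature.
library(ordinalsimr)

# Alternative-hypothesis run: groups differ
results <- run_simulations(
  n = c(60, 100),            # sample sizes to evaluate (assumed)
  prob0 = c(0.40, 0.30, 0.20, 0.10),
  prob1 = c(0.25, 0.30, 0.25, 0.20),
  n_iterations = 500         # iterations per sample size (assumed)
)
power_tbl <- calculate_power_t2error(results)

# Null run for Type I error: both groups share the same distribution
null_results <- run_simulations(
  n = c(60, 100),
  prob0 = c(0.40, 0.30, 0.20, 0.10),
  prob1 = c(0.40, 0.30, 0.20, 0.10),
  n_iterations = 500
)
t1_tbl <- calculate_t1_error(null_results)
```

Note that Type I error is estimated from a separate run in which prob0 and prob1 are identical, mirroring the app's optional Type I error runs.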
The package is structured in three connected layers:

- Simulation engine: assign_groups(), ordinal_tests(), and run_simulations() define data generation, test evaluation, and iteration over simulation settings. The run_simulations_in_background() function wraps run_simulations() to run simulations in background processes for the Shiny app.
- Summary functions: calculate_power_t2error() and calculate_t1_error() summarize simulation results.
- Shiny application: the server function (app_server) wires together modular components for data entry, simulation triggers, background execution, progress updates, plotting, and export/report generation.

This separation helps keep methods transparent while allowing the app and script-based workflows to use the same simulation engine.
Bug reports and feature requests can be submitted as issues at https://github.com/NeuroShepherd/ordinalsimr/issues.