End-to-end testing with shinytest2: Part 3

Author: Russ Hyde

Published: January 19, 2023

tags: r, shinytest2, shiny, testing

This is the final part of a series of three blog posts about using the {shinytest2} package to develop automated tests for shiny applications. In this series we cover:

  • the purpose of browser-driven end-to-end tests for a shiny developer, and tools (like {shinytest2}) that help implement them;

  • how to write and run a simple test using {shinytest2};

  • how best to design your test code so that it supports your future work (this post).

By this point in the blog series, we have created a simple shiny application as an R package, added the {shinytest2} testing infrastructure, and have written, run, broken and fixed a {shinytest2} test case. Here, we will add a new feature to the application. As in real (programming) life, we will add a new test for this feature and ensure that our old test still passes.

UI-driven end-to-end tests require a bit more code than unit tests. For example, starting the app and navigating around to set up some initial state will take a few lines of code. But these are things you’ll likely need to do in several tests. As you add more test cases and these commonalities reveal themselves, it pays to extract helper functions and/or classes. By doing so, your tests will look simpler, the behaviour that you are testing will be more explicit, and you’ll have less code to maintain. We’ll show some software designs that may simplify your {shinytest2} code.

This post builds upon the previous posts in the series, but is quite a bit more technical than either of them. In addition to shiny development, you’ll need to know how to define functions in R, and for the last section you’ll need to know about object-oriented programming in R (specifically using R6). The ideas in that section may be of interest even if you aren’t fluent with R6 classes yet.

Let’s get started.

The initial application

Our initial shiny application had a text field where the user could enter their name and a “Greet” button. The source code can be obtained from GitHub. On clicking the button, a greeting (“Hello <name>!”) is displayed in the app. The source code for the user interface and server function is shown below.

# In ./R/ui.R
ui = function(req) {
  fluidPage(
    textInput("name", "What is your name?"),
    actionButton("greet", "Greet"),
    textOutput("greeting")
  )
}

# In ./R/server.R
server = function(input, output, session) {
  output$greeting = renderText({
    req(input$greet)
    paste0("Hello ", isolate(input$name), "!")
  })
}

For this app we have a single test that checks that the greeting is displayed once the user has entered their name and clicked the “Greet” button.

# ./tests/testthat/test-e2e-greeter_accepts_username.R
test_that("the greeter app updates user's name on clicking the button", {
  # GIVEN: the app is open
  shiny_app = shinyGreeter::run()
  app = shinytest2::AppDriver$new(shiny_app, name = "greeter")
  app$set_window_size(width = 1619, height = 970)

  # WHEN: the user enters their name and clicks the "Greet" button
  app$set_inputs(name = "Jumping Rivers")
  app$click("greet")

  # THEN: a greeting is printed to the screen
  values = app$expect_values(output = "greeting", screenshot_args = FALSE)
})

Do you require help building a Shiny app? Would you like someone to take over the maintenance burden? If so, check out our Shiny and Dash services.

Writing your second test

First, we’ll add a second bit of functionality to the app. A simple change might be to greet the user in Spanish:

# In the UI
textOutput("spanish_greeting")

# In the server
output$spanish_greeting = renderText({
  req(input$greet)
  paste0("Hola ", isolate(input$name), "!")
})

The first thing to note is that, with this change to the app, the first test still passes. It would have failed had we not restricted the test to look at just the greeting variable (for example, if we had used app$expect_values() to snapshot all of the variables in play and to take an image of the app).
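To make the contrast concrete, here is a sketch of the two styles of snapshot call (the commented-out form is the one that would have broken when the Spanish greeting was added):

# Restricted: only the "greeting" output is snapshotted, so adding
# new outputs to the app does not invalidate this test
app$expect_values(output = "greeting", screenshot_args = FALSE)

# Unrestricted: snapshots every input and output (and, by default,
# takes a screenshot), so it fails whenever any part of the app changes
# app$expect_values()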

We want to add a new test to check the spanish_greeting as well as the greeting variable.

To add a new test to the app, we could use the {shinytest2} recorder (as in the previous post), or we could just copy and paste the first test, and modify the bits we need to. We’ll do the latter.

# ./tests/testthat/test-e2e-greeter_accepts_username.R

# ... snip ...

test_that("the greeter app prints a Spanish greeting to the user", {
  # GIVEN: The app is open
  shiny_app = shinyGreeter::run()
  app = shinytest2::AppDriver$new(shiny_app, name = "spanish_greeter")
  app$set_window_size(width = 1619, height = 970)

  # WHEN: the user enters their name and clicks the "Greet" button
  app$set_inputs(name = "Jumping Rivers")
  app$click("greet")

  # THEN: a Spanish greeting is printed to the screen
  values = app$expect_values(output = "spanish_greeting", screenshot_args = FALSE)
})

Note that we have changed the name argument to AppDriver$new(); this allows us to have multiple test cases in the same script. Were the AppDrivers for the English and Spanish tests both given name = "greeter", the snapshots would both be written to the same file.
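As an illustration of why distinct names matter, the snapshots for this script end up under testthat’s _snaps directory, with the AppDriver name used as a file prefix (the exact file names depend on your {shinytest2} version, but the layout is roughly):

tests/testthat/_snaps/
  e2e-greeter_accepts_username/
    greeter-001.json
    spanish_greeter-001.json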

Use functions to simplify and clarify your test code

The new test is almost identical to the previous one we wrote. That kind of duplication should set off alarm bells - more duplication means more maintenance.

In R, the simplest way to reduce code duplication is by writing a function.

Simplify the set-up code

Let’s add a function to get the app into the pre-test state:

initialise_test_app = function(name) {
  shiny_app = shinyGreeter::run()
  app = shinytest2::AppDriver$new(shiny_app, name = name)
  app$set_window_size(width = 1619, height = 970)

  app
}

With that, we can start the test version of the app using app = initialise_test_app("greeter") in the first test and app = initialise_test_app("spanish_greeter") in the second. This removes a few lines of code and makes it easier to write new tests, but the main purpose is to make the steps that are specific to each test more prominent.

The Spanish test now looks like:

test_that("the greeter app prints a Spanish greeting to the user", {
  # GIVEN: The app is open
  app = initialise_test_app("spanish_greeter")

  # WHEN: the user enters their name and clicks the "Greet" button
  app$set_inputs(name = "Jumping Rivers")
  app$click("greet")

  # THEN: a Spanish greeting is printed to the screen
  values = app$expect_values(output = "spanish_greeting", screenshot_args = FALSE)
})

Make the user steps more descriptive

What’s actually happening when the following code runs?

app$set_inputs(name = "Jumping Rivers")
app$click("greet")

First we set the value of the input$name variable to “Jumping Rivers”, and then we click on the button that has the HTML identifier “greet”. These are quite ‘internal’ concerns. What’s really happening is that the user is entering their username into the app (clicking the button is part of that process).

This is a really simple app, so it shouldn’t take long to work out what the above code does. But in more complicated apps, and when testing more complicated workflows, the series of steps that define the user actions can be quite extensive.

Having well-defined functions that are responsible for the different steps in a test workflow is really valuable. With these, your non-coding colleagues will find it easier to follow the connection between what the code is testing and how the test is defined.

Even in this simple setting, it might be beneficial to introduce a function:

enter_username = function(app, username) {
  app$set_inputs(name = username)
  app$click("greet")

  # return the app object, so that you can pipe together the actions
  invisible(app)
}
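Because enter_username() returns the app invisibly, the set-up and user actions can also be chained with R’s native pipe (available from R 4.1), as in this sketch:

app = initialise_test_app("spanish_greeter") |>
  enter_username("Jumping Rivers")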

Then you can rewrite the test steps:

test_that("the greeter app prints a Spanish greeting to the user", {
  # GIVEN: The app is open
  app = initialise_test_app("spanish_greeter")

  # WHEN: the user enters their name and clicks the "Greet" button
  enter_username(app, "Jumping Rivers")

  # THEN: a Spanish greeting is printed to the screen
  values = app$expect_values(output = "spanish_greeting", screenshot_args = FALSE)
})

Another benefit of introducing functions for commonly repeated test actions relates to refactoring. Suppose the input$name variable was renamed in the app. With the initial two tests, accommodating that change would mean touching two places in the code: one in the English test and one in the Spanish test. Now we only have to modify a single line in enter_username(). A similar issue arises when decomposing an app into shiny modules (because the HTML identifiers for the elements change with the refactoring).
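For instance, if input$name were renamed to a (hypothetical) input$user_name, the fix would be confined to one line of the helper:

enter_username = function(app, username) {
  # "user_name" is a hypothetical new identifier; previously: name = username
  app$set_inputs(user_name = username)
  app$click("greet")

  invisible(app)
}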

Make your expectations descriptive too …

The snapshot tests used by {shinytest2} are wonderful when you need to compare many values at once, or to compare the visual content of your app. But they can make your test cases a bit opaque. In the tests above, two welcome messages were printed to the screen once the user entered their name. While each test was running, {shinytest2} compared the observed value of a given welcome message to a previously stored value - but that stored value lives some distance away from where the test is defined. Hiding the expectations away like this may make it hard for a new developer to see why the actions performed in the “WHEN” steps of a test should culminate in the values observed in the “THEN” step.

{shinytest2} provides some additional methods that help extract specific values. With these, you can use the expectation functions from {testthat} much as you would when unit-testing functions in R.

For example, we might rewrite the first test like so:

test_that("the greeter app updates user's name on clicking the button", {
  # GIVEN: The app is open
  app = initialise_test_app("greeter")

  # WHEN: the user enters their name and clicks the "Greet" button
  enter_username(app, "Jumping Rivers")

  # THEN: a greeting is printed to the screen
  message = app$get_value(output = "greeting")
  expect_equal(message, "Hello Jumping Rivers!")
})
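If a single test needs to check several outputs at once, AppDriver also provides get_values(). A sketch, assuming we wanted to check both greetings in one test:

values = app$get_values(output = c("greeting", "spanish_greeting"))
expect_equal(values$output$greeting, "Hello Jumping Rivers!")
expect_equal(values$output$spanish_greeting, "Hola Jumping Rivers!")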

The source code for this version of the application can be obtained from GitHub.

The Page Object Model

The functions introduced above hide the app’s internal details from us. By using them, we don’t need to know which HTML element or shiny variable to interact with or modify when setting the username.

A pattern called the “Page Object Model” (POM) takes this idea of hiding an app’s internal details from the test author even further. The POM is common in UI-based end-to-end testing in other languages. A class is defined that contains methods for interacting with the app (but no code to perform test expectations). The test code calls methods provided by the POM, making the tests more concise and descriptive. A neat way to achieve this design in R is with R6 classes. For our app, we might have a class with a method for opening the app and an enter_username() method.

The AppDriver class provided by {shinytest2} is an R6 class. It provides many of the methods we used above (expect_values(), get_value()) for interacting with the app, so by now you already have some experience of using an R6 object. We can inherit from AppDriver to create a POM that is specific to our app, as follows:

GreeterApp = R6::R6Class(
  "GreeterApp",
  # Alternatively you could pass an AppDriver in at initiation
  inherit = shinytest2::AppDriver,
  public = list(
    width = 1619,
    height = 970,
    initialize = function(name) {
      shiny_app = shinyGreeter::run()
      super$initialize(shiny_app, name = name)
      self$set_window_size(width = self$width, height = self$height)
    },
    enter_username = function(username) {
      self$set_inputs(name = username)
      self$click("greet")

      invisible(self)
    }
  )
)

With that class in place, we can rewrite our original test as follows:

test_that("the greeter app updates user's name on clicking the button", {
  # GIVEN: The app is open
  app = GreeterApp$new("greeter")

  # WHEN: the user enters their name and clicks the "Greet" button
  app$enter_username("Jumping Rivers")

  # THEN: a greeting is printed to the screen
  message = app$get_value(output = "greeting")
  expect_equal(message, "Hello Jumping Rivers!")
})
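And since enter_username() returns the object invisibly, the R6 methods can be chained, so the “GIVEN” and “WHEN” steps could be collapsed into a single expression:

app = GreeterApp$new("greeter")$enter_username("Jumping Rivers")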

Adding all this design work into setting up your tests might seem like a lot of unnecessary effort. But it makes it easier to add new tests, simpler to keep your tests passing as you refactor your app, and easier for others to follow your tests.

If your tests hinder your ability to add new features to your app, or prevent you from restructuring your app, it may be worth restructuring your test code.

The source code for the application in its current form can be obtained from GitHub.

Conclusion

This blog series was a brief introduction to UI-based end-to-end tests for web applications and to the new package {shinytest2}. These kinds of tests are very powerful and, with {shinytest2}’s test recorder, relatively easy to construct. But, because the whole app is within their scope, they can be quite fragile and difficult to follow. So if you find that small changes to your app cause seemingly unconnected tests to fail, or that keeping your tests passing requires very similar changes in multiple places, you may benefit from some of the ideas in this post:

  • Can you introduce some functions (or POM methods) to clarify what is happening in each step of your test?
  • Can you ensure that the assertion in your test is only comparing data that is directly relevant to that test?

The {shinytest2} vignettes (Robust testing, Testing in depth) discuss some of the ideas in this post in more depth, and from a slightly different perspective.

