Testing with {testthat}

Author: Aida Gjoka

Published: September 25, 2025

tags: r, test, testthat

One of our main projects at Jumping Rivers over the last year has been building the litmus platform for validating R packages. Among other metrics of interest, an important component when assessing the quality of the code within a package is its unit tests. In this blog we discuss the main features of the {testthat} package, a convenient way of testing R code.

Testing in R

Testing is an important step when developing code in R, or any other language. If you are a Python user, you may want to read our previous blogs on pytest. Writing tests helps us make sure that our code works as expected. In the R ecosystem, the {testthat} package is one of the most widely used testing frameworks. In this blog we will explore the main features of {testthat}, highlighting some of its most useful functions with examples.

Before starting, note that although it is possible to use {testthat} outside of an R package, it works best within one, so the directory structure of the code and the testing code should look like this:

./testthatExample/                
├── R/
│   ├── function1.R
│   ├── function2.R
├── tests/
│   ├── testthat.R
│   └── testthat/
│       ├── test-function1.R
│       ├── test-function2.R
└── DESCRIPTION          

Here the main functions, in our case function1.R and function2.R, are stored in R/, and the tests are stored under tests/testthat/. All tests should be contained in files whose names start with test. Then, when we run testthat::test_local() from the root directory, or devtools::test(), the tests are automatically recognised and run.
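If you use the {usethis} package, this structure can be scaffolded automatically. A minimal sketch (these helpers come from {usethis}, not {testthat}):

# Set up tests/testthat/ and tests/testthat.R for the current package
usethis::use_testthat()

# Create tests/testthat/test-function1.R, ready for your tests
usethis::use_test("function1")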

Installing and Loading testthat

First, let’s install and load the package:

# Install testthat 
install.packages("testthat")

# Load the package
library(testthat)

Basic testthat Structure

The testthat package is built around three main components:

  1. Expectations: The building blocks that check if a result matches what you expect
  2. Tests: Groups of expectations that test a specific function or behaviour
  3. Test files: Collections of tests, typically organised by the functions they’re testing

Let’s start with the most commonly used expectations:

Testing Equality

The expect_equal() function tests for near equality, which makes it a good choice for floating point numbers, while expect_identical() tests for exact equality.

expect_equal(2 + 2, 4)
 
expect_identical(c(1L, 2L, 3L), 1:3)
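The difference matters for floating point arithmetic. A small illustration (the values are chosen just for this example):

# Passes: expect_equal() compares within a small numerical tolerance
expect_equal(0.1 + 0.2, 0.3)

# Fails: 0.1 + 0.2 is not bit-for-bit identical to 0.3
expect_identical(0.1 + 0.2, 0.3)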

Need help with R package validation to unleash the power of open source? Check out the Litmusverse suite of risk assessment tools.

Testing Errors and Warnings

expect_error() checks that the code throws an error, expect_warning() checks for warnings, and expect_silent() checks that code runs without errors or warnings. Although it is better practice to test for specific error and warning messages, we don't have to: in the first expect_error() and expect_warning() examples below we haven't passed a specific message to check for, so the test will pass as long as the code throws any error or warning respectively.

expect_error(log("not a number"))
expect_error(stop("Something went wrong"), "Something went wrong")
 
expect_warning(log(-1))
expect_warning(as.numeric(c("1", "2", "not_a_number")))

expect_silent(2 + 2)
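As well as matching the message, expect_error() and expect_warning() can match on the condition class via their class argument, which is more robust than matching message text. A short sketch using the base condition classes:

# Match on the condition class rather than the message
expect_error(stop("Something went wrong"), class = "simpleError")
expect_warning(log(-1), class = "simpleWarning")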

Testing Data Types

The expect_type() and expect_[s3|s4|s7]_class() functions check that the code returns an object which inherits from the expected base type, or from a specified S3, S4 or S7 class.

expect_type(c(1, 2, 3), "double")
expect_type(1:3, "integer")
expect_s3_class(data.frame(x = 1:3), "data.frame")
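For completeness, here is a sketch of an S4 check, using a throwaway class defined only for this example:

# A minimal S4 class, defined just for illustration
setClass("Point", slots = c(x = "numeric", y = "numeric"))

expect_s4_class(new("Point", x = 1, y = 2), "Point")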

Testing a Simple Function

Let us have a look at a function which is stored inside the function1.R file and has the following structure:

# Function to calculate the sum of a numeric vector
get_sum = function(x) {
  if (!is.numeric(x)) {
    stop("The argument of the function must be a number")
  }
  sum(x)
}

The tests that we can write for the above function would be:

# Tests for get_sum function
test_that("get_sum  calculates the sum correctly", {
  expect_equal(get_sum(x = c(1, 2, 3)), 6)
  expect_equal(get_sum(c(0, 0)), 0)
})

test_that("get_sum handles invalid inputs", {
  expect_error(get_sum(NULL), "The argument of the function must be a number")
})

Here we have created a test case with a description. We start with the test_that() function call, providing a description of the test followed by the block of expectations to run.

Testing Plots

Here we give an example of testing a {ggplot2} output and a base R plot.

ggplot2 plots are easier to test because they return structured objects with accessible components:

# ggplot2
library(ggplot2)

p = ggplot(mtcars, aes(x = mpg, y = hp)) + geom_point()

test_that("ggplot structure is correct", {
  expect_s3_class(p, "ggplot")
  expect_equal(rlang::as_name(p$mapping$x), "mpg")
})

Note: this example may change in the future, as {ggplot2} has been rewritten to use S7 classes internally, which would require expect_s7_class() instead.
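For reference, an S7 check would look like the sketch below, assuming the {S7} package is installed and using a throwaway class defined only for this example:

library(S7)

# A hypothetical S7 class, defined just for illustration
Dog = new_class("Dog", properties = list(name = class_character))

expect_s7_class(Dog(name = "Rex"), Dog)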

Base R plots are harder to test because they produce immediate visual output without returning testable objects. A useful package for testing base R plots is {vdiffr}, with its expect_doppelganger() function (which also works for ggplot objects). This allows us to perform a semblance of snapshot testing for our plot: on the initial test run an image is saved, and it is then compared against in future test runs.

Assume the following code is used to make a plot:

library(vdiffr)

# Function that creates base R plot
create_base_scatter = function(data) {
  plot(data$mpg, data$hp,
       main = "MPG vs Horsepower",
       xlab = "Miles per Gallon",
       ylab = "Horsepower",
       col = "blue",
       pch = 16)
  abline(lm(hp ~ mpg, data = data), col = "red")
}

And the testing code for the above function would be:

test_that("base R scatter plot visual output is correct", {
  expect_doppelganger("base_scatter_plot", {
    create_base_scatter(mtcars)
  })
})

The way expect_doppelganger() works is that an SVG of the plot is saved in a sub-directory of the tests directory. Upon future runs of the tests a new image is generated and compared against the original: if they match the test passes, but if they differ the test fails. A few issues can cause doppelganger tests to fail, such as randomness in the plot or time/date-based variables, so keep these in mind when writing your tests.
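When a plot changes intentionally, the saved snapshot needs updating. {vdiffr} uses testthat's snapshot machinery, so the standard review helpers apply (a sketch; the file name passed here is hypothetical):

# Launch an interactive app to compare the old and new snapshots
testthat::snapshot_review("function2")

# Or accept the new snapshots without reviewing them
testthat::snapshot_accept("function2")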

Conclusion

The testthat package provides a robust and intuitive framework for ensuring code quality in R packages. From basic equality checks to plot validation, these testing strategies help catch bugs early and maintain reliable code as your package evolves. Whether you're testing simple mathematical functions or complex data visualisations, incorporating comprehensive unit tests into your development workflow is essential for building trustworthy R packages. As demonstrated through the examples in this blog, testthat makes it straightforward to implement testing practices that will benefit both you and your package users in the long run. If you would like some further reading on {testthat}, then check out the package website.

