API as a package: Testing

Author: Jamie Owen

Published: September 29, 2022

tags: r, python

Introduction

This is the final part of our three-part series:

  • Part 1: API as a package: Structure
  • Part 2: API as a package: Logging
  • Part 3: API as a package: Testing (this post)

This blog post is a follow-on to our API as a package series, and expands on the topic of testing {plumber} API applications within the package structure, leveraging {testthat}. As a reminder of the situation so far: we have an R package that defines functions which will be used as endpoints in a {plumber} API application, and the API routes defined via {plumber} decorators in inst simply map the package functions to URLs.

The three stages of testing

The intended structure of the API as a package setup is to encourage a particular, consistent composition of code for each exposed endpoint. That is:

  • A plumber decorator that maps a package function to a URL
  • A wrapper function that takes a request object, deals with any serialization of data and dispatches to a “business logic” function
  • The “business logic” function, i.e. the core functionality behind a particular endpoint

With that, we believe this structure induces three levels of testing to consider:

  • Does the running API application successfully return an appropriate response when we make a request to an endpoint?
  • Does the wrapper function parse the request and return data in the shape we expect?
  • Is the business logic correct?


Example: Sum

Consider a POST endpoint that will sum the numeric contents of objects. For simplicity, we will consider only requests that send valid JSON objects, however there are a few scenarios that might arise:

  1. A JSON array

    # array.json
    [1, 2]
    # expected sum: [3]
    
  2. A single JSON object

    # single_object.json
    {
      "a": 1,
      "b": 2
    }
    # expected sum: [3]
    
  3. An array of JSON objects

    # array_objects.json
    [
      {
        "a": 1,
        "b": 2
      },
      {
        "a": 1,
        "b": 2
      }
    ]
    # expected sum: [3, 3]
    

R code solution

Writing some R code to ensure that we calculate the expected sums for each of these is fairly simple, keeping in mind that when parsing JSON objects we would obtain a named list to represent an object and an unnamed list to represent an array:

# R/api_sum.R

# function to check whether the object we 
# receive looks like a json array
is_array = function(parsed_json) {
  is.null(names(parsed_json))
}

# function to sum the numeric components in a list
sum_list = function(l) {
  purrr::keep(l, is.numeric) |> purrr::reduce(`+`, .init = 0)
}

# main sum function which handles lists of lists appropriately
my_sum = function(x) {
  if (is_array(x)) {
    if(is.list(x)) {
      purrr::map(x, sum_list)
    } else {
      sum(x)
    }
  } else {
    sum_list(x)
  }
}
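
To see how these pieces behave on the three scenarios above, a quick interactive check (a sketch, assuming the package functions are loaded, for example via devtools::load_all()) might look like:

# array: fromJSON() returns a numeric vector, so my_sum() falls back to sum()
my_sum(jsonlite::fromJSON('[1, 2]'))
# [1] 3

# single object: fromJSON() returns a named list, handled by sum_list()
my_sum(jsonlite::fromJSON('{"a": 1, "b": 2}'))
# [1] 3

# array of objects: fromJSON() simplifies to a data frame, whose numeric
# columns are summed element-wise by sum_list()
my_sum(jsonlite::fromJSON('[{"a": 1, "b": 2}, {"a": 1, "b": 2}]'))
# [1] 3 3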

To integrate this into our API service we can then write a wrapper function

# R/api_sum.R

#' @export
api_sum = function(req) {
  # parse the JSON body of the request
  parsed_json = jsonlite::fromJSON(req$postBody)
  # return the sum
  return(my_sum(parsed_json))
}

and add a plumber annotation in inst/extdata/api/routes/example.R

#* @post /sum
#* @serializer unboxedJSON
cookieCutter::api_sum

which exposes our sum function on the URL <root_of_api>/example/sum.
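
Before writing any formal tests, a rough manual check of the running service can be reassuring. The snippet below is a sketch, assuming you have launched the API locally (for example with the generate_api() helpers from Part 1) and that it is listening on port 8000:

# send one of the example bodies to the running API
resp = httr::POST(
  "http://127.0.0.1:8000/example/sum",
  body = '{"a": 1, "b": 2}',
  httr::content_type_json()
)
httr::content(resp)
# expect 3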

Testing: Setup

With the above example we are now ready to start writing some tests. There are a few elements that are likely to be common when testing the endpoints of an API application:

  • Start an instance of your API
  • Send a request to your local running API
  • Create a mock object that looks like a real rook request object

The {testthat} package for R has utilities that make defining and using common structures like this easy. A tests/testthat/setup.R script will run before any tests are executed; here we can put together the setup, and subsequent teardown, of a running API instance. For the cookieCutter example package being built as part of this series, this might look like:

# tests/testthat/setup.R

## run before any tests
# pick a random available port to serve your app locally
# note that port will also be available in the environment in which your
# tests run.
port = httpuv::randomPort()

# start a background R process that launches an instance of the API
# serving on that random port
running_api = callr::r_bg(
  function(port) {
    dir = cookieCutter::get_internal_routes()
    routes = cookieCutter::create_routes(dir)
    api = cookieCutter::generate_api(routes)
    api$run(port = port, host = "0.0.0.0")
  }, list(port = port)
)

# give the background process a couple of seconds to start serving
Sys.sleep(2)

## run after all tests
withr::defer(running_api$kill(), testthat::teardown_env())

With this, as our test suite runs, we can send requests to our API at the following url pattern, http://0.0.0.0:{port}{endpoint}.
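
As an aside, the Sys.sleep(2) above is a blunt way of giving the background process time to start. If that proves flaky, one alternative is to poll the port until the API responds or a timeout is reached; a minimal sketch:

# tests/testthat/setup.R (alternative to the fixed sleep)
# poll the API for up to ~10 seconds; any HTTP response (even a 404)
# means the server is up, a connection error means it is not
for (i in 1:20) {
  up = tryCatch({
    httr::GET(glue::glue("http://0.0.0.0:{port}/"))
    TRUE
  }, error = function(e) FALSE)
  if (up || !running_api$is_alive()) break
  Sys.sleep(0.5)
}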

Similarly, {testthat} allows you to define helper functions for your test suite. Any file whose name starts with “helper” in your tests/testthat directory will be sourced before your tests run. We might use this to define some helper functions that make it easy to send requests to the running API and to create mock request objects.

# tests/testthat/helper-example.R

# convenience function for creating correct endpoint url
endpoint = function(str) {
  glue::glue("http://0.0.0.0:{port}{str}")
}

# convenience function for sending post requests to our test api
api_post = function(url, ...) {
  httr::POST(endpoint(url), ...)
}

# function to create minimal mock request objects
# doesn't fully replicate a rook request, but gives the parts
# we need
as_fake_post = function(obj) {
  req = new.env()
  req$HTTP_CONTENT_TYPE = "application/json"
  req$postBody = obj
  req
}
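
As a quick sanity check that this mock is sufficient (again a sketch, with the package functions loaded, for example via devtools::load_all()), the exported wrapper can be called directly with a faked request:

# call the endpoint wrapper directly, no running API required
fake_req = as_fake_post('{"a": 1, "b": 2}')
api_sum(fake_req)
# [1] 3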

You might also want to skip the API request tests in cases where the API service did not launch correctly:

# tests/testthat/helper-example.R

# skip other tests if api is not alive
skip_dead_api = function() {
  # running_api is created in setup.R
  testthat::skip_if_not(running_api$is_alive(), "API not started")
}

One of the things that we like to do, inspired by the pytest-datadir plugin for the Python testing framework pytest, is to store numerous test cases as data files. This makes it easy to run your tests against many examples, and to add new cases that should be tested in future. With that, our final helper function might be:

# tests/testthat/helper-example.R

test_case_json = function(path) {
  # test_path() will give appropriate path to running test environment
  file = testthat::test_path(path)
  # read a file from disk
  obj = readLines(file)
  # turn json contents into a single string
  paste(obj, collapse = "")
}
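
For example, with the array scenario from earlier saved as tests/testthat/example_data/array.json (the layout we set up below), the helper returns the file contents as a single JSON string, ready to be used as a request body:

test_case_json("example_data/array.json")
# [1] "[1, 2]"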

Testing: Tests

With all of the setup work done (at least we only need to do it once), we can finally write tests to address the three levels of testing identified earlier in the article. We identified three scenarios for the JSON we might receive, so we can store those as files in a data directory within our test directory.

└── tests
    ├── testthat
    │   ├── example_data
    │   │   ├── array.json
    │   │   ├── array_objects.json
    │   │   └── single_object.json

Our test script for this endpoint will then iterate through the files in this directory and:

  • Send each example as the body of a POST request and ensure we get a success response (200)
  • Send a mock request object to the wrapper function, ensuring that data is being parsed correctly and the return object is of the right shape
  • Take the data from the example file, run it through the my_sum() function and ensure that the result is correct

# tests/testthat/test-example.R

# iterate through multiple test cases
purrr::pwalk(tibble::tibble(
  # get all files in the test data directory
  file = list.files(test_path("example_data"), full.names = TRUE),
  # expected length (shape) of result
  length = c(2, 1, 1),
  # expected sums
  sums = list(c(3,3), 3, 3)
), function(file, length, sums){

  # use our helper to create the POST body
  test_case = test_case_json(file)

  # test against running API
  test_that("succesful api response", {
    # skip if not running
    skip_dead_api()
    headers = httr::add_headers(
      Accept = "application/json",
      "Content-Type" = "application/json"
    )
    # use our helper to send the data to the correct endpoint
    # pass the headers config unnamed so httr picks it up as request config
    response = api_post("/example/sum", headers, body = test_case)
    # check our expectation
    expect_equal(response$status_code, 200)
  })

  # test that the wrapper is doing its job
  test_that("successful api func", {
    # use helper to create fake request object
    input = as_fake_post(test_case)
    # execute the function which is exposed as a route directly
    res = api_sum(input)
    # check the output has the expected shape
    expect_length(res, length)
  })

  # test the business logic of the function
  test_that("successful sum", {
    # use the data parsed from the test case
    input = jsonlite::fromJSON(test_case)
    # execute the logic function directly
    res = my_sum(input)
    # check the result equals our expectation
    expect_equal(res, sums)
  })
})
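
With the setup, helpers and data files in place, the whole suite can then be run in the usual way for a package, for example:

# from the package root, during development
devtools::test()
# or as part of a full package check
devtools::check()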

Concluding remarks

With that, we have a setup for our test suite that takes care of a number of common elements, which can of course be expanded for other HTTP methods, data types and so on, together with a consistent approach to testing many cases at the API service, serialization/parsing and business logic levels. As with the other posts in this series, a dedicated package example is available in our blogs repo.

