Advanced Testing in Python

Authors: Aida Gjoka & Russ Hyde

Published: May 8, 2025

tags: python, pytest

Writing tests is one of the best ways to keep your code reliable and reproducible. This post builds on Part 1 of our blog series on Python testing with pytest, and explores some of the more advanced features pytest offers. From parametrised fixtures to mocking and other useful pytest plugins, we will show how to make your tests more reproducible and easier to manage, and demonstrate how writing simple tests can save you time in the long run.

Testing in Python

When we write code, it is important to ensure it behaves as expected, which is why we test it. Testing (and re-testing) our code should be a regular practice, ideally done thoroughly, quickly, and reliably after every change.

To achieve this, we write additional code to verify the behaviour of our main code. We use specific terms to differentiate between these two types of code:

  • Production Code: the code that fulfills the purpose of the software, and is run by the user.
  • Test Code: additional code only used to test the production code.

The directory structure for production and testing code typically looks as follows:

./advanced_pytest/
├── map.py                        # production code
├── tests/
│   ├── parametrised_fixture.py
│   └── test_map.py
└── venv/

where the main module (in our case map.py) sits in the root directory and the tests are stored under tests/.

Parametrised fixtures

In Part 1 we introduced the concept of fixtures in pytest. Now, let’s explore parametrised fixtures, a powerful feature that allows us to run the same test logic with different inputs. This helps avoid code duplication while testing various scenarios without rewriting your tests.

import pytest

@pytest.fixture(params=[1, 2, 3])
def input_value(request):
    return request.param

def test_increment(input_value):
    assert input_value + 1 > input_value

This test will run three times, once for each value in the params list (1, 2, and 3). By parametrising the fixture, we effectively reuse the same test logic across multiple inputs. This makes your tests more compact and helps catch potential issues that might only appear with certain values.
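
As a small illustration (the strings below are ours, purely for the example), the same parametrised fixture can be shared by several tests; pytest runs every test once per parameter, so this file is collected as six tests:

import pytest

@pytest.fixture(params=["", "abc", "hello world"])  # illustrative strings
def text(request):
    return request.param

def test_upper_is_idempotent(text):
    # Upper-casing twice should give the same result as upper-casing once
    assert text.upper().upper() == text.upper()

def test_strip_never_lengthens(text):
    assert len(text.strip()) <= len(text)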

Mocking

Mocking is the process of replacing a real object with a pretend object, which records how it is called and can assert if it is called incorrectly. In Python, mocking can be performed via the unittest.mock module. We can create a mock version of a function as follows:

# ./tests/test_mock_function.py
from unittest.mock import Mock
mock_function = Mock(name="my_function", return_value=2)

This creates a new object called mock_function, which can be used in place of any other function. The name="my_function" argument is a label for the mock, which is useful when debugging. The return_value=2 argument means that any time mock_function() is called, it will return 2, regardless of any arguments passed to it.

We can use our mock_function in a test:

# ./tests/test_mock_function.py (continued)
def test_mock_function_works():
    assert mock_function() == 2
    assert mock_function(123, "abc") == 2

Running the test script shows that mock_function() always returns 2.

python -m pytest tests/test_mock_function.py
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.3.5, pluggy-1.5.0
rootdir: /PATH/pytest-advanced-blog-post
collected 1 item                                                               

tests/test_mock_function.py .                                            [100%]

============================== 1 passed in 0.02s ===============================
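
A Mock also records every call made to it, so we can assert on how it was used. As a rough sketch, we could extend the same test file to check the recorded calls:

# ./tests/test_mock_function.py (continued)
def test_mock_function_records_calls():
    mock_function(123, "abc")
    # The mock remembers its call history, so we can assert on the last call
    mock_function.assert_called_with(123, "abc")
    assert mock_function.call_count >= 1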

Mocking External Dependencies

When testing functions that interact with external systems (such as APIs or databases), it’s important to isolate the code being tested. We want to avoid having our tests make real calls to remote resources, as this could cause failures due to issues like internet outages or slow database responses. Instead, we use mocks. Pytest supports mocking by integrating with the unittest.mock module (here we use the patch function).

Let's consider an example of some code (map.py) that retrieves and displays a static map image of a geographic location (Paris, in this case).

import requests


def map_at(lat, long, satellite=False, zoom=12, size=(400, 400)):
    base = "https://static-maps.yandex.ru/1.x/?"
    params = dict(
        z=zoom,
        size=str(size[0]) + "," + str(size[1]),
        ll=str(long) + "," + str(lat),
        l="sat" if satellite else "map",
        lang="en_US",
    )
    return requests.get(base, params=params, timeout=60)


paris_map = map_at(48.853, 2.3499)

import IPython

IPython.core.display.Image(paris_map.content)

In this example there is a single function, map_at(), that could be tested. Additional code in the script makes use of that function (paris_map = map_at(...)). The way the script is written means that whenever it is loaded as a module (import map), all of the top-level commands will be evaluated. In particular, when a test script loads this module, the commands paris_map = map_at(...) and ...Image(paris_map.content) will run. You don't want this to happen: you don't want to run all of the code in your analysis scripts just to test that the functions within them work correctly, as this would make your testing routine take a long time.

The top-level code that displays a map of Paris is script-specific. It should run when map.py is run as a script, but not when map.py is imported. The standard Python way to prevent script-specific code from running when a module is imported is to wrap it in the following block:

if __name__ == "__main__":
    # script-specific commands go here

To make testing easier, we can make map.py a little more import-safe:

import requests
import IPython


def map_at(lat, long, satellite=False, zoom=12, size=(400, 400)):
    # Function body is unchanged
    # ...
    return requests.get(base, params=params, timeout=60)


if __name__ == "__main__":
    paris_map = map_at(48.853, 2.3499)
    IPython.core.display.Image(paris_map.content)

Now we can load the functions from map.py without having to run all the other script-specific code within it. The test file (test_map.py) for map.py would then be:

import requests

from unittest.mock import patch

from map import map_at 

def test_build_default_params():
    with patch.object(requests, "get") as mock_get:
        map_at(51.0, 0.0)
        mock_get.assert_called_with(
            "https://static-maps.yandex.ru/1.x/?",
            params={
                "z": 12,
                "size": "400,400",
                "ll": "0.0,51.0",
                "l": "map",
                "lang": "en_US",
            },
            timeout=60,
        )

This test checks the behaviour of the map_at function. Using unittest.mock's patch.object, the test mocks the requests.get function to prevent actual network calls. It ensures that when map_at is called with specific coordinates, it generates the correct HTTP GET request with the expected URL and parameters (such as zoom level, map type, and language).

Similarly, you can patch a function using the context manager patch.object(my_module, "original_function", mock_function); any calls to my_module.original_function() will then be replaced with calls to mock_function().
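
For example, here is a minimal sketch (patching the standard library's random module, purely for illustration) where we pass our own mock as the replacement object:

import random
from unittest.mock import Mock, patch

def test_patch_with_explicit_replacement():
    fake_random = Mock(name="fake_random", return_value=0.5)
    with patch.object(random, "random", fake_random):
        # Inside the block, random.random is our mock and always returns 0.5
        assert random.random() == 0.5
    # Outside the block the original function is restored;
    # the mock still remembers that it was called exactly once
    fake_random.assert_called_once_with()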

Mocking is important in testing because it isolates the code being tested from external dependencies, such as APIs, databases, or file systems. This allows tests to run faster, as they do not rely on slow or unreliable external services. Mocking also ensures tests are more predictable and repeatable by simulating specific responses or error conditions, without making real network requests or modifying external data. This makes tests more focused on the logic of the code itself, while avoiding unintended side effects.

Useful pytest Plugins

Pytest’s functionality can be extended through a rich ecosystem of plugins. Here are some useful plugins:

pytest-xdist: Enables parallel test execution, speeding up test runs.

  pip install pytest-xdist
  pytest -n auto

pytest-cov: Provides code coverage reports.

  pip install pytest-cov
  pytest --cov=your_package
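
The report format can also be changed with the --cov-report option; for example, the following prints which lines have not been covered:

  pytest --cov=your_package --cov-report=term-missing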

pytest-mock: Simplifies mocking by integrating with unittest.mock.

  pip install pytest-mock
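
pytest-mock exposes patching through a mocker fixture, which is cleaned up automatically at the end of each test. As a rough sketch (re-using map_at from earlier, and assuming pytest-mock is installed):

# ./tests/test_map_with_mocker.py
import requests

from map import map_at

def test_map_at_calls_requests_get(mocker):
    # mocker.patch.object behaves like unittest.mock's patch.object,
    # but the patch is undone automatically when the test finishes
    mock_get = mocker.patch.object(requests, "get")
    map_at(48.853, 2.3499)
    assert mock_get.called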

By integrating these advanced pytest features, you can make your tests more efficient, reproducible, and easier to manage. Don’t hesitate to experiment with parametrised fixtures, mocking, and useful plugins like pytest-cov and pytest-xdist to level up your testing.

