Analysing Shiny App start-up Times with Google Lighthouse

Author: Osheen MacOscar

Published: December 7, 2023

tags: r, shiny, lighthouse

This is part two of a three-part series on Lighthouse for Shiny apps.

  • Part 1: Using Google Lighthouse for Web Pages
  • Part 2: Analysing Shiny App start-up Times with Google Lighthouse (This post)
  • Part 3: Effect of Shiny Widgets with Google Lighthouse

Intro

In the last blog I spoke about using Google Lighthouse to test the speed of web pages. I wanted to build upon that and use Lighthouse to test some Shiny apps.

To get a feel for Shiny’s performance in a Lighthouse analysis, I needed a large number of Shiny apps to test and build a dataset from. I used the entries to the 2021 Shiny app contest, a competition in which Shiny apps are judged on technical merit and artistic achievement. I used the 2021 apps as there has unfortunately not been a competition since. A full list of the submissions can be found on the Posit Community website.

To actually obtain data from these apps I used Google Lighthouse in the same way I described for general web pages in the previous blog in this series. This generated a Lighthouse report for each app.

Google Lighthouse

Testing a single app from the contest was exactly the same as testing a normal web page; I simply ran:

lighthouse --output json --output-path data/output_file.json url

Where url is the URL of the app being tested. You can also run a test in the browser using DevTools (as demonstrated in the last blog), but I was testing a lot of apps, so I needed to do it programmatically.
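Since I was running this over hundreds of apps, a small shell script makes the batch run reproducible. Below is a minimal sketch, not the exact script I used: the urls.txt file, the slug helper and the data/ output directory are my own assumptions, and the audits only run if the lighthouse CLI is actually installed.

```shell
# Turn an app URL into a safe file name: strip the scheme, then replace
# any remaining punctuation with dashes.
slug() {
  printf '%s\n' "$1" | sed -e 's|^[a-z]*://||' -e 's|[^A-Za-z0-9]|-|g'
}

# Run a Lighthouse report for every app URL listed in urls.txt, writing
# one JSON report per app. Skipped entirely if lighthouse is missing.
if command -v lighthouse >/dev/null 2>&1; then
  mkdir -p data
  while IFS= read -r url; do
    lighthouse "$url" --output json --output-path "data/$(slug "$url").json"
  done < urls.txt
fi
```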

Before we get into the data, it’s important to point out that Google Lighthouse scores do vary; you may run a report on an app that I’ve covered and get a different score. There are a number of reasons for this, covered in the Lighthouse documentation, and the developers therefore recommend running multiple tests. I’d also like to point out that I only ran the report once per app, due to the length of time it would take to run reports on all the apps several times.
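If you do run the report several times for one app, one cheap way to aggregate is the median, which is less sensitive to the odd slow run than the mean. A sketch with awk; the five scores are made-up example values standing in for repeated Lighthouse runs on a single app.

```shell
# Made-up performance scores from five repeated runs of one app.
scores="71 74 73 69 75"

# Sort the scores and take the middle value (or the mean of the middle
# two when there is an even number of runs).
median=$(printf '%s\n' $scores | sort -n | awk '
  { a[NR] = $1 }
  END {
    if (NR % 2) print a[(NR + 1) / 2]
    else print (a[NR / 2] + a[NR / 2 + 1]) / 2
  }')
echo "$median"
```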

Do you require help building a Shiny app? Would you like someone to take over the maintenance burden? If so, check out our Shiny and Dash services.

App data

The entries to the 2021 Shiny app contest were great! Loads of unique and interesting apps; given it was 2021, there were plenty of COVID- and election-related apps. I ran Lighthouse reports locally on 268 of the Shiny app contest submissions (some of the links were broken), and have compiled a few plots to summarise the performance of the apps.

Below is a histogram showing the distribution of overall performance scores for the apps. The Lighthouse docs give the following advice for apps based on performance scores:

  • 90-100 is an app with good performance;
  • 50-89 is an app that needs some improvement;
  • and 0-49 is an app with poor performance.

As we can see, many of the apps (79/268) have good performance, whereas the bulk are in need of some improvement (149/268) or have poor overall performance (40/268).

Distribution of overall scores for the apps, with the mean score of 73.2 marked.

We can dive deeper into the distribution of the raw values used to calculate the overall performance score. The performance score is a weighted sum of metric scores derived from these raw (time) values - see the previous blog for more details on what each metric means. The distributions follow a similar trend: most of the measurements fall on the faster side of the spectrum, with counts decreasing as the time increases. This was to be expected given the distribution of the performance scores seen earlier, as most of the apps scored pretty well.

App metric scores for six different metrics: Cumulative Layout Shift, First Contentful Paint, Largest Contentful Paint, Speed Index, Time to Interactive, Total Blocking Time. The plots are histograms of the raw value for each metric.
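To make the weighted-sum calculation concrete, here is a sketch of how the six metric scores combine into the overall score. The individual metric scores are made-up example values; the weights shown are those documented for Lighthouse v8 and change between Lighthouse versions, so treat both as illustrative.

```shell
# Each metric score (0-100) is what Lighthouse derives from the raw
# measurement via a log-normal curve; the overall score is the weighted
# sum of those metric scores. Scores and weights here are illustrative.
total=$(awk 'BEGIN {
  fcp = 92;  w_fcp = 0.10   # First Contentful Paint
  si  = 88;  w_si  = 0.10   # Speed Index
  lcp = 75;  w_lcp = 0.25   # Largest Contentful Paint
  tti = 90;  w_tti = 0.10   # Time to Interactive
  tbt = 60;  w_tbt = 0.30   # Total Blocking Time
  cls = 100; w_cls = 0.15   # Cumulative Layout Shift
  total = fcp*w_fcp + si*w_si + lcp*w_lcp + tti*w_tti + tbt*w_tbt + cls*w_cls
  printf "%.2f", total
}')
echo "Overall performance score: $total"
```

Note how heavily Total Blocking Time and Largest Contentful Paint are weighted: a slow widget-heavy app can drag the overall score down even when the other metrics look fine.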

The Apps

I don’t highlight any of the apps on the lower end of the performance spectrum here, as it would be unfair on the creators. The Shiny app contest has two sections: one for less than one year’s experience and another for more than one year’s experience, so people new to Shiny were likely experimenting with what’s possible in an app rather than focusing on performance.

That being said, I’d like to reiterate what Colin Fay said in his talk “Destroy All Widgets”: be sensible with widget use within apps, and understand that widgets can hinder performance and increase wait times when they are not strictly necessary. For instance, do you really need a {plotly} plot, or would a static ggplot suffice? The same could be said of interactive data tables.

I will highlight a couple of high scoring apps:

This app by Rabii Bouhestine is a really cool GeoGuessr-esque game where you try to pinpoint the location of world wonders. It received an overall Lighthouse score of 95!

WorldGuesser app screenshot

Another high-scoring app is “Mix Things Up” by Sam Parmar, a previous competition winner who was a judge on the 2021 contest. This app is a simple yet efficient way to generate random workouts.

Exercise app screenshot

The last one I’m going to highlight is this app by Edgar Cáceres, which visualises air quality data from the monitoring station in La Oroya, Junin, Peru. Its scores are particularly impressive as the app actually has two interactive {leaflet} plots.

Air quality app screenshot

Google Lighthouse is a good starting point for testing the start-up times of your apps; however, it is worth noting that the score can be misleading. An app may score very highly but not actually load quickly for the user. This may happen, for example, if Lighthouse thinks the contentful paints have completed when all that has rendered is the app’s background. A way to check this is to look at the screenshots in the Google Lighthouse report in the browser, which you can do by adding --view after the url argument when running a test in the terminal. I will use the next blog in this series to investigate this further.
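As a concrete example of that --view workflow (the URL is a placeholder, and the audit itself only runs if the lighthouse CLI is installed):

```shell
# Re-run a single audit with --view so the finished HTML report, with
# its filmstrip of loading screenshots, opens in the browser.
url="https://example.shinyapps.io/myapp/"
cmd="lighthouse $url --view"
echo "$cmd"

# Only actually launch the audit when the CLI is available.
if command -v lighthouse >/dev/null 2>&1; then
  $cmd
fi
```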

So if you are developing a desktop Shiny app and want to see how it does, you can use Lighthouse and this blog as a benchmark - although take it with a pinch of salt, as we tested many different kinds of apps: games, data visualisations and so on. Roughly, however, if your app scores better than 73 then that’s a good start. If you can’t bring your app’s load time down for whatever reason, perhaps due to data processing, then you can use a loading screen to let your users know that something is happening. This is covered excellently at the start of this blog on Shiny extensions.

In the final blog in this series, we will be investigating the impact various widgets have on Shiny app Lighthouse scores.

