Profile, diagnose, and optimize Shiny application performance through caching, async operations, and reactive graph optimization.
```r
# Profile with profvis
profvis::profvis({
  shiny::runApp("path/to/app", display.mode = "normal")
})

# Or profile specific operations
profvis::profvis({
  result <- expensive_computation(data)
})
```
Identify the top bottlenecks:
Use the reactive log for reactive graph analysis:
```r
# Enable reactive logging (requires the reactlog package)
options(shiny.reactlog = TRUE)
shiny::runApp("path/to/app")
# Press Ctrl+F3 (Cmd+F3 on macOS) in the browser to view the reactive graph
```
Expected: Clear identification of the 2-3 biggest bottlenecks.
On failure: If profvis doesn't show useful detail, wrap specific sections with profvis::profvis(). If reactlog is overwhelming, focus on one interaction at a time.
Reduce unnecessary reactive invalidations:
```r
# BAD: Recomputes on ANY input change
output$plot <- renderPlot({
  data <- load_data()  # Runs every time
  filtered <- data[data$category == input$category, ]
  plot(filtered)
})

# GOOD: Isolate data loading from filtering
raw_data <- reactive({
  load_data()
}) |> bindCache()  # Cache the expensive part

filtered_data <- reactive({
  raw_data()[raw_data()$category == input$category, ]
})

output$plot <- renderPlot({
  plot(filtered_data())
})
```
Use isolate() to prevent unnecessary invalidations:
```r
# Only recompute when the button is clicked, not on every input change
output$result <- renderText({
  input$compute  # Take dependency on button
  isolate({
    paste("N =", input$n, "Mean =", mean(rnorm(input$n)))
  })
})
```
Use debounce() and throttle() for high-frequency inputs:
```r
# Debounce text input — wait 500ms after user stops typing
search_text <- reactive(input$search) |> debounce(500)

# Throttle slider — update at most every 250ms
slider_value <- reactive(input$slider) |> throttle(250)
```
Expected: Reactive graph fires only necessary recalculations.
On failure: If removing a dependency breaks functionality, use req() to add explicit guards instead of relying on implicit reactive dependencies.
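Where a dependency must stay but should only fire once its input is valid, `req()` can guard the computation instead of an implicit dependency. A minimal sketch, reusing the `filtered_data()` reactive from above:

```r
output$plot <- renderPlot({
  # Guard: render nothing (silently) until a category is actually selected;
  # req() halts this reactive without raising a visible error
  req(input$category)
  plot(filtered_data())
})
```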
```r
output$plot <- renderPlot({
  create_expensive_plot(filtered_data())
}) |> bindCache(input$category, input$date_range)

output$table <- renderDT({
  expensive_query(input$filters)
}) |> bindCache(input$filters)
```
bindCache() uses the supplied key expressions (here, input values) as cache keys. When the same key values occur again, the cached result is returned immediately instead of recomputing.
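The cache's scope can also be chosen: by default entries are shared across every session of the app, but a per-session cache can be requested via bindCache()'s `cache` argument. A sketch, reusing the helpers from the example above:

```r
# Shared across all user sessions (the default, cache = "app"):
# one user's computation warms the cache for everyone
output$plot <- renderPlot({
  create_expensive_plot(filtered_data())
}) |> bindCache(input$category, cache = "app")

# Private to each session, e.g. when results depend on per-user data
output$user_plot <- renderPlot({
  create_expensive_plot(filtered_data())
}) |> bindCache(input$category, cache = "session")
```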
```r
# Cache expensive function results on disk, expiring after one hour
load_reference_data <- memoise::memoise(
  function(dataset_name) {
    readr::read_csv(paste0("data/", dataset_name, ".csv"))
  },
  cache = cachem::cache_disk("cache/", max_age = 3600)
)
```
```r
# In global.R or outside server function — computed once at app startup
reference_data <- readr::read_csv("data/reference.csv")
model <- readRDS("models/trained_model.rds")

server <- function(input, output, session) {
  # reference_data and model are available to all sessions
  # without reloading
}
```
Expected: Repeated operations use cached results; response time drops significantly.
On failure: If cache grows too large, set max_age or max_size limits. If cached values are stale, reduce max_age or add a cache-clear button. If bindCache causes errors, ensure cache key inputs are serializable.
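To put size and age limits on the default store that bindCache() uses, the app-wide cache can be replaced via shinyOptions(). A sketch; the directory and size values are placeholders to adapt:

```r
library(shiny)

# Replace the default bindCache() store with a bounded disk cache:
# at most ~500 MB, entries expire after one hour
shinyOptions(cache = cachem::cache_disk(
  dir = "./app-cache",
  max_size = 500 * 1024^2,
  max_age = 3600
))
```

Run this at app startup (e.g. in global.R) so every bindCache() call picks up the bounded cache.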
Use ExtendedTask (Shiny >= 1.8.1) for long-running computations:
```r
library(shiny)
library(bslib)  # bind_task_button(); pair with bslib::input_task_button() in the UI

server <- function(input, output, session) {
  # Define the extended task
  analysis_task <- ExtendedTask$new(function(data, params) {
    promises::future_promise({
      # This runs in a background process
      run_heavy_analysis(data, params)
    })
  }) |> bind_task_button("run_analysis")

  # Trigger the task
  observeEvent(input$run_analysis, {
    analysis_task$invoke(dataset(), input$params)
  })

  # Use the result
  output$result <- renderTable({
    analysis_task$result()
  })
}
```
For apps on Shiny < 1.8.1, use promises directly:
```r
library(promises)
library(future)
plan(multisession, workers = 4)

server <- function(input, output, session) {
  result <- eventReactive(input$compute, {
    # Capture input values BEFORE entering the future:
    # the background process cannot read `input`
    params <- input$params
    future_promise({
      Sys.sleep(5)  # Simulate long computation
      expensive_analysis(params)
    })
  })

  output$table <- renderTable({
    result()
  })
}
```
Expected: Long operations don't block the UI; other users can interact while computation runs.
On failure: If future_promise errors, check that plan(multisession) is set. If variables aren't available in the future, pass them explicitly — futures run in separate R processes.
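The same rule applies to resources that cannot cross process boundaries, such as database connections: open them inside the future rather than passing one in. A sketch assuming the DBI and RSQLite packages and a hypothetical `data/app.sqlite` file:

```r
library(promises)
library(future)
plan(multisession)

query_async <- function(sql) {
  future_promise({
    # Open the connection inside the background process;
    # a connection created in the main process is not usable here
    con <- DBI::dbConnect(RSQLite::SQLite(), "data/app.sqlite")
    on.exit(DBI::dbDisconnect(con), add = TRUE)
    DBI::dbGetQuery(con, sql)
  })
}
```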
Reduce rendering overhead:
```r
# Use plotly for interactive plots instead of re-rendering
output$plot <- plotly::renderPlotly({
  plotly::plot_ly(filtered_data(), x = ~x, y = ~y, type = "scatter")
})

# Use server-side processing for large tables
# (note: `server` is an argument of renderDT, not of datatable)
output$table <- DT::renderDT({
  DT::datatable(large_data(), options = list(
    pageLength = 25,
    processing = TRUE
  ))
}, server = TRUE)

# Conditional UI to avoid rendering hidden elements
output$details <- renderUI({
  req(input$show_details)
  expensive_details_ui()
})
```
Expected: Rendering operations are faster and don't block the UI.
On failure: If plotly is slow with large datasets, use toWebGL() for WebGL rendering or downsample data before plotting.
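Both remedies can be combined in the renderPlotly call above; a sketch, where the 100,000-row cutoff is an arbitrary placeholder:

```r
output$plot <- plotly::renderPlotly({
  df <- filtered_data()
  # Downsample very large datasets before plotting
  if (nrow(df) > 100000) df <- df[sample(nrow(df), 100000), ]
  p <- plotly::plot_ly(df, x = ~x, y = ~y, type = "scatter", mode = "markers")
  plotly::toWebGL(p)  # Render with WebGL instead of SVG
})
```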
```r
# Before/after benchmarking with testServer
system.time({
  shiny::testServer(myModuleServer, args = list(...), {
    session$setInputs(category = "A")
    session$flushReact()
  })
})

# Load testing with shinyloadtest:
# 1. Record a typical user session against the running app
shinyloadtest::record_session("http://localhost:3838")

# 2. Replay the recording with shinycannon — a separate command-line
#    tool (Java), not an R function. From a shell:
#    shinycannon recording.log http://localhost:3838 \
#      --workers 10 --loaded-duration-minutes 5 --output-dir run1

# 3. Analyze the replay output back in R
df <- shinyloadtest::load_runs("run1")
shinyloadtest::shinyloadtest_report(df, "report.html")
```
Expected: Measurable improvement in response times and/or concurrent user capacity.
On failure: If performance didn't improve, re-profile to find the next bottleneck. Performance optimization is iterative — fix the biggest bottleneck first, then re-measure.
future_promise runs in a separate process: global variables, database connections, and reactive values must be captured explicitly.

Related skills:
- build-shiny-module — modular architecture for maintainable reactive code
- scaffold-shiny-app — choose the right app framework from the start
- deploy-shiny-app — deploy optimized apps with appropriate server resources
- test-shiny-app — performance regression tests