The Test Runs page lists every Playwright test execution in your project. Identify failing or flaky runs, confirm where they happened (branch and environment), and open detailed evidence for debugging.

Search and Filters

| Control | Purpose | Options |
| --- | --- | --- |
| Search | Find runs by text or ID | Commit message, run number (for example, #1493) |
| Time Period | Limit runs to a date range | Last 24 hours, 3 days, 7 days, 14 days, 30 days, Custom |
| Test Status | Filter by outcome | Passed, Failed, Skipped, Flaky |
| Duration | Sort by runtime | Low to High, High to Low |
| Committer | Show runs by author | Select one or more committers |
| Environment | Focus on a mapped environment | production, development, hotfix |
| Branch | Scope by branches | Select one or more branches |
| Tags | Filter by run-level or test-case-level tags | Switch between Test Run and Test Case tabs, then select one or more tags |

Active Test Runs

Runs currently executing appear in a collapsible Active Test Runs section at the top of the list. Results update in real time as tests complete. Each active run displays a progress bar, live pass/fail/skip counts, commit, branch, and CI source.

[Image: Active test run with sharded execution showing shard tabs, worker status, and live progress bar]

For sharded runs, the run is labeled SHARDED with tabs for each shard. Select a shard tab to view its workers and currently executing tests. Non-sharded runs show a single progress bar with per-worker detail.
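Sharding itself comes from Playwright, not from this page; a minimal sketch of how each parallel CI job might launch one slice of a sharded run (the 1/4 split is illustrative):

```bash
# Each parallel CI job runs one shard of the suite
npx playwright test --shard=1/4
```

TestDino then presents the shards together under a single SHARDED run.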

Test Run Key Columns

[Image: Test runs list showing run ID, commit info, branch, environment, test result counts, and AI insights columns]
| Column | Description |
| --- | --- |
| Test Run | Run ID, start time, and executor (CI or Local). Click the CI label to open the job. |
| Commit | Commit message, short SHA, and author. Links to the commit in your Git host. |
| Branch & Environment | Branch name and mapped environment label. Environment labels come from branch mapping. |
| Test Results | Counts for Passed, Failed, Flaky, Skipped, Interrupted, and a total. |
| AI Insights | Category counts for Actual Bug, UI Change, Unstable, and Miscellaneous. |

Test Run Grouping

Runs that share the same commit hash and commit message are grouped as attempts by TestDino. This usually happens when you rerun a CI workflow or trigger multiple executions for the same commit. Expand the group to see each attempt (for example, Attempt #1, Attempt #2). This grouping helps you:
  • Track reruns for a single commit without scanning separate rows
  • Compare results across attempts to confirm if a rerun fixed flaky failures
  • See how many times a workflow was triggered for the same code change

Run-Level Tags

Attach labels to an entire test run using the --tag CLI flag. Tags appear as chips on each run row in the list and are available as filter values.
```bash
npx tdpw test --tag="regression,smoke"
```
Use run-level tags to label runs by build number, sprint, release, or test type. These are separate from test-case-level tags set via annotations.
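For example, to tag each run with its CI build number (shown with GitHub Actions' built-in GITHUB_RUN_NUMBER variable; adapt the variable to your CI provider):

```bash
# Tag the run with the current CI build number (GitHub Actions example)
npx tdpw test --tag="build-${GITHUB_RUN_NUMBER}"
```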
| Tag type | Set via | Scope | Example |
| --- | --- | --- | --- |
| Run-level | `--tag` CLI flag | Entire test run | regression, sprint-42, nightly |
| Test-case-level | Test annotations | Individual test cases | smoke, critical-path, login |
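Test-case-level tags live in the test code itself. A minimal sketch using Playwright's built-in tag option (available since Playwright 1.42); the test name, locator, and URL are illustrative:

```typescript
import { test, expect } from '@playwright/test';

// Tag an individual test case; Playwright tags start with "@"
test('user can log in', { tag: ['@smoke', '@login'] }, async ({ page }) => {
  await page.goto('/login'); // assumes baseURL is set in playwright.config.ts
  await expect(page.getByRole('heading', { name: 'Sign in' })).toBeVisible();
});
```

On older Playwright versions, tags are commonly embedded in the test title instead (for example, test('user can log in @smoke', ...)).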

Quick Start Steps

  1. Set scope - Filter by Time Period, Environment, Branch, Committer, Status, or Tags, and sort by Duration to focus the list.
  2. Scan and open - Review result counts and AI labels, then open a run that needs action.
  3. Review details - The run details page provides seven tabs:
    • Summary: Totals for Failed, Flaky, and Skipped with sub-causes and test case analysis
    • Specs: File-centric view. Sort and filter to identify slow or failing specs.
    • Errors: Groups failed and flaky tests by error message. Jump to stack traces.
    • History: Outcome and runtime charts across recent runs. Spot spikes and regressions.
    • Configuration: Source, CI, system, and test settings. Detect config drift.
    • Coverage: Statement, branch, function, and line coverage with per-file breakdown.
    • AI Insights: Category breakdowns, error variants, and patterns for new or recurring issues.
Next steps: explore runs, CI optimization, and streaming.