Benchmarks
Overview
CEM provides two types of benchmarks for performance analysis: generate tool benchmarks and LSP server benchmarks.
Generate Tool Benchmarks
Performance comparison of custom element manifest generation tools. Each tool is measured on average run time, average output size, number of runs, and validation of the generated manifest; a sketch of this measurement approach follows the command listings below.
Number of runs per tool: 100
Number of files analyzed per run: 45
@lit-labs/cli
Command: npx --yes @lit-labs/cli labs gen --manifest --out data/lit
@custom-elements-manifest/analyzer
Command: npx --yes @custom-elements-manifest/analyzer analyze --outdir data/cea --globs benchmark/components/*.ts
cem generate
Command: cem generate -o data/cem/custom-elements.json benchmark/components/*.ts
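
For reference, a harness along these lines could produce the averages above by running each command repeatedly and recording wall-clock time and the size of the emitted manifest. This is a minimal sketch of the methodology, not the actual CEM benchmark harness; the `benchmarkTool` function, its parameters, and the output paths are illustrative assumptions.

```ts
// benchmark-sketch.ts
// Sketch of a generate-tool benchmark: run a command N times, record
// wall-clock time, and measure the size of the generated manifest.
// NOT the actual CEM harness; names and paths are illustrative.
import { execSync } from "node:child_process";
import { statSync } from "node:fs";

interface ToolResult {
  tool: string;
  avgTimeMs: number;
  avgOutputBytes: number;
  runs: number;
}

function benchmarkTool(
  tool: string,
  command: string,
  outputFile: string,
  runs: number,
): ToolResult {
  let totalMs = 0;
  let totalBytes = 0;
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    execSync(command, { stdio: "ignore" }); // run the generator
    totalMs += performance.now() - start;
    totalBytes += statSync(outputFile).size; // size of the emitted manifest
  }
  return { tool, avgTimeMs: totalMs / runs, avgOutputBytes: totalBytes / runs, runs };
}

// Example: 100 runs of `cem generate` over the benchmark components.
const result = benchmarkTool(
  "cem generate",
  "cem generate -o data/cem/custom-elements.json benchmark/components/*.ts",
  "data/cem/custom-elements.json",
  100,
);
console.log(result);
```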
LSP Server Benchmarks
Last benchmark run:
HTML File Operations
Pure LSP protocol timing in .html files. Each benchmark measures the time from LSP request to response completion, using multiple iterations to calculate statistical distributions (mean, median, P95, P99).
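
As an illustration of the statistics involved, the snippet below computes mean, median, P95, and P99 from a list of per-iteration request latencies. The nearest-rank percentile method used here is an assumption and may differ from what the benchmark suite actually implements.

```ts
// Summary statistics over per-iteration LSP request latencies (milliseconds).
// Uses nearest-rank percentiles on the sorted sample; the real suite may differ.
function summarize(latenciesMs: number[]) {
  const sorted = [...latenciesMs].sort((a, b) => a - b);
  const percentile = (p: number) =>
    sorted[Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1)];
  const mean = sorted.reduce((sum, x) => sum + x, 0) / sorted.length;
  return {
    mean,
    median: percentile(50),
    p95: percentile(95),
    p99: percentile(99),
  };
}

// Example: statistics for 1000 simulated request/response round-trips.
const samples = Array.from({ length: 1000 }, () => 5 + Math.random() * 20);
console.log(summarize(samples));
```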