Datasets: Uploading dataset cards and figures

Files changed:
- README.md (+94, −3)
- histograms.png (+3, −0)
- intro_figure_boom.png (+3, −0)
- metadata.png (+3, −0)

README.md (CHANGED):
---
dataset: boom
license: apache-2.0
task_categories:
- time-series-forecasting
modalities:
- time-series
size_categories:
- 100M<n<1B
pretty_name: BOOM
dataset_creators: ["Datadog"]
paperswithcode_id: boom-benchmark
tags:
- time-series
- forecasting
- observability
- foundation models
---

# Dataset Card for BOOM Benchmark

## Dataset Summary

The **BOOM Benchmark** is a large-scale, real-world time series dataset designed for evaluating models on **forecasting** and **anomaly detection** tasks in complex observability environments. Collected from a high-volume telemetry system, the benchmark captures the irregularity, structural complexity, and heavy-tailed statistics typical of production observability data. Unlike synthetic or curated benchmarks, BOOM reflects the full diversity and unpredictability of operational signals observed in distributed systems, covering infrastructure, networking, databases, security, and application-level metrics.

![intro_figure](intro_figure_boom.png)

*<center>Figure 1: (a) BOOM comprises observability time series with distinct semantic categories corresponding to various temporal patterns; percentages indicate the proportion of each category in BOOM. (b) 2D PCA projections of statistical features from [GiftEval](https://huggingface.co/datasets/Salesforce/GiftEval) and BOOM highlight a clear distinction in the underlying time series characteristics of the two benchmarks. (c) BOOM comprises data from various system domains.</center>*

BOOM consists of 350 million points across 32,887 variates. The data is split into 2,807 datasets, each representing a metric query with one or more variates. These series vary widely in sampling frequency, temporal length, and number of variates. Looking beyond the basic characteristics of the series, we highlight a few of the typical challenging properties of observability time series (several of which are illustrated in Figure 1):

- Zero-inflation: Many metrics track infrequent events (e.g., system errors), resulting in sparse series dominated by zeros with rare, informative spikes.
- Highly dynamic patterns: Some series fluctuate rapidly, exhibiting frequent sharp transitions that are difficult to model and forecast.
- Complex seasonal structure: Series are often modulated by carrier signals exhibiting non-standard seasonal patterns that differ from conventional cyclic behavior.
- Trends and abrupt shifts: Metrics may feature long-term trends and sudden structural breaks, which, when combined with other properties, increase forecasting difficulty.
- Stochasticity: Some metrics appear pseudo-random or highly irregular, with minimal discernible temporal structure.
- Heavy-tailed and skewed distributions: Outliers due to past incidents or performance anomalies introduce significant skew.
- High cardinality: Observability data is often segmented by tags such as service, region, or instance, producing large families of multivariate series with high dimensionality but limited history per variate.
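
As an illustration of the first and sixth properties above, a zero-inflated, heavy-tailed series of the kind common in observability data can be simulated with NumPy (the parameters here are arbitrary choices for the sketch, not values taken from the benchmark):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Rare events: roughly 2% of timesteps carry a spike; the rest are exactly zero.
mask = rng.random(n) < 0.02

# Spike magnitudes drawn from a heavy-tailed (Pareto) distribution.
series = np.where(mask, (1.0 + rng.pareto(1.5, n)) * 10.0, 0.0)

zero_fraction = float(np.mean(series == 0))
print(f"zero fraction: {zero_fraction:.2f}")        # dominated by zeros
print(f"max / mean ratio: {series.max() / series.mean():.1f}")  # heavy tail
```

Series like this defeat models that assume smooth, roughly Gaussian dynamics: the informative signal lives almost entirely in the rare spikes.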

## Dataset Structure

Each entry in the dataset consists of:

- A multivariate or univariate time series (one metric query with up to 100 variates)
- Metadata including sampling start time, frequency, series length, and number of variates. Figure 2 shows the metadata decomposition of the dataset by number of series.
- Taxonomy labels for dataset stratification:
  - **Metric Type** (e.g., count, rate, gauge, histogram)
  - **Domain** (e.g., infrastructure, networking, security)
  - **Semantic Class** (e.g., skewed, seasonal, flat)

![metadata](metadata.png)

*<center>Figure 2: Metadata breakdown by variate in the dataset: (left) sampling frequency distribution, (middle) series length distribution, and (right) number of variates distribution.</center>*
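
A minimal sketch of what one such entry might look like in memory is below. The field names (`start`, `freq`, `target`, etc.) are illustrative assumptions, not the dataset's actual schema; consult the dataset files for the real layout:

```python
# Hypothetical in-memory representation of a single BOOM entry.
# Field names are assumptions for illustration only.
entry = {
    "start": "2024-01-01T00:00:00Z",     # sampling start time
    "freq": "10s",                        # sampling frequency
    "target": [                           # one list per variate
        [0.0, 1.2, 0.0, 3.4],
        [5.0, 5.1, 4.9, 5.2],
    ],
    "metric_type": "count",               # taxonomy labels
    "domain": "infrastructure",
    "semantic_class": "skewed",
}

num_variates = len(entry["target"])
series_length = len(entry["target"][0])
print(num_variates, series_length)
```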

## Collection and Sources

The dataset is sourced from a proprietary staging environment of an observability platform. Data was collected using a standardized query API and preprocessed to remove sensitive or production-specific information. Metric domains and structures mirror real-world telemetry from distributed systems. Each time series in the BOOM dataset undergoes a standardized preprocessing pipeline designed to address the noise, irregularities, and high cardinality typical of observability data.

## Comparison with Other Benchmarks

The BOOM Benchmark diverges significantly from traditional time series datasets, including those in the [GiftEval](https://huggingface.co/datasets/Salesforce/GiftEval) suite, when analyzed using 12 standard and custom diagnostic features computed on normalized series (see Figure 3). These features capture key temporal and distributional characteristics:

- Spectral entropy (unpredictability)
- Skewness and kurtosis (distribution shape)
- Autocorrelation coefficients (temporal structure)
- Unit root tests and transience scores (stationarity and burstiness)
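
As an illustrative sketch (not the benchmark's actual feature code), a few of these diagnostics can be computed with NumPy; a structured signal and white noise land at opposite ends of the entropy and autocorrelation scales:

```python
import numpy as np

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum, in [0, 1]; higher = less predictable."""
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(psd)))

def lag1_autocorr(x):
    """Autocorrelation coefficient at lag 1."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def skewness(x):
    """Third standardized moment."""
    x = np.asarray(x, dtype=float)
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

rng = np.random.default_rng(0)
t = np.arange(512)
smooth = np.sin(2 * np.pi * t / 64)   # strongly structured: low entropy, high autocorrelation
noisy = rng.standard_normal(512)      # white noise: high entropy, near-zero autocorrelation

for name, x in [("smooth", smooth), ("noise", noisy)]:
    print(name, round(spectral_entropy(x), 2),
          round(lag1_autocorr(x), 2), round(skewness(x), 2))
```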

BOOM series exhibit substantially higher spectral entropy, indicating greater irregularity in temporal dynamics. Distributions show heavier tails and more frequent structural breaks, as reflected by shifts in skewness and stationarity metrics. A wider range of transience scores highlights the presence of both persistent and highly volatile patterns, common in operational observability data but largely absent from curated academic datasets.

Principal Component Analysis (PCA) applied to the full feature set (Figure 1) reveals a clear separation between the BOOM and [GiftEval](https://huggingface.co/datasets/Salesforce/GiftEval) datasets. BOOM occupies a broader and more dispersed region of the feature space, reflecting greater diversity in signal complexity and temporal structure. This separation reinforces the benchmark's relevance for evaluating models under realistic, deployment-aligned conditions.

![histograms](histograms.png)

*<center>Figure 3: Distributional comparison of 12 statistical features computed on normalized time series from the BOOM dataset and the [GiftEval](https://huggingface.co/datasets/Salesforce/GiftEval) dataset. The broader and often shifted distributions in the BOOM series reflect the increased diversity, irregularity, and non-stationarity characteristic of observability data.</center>*

## Links

- Paper (to add)
- Codebase (to add)
- Toto model (to add)

## Citation

```bibtex
@inproceedings{boom_ds,
  title={BOOM: },
  author={Names},
  year={2025},
  booktitle={NeurIPS Time Series Workshop},
}
```