Time Series Decomposition
Separate your time series into trend, seasonal, and residual components.
| Function | Description |
|---|---|
| ts_mstl_decomposition | Multiple Seasonal-Trend decomposition (MSTL) |
| ts_decompose | Basic additive/multiplicative decomposition |
| ts_detrend | Remove trend using one of 7 methods |
| ts_detect_peaks | Detect local maxima with prominence |
| ts_analyze_peak_timing | Analyze peak timing variability |
ts_mstl_decomposition
Multiple Seasonal-Trend decomposition using Loess (MSTL). Decomposes a time series into trend, one or more seasonal components, and residual.
Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| table_name | VARCHAR | Yes | - | Source table |
| group_col | VARCHAR | No | NULL | Group column for multi-series |
| date_col | VARCHAR | Yes | - | Date column |
| value_col | VARCHAR | Yes | - | Value column |
| params | MAP | No | MAP{} | Configuration options |
Options MAP:
| Option | Type | Default | Description |
|---|---|---|---|
| seasonal_periods | INTEGER[] | [7] | Seasonal periods to decompose |
Output
Returns table with original columns plus:
| Field | Type | Description |
|---|---|---|
| trend | DOUBLE | Trend component |
| seasonal_P | DOUBLE | Seasonal component for period P (one column per period) |
| residual | DOUBLE | Remainder after removing trend and seasonality |
Example
```sql
-- Single seasonality (weekly)
SELECT * FROM anofox_fcst_ts_mstl_decomposition(
    'sales_data',
    NULL,       -- group_col
    'date',
    'sales',
    MAP{'seasonal_periods': [7]}
);
```
Multi-Seasonal Decomposition
```sql
-- Multiple seasonalities (daily + weekly)
SELECT * FROM anofox_fcst_ts_mstl_decomposition(
    'hourly_demand',
    NULL,
    'timestamp',
    'demand',
    MAP{'seasonal_periods': [24, 168]}  -- daily (24h) + weekly (168h)
);
```
Multi-Series Decomposition
```sql
-- Decompose multiple products
SELECT * FROM anofox_fcst_ts_mstl_decomposition(
    'sales_by_product',
    'product_id',  -- group column
    'date',
    'sales',
    MAP{'seasonal_periods': [7, 365]}
);
```
Interpretation Guide
| Component | What It Shows | Business Use |
|---|---|---|
| Trend | Long-term direction | Growth/decline analysis |
| Seasonal | Repeating patterns | Capacity planning, promotions |
| Residual | Random fluctuations | Anomaly detection, forecast uncertainty |
Residual Analysis:
- Large residuals indicate unusual events or data quality issues
- Patterns in residuals suggest missing seasonality
- Residual variance indicates forecast uncertainty
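As an illustration of the anomaly-detection idea above, residuals can be screened with a modified z-score. This is a hand-rolled Python sketch, not part of the extension; it uses median/MAD scaling so that the outliers themselves do not inflate the scale estimate (a plain 3-sigma rule can miss a single large spike for exactly that reason):

```python
# Illustrative sketch (not part of the extension): flag anomalous
# decomposition residuals via a robust modified z-score.
from statistics import median

def flag_anomalies(residuals, threshold=3.5):
    """Return indices whose residual has a modified z-score above
    `threshold`. Median/MAD-based, so robust to the outliers."""
    med = median(residuals)
    mad = median(abs(r - med) for r in residuals)
    if mad == 0:
        return []  # flat residuals: nothing stands out
    # 0.6745 rescales MAD so the score is comparable to a z-score.
    return [i for i, r in enumerate(residuals)
            if abs(0.6745 * (r - med) / mad) > threshold]

print(flag_anomalies([0.1, -0.2, 0.05, 8.0, -0.1, 0.2, -0.15, 0.0]))
# → [3]
```

In practice the residual column produced by the decomposition would be fed into a screen like this, per series.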
Common Seasonal Periods
| Data Frequency | Common Periods |
|---|---|
| Daily | 7 (weekly), 365 (yearly) |
| Hourly | 24 (daily), 168 (weekly) |
| Weekly | 52 (yearly) |
| Monthly | 12 (yearly) |
| Quarterly | 4 (yearly) |
Decomposition Workflow
```sql
-- 1. Detect seasonality first
SELECT * FROM ts_analyze_seasonality(
    'sales_data', NULL, 'date', 'sales', MAP{}
);

-- 2. Use detected periods for decomposition
SELECT * FROM ts_mstl_decomposition(
    'sales_data',
    NULL,
    'date',
    'sales',
    MAP{'seasonal_periods': [7, 365]}  -- periods from step 1
);

-- 3. Analyze components
SELECT
    AVG(trend) AS avg_trend,
    STDDEV(residual) AS residual_volatility,
    MIN(trend) AS min_trend,
    MAX(trend) AS max_trend
FROM (
    SELECT * FROM ts_mstl_decomposition(
        'sales_data', NULL, 'date', 'sales', MAP{'seasonal_periods': [7]}
    )
);
```
ts_decompose
Basic additive or multiplicative seasonal decomposition.
```
ts_decompose(values DOUBLE[], type VARCHAR) → STRUCT
ts_decompose(values DOUBLE[], type VARCHAR, period INTEGER) → STRUCT
```
Parameters:
- values: Time series values (DOUBLE[])
- type: Decomposition type
  - 'additive' - data = trend + seasonal + remainder
  - 'multiplicative' - data = trend × seasonal × remainder
  - 'auto' - Automatic selection
- period: Optional known period (INTEGER)
Returns:
```
STRUCT(
    trend DOUBLE[],      -- Trend component
    seasonal DOUBLE[],   -- Seasonal component
    remainder DOUBLE[],  -- Residual component
    period INTEGER,      -- Detected/used period
    type_used VARCHAR    -- 'additive' or 'multiplicative'
)
```
Example:
```sql
-- Additive decomposition
SELECT ts_decompose([1,2,3,4,1,2,3,4,1,2,3,4]::DOUBLE[], 'additive');

-- Auto-detect decomposition type
SELECT ts_decompose(LIST(value ORDER BY date), 'auto') AS decomposition
FROM sales GROUP BY product_id;

-- Multiplicative with known period
SELECT ts_decompose(LIST(value ORDER BY date), 'multiplicative', 12)
FROM monthly_sales GROUP BY product_id;
```
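For intuition, the additive variant can be sketched in a few lines of Python. This is an illustrative reference under stated assumptions (odd period, no missing values, simple centered moving-average trend), not the extension's exact algorithm:

```python
# Illustrative sketch of classical additive decomposition.
# Assumes an odd `period` (symmetric moving-average window).
def additive_decompose(values, period):
    n, half = len(values), period // 2
    # 1. Trend: centered moving average of length `period`
    #    (None where the window does not fit).
    trend = [None] * n
    for i in range(half, n - half):
        trend[i] = sum(values[i - half:i + half + 1]) / period
    # 2. Seasonal: mean detrended value at each phase of the cycle,
    #    centered so the effects sum to zero over one period.
    sums, counts = [0.0] * period, [0] * period
    for i in range(n):
        if trend[i] is not None:
            sums[i % period] += values[i] - trend[i]
            counts[i % period] += 1
    seasonal = [sums[p] / counts[p] if counts[p] else 0.0
                for p in range(period)]
    offset = sum(seasonal) / period
    seasonal = [s - offset for s in seasonal]
    # 3. Remainder: whatever trend and seasonality do not explain.
    remainder = [values[i] - trend[i] - seasonal[i % period]
                 if trend[i] is not None else None for i in range(n)]
    return trend, seasonal, remainder

trend, seasonal, remainder = additive_decompose(
    [1.0, 1.0, 1.0, 4.0, 4.0, 4.0, 7.0, 7.0, 7.0], 3)
print(seasonal)  # → [1.0, 0.0, -1.0]
```

The multiplicative form follows the same steps with division in place of subtraction.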
ts_detrend
Remove trend from time series using various methods.
```
ts_detrend(values DOUBLE[], method VARCHAR) → STRUCT
```
Methods:
| Method | Description | Best For |
|---|---|---|
| 'linear' | Linear trend via least squares | Simple linear trends |
| 'polynomial' | Polynomial (degree 2) via QR decomposition | Curved trends |
| 'diff' | First differencing | Random walk removal |
| 'diff2' | Second differencing | Strong trends |
| 'loess' | LOESS local polynomial regression | Non-linear trends |
| 'spline' | P-splines detrending | Smooth trends |
| 'auto' | Automatic selection using AIC | Unknown trend type |
Returns:
```
STRUCT(
    detrended DOUBLE[],  -- Detrended values
    trend DOUBLE[],      -- Extracted trend component
    method_used VARCHAR, -- Method that was used
    residual_var DOUBLE  -- Variance of detrended series
)
```
Example:
```sql
-- Linear detrending
SELECT ts_detrend([1,2,3,4,5,6,7,8,9,10]::DOUBLE[], 'linear');

-- Automatic method selection
SELECT (ts_detrend(LIST(value ORDER BY date), 'auto')).detrended AS detrended_values
FROM sales GROUP BY product_id;

-- LOESS detrending for non-linear trends
SELECT ts_detrend(LIST(value ORDER BY date), 'loess') AS result
FROM sensor_data GROUP BY sensor_id;
```
Use Cases:
- Preparing data for stationarity tests
- Isolating seasonal patterns
- Removing growth effects for comparison
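For intuition, the 'linear' method corresponds to an ordinary least-squares fit of the values against the observation index, with the fitted line subtracted out. A minimal Python sketch (illustrative only, not the extension's implementation):

```python
# Illustrative sketch of 'linear' detrending: fit a straight line
# by ordinary least squares against the index, then subtract it.
def linear_detrend(values):
    n = len(values)
    x_mean = (n - 1) / 2          # mean of indices 0..n-1
    y_mean = sum(values) / n
    sxx = sum((x - x_mean) ** 2 for x in range(n))
    sxy = sum((x - x_mean) * (y - y_mean)
              for x, y in enumerate(values))
    slope = sxy / sxx if sxx else 0.0
    intercept = y_mean - slope * x_mean
    trend = [intercept + slope * x for x in range(n)]
    detrended = [y - t for y, t in zip(values, trend)]
    return detrended, trend

# A perfectly linear series detrends to (numerically) zero.
detrended, trend = linear_detrend(
    [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
print(trend[:3])  # → [1.0, 2.0, 3.0]
```

The differencing methods ('diff', 'diff2') instead replace each value with the change from its predecessor, which removes stochastic trends that a fitted line cannot capture.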
ts_detect_peaks
Detect peaks (local maxima) in time series data with prominence calculation.
```
ts_detect_peaks(values DOUBLE[]) → STRUCT
ts_detect_peaks(values DOUBLE[], min_prominence DOUBLE) → STRUCT
ts_detect_peaks(values DOUBLE[], min_prominence DOUBLE, min_distance INTEGER) → STRUCT
```
Parameters:
- values: Time series values (DOUBLE[])
- min_prominence: Minimum peak prominence threshold (default: 0.0, all peaks)
- min_distance: Minimum distance between peaks in observations (default: 1)
Returns:
```
STRUCT(
    peak_indices UBIGINT[], -- Indices of detected peaks (0-based)
    peak_values DOUBLE[],   -- Values at peak locations
    prominences DOUBLE[],   -- Prominence of each peak
    n_peaks UBIGINT         -- Number of peaks detected
)
```
Example:
```sql
-- Detect all peaks
SELECT ts_detect_peaks([1.0, 3.0, 2.0, 5.0, 1.0, 4.0, 2.0]::DOUBLE[]);
-- Returns: {peak_indices: [1, 3, 5], peak_values: [3.0, 5.0, 4.0], ...}

-- Detect significant peaks only (min_prominence = 2.0)
SELECT ts_detect_peaks(LIST(value ORDER BY date), 2.0) AS peaks
FROM sensor_data GROUP BY sensor_id;

-- With minimum distance between peaks (at least 7 observations apart)
SELECT ts_detect_peaks(LIST(value ORDER BY date), 1.0, 7) AS peaks
FROM daily_data GROUP BY series_id;
```
Use Cases:
- Finding demand spikes
- Identifying seasonal peaks
- Anomaly detection
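Prominence measures how far a peak rises above its surroundings: walk outward from the peak on each side until a higher value (or the series edge), take the lowest point reached on each side, and subtract the higher of those two minima from the peak value. A minimal Python sketch of this idea, using the same data as the first example above (illustrative only; it omits min_distance handling):

```python
# Illustrative sketch of peak detection with prominence
# (not the extension's implementation).
def detect_peaks(values, min_prominence=0.0):
    """Return (index, value, prominence) for each local maximum
    whose prominence reaches `min_prominence`."""
    peaks = [i for i in range(1, len(values) - 1)
             if values[i - 1] < values[i] > values[i + 1]]
    result = []
    for p in peaks:
        # Walk left, then right, until a strictly higher value or the
        # edge; the lowest point reached on each side is its base.
        left_min, j = values[p], p - 1
        while j >= 0 and values[j] <= values[p]:
            left_min = min(left_min, values[j])
            j -= 1
        right_min, j = values[p], p + 1
        while j < len(values) and values[j] <= values[p]:
            right_min = min(right_min, values[j])
            j += 1
        prominence = values[p] - max(left_min, right_min)
        if prominence >= min_prominence:
            result.append((p, values[p], prominence))
    return result

print(detect_peaks([1.0, 3.0, 2.0, 5.0, 1.0, 4.0, 2.0]))
# → [(1, 3.0, 1.0), (3, 5.0, 4.0), (5, 4.0, 2.0)]
```

Note how the global maximum (index 3) gets the largest prominence: its bases extend all the way to the series minima on both sides.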
ts_analyze_peak_timing
Analyze peak timing variability across seasonal cycles.
```
ts_analyze_peak_timing(values DOUBLE[], period INTEGER) → STRUCT
```
Parameters:
- values: Time series values (DOUBLE[])
- period: Expected seasonal period (INTEGER)
Returns:
```
STRUCT(
    mean_peak_position DOUBLE, -- Mean position within cycle (0 to period-1)
    peak_position_std DOUBLE,  -- Standard deviation of positions
    peak_timing_cv DOUBLE,     -- Coefficient of variation
    n_cycles UBIGINT,          -- Number of cycles analyzed
    is_consistent BOOLEAN      -- CV < 0.2
)
```
Example:
```sql
-- Analyze weekly peak timing
SELECT ts_analyze_peak_timing(LIST(value ORDER BY date), 7) AS timing
FROM daily_sales GROUP BY store_id;

-- Check if monthly peaks occur at consistent times
SELECT
    product_id,
    (ts_analyze_peak_timing(LIST(value ORDER BY date), 30)).is_consistent AS has_consistent_peaks
FROM monthly_data GROUP BY product_id;
```
Interpretation:
- is_consistent = TRUE: Peaks occur at predictable times (good for planning)
- is_consistent = FALSE: Peak timing varies (need flexible capacity)
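The cycle-position statistics above can be sketched as follows. This is an illustrative Python reference under simplifying assumptions (only complete cycles are analyzed, and the first maximum per cycle is taken), not the extension's exact algorithm:

```python
# Illustrative sketch (not the extension's implementation): record
# where each complete cycle peaks, then summarize the variability.
from statistics import mean, pstdev

def analyze_peak_timing(values, period):
    positions = []
    for start in range(0, len(values) - period + 1, period):
        cycle = values[start:start + period]
        positions.append(cycle.index(max(cycle)))  # 0 .. period-1
    m, sd = mean(positions), pstdev(positions)
    # Mean position 0 implies every peak sits at position 0 (sd = 0).
    cv = sd / m if m else 0.0
    return {'mean_peak_position': m, 'peak_position_std': sd,
            'peak_timing_cv': cv, 'n_cycles': len(positions),
            'is_consistent': cv < 0.2}

# Three 5-observation cycles, each peaking at position 2.
print(analyze_peak_timing([0, 1, 5, 1, 0] * 3, 5))
```

With identical cycles the standard deviation and CV are zero, so is_consistent comes out TRUE; jittering the peak position across cycles raises the CV toward the 0.2 cutoff.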