GapWalkForward

class tscv.GapWalkForward(n_splits=5, max_train_size=None, test_size=None, gap_size=0, rollback_size=0)

Legacy walk-forward time series cross-validator.

Deprecated since version 0.0.5: This utility is kept for backward compatibility. For new code, the more flexible and more powerful GapRollForward is recommended.

Provides train/test indices to split time series data observed at fixed time intervals into train/test sets. In each split, the test indices must be higher than those in the previous split. This cross-validation object is a variation of K-Fold: in the kth split, it returns the first k folds as the training set and the (k+1)th fold as the test set.

Note that unlike standard cross-validation methods, successive training sets are supersets of those that come before them.

Parameters
n_splits : int, default=5

Number of splits. Must be at least 2.

max_train_size : int, default=None

Maximum size for a single training set.

test_size : int, default=None

Number of samples in each test set. Defaults to n_samples // (n_splits + 1).

gap_size : int, default=0

Number of samples to exclude from the end of each train set before the test set.

Notes

The training set has size i * n_samples // (n_splits + 1) + n_samples % (n_splits + 1) in the i-th split, with a test set of size n_samples // (n_splits + 1) by default, where n_samples is the number of samples.
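The sizing rule above can be checked by hand. The sketch below recomputes the default train/test sizes for a hypothetical 13-sample series in pure Python, independently of the library:

```python
# Recompute the default fold sizes from the formula in the Notes
# (a sketch for illustration; n_samples=13 and n_splits=5 are arbitrary).
n_samples, n_splits = 13, 5

test_size = n_samples // (n_splits + 1)  # default test set size: 2
train_sizes = [
    i * n_samples // (n_splits + 1) + n_samples % (n_splits + 1)
    for i in range(1, n_splits + 1)
]

print(test_size)    # 2
print(train_sizes)  # [3, 5, 7, 9, 11]
```

Note that in the final split the training set and the test set together cover all 13 samples (11 + 2), since with gap_size=0 nothing is discarded.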

Examples

>>> import numpy as np
>>> from tscv import GapWalkForward
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([1, 2, 3, 4, 5, 6])
>>> cv = GapWalkForward(n_splits=5)
>>> for train_index, test_index in cv.split(X):
...    print("TRAIN:", train_index, "TEST:", test_index)
...    X_train, X_test = X[train_index], X[test_index]
...    y_train, y_test = y[train_index], y[test_index]
TRAIN: [0] TEST: [1]
TRAIN: [0 1] TEST: [2]
TRAIN: [0 1 2] TEST: [3]
TRAIN: [0 1 2 3] TEST: [4]
TRAIN: [0 1 2 3 4] TEST: [5]
>>> # Fix test_size to 2 with 12 samples
>>> X = np.random.randn(12, 2)
>>> y = np.random.randint(0, 2, 12)
>>> cv = GapWalkForward(n_splits=3, test_size=2)
>>> for train_index, test_index in cv.split(X):
...    print("TRAIN:", train_index, "TEST:", test_index)
...    X_train, X_test = X[train_index], X[test_index]
...    y_train, y_test = y[train_index], y[test_index]
TRAIN: [0 1 2 3 4 5] TEST: [6 7]
TRAIN: [0 1 2 3 4 5 6 7] TEST: [8 9]
TRAIN: [0 1 2 3 4 5 6 7 8 9] TEST: [10 11]
>>> # Add in a 2 period gap
>>> cv = GapWalkForward(n_splits=3, test_size=2, gap_size=2)
>>> for train_index, test_index in cv.split(X):
...    print("TRAIN:", train_index, "TEST:", test_index)
...    X_train, X_test = X[train_index], X[test_index]
...    y_train, y_test = y[train_index], y[test_index]
TRAIN: [0 1 2 3] TEST: [6 7]
TRAIN: [0 1 2 3 4 5] TEST: [8 9]
TRAIN: [0 1 2 3 4 5 6 7] TEST: [10 11]
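The splitting pattern in the examples above can be approximated by a short stand-alone function. This is an illustrative re-implementation for the fixed-test_size case (it ignores max_train_size and the default sizing), not the library's actual code:

```python
import numpy as np

def walk_forward_splits(n_samples, n_splits, test_size, gap_size=0):
    """Sketch of walk-forward splitting with a gap (illustration only)."""
    for k in range(1, n_splits + 1):
        # Test sets occupy the last n_splits * test_size samples, in order.
        test_start = n_samples - (n_splits - k + 1) * test_size
        # The gap removes samples from the end of each training window.
        train = np.arange(0, max(test_start - gap_size, 0))
        test = np.arange(test_start, test_start + test_size)
        yield train, test

# Reproduces the gap_size=2 example above:
for train, test in walk_forward_splits(12, n_splits=3, test_size=2, gap_size=2):
    print("TRAIN:", train, "TEST:", test)
```

With gap_size=0 the same function reproduces the test_size=2 example as well, which shows that the gap only ever shortens the training window; the test windows are unaffected.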
get_n_splits(X=None, y=None, groups=None)

Returns the number of splitting iterations in the cross-validator.

Parameters
X : object

Always ignored, exists for compatibility.

y : object

Always ignored, exists for compatibility.

groups : object

Always ignored, exists for compatibility.

Returns
n_splits : int

Returns the number of splitting iterations in the cross-validator.

split(X, y=None, groups=None)

Generate indices to split data into training and test set.

Parameters
X : array-like, shape (n_samples, n_features)

Training data, where n_samples is the number of samples and n_features is the number of features.

y : array-like, shape (n_samples,)

Always ignored, exists for compatibility.

groups : array-like, shape (n_samples,)

Always ignored, exists for compatibility.

Yields
train : ndarray

The training set indices for that split.

test : ndarray

The testing set indices for that split.
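The yielded train and test values are NumPy integer arrays, so they can be used directly for fancy indexing into X and y. A minimal consumption sketch, with hand-written indices standing in for the output of cv.split(X):

```python
import numpy as np

X = np.arange(12).reshape(6, 2)  # 6 samples, 2 features
y = np.arange(6)

# Stand-in for `for train_index, test_index in cv.split(X):`
splits = [(np.array([0, 1, 2]), np.array([3]))]
for train_index, test_index in splits:
    # Integer-array indexing selects whole rows of X and entries of y.
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]

print(X_train.shape, X_test.shape)  # (3, 2) (1, 2)
```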