Stacked Classifier

class deslib.static.stacked.StackedClassifier(pool_classifiers=None, meta_classifier=None, random_state=None)[source]

A Stacking classifier.

Parameters:
pool_classifiers : list of classifiers (Default = None)

The pool of classifiers trained for the corresponding classification problem. Each base classifier should support the method “predict”. If None, a bagging classifier is used as the pool.

meta_classifier : object or None, optional (default=None)

Classifier model used to aggregate the output of the base classifiers. If None, a LogisticRegression classifier is used.

random_state : int, RandomState instance or None, optional (default=None)

If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

References

Wolpert, David H. “Stacked generalization.” Neural networks 5, no. 2 (1992): 241-259.

Kuncheva, Ludmila I. Combining pattern classifiers: methods and algorithms. John Wiley & Sons, 2004.
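The stacked-generalization idea behind this class can be sketched with scikit-learn components alone: train a pool of base classifiers, then train a meta-classifier (LogisticRegression, mirroring the documented default) on their outputs. The pool, dataset, and split below are illustrative assumptions, not DESlib's internal implementation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Illustrative data; any (X, y) classification set works.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pool of base classifiers, each supporting "predict".
pool = [DecisionTreeClassifier(max_depth=3, random_state=0), GaussianNB()]
for clf in pool:
    clf.fit(X_train, y_train)

# Meta-classifier trained on the base classifiers' outputs.
meta_inputs = np.column_stack([clf.predict(X_train) for clf in pool])
meta = LogisticRegression().fit(meta_inputs, y_train)

# Prediction: feed the pool's outputs on new data to the meta-classifier.
test_inputs = np.column_stack([clf.predict(X_test) for clf in pool])
predicted_labels = meta.predict(test_inputs)
print(predicted_labels.shape)  # one label per test sample
```

DESlib's `StackedClassifier` wraps these two stages behind the usual `fit`/`predict` interface, so the pool and meta-classifier are passed once at construction time.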

fit(X, y)[source]

Fit the model by training the meta-classifier on the outputs of the base classifiers.

Parameters:
X : array of shape = [n_samples, n_features]

Data used to fit the model.

y : array of shape = [n_samples]

Class labels of each example in X.

predict(X)[source]

Predict the class label of each sample in X.

Parameters:
X : array of shape = [n_samples, n_features]

The data to be classified.

Returns:
predicted_labels : array of shape = [n_samples]

Predicted class for each sample in X.

predict_proba(X)[source]

Estimate the posterior probability of each class for each sample in X.

Parameters:
X : array of shape = [n_samples, n_features]

The data to be classified.

Returns:
probas : array of shape = [n_samples, n_classes]

Posterior probability estimates for each class and each sample in X.
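The [n_samples, n_classes] shape follows the scikit-learn `predict_proba` convention: each row holds one probability per class and sums to 1. A minimal illustration using scikit-learn's LogisticRegression (the dataset is an assumption for demonstration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Binary toy problem: n_classes defaults to 2.
X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression().fit(X, y)

proba = clf.predict_proba(X)
print(proba.shape)                           # (200, 2): one column per class
print(np.allclose(proba.sum(axis=1), 1.0))   # each row sums to 1
```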

score(X, y, sample_weight=None)[source]

Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that the full label set of each sample be predicted correctly.

Parameters:
X : array-like, shape = (n_samples, n_features)

Test samples.

y : array-like, shape = (n_samples) or (n_samples, n_outputs)

True labels for X.

sample_weight : array-like, shape = [n_samples], optional

Sample weights.

Returns:
score : float

Mean accuracy of self.predict(X) with respect to y.
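In other words, `score(X, y)` is equivalent to comparing `predict(X)` against `y` and averaging. A quick check with a scikit-learn classifier (the model and data are illustrative assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression().fit(X, y)

# score is the mean accuracy of predict(X) w.r.t. y.
acc = clf.score(X, y)
print(np.isclose(acc, np.mean(clf.predict(X) == y)))  # True
```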