A posteriori

class deslib.dcs.a_posteriori.APosteriori(pool_classifiers, k=7, DFP=False, with_IH=False, safe_k=None, IH_rate=0.3, selection_method='diff', diff_thresh=0.1, rng=numpy.random.RandomState())

A Posteriori Dynamic classifier selection.

The A Posteriori method uses the probability of correct classification of a given base classifier \(c_{i}\) for each neighbor \(x_{k}\) with respect to a single class. Consider a classifier \(c_{i}\) that assigns a test sample to class \(w_{l}\). Then, only the samples belonging to class \(w_{l}\) are taken into account when estimating the competence level. Base classifiers with a higher probability of correct classification have a higher competence level. The method also weights the influence of each neighbor \(x_{k}\) according to its Euclidean distance to the query sample. The closest neighbors have a higher influence on the competence level estimate. In cases where no sample in the region of competence belongs to the predicted class, \(w_{l}\), the competence level estimate of the base classifier is equal to zero.

A single classifier is selected only if its competence level is significantly higher than that of the other base classifiers in the pool (higher than a pre-defined threshold). Otherwise, all classifiers in the pool are combined using the majority voting rule. The selection methodology can be changed through the hyper-parameter selection_method.
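
A minimal usage sketch, assuming scikit-learn is installed and a DESlib version matching the signature above; the synthetic dataset, split sizes, and bagged decision trees are illustrative choices rather than requirements of the API:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    from deslib.dcs.a_posteriori import APosteriori

    # Toy data: half for training the pool, a quarter as the dynamic-selection
    # dataset (DSEL) passed to fit(), a quarter for testing.
    X, y = make_classification(n_samples=1000, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=42)
    X_train, X_dsel, y_train, y_dsel = train_test_split(
        X_train, y_train, test_size=0.5, random_state=42)

    # Base classifiers must implement both "predict" and "predict_proba".
    pool = BaggingClassifier(DecisionTreeClassifier(max_depth=5),
                             n_estimators=10, random_state=42)
    pool.fit(X_train, y_train)

    aposteriori = APosteriori(pool_classifiers=list(pool.estimators_),
                              k=7, selection_method='diff', diff_thresh=0.1)
    aposteriori.fit(X_dsel, y_dsel)
    print('Test accuracy:', aposteriori.score(X_test, y_test))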

Parameters:
pool_classifiers : list of classifiers

The generated pool of classifiers trained for the corresponding classification problem. The classifiers should support the methods “predict” and “predict_proba”.

k : int (Default = 7)

Number of neighbors used to estimate the competence of the base classifiers.

DFP : Boolean (Default = False)

Determines if the dynamic frienemy pruning is applied.

with_IH : Boolean (Default = False)

Whether the hardness level of the region of competence is used to decide between using the DS algorithm or the KNN for classification of a given query sample.

safe_k : int (default = None)

The size of the indecision region.

IH_rate : float (default = 0.3)

Hardness threshold. If the hardness level of the competence region is lower than the IH_rate the KNN classifier is used. Otherwise, the DS algorithm is used for classification.

selection_method : String (Default = “diff”)

Determines which method is used to select the base classifier after the competences are estimated.

diff_thresh : float (Default = 0.1)

Threshold to measure the difference between the competence levels of the base classifiers for the random and diff selection schemes. If the difference is lower than the threshold, their performances are considered equivalent.

rng : numpy.random.RandomState instance

Random number generator used to ensure reproducible results.

References

G. Giacinto and F. Roli, “Methods for Dynamic Classifier Selection,” 10th Int. Conf. on Image Anal. and Proc., Venice, Italy (1999), 659-664.

Ko, Albert HR, Robert Sabourin, and Alceu Souza Britto Jr. “From dynamic classifier selection to dynamic ensemble selection.” Pattern Recognition 41.5 (2008): 1718-1731.

Britto, Alceu S., Robert Sabourin, and Luiz ES Oliveira. “Dynamic selection of classifiers—a comprehensive review.” Pattern Recognition 47.11 (2014): 3665-3680.

R. M. O. Cruz, R. Sabourin, and G. D. Cavalcanti, “Dynamic classifier selection: Recent advances and perspectives,” Information Fusion, vol. 41, pp. 195-216, 2018.

estimate_competence(query, predictions=None)

Estimate the competence of each base classifier \(c_{i}\) for the classification of the query sample using the A Posteriori method.

The competence level is estimated based on the probability of correct classification of the base classifier \(c_{i}\), for each neighbor \(x_{k}\) belonging to a specific class \(w_{l}\). In this case, \(w_{l}\) is the class predicted by the base classifier \(c_{i}\), for the query sample. This method also weights the influence of each training sample according to its Euclidean distance to the query instance. The closest samples have a higher influence in the computation of the competence level. The competence level estimate is represented by the following equation:

\[\delta_{i,j} = \frac{\sum_{\mathbf{x}_{k} \in \omega_{l}}P(\omega_{l} \mid \mathbf{x}_{k}, c_{i} )W_{k}}{\sum_{k = 1}^{K}P(\omega_{l} \mid \mathbf{x}_{k}, c_{i} )W_{k}}\]

where \(\delta_{i,j}\) represents the competence level of \(c_{i}\) for the classification of the query sample.
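
As a toy numerical illustration of the equation above (the posterior probabilities, weights, and class memberships below are made-up values, not library output):

    import numpy as np

    # Toy values for K = 3 neighbors; the base classifier predicted class w_l,
    # and only neighbors 0 and 2 actually belong to w_l.
    posteriors = np.array([0.9, 0.6, 0.8])    # P(w_l | x_k, c_i), k = 1..K
    weights = np.array([0.5, 0.3, 0.2])       # W_k: inverse-distance weights
    in_class = np.array([True, False, True])  # does x_k belong to w_l?

    numerator = np.sum(posteriors[in_class] * weights[in_class])
    denominator = np.sum(posteriors * weights)
    print(numerator / denominator)  # (0.45 + 0.16) / (0.45 + 0.18 + 0.16) ~ 0.77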

Parameters:
query : array of shape = [n_samples, n_features]

The query sample.

predictions : array of shape = [n_samples, n_classifiers]

Predictions of the base classifiers for the test examples.

Returns:
competences : array of shape = [n_samples, n_classifiers]

Competence level estimated for each base classifier and test example.

fit(X, y)

Prepare the DS model by setting the KNN algorithm and pre-processing the information required to apply the DS method.

Parameters:
X : array of shape = [n_samples, n_features]

Data used to fit the model.

y : array of shape = [n_samples]

Class labels of each example in X.

Returns:
self

predict(X)

Predict the class label for each sample in X.

Parameters:
X : array of shape = [n_samples, n_features]

The input data.

Returns:
predicted_labels : array of shape = [n_samples]

Predicted class label for each sample in X.

predict_proba(X)

Estimates the posterior probabilities for each sample in X.

Parameters:
X : array of shape = [n_samples, n_features]

The input data.

Returns:
predicted_proba : array of shape = [n_samples, n_classes]

Probability estimates for each sample in X.
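
Continuing the usage sketch near the top of the page (aposteriori is the fitted model and X_test the held-out data assumed there):

    labels = aposteriori.predict(X_test)        # shape: (n_samples,)
    probas = aposteriori.predict_proba(X_test)  # shape: (n_samples, n_classes)
    print(labels[:5], probas[:5].round(2))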

score(X, y, sample_weight=None)

Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for each sample.

Parameters:
X : array-like, shape = (n_samples, n_features)

Test samples.

y : array-like, shape = (n_samples) or (n_samples, n_outputs)

True labels for X.

sample_weight : array-like, shape = [n_samples], optional

Sample weights.

Returns:
score : float

Mean accuracy of self.predict(X) wrt. y.

select(competences)

Select the most competent classifier for the classification of the query sample given the competence level estimates. Four selection schemes are available.

Best : The base classifier with the highest competence level is selected. In cases where more than one base classifier achieves the same competence level, the one with the lowest index is selected. This method is the standard selection scheme for the LCA, OLA, and MLA techniques.

Diff : Selects the base classifier that is significantly better than the others in the pool (when the difference between its competence level and the competence levels of the other base classifiers is higher than a predefined threshold). If no base classifier is significantly better, one is selected randomly among the members with equivalent competence levels (a simplified sketch of this scheme appears after this list).

Random : Selects a random base classifier among all base classifiers that achieved the same competence level.

All : All base classifiers with the maximum competence level estimate are selected (note that in this case the DCS technique becomes a DES technique).
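
A simplified sketch of the ‘diff’ scheme, reconstructed from the behavior documented above; select_diff is a hypothetical helper, not the library’s internal implementation:

    import numpy as np

    def select_diff(competences, diff_thresh=0.1, rng=None):
        # Per-sample 'diff' selection: pick the most competent classifier when
        # it beats every other one by more than diff_thresh; otherwise pick at
        # random among the classifiers considered equivalent to the best.
        if rng is None:
            rng = np.random.RandomState(0)
        selected = np.empty(competences.shape[0], dtype=int)
        for i, row in enumerate(competences):
            candidates = np.flatnonzero(row.max() - row < diff_thresh)
            if candidates.size == 1:
                selected[i] = candidates[0]           # a single clear winner
            else:
                selected[i] = rng.choice(candidates)  # tie within the threshold
        return selected

    # First row: classifier 2 wins outright. Second row: classifiers 0 and 1
    # differ by less than 0.1, so one of the two is chosen at random.
    print(select_diff(np.array([[0.20, 0.50, 0.90],
                                [0.70, 0.65, 0.30]])))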

Parameters:
competences : array of shape = [n_samples, n_classifiers]

Competence level estimated for each base classifier and test example.

Returns:
selected_classifiers : array of shape = [n_samples]

Indices of the selected base classifier for each sample. If selection_method is set to ‘all’, a boolean matrix is returned instead, containing True for the selected base classifiers and False otherwise.