EstimatorReport.metrics.roc_auc

EstimatorReport.metrics.roc_auc(*, data_source='test', X=None, y=None, average=None, multi_class='ovr')

Compute the ROC AUC score.

Parameters:
data_source : {“test”, “train”, “X_y”}, default=”test”

The data source to use.

  • “test” : use the test set provided when creating the report.

  • “train” : use the train set provided when creating the report.

  • “X_y” : use the provided X and y to compute the metric.

X : array-like of shape (n_samples, n_features), default=None

New data on which to compute the metric. By default, we use the validation set provided when creating the report.

y : array-like of shape (n_samples,), default=None

New target on which to compute the metric. By default, we use the target provided when creating the report.
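
As a quick sketch (not a verified doctest), the report built in the Examples section at the bottom of this page could be scored on other data sources; X_new and y_new below are a hypothetical held-out split, not variables defined on this page:

>>> train_auc = report.metrics.roc_auc(data_source="train")  # score on the training set
>>> new_auc = report.metrics.roc_auc(
...     data_source="X_y", X=X_new, y=y_new  # score on externally provided data
... )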

average : {“macro”, “micro”, “weighted”, “samples”}, default=None

Averaging strategy used to compute the ROC AUC score in a multiclass setting. By default, no averaging is performed. Otherwise, this determines the type of averaging performed on the data.

  • “micro”: Calculate metrics globally by considering each element of the label indicator matrix as a label.

  • “macro”: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.

  • “weighted”: Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label).

  • “samples”: Calculate metrics for each instance, and find their average.

Note

Multiclass ROC AUC currently only handles the “macro” and “weighted” averages. For multiclass targets, average=None and average="micro" are only implemented for multi_class="ovr".
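
For a multiclass target, a minimal sketch (assuming the iris dataset and a freshly fitted report, neither of which is part of the example at the bottom of this page) might look like:

>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import train_test_split
>>> from skore import EstimatorReport
>>> Xm_train, Xm_test, ym_train, ym_test = train_test_split(
...     *load_iris(return_X_y=True), random_state=0
... )
>>> multiclass_report = EstimatorReport(
...     LogisticRegression(max_iter=10_000),
...     X_train=Xm_train, y_train=ym_train, X_test=Xm_test, y_test=ym_test,
... )
>>> per_class_auc = multiclass_report.metrics.roc_auc()  # average=None: no averaging performed
>>> macro_auc = multiclass_report.metrics.roc_auc(average="macro")  # single macro-averaged score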

multi_class : {“raise”, “ovr”, “ovo”}, default=”ovr”

The multi-class strategy to use.

  • “raise”: Raise an error if the data is multiclass.

  • “ovr”: Stands for One-vs-rest. Computes the AUC of each class against the rest. This treats the multiclass case in the same way as the multilabel case. Sensitive to class imbalance even when average == "macro", because class imbalance affects the composition of each of the “rest” groupings.

  • “ovo”: Stands for One-vs-one. Computes the average AUC of all possible pairwise combinations of classes. Insensitive to class imbalance when average == "macro".
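
Reusing the hypothetical multiclass_report from the sketch under the average parameter, the two strategies could be compared as follows (again, only a sketch):

>>> ovr_auc = multiclass_report.metrics.roc_auc(
...     multi_class="ovr", average="macro"  # macro-averaged one-vs-rest AUC
... )
>>> ovo_auc = multiclass_report.metrics.roc_auc(
...     multi_class="ovo", average="macro"  # macro-averaged AUC over all class pairs
... )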

Returns:
float or dict

The ROC AUC score.

Examples

>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import train_test_split
>>> from skore import EstimatorReport
>>> X_train, X_test, y_train, y_test = train_test_split(
...     *load_breast_cancer(return_X_y=True), random_state=0
... )
>>> classifier = LogisticRegression(max_iter=10_000)
>>> report = EstimatorReport(
...     classifier,
...     X_train=X_train,
...     y_train=y_train,
...     X_test=X_test,
...     y_test=y_test,
... )
>>> report.metrics.roc_auc()
0.99...