CrossValidationReport.metrics.roc_auc
- CrossValidationReport.metrics.roc_auc(*, data_source='test', average=None, multi_class='ovr', aggregate=None)
Compute the ROC AUC score.
- Parameters:
- data_source{“test”, “train”}, default=”test”
The data source to use.
“test” : use the test set provided when creating the report.
“train” : use the train set provided when creating the report.
- average{“macro”, “micro”, “weighted”, “samples”}, default=None
Average to compute the ROC AUC score in a multiclass setting. By default, no average is computed. Otherwise, this determines the type of averaging performed on the data.
“micro”: Calculate metrics globally by considering each element of the label indicator matrix as a label.
“macro”: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
“weighted”: Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label).
“samples”: Calculate metrics for each instance, and find their average.
Note
Multiclass ROC AUC currently only handles the “macro” and “weighted” averages. For multiclass targets, average=None is only implemented for multi_class="ovr", and average="micro" is only implemented for multi_class="ovr".
- multi_class{“raise”, “ovr”, “ovo”}, default=”ovr”
The multi-class strategy to use.
“raise”: Raise an error if the data is multiclass.
“ovr”: Stands for One-vs-rest. Computes the AUC of each class against the rest. This treats the multiclass case in the same way as the multilabel case. Sensitive to class imbalance even when average == "macro", because class imbalance affects the composition of each of the “rest” groupings.
“ovo”: Stands for One-vs-one. Computes the average AUC of all possible pairwise combinations of classes. Insensitive to class imbalance when average == "macro".
- aggregate{“mean”, “std”} or list of such str, default=None
Function to aggregate the scores across the cross-validation splits.
- Returns:
- pd.DataFrame
The ROC AUC score.
Examples
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from skore import CrossValidationReport
>>> X, y = load_breast_cancer(return_X_y=True)
>>> classifier = LogisticRegression(max_iter=10_000)
>>> report = CrossValidationReport(classifier, X=X, y=y, cv_splitter=2)
>>> report.metrics.roc_auc()
          LogisticRegression
                    Split #0  Split #1
Metric
ROC AUC              0.99...   0.98...
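The per-split scores can be summarized with the aggregate parameter. A minimal sketch building on the report created above (output omitted; the variable name is illustrative):
>>> aggregated = report.metrics.roc_auc(aggregate=["mean", "std"])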
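Scores can also be computed on the training splits instead of the test splits by switching data_source; a sketch under the same setup (output omitted; the variable name is illustrative):
>>> train_scores = report.metrics.roc_auc(data_source="train")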
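For a multiclass target, only the averaging strategies listed in the note above apply. A minimal sketch, assuming the same workflow applied to scikit-learn's iris dataset (output omitted; dataset choice and variable names are illustrative):
>>> from sklearn.datasets import load_iris
>>> X_multi, y_multi = load_iris(return_X_y=True)
>>> multiclass_report = CrossValidationReport(classifier, X=X_multi, y=y_multi, cv_splitter=2)
>>> macro_auc = multiclass_report.metrics.roc_auc(average="macro", multi_class="ovr")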