ComparisonReport.metrics.report_metrics

ComparisonReport.metrics.report_metrics(*, data_source='test', X=None, y=None, scoring=None, scoring_names=None, scoring_kwargs=None, pos_label=None, indicator_favorability=False, flat_index=False)

Report a set of metrics for the estimators.

Parameters:
data_source : {"test", "train", "X_y"}, default="test"

The data source to use.

  • “test” : use the test set provided when creating the report.

  • “train” : use the train set provided when creating the report.

  • “X_y” : use the provided X and y to compute the metric.
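
For instance, metrics can be computed on the training set or on data passed explicitly (a minimal sketch reusing the comparison_report built in the Examples section below; output omitted):

>>> # evaluate on the train set instead of the default test set
>>> results = comparison_report.metrics.report_metrics(data_source="train")
>>> # evaluate on explicitly provided data
>>> results = comparison_report.metrics.report_metrics(
...     data_source="X_y", X=X_test, y=y_test
... )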

X : array-like of shape (n_samples, n_features), default=None

New data on which to compute the metric. By default, we use the validation set provided when creating the report.

y : array-like of shape (n_samples,), default=None

New target on which to compute the metric. By default, we use the target provided when creating the report.

scoring : list of str, callable, or scorer, default=None

The metrics to report. You can get the list of possible strings by calling report.metrics.help(). When passing a callable, it should take y_true and y_pred as its first two arguments. Additional arguments can be passed as keyword arguments and will be forwarded via scoring_kwargs. If the callable API is too restrictive (e.g. you need to pass the same parameter name with different values), you can use scikit-learn scorers as built by sklearn.metrics.make_scorer().
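
As a sketch (again reusing the comparison_report from the Examples below; output omitted), a plain callable or a scorer can be passed; the make_scorer route is useful when a metric needs a fixed parameter such as pos_label:

>>> from sklearn.metrics import accuracy_score, f1_score, make_scorer
>>> # a plain callable taking y_true and y_pred
>>> results = comparison_report.metrics.report_metrics(
...     scoring=[accuracy_score]
... )
>>> # a scorer built with make_scorer, binding pos_label=1 to f1_score
>>> results = comparison_report.metrics.report_metrics(
...     scoring=[make_scorer(f1_score, pos_label=1)]
... )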

scoring_names : list of str, default=None

Used to overwrite the default scoring names in the report. It should be of the same length as the scoring parameter.

scoring_kwargs : dict, default=None

The keyword arguments to pass to the scoring functions.
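
For example, extra keyword arguments for a callable metric can be forwarded through scoring_kwargs (a sketch reusing the comparison_report from the Examples below; whether a given argument is accepted depends on the metric function, here scikit-learn's fbeta_score):

>>> from sklearn.metrics import fbeta_score
>>> # beta and pos_label are forwarded to fbeta_score for each estimator
>>> results = comparison_report.metrics.report_metrics(
...     scoring=[fbeta_score],
...     scoring_kwargs={"beta": 2, "pos_label": 1},
... )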

pos_label : int, float, bool or str, default=None

The positive class, used when computing binary classification metrics.

indicator_favorability : bool, default=False

Whether to add an extra column to the returned DataFrame indicating the favorability of each metric, i.e. whether a higher value is better.

flat_index : bool, default=False

Whether to flatten the MultiIndex columns. The flattened index is always lower-case, contains no spaces, and drops the hash symbol, to ease indexing.
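
For instance (a sketch reusing the comparison_report from the Examples below; output omitted):

>>> # flat, lower-case column labels instead of a MultiIndex
>>> df = comparison_report.metrics.report_metrics(
...     scoring=["precision", "recall"],
...     pos_label=1,
...     flat_index=True,
... )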

Returns:
pd.DataFrame

The statistics for the metrics.

Examples

>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import train_test_split
>>> from skore import ComparisonReport, EstimatorReport
>>> X, y = load_breast_cancer(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
>>> estimator_1 = LogisticRegression(max_iter=10000, random_state=42)
>>> estimator_report_1 = EstimatorReport(
...     estimator_1,
...     X_train=X_train,
...     y_train=y_train,
...     X_test=X_test,
...     y_test=y_test,
... )
>>> estimator_2 = LogisticRegression(max_iter=10000, random_state=43)
>>> estimator_report_2 = EstimatorReport(
...     estimator_2,
...     X_train=X_train,
...     y_train=y_train,
...     X_test=X_test,
...     y_test=y_test,
... )
>>> comparison_report = ComparisonReport(
...     [estimator_report_1, estimator_report_2]
... )
>>> comparison_report.metrics.report_metrics(
...     scoring=["precision", "recall"],
...     pos_label=1,
... )
Estimator       LogisticRegression  LogisticRegression
Metric
Precision                  0.96...             0.96...
Recall                     0.97...             0.97...
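
Building on the example above, the metric names can be overridden and a favorability indicator requested (a sketch; output omitted):

>>> results = comparison_report.metrics.report_metrics(
...     scoring=["precision", "recall"],
...     scoring_names=["Precision (pos=1)", "Recall (pos=1)"],
...     pos_label=1,
...     indicator_favorability=True,
... )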