ComparisonReport.metrics.precision

ComparisonReport.metrics.precision(*, data_source='test', X=None, y=None, average=None, pos_label=None)

Compute the precision score.

Parameters:
data_source : {“test”, “train”, “X_y”}, default=”test”

The data source to use.

  • “test” : use the test set provided when creating the report.

  • “train” : use the train set provided when creating the report.

  • “X_y” : use the provided X and y to compute the metric.

X : array-like of shape (n_samples, n_features), default=None

New data on which to compute the metric. By default, we use the validation set provided when creating the report.

y : array-like of shape (n_samples,), default=None

New target on which to compute the metric. By default, we use the target provided when creating the report.
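For example, the following hedged sketch scores both estimators on external data with data_source="X_y" (X_new and y_new are hypothetical arrays with the same feature layout as the training data; comparison_report is the report built in the Examples section below):

>>> # Hypothetical hold-out data; any array-likes with matching shapes work.
>>> scores = comparison_report.metrics.precision(
...     data_source="X_y", X=X_new, y=y_new
... )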

average : {“binary”, “macro”, “micro”, “weighted”, “samples”} or None, default=None

Used with multiclass problems. If None, the metrics for each class are returned. Otherwise, this determines the type of averaging performed on the data:

  • “binary”: Only report results for the class specified by pos_label. This is applicable only if targets (y_{true,pred}) are binary.

  • “micro”: Calculate metrics globally by counting the total true positives, false negatives and false positives.

  • “macro”: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.

  • “weighted”: Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance; it can result in an F-score that is not between precision and recall.

  • “samples”: Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score()).

Note

If pos_label is specified and average is None, then we report only the statistics of the positive class (i.e. equivalent to average="binary").
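As a minimal sketch, assuming the report built in the Examples section below, requesting an averaged score replaces the per-class rows with a single aggregated row per estimator (macro_precision is a hypothetical variable name):

>>> # One aggregated precision value per estimator instead of one row per class.
>>> macro_precision = comparison_report.metrics.precision(average="macro")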

pos_label : int, float, bool or str, default=None

The positive class.
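For instance, assuming the binary report from the Examples section below, restricting the output to the positive class could look like this (positive_precision is a hypothetical variable name):

>>> # Per the note above, average=None plus pos_label reports only class 1.
>>> positive_precision = comparison_report.metrics.precision(pos_label=1)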

Returns:
pd.DataFrame

The precision scores, with one column per compared estimator and one row per class label (or per requested average).

Examples

>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import train_test_split
>>> from skore import ComparisonReport, EstimatorReport
>>> X, y = load_breast_cancer(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
>>> estimator_1 = LogisticRegression(max_iter=10000, random_state=42)
>>> estimator_report_1 = EstimatorReport(
...     estimator_1,
...     X_train=X_train,
...     y_train=y_train,
...     X_test=X_test,
...     y_test=y_test,
... )
>>> estimator_2 = LogisticRegression(max_iter=10000, random_state=43)
>>> estimator_report_2 = EstimatorReport(
...     estimator_2,
...     X_train=X_train,
...     y_train=y_train,
...     X_test=X_test,
...     y_test=y_test,
... )
>>> comparison_report = ComparisonReport(
...     [estimator_report_1, estimator_report_2]
... )
>>> comparison_report.metrics.precision()
Estimator                    LogisticRegression  LogisticRegression
Metric      Label / Average
Precision                 0             0.96...             0.96...
                          1             0.96...             0.96...
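The same report can also be scored on the training split; a minimal sketch reusing the comparison_report built above (train_precision is a hypothetical variable name):

>>> # Precision on the train set provided when creating the underlying reports.
>>> train_precision = comparison_report.metrics.precision(data_source="train")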