EstimatorReport.metrics.precision
- EstimatorReport.metrics.precision(*, data_source='test', X=None, y=None, average=None, pos_label=None)
Compute the precision score.
- Parameters:
- data_source{“test”, “train”, “X_y”}, default=”test”
The data source to use.
“test” : use the test set provided when creating the report.
“train” : use the train set provided when creating the report.
“X_y” : use the provided X and y to compute the metric.
- Xarray-like of shape (n_samples, n_features), default=None
New data on which to compute the metric. By default, we use the validation set provided when creating the report.
- yarray-like of shape (n_samples,), default=None
New target on which to compute the metric. By default, we use the target provided when creating the report.
- average{“binary”, “macro”, “micro”, “weighted”, “samples”} or None, default=None
Used with multiclass problems. If None, the metrics for each class are returned. Otherwise, this determines the type of averaging performed on the data:
“binary”: Only report results for the class specified by pos_label. This is applicable only if targets (y_{true,pred}) are binary.
“micro”: Calculate metrics globally by counting the total true positives, false negatives and false positives.
“macro”: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
“weighted”: Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters “macro” to account for label imbalance; it can result in an F-score that is not between precision and recall.
“samples”: Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score()).
Note
If pos_label is specified and average is None, then we report only the statistics of the positive class (i.e. equivalent to average="binary").
- pos_labelint, float, bool or str, default=None
The positive class.
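The averaging options follow the same semantics as scikit-learn's precision_score, so they can be illustrated directly with that function on a small multiclass example (a standalone sketch; the labels below are made up for illustration):

```python
from sklearn.metrics import precision_score

# Small multiclass example to illustrate the `average` options.
y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

# average=None: one precision per class (class 0 is 2/3, classes 1 and 2 are 0).
per_class = precision_score(y_true, y_pred, average=None)

# "macro": unweighted mean over classes -> (2/3 + 0 + 0) / 3
macro = precision_score(y_true, y_pred, average="macro")

# "micro": global TP / (TP + FP) -> 2 / 6
micro = precision_score(y_true, y_pred, average="micro")

# "weighted": mean over classes weighted by support; each class has
# 2 true instances here, so it coincides with "macro" in this example.
weighted = precision_score(y_true, y_pred, average="weighted")
```

With average=None the metric is reported per class, which is what the report returns as a dict; any of the string options collapses it to a single float.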
- Returns:
- float or dict
The precision score.
Examples
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import train_test_split
>>> from skore import EstimatorReport
>>> X_train, X_test, y_train, y_test = train_test_split(
...     *load_breast_cancer(return_X_y=True), random_state=0
... )
>>> classifier = LogisticRegression(max_iter=10_000)
>>> report = EstimatorReport(
...     classifier,
...     X_train=X_train,
...     y_train=y_train,
...     X_test=X_test,
...     y_test=y_test,
... )
>>> report.metrics.precision(pos_label=1)
0.98...