arkas.result ¶
Contain results.
arkas.result.AccuracyResult ¶
Bases: BaseResult
Implement the accuracy result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import AccuracyResult
>>> result = AccuracyResult(
... y_true=np.array([1, 0, 0, 1, 1]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> result
AccuracyResult(y_true=(5,), y_pred=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'accuracy': 1.0, 'count_correct': 5, 'count_incorrect': 0, 'count': 5, 'error': 0.0}
arkas.result.AveragePrecisionResult ¶
Bases: BaseResult
Implement the average precision result.
This result can be used in three different settings:
- binary: `y_true` must be an array of shape `(*)` with `0` and `1` values, and `y_score` must be an array of shape `(*)`.
- multiclass: `y_true` must be an array of shape `(n_samples,)` with values in `{0, ..., n_classes-1}`, and `y_score` must be an array of shape `(n_samples, n_classes)`.
- multilabel: `y_true` must be an array of shape `(n_samples, n_classes)` with `0` and `1` values, and `y_score` must be an array of shape `(n_samples, n_classes)`.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_score` | `ndarray` | The target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions. | required
`label_type` | `str` | The type of labels used to evaluate the metrics. The valid values are `'binary'`, `'multiclass'`, `'multilabel'`, and `'auto'`. | `'auto'`
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import AveragePrecisionResult
>>> # binary
>>> result = AveragePrecisionResult(
... y_true=np.array([1, 0, 0, 1, 1]),
... y_score=np.array([2, -1, 0, 3, 1]),
... label_type="binary",
... )
>>> result
AveragePrecisionResult(y_true=(5,), y_score=(5,), label_type='binary', nan_policy='propagate')
>>> result.compute_metrics()
{'average_precision': 1.0, 'count': 5}
>>> # multilabel
>>> result = AveragePrecisionResult(
... y_true=np.array([[1, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]]),
... y_score=np.array([[2, -1, -1], [-1, 1, 2], [0, 2, 3], [3, -2, -4], [1, -3, -5]]),
... label_type="multilabel",
... )
>>> result
AveragePrecisionResult(y_true=(5, 3), y_score=(5, 3), label_type='multilabel', nan_policy='propagate')
>>> result.compute_metrics()
{'average_precision': array([1. , 1. , 0.477...]),
'count': 5,
'macro_average_precision': 0.825...,
'micro_average_precision': 0.588...,
'weighted_average_precision': 0.804...}
>>> # multiclass
>>> result = AveragePrecisionResult(
... y_true=np.array([0, 0, 1, 1, 2, 2]),
... y_score=np.array(
... [
... [0.7, 0.2, 0.1],
... [0.4, 0.3, 0.3],
... [0.1, 0.8, 0.1],
... [0.2, 0.3, 0.5],
... [0.4, 0.4, 0.2],
... [0.1, 0.2, 0.7],
... ]
... ),
... label_type="multiclass",
... )
>>> result
AveragePrecisionResult(y_true=(6,), y_score=(6, 3), label_type='multiclass', nan_policy='propagate')
>>> result.compute_metrics()
{'average_precision': array([0.833..., 0.75 , 0.75 ]),
'count': 6,
'macro_average_precision': 0.777...,
'micro_average_precision': 0.75,
'weighted_average_precision': 0.777...}
>>> # auto
>>> result = AveragePrecisionResult(
... y_true=np.array([1, 0, 0, 1, 1]),
... y_score=np.array([2, -1, 0, 3, 1]),
... )
>>> result
AveragePrecisionResult(y_true=(5,), y_score=(5,), label_type='binary', nan_policy='propagate')
>>> result.compute_metrics()
{'average_precision': 1.0, 'count': 5}
arkas.result.BalancedAccuracyResult ¶
Bases: BaseResult
Implement the balanced accuracy result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import BalancedAccuracyResult
>>> result = BalancedAccuracyResult(
... y_true=np.array([1, 0, 0, 1, 1]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> result
BalancedAccuracyResult(y_true=(5,), y_pred=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'balanced_accuracy': 1.0, 'count': 5}
arkas.result.BaseResult ¶
Bases: ABC
Define the base class to manage results.
Example usage:
>>> import numpy as np
>>> from arkas.result import AccuracyResult
>>> result = AccuracyResult(
... y_true=np.array([1, 0, 0, 1, 1]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> result
AccuracyResult(y_true=(5,), y_pred=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'accuracy': 1.0, 'count_correct': 5, 'count_incorrect': 0, 'count': 5, 'error': 0.0}
arkas.result.BaseResult.compute_metrics abstractmethod ¶
compute_metrics(prefix: str = '', suffix: str = '') -> dict
Return the metrics associated to the result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`prefix` | `str` | The key prefix in the returned dictionary. | `''`
`suffix` | `str` | The key suffix in the returned dictionary. | `''`
Returns:
Type | Description
---|---
`dict` | The metrics.
Example usage:
>>> import numpy as np
>>> from arkas.result import AccuracyResult
>>> result = AccuracyResult(
... y_true=np.array([1, 0, 0, 1, 1]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> result.compute_metrics()
{'accuracy': 1.0, 'count_correct': 5, 'count_incorrect': 0, 'count': 5, 'error': 0.0}
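The prefix and suffix arguments prepend and append text to every metric key. A minimal sketch of the expected renaming, based on the parameter descriptions above (the exact key order is not guaranteed):
>>> result.compute_metrics(prefix="binary_", suffix="_metric")
{'binary_accuracy_metric': 1.0, 'binary_count_correct_metric': 5, 'binary_count_incorrect_metric': 0, 'binary_count_metric': 5, 'binary_error_metric': 0.0}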
arkas.result.BaseResult.equal abstractmethod ¶
equal(other: Any, equal_nan: bool = False) -> bool
Indicate if two results are equal or not.
Parameters:
Name | Type | Description | Default
---|---|---|---
`other` | `Any` | The other result to compare. | required
`equal_nan` | `bool` | Whether to compare NaN's as equal. | `False`
Returns:
Type | Description
---|---
`bool` | `True` if the two results are equal, otherwise `False`.
Example usage:
>>> import numpy as np
>>> from arkas.result import AccuracyResult
>>> res1 = AccuracyResult(
... y_true=np.array([1, 0, 0, 1, 1]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> res2 = AccuracyResult(
... y_true=np.array([1, 0, 0, 1, 1]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> res3 = AccuracyResult(
... y_true=np.array([1, 0, 0, 0, 0]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> res1.equal(res2)
True
>>> res1.equal(res3)
False
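A minimal sketch of the equal_nan flag, assuming the stored arrays are compared element-wise so that NaN values only match when equal_nan=True:
>>> res4 = AccuracyResult(
...     y_true=np.array([1, 0, 0, 1, float("nan")]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> res5 = AccuracyResult(
...     y_true=np.array([1, 0, 0, 1, float("nan")]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> res4.equal(res5)
False
>>> res4.equal(res5, equal_nan=True)
True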
arkas.result.BaseResult.generate_figures abstractmethod ¶
generate_figures(
prefix: str = "", suffix: str = ""
) -> dict[str, Figure]
Return the figures associated to the result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`prefix` | `str` | The key prefix in the returned dictionary. | `''`
`suffix` | `str` | The key suffix in the returned dictionary. | `''`
Returns:
Type | Description
---|---
`dict[str, Figure]` | The figures.
Example usage:
>>> import numpy as np
>>> from arkas.result import AccuracyResult
>>> result = AccuracyResult(
... y_true=np.array([1, 0, 0, 1, 1]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> result.generate_figures()
{}
arkas.result.BinaryAveragePrecisionResult ¶
Bases: BaseAveragePrecisionResult
Implement the average precision result for binary labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_score` | `ndarray` | The target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import BinaryAveragePrecisionResult
>>> result = BinaryAveragePrecisionResult(
... y_true=np.array([1, 0, 0, 1, 1]), y_score=np.array([2, -1, 0, 3, 1])
... )
>>> result
BinaryAveragePrecisionResult(y_true=(5,), y_score=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'average_precision': 1.0, 'count': 5}
arkas.result.BinaryClassificationResult ¶
Bases: BaseResult
Implement the default binary classification result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target binary labels. | required
`y_pred` | `ndarray` | The predicted binary labels. | required
`y_score` | `ndarray or None` | The target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions. | `None`
`betas` | `Sequence[float]` | The betas used to compute the F-beta scores. | `(1,)`
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import BinaryClassificationResult
>>> result = BinaryClassificationResult(
... y_true=np.array([1, 0, 0, 1, 1]),
... y_pred=np.array([1, 0, 0, 1, 1]),
... y_score=np.array([2, -1, 0, 3, 1]),
... )
>>> result
BinaryClassificationResult(y_true=(5,), y_pred=(5,), y_score=(5,), betas=(1,), nan_policy='propagate')
>>> result.compute_metrics()
{'accuracy': 1.0,
'count_correct': 5,
'count_incorrect': 0,
'count': 5,
'error': 0.0,
'balanced_accuracy': 1.0,
'confusion_matrix': array([[2, 0], [0, 3]]),
'false_negative_rate': 0.0,
'false_negative': 0,
'false_positive_rate': 0.0,
'false_positive': 0,
'true_negative_rate': 1.0,
'true_negative': 2,
'true_positive_rate': 1.0,
'true_positive': 3,
'f1': 1.0,
'jaccard': 1.0,
'precision': 1.0,
'recall': 1.0,
'average_precision': 1.0,
'roc_auc': 1.0}
arkas.result.BinaryConfusionMatrixResult ¶
Bases: BaseConfusionMatrixResult
Implement the confusion matrix result for binary labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import BinaryConfusionMatrixResult
>>> result = BinaryConfusionMatrixResult(
... y_true=np.array([1, 0, 0, 1, 1]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> result
BinaryConfusionMatrixResult(y_true=(5,), y_pred=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'confusion_matrix': array([[2, 0], [0, 3]]),
'count': 5,
'false_negative_rate': 0.0,
'false_negative': 0,
'false_positive_rate': 0.0,
'false_positive': 0,
'true_negative_rate': 1.0,
'true_negative': 2,
'true_positive_rate': 1.0,
'true_positive': 3}
arkas.result.BinaryFbetaScoreResult ¶
Bases: BaseFbetaScoreResult
Implement the F-beta result for binary labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`betas` | `Sequence[float]` | The betas used to compute the F-beta scores. | `(1,)`
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import BinaryFbetaScoreResult
>>> result = BinaryFbetaScoreResult(
... y_true=np.array([1, 0, 0, 1, 1]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> result
BinaryFbetaScoreResult(y_true=(5,), y_pred=(5,), betas=(1,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'f1': 1.0}
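The betas argument controls which F-beta scores are reported. A sketch assuming the extra keys follow the same f{beta} naming pattern as f1 above (the exact key names for other betas are an assumption):
>>> result = BinaryFbetaScoreResult(
...     y_true=np.array([1, 0, 0, 1, 1]),
...     y_pred=np.array([1, 0, 0, 1, 1]),
...     betas=(0.5, 1, 2),
... )
>>> result.compute_metrics()
{'count': 5, 'f0.5': 1.0, 'f1': 1.0, 'f2': 1.0}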
arkas.result.BinaryJaccardResult ¶
Bases: BaseJaccardResult
Implement the Jaccard result for binary labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import BinaryJaccardResult
>>> result = BinaryJaccardResult(
... y_true=np.array([1, 0, 0, 1, 1]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> result
BinaryJaccardResult(y_true=(5,), y_pred=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'jaccard': 1.0}
arkas.result.BinaryPrecisionResult ¶
Bases: BasePrecisionResult
Implement the precision result for binary labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import BinaryPrecisionResult
>>> result = BinaryPrecisionResult(
... y_true=np.array([1, 0, 0, 1, 1]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> result
BinaryPrecisionResult(y_true=(5,), y_pred=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'precision': 1.0}
arkas.result.BinaryRecallResult ¶
Bases: BaseRecallResult
Implement the recall result for binary labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import BinaryRecallResult
>>> result = BinaryRecallResult(
... y_true=np.array([1, 0, 0, 1, 1]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> result
BinaryRecallResult(y_true=(5,), y_pred=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'recall': 1.0}
arkas.result.BinaryRocAucResult ¶
Bases: BaseRocAucResult
Implement the Area Under the Receiver Operating Characteristic Curve (ROC AUC) result for binary labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_score` | `ndarray` | The target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import BinaryRocAucResult
>>> result = BinaryRocAucResult(
... y_true=np.array([1, 0, 0, 1, 1]), y_score=np.array([2, -1, 0, 3, 1])
... )
>>> result
BinaryRocAucResult(y_true=(5,), y_score=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'roc_auc': 1.0}
arkas.result.EmptyResult ¶
Bases: Result
Implement an empty result.
This result is designed to be used when it is not possible to evaluate a result.
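Example usage (a minimal sketch, assuming EmptyResult can be constructed without arguments and reports no metrics or figures):
>>> from arkas.result import EmptyResult
>>> result = EmptyResult()
>>> result.compute_metrics()
{}
>>> result.generate_figures()
{}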
arkas.result.EnergyDistanceResult ¶
Bases: BaseResult
Implement the energy distance result between two 1D distributions.
Parameters:
Name | Type | Description | Default
---|---|---|---
`u_values` | `ndarray` | The values observed in the (empirical) distribution. | required
`v_values` | `ndarray` | The values observed in the (empirical) distribution. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import EnergyDistanceResult
>>> result = EnergyDistanceResult(
... u_values=np.array([1, 2, 3, 4, 5]), v_values=np.array([1, 2, 3, 4, 5])
... )
>>> result
EnergyDistanceResult(u_values=(5,), v_values=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'energy_distance': 0.0}
arkas.result.JensenShannonDivergenceResult ¶
Bases: BaseResult
Implement the Jensen-Shannon (JS) divergence result between two 1D distributions.
Parameters:
Name | Type | Description | Default
---|---|---|---
`p` | `ndarray` | The true probability distribution. | required
`q` | `ndarray` | The model probability distribution. | required
Example usage:
>>> import numpy as np
>>> from arkas.result import JensenShannonDivergenceResult
>>> result = JensenShannonDivergenceResult(
... p=np.array([0.1, 0.6, 0.1, 0.2]), q=np.array([0.2, 0.5, 0.2, 0.1])
... )
>>> result
JensenShannonDivergenceResult(p=(4,), q=(4,))
>>> result.compute_metrics()
{'size': 4, 'jensen_shannon_divergence': 0.027...}
arkas.result.KLDivResult ¶
Bases: BaseResult
Implement the Kullback-Leibler (KL) divergence result between two 1D distributions.
Parameters:
Name | Type | Description | Default
---|---|---|---
`p` | `ndarray` | The true probability distribution. | required
`q` | `ndarray` | The model probability distribution. | required
Example usage:
>>> import numpy as np
>>> from arkas.result import KLDivResult
>>> result = KLDivResult(p=np.array([0.1, 0.6, 0.1, 0.2]), q=np.array([0.2, 0.5, 0.2, 0.1]))
>>> result
KLDivResult(p=(4,), q=(4,))
>>> result.compute_metrics()
{'size': 4, 'kl_pq': 0.109..., 'kl_qp': 0.116...}
arkas.result.MappingResult ¶
Bases: BaseResult
Implement a result that combines a mapping of result objects into a single result object.
Parameters:
Name | Type | Description | Default
---|---|---|---
`results` | `Mapping[str, BaseResult]` | The mapping of result objects to combine. | required
Example usage:
>>> import numpy as np
>>> from arkas.result import MappingResult, Result
>>> result = MappingResult(
... {
... "class1": Result(metrics={"accuracy": 62.0, "count": 42}),
... "class2": Result(metrics={"accuracy": 42.0, "count": 42}),
... }
... )
>>> result
MappingResult(count=2)
>>> result.compute_metrics()
{'class1': {'accuracy': 62.0, 'count': 42}, 'class2': {'accuracy': 42.0, 'count': 42}}
arkas.result.MeanAbsoluteErrorResult ¶
Bases: BaseResult
Implement the mean absolute error (MAE) result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target values. | required
`y_pred` | `ndarray` | The predicted values. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MeanAbsoluteErrorResult
>>> result = MeanAbsoluteErrorResult(
... y_true=np.array([1, 2, 3, 4, 5]), y_pred=np.array([1, 2, 3, 4, 5])
... )
>>> result
MeanAbsoluteErrorResult(y_true=(5,), y_pred=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'mean_absolute_error': 0.0}
arkas.result.MeanAbsolutePercentageErrorResult ¶
Bases: BaseResult
Implement the mean absolute percentage error (MAPE) result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target values. | required
`y_pred` | `ndarray` | The predicted values. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MeanAbsolutePercentageErrorResult
>>> result = MeanAbsolutePercentageErrorResult(
... y_true=np.array([1, 2, 3, 4, 5]), y_pred=np.array([1, 2, 3, 4, 5])
... )
>>> result
MeanAbsolutePercentageErrorResult(y_true=(5,), y_pred=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'mean_absolute_percentage_error': 0.0}
arkas.result.MeanSquaredErrorResult ¶
Bases: BaseResult
Implement the mean squared error (MSE) result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target values. | required
`y_pred` | `ndarray` | The predicted values. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MeanSquaredErrorResult
>>> result = MeanSquaredErrorResult(
... y_true=np.array([1, 2, 3, 4, 5]), y_pred=np.array([1, 2, 3, 4, 5])
... )
>>> result
MeanSquaredErrorResult(y_true=(5,), y_pred=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'mean_squared_error': 0.0}
arkas.result.MeanSquaredLogErrorResult ¶
Bases: BaseResult
Implement the mean squared logarithmic error (MSLE) result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target values. | required
`y_pred` | `ndarray` | The predicted values. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MeanSquaredLogErrorResult
>>> result = MeanSquaredLogErrorResult(
... y_true=np.array([1, 2, 3, 4, 5]), y_pred=np.array([1, 2, 3, 4, 5])
... )
>>> result
MeanSquaredLogErrorResult(y_true=(5,), y_pred=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'mean_squared_log_error': 0.0}
arkas.result.MeanTweedieDevianceResult ¶
Bases: BaseResult
Implement the mean Tweedie deviance regression loss result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target values. | required
`y_pred` | `ndarray` | The predicted values. | required
`powers` | `Sequence[float]` | The Tweedie power parameters. The higher the power, the less weight is given to extreme deviations between true and predicted targets. | `(0,)`
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MeanTweedieDevianceResult
>>> result = MeanTweedieDevianceResult(
... y_true=np.array([1, 2, 3, 4, 5]), y_pred=np.array([1, 2, 3, 4, 5])
... )
>>> result
MeanTweedieDevianceResult(y_true=(5,), y_pred=(5,), powers=(0,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'mean_tweedie_deviance_power_0': 0.0}
arkas.result.MedianAbsoluteErrorResult ¶
Bases: BaseResult
Implement the median absolute error (MedAE) result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target values. | required
`y_pred` | `ndarray` | The predicted values. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MedianAbsoluteErrorResult
>>> result = MedianAbsoluteErrorResult(
... y_true=np.array([1, 2, 3, 4, 5]), y_pred=np.array([1, 2, 3, 4, 5])
... )
>>> result
MedianAbsoluteErrorResult(y_true=(5,), y_pred=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'median_absolute_error': 0.0}
arkas.result.MulticlassAveragePrecisionResult ¶
Bases: BaseAveragePrecisionResult
Implement the average precision result for multiclass labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_score` | `ndarray` | The target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MulticlassAveragePrecisionResult
>>> result = MulticlassAveragePrecisionResult(
... y_true=np.array([0, 0, 1, 1, 2, 2]),
... y_score=np.array(
... [
... [0.7, 0.2, 0.1],
... [0.4, 0.3, 0.3],
... [0.1, 0.8, 0.1],
... [0.2, 0.5, 0.3],
... [0.3, 0.3, 0.4],
... [0.1, 0.2, 0.7],
... ]
... ),
... )
>>> result
MulticlassAveragePrecisionResult(y_true=(6,), y_score=(6, 3), nan_policy='propagate')
>>> result.compute_metrics()
{'average_precision': array([1., 1., 1.]),
'count': 6,
'macro_average_precision': 1.0,
'micro_average_precision': 1.0,
'weighted_average_precision': 1.0}
arkas.result.MulticlassConfusionMatrixResult ¶
Bases: BaseConfusionMatrixResult
Implement the confusion matrix result for multiclass labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MulticlassConfusionMatrixResult
>>> result = MulticlassConfusionMatrixResult(
... y_true=np.array([0, 1, 1, 2, 2, 2]),
... y_pred=np.array([0, 1, 1, 2, 2, 2]),
... )
>>> result
MulticlassConfusionMatrixResult(y_true=(6,), y_pred=(6,), nan_policy='propagate')
>>> result.compute_metrics()
{'confusion_matrix': array([[1, 0, 0], [0, 2, 0], [0, 0, 3]]), 'count': 6}
arkas.result.MulticlassFbetaScoreResult ¶
Bases: BaseFbetaScoreResult
Implement the F-beta result for multiclass labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`betas` | `Sequence[float]` | The betas used to compute the F-beta scores. | `(1,)`
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MulticlassFbetaScoreResult
>>> result = MulticlassFbetaScoreResult(
... y_true=np.array([0, 0, 1, 1, 2, 2]),
... y_pred=np.array([0, 0, 1, 1, 2, 2]),
... )
>>> result
MulticlassFbetaScoreResult(y_true=(6,), y_pred=(6,), betas=(1,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 6,
'f1': array([1., 1., 1.]),
'macro_f1': 1.0,
'micro_f1': 1.0,
'weighted_f1': 1.0}
arkas.result.MulticlassJaccardResult ¶
Bases: BaseJaccardResult
Implement the Jaccard result for multiclass labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MulticlassJaccardResult
>>> result = MulticlassJaccardResult(
... y_true=np.array([0, 0, 1, 1, 2, 2]),
... y_pred=np.array([0, 0, 1, 1, 2, 2]),
... )
>>> result
MulticlassJaccardResult(y_true=(6,), y_pred=(6,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 6,
'jaccard': array([1., 1., 1.]),
'macro_jaccard': 1.0,
'micro_jaccard': 1.0,
'weighted_jaccard': 1.0}
arkas.result.MulticlassPrecisionResult ¶
Bases: BasePrecisionResult
Implement the precision result for multiclass labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MulticlassPrecisionResult
>>> result = MulticlassPrecisionResult(
... y_true=np.array([0, 0, 1, 1, 2, 2]),
... y_pred=np.array([0, 0, 1, 1, 2, 2]),
... )
>>> result
MulticlassPrecisionResult(y_true=(6,), y_pred=(6,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 6,
'macro_precision': 1.0,
'micro_precision': 1.0,
'precision': array([1., 1., 1.]),
'weighted_precision': 1.0}
arkas.result.MulticlassRecallResult ¶
Bases: BaseRecallResult
Implement the recall result for multiclass labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MulticlassRecallResult
>>> result = MulticlassRecallResult(
... y_true=np.array([0, 0, 1, 1, 2, 2]),
... y_pred=np.array([0, 0, 1, 1, 2, 2]),
... )
>>> result
MulticlassRecallResult(y_true=(6,), y_pred=(6,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 6,
'macro_recall': 1.0,
'micro_recall': 1.0,
'recall': array([1., 1., 1.]),
'weighted_recall': 1.0}
arkas.result.MulticlassRocAucResult ¶
Bases: BaseRocAucResult
Implement the Area Under the Receiver Operating Characteristic Curve (ROC AUC) result for multiclass labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_score` | `ndarray` | The target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MulticlassRocAucResult
>>> result = MulticlassRocAucResult(
... y_true=np.array([0, 0, 1, 1, 2, 2]),
... y_score=np.array(
... [
... [0.7, 0.2, 0.1],
... [0.4, 0.3, 0.3],
... [0.1, 0.8, 0.1],
... [0.2, 0.5, 0.3],
... [0.3, 0.3, 0.4],
... [0.1, 0.2, 0.7],
... ]
... ),
... )
>>> result
MulticlassRocAucResult(y_true=(6,), y_score=(6, 3), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 6,
'macro_roc_auc': 1.0,
'micro_roc_auc': 1.0,
'roc_auc': array([1., 1., 1.]),
'weighted_roc_auc': 1.0}
arkas.result.MultilabelAveragePrecisionResult ¶
Bases: BaseAveragePrecisionResult
Implement the average precision result for multilabel labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_score` | `ndarray` | The target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MultilabelAveragePrecisionResult
>>> result = MultilabelAveragePrecisionResult(
... y_true=np.array([[1, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]]),
... y_score=np.array([[2, -1, 1], [-1, 1, -2], [0, 2, -3], [3, -2, 4], [1, -3, 5]]),
... )
>>> result
MultilabelAveragePrecisionResult(y_true=(5, 3), y_score=(5, 3), nan_policy='propagate')
>>> result.compute_metrics()
{'average_precision': array([1., 1., 1.]),
'count': 5,
'macro_average_precision': 1.0,
'micro_average_precision': 1.0,
'weighted_average_precision': 1.0}
arkas.result.MultilabelConfusionMatrixResult ¶
Bases: BaseConfusionMatrixResult
Implement the confusion matrix result for multilabel labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MultilabelConfusionMatrixResult
>>> result = MultilabelConfusionMatrixResult(
... y_true=np.array([[1, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]]),
... y_pred=np.array([[1, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]]),
... )
>>> result
MultilabelConfusionMatrixResult(y_true=(5, 3), y_pred=(5, 3), nan_policy='propagate')
>>> result.compute_metrics()
{'confusion_matrix': array([[[2, 0], [0, 3]],
[[3, 0], [0, 2]],
[[2, 0], [0, 3]]]),
'count': 5}
arkas.result.MultilabelFbetaScoreResult ¶
Bases: BaseFbetaScoreResult
Implement the F-beta result for multilabel labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`betas` | `Sequence[float]` | The betas used to compute the F-beta scores. | `(1,)`
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MultilabelFbetaScoreResult
>>> result = MultilabelFbetaScoreResult(
... y_true=np.array([[1, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]]),
... y_pred=np.array([[1, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]]),
... )
>>> result
MultilabelFbetaScoreResult(y_true=(5, 3), y_pred=(5, 3), betas=(1,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5,
'f1': array([1., 1., 1.]),
'macro_f1': 1.0,
'micro_f1': 1.0,
'weighted_f1': 1.0}
arkas.result.MultilabelJaccardResult ¶
Bases: BaseJaccardResult
Implement the Jaccard result for multilabel labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MultilabelJaccardResult
>>> result = MultilabelJaccardResult(
... y_true=np.array([[1, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]]),
... y_pred=np.array([[1, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]]),
... )
>>> result
MultilabelJaccardResult(y_true=(5, 3), y_pred=(5, 3), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5,
'jaccard': array([1., 1., 1.]),
'macro_jaccard': 1.0,
'micro_jaccard': 1.0,
'weighted_jaccard': 1.0}
arkas.result.MultilabelPrecisionResult ¶
Bases: BasePrecisionResult
Implement the precision result for multilabel labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MultilabelPrecisionResult
>>> result = MultilabelPrecisionResult(
... y_true=np.array([[1, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]]),
... y_pred=np.array([[1, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]]),
... )
>>> result
MultilabelPrecisionResult(y_true=(5, 3), y_pred=(5, 3), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5,
'macro_precision': 1.0,
'micro_precision': 1.0,
'precision': array([1., 1., 1.]),
'weighted_precision': 1.0}
arkas.result.MultilabelRecallResult ¶
Bases: BaseRecallResult
Implement the recall result for multilabel labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MultilabelRecallResult
>>> result = MultilabelRecallResult(
... y_true=np.array([[1, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]]),
... y_pred=np.array([[1, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]]),
... )
>>> result
MultilabelRecallResult(y_true=(5, 3), y_pred=(5, 3), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5,
'macro_recall': 1.0,
'micro_recall': 1.0,
'recall': array([1., 1., 1.]),
'weighted_recall': 1.0}
arkas.result.MultilabelRocAucResult ¶
Bases: BaseRocAucResult
Implement the Area Under the Receiver Operating Characteristic Curve (ROC AUC) result for multilabel labels.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_score` | `ndarray` | The target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import MultilabelRocAucResult
>>> result = MultilabelRocAucResult(
... y_true=np.array([[1, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]]),
... y_score=np.array([[2, -1, 1], [-1, 1, -2], [0, 2, -3], [3, -2, 4], [1, -3, 5]]),
... )
>>> result
MultilabelRocAucResult(y_true=(5, 3), y_score=(5, 3), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5,
'macro_roc_auc': 1.0,
'micro_roc_auc': 1.0,
'roc_auc': array([1., 1., 1.]),
'weighted_roc_auc': 1.0}
arkas.result.PearsonCorrelationResult ¶
Bases: BaseResult
Implement the Pearson correlation result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`x` | `ndarray` | The first input array. | required
`y` | `ndarray` | The second input array. | required
`alternative` | `str` | The alternative hypothesis. The following options are available: `'two-sided'`: the correlation is nonzero; `'less'`: the correlation is negative (less than zero); `'greater'`: the correlation is positive (greater than zero). | `'two-sided'`
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import PearsonCorrelationResult
>>> result = PearsonCorrelationResult(
... x=np.array([1, 2, 3, 4, 5]), y=np.array([1, 2, 3, 4, 5])
... )
>>> result
PearsonCorrelationResult(x=(5,), y=(5,), alternative='two-sided', nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'pearson_coeff': 1.0, 'pearson_pvalue': 0.0}
arkas.result.PrecisionResult ¶
Bases: BaseResult
Implement the precision result.
This result can be used in three different settings:
- binary: `y_true` must be an array of shape `(n_samples,)` with `0` and `1` values, and `y_pred` must be an array of shape `(n_samples,)`.
- multiclass: `y_true` must be an array of shape `(n_samples,)` with values in `{0, ..., n_classes-1}`, and `y_pred` must be an array of shape `(n_samples,)`.
- multilabel: `y_true` must be an array of shape `(n_samples, n_classes)` with `0` and `1` values, and `y_pred` must be an array of shape `(n_samples, n_classes)`.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`label_type` | `str` | The type of labels used to evaluate the metrics. The valid values are `'binary'`, `'multiclass'`, `'multilabel'`, and `'auto'`. | `'auto'`
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import PrecisionResult
>>> # binary
>>> result = PrecisionResult(
... y_true=np.array([1, 0, 0, 1, 1]),
... y_pred=np.array([1, 0, 0, 1, 1]),
... label_type="binary",
... )
>>> result
PrecisionResult(y_true=(5,), y_pred=(5,), label_type='binary', nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'precision': 1.0}
>>> # multilabel
>>> result = PrecisionResult(
... y_true=np.array([[1, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]]),
... y_pred=np.array([[1, 0, 0], [0, 1, 1], [0, 1, 1], [1, 0, 0], [1, 0, 0]]),
... label_type="multilabel",
... )
>>> result
PrecisionResult(y_true=(5, 3), y_pred=(5, 3), label_type='multilabel', nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5,
'macro_precision': 0.666...,
'micro_precision': 0.714...,
'precision': array([1., 1., 0.]),
'weighted_precision': 0.625}
>>> # multiclass
>>> result = PrecisionResult(
... y_true=np.array([0, 0, 1, 1, 2, 2]),
... y_pred=np.array([0, 0, 1, 1, 2, 2]),
... label_type="multiclass",
... )
>>> result
PrecisionResult(y_true=(6,), y_pred=(6,), label_type='multiclass', nan_policy='propagate')
>>> result.compute_metrics()
{'count': 6,
'macro_precision': 1.0,
'micro_precision': 1.0,
'precision': array([1., 1., 1.]),
'weighted_precision': 1.0}
>>> # auto
>>> result = PrecisionResult(
... y_true=np.array([1, 0, 0, 1, 1]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> result
PrecisionResult(y_true=(5,), y_pred=(5,), label_type='binary', nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'precision': 1.0}
arkas.result.R2ScoreResult ¶
Bases: BaseResult
Implement the R^2 (coefficient of determination) regression score result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target values. | required
`y_pred` | `ndarray` | The predicted values. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import R2ScoreResult
>>> result = R2ScoreResult(
... y_true=np.array([1, 2, 3, 4, 5]), y_pred=np.array([1, 2, 3, 4, 5])
... )
>>> result
R2ScoreResult(y_true=(5,), y_pred=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'r2_score': 1.0}
arkas.result.RecallResult ¶
Bases: BaseResult
Implement the recall result.
This result can be used in three different settings:
- binary: `y_true` must be an array of shape `(n_samples,)` with `0` and `1` values, and `y_pred` must be an array of shape `(n_samples,)`.
- multiclass: `y_true` must be an array of shape `(n_samples,)` with values in `{0, ..., n_classes-1}`, and `y_pred` must be an array of shape `(n_samples,)`.
- multilabel: `y_true` must be an array of shape `(n_samples, n_classes)` with `0` and `1` values, and `y_pred` must be an array of shape `(n_samples, n_classes)`.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target labels. | required
`y_pred` | `ndarray` | The predicted labels. | required
`label_type` | `str` | The type of labels used to evaluate the metrics. The valid values are `'binary'`, `'multiclass'`, `'multilabel'`, and `'auto'`. | `'auto'`
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import RecallResult
>>> # binary
>>> result = RecallResult(
... y_true=np.array([1, 0, 0, 1, 1]),
... y_pred=np.array([1, 0, 0, 1, 1]),
... label_type="binary",
... )
>>> result
RecallResult(y_true=(5,), y_pred=(5,), label_type='binary', nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'recall': 1.0}
>>> # multilabel
>>> result = RecallResult(
... y_true=np.array([[1, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]]),
... y_pred=np.array([[1, 0, 0], [0, 1, 1], [0, 1, 1], [1, 0, 0], [1, 0, 0]]),
... label_type="multilabel",
... )
>>> result
RecallResult(y_true=(5, 3), y_pred=(5, 3), label_type='multilabel', nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5,
'macro_recall': 0.666...,
'micro_recall': 0.625,
'recall': array([1., 1., 0.]),
'weighted_recall': 0.625}
>>> # multiclass
>>> result = RecallResult(
... y_true=np.array([0, 0, 1, 1, 2, 2]),
... y_pred=np.array([0, 0, 1, 1, 2, 2]),
... label_type="multiclass",
... )
>>> result
RecallResult(y_true=(6,), y_pred=(6,), label_type='multiclass', nan_policy='propagate')
>>> result.compute_metrics()
{'count': 6,
'macro_recall': 1.0,
'micro_recall': 1.0,
'recall': array([1., 1., 1.]),
'weighted_recall': 1.0}
>>> # auto
>>> result = RecallResult(
... y_true=np.array([1, 0, 0, 1, 1]), y_pred=np.array([1, 0, 0, 1, 1])
... )
>>> result
RecallResult(y_true=(5,), y_pred=(5,), label_type='binary', nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'recall': 1.0}
arkas.result.RegressionErrorResult ¶
Bases: BaseResult
Implement a "universal" regression error result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target values. | required
`y_pred` | `ndarray` | The predicted values. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import RegressionErrorResult
>>> result = RegressionErrorResult(
... y_true=np.array([1, 2, 3, 4, 5]), y_pred=np.array([1, 2, 3, 4, 5])
... )
>>> result
RegressionErrorResult(y_true=(5,), y_pred=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5,
'mean_absolute_error': 0.0,
'median_absolute_error': 0.0,
'mean_squared_error': 0.0}
arkas.result.Result ¶
Bases: BaseResult
Implement a simple result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`metrics` | `dict or None` | The metrics. | `None`
`figures` | `dict or None` | The figures. | `None`
Example usage:
>>> from arkas.result import Result
>>> result = Result(metrics={"accuracy": 1.0, "count": 42}, figures={})
>>> result
Result(metrics=2, figures=0)
>>> result.compute_metrics()
{'accuracy': 1.0, 'count': 42}
arkas.result.RootMeanSquaredErrorResult ¶
Bases: BaseResult
Implement the root mean squared error (RMSE) result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`y_true` | `ndarray` | The ground truth target values. | required
`y_pred` | `ndarray` | The predicted values. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import RootMeanSquaredErrorResult
>>> result = RootMeanSquaredErrorResult(
... y_true=np.array([1, 2, 3, 4, 5]), y_pred=np.array([1, 2, 3, 4, 5])
... )
>>> result
RootMeanSquaredErrorResult(y_true=(5,), y_pred=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'root_mean_squared_error': 0.0}
arkas.result.SequentialResult ¶
Bases: BaseResult
Implement a result to merge multiple result objects into a single result object.
Parameters:
Name | Type | Description | Default
---|---|---|---
`results` | `Sequence[BaseResult]` | The results to merge. This order is used to merge the metrics and figures if they have duplicate keys, i.e. only the last value for each key is kept. | required
Example usage:
>>> import numpy as np
>>> from arkas.result import SequentialResult, Result
>>> result = SequentialResult(
... [
... Result(metrics={"accuracy": 62.0, "count": 42}),
... Result(metrics={"ap": 0.42, "count": 42}),
... ]
... )
>>> result
SequentialResult(count=2)
>>> result.compute_metrics()
{'accuracy': 62.0, 'count': 42, 'ap': 0.42}
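When two results report the same key with different values, the documented merge order means the last result in the sequence wins. A small sketch of that behavior, assuming a plain dictionary merge:
>>> result = SequentialResult(
...     [
...         Result(metrics={"accuracy": 62.0, "count": 42}),
...         Result(metrics={"accuracy": 58.0}),
...     ]
... )
>>> result.compute_metrics()
{'accuracy': 58.0, 'count': 42}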
arkas.result.SpearmanCorrelationResult ¶
Bases: BaseResult
Implement the Spearman correlation result.
Parameters:
Name | Type | Description | Default
---|---|---|---
`x` | `ndarray` | The first input array. | required
`y` | `ndarray` | The second input array. | required
`alternative` | `str` | The alternative hypothesis. The following options are available: `'two-sided'`: the correlation is nonzero; `'less'`: the correlation is negative (less than zero); `'greater'`: the correlation is positive (greater than zero). | `'two-sided'`
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import SpearmanCorrelationResult
>>> result = SpearmanCorrelationResult(
... x=np.array([1, 2, 3, 4, 5, 6, 7, 8, 9]),
... y=np.array([1, 2, 3, 4, 5, 6, 7, 8, 9]),
... )
>>> result
SpearmanCorrelationResult(x=(9,), y=(9,), alternative='two-sided', nan_policy='propagate')
>>> result.compute_metrics()
{'count': 9, 'spearman_coeff': 1.0, 'spearman_pvalue': 0.0}
arkas.result.WassersteinDistanceResult ¶
Bases: BaseResult
Implement the Wasserstein distance result between two 1D distributions.
Parameters:
Name | Type | Description | Default
---|---|---|---
`u_values` | `ndarray` | The values observed in the (empirical) distribution. | required
`v_values` | `ndarray` | The values observed in the (empirical) distribution. | required
`nan_policy` | `str` | The policy on how to handle NaN values in the input arrays. | `'propagate'`
Example usage:
>>> import numpy as np
>>> from arkas.result import WassersteinDistanceResult
>>> result = WassersteinDistanceResult(
... u_values=np.array([1, 2, 3, 4, 5]), v_values=np.array([1, 2, 3, 4, 5])
... )
>>> result
WassersteinDistanceResult(u_values=(5,), v_values=(5,), nan_policy='propagate')
>>> result.compute_metrics()
{'count': 5, 'wasserstein_distance': 0.0}