feyn.metrics

This module contains functions to help evaluate and compare feyn models and other models.

function accuracy_score

def accuracy_score(
    true: Iterable[bool],
    pred: Iterable[float]
) -> float
Compute the accuracy score of predictions

The accuracy score is useful to evaluate classification models. It is the fraction of the predictions that are correct. Formally it is defined as:

(number of correct predictions) / (total number of predictions)

Arguments:
    true {Iterable[bool]} -- Expected values (containing values of 0 or 1)
    pred {Iterable[float]} -- Predicted values (will be rounded)

Returns:
    accuracy score for the predictions
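
A minimal usage sketch (the data values are illustrative, not taken from the feyn documentation):

    from feyn.metrics import accuracy_score

    true = [0, 1, 1, 0]
    pred = [0.2, 0.8, 0.4, 0.1]        # probabilities; rounded to [0, 1, 0, 0] before comparison
    acc = accuracy_score(true, pred)   # 3 of 4 predictions correct -> 0.75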

function accuracy_threshold

def accuracy_threshold(
    true: Iterable[bool],
    pred: Iterable[float]
) -> Tuple[float, float]
Compute the accuracy score of predictions with optimal threshold

The accuracy score is useful to evaluate classification models. It is the fraction of the predictions that are correct. Accuracy is normally calculated under the assumption that the threshold separating true from false is 0.5. However, this assumption does not hold when a model was trained on a population with a different class composition than the one it is evaluated on.

This function first computes the threshold separating the true class from the false class that optimises the accuracy. It then returns this threshold along with the accuracy obtained using it.

Arguments:
    true {Iterable[bool]} -- Expected values (containing values of 0 or 1)
    pred {Iterable[float]} -- Predicted values

Returns a tuple with:
    threshold that maximizes accuracy
    accuracy score obtained with this threshold
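
A hedged usage sketch, assuming `true` and `pred` are defined as in the accuracy_score example above:

    from feyn.metrics import accuracy_threshold

    threshold, acc = accuracy_threshold(true, pred)
    # 'threshold' is the cut-off that maximises accuracy on this sample;
    # 'acc' is the accuracy obtained when classifying with that cut-off.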

function calculate_mi

def calculate_mi(
    rv_samples: List[Iterable],
    float_bins: Optional[int] = None
) -> float
Numpy-based implementation of mutual information for n random variables.
It can be used for both categorical (discrete) and continuous variables;
you can have as many of each as you want, in any position in the iterable.

Arguments:
    rv_samples {Iterable[Iterable]} -- Samples from random variables given inside an iterable.
    In the traditional ML sense, these would be the data of the inputs.

Keyword Arguments:
    float_bins {Union[Tuple[int], int]} -- Number of bins in which to count numerical random variables.
    If None is given, numerical variables are divided into equally spaced bins, with the number of bins given by max(min(n_samples/3, 10), 2).

Returns:
    float -- The mutual information between the input random variables.
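
A usage sketch based on the signature above; the DataFrame and its column names are hypothetical:

    import pandas as pd
    from feyn.metrics import calculate_mi

    df = pd.DataFrame({"age": [23, 45, 31, 52], "smoker": ["yes", "no", "no", "yes"]})
    mi = calculate_mi([df["age"], df["smoker"]])                       # mixed continuous/categorical inputs
    mi_binned = calculate_mi([df["age"], df["smoker"]], float_bins=3)  # fix the bin count for numerical variables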

function calculate_mi_for_output

def calculate_mi_for_output(
    df: pandas.core.frame.DataFrame,
    output_name: str
) -> pandas.core.frame.DataFrame
Calculates the mutual information between each column of the DataFrame and the output column.

Arguments:
    df {pd.DataFrame} -- DataFrame
    output_name {str} -- Name of the output column

Returns:
    pd.DataFrame -- A DataFrame containing the mutual information between each input and the output
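
A usage sketch; the column name "target" is hypothetical, and `df` is assumed to contain it:

    from feyn.metrics import calculate_mi_for_output

    mi_table = calculate_mi_for_output(df, output_name="target")
    # One entry per input column, with its mutual information against "target".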

function calculate_pc

def calculate_pc(
    X: numpy.ndarray,
    Y: numpy.ndarray
) -> float
Calculate the Pearson correlation coefficient
for data sampled from two random variables X and Y.

Arguments:
    X {np.ndarray} -- First 1D vector of random data.
    Y {np.ndarray} -- Second 1D vector of random data.

Returns:
    float -- The correlation coefficient.
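
A sketch with illustrative numpy arrays; for two 1D samples the result should agree, up to floating point, with the off-diagonal entry of numpy's np.corrcoef:

    import numpy as np
    from feyn.metrics import calculate_pc

    X = np.array([1.0, 2.0, 3.0, 4.0])
    Y = np.array([1.1, 1.9, 3.2, 3.8])
    pc = calculate_pc(X, Y)
    # np.corrcoef(X, Y)[0, 1] computes the same Pearson coefficient.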

function calculate_spear

def calculate_spear(
    X: Iterable,
    Y: Iterable
)
Calculate Spearman's correlation coefficient
for data sampled from two random variables X and Y.

Arguments:
    X {Iterable} -- First 1D vector of random data.
    Y {Iterable} -- Second 1D vector of random data.

Returns:
    float -- The correlation coefficient.

function confusion_matrix

def confusion_matrix(
    true: Iterable[bool],
    pred: Iterable[float]
) -> numpy.ndarray
Compute a Confusion Matrix.

Arguments:
    true {Iterable[bool]} -- Expected values (Truth - containing values of 0 or 1)
    pred {Iterable[float]} -- Predicted values

Returns:
    np.ndarray -- A numpy array with the confusion matrix

function f1_score

def f1_score(
    true: Iterable[bool],
    pred: Iterable[float]
)
Get F1 score

Args:
    true (Iterable[bool]): Expected values (containing values of 0 or 1)
    pred (Iterable[float]): Predicted values

Returns:
    f1 score

function false_positive_rate

def false_positive_rate(
    true: Iterable[bool],
    pred: Iterable[float]
) -> float
Get the false-positive rate for a set of predictions on a dataset.

Arguments:
    true {Iterable[bool]} -- Expected values (containing values of 0 or 1)
    pred {Iterable[float]} -- Predicted values

Returns:
    float -- The false-positive rate

function get_mutual_information

def get_mutual_information(
    model: feyn._model.Model,
    data: pandas.core.frame.DataFrame
) -> List[float]
Calculate the mutual information between each node of the provided model and the output.

Arguments:
    model {feyn.Model} -- The Model
    data {DataFrame} -- The data

Returns:
    List[float] -- The mutual information between each node and the output, in Model node order.

Raises:
    ValueError: If columns needed for the model are not present in the data.
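
A hedged usage sketch, assuming `model` is a trained feyn.Model and `data` is a DataFrame containing every column the model uses; the same calling pattern applies to get_pearson_correlations and get_spearmans_correlations below:

    from feyn.metrics import get_mutual_information

    mi_per_node = get_mutual_information(model, data)
    # One value per node, in the model's node order.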

function get_pearson_correlations

def get_pearson_correlations(
    model: feyn._model.Model,
    data: pandas.core.frame.DataFrame
) -> List[float]
Calculate the Pearson correlation coefficient between each node of the model and the output.

Arguments:
    model {feyn.Model} -- The Model
    data {DataFrame} -- The data

Returns:
    List[float] -- The Pearson correlation between each node and the output, in Model node order.

Raises:
    ValueError: If columns needed for the model are not present in the data.

function get_posterior_probabilities

def get_posterior_probabilities(
    list_bic: Iterable[float]
) -> List[float]
Get posterior probabilities from a list of BICs

Arguments:
    list_bic {Iterable[float]} -- The list of BICs

Raises:
    TypeError: if inputs don't match the correct type.

Returns:
    List[float] -- Posterior probabilities
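
A usage sketch; the BIC values are illustrative:

    from feyn.metrics import get_posterior_probabilities

    bics = [102.3, 104.1, 110.7]
    posteriors = get_posterior_probabilities(bics)
    # One probability per model; in the usual BIC-weight interpretation,
    # a lower BIC corresponds to a higher posterior probability.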

function get_spearmans_correlations

def get_spearmans_correlations(
    model: feyn._model.Model,
    data: pandas.core.frame.DataFrame
)
Calculate the Spearman's correlation coefficient between each node of the model and the output.

Arguments:
    model {feyn.Model} -- The Model
    data {DataFrame} -- The data

Returns:
    List[float] -- The Spearman correlation between each node and the output, in Model node order.

Raises:
    ValueError: If columns needed for the model are not present in the data.

function get_summary_information

def get_summary_information(
    model: feyn._model.Model,
    df: pandas.core.frame.DataFrame
) -> Dict[str, float]
Get summary metrics for the provided model.

This wraps the functions get_summary_metrics_classification and get_summary_metrics_regression, automatically choosing which to call based on the model kind and the output node.

Arguments:
    model {feyn.Model} -- The model to summarise
    df {pd.DataFrame} -- The data

Returns:
    Dict[str, float] -- A dictionary of summary metrics.

Raises:
    ValueError: If columns needed for the model are not present in the data.
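
A hedged usage sketch, assuming `model` is a trained feyn.Model and `df` holds the model's columns:

    from feyn.metrics import get_summary_information

    summary = get_summary_information(model, df)
    # A dict of metric name -> value; which metrics appear depends on whether
    # the model is a classifier or a regressor, as described above.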

function get_summary_metrics_classification

def get_summary_metrics_classification(
    true: Iterable[bool],
    pred: Iterable[float]
) -> Dict[str, float]
Get summary metrics for classification

Arguments:
    true {Iterable[bool]} -- Expected values (containing values of 0 or 1)
    pred {Iterable[float]} -- Predicted values

Returns:
    dict[str, float] -- A dictionary of summary metrics

function get_summary_metrics_regression

def get_summary_metrics_regression(
    true: Iterable[float],
    pred: Iterable[float]
) -> Dict[str, float]
Get summary metrics for regression

Arguments:
    true {Iterable[float]} -- Expected values
    pred {Iterable[float]} -- Predicted values

Returns:
    dict[str, float] -- A dictionary of summary metrics

function mae

def mae(
    true: Iterable[float],
    pred: Iterable[float]
)
Compute the mean absolute error

Arguments:
    true {Iterable[float]} -- Expected values
    pred {Iterable[float]} -- Predicted values

Returns:
    float -- MAE for the predictions

function mse

def mse(
    true: Iterable[float],
    pred: Iterable[float]
)
Compute the mean squared error

Arguments:
    true {Iterable[float]} -- Expected values
    pred {Iterable[float]} -- Predicted values

Returns:
    float -- MSE for the predictions

function p_value

def p_value(
    H0: 'ModelMetricsMixin',
    data: pandas.core.frame.DataFrame,
    threshold: float,
    metric='mse'
) -> float
Compute the one-tailed p-value of a hypothesis producing a value equal to or greater than the provided threshold.

This function first constructs a sample distribution for the provided hypothesis. Assuming that this distribution is a t-distribution, it returns the probability of the hypothesis producing a sample mean as extreme or more extreme than the provided threshold.

Arguments:
    H0 {ModelMetricsMixin} -- The model representing the null hypothesis.
    data {Iterable} -- Data from which the statistical parameters are computed.
    threshold {float} -- The threshold value we want to test the significance of, with a one-tailed probability.

Returns:
    float -- The probability of obtaining a value as extreme or more extreme than threshold from H0.
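
A hedged usage sketch; `null_model`, `df` and the threshold value are hypothetical:

    from feyn.metrics import p_value

    p = p_value(H0=null_model, data=df, threshold=0.15, metric='mse')
    # Probability that the null model produces a sample-mean MSE as extreme as 0.15 or more extreme.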

function plot_p_value

def plot_p_value(
    H0: 'ModelMetricsMixin',
    data: pandas.core.frame.DataFrame,
    threshold: float,
    metric: str = 'mse',
    title: str = 'Significance of threshold',
    ax: Optional = None,
    **kwargs
) -> None
Plot the distribution of the sample mean of the null statistic H0. Add a vertical line corresponding to the threshold value and fill in the area corresponding to the one-sided p-value.

Arguments:
    H0 {ModelMetricsMixin} -- The model representing the null hypothesis.
    data {Iterable} -- Data from which the statistical parameters are computed.
    threshold {float} -- The threshold value we want to test the significance of, with a one-tailed probability.

Keyword Arguments:
    metric {str} -- Metric to use when constructing the test statistic. One of 'mse', 'mae', 'accuracy'. (default: {'mse'})
    ax {Optional} -- Matplotlib axes to plot inside of.

function precision_recall

def precision_recall(
    true: Iterable[bool],
    pred: Iterable[float]
) -> Tuple[float, float]
Get precision and recall

Arguments:
    true {Iterable[bool]} -- Expected values (containing values of 0 or 1)
    pred {Iterable[float]} -- Predicted values

Returns:
    Tuple[float, float] -- precision, recall
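
A usage sketch with illustrative data; the final line shows the conventional F1 formula (the standard definition, not quoted from the feyn source):

    from feyn.metrics import precision_recall

    true = [0, 1, 1, 0, 1]
    pred = [0.2, 0.9, 0.4, 0.1, 0.7]
    precision, recall = precision_recall(true, pred)
    f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of precision and recall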

function r2_score

def r2_score(
    true: Iterable[float],
    pred: Iterable[float]
) -> float
Compute the r2 score

The r2 score for a regression model is defined as
1 - rss/tss

Where rss is the residual sum of squares for the predictions, and tss is the total sum of squares.
Intuitively, the tss is the residual sum of squares of a baseline model that always predicts the mean. Therefore, the r2 score expresses how much better the predictions are than such a model.

A result of 0 means that the model is no better than a model that always predicts the mean value.
A result of 1 means that the model perfectly predicts the true value.

It is possible to get r2 scores below 0 if the predictions are even worse than the mean model.

Arguments:
    true {Iterable[float]} -- Expected values
    pred {Iterable[float]} -- Predicted values

Returns:
    r2 score for the predictions
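
The definition above can be written out directly with numpy; this sketch mirrors 1 - rss/tss on illustrative data and should match r2_score up to floating point:

    import numpy as np

    true = np.array([3.0, 5.0, 2.5, 7.0])
    pred = np.array([2.8, 5.3, 2.9, 6.6])
    rss = np.sum((true - pred) ** 2)          # residual sum of squares
    tss = np.sum((true - true.mean()) ** 2)   # total sum of squares of the mean-only model
    r2 = 1 - rss / tss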

function rmse

def rmse(
    true: Iterable[float],
    pred: Iterable[float]
)
Compute the root mean squared error

Arguments:
    true {Iterable[float]} -- Expected values
    pred {Iterable[float]} -- Predicted values

Returns:
    float -- RMSE for the predictions
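
For reference, mae, mse and rmse above correspond to the usual numpy expressions; a minimal sketch with illustrative arrays (the conventional definitions, not a quote of feyn's implementation):

    import numpy as np

    true = np.array([3.0, 5.0, 2.5, 7.0])
    pred = np.array([2.8, 5.3, 2.9, 6.6])
    errors = true - pred
    mae_value = np.mean(np.abs(errors))         # mean absolute error
    mse_value = np.mean(errors ** 2)            # mean squared error
    rmse_value = np.sqrt(np.mean(errors ** 2))  # root mean squared error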

function roc_auc_score

def roc_auc_score(
    true: Iterable[bool],
    pred: Iterable[float]
) -> float
Calculate the Area Under Curve (AUC) of the ROC curve.

A ROC curve depicts the performance of a binary classifier as its decision threshold is varied.

The area under the curve (AUC) is the probability that the classifier will
assign a higher score to a randomly chosen positive instance than to a randomly
chosen negative instance.

Arguments:
    true {Iterable[bool]} -- Expected values (containing values of 0 or 1)
    pred {Iterable[float]} -- Predicted values

Returns:
    AUC score for the predictions

function roc_curve

def roc_curve(
    true: Iterable[bool],
    pred: Iterable[float]
) -> Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray]
Calculate the Receiver Operating Characteristic (ROC) curve.

Arguments:
    true {Iterable[bool]} -- Expected values (containing values of 0 or 1)
    pred {Iterable[float]} -- Predicted values

Returns:
    fpr: np.array[float] - Increasing false positive rates
    tpr: np.array[float] - Increasing true positive rates
    thresholds: np.array[float] - Thresholds used
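
A usage sketch combining roc_curve with matplotlib; matplotlib and the example data are assumptions here, not feyn requirements:

    import matplotlib.pyplot as plt
    from feyn.metrics import roc_curve, roc_auc_score

    true = [0, 1, 1, 0, 1]
    pred = [0.2, 0.9, 0.4, 0.1, 0.7]
    fpr, tpr, thresholds = roc_curve(true, pred)
    auc = roc_auc_score(true, pred)
    plt.plot(fpr, tpr, label=f"AUC = {auc:.2f}")
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend()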

function segmented_loss

def segmented_loss(
    model: feyn._model.Model,
    data: pandas.core.frame.DataFrame,
    by: str = None,
    loss_function: str = 'squared_error'
) -> Tuple[List, List, List]
Compute the bins, counts and statistic values used for plotting the segmented loss.

Arguments:
    model {feyn.Model} -- The model to calculate the segmented loss for
    data {DataFrame} -- The data to calculate the segmented loss on

Keyword Arguments:
    by {str} -- The input or output to segment by (default: {None})
    loss_function {str} -- The loss function to use (default: {"squared_error"})

Returns:
    Tuple[List, List, List] -- bins, counts and statistics

Raises:
    ValueError: if by is not in data.
    ValueError: If columns needed for the model are not present in the data.
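
A hedged usage sketch, assuming `model` is a trained feyn.Model and `data` contains an "age" column (the column name is hypothetical):

    from feyn.metrics import segmented_loss

    bins, counts, stats = segmented_loss(model, data, by="age")
    # 'bins' are the segments of the "age" input, 'counts' the number of samples
    # in each segment, and 'stats' the loss statistic per segment.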
