A collection of loss functions for use with feyn.
The loss functions can be specified as arguments to the `QGraph.fit()` and `QGraph.sort()` methods. A good choice of loss function can sometimes speed up training. For most uses the default loss function, `squared_error`, is a fine choice.
Note: Data scientists with experience from other frameworks may be used to thinking that the choice of loss function is very significant. In practice it matters less with QLattices, for reasons that have to do with the large range of initial parameters for the graphs in a QGraph.
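For example, a minimal sketch of passing a loss to `QGraph.fit()` and `QGraph.sort()`. The `qgraph` and `train` objects are assumed to exist already (fetching a QGraph from a QLattice is not shown), and the `loss_function` keyword name is an assumption to be checked against your feyn version:

```python
import feyn.losses

# Assumed setup (not shown): `qgraph` is a QGraph fetched from a QLattice,
# and `train` is a pandas DataFrame with the input and output columns.

# Pass a loss function from feyn.losses instead of the default squared_error.
# The keyword name `loss_function` is an assumption -- check the signatures
# of fit() and sort() in your feyn version.
qgraph.fit(train, loss_function=feyn.losses.absolute_error)
qgraph.sort(train, loss_function=feyn.losses.absolute_error)
```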
Compute the absolute error loss.
Arguments:
y_true -- Ground truth (correct) target values.
y_pred -- Predicted values.
Returns:
np.ndarray -- The losses as an array of floats.
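As a sketch of calling it directly, assuming it is exposed as `feyn.losses.absolute_error` (the sample arrays are illustrative):

```python
import numpy as np
import feyn.losses

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 1.5, 3.0])

# Element-wise absolute error, one loss value per sample.
losses = feyn.losses.absolute_error(y_true, y_pred)

# Equivalent computation in plain numpy:
np.abs(y_true - y_pred)   # -> array([0.5, 0.5, 0. ])
```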
Compute the cross entropy loss between the labels and predictions.
This is a good alternative choice for binary classification problems. It cannot be used for fitting QGraphs with output data that is not binary; doing so will result in a RuntimeError.
Arguments:
y_true -- Ground truth (correct) target values.
y_pred -- Predicted values.
Returns:
np.ndarray -- The losses as an array of floats.
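A plain numpy sketch of the per-sample computation (the helper name, the clipping constant and the sample arrays are illustrative, not part of feyn):

```python
import numpy as np

def cross_entropy_sketch(y_true, y_pred, eps=1e-12):
    # y_true must be binary (0/1); y_pred are predicted probabilities.
    p = np.clip(y_pred, eps, 1 - eps)   # avoid log(0)
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_true = np.array([1, 0, 1])
y_pred = np.array([0.9, 0.2, 0.6])
cross_entropy_sketch(y_true, y_pred)   # -> approx. array([0.105, 0.223, 0.511])
```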
Compute the squared error loss.
This is the default loss function used in fitting and selecting graphs from QGraphs.
Arguments:
y_true -- Ground truth (correct) target values.
y_pred -- Predicted values.
Returns:
np.ndarray -- The losses as an array of floats.
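A numpy sketch of the per-sample computation (the sample arrays are illustrative):

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 1.5, 3.0])

# Squared error per sample: (y_true - y_pred) ** 2
(y_true - y_pred) ** 2   # -> array([0.25, 0.25, 0.  ])
```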