Common helper functions that make it easier to get started with the SDK. Over time, most users will replace these functions with their own versions that match their workflow.
def add_registers_from_dataframe( qlattice: feyn._qlattice.QLattice, df ) -> List[feyn._register.Register]
Use columns from a pandas DataFrame as registers in a QLattice. This is useful when you have a dataset where you want to map all your columns as features in a QLattice. Arguments: qlattice -- The QLattice object to add the registers to. df -- A pandas DataFrame with the columns you want to use as features. Returns: List[Register] -- The registers that were created in the QLattice.
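The mapping performed here boils down to turning each DataFrame column into a register spec. As a rough illustration of that idea (this is a hypothetical helper, not feyn's implementation, and the "numerical"/"categorical" labels are assumptions), one could sketch it with plain pandas:

```python
import pandas as pd

def infer_register_specs(df: pd.DataFrame) -> list:
    """Hypothetical sketch: map each column to a (name, kind) pair,
    treating numeric dtypes as numerical features and the rest as
    categorical. Not feyn's actual implementation."""
    specs = []
    for col in df.columns:
        kind = "numerical" if pd.api.types.is_numeric_dtype(df[col]) else "categorical"
        specs.append((col, kind))
    return specs

df = pd.DataFrame({"age": [34, 52], "smoker": ["yes", "no"]})
print(infer_register_specs(df))  # [('age', 'numerical'), ('smoker', 'categorical')]
```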
def confusion_matrix( true: Iterable, pred: Iterable ) -> numpy.ndarray
Compute a Confusion Matrix. Arguments: true -- Expected values (Truth) pred -- Predicted values Returns: [cm] -- a numpy array with the confusion matrix
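A confusion matrix counts, for every (true, predicted) label pair, how often that pair occurs. A minimal sketch of such a computation in plain numpy (rows indexed by true labels, columns by predicted labels -- this is an illustration of the concept, not feyn's implementation):

```python
import numpy as np

def confusion_matrix_sketch(true, pred):
    """Sketch: rows are true labels, columns are predicted labels."""
    labels = sorted(set(true) | set(pred))
    index = {label: i for i, label in enumerate(labels)}
    cm = np.zeros((len(labels), len(labels)), dtype=int)
    for t, p in zip(true, pred):
        cm[index[t], index[p]] += 1  # count one (true, pred) occurrence
    return cm

print(confusion_matrix_sketch([0, 1, 1, 0], [0, 1, 0, 0]))
# [[2 0]
#  [1 1]]
```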
def plot_confusion_matrix( y_true: Iterable, y_pred: Iterable, labels: Iterable = None, title: str = 'Confusion matrix', color_map=&lt;a matplotlib LinearSegmentedColormap&gt; ) -> None
Compute and plot a Confusion Matrix. Arguments: y_true -- Expected values (Truth) y_pred -- Predicted values labels -- List of labels to index the matrix title -- Title of the plot color_map -- Color map from matplotlib to use for the matrix Returns: None -- the confusion matrix is rendered as a matplotlib plot
def plot_regression_metrics( y_true: Iterable, y_pred: Iterable, title: str = 'Regression metrics' ) -> None
Plot metrics for a regression problem. The y-axis spans the range of values in y_true and y_pred. The x-axis contains all the samples, sorted by y_true. This lets you see how much your predictions deviate from the expected values across the different prediction ranges. In a good metric plot, the predicted line stays close to, and smooth around, the true line. Areas where the predicted line jitters a lot typically score worse against the test data. Arguments: y_true -- Expected values (Truth). y_pred -- Predicted values. title -- Title of the plot. Raises: ValueError: When y_true and y_pred do not have the same shape
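The data layout behind such a plot is simply both series reordered by y_true, so the deviation can be read off per value range. A small sketch of that preparation step (an illustration only, not feyn's implementation; the function name is hypothetical):

```python
import numpy as np

def sorted_regression_curves(y_true, y_pred):
    """Sketch: sort both series by y_true, as the x-axis of the plot does."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    if y_true.shape != y_pred.shape:
        raise ValueError("y_true and y_pred must have the same shape")
    order = np.argsort(y_true)  # sample order used on the x-axis
    return y_true[order], y_pred[order]

t, p = sorted_regression_curves([3.0, 1.0, 2.0], [2.5, 1.2, 2.1])
print(t)  # [1. 2. 3.]
print(p)  # [1.2 2.1 2.5]
```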
def split( data: Iterable, ratio: List[int] = [1, 1] ) -> List[Iterable]
Split a dataset into random subsets This function is used to split a dataset into random subsets - typically training and test data. The input dataset should be either a pandas DataFrame or a dictionary of numpy arrays. The ratio parameter controls how the data is split, and how many subsets it is split into. Example: Split data in the ratio 2:1 into train and test data >>> train, test = feyn.tools.split(data, [2,1]) Example: Split data into train, test and validation data: 80% training data, and 10% each for validation and holdout data >>> train, validation, holdout = feyn.tools.split(data, [8,1,1]) Arguments: data -- The data to split (DataFrame or dict of numpy arrays). ratio -- The size ratio of the resulting subsets Returns: list of subsets -- Subsets of the dataset (same type as the input dataset).
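A ratio-based split like this can be sketched with a random permutation and cumulative cut points. The following is a simplified illustration for DataFrames only (feyn's split also accepts dicts of numpy arrays; the `seed` parameter is an assumption added here for reproducibility):

```python
import numpy as np
import pandas as pd

def split_sketch(df: pd.DataFrame, ratio=(1, 1), seed=None):
    """Sketch: shuffle row indices, then cut them at positions
    proportional to the requested ratio."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(df))
    # Convert the ratio into cumulative fractions, then into row offsets.
    fractions = np.cumsum(ratio) / np.sum(ratio)
    cuts = (fractions[:-1] * len(df)).astype(int)
    return [df.iloc[idx] for idx in np.split(perm, cuts)]

df = pd.DataFrame({"x": range(10)})
train, test = split_sketch(df, ratio=(4, 1), seed=0)
print(len(train), len(test))  # 8 2
```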