Predicting with a graph
by: Kevin Broløs
(Feyn version 1.4 or newer)
The graph is the final artifact; it is what other machine learning frameworks would call your output model.
A graph consists of one or more of your input features, interactions between them, and a variety of functions fitted to your dataset, leading to an output.
An example graph for the iris dataset could look like this:
In an IPython environment, you'll be able to hover over each of the interactions to get a tooltip with its internal state, such as the weights, biases and encodings.
from sklearn.datasets import make_classification
import pandas as pd
from feyn import QLattice
from feyn.plots import plot_confusion_matrix
from feyn.tools import split
ql = QLattice()
# Generate a dataset and put it into a dataframe
X, y = make_classification()
data = pd.DataFrame(X, columns=[str(i) for i in range(X.shape[1])])
data['target'] = y
# Train/test split
train, test = split(data)
# Get a classifier
qgraph = ql.get_classifier(train, 'target')
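# Fit the candidate graphs in the QGraph to the training data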
qgraph.fit(train)
# Select a graph from your fitted QGraph
best_graph = qgraph.best()[0]
Having selected the best graph, we can now use it to produce predictions and plot a confusion matrix:
# Get predictions
prediction = best_graph.predict(test)
plot_confusion_matrix(y_true=test['target'],
                      y_pred=prediction.round(),
                      title="Confusion Matrix [Test]")
There are also plenty of other plotting functions, metrics and goodies for inspecting graphs. You can see the full API reference here.
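Since the predictions are plain numeric arrays, standard scikit-learn metrics also apply directly. A small sketch using scikit-learn instead of Feyn's own helpers:
from sklearn.metrics import accuracy_score, roc_auc_score

# The classifier outputs continuous scores, so round them to get class labels
accuracy = accuracy_score(test['target'], prediction.round())
auc = roc_auc_score(test['target'], prediction)
print(f"Accuracy: {accuracy:.3f}  ROC AUC: {auc:.3f}")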