Frequently Asked Questions

by: Kevin Broløs and Valdemar Stentoft-Hansen

Support & Sign-up

Where can I get support?

We're always available on our Discord Server! Alternatively, you can shoot us an email.


How do I sign up?

You don't! The Community QLattice works out of the box! If you want to use the QLattice for commercial purposes, you can contact us right here!


I'm getting an exception and I don't know what it means!

We're happy to help! Hit us up on Discord or shoot us an email!

Some common issues:

  • Wrong semantic types assigned
  • Numeric names for feature columns
  • NaN or infinite values in either the input or output features
  • For binary classification, output values should either be True/False or 1/0

Also ensure that:

  • You have enough memory available for your dataset, and you don't copy it unnecessarily

Do you have a technological whitepaper?

Not yet! But we're patent pending, and as soon as we get that approved, we'll be able to start publishing more details about what makes us tick. If you're still curious, we're happy to discuss any specific questions with you on our Discord!


Features and Data

Can I use numeric names for my input features?

We require feature names to be encoded as strings, so call str() on them first and everything will work as expected. If you plan to convert your models to a SymPy expression, however, we highly advise against numeric names. It will still work, but it'll make the expression that much harder to read.
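
For example, with a pandas DataFrame you can cast every column name to a string before handing the data to feyn. A minimal sketch; the DataFrame and its column names are hypothetical:

import pandas as pd

df = pd.DataFrame({0: [1.2, 3.4], 1: [5.6, 7.8], "target": [0, 1]})

# Cast every column name to a string so feyn accepts them as feature names.
df.columns = [str(c) for c in df.columns]  # columns are now "0", "1", "target"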


If I have binary data points, should they be numerical or categorical?

If there is no apparent ordering of your binary variable - "1 being higher than 0" - then go with the categorical semantic type. Truth be told though, there is little difference in how you treat them due to how we do automatic scaling and learning of weights.


General Usage Questions

Can I do a regression with multiple targets?

Unfortunately, that is not a possibility in the current framework - we instead suggest training multiple models, one for each target variable.


Can I classify on multiple classes?

Kind of. For simple multiclass cases, you can train it as a regression problem and round the predictions. For more complicated cases, you'll have to train a classifier for each class, aggregate over the predictions and do a softmax or argmax over the results. You can either just select the class with the highest probability, or select a threshold and then have your model communicate if it’s uncertain (i.e. none of them are above a satisfactory probability score).
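
Below is a minimal one-vs-rest sketch for the more complicated case, assuming the feyn.QLattice and auto_run interface used elsewhere in this FAQ; the train and test DataFrames, the "species" column and the class names are hypothetical:

import numpy as np
import feyn

ql = feyn.QLattice()
classes = ["setosa", "versicolor", "virginica"]
class_models = {}

for cls in classes:
    # One binary classifier per class: 1 if the row belongs to the class, else 0.
    binary_train = train.drop(columns=["species"])
    binary_train["is_" + cls] = (train["species"] == cls).astype(int)
    class_models[cls] = ql.auto_run(binary_train, output_name="is_" + cls, kind="classification")[0]

# Stack the per-class probabilities and pick the most probable class for each row.
probs = np.column_stack([class_models[cls].predict(test) for cls in classes])
predicted = np.array(classes)[probs.argmax(axis=1)]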


How do I get the loss of my model?

You can get the loss of any fitted model by looking at the property model.loss_value. You can also recompute the loss of the model using either a metric from the feyn.metrics library of functions, or one of the functions directly on the model itself, such as:

model.absolute_error(data)
model.squared_error(data)

Can I use a different loss function?

You can pass one of the supported loss functions to the loss_function parameter in the auto_run method. As of right now you can choose among squared_error, absolute_error and binary_cross_entropy.
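
A minimal sketch, assuming a QLattice ql and a DataFrame train with a "target" column (both hypothetical), and that the loss is passed by name as a string:

models = ql.auto_run(train, output_name="target", loss_function="absolute_error")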


Can I fit using a different metric, such as the F1 score, recall, etc.?

For the actual fitting process you won't be able to apply, for example, recall directly. We are using gradient descent and recall has no gradient. You have two other options though:

  1. You can apply sample_weights to your problem to put more weight on specific instances. For the recall example, putting more weight on "True" values would force the QLattice to find a model that to a larger extent solves for these cases. Sample weights can be applied as an input (array of weights) to the fit function.
  2. You can iterate through your list of models, calculate your metric of choice and sort by that (see the sketch below). The best model on your sorted list can be updated to the QLattice, and in that way skew the QLattice towards models that optimise for your metric.
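
Below is a hedged sketch of option 2, assuming binary classification models whose predict returns probabilities; models, holdout and the "target" column come from your own (hypothetical) workflow:

from sklearn.metrics import recall_score

# Sort the models by recall on a hold-out set, best first.
models_by_recall = sorted(
    models,
    key=lambda m: recall_score(holdout["target"], m.predict(holdout) > 0.5),
    reverse=True,
)
best = models_by_recall[0]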

Can I interrupt the fitting process based on loss?

We don't have explicit stop functionality, but what we recommend is to put a break in the fitting loop that compares the loss_value against your threshold. Note that other metrics are attached to the models as well, such as r2_score, rmse, AUC, etc.

After fitting, you can find the loss_value on the best model via the property:

best = models[0]
best.loss_value
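
If you drive the fitting yourself instead of using auto_run, such a break is straightforward. A hedged sketch using feyn's lower-level primitives (their names and signatures may differ between feyn versions); train, the "target" column and the threshold are hypothetical:

import feyn

ql = feyn.QLattice()
models = []
for epoch in range(50):
    models += ql.sample_models(train.columns, output_name="target", max_complexity=10)
    models = feyn.fit_models(models, train, loss_function="squared_error")
    models = feyn.prune_models(models)
    ql.update(models)
    if models[0].loss_value < 1e-3:  # stop as soon as the best model is good enough
        break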

How do I get the top five models?

The method auto_run returns a list of models, sorted with the best first, so you can use ordinary list operations:

best_five = models[:5]

Can I fix the parameters in my models?

It's not currently possible to fix the parameters for the nodes. They're exclusively learnt through fitting the dataset. The options we currently provide for designing the models are through filtering (excluding/including specific cells, depths, edges, that sort of thing), or the query language. Feel free to send us a suggestion if you have a specific use-case where this is important.


What is the "linear" interaction?

The linear transformation assigns a new weight and bias to the incoming values of the interaction in order to minimise the loss of the model as a whole; in other words, the cell computes w * x + b, where w and b are learnt jointly with the rest of the model's parameters. This is done via backpropagation as you would know it from neural networks - so there is no "local regression" taking place in the cell - rather it's a model-wide optimisation.


When a categorical binary value is passed into a tanh, isn't that redundant?

Locally in the model, yes: a 0/1 categorical input that feeds only into a tanh cell is redundant, as the split is "already made". However, consider a model where the categorical input is used in both the tanh and another transformation. In such a case the tanh is not redundant. The tanh assigns new values to the split, so the input could assign weights of, say, 0.5 and 0.6 to the two binary values, whereas the tanh would transform these weights into 0 and 1. This opens up possibilities for other transformations downstream in your model.


Does sympify() print the equation that gives the prediction?

Yes, and no. sympify() converts the model to a SymPy expression, which represents the underlying mathematical expression of the model. This SymPy expression can then be printed as an equation, but you can also work directly with it. To supplement this, you can use model.fit() to fine-tune a specific model (i.e. letting the stochastic gradient descent get all the way to the bottom of the minimum it is searching in). We do learning-rate dampening to ensure that we find the minimum. In this way, once you have found your preferred model you can tweak its parameters to get the optimal version of that model.
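
A minimal sketch, assuming a fitted model best and the sympify() method referenced above:

expr = best.sympify()
print(expr)  # the model printed as a symbolic equation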


How do I save my model once it has been trained?

You can find a guide on how to do that here!
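
In code, this is roughly what it looks like (a hedged sketch, assuming the model exposes a save method and feyn.Model a load method; the file name is hypothetical):

best.save("best_model.json")                  # persist the fitted model to disk
loaded = feyn.Model.load("best_model.json")   # restore it later for predictions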


How does the QLattice fare on many categorical and binary features?

“It depends”. It is not an issue even if all the variables are categorical, unless there’s a high uniqueness among values that could lead to overfitting through memorisation.

You should ask yourself some questions: Are they all mutually exclusive (like a one-hot encoding) or can multiple be set simultaneously? We use the categorical semantic type for categorical features, which allows us to fit models without one-hot encoding (and indeed one-hot encoding will hurt performance).

  • If they actually all hold individual (and combined) signal, you might have to be creative about the way you train in order to capture what's most important. We often use techniques such as mutual information, or run the QLattice with a low complexity to find the features that contain the most signal, and reduce the dimensionality of the dataset.
  • If the features are one-hot encoded (mutually exclusive), we recommend undoing that and training on the true categories instead, using the categorical semantic type (see the sketch below). That should improve your performance a lot, and simplify your interpretation of the resulting model.
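
A minimal sketch of declaring categorical inputs with the stypes argument, assuming a QLattice ql, a DataFrame train and hypothetical column names; "c" marks the categorical semantic type:

stypes = {"sex": "c", "smoker": "c"}
models = ql.auto_run(train, output_name="target", kind="classification", stypes=stypes)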

When I train, I only get a few features in the models - how do I force it to use all of them?

If you feel that the problem would be solved with more features, you might be interested in increasing the max_complexity to allow for more complex models. Alternatively, you can use our filter functionality (using contains) to try out different fixed combinations of features.
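
A minimal sketch of the first suggestion, assuming a QLattice ql and a DataFrame train with a "target" column (both hypothetical):

# Allow more complex models so that more features can enter.
models = ql.auto_run(train, output_name="target", max_complexity=15)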


I really want more features in my models - what do I do?

The soft cap is probably around ten features - if you have many more, you'll end up with an enormous model where you lose track of what is going on. We search for the simplest possible explanations, where the number of features is limited and somewhat comprehensible for the model builder. Having hundreds of signal-carrying features will not be a good fit for the QLattice without some good data preparation.

If possible, try fitting more variation into fewer features, although we know that is probably easier said than done. For example, when dealing with RNA sequences you can get a lot of information out of applying categorical "windows" of SNPs and aggregate statistics of the sequences.


