by: Kevin Broløs
(Feyn version 2.0.0 or newer)
When working with the models that come out of the QLattice, it is often useful to look at their complexity. For instance, complexity is already a constraint when we sample models (the `max_complexity` parameter), and it is indirectly used when calculating the `bic` of the models when trying to choose parsimonious ones.
For that reason, it's useful to understand how we measure complexity and what it means for the models, given the restrictions and the ways we build and compose them.
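To make the parsimony criterion concrete, here is a sketch of the standard Bayesian Information Criterion, in which complexity enters as the parameter count `k`. This is the textbook formula, not necessarily the exact computation Feyn performs internally, and the models below are hypothetical:

```python
import math

def bic(log_likelihood: float, n_params: int, n_samples: int) -> float:
    """Standard BIC: lower is better. Complexity (n_params) is penalized,
    and the penalty grows with the size of the dataset."""
    return n_params * math.log(n_samples) - 2.0 * log_likelihood

# Two hypothetical models with identical fit but different complexity:
simple = bic(log_likelihood=-120.0, n_params=3, n_samples=1000)
complex_ = bic(log_likelihood=-120.0, n_params=9, n_samples=1000)
assert simple < complex_  # at equal fit, the simpler model scores better
```

The key point is that at equal fit, the penalty term makes the less complex model win.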
Model complexity maps one-to-one to the number of edges in the resulting model graph. This is a useful measure, as adding an edge always corresponds to adding a feature or an interaction to the model.
It also means that by having a set complexity or a defined max complexity, there's a limit to how many features can be included in the model, since the number of edges has a direct relationship to the depth of a binary tree, and we never allow more than two inputs into each interaction.
Thus, the maximum number of features that can be represented is the complexity divided by two (rounding up), and the largest number of interactions (given that you have only one feature) is the complexity minus one. In general, the most interactions you can have is the complexity minus the number of features in the model.
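The arithmetic above can be captured in a couple of small helper functions. These are illustrative helpers, not part of the Feyn API, and simply encode the relationships just described:

```python
import math

def max_features(complexity: int) -> int:
    """Maximum number of features a model of a given complexity can hold:
    the complexity divided by two, rounded up."""
    return math.ceil(complexity / 2)

def max_interactions(complexity: int, n_features: int = 1) -> int:
    """Maximum number of interactions: the complexity minus the number of
    features. With a single feature this is complexity - 1."""
    return complexity - n_features

# For example, a model of complexity 5 can hold at most 3 features,
# and with a single feature at most 4 interactions:
assert max_features(5) == 3
assert max_interactions(5) == 4
```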
Below, we have composed a little table to help you reason about the maximum number of features and interactions you can expect to be present in models of different complexities:
| Complexity | Max Features | Max Interactions |
|------------|--------------|------------------|