Fourier Network

Neural networks tend to favor low-frequency solutions, a phenomenon known as "spectral bias." This can hurt both how well the model learns during training and the accuracy of the final model. One way to counter it is input encoding: transforming the inputs into a higher-dimensional feature space through high-frequency functions. Fourier networks are a great way to do this!
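To make the idea concrete, here is a minimal sketch of a Fourier feature encoding in NumPy, following the sinusoidal mapping from [1]. The names (`fourier_encode`, `B`) are illustrative, not part of any library API; in a Fourier network the frequency matrix `B` would typically be a trainable parameter.

```python
import numpy as np

def fourier_encode(x, B):
    """Map inputs x of shape (N, d_in) to Fourier features of shape (N, 2*m)
    using a frequency matrix B of shape (d_in, m):
    features = [sin(2*pi*x@B), cos(2*pi*x@B)]."""
    proj = 2.0 * np.pi * x @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

rng = np.random.default_rng(0)
x = rng.uniform(size=(4, 2))               # 4 sample points in a 2-D input space
B = rng.normal(scale=10.0, size=(2, 16))   # Gaussian-sampled frequencies
features = fourier_encode(x, B)
print(features.shape)  # (4, 32)
```

The downstream fully connected layers then operate on these bounded, high-frequency features instead of the raw coordinates.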

These networks have shown significantly better results than regular fully connected neural networks, thanks to their ability to capture sharp gradients.

The input encoding layer in Modulus is a variation of the one proposed in [1], but adds a twist with trainable encoding, which makes it even more powerful!

You don't need to make any special adjustments to the geometry or constraints when tweaking the neural network architecture. Best of all, the architecture doesn't depend on the specific problem or the parameters you're working with, so you can apply it to a wide variety of situations without any extra hassle.

Here's a quick tip about frequencies: they play a significant role in these networks. You can pick the frequency spectrum from several options (full/axis/gaussian/diagonal) and decide how many frequencies to use within that spectrum. The ideal number of frequencies varies from problem to problem, so it's important to balance better accuracy against the extra computing effort that comes with additional Fourier features. For example, the default settings work well for CFD simulations of laminar flows, but for turbulent flows you might want to increase the number of frequencies to 30-40.
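To give the spectrum options some shape, here is one plausible way the different frequency choices could be constructed. The `make_frequencies` helper and the exact constructions are illustrative assumptions, not the Modulus internals:

```python
import numpy as np

def make_frequencies(kind, n, d_in, rng):
    """Build a (d_in, m) frequency matrix for a Fourier encoding.

    The spectrum names mirror the options above; the constructions
    here are illustrative guesses, not library internals.
    """
    if kind == "axis":
        # n integer frequencies along each input axis independently
        B = np.zeros((d_in, d_in * n))
        for i in range(d_in):
            B[i, i * n:(i + 1) * n] = np.arange(1, n + 1)
        return B
    if kind == "full":
        # the full Cartesian product of per-axis frequencies
        grid = np.meshgrid(*[np.arange(1, n + 1)] * d_in, indexing="ij")
        return np.stack(grid, axis=0).reshape(d_in, -1).astype(float)
    if kind == "gaussian":
        # frequencies sampled from a Gaussian, as in random Fourier features
        return rng.normal(scale=1.0, size=(d_in, n))
    if kind == "diagonal":
        # the same n frequencies applied to every axis simultaneously
        return np.tile(np.arange(1, n + 1), (d_in, 1)).astype(float)
    raise ValueError(f"unknown spectrum: {kind}")

rng = np.random.default_rng(0)
print(make_frequencies("axis", 4, 2, rng).shape)      # (2, 8)
print(make_frequencies("full", 4, 2, rng).shape)      # (2, 16)
print(make_frequencies("gaussian", 4, 2, rng).shape)  # (2, 4)
```

Note how the choice of spectrum changes the number of columns in the frequency matrix, and hence the number of Fourier features (and the compute cost) per input point.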

For some more complex examples, we can also apply encoding to other parameters besides the inputs themselves. This can make the model even more accurate and help it learn faster. The Modulus module takes care of applying input encoding to both the inputs and the other parameters in a fully decoupled setting and then concatenates the spatial/temporal and parametric Fourier features together.
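The decoupled setting described above can be sketched as follows, assuming hypothetical names (`fourier_encode`, `B_sp`, `B_par`): the spatial/temporal inputs and the parametric inputs get separate frequency matrices, and their features are concatenated at the end.

```python
import numpy as np

def fourier_encode(z, B):
    """Sinusoidal Fourier feature mapping: (N, d) -> (N, 2*m)."""
    proj = 2.0 * np.pi * z @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

rng = np.random.default_rng(1)
xyt = rng.uniform(size=(8, 3))      # spatial/temporal inputs, e.g. (x, y, t)
params = rng.uniform(size=(8, 2))   # problem parameters, e.g. viscosity, inlet velocity

# Separate, independently chosen frequency matrices for each group of inputs
B_sp = rng.normal(scale=10.0, size=(3, 16))
B_par = rng.normal(scale=10.0, size=(2, 8))

# Encode each group on its own, then concatenate the features
encoded = np.concatenate(
    [fourier_encode(xyt, B_sp), fourier_encode(params, B_par)], axis=-1
)
print(encoded.shape)  # (8, 48)
```

Keeping the two encodings decoupled lets you choose the spectrum and number of frequencies for the parametric inputs independently of the spatial/temporal ones.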

So, in a nutshell, Fourier networks and input encoding can help neural networks overcome their preference for low-frequency solutions, leading to better learning and more accurate models.


  1. Tancik, Matthew, et al. "Fourier features let networks learn high frequency functions in low dimensional domains." Advances in Neural Information Processing Systems 33 (2020): 7537-7547.