Sinusoidal Representation Networks (SIRENs)
Conventional network architectures for implicit neural representations are incapable of modeling signals with fine detail and fail to represent a signal's spatial and temporal derivatives, even though these derivatives are essential for many physical signals defined implicitly as solutions to partial differential equations.
Networks with periodic activation functions (e.g., the sine) for implicit neural representations, dubbed sinusoidal representation networks or SIRENs, are ideally suited for representing complex natural signals and their derivatives.
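Why derivatives come for free can be seen in a tiny sketch: the derivative of sin is cos, a phase-shifted sine, so the analytic derivative of a sine network is itself a sine-like network. The one-hidden-layer model below is a hypothetical minimal example (not the paper's full architecture); it checks the chain-rule derivative against a finite difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal one-hidden-layer SIREN-style model f(x) = w2 . sin(w1 * x + b1).
# This is an illustrative sketch; weights are random, not trained.
w1 = rng.standard_normal(16)
b1 = rng.standard_normal(16)
w2 = rng.standard_normal(16)

def f(x):
    return w2 @ np.sin(w1 * x + b1)

def f_prime(x):
    # Chain rule: cos is a phase-shifted sine, so f' is again built
    # from sinusoids and remains smooth and well-behaved.
    return w2 @ (w1 * np.cos(w1 * x + b1))

# Sanity check against a central finite difference at a few points.
for x in (-1.0, 0.3, 2.0):
    h = 1e-5
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - f_prime(x)) < 1e-5
print("analytic derivative matches finite differences")
```

This smoothness of all derivatives is what lets a SIREN be supervised directly on derivative quantities, as required when fitting solutions of differential equations.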
SIRENs can be leveraged to solve challenging boundary value problems, such as the Poisson, Helmholtz, and wave equations.
This architecture is related to Fourier feature networks, because a sine activation in the first layer has the same effect as applying a Fourier feature encoding to the input.
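The equivalence follows from the identity cos(z) = sin(z + π/2): a first layer of sines with suitable biases reproduces a [sin, cos] Fourier feature mapping exactly. The snippet below is an illustrative sketch with a hypothetical random frequency matrix `B`.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal((4, 2))   # batch of four 2-D input coordinates
B = rng.standard_normal((2, 8))   # hypothetical random frequency matrix

# Explicit Fourier feature encoding of the input: [sin(xB), cos(xB)].
fourier = np.concatenate([np.sin(x @ B), np.cos(x @ B)], axis=1)

# The same mapping as a single sine layer: duplicate the frequencies
# and shift half of the biases by pi/2, since cos(z) = sin(z + pi/2).
W = np.concatenate([B, B], axis=1)
b = np.concatenate([np.zeros(8), np.full(8, np.pi / 2)])
siren_first = np.sin(x @ W + b)

print(np.allclose(fourier, siren_first))  # True
```

So the first SIREN layer can be read as a learned Fourier feature encoding, with the frequencies and phases trained rather than fixed.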
To showcase the capabilities of this type of network, the authors of the SIREN paper use it to solve the inhomogeneous Helmholtz equation; ReLU- and Tanh-based architectures fail entirely to converge to a solution.
In the time domain, a SIREN succeeds in solving the wave equation, while a Tanh-based architecture fails to discover the correct solution.
With the proposed initialization, the input of each sine activation follows a standard normal distribution, and the output of each sine activation follows an arcsine distribution. This preserves the distribution of activations from layer to layer, allowing deep architectures to be constructed and trained effectively.
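The arcsine-output property can be checked empirically: when a sine is applied to a wide normal distribution (the phase wraps many times around the circle), the outputs concentrate near ±1 following the arcsine law. The sketch below uses the frequency scale ω₀ = 30 from the SIREN paper's first layer and compares the empirical CDF of sin outputs with the arcsine CDF F(t) = (2/π)·arcsin(√((t+1)/2)).

```python
import numpy as np

rng = np.random.default_rng(0)

# Pre-activations with a large frequency scale so the phase wraps
# many times; omega_0 = 30 is the scale used in the SIREN paper.
omega_0 = 30.0
x = omega_0 * rng.standard_normal(100_000)
y = np.sin(x)

# Compare the empirical CDF of the outputs with the arcsine CDF on [-1, 1]:
#   F(t) = (2/pi) * arcsin(sqrt((t + 1) / 2))
t = np.sort(y)
empirical = np.arange(1, t.size + 1) / t.size
arcsine = (2 / np.pi) * np.arcsin(np.sqrt((t + 1) / 2))
gap = np.abs(empirical - arcsine).max()
print("max CDF gap:", gap)  # small, i.e. the output is arcsine-distributed
```

Because the output distribution has a fixed, known form regardless of depth, stacking many sine layers does not cause activations to collapse or blow up, which is what makes deep SIRENs trainable.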