Implicit Neural Representations with Periodic Activation Functions
TRANSCRIPT
CSC2457 3D & Geometric Deep Learning
2021-03-30
Presenter: Zikun Chen
Instructor: Animesh Garg
Paper: Implicit Neural Representations with Periodic Activation Functions, by Vincent Sitzmann, Julien N. P. Martel, Alexander Bergman, David B. Lindell, and Gordon Wetzstein
Representation of Signals: Discrete
Traditionally, discrete representations are used for signals:
pixels, point clouds, samples of a sound wave.
Signal Parametrized by Neural Networks
In recent years there has been significant research interest in implicit neural representations.
[Diagram: input coordinates → implicit neural representation → signed distance]
Benefits of Implicit Neural Representations
Unlike discrete representations:
• Agnostic to grid resolution: model memory scales with signal complexity
• Derivatives are computed automatically (autodiff)
[Diagram: input coordinates → implicit neural representation → output SDF]
Problems of Implicit Neural Representations
• Compared to discrete representations, can fail to encode high-frequency details
[Diagram: input coordinates → implicit neural representation → pixels, point cloud, samples of a sound wave]
(blurry edges, missing curtains, missing frequencies)
Problems of ReLU etc.
• ReLU is piecewise linear → second- and higher-order derivatives are zero
• This loses the information carried in the higher-order derivatives of signals
• Other activations (softplus, tanh): derivatives are not well behaved
We want second- and higher-order derivatives ≠ 0, and tractable derivative behaviour.
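Sine has exactly this property: its derivative is a phase-shifted sine, so no derivative order vanishes or degenerates. A quick NumPy check:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 1000)

# d/dx sin(x) = cos(x) = sin(x + pi/2): differentiating a sine just shifts
# its phase, so second- and higher-order derivatives never vanish, unlike
# ReLU, whose second derivative is zero almost everywhere.
assert np.allclose(np.cos(x), np.sin(x + np.pi / 2))
print("cos(x) == sin(x + pi/2)")
```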
SIREN: Sinusoidal Representation Networks
Contributions
- Prior work on periodic activations:
  - Hypernetwork functional image representation:
    - Constructed a hypernetwork that produces the weights of a target network parametrizing RGB images; cosine was used as the target network's activation function
    - Did not study the behaviour of derivatives or other applications of the cosine activation
  - Taming the waves: sine as activation function in deep neural networks:
    - Lacked a thorough investigation of the potential benefits of sine
- SIREN proposes:
  - A simple MLP architecture for implicit neural representations that uses sine as the activation function
- Compared to prior work, this paper:
  - Proposes a continuous implicit neural representation with periodic activations that robustly fits complicated natural signals as well as their derivatives
  - Provides an initialization scheme for this type of network, and validates that its weights can be learned with hypernetworks
  - Demonstrates a wide range of applications
Problem Setting
Find a class of functions Φ that satisfies relations of the form F(x, Φ, ∇Φ, ∇²Φ, …) = 0
on the continuous domain of x.
Φ is implicitly defined by the relation F.
Neural networks that parametrize Φ are called implicit neural representations.
Problem Setting: example
Each input x produces output Φ(x), supervised by the target f(x). Find the Φ that minimizes the loss, with
constraint C_m: the 2-norm ‖Φ(x) − f(x)‖
Problem Setting: example (gradient supervision)
Each input x produces output Φ(x), supervised by the gradients of the target image. Find the Φ that minimizes the loss, with
constraint C_m: the 2-norm ‖∇Φ(x) − ∇f(x)‖
SIREN Architecture
- Essentially an MLP with sine activations
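A minimal sketch of the forward pass (NumPy, hypothetical layer sizes; not the authors' reference implementation): hidden layers apply sin(ω·(Wx + b)) with ω = 30, and the final layer is linear.

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_forward(x, weights, biases, omega=30.0):
    """MLP whose hidden layers use a sine activation, sin(omega * (Wx + b));
    the final layer is a plain linear map."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.sin(omega * (h @ W.T + b))
    return h @ weights[-1].T + biases[-1]

# Toy 2 -> 64 -> 64 -> 1 network with placeholder weights
# (illustration only; the paper's initialization is discussed below).
dims = [2, 64, 64, 1]
weights = [rng.standard_normal((o, i)) / i for i, o in zip(dims[:-1], dims[1:])]
biases = [np.zeros(o) for o in dims[1:]]

coords = rng.uniform(-1, 1, size=(5, 2))  # 2-D coordinates in [-1, 1]
out = siren_forward(coords, weights, biases)
print(out.shape)  # (5, 1)
```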
SIREN Initialization Scheme
- Crucial: without carefully chosen, uniformly distributed weights, SIREN does not perform well
- Key idea: preserve the distribution of activations so that the final output at initialization does not depend on the number of layers
SIREN Initialization Scheme
- Assume x ~ U(-1, 1), x ∈ Rⁿ
- Initialize the weights W of the first layer such that sin(30(Wx + b)) spans multiple periods over [-1, 1]
SIREN Initialization Scheme
- Assume x ~ U(-1, 1), x ∈ Rⁿ
- For the other layers, with n-dimensional input, initialize weights according to U(-√(6/n), √(6/n))
- This ensures that the input to each sine activation is approximately N(0, 1)
Experiments: Image Reconstruction
• Baselines: derivatives are not well behaved, so gradient information is lost in the reconstructed images
• SIREN: gradient and Laplacian are well preserved
Experiments: Video Reconstruction
• SIREN preserves more high-frequency detail
Experiments: Audio Reconstruction
Experiments: Poisson Equation
- Supervise with derivatives only: the gradients ∇f, or the Laplacian ∆f = ∇ · ∇f
Experiments: Poisson Equation - Image Reconstruction
Quantitative results (PSNR):
- tanh and ReLU with positional encoding (PE) both failed to fit the Laplacian
- SIREN shows only minor artifacts, owing to the ill-posed nature of the problem
Experiments: Poisson Equation - Image Editing
- Fit two SIRENs f1, f2 for image reconstruction with the gradient-supervision loss
- Fit a third SIREN for image editing with the same loss as above, except that ∇x f(x) = α·∇f1(x) + (1 − α)·∇f2(x), α ∈ [0, 1]
Experiments: Poisson Equation - Image Editing
Decent results with gradient supervision only
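The supervision target for the editing network is just a pointwise blend of the two gradient fields. A sketch with toy data, where finite-difference gradients stand in for the fitted SIRENs' analytic gradients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy images standing in for the signals fitted by f1 and f2.
img1 = rng.random((32, 32))
img2 = rng.random((32, 32))

# Finite-difference gradients stand in for the analytic gradients
# of the two fitted networks.
gy1, gx1 = np.gradient(img1)
gy2, gx2 = np.gradient(img2)

# Blended gradient target: grad f = alpha * grad f1 + (1 - alpha) * grad f2.
alpha = 0.3
target_gx = alpha * gx1 + (1 - alpha) * gx2
target_gy = alpha * gy1 + (1 - alpha) * gy2

# alpha = 1 recovers f1's gradients exactly; alpha = 0 recovers f2's.
print(target_gx.shape, target_gy.shape)
```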
Quick Recap: Signed Distance Function (SDF)
[Figure: SDF visualized on the x-y plane; Park et al. 2019]
Experiments: Representing Shapes with SDF
Loss terms:
- ψ(x) = exp(−α·|Φ(x)|), α ≫ 1: penalizes Φ(x) = 0 when x is not on the surface
- ‖∇xΦ(x)‖ = 1 everywhere (the gradient of a true SDF has unit norm); Φ(x) = 0 when x is on the surface, where ∇xΦ(x) matches the surface normals
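These constraints are cheap to evaluate on a closed-form SDF. As a sanity check, assuming the exact SDF of a unit sphere, Φ(x) = ‖x‖ − 1, as a stand-in for the network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the network: the exact SDF of a unit sphere.
def phi(x):
    return np.linalg.norm(x, axis=1) - 1.0

def grad_phi(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

x_on = rng.standard_normal((100, 3))
x_on /= np.linalg.norm(x_on, axis=1, keepdims=True)  # points on the surface
x_off = 2.0 * rng.standard_normal((100, 3))          # points off the surface

alpha = 100.0
on_surface = np.mean(np.abs(phi(x_on)))                                  # want 0
eikonal = np.mean((np.linalg.norm(grad_phi(x_off), axis=1) - 1.0) ** 2)  # want 0
off_penalty = np.mean(np.exp(-alpha * np.abs(phi(x_off))))               # psi term, small

print(round(on_surface, 8), round(eikonal, 8))
```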
Experiments: Representing Shapes with SDF
Fine details are missing in the baseline (left)
Experiments: Solving Differential Equations
Helmholtz equation
[Diagram: known source function; unknown wave field; coefficient 1/c(x)², where c(x) is the wave velocity]
Experiments: Solving Differential Equations
- c(x) known: only SIREN's output was able to match the results from a numerical solver
- c(x) unknown: SIREN's velocity model outperformed the principled solver
Experiments: Learning Priors
Discussions
- Poisson image editing is nothing new: mixing gradients is a well-established technique
- The authors do not give a formal definition of "well behaved", which is apparently an important property of the sine activation
- The authors mention that an improperly initialized SIREN performs badly, but do not link this back to the "key idea" of the initialization scheme; an ablation study could have helped
- TensorFlow's default initializer (Glorot uniform) and PyTorch's xavier_uniform_ follow a similar scheme, except that the bound also depends on the output dimension: √(6/(fan_in + fan_out))
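For comparison, the two uniform-initialization bounds (the slides' fan-in-only bound vs. the Glorot-style fan-in + fan-out bound) for a square layer:

```python
import math

fan_in, fan_out = 256, 256

siren_bound = math.sqrt(6.0 / fan_in)                # depends on fan_in only
glorot_bound = math.sqrt(6.0 / (fan_in + fan_out))   # also uses fan_out

# For a square layer (fan_in == fan_out), the fan-in-only bound is exactly
# sqrt(2) times wider than the Glorot-style bound.
print(round(siren_bound, 4), round(glorot_bound, 4))  # 0.1531 0.1083
```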
Contributions (Recap)
- Prior work on periodic activations:
  - Hypernetwork functional image representation:
    - Constructed a hypernetwork that produces the weights of a target network parametrizing RGB images; cosine was used as the target network's activation function
    - Did not study the behaviour of derivatives or other applications of the cosine activation
  - Taming the waves: sine as activation function in deep neural networks:
    - Lacked a thorough investigation of the potential benefits of sine
- SIREN proposes:
  - A simple MLP architecture for implicit neural representations that uses sine as the activation function
- Compared to prior work, this paper:
  - Proposes a continuous implicit neural representation with periodic activations that robustly fits complicated natural signals as well as their derivatives
  - Provides an initialization scheme for this type of network, and validates that its weights can be learned with hypernetworks
  - Demonstrates a wide range of applications
Representation of Signals Discrete
Traditionally discrete representations for signals are used
Pixels Point Cloud Samples of sound wave
Signal Parametrized by Neural Networks
In recent years therersquos been significant research interest on implicit neural representations
Implicit neural representation Signed distance
Input
Coordinates
Benefits of Implicit Neural Representations
bull Unlike discrete representations
bull Agnostic to grid resolution model memory scales with signal complexity
bull Differentiations computed automatically
Input
Coordinates
Implicit neural representation Output SDF
Problems of Implicit Neural Representations
bull Compared to discrete representations can fail to encode high frequency details
Implicit neural representation
Input
Coordinates
Pixels Point Cloud Samples of sound wave
blurry edges missing curtains missing frequencies
Problems of ReLU etc
bull ReLU Linearity-gt second or higher order derivatives == 0
bull Losing information in higher-order derivatives of signals
bull Other activations (softplus tanh) derivatives not well behaved
Problems of ReLU etc
bull ReLU Linearity-gt second or higher order derivatives == 0
bull Losing information in higher-order derivatives of signals
bull Other activations (softplus tanh) derivatives not well behaved
We want second or higher order
derivatives =0 amp tractable derivative
behaviours
SIREN Sinusoidal Representation Networks
Contributions
- Prior work - periodic activation
- Hypernetwork functional image representation
- Constructed a hypernetwork to produce weights of a target network which parametrizes RGB images Cosine was used as the activation function of the target network
- didnrsquot study behaviours of derivatives or other applications of cosine activation
- Taming the waves sine as activation function in deep neural networks
- Lack of preliminary investigation of potential benefits of sine
- SIREN proposes
- A simple MLP architecture for implicit neural representations that uses sine as activation function
- Compared to prior work this paper
- Proposes a continuous implicit neural representation using periodic activation that fits complicated natural signals as well astheir derivatives robustly
- Provides an initialization scheme for this type of network and validates that weights can be learned using hypernetworks
- Demonstrates a wide range of applications
Problem Setting
Find a class of functions Φ that satisfies the relation F
On the continuous domain of x
Φ implicitly defined by the relation F
Neural networks that parametrize Φ implicit neural representations
Problem Setting
Problem Setting example
Each Input x Output Φ(x) supervised by Find Φ that minimizes
Cm 2-norm of Φ(x)-f(x)
Problem Setting example
Each Input x Output Φ(x) supervised by Find Φ that minimizes
Gradients of the target image
Find Φ that minimizes
Cm 2-norm of Φrsquo(x)-frsquo(x)
SIREN Architecture
- Basically MLPs with Sine activations
SIREN Initialization scheme
- Crucial Without carefully chosen uniformly distributed weights SIREN doesnrsquot perform well
- Key idea preserve the distribution of activations such that the final output at initialization does not depend of the number of layers
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- Initialize the weights W of the first layer such that sin(30Wx+b) spans multiple periods over [-11]
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- For other layers with input x in n-dimensional space initialize weights according to U(-radic(1113109 6n) radic(1113109 6n))
- Ensures that input to each activation follows N(01) approximately
Experiments Image reconstruction
bull Derivatives not well behaved Losing gradient informations in reconstructed images
bull SIREN gradient laplacian well preserved
Experiments Video reconstruction
bull SIREN more high frequency details preserved
Experiments Audio reconstruction
Experiments Poisson equation
- gradients nablaf Laplacian ∆f = nabla middot nablaf
SIREN
Experiments Poisson equation- Image reconstruction
Quantitative results for PSNR
- tanhampReLU PE both failed for fitting laplacian
- SIREN minor issues due to the ill-posed problem nature
Experiments Poisson equation- Image editing
- Fit 2 SIRENs f1 f2 for image reconstruction with loss function
- Fit a third SIREN for image editing with the same loss function as above except that nablaxf(x)=αmiddotnablaf1(x)+(1minusα)middotnablaf2(x) αisin[01]
Experiments Poisson equation- Image editing
Decent results with gradient supervision only
Quick recap Signed distance function (SDF)
X-Y plane
Park et al 2019
Experiments Representing shapes with SDF
ψ(x) = exp(minusα middot |Φ(x)|) α ≫ 1 Penalize Φ(x) ==0 when x is not on the surface
Gradient of SDF ==1 SDF = 0 when x is on the surface nablaxΦ(x) == normals
Experiments Representing shapes with SDF
Fine details missing in the baseline(left)
Experiments Solving Differential Equations
Helmholtz equation
Known source functionWave field (unknown) 1(c(x))^2 c(x) wave velocity
Experiments Solving Differential Equations
c(x) known
Only output of SIREN was able to match results from a numerical solver
c(x) unknown
SIREN velocity model outperformed principled solver
Experiments Learning priors
Discussions
- Poisson image editing is nothing new Mixing gradients
- The authors didnrsquot give a formal definition of ldquowell-behavedrdquo which is apparently an important property of sine activation
- The authors mentioned that SIREN initialized improperly had bad performance but didnrsquot link this back to the ldquokey ideardquo in the initialization scheme Couldrsquove done an ablation study
- PyTorch Tensorflow also follows a similar default initialization scheme except that it also depends on output dimensions sqrt(6 (fan_in + fan_out))
Contributions (Recap)
- Prior work - periodic activation
- Hypernetwork functional image representation
- Constructed a hypernetwork to produce weights of a target network which parametrizes RGB images Cosine was used as the activation function of the target network
- didnrsquot study behaviours of derivatives or other applications of cosine activation
- Taming the waves sine as activation function in deep neural networks
- Lack of preliminary investigation of potential benefits of sine
- SIREN proposes
- A simple MLP architecture for implicit neural representations that uses sine as activation function
- Compared to prior work this paper
- Proposes a continuous implicit neural representation using periodic activation that fits complicated natural signals as well astheir derivatives robustly
- Provides an initialization scheme for this type of network and validates that weights can be learned using hypernetworks
- Demonstrates a wide range of applications
Signal Parametrized by Neural Networks
In recent years therersquos been significant research interest on implicit neural representations
Implicit neural representation Signed distance
Input
Coordinates
Benefits of Implicit Neural Representations
bull Unlike discrete representations
bull Agnostic to grid resolution model memory scales with signal complexity
bull Differentiations computed automatically
Input
Coordinates
Implicit neural representation Output SDF
Problems of Implicit Neural Representations
bull Compared to discrete representations can fail to encode high frequency details
Implicit neural representation
Input
Coordinates
Pixels Point Cloud Samples of sound wave
blurry edges missing curtains missing frequencies
Problems of ReLU etc
bull ReLU Linearity-gt second or higher order derivatives == 0
bull Losing information in higher-order derivatives of signals
bull Other activations (softplus tanh) derivatives not well behaved
Problems of ReLU etc
bull ReLU Linearity-gt second or higher order derivatives == 0
bull Losing information in higher-order derivatives of signals
bull Other activations (softplus tanh) derivatives not well behaved
We want second or higher order
derivatives =0 amp tractable derivative
behaviours
SIREN Sinusoidal Representation Networks
Contributions
- Prior work - periodic activation
- Hypernetwork functional image representation
- Constructed a hypernetwork to produce weights of a target network which parametrizes RGB images Cosine was used as the activation function of the target network
- didnrsquot study behaviours of derivatives or other applications of cosine activation
- Taming the waves sine as activation function in deep neural networks
- Lack of preliminary investigation of potential benefits of sine
- SIREN proposes
- A simple MLP architecture for implicit neural representations that uses sine as activation function
- Compared to prior work this paper
- Proposes a continuous implicit neural representation using periodic activation that fits complicated natural signals as well astheir derivatives robustly
- Provides an initialization scheme for this type of network and validates that weights can be learned using hypernetworks
- Demonstrates a wide range of applications
Problem Setting
Find a class of functions Φ that satisfies the relation F
On the continuous domain of x
Φ implicitly defined by the relation F
Neural networks that parametrize Φ implicit neural representations
Problem Setting
Problem Setting example
Each Input x Output Φ(x) supervised by Find Φ that minimizes
Cm 2-norm of Φ(x)-f(x)
Problem Setting example
Each Input x Output Φ(x) supervised by Find Φ that minimizes
Gradients of the target image
Find Φ that minimizes
Cm 2-norm of Φrsquo(x)-frsquo(x)
SIREN Architecture
- Basically MLPs with Sine activations
SIREN Initialization scheme
- Crucial Without carefully chosen uniformly distributed weights SIREN doesnrsquot perform well
- Key idea preserve the distribution of activations such that the final output at initialization does not depend of the number of layers
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- Initialize the weights W of the first layer such that sin(30Wx+b) spans multiple periods over [-11]
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- For other layers with input x in n-dimensional space initialize weights according to U(-radic(1113109 6n) radic(1113109 6n))
- Ensures that input to each activation follows N(01) approximately
Experiments Image reconstruction
bull Derivatives not well behaved Losing gradient informations in reconstructed images
bull SIREN gradient laplacian well preserved
Experiments Video reconstruction
bull SIREN more high frequency details preserved
Experiments Audio reconstruction
Experiments Poisson equation
- gradients nablaf Laplacian ∆f = nabla middot nablaf
SIREN
Experiments Poisson equation- Image reconstruction
Quantitative results for PSNR
- tanhampReLU PE both failed for fitting laplacian
- SIREN minor issues due to the ill-posed problem nature
Experiments Poisson equation- Image editing
- Fit 2 SIRENs f1 f2 for image reconstruction with loss function
- Fit a third SIREN for image editing with the same loss function as above except that nablaxf(x)=αmiddotnablaf1(x)+(1minusα)middotnablaf2(x) αisin[01]
Experiments Poisson equation- Image editing
Decent results with gradient supervision only
Quick recap Signed distance function (SDF)
X-Y plane
Park et al 2019
Experiments Representing shapes with SDF
ψ(x) = exp(minusα middot |Φ(x)|) α ≫ 1 Penalize Φ(x) ==0 when x is not on the surface
Gradient of SDF ==1 SDF = 0 when x is on the surface nablaxΦ(x) == normals
Experiments Representing shapes with SDF
Fine details missing in the baseline(left)
Experiments Solving Differential Equations
Helmholtz equation
Known source functionWave field (unknown) 1(c(x))^2 c(x) wave velocity
Experiments Solving Differential Equations
c(x) known
Only output of SIREN was able to match results from a numerical solver
c(x) unknown
SIREN velocity model outperformed principled solver
Experiments Learning priors
Discussions
- Poisson image editing is nothing new Mixing gradients
- The authors didnrsquot give a formal definition of ldquowell-behavedrdquo which is apparently an important property of sine activation
- The authors mentioned that SIREN initialized improperly had bad performance but didnrsquot link this back to the ldquokey ideardquo in the initialization scheme Couldrsquove done an ablation study
- PyTorch Tensorflow also follows a similar default initialization scheme except that it also depends on output dimensions sqrt(6 (fan_in + fan_out))
Contributions (Recap)
- Prior work - periodic activation
- Hypernetwork functional image representation
- Constructed a hypernetwork to produce weights of a target network which parametrizes RGB images Cosine was used as the activation function of the target network
- didnrsquot study behaviours of derivatives or other applications of cosine activation
- Taming the waves sine as activation function in deep neural networks
- Lack of preliminary investigation of potential benefits of sine
- SIREN proposes
- A simple MLP architecture for implicit neural representations that uses sine as activation function
- Compared to prior work this paper
- Proposes a continuous implicit neural representation using periodic activation that fits complicated natural signals as well astheir derivatives robustly
- Provides an initialization scheme for this type of network and validates that weights can be learned using hypernetworks
- Demonstrates a wide range of applications
Benefits of Implicit Neural Representations
bull Unlike discrete representations
bull Agnostic to grid resolution model memory scales with signal complexity
bull Differentiations computed automatically
Input
Coordinates
Implicit neural representation Output SDF
Problems of Implicit Neural Representations
bull Compared to discrete representations can fail to encode high frequency details
Implicit neural representation
Input
Coordinates
Pixels Point Cloud Samples of sound wave
blurry edges missing curtains missing frequencies
Problems of ReLU etc
bull ReLU Linearity-gt second or higher order derivatives == 0
bull Losing information in higher-order derivatives of signals
bull Other activations (softplus tanh) derivatives not well behaved
Problems of ReLU etc
bull ReLU Linearity-gt second or higher order derivatives == 0
bull Losing information in higher-order derivatives of signals
bull Other activations (softplus tanh) derivatives not well behaved
We want second or higher order
derivatives =0 amp tractable derivative
behaviours
SIREN Sinusoidal Representation Networks
Contributions
- Prior work - periodic activation
- Hypernetwork functional image representation
- Constructed a hypernetwork to produce weights of a target network which parametrizes RGB images Cosine was used as the activation function of the target network
- didnrsquot study behaviours of derivatives or other applications of cosine activation
- Taming the waves sine as activation function in deep neural networks
- Lack of preliminary investigation of potential benefits of sine
- SIREN proposes
- A simple MLP architecture for implicit neural representations that uses sine as activation function
- Compared to prior work this paper
- Proposes a continuous implicit neural representation using periodic activation that fits complicated natural signals as well astheir derivatives robustly
- Provides an initialization scheme for this type of network and validates that weights can be learned using hypernetworks
- Demonstrates a wide range of applications
Problem Setting
Find a class of functions Φ that satisfies the relation F
On the continuous domain of x
Φ implicitly defined by the relation F
Neural networks that parametrize Φ implicit neural representations
Problem Setting
Problem Setting example
Each Input x Output Φ(x) supervised by Find Φ that minimizes
Cm 2-norm of Φ(x)-f(x)
Problem Setting example
Each Input x Output Φ(x) supervised by Find Φ that minimizes
Gradients of the target image
Find Φ that minimizes
Cm 2-norm of Φrsquo(x)-frsquo(x)
SIREN Architecture
- Basically MLPs with Sine activations
SIREN Initialization scheme
- Crucial Without carefully chosen uniformly distributed weights SIREN doesnrsquot perform well
- Key idea preserve the distribution of activations such that the final output at initialization does not depend of the number of layers
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- Initialize the weights W of the first layer such that sin(30Wx+b) spans multiple periods over [-11]
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- For other layers with input x in n-dimensional space initialize weights according to U(-radic(1113109 6n) radic(1113109 6n))
- Ensures that input to each activation follows N(01) approximately
Experiments Image reconstruction
bull Derivatives not well behaved Losing gradient informations in reconstructed images
bull SIREN gradient laplacian well preserved
Experiments Video reconstruction
bull SIREN more high frequency details preserved
Experiments Audio reconstruction
Experiments Poisson equation
- gradients nablaf Laplacian ∆f = nabla middot nablaf
SIREN
Experiments Poisson equation- Image reconstruction
Quantitative results for PSNR
- tanhampReLU PE both failed for fitting laplacian
- SIREN minor issues due to the ill-posed problem nature
Experiments Poisson equation- Image editing
- Fit 2 SIRENs f1 f2 for image reconstruction with loss function
- Fit a third SIREN for image editing with the same loss function as above except that nablaxf(x)=αmiddotnablaf1(x)+(1minusα)middotnablaf2(x) αisin[01]
Experiments Poisson equation- Image editing
Decent results with gradient supervision only
Quick recap Signed distance function (SDF)
X-Y plane
Park et al 2019
Experiments Representing shapes with SDF
ψ(x) = exp(minusα middot |Φ(x)|) α ≫ 1 Penalize Φ(x) ==0 when x is not on the surface
Gradient of SDF ==1 SDF = 0 when x is on the surface nablaxΦ(x) == normals
Experiments Representing shapes with SDF
Fine details missing in the baseline(left)
Experiments Solving Differential Equations
Helmholtz equation
Known source functionWave field (unknown) 1(c(x))^2 c(x) wave velocity
Experiments Solving Differential Equations
c(x) known
Only output of SIREN was able to match results from a numerical solver
c(x) unknown
SIREN velocity model outperformed principled solver
Experiments Learning priors
Discussions
- Poisson image editing is nothing new Mixing gradients
- The authors didnrsquot give a formal definition of ldquowell-behavedrdquo which is apparently an important property of sine activation
- The authors mentioned that SIREN initialized improperly had bad performance but didnrsquot link this back to the ldquokey ideardquo in the initialization scheme Couldrsquove done an ablation study
- PyTorch Tensorflow also follows a similar default initialization scheme except that it also depends on output dimensions sqrt(6 (fan_in + fan_out))
Contributions (Recap)
- Prior work - periodic activation
- Hypernetwork functional image representation
- Constructed a hypernetwork to produce weights of a target network which parametrizes RGB images Cosine was used as the activation function of the target network
- didnrsquot study behaviours of derivatives or other applications of cosine activation
- Taming the waves sine as activation function in deep neural networks
- Lack of preliminary investigation of potential benefits of sine
- SIREN proposes
- A simple MLP architecture for implicit neural representations that uses sine as activation function
- Compared to prior work this paper
- Proposes a continuous implicit neural representation using periodic activation that fits complicated natural signals as well astheir derivatives robustly
- Provides an initialization scheme for this type of network and validates that weights can be learned using hypernetworks
- Demonstrates a wide range of applications
Problems of Implicit Neural Representations
bull Compared to discrete representations can fail to encode high frequency details
Implicit neural representation
Input
Coordinates
Pixels Point Cloud Samples of sound wave
blurry edges missing curtains missing frequencies
Problems of ReLU etc
bull ReLU Linearity-gt second or higher order derivatives == 0
bull Losing information in higher-order derivatives of signals
bull Other activations (softplus tanh) derivatives not well behaved
Problems of ReLU etc
bull ReLU Linearity-gt second or higher order derivatives == 0
bull Losing information in higher-order derivatives of signals
bull Other activations (softplus tanh) derivatives not well behaved
We want second or higher order
derivatives =0 amp tractable derivative
behaviours
SIREN Sinusoidal Representation Networks
Contributions
- Prior work - periodic activation
- Hypernetwork functional image representation
- Constructed a hypernetwork to produce weights of a target network which parametrizes RGB images Cosine was used as the activation function of the target network
- didnrsquot study behaviours of derivatives or other applications of cosine activation
- Taming the waves sine as activation function in deep neural networks
- Lack of preliminary investigation of potential benefits of sine
- SIREN proposes
- A simple MLP architecture for implicit neural representations that uses sine as activation function
- Compared to prior work this paper
- Proposes a continuous implicit neural representation using periodic activation that fits complicated natural signals as well astheir derivatives robustly
- Provides an initialization scheme for this type of network and validates that weights can be learned using hypernetworks
- Demonstrates a wide range of applications
Problem Setting
Find a class of functions Φ that satisfies the relation F
On the continuous domain of x
Φ implicitly defined by the relation F
Neural networks that parametrize Φ implicit neural representations
Problem Setting
Problem Setting example
Each Input x Output Φ(x) supervised by Find Φ that minimizes
Cm 2-norm of Φ(x)-f(x)
Problem Setting example
Each Input x Output Φ(x) supervised by Find Φ that minimizes
Gradients of the target image
Find Φ that minimizes
Cm 2-norm of Φrsquo(x)-frsquo(x)
SIREN Architecture
- Basically MLPs with Sine activations
SIREN Initialization scheme
- Crucial Without carefully chosen uniformly distributed weights SIREN doesnrsquot perform well
- Key idea preserve the distribution of activations such that the final output at initialization does not depend of the number of layers
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- Initialize the weights W of the first layer such that sin(30Wx+b) spans multiple periods over [-11]
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- For other layers with input x in n-dimensional space initialize weights according to U(-radic(1113109 6n) radic(1113109 6n))
- Ensures that input to each activation follows N(01) approximately
Experiments Image reconstruction
bull Derivatives not well behaved Losing gradient informations in reconstructed images
bull SIREN gradient laplacian well preserved
Experiments Video reconstruction
bull SIREN more high frequency details preserved
Experiments Audio reconstruction
Experiments Poisson equation
- gradients nablaf Laplacian ∆f = nabla middot nablaf
SIREN
Experiments Poisson equation- Image reconstruction
Quantitative results for PSNR
- tanhampReLU PE both failed for fitting laplacian
- SIREN minor issues due to the ill-posed problem nature
Experiments Poisson equation- Image editing
- Fit 2 SIRENs f1 f2 for image reconstruction with loss function
- Fit a third SIREN for image editing with the same loss function as above except that nablaxf(x)=αmiddotnablaf1(x)+(1minusα)middotnablaf2(x) αisin[01]
Experiments Poisson equation- Image editing
Decent results with gradient supervision only
Quick recap Signed distance function (SDF)
X-Y plane
Park et al 2019
Experiments Representing shapes with SDF
ψ(x) = exp(minusα middot |Φ(x)|) α ≫ 1 Penalize Φ(x) ==0 when x is not on the surface
Gradient of SDF ==1 SDF = 0 when x is on the surface nablaxΦ(x) == normals
Experiments Representing shapes with SDF
Fine details missing in the baseline(left)
Experiments Solving Differential Equations
Helmholtz equation
Known source functionWave field (unknown) 1(c(x))^2 c(x) wave velocity
Experiments Solving Differential Equations
c(x) known
Only output of SIREN was able to match results from a numerical solver
c(x) unknown
SIREN velocity model outperformed principled solver
Experiments Learning priors
Discussions
- Poisson image editing is nothing new Mixing gradients
- The authors didnrsquot give a formal definition of ldquowell-behavedrdquo which is apparently an important property of sine activation
- The authors mentioned that SIREN initialized improperly had bad performance but didnrsquot link this back to the ldquokey ideardquo in the initialization scheme Couldrsquove done an ablation study
- PyTorch Tensorflow also follows a similar default initialization scheme except that it also depends on output dimensions sqrt(6 (fan_in + fan_out))
Contributions (Recap)
- Prior work on periodic activations:
  - "Hypernetwork functional image representation":
    - Constructed a hypernetwork to produce the weights of a target network that parametrizes RGB images; cosine was used as the activation function of the target network
    - Didn't study the behaviour of derivatives or other applications of the cosine activation
  - "Taming the waves: sine as activation function in deep neural networks":
    - Lacked an investigation of the potential benefits of sine
- SIREN proposes:
  - A simple MLP architecture for implicit neural representations that uses sine as the activation function
- Compared to prior work, this paper:
  - Proposes a continuous implicit neural representation with periodic activations that robustly fits complicated natural signals as well as their derivatives
  - Provides an initialization scheme for this type of network and validates that its weights can be learned with hypernetworks
  - Demonstrates a wide range of applications
Experiments Representing shapes with SDF
Fine details missing in the baseline(left)
Experiments Solving Differential Equations
Helmholtz equation
Known source functionWave field (unknown) 1(c(x))^2 c(x) wave velocity
Experiments Solving Differential Equations
c(x) known
Only output of SIREN was able to match results from a numerical solver
c(x) unknown
SIREN velocity model outperformed principled solver
Experiments Learning priors
Discussions
- Poisson image editing is nothing new Mixing gradients
- The authors didnrsquot give a formal definition of ldquowell-behavedrdquo which is apparently an important property of sine activation
- The authors mentioned that SIREN initialized improperly had bad performance but didnrsquot link this back to the ldquokey ideardquo in the initialization scheme Couldrsquove done an ablation study
- PyTorch Tensorflow also follows a similar default initialization scheme except that it also depends on output dimensions sqrt(6 (fan_in + fan_out))
Contributions (Recap)
- Prior work - periodic activation
- Hypernetwork functional image representation
- Constructed a hypernetwork to produce weights of a target network which parametrizes RGB images Cosine was used as the activation function of the target network
- didnrsquot study behaviours of derivatives or other applications of cosine activation
- Taming the waves sine as activation function in deep neural networks
- Lack of preliminary investigation of potential benefits of sine
- SIREN proposes
- A simple MLP architecture for implicit neural representations that uses sine as activation function
- Compared to prior work this paper
- Proposes a continuous implicit neural representation using periodic activation that fits complicated natural signals as well astheir derivatives robustly
- Provides an initialization scheme for this type of network and validates that weights can be learned using hypernetworks
- Demonstrates a wide range of applications
Problem Setting example
Each Input x Output Φ(x) supervised by Find Φ that minimizes
Gradients of the target image
Find Φ that minimizes
Cm 2-norm of Φrsquo(x)-frsquo(x)
SIREN Architecture
- Basically MLPs with Sine activations
SIREN Initialization scheme
- Crucial Without carefully chosen uniformly distributed weights SIREN doesnrsquot perform well
- Key idea preserve the distribution of activations such that the final output at initialization does not depend of the number of layers
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- Initialize the weights W of the first layer such that sin(30Wx+b) spans multiple periods over [-11]
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- For other layers with input x in n-dimensional space initialize weights according to U(-radic(1113109 6n) radic(1113109 6n))
- Ensures that input to each activation follows N(01) approximately
Experiments Image reconstruction
bull Derivatives not well behaved Losing gradient informations in reconstructed images
bull SIREN gradient laplacian well preserved
Experiments Video reconstruction
bull SIREN more high frequency details preserved
Experiments Audio reconstruction
Experiments Poisson equation
- gradients nablaf Laplacian ∆f = nabla middot nablaf
SIREN
Experiments Poisson equation- Image reconstruction
Quantitative results for PSNR
- tanhampReLU PE both failed for fitting laplacian
- SIREN minor issues due to the ill-posed problem nature
Experiments Poisson equation- Image editing
- Fit 2 SIRENs f1 f2 for image reconstruction with loss function
- Fit a third SIREN for image editing with the same loss function as above except that nablaxf(x)=αmiddotnablaf1(x)+(1minusα)middotnablaf2(x) αisin[01]
Experiments Poisson equation- Image editing
Decent results with gradient supervision only
Quick recap Signed distance function (SDF)
X-Y plane
Park et al 2019
Experiments Representing shapes with SDF
ψ(x) = exp(minusα middot |Φ(x)|) α ≫ 1 Penalize Φ(x) ==0 when x is not on the surface
Gradient of SDF ==1 SDF = 0 when x is on the surface nablaxΦ(x) == normals
Experiments Representing shapes with SDF
Fine details missing in the baseline(left)
Experiments Solving Differential Equations
Helmholtz equation
Known source functionWave field (unknown) 1(c(x))^2 c(x) wave velocity
Experiments Solving Differential Equations
c(x) known
Only output of SIREN was able to match results from a numerical solver
c(x) unknown
SIREN velocity model outperformed principled solver
Experiments Learning priors
Discussions
- Poisson image editing is nothing new Mixing gradients
- The authors didnrsquot give a formal definition of ldquowell-behavedrdquo which is apparently an important property of sine activation
- The authors mentioned that SIREN initialized improperly had bad performance but didnrsquot link this back to the ldquokey ideardquo in the initialization scheme Couldrsquove done an ablation study
- PyTorch Tensorflow also follows a similar default initialization scheme except that it also depends on output dimensions sqrt(6 (fan_in + fan_out))
Contributions (Recap)
- Prior work - periodic activation
- Hypernetwork functional image representation
- Constructed a hypernetwork to produce weights of a target network which parametrizes RGB images Cosine was used as the activation function of the target network
- didnrsquot study behaviours of derivatives or other applications of cosine activation
- Taming the waves sine as activation function in deep neural networks
- Lack of preliminary investigation of potential benefits of sine
- SIREN proposes
- A simple MLP architecture for implicit neural representations that uses sine as activation function
- Compared to prior work this paper
- Proposes a continuous implicit neural representation using periodic activation that fits complicated natural signals as well astheir derivatives robustly
- Provides an initialization scheme for this type of network and validates that weights can be learned using hypernetworks
- Demonstrates a wide range of applications
SIREN Architecture
- Basically MLPs with Sine activations
SIREN Initialization scheme
- Crucial Without carefully chosen uniformly distributed weights SIREN doesnrsquot perform well
- Key idea preserve the distribution of activations such that the final output at initialization does not depend of the number of layers
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- Initialize the weights W of the first layer such that sin(30Wx+b) spans multiple periods over [-11]
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- For other layers with input x in n-dimensional space initialize weights according to U(-radic(1113109 6n) radic(1113109 6n))
- Ensures that input to each activation follows N(01) approximately
Experiments Image reconstruction
bull Derivatives not well behaved Losing gradient informations in reconstructed images
bull SIREN gradient laplacian well preserved
Experiments Video reconstruction
bull SIREN more high frequency details preserved
Experiments Audio reconstruction
Experiments Poisson equation
- gradients nablaf Laplacian ∆f = nabla middot nablaf
SIREN
Experiments Poisson equation- Image reconstruction
Quantitative results for PSNR
- tanhampReLU PE both failed for fitting laplacian
- SIREN minor issues due to the ill-posed problem nature
Experiments Poisson equation- Image editing
- Fit 2 SIRENs f1 f2 for image reconstruction with loss function
- Fit a third SIREN for image editing with the same loss function as above except that nablaxf(x)=αmiddotnablaf1(x)+(1minusα)middotnablaf2(x) αisin[01]
Experiments Poisson equation- Image editing
Decent results with gradient supervision only
Quick recap Signed distance function (SDF)
X-Y plane
Park et al 2019
Experiments Representing shapes with SDF
ψ(x) = exp(minusα middot |Φ(x)|) α ≫ 1 Penalize Φ(x) ==0 when x is not on the surface
Gradient of SDF ==1 SDF = 0 when x is on the surface nablaxΦ(x) == normals
Experiments Representing shapes with SDF
Fine details missing in the baseline(left)
Experiments Solving Differential Equations
Helmholtz equation
Known source functionWave field (unknown) 1(c(x))^2 c(x) wave velocity
Experiments Solving Differential Equations
c(x) known
Only output of SIREN was able to match results from a numerical solver
c(x) unknown
SIREN velocity model outperformed principled solver
Experiments Learning priors
Discussions
- Poisson image editing is nothing new Mixing gradients
- The authors didnrsquot give a formal definition of ldquowell-behavedrdquo which is apparently an important property of sine activation
- The authors mentioned that SIREN initialized improperly had bad performance but didnrsquot link this back to the ldquokey ideardquo in the initialization scheme Couldrsquove done an ablation study
- PyTorch Tensorflow also follows a similar default initialization scheme except that it also depends on output dimensions sqrt(6 (fan_in + fan_out))
Contributions (Recap)
- Prior work - periodic activation
- Hypernetwork functional image representation
- Constructed a hypernetwork to produce weights of a target network which parametrizes RGB images Cosine was used as the activation function of the target network
- didnrsquot study behaviours of derivatives or other applications of cosine activation
- Taming the waves sine as activation function in deep neural networks
- Lack of preliminary investigation of potential benefits of sine
- SIREN proposes
- A simple MLP architecture for implicit neural representations that uses sine as activation function
- Compared to prior work this paper
- Proposes a continuous implicit neural representation using periodic activation that fits complicated natural signals as well astheir derivatives robustly
- Provides an initialization scheme for this type of network and validates that weights can be learned using hypernetworks
- Demonstrates a wide range of applications
SIREN Initialization scheme
- Crucial Without carefully chosen uniformly distributed weights SIREN doesnrsquot perform well
- Key idea preserve the distribution of activations such that the final output at initialization does not depend of the number of layers
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- Initialize the weights W of the first layer such that sin(30Wx+b) spans multiple periods over [-11]
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- For other layers with input x in n-dimensional space initialize weights according to U(-radic(1113109 6n) radic(1113109 6n))
- Ensures that input to each activation follows N(01) approximately
Experiments Image reconstruction
bull Derivatives not well behaved Losing gradient informations in reconstructed images
bull SIREN gradient laplacian well preserved
Experiments Video reconstruction
bull SIREN more high frequency details preserved
Experiments Audio reconstruction
Experiments Poisson equation
- gradients nablaf Laplacian ∆f = nabla middot nablaf
SIREN
Experiments Poisson equation- Image reconstruction
Quantitative results for PSNR
- tanhampReLU PE both failed for fitting laplacian
- SIREN minor issues due to the ill-posed problem nature
Experiments Poisson equation- Image editing
- Fit 2 SIRENs f1 f2 for image reconstruction with loss function
- Fit a third SIREN for image editing with the same loss function as above except that nablaxf(x)=αmiddotnablaf1(x)+(1minusα)middotnablaf2(x) αisin[01]
Experiments Poisson equation- Image editing
Decent results with gradient supervision only
Quick recap Signed distance function (SDF)
X-Y plane
Park et al 2019
Experiments Representing shapes with SDF
ψ(x) = exp(minusα middot |Φ(x)|) α ≫ 1 Penalize Φ(x) ==0 when x is not on the surface
Gradient of SDF ==1 SDF = 0 when x is on the surface nablaxΦ(x) == normals
Experiments Representing shapes with SDF
Fine details missing in the baseline(left)
Experiments Solving Differential Equations
Helmholtz equation
Known source functionWave field (unknown) 1(c(x))^2 c(x) wave velocity
Experiments Solving Differential Equations
c(x) known
Only output of SIREN was able to match results from a numerical solver
c(x) unknown
SIREN velocity model outperformed principled solver
Experiments Learning priors
Discussions
- Poisson image editing is nothing new Mixing gradients
- The authors didnrsquot give a formal definition of ldquowell-behavedrdquo which is apparently an important property of sine activation
- The authors mentioned that SIREN initialized improperly had bad performance but didnrsquot link this back to the ldquokey ideardquo in the initialization scheme Couldrsquove done an ablation study
- PyTorch Tensorflow also follows a similar default initialization scheme except that it also depends on output dimensions sqrt(6 (fan_in + fan_out))
Contributions (Recap)
- Prior work - periodic activation
- Hypernetwork functional image representation
- Constructed a hypernetwork to produce weights of a target network which parametrizes RGB images Cosine was used as the activation function of the target network
- didnrsquot study behaviours of derivatives or other applications of cosine activation
- Taming the waves sine as activation function in deep neural networks
- Lack of preliminary investigation of potential benefits of sine
- SIREN proposes
- A simple MLP architecture for implicit neural representations that uses sine as activation function
- Compared to prior work this paper
- Proposes a continuous implicit neural representation using periodic activation that fits complicated natural signals as well astheir derivatives robustly
- Provides an initialization scheme for this type of network and validates that weights can be learned using hypernetworks
- Demonstrates a wide range of applications
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- Initialize the weights W of the first layer such that sin(30Wx+b) spans multiple periods over [-11]
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- For other layers with input x in n-dimensional space initialize weights according to U(-radic(1113109 6n) radic(1113109 6n))
- Ensures that input to each activation follows N(01) approximately
Experiments Image reconstruction
bull Derivatives not well behaved Losing gradient informations in reconstructed images
bull SIREN gradient laplacian well preserved
Experiments Video reconstruction
bull SIREN more high frequency details preserved
Experiments Audio reconstruction
Experiments Poisson equation
- gradients nablaf Laplacian ∆f = nabla middot nablaf
SIREN
Experiments Poisson equation- Image reconstruction
Quantitative results for PSNR
- tanhampReLU PE both failed for fitting laplacian
- SIREN minor issues due to the ill-posed problem nature
Experiments Poisson equation- Image editing
- Fit 2 SIRENs f1 f2 for image reconstruction with loss function
- Fit a third SIREN for image editing with the same loss function as above except that nablaxf(x)=αmiddotnablaf1(x)+(1minusα)middotnablaf2(x) αisin[01]
Experiments Poisson equation- Image editing
Decent results with gradient supervision only
Quick recap Signed distance function (SDF)
X-Y plane
Park et al 2019
Experiments Representing shapes with SDF
ψ(x) = exp(minusα middot |Φ(x)|) α ≫ 1 Penalize Φ(x) ==0 when x is not on the surface
Gradient of SDF ==1 SDF = 0 when x is on the surface nablaxΦ(x) == normals
Experiments Representing shapes with SDF
Fine details missing in the baseline(left)
Experiments Solving Differential Equations
Helmholtz equation
Known source functionWave field (unknown) 1(c(x))^2 c(x) wave velocity
Experiments Solving Differential Equations
c(x) known
Only output of SIREN was able to match results from a numerical solver
c(x) unknown
SIREN velocity model outperformed principled solver
Experiments Learning priors
Discussions
- Poisson image editing is nothing new Mixing gradients
- The authors didnrsquot give a formal definition of ldquowell-behavedrdquo which is apparently an important property of sine activation
- The authors mentioned that SIREN initialized improperly had bad performance but didnrsquot link this back to the ldquokey ideardquo in the initialization scheme Couldrsquove done an ablation study
- PyTorch Tensorflow also follows a similar default initialization scheme except that it also depends on output dimensions sqrt(6 (fan_in + fan_out))
Contributions (Recap)
- Prior work - periodic activation
- Hypernetwork functional image representation
- Constructed a hypernetwork to produce weights of a target network which parametrizes RGB images Cosine was used as the activation function of the target network
- didnrsquot study behaviours of derivatives or other applications of cosine activation
- Taming the waves sine as activation function in deep neural networks
- Lack of preliminary investigation of potential benefits of sine
- SIREN proposes
- A simple MLP architecture for implicit neural representations that uses sine as activation function
- Compared to prior work this paper
- Proposes a continuous implicit neural representation using periodic activation that fits complicated natural signals as well astheir derivatives robustly
- Provides an initialization scheme for this type of network and validates that weights can be learned using hypernetworks
- Demonstrates a wide range of applications
SIREN Initialization scheme
- Assuming that x~U(-11) x isin Rn
- For other layers with input x in n-dimensional space initialize weights according to U(-radic(1113109 6n) radic(1113109 6n))
- Ensures that input to each activation follows N(01) approximately
Experiments Image reconstruction
bull Derivatives not well behaved Losing gradient informations in reconstructed images
bull SIREN gradient laplacian well preserved
Experiments Video reconstruction
bull SIREN more high frequency details preserved
Experiments Audio reconstruction
Experiments Poisson equation
- gradients nablaf Laplacian ∆f = nabla middot nablaf
SIREN
Experiments Poisson equation- Image reconstruction
Quantitative results for PSNR
- tanhampReLU PE both failed for fitting laplacian
- SIREN minor issues due to the ill-posed problem nature
Experiments Poisson equation- Image editing
- Fit 2 SIRENs f1 f2 for image reconstruction with loss function
- Fit a third SIREN for image editing with the same loss function as above except that nablaxf(x)=αmiddotnablaf1(x)+(1minusα)middotnablaf2(x) αisin[01]
Experiments Poisson equation- Image editing
Decent results with gradient supervision only
Quick recap Signed distance function (SDF)
X-Y plane
Park et al 2019
Experiments Representing shapes with SDF
ψ(x) = exp(minusα middot |Φ(x)|) α ≫ 1 Penalize Φ(x) ==0 when x is not on the surface
Gradient of SDF ==1 SDF = 0 when x is on the surface nablaxΦ(x) == normals
Experiments Representing shapes with SDF
Fine details missing in the baseline(left)
Experiments Solving Differential Equations
Helmholtz equation
Known source functionWave field (unknown) 1(c(x))^2 c(x) wave velocity
Experiments Solving Differential Equations
c(x) known
Only output of SIREN was able to match results from a numerical solver
c(x) unknown
SIREN velocity model outperformed principled solver
Experiments Learning priors
Discussions
- Poisson image editing is nothing new Mixing gradients
- The authors didnrsquot give a formal definition of ldquowell-behavedrdquo which is apparently an important property of sine activation
- The authors mentioned that SIREN initialized improperly had bad performance but didnrsquot link this back to the ldquokey ideardquo in the initialization scheme Couldrsquove done an ablation study
- PyTorch Tensorflow also follows a similar default initialization scheme except that it also depends on output dimensions sqrt(6 (fan_in + fan_out))
Contributions (Recap)
- Prior work - periodic activation
- Hypernetwork functional image representation
- Constructed a hypernetwork to produce weights of a target network which parametrizes RGB images Cosine was used as the activation function of the target network
- didnrsquot study behaviours of derivatives or other applications of cosine activation
- Taming the waves sine as activation function in deep neural networks
- Lack of preliminary investigation of potential benefits of sine
- SIREN proposes
- A simple MLP architecture for implicit neural representations that uses sine as activation function
- Compared to prior work this paper
- Proposes a continuous implicit neural representation using periodic activation that fits complicated natural signals as well astheir derivatives robustly
- Provides an initialization scheme for this type of network and validates that weights can be learned using hypernetworks
- Demonstrates a wide range of applications
Experiments Image reconstruction
bull Derivatives not well behaved Losing gradient informations in reconstructed images
bull SIREN gradient laplacian well preserved
Experiments Video reconstruction
bull SIREN more high frequency details preserved
Experiments Audio reconstruction
Experiments Poisson equation
- gradients nablaf Laplacian ∆f = nabla middot nablaf
SIREN
Experiments Poisson equation- Image reconstruction
Quantitative results for PSNR
- tanhampReLU PE both failed for fitting laplacian
- SIREN minor issues due to the ill-posed problem nature
Experiments Poisson equation- Image editing
- Fit 2 SIRENs f1 f2 for image reconstruction with loss function
- Fit a third SIREN for image editing with the same loss function as above except that nablaxf(x)=αmiddotnablaf1(x)+(1minusα)middotnablaf2(x) αisin[01]
Experiments Poisson equation- Image editing
Decent results with gradient supervision only
Quick recap Signed distance function (SDF)
X-Y plane
Park et al 2019
Experiments Representing shapes with SDF
ψ(x) = exp(minusα middot |Φ(x)|) α ≫ 1 Penalize Φ(x) ==0 when x is not on the surface
Gradient of SDF ==1 SDF = 0 when x is on the surface nablaxΦ(x) == normals
Experiments Representing shapes with SDF
Fine details missing in the baseline(left)
Experiments Solving Differential Equations
Helmholtz equation
Known source functionWave field (unknown) 1(c(x))^2 c(x) wave velocity
Experiments Solving Differential Equations
c(x) known
Only output of SIREN was able to match results from a numerical solver
c(x) unknown
SIREN velocity model outperformed principled solver
Experiments Learning priors
Discussions
- Poisson image editing is nothing new Mixing gradients
- The authors didnrsquot give a formal definition of ldquowell-behavedrdquo which is apparently an important property of sine activation
- The authors mentioned that SIREN initialized improperly had bad performance but didnrsquot link this back to the ldquokey ideardquo in the initialization scheme Couldrsquove done an ablation study
- PyTorch Tensorflow also follows a similar default initialization scheme except that it also depends on output dimensions sqrt(6 (fan_in + fan_out))
Contributions (Recap)
- Prior work - periodic activation
- Hypernetwork functional image representation
- Constructed a hypernetwork to produce weights of a target network which parametrizes RGB images Cosine was used as the activation function of the target network
- didnrsquot study behaviours of derivatives or other applications of cosine activation
- Taming the waves sine as activation function in deep neural networks
- Lack of preliminary investigation of potential benefits of sine
- SIREN proposes
- A simple MLP architecture for implicit neural representations that uses sine as activation function
- Compared to prior work this paper
- Proposes a continuous implicit neural representation using periodic activation that fits complicated natural signals as well astheir derivatives robustly
- Provides an initialization scheme for this type of network and validates that weights can be learned using hypernetworks
- Demonstrates a wide range of applications
Experiments Video reconstruction
bull SIREN more high frequency details preserved
Experiments Audio reconstruction
Experiments Poisson equation
- gradients nablaf Laplacian ∆f = nabla middot nablaf
SIREN
Experiments Poisson equation- Image reconstruction
Quantitative results for PSNR
- tanhampReLU PE both failed for fitting laplacian
- SIREN minor issues due to the ill-posed problem nature
Experiments Poisson equation- Image editing
- Fit 2 SIRENs f1 f2 for image reconstruction with loss function
- Fit a third SIREN for image editing with the same loss function as above except that nablaxf(x)=αmiddotnablaf1(x)+(1minusα)middotnablaf2(x) αisin[01]
Experiments Poisson equation- Image editing
Decent results with gradient supervision only
Quick recap Signed distance function (SDF)
X-Y plane
Park et al 2019
Experiments Representing shapes with SDF
ψ(x) = exp(minusα middot |Φ(x)|) α ≫ 1 Penalize Φ(x) ==0 when x is not on the surface
Gradient of SDF ==1 SDF = 0 when x is on the surface nablaxΦ(x) == normals
Experiments Representing shapes with SDF
Fine details missing in the baseline(left)
Experiments Solving Differential Equations
Helmholtz equation
Known source functionWave field (unknown) 1(c(x))^2 c(x) wave velocity
Experiments Solving Differential Equations
c(x) known
Only output of SIREN was able to match results from a numerical solver
c(x) unknown
SIREN velocity model outperformed principled solver
Experiments Learning priors
Discussions
- Poisson image editing is nothing new Mixing gradients
- The authors didnrsquot give a formal definition of ldquowell-behavedrdquo which is apparently an important property of sine activation
- The authors mentioned that SIREN initialized improperly had bad performance but didnrsquot link this back to the ldquokey ideardquo in the initialization scheme Couldrsquove done an ablation study
- PyTorch Tensorflow also follows a similar default initialization scheme except that it also depends on output dimensions sqrt(6 (fan_in + fan_out))
Contributions (Recap)
- Prior work - periodic activation
- Hypernetwork functional image representation
- Constructed a hypernetwork to produce weights of a target network which parametrizes RGB images Cosine was used as the activation function of the target network
- didnrsquot study behaviours of derivatives or other applications of cosine activation
- Taming the waves sine as activation function in deep neural networks
- Lack of preliminary investigation of potential benefits of sine
- SIREN proposes
- A simple MLP architecture for implicit neural representations that uses sine as activation function
- Compared to prior work this paper
- Proposes a continuous implicit neural representation using periodic activation that fits complicated natural signals as well astheir derivatives robustly
- Provides an initialization scheme for this type of network and validates that weights can be learned using hypernetworks
- Demonstrates a wide range of applications
Experiments Audio reconstruction
Experiments Poisson equation
- gradients nablaf Laplacian ∆f = nabla middot nablaf
SIREN
Experiments Poisson equation- Image reconstruction
Quantitative results for PSNR
- tanhampReLU PE both failed for fitting laplacian
- SIREN minor issues due to the ill-posed problem nature
Experiments Poisson equation- Image editing
- Fit 2 SIRENs f1 f2 for image reconstruction with loss function
- Fit a third SIREN for image editing with the same loss function as above except that nablaxf(x)=αmiddotnablaf1(x)+(1minusα)middotnablaf2(x) αisin[01]
Experiments Poisson equation- Image editing
Decent results with gradient supervision only
Quick recap Signed distance function (SDF)
X-Y plane
Park et al 2019
Experiments Representing shapes with SDF
ψ(x) = exp(minusα middot |Φ(x)|) α ≫ 1 Penalize Φ(x) ==0 when x is not on the surface
Gradient of SDF ==1 SDF = 0 when x is on the surface nablaxΦ(x) == normals
Experiments Representing shapes with SDF
Fine details missing in the baseline(left)
Experiments Solving Differential Equations
Helmholtz equation
Known source functionWave field (unknown) 1(c(x))^2 c(x) wave velocity
Experiments Solving Differential Equations
c(x) known
Only output of SIREN was able to match results from a numerical solver
c(x) unknown
SIREN velocity model outperformed principled solver
Experiments Learning priors
Discussions
- Poisson image editing is nothing new Mixing gradients
- The authors didnrsquot give a formal definition of ldquowell-behavedrdquo which is apparently an important property of sine activation
- The authors mentioned that SIREN initialized improperly had bad performance but didnrsquot link this back to the ldquokey ideardquo in the initialization scheme Couldrsquove done an ablation study
- PyTorch Tensorflow also follows a similar default initialization scheme except that it also depends on output dimensions sqrt(6 (fan_in + fan_out))
Contributions (Recap)
- Prior work - periodic activation
- Hypernetwork functional image representation
- Constructed a hypernetwork to produce weights of a target network which parametrizes RGB images Cosine was used as the activation function of the target network
- didnrsquot study behaviours of derivatives or other applications of cosine activation
- Taming the waves sine as activation function in deep neural networks
- Lack of preliminary investigation of potential benefits of sine
- SIREN proposes
- A simple MLP architecture for implicit neural representations that uses sine as activation function
- Compared to prior work this paper
- Proposes a continuous implicit neural representation using periodic activation that fits complicated natural signals as well astheir derivatives robustly
- Provides an initialization scheme for this type of network and validates that weights can be learned using hypernetworks
- Demonstrates a wide range of applications