Gaussian Process Correspondence at Initialization
At initialization, a fully connected neural network with a large number of hidden units in each layer exhibits a remarkable property: as the width of each layer tends to infinity (n→∞), the distribution over functions computed by the network converges to a Gaussian process (GP). This correspondence is foundational for understanding the statistical behavior of neural networks in the infinite-width regime.
To state this result precisely, consider a neural network with L layers, where each layer has n neurons and n→∞. The weights and biases are initialized independently from zero-mean Gaussian distributions, and the activation function ϕ is applied elementwise. The output f(x) of the network for input x is therefore a random variable determined by the random initialization of the weights and biases.
The Gaussian process correspondence asserts that, under these conditions and for any finite set of inputs {x1,…,xm}, the joint distribution of the outputs {f(x1),…,f(xm)} converges to a multivariate Gaussian as n→∞. The mean is zero (assuming zero-mean initialization), and the covariance between f(x) and f(x′) is determined recursively by the architecture and the activation function.
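As a quick sanity check of this statement, the following sketch (a minimal NumPy simulation; the width, number of draws, tanh activation, and variance values are arbitrary illustrative choices, not part of the theorem) samples a one-hidden-layer network many times and inspects the empirical mean and covariance of its outputs at three fixed inputs. With a large width, the empirical mean is close to zero and the empirical covariance stabilizes toward the limiting GP kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices (not prescribed by the theory).
d, n, num_draws = 5, 4096, 2000           # input dim, hidden width, random inits
sigma_w2, sigma_b2 = 1.0, 0.1             # weight and bias variances
X = rng.standard_normal((3, d))           # three fixed inputs x1, x2, x3

outputs = np.empty((num_draws, len(X)))
for t in range(num_draws):
    # Zero-mean Gaussian initialization, weight variance scaled by fan-in.
    W1 = rng.normal(0.0, np.sqrt(sigma_w2 / d), size=(d, n))
    b1 = rng.normal(0.0, np.sqrt(sigma_b2), size=n)
    w2 = rng.normal(0.0, np.sqrt(sigma_w2 / n), size=(n, 1))
    b2 = rng.normal(0.0, np.sqrt(sigma_b2), size=1)
    h = np.tanh(X @ W1 + b1)              # hidden activations, shape (3, n)
    outputs[t] = (h @ w2 + b2).ravel()    # scalar output f(x) at each input

print("empirical mean:", outputs.mean(axis=0))       # close to zero
print("empirical covariance:\n", np.cov(outputs.T))  # 3x3, approximates the GP kernel
```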
The key assumptions for this correspondence are:
- All weights and biases are initialized independently from zero-mean Gaussian distributions, with the weight variance scaled inversely with the layer width (for example, $\sigma_w^2/n$) so that signals neither explode nor vanish as they propagate;
- The activation function ϕ is measurable and satisfies mild growth conditions (for example, polynomially bounded, so that its moments under Gaussian inputs are finite);
- The width of each hidden layer tends to infinity, while the depth L is fixed.
The derivation proceeds by noting that each pre-activation in a hidden layer is a sum over many units of the previous layer, so by the Central Limit Theorem the pre-activations become jointly Gaussian as the width increases, provided the previous layer's units are independent and identically distributed. This independence holds in the infinite-width limit, which allows the covariance structure to be computed recursively, layer by layer.
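The Central Limit Theorem step can be seen directly in simulation. The sketch below (illustrative; the widths, tanh activation, and variances are arbitrary choices) examines a single second-layer pre-activation at a fixed input, which is a sum over the n first-layer units, and tracks its excess kurtosis across many random initializations; for a Gaussian the excess kurtosis is zero, and the estimate should shrink toward zero as the width grows.

```python
import numpy as np

rng = np.random.default_rng(1)
d, sigma_w2, sigma_b2, num_draws = 5, 1.0, 0.1, 10000   # illustrative values
x = rng.standard_normal(d)                              # one fixed input

for n in (2, 10, 100, 1000):                            # increasing hidden widths
    z2 = np.empty(num_draws)
    for t in range(num_draws):
        W1 = rng.normal(0.0, np.sqrt(sigma_w2 / d), size=(d, n))
        b1 = rng.normal(0.0, np.sqrt(sigma_b2), size=n)
        w2 = rng.normal(0.0, np.sqrt(sigma_w2 / n), size=n)
        b2 = rng.normal(0.0, np.sqrt(sigma_b2))
        # Second-layer pre-activation: a sum over n first-layer units.
        z2[t] = np.tanh(x @ W1 + b1) @ w2 + b2
    excess_kurtosis = np.mean((z2 - z2.mean()) ** 4) / np.var(z2) ** 2 - 3.0
    print(f"width {n:5d}: excess kurtosis ~ {excess_kurtosis:+.3f}")
```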
The covariance structure of the limiting Gaussian process is deeply influenced by both the neural network architecture and the choice of activation function. For a simple fully connected network with one hidden layer, the covariance between outputs f(x) and f(x′) at initialization is given by
$$K^{(1)}(x,x') = \sigma_w^2\,\mathbb{E}_{z \sim \mathcal{N}(0,\,\Sigma^{(0)})}\!\left[\phi(z_x)\,\phi(z_{x'})\right] + \sigma_b^2,$$

where $\sigma_w^2$ and $\sigma_b^2$ are the variances of the weights and biases, respectively, $z = (z_x, z_{x'})$, and $\Sigma^{(0)}$ is the input covariance matrix:

$$\Sigma^{(0)} = \begin{pmatrix} x^\top x & x^\top x' \\ x'^\top x & x'^\top x' \end{pmatrix}.$$

For deeper networks, the covariance is computed recursively:

$$K^{(l+1)}(x,x') = \sigma_w^2\,\mathbb{E}_{(u,v) \sim \mathcal{N}(0,\,\Sigma^{(l)})}\!\left[\phi(u)\,\phi(v)\right] + \sigma_b^2,$$

with $\Sigma^{(l)}$ defined analogously using $K^{(l)}$.
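One straightforward way to evaluate this recursion is to estimate the bivariate Gaussian expectation by Monte Carlo sampling. The sketch below is an illustrative implementation for a single pair of inputs; the variances, sample count, and tanh example are arbitrary choices, and it follows the text in taking $\Sigma^{(0)}$ to be the raw input Gram matrix. It returns the final $2 \times 2$ matrix, whose off-diagonal entry is the kernel value $K^{(L)}(x,x')$ for the chosen depth.

```python
import numpy as np

def nngp_kernel_pair(x, xp, phi, depth, sigma_w2=1.0, sigma_b2=0.1,
                     num_samples=200_000, seed=0):
    """Monte Carlo sketch of the recursive kernel for one pair of inputs.

    Returns Sigma^(depth); its off-diagonal entry is K^(depth)(x, x').
    """
    rng = np.random.default_rng(seed)
    X = np.stack([x, xp])
    # Layer-0 covariance: the input Gram matrix, as in the text.
    # (Many presentations instead use sigma_w2 * X @ X.T / d + sigma_b2.)
    Sigma = X @ X.T
    for _ in range(depth):
        # Draw (u, v) ~ N(0, Sigma) and estimate the 2x2 matrix of E[phi(.) phi(.)].
        L = np.linalg.cholesky(Sigma + 1e-12 * np.eye(2))
        uv = rng.standard_normal((num_samples, 2)) @ L.T
        E = phi(uv).T @ phi(uv) / num_samples
        Sigma = sigma_w2 * E + sigma_b2
    return Sigma

# Example usage with tanh and two arbitrary inputs.
x = np.array([1.0, -0.5, 0.2])
xp = np.array([0.3, 0.8, -1.0])
print(nngp_kernel_pair(x, xp, np.tanh, depth=3))
```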
The activation function ϕ determines how the covariance evolves from layer to layer. Choosing ϕ(z) = ReLU(z) or ϕ(z) = tanh(z) leads to different forms of covariance propagation and therefore to distinct function-space priors. The architecture, such as the presence of convolutional layers or skip connections, likewise alters the recursive structure and the resulting Gaussian process kernel.
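For instance, when ϕ is the ReLU, the Gaussian expectation in the recursion has a well-known closed form (the degree-one arc-cosine kernel): writing $\sigma_u^2 = \Sigma^{(l)}_{11}$, $\sigma_v^2 = \Sigma^{(l)}_{22}$, and $\cos\theta_l = \Sigma^{(l)}_{12}/(\sigma_u \sigma_v)$,

$$\mathbb{E}_{(u,v)\sim\mathcal{N}(0,\,\Sigma^{(l)})}\big[\mathrm{ReLU}(u)\,\mathrm{ReLU}(v)\big] = \frac{\sigma_u \sigma_v}{2\pi}\Big(\sin\theta_l + (\pi - \theta_l)\cos\theta_l\Big),$$

so the ReLU recursion can be iterated exactly, while for tanh the expectation has no simple closed form and is usually evaluated numerically (for example by Gaussian quadrature or Monte Carlo, as in the sketch above).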
The mapping from random initial weights to a distribution over functions can be visualized as a process where, for each random draw of weights and biases, the network defines a function f(x) from input space to output space. As the width increases, the randomness in the weights induces a distribution over possible functions, which becomes a Gaussian process in the infinite-width limit.
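A simple way to visualize this is to sample a handful of wide networks and evaluate each on a grid of inputs; every random draw of the parameters traces out one sample function from the induced prior. The sketch below (illustrative; the 1-D grid, width, tanh activation, and variances are arbitrary choices) collects five such draws, which could then be plotted against the grid.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma_w2, sigma_b2 = 4096, 1.0, 0.1     # illustrative width and variances
xs = np.linspace(-3.0, 3.0, 200)[:, None]  # 1-D inputs on a grid, shape (200, 1)

draws = []
for _ in range(5):                          # five independent initializations
    W1 = rng.normal(0.0, np.sqrt(sigma_w2), size=(1, n))   # fan-in is 1 for scalar inputs
    b1 = rng.normal(0.0, np.sqrt(sigma_b2), size=n)
    w2 = rng.normal(0.0, np.sqrt(sigma_w2 / n), size=(n, 1))
    b2 = rng.normal(0.0, np.sqrt(sigma_b2))
    f = np.tanh(xs @ W1 + b1) @ w2 + b2     # one random function evaluated on the grid
    draws.append(f.ravel())

functions = np.stack(draws)                 # shape (5, 200): five sample functions
print(functions.shape)
```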