How (Implicit) Regularization of ReLU Neural Networks Characterizes the Learned Function -- Part II: the Multi-D Case of Two Layers with Random First Layer