Adaptive activation functions accelerate convergence in deep and physics-informed neural networks

Ameya Jagtap discusses adaptive activation functions in the context of the nonlinear Klein-Gordon equation, which has smooth solutions; the nonlinear Burgers equation, which can admit high-gradient solutions; and the Helmholtz equation.

Image courtesy of Ameya Jagtap
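The adaptive-activation idea named in the title replaces a fixed activation σ(x) with σ(n·a·x), where a is a trainable parameter and n is a fixed scaling factor. A minimal sketch in PyTorch, assuming a globally shared slope per layer; the module name `AdaptiveTanh` and the choice n = 10 are illustrative, not from the source:

```python
import torch
import torch.nn as nn

class AdaptiveTanh(nn.Module):
    """tanh activation with a trainable slope: sigma(n * a * x).

    `a` is a learnable scalar shared across the layer; `n` is a fixed
    scaling hyperparameter. Initialized so n * a = 1, i.e. the network
    starts out identical to one with a plain tanh activation.
    """
    def __init__(self, n: float = 10.0):
        super().__init__()
        self.n = n
        self.a = nn.Parameter(torch.tensor(1.0 / n))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.n * self.a * x)

# A small fully connected network using the adaptive activation,
# e.g. as the surrogate model in a physics-informed setting.
net = nn.Sequential(
    nn.Linear(1, 32), AdaptiveTanh(),
    nn.Linear(32, 32), AdaptiveTanh(),
    nn.Linear(32, 1),
)

x = torch.linspace(-1.0, 1.0, 8).unsqueeze(-1)
y = net(x)
print(y.shape)
```

Because each `a` is an `nn.Parameter`, it is updated by the same optimizer step as the weights, which is what lets the slope adapt during training.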
