Dataflow matrix machines (dmm) wrote 2021-08-19 02:57 pm (UTC)

Session 6: Reinforcement Learning and Control (I'll skip this one)

Session 7: PDEs and ODEs: https://msml21.github.io/session7/

The last two talks are on "committor functions"


A semigroup method for high dimensional committor functions based on neural network, Haoya Li (Stanford University), Yuehaw Khoo (University of Chicago), Yinuo Ren (Peking University), Lexing Ying (Stanford University)

Paper highlight, by Jiequn Han

This paper proposes a new neural-network-based method for computing high-dimensional committor functions. Understanding transition dynamics through the committor function is a fundamental problem in statistical mechanics with decades of work behind it. Traditional numerical methods are intrinsically limited when solving for general high-dimensional committor functions. Neural-network-based algorithms have attracted much interest in the community, all built on the variational form of the Fokker-Planck equation. This paper's main innovation lies in proposing a new variational formulation (loss function) based on the semigroup of the differential operator. The new formulation does not contain any differential operator, and the authors explicitly derive the gradients of the loss used in training. These gradients involve only first-order derivatives of the neural network, in contrast to the second-order derivatives required by previous methods, which is conceptually beneficial for efficient training. Numerical results on standard test examples and the Ginzburg-Landau model demonstrate the superiority of the proposed method. In addition, the authors show that in the lazy-training regime, the corresponding gradient flow converges at a geometric rate to a local minimum under certain assumptions.
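
To make the idea concrete, here is a minimal PyTorch sketch of a semigroup-style committor loss, assuming overdamped Langevin dynamics dX = -grad V(X) dt + sqrt(2/beta) dW and soft boundary penalties. The names (q_net, V, euler_maruyama_step, penalty) and the boundary treatment are illustrative assumptions, not the authors' code; treat it as a sketch of the technique, not the paper's exact loss.

# A minimal sketch of a semigroup-style committor loss (assumptions as above).
import torch

def euler_maruyama_step(x, V, beta, delta):
    # One short-time Euler-Maruyama step of overdamped Langevin dynamics,
    # used as a Monte Carlo sample of the semigroup P_delta = exp(delta * L).
    x = x.detach().requires_grad_(True)
    grad_V, = torch.autograd.grad(V(x).sum(), x)  # drift needs grad of V, not of q_net
    noise = torch.randn_like(x)
    return (x - delta * grad_V + (2.0 * delta / beta) ** 0.5 * noise).detach()

def semigroup_loss(q_net, x, x_A, x_B, V, beta=1.0, delta=1e-3, penalty=10.0):
    # (1 / (2*delta)) * E[(q(X_delta) - q(x))^2] estimates the Dirichlet form
    # without applying any differential operator to q_net; backpropagating it
    # therefore involves only first-order derivatives of the network.
    x_next = euler_maruyama_step(x, V, beta, delta)
    dirichlet = ((q_net(x_next) - q_net(x)) ** 2).mean() / (2.0 * delta)
    # Boundary conditions q = 0 on A and q = 1 on B imposed as soft penalties
    # (a common choice; the paper's exact handling of boundaries may differ).
    bc = penalty * ((q_net(x_A) ** 2).mean() + ((q_net(x_B) - 1.0) ** 2).mean())
    return dirichlet + bc

With q_net a small torch.nn.Module mapping R^d to (0, 1), V a potential such as Ginzburg-Landau, and x sampled from (an approximation of) the equilibrium density, one would minimize this loss with a standard optimizer. The point of the semigroup formulation, as the highlight notes, is that no second derivatives of the network ever appear.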


