no subject
Date: 2021-08-19 02:57 pm (UTC)
PDEs and ODEs: https://msml21.github.io/session7/
The last two talks are on "committor functions".
A semigroup method for high dimensional committor functions based on neural network, by Haoya Li (Stanford University), Yuehaw Khoo (U Chicago), Yinuo Ren (Peking University), Lexing Ying (Stanford University)
Paper highlight, by Jiequn Han
This paper proposes a new neural-network-based method to compute high-dimensional committor functions. Understanding transition dynamics through the committor function is a fundamental problem in statistical mechanics with decades of work behind it, and traditional numerical methods face an intrinsic limitation in solving for general high-dimensional committor functions. Neural-network algorithms have recently received much interest in the community, all based so far on the variational form of the Fokker-Planck equation. This paper's main innovation lies in proposing a new variational formulation (loss function) based on the semigroup of the differential operator. The new formulation contains no differential operator, and the authors explicitly derive the loss's gradients used for training. These gradients involve only first-order derivatives of the neural networks, in contrast to the second-order derivatives required by previous methods, a feature that is conceptually beneficial for efficient training. Numerical results on standard test examples and the Ginzburg-Landau model demonstrate the superiority of the proposed method. In addition, the authors show that, in the lazy-training regime, the corresponding gradient flow converges at a geometric rate to a local minimum under certain assumptions.
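For readers outside the field, here is the standard committor setup this highlight refers to, together with a schematic contrast between the classical variational objective and a semigroup-type objective. This is textbook material plus my own reading of the idea, not quoted from the paper; the exact formulation and constants in the paper may differ.

For overdamped Langevin dynamics \( dX_t = -\nabla V(X_t)\,dt + \sqrt{2\beta^{-1}}\,dW_t \) with generator \( \mathcal{L} = -\nabla V\cdot\nabla + \beta^{-1}\Delta \) and invariant measure \( \mu \propto e^{-\beta V} \), the committor \(q\) between metastable sets \(A\) and \(B\) solves
\[
\mathcal{L} q = 0 \ \ \text{in } \Omega\setminus(A\cup B), \qquad q|_{A}=0, \qquad q|_{B}=1 .
\]
The classical variational formulation minimizes the Dirichlet-form objective
\[
\min_{q}\ \int |\nabla q(x)|^{2}\, e^{-\beta V(x)}\,dx \quad \text{subject to the boundary conditions},
\]
whose parameter gradient requires the mixed second-order derivatives \(\nabla_{\theta}\nabla_{x} q_{\theta}\) when \(q_{\theta}\) is a neural network. A semigroup-type formulation replaces the differential operator with \(I - e^{\delta\mathcal{L}}\),
\[
\min_{q}\ \frac{1}{\delta}\,\big\langle q,\ (I - e^{\delta\mathcal{L}})\,q \big\rangle_{\mu}
\;=\;
\min_{q}\ \frac{1}{\delta}\,\mathbb{E}_{x\sim\mu}\Big[\, q(x)\,\big(q(x) - \mathbb{E}[\,q(X_{\delta}) \mid X_{0}=x\,]\big) \Big],
\]
which is proportional to \(\int |\nabla q|^{2}\,d\mu\) in the limit \(\delta\to 0\) and contains no spatial derivatives, since \(e^{\delta\mathcal{L}}\) can be estimated by short runs of the SDE.

Below is a minimal PyTorch-style sketch of such a derivative-free loss, again only an illustration of the general idea: the function name, the Euler-Maruyama sampling of the semigroup, and the boundary-penalty treatment are my assumptions, not the paper's implementation.

import torch

def semigroup_committor_loss(net, x, grad_V, beta, delta, lam, in_A, in_B):
    # net    : neural network q_theta mapping (n, d) points to committor values
    # x      : (n, d) samples assumed drawn from the equilibrium measure mu ~ exp(-beta V)
    # grad_V : callable returning grad V(x) with shape (n, d)
    # delta  : small time step of the semigroup e^{delta L}
    # lam    : penalty weight for the boundary conditions
    # in_A, in_B : boolean masks for samples lying in the reactant/product sets

    # One Euler-Maruyama step approximates a draw from the transition kernel of
    # e^{delta L} started at x; the sampling does not depend on the network
    # parameters, so the Monte Carlo gradient estimator stays unbiased.
    noise = torch.randn_like(x)
    x_next = x - delta * grad_V(x) + (2.0 * delta / beta) ** 0.5 * noise

    q_x = net(x).squeeze(-1)
    q_next = net(x_next).squeeze(-1)

    # (1/delta) * E_mu[ q(x) * (q(x) - q(X_delta)) ] approximates the Dirichlet form.
    # No spatial derivatives of the network appear, so backprop only touches
    # first-order parameter derivatives of q_theta.
    dirichlet_term = (q_x * (q_x - q_next)).mean() / delta

    # Soft boundary conditions q = 0 on A and q = 1 on B (assumes the batch
    # contains samples from both sets; a real implementation would handle
    # empty masks and might impose the conditions differently).
    penalty = lam * ((q_x[in_A] ** 2).mean() + ((q_x[in_B] - 1.0) ** 2).mean())
    return dirichlet_term + penalty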