A good way to mark this occasion is to try to read a new paper that seems to be a major breakthrough in understanding and harnessing the magic of Transformers:

"Uncovering mesa-optimization algorithms in Transformers"

"we demonstrate that minimizing a generic autoregressive loss gives rise to a subsidiary gradient-based optimization algorithm running inside the forward pass of a Transformer. This phenomenon has been recently termed mesa-optimization"
 
"Moreover, we find that the resulting mesa-optimization algorithms exhibit in-context few-shot learning capabilities,
independently of model scale. Our results therefore complement previous reports characterizing the
emergence of few-shot learning in large-scale LLMs"

 

Date: 2023-09-15 04:32 am (UTC)
From: [personal profile] timelets
Thanks

Date: 2023-09-15 09:06 am (UTC)
From: [personal profile] juan_gandhi

Who would have guessed before that gradient descent would be strangely useful in doing "AI".

Date: 2023-09-15 01:58 pm (UTC)
From: [personal profile] juan_gandhi

This is amazing.
