dmm: (Default)
[personal profile] dmm
"Aurochs: An Architecture for Dataflow Threads", by a team from Stanford.

They describe dataflow-style acceleration for irregular data structures such as hash tables and B-trees. This might be an answer to my long-standing desire for good parallelization of less regular, less uniform computations. And the best thing is that this is a software solution: one does not have to build specialized processors to take advantage of it:

conferences.computer.org/iscapub/pdfs/ISCA2021-4ghucdBnCWYB7ES2Pe4YdT/333300a402/333300a402.pdf
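As a rough illustration of the general idea (this is not Aurochs itself, just a minimal sketch of the dataflow style of parallelism the paper targets): independent, latency-bound probes into an irregular structure like a hash table can be expressed as tasks whose results flow into a dependent reduction. The table contents and the thread-pool setup below are invented for the example.

```python
# Toy sketch of dataflow-style parallelism over an irregular structure.
# Independent hash-table probes run as tasks; their results flow into
# a dependent sum. (Illustrative only; not the Aurochs mechanism.)
from concurrent.futures import ThreadPoolExecutor

# Hypothetical hash table standing in for an irregular data structure.
table = {f"key{i}": i * i for i in range(100)}

def probe(key):
    # An irregular, pointer-chasing-style lookup.
    return table.get(key, 0)

with ThreadPoolExecutor(max_workers=4) as pool:
    # Each probe is an independent dataflow task.
    futures = [pool.submit(probe, f"key{i}") for i in range(10)]
    # The reduction consumes results as they become available.
    total = sum(f.result() for f in futures)

print(total)  # sum of squares 0..9 = 285
```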

"Thinking Like Transformers", by a team from Israel

"What is the computational model behind a Transformer? Where recurrent neural networks have direct parallels in finite state machines, allowing clear discussion and thought around architecture variants or trained models, Transformers have no such familiar parallel. In this paper we aim to change that, proposing a computational model for the transformer-encoder in the form of a programming language."

arxiv.org/abs/2106.06981
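To give a feel for the kind of programming language the paper proposes (RASP), here is a simplified Python sketch of two of its core primitives, select and aggregate: select builds a boolean attention pattern from keys and queries, and aggregate averages the selected values at each position. The function names match the paper; the exact semantics here are simplified for illustration.

```python
# Simplified sketch of RASP-style primitives from "Thinking Like Transformers".

def select(keys, queries, predicate):
    # selector[q][k] is True when predicate(keys[k], queries[q]) holds;
    # this plays the role of an attention pattern.
    return [[predicate(k, q) for k in keys] for q in queries]

def aggregate(selector, values):
    # For each query position, average the values at selected key positions.
    out = []
    for row in selector:
        picked = [v for v, sel in zip(values, row) if sel]
        out.append(sum(picked) / len(picked) if picked else 0)
    return out

# Example in the spirit of the paper: reversing a sequence by selecting,
# for position i, the token at position (length - 1 - i).
tokens = [10, 20, 30, 40]
n = len(tokens)
indices = list(range(n))
flipped = select(indices, [n - 1 - i for i in indices], lambda k, q: k == q)
print(aggregate(flipped, tokens))  # [40.0, 30.0, 20.0, 10.0]
```

The point of the model is that programs written with such primitives correspond to computations a transformer-encoder can express, which gives a vocabulary for reasoning about trained models.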

So much is going on; time has suddenly become much scarcer, and I can't manage to read everything I usually read... About ten days ago everything changed rather abruptly, the dynamics became completely different, feels like a transition period...
