Jun. 23rd, 2021

dmm: (Default)
"Aurochs: An Architecture for Dataflow Threads", by a team from Stanford.

The authors show how to do dataflow-style acceleration for hash tables, B-trees, and similar structures. This might be an answer to my long-standing desire to have good parallelization for less regular and less uniform computations. Best of all, this is a software solution: one does not have to build specialized processors to take advantage of it:

conferences.computer.org/iscapub/pdfs/ISCA2021-4ghucdBnCWYB7ES2Pe4YdT/333300a402/333300a402.pdf

"Thinking Like Transformers", by a team from Israel.

"What is the computational model behind a Transformer? Where recurrent neural networks have direct parallels in finite state machines, allowing clear discussion and thought around architecture variants or trained models, Transformers have no such familiar parallel. In this paper we aim to change that, proposing a computational model for the transformer-encoder in the form of a programming language."

arxiv.org/abs/2106.06981
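To make the RNN/finite-state-machine parallel from the abstract concrete, here is a toy sketch of the FSM side of the analogy: a two-state machine that tracks the parity of a bit string, which is exactly the kind of sequential state update a recurrent network computes step by step. (The parity example is my own illustration, not taken from the paper.)

```python
# A minimal finite state machine: two states, one transition per input bit.
# An RNN processing the same sequence would carry the analogous information
# in its hidden state, updated once per input symbol.

TRANSITIONS = {
    ("even", 0): "even",
    ("even", 1): "odd",
    ("odd", 0): "odd",
    ("odd", 1): "even",
}

def parity(bits):
    """Fold the input sequence through the transition table."""
    state = "even"
    for b in bits:
        state = TRANSITIONS[(state, b)]
    return state

print(parity([1, 0, 1, 1]))  # three ones -> "odd"
```

The paper's point is that Transformers have no such obvious sequential-state reading, which is why the authors propose a dedicated programming language as the analogous computational model.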

So much is happening; time has suddenly become much scarcer, and I can't manage to read everything I usually read... About ten days ago everything changed rather abruptly, the dynamics became quite different, feels like a transition period...

Dataflow matrix machines (by Anhinga anhinga)
