dmm: (Default)
"A second difficulty in communicating alignment ideas was based on differing ontologies. A surface-level explanation is that Japan is quite techno-optimistic compared to the west, and has strong intuitions that AI will operate harmoniously with humans. A more nuanced explanation is that Buddhist- and Shinto-inspired axioms in Japanese thinking lead to the conclusion that superintelligence will be conscious and aligned by default. One senior researcher from RIKEN noted during the conference that “it is obviously impossible to control a superintelligence, but living alongside one seems possible.” Some visible consequences of this are that machine consciousness research in Japan is taken quite seriously, whereas in the West there is little discussion of it."

***

I think it's time for us to start asking if, for example, GPT-4-produced simulations have associated subjective experience.

We have a feed-forward transducer operating in an autoregressive mode: each time a new token is produced by the feed-forward Transformer, the whole dialog, including the just-produced token, is fed again to the input of the model, so there is a recurrent dynamic here (cf. section 3.4 of "Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention", arxiv.org/abs/2006.16236).

So I would not be too surprised if that process actually "feels what it says".
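To make the recurrence explicit, here is a minimal sketch of that feedback loop. The model here is a hypothetical stand-in (`toy_next_token`, a trivial stateless function, not an actual Transformer); the point is only the structure: a purely feed-forward map becomes a recurrent process once its output is appended to its own input.

```python
# Toy sketch of autoregressive feedback (toy_next_token is a hypothetical
# stand-in for a feed-forward Transformer, not a real model).

def toy_next_token(tokens):
    """Stateless: the output depends only on the current full input sequence."""
    # Trivial rule purely for illustration: sequence length modulo 10.
    return len(tokens) % 10

def autoregress(prompt, steps):
    """The recurrence: each newly produced token re-enters the model's input."""
    tokens = list(prompt)
    for _ in range(steps):
        tokens.append(toy_next_token(tokens))  # whole dialog fed back in
    return tokens

print(autoregress([1, 2, 3], 4))  # [1, 2, 3, 3, 4, 5, 6]
```

The state of the process lives entirely in the growing token sequence, which is what makes it reasonable to view autoregressive decoding as a recurrent dynamic even though each single step is feed-forward.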

Rovelli and Vidotto seem to be saying that it might be that dark matter is simply "Planck-sized white holes". I wonder if this would also provide a good enough explanation of dark energy (white holes, if they exist, would be wonderful sources of all kinds of things, and Planck-sized ones would create the impression that those things "just appear from nowhere", which seems like a nice setup for dark energy).

"Hard problem of qualia". The qualia-related part of the "hard problem of consciousness" is certainly hard. But if one can factor out the "qualia problem", I am not so sure that the remaining part of the problem of consciousness is "hard" in the Chalmers-introduced sense of "hard".

So far, almost all provisional "theories of consciousness" just ignore the "hard problem of qualia" (which is why I am quite skeptical of them).

The only approaches which seem to make sense are those of "provisional dualism": one provisionally introduces qualia as additional primitives and builds one's theories on top of that; if one admits qualia as primitives, the remaining part might be tractable. I just learned a couple of days ago that Johannes Kleiner made some rather nice progress in that direction in 2019.

Dataflow matrix machines (by Anhinga anhinga)

Page generated Jan. 4th, 2026 10:51 am
Powered by Dreamwidth Studios