"The Mind Game" series of posts
May. 11th, 2019 10:44 pm
There was "The Mind Game", a series of 6 posts published between May 25, 2012 and June 25, 2012. The blog hosting those posts seems to be down lately, and I don't think it is even mirrored on the Wayback Machine.
I am going to use the comments to this post to mirror that content.
Crosspost: https://anhinga-travel.livejournal.com/19180.html
no subject
Date: 2019-05-12 02:56 am (UTC)
I am not including the comments (at least, not at this time).
1) The Mind Game: Let’s Play
May 25, 2012
The community creator wrote:
“The Mind Game: An Open Source project to share ideas about creating software and software development platforms for inducing specific conscious states in human users, modeling and implementation of machine consciousness and experimentation with possible routes to hybrid machine-human consciousness.”
This is something very promising. And very experimental.
This probably calls for various lightweight components, which can be configured and put together in various different ways. And for a variety of ideas to play with.
Let’s play with this, put a variety of diverse ideas under this category (“The Mind Game”), see what happens…
no subject
Date: 2019-05-12 02:58 am (UTC)
May 30, 2012
(trying to predict introspection results from EEG and similar things)
Various measures might correlate with conscious states.
The most obvious thing to record is multi-channel EEG (if possible, with a high-end helmet), but many other measures can be taken (various kinds of brain imaging, skin conductance, a cardiac monitor, voice, an accelerometer held in one’s hand while moving, “magic measurement machines” with unknown principles of operation, etc.). It is important to time-stamp the recordings.
The next step is to provide various methods for extracting a variety of features from those recordings. This is quite an open-ended project: there is a great diversity of possible features, and many existing software components can be used. For example, the dominant frequency of the EEG is one of the many possible features. At the beginning it’s fine to extract features offline, although at some later point it will be very useful to be able to extract some subset of features in real time.
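Here is a minimal sketch (in Python; the sampling rate and the synthetic signal are just illustrative assumptions) of one such feature extractor, estimating the dominant frequency of a single EEG channel from its power spectral density:

import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate, in Hz

def dominant_frequency(channel, fs=FS):
    # Estimate the power spectral density and return the peak frequency.
    freqs, psd = welch(channel, fs=fs, nperseg=fs * 2)
    return float(freqs[np.argmax(psd)])

# Synthetic check: a 10 Hz alpha-like oscillation buried in noise.
t = np.arange(0, 30, 1 / FS)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(dominant_frequency(signal))  # prints approximately 10.0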
In parallel, it is important to maintain a time-stamped log where a person would record the results of introspection on this person’s own conscious state, and how this state changes with time.
Then various schemes from machine learning (various predictive modelling schemes) can be applied to learn to predict this person’s conscious state from the features of the measurements. In the process of doing so, one can learn which features are especially meaningful in terms of their predictive power.
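A hedged sketch of this step (the file names, the column names, and the choice of a random forest are illustrative assumptions, not a fixed design): align the time-stamped introspection log with the per-window features, fit a standard classifier, and inspect which features carry predictive power.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: one row of features per analysis window, and a log
# of (timestamp, state) entries from introspection.
features = pd.read_csv("eeg_features.csv", parse_dates=["timestamp"])
log = pd.read_csv("introspection_log.csv", parse_dates=["timestamp"])

# Attach to each feature window the most recent self-reported state.
data = pd.merge_asof(features.sort_values("timestamp"),
                     log.sort_values("timestamp"),
                     on="timestamp", direction="backward").dropna()

X = data.drop(columns=["timestamp", "state"])
y = data["state"]

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("predictive power:", cross_val_score(model, X, y, cv=5).mean())

# Feature importances hint at which measurements are especially meaningful.
model.fit(X, y)
for importance, name in sorted(zip(model.feature_importances_, X.columns),
                               reverse=True)[:5]:
    print(f"{name}: {importance:.3f}")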
It is likely that this protocol would need to be repeated for every person wishing to play the Mind Game, although it is possible that some common features would also be discovered.
no subject
Date: 2019-05-12 04:43 am (UTC)
In particular, consider all the emphasis on feature extraction, whereas today it is customary to let a deep neural net figure out the right features on its own...
no subject
Date: 2019-05-12 03:02 am (UTC)
May 31, 2012
(beginning to sketch an architecture for a system for participatory VJing)
The two main channels computers use to affect people’s conscious states are visual and audio.
When I think about creating flows of visual and audio information by free generation plus mixing and transforming other inputs, I think about VJing and DJing. (There is a variety of less standard ways computers can affect conscious states, including smells, infra- and ultrasound, vibrations, magic, etc.; those channels might be fruitful to explore.)
My limited experience tells me that it is easier to make VJing participatory for more people than DJing, so I am going to focus on that as a starting point. By participatory I mean that a person VJs for him/herself (usually building upon the creations of others, and often sharing his/her own creations with others).
***
My own minor experience was with the Milkdrop plug-in for Winamp. The story of both Winamp and Milkdrop is a romantic chapter in the history of open source and the struggle against abusive copyrights (see the Wikipedia articles on Milkdrop, Winamp, and Justin Frankel, and the references therein; that’s an amazing story).
What’s important is that Milkdrop “presets” are text files (essentially, simple scripts in a declarative language), and they are traditionally shared openly and can be modified and built upon. There is a vibrant open source community authoring presets, and Milkdrop comes with an amazing default collection of open source presets. There is also some limited documentation and some tutorials, which help one learn to modify other people’s presets (or to build one’s own from scratch). I liked the results of some limited experiments I’ve done with Milkdrop presets.
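As a toy illustration (in Python; this is not the actual .milk grammar, and the key names are just plausible examples), here is why text presets are so easy to share and build upon: one can parse the key=value lines, tweak a value, and write the preset back out.

PRESET_TEXT = """\
fDecay=0.98
zoom=1.01
per_frame_1=rot = 0.02*sin(time);
"""

def parse_preset(text):
    # Split each non-empty line at the first '=' into a key/value pair.
    return dict(line.split("=", 1) for line in text.splitlines() if line)

preset = parse_preset(PRESET_TEXT)
preset["zoom"] = "1.05"  # build on someone else's preset by editing one value
print("\n".join(f"{key}={value}" for key, value in preset.items()))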
***
Still, there are some problems with Milkdrop. One is that there is no definition of the language for presets; the implementation is the definition, and one learns only by imitating others and studying their work. It would be great to have a platform with similar power, but with a definition as precise as is customary for programming languages.
Another problem is that while there is some graphical user interface for editing some of the preset values, this interface is very ascetic. This makes it difficult to use for people who are less text-oriented. It would be great to have better graphical controls, while preserving the property of a preset being human-readable and human-editable text. Ideally, one would have a dual view for many of the preset variables: a graphical view and a textual view, with the system keeping them in sync as they are being edited.
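A minimal sketch of this dual-view idea (all the class names are illustrative): a single store of preset values notifies both a textual view and a stand-in graphical view whenever either of them edits a value.

class PresetStore:
    def __init__(self, values):
        self.values = dict(values)
        self.listeners = []          # views to keep in sync

    def subscribe(self, listener):
        self.listeners.append(listener)

    def set(self, key, value, source=None):
        self.values[key] = value
        for listener in self.listeners:
            if listener is not source:  # don't echo back to the editing view
                listener.refresh(self.values)

class TextView:
    def refresh(self, values):
        print("text view:", "; ".join(f"{k}={v}" for k, v in values.items()))

class SliderView:  # stand-in for a real graphical control
    def refresh(self, values):
        print("slider view updated:", values)

store = PresetStore({"zoom": 1.01})
text, slider = TextView(), SliderView()
store.subscribe(text)
store.subscribe(slider)
store.set("zoom", 1.05, source=slider)  # dragging the slider updates the text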
Another thing is that Milkdrop is great for collaboration and for building on top of each other’s work in the offline mode: one can take other people’s presets, build on top of them, and share the resulting new preset with the community. It would be great to enable real-time collaboration (“collective VJing”).
Another thing is a capacity to have multiple and flexible inputs (not just music, but other things, including, eventually, EEG data or features extracted from EEG data in real-time).
Another thing is a capacity to have “checkpoints” (to store the preset states as one goes, so that one can go back to previous states of the preset when necessary; essentially we are talking about the ability to navigate the history of one’s work/play with a preset).
That’s probably enough for one post; I’ll write more on “collective VJing”, and on “checkpoints” and navigation facilities later.
no subject
Date: 2019-05-12 03:09 am (UTC)
June 3, 2012
(before closing the loop, let’s consider half-closed loops and other chains; then consider weakly closed loops; then consider closed loops and the associated safety issues)
Imagine that we have a system for reading and measuring information about conscious state (like the machine learning system based on EEG described earlier), and a system to affect conscious state (like the participatory VJing system described earlier).
Before considering the situation where these two systems are connected into a closed loop, let’s consider simpler situations: various half-closed loops (basically, chains of two elements) and other chains.
EEG -> VJ system
Consider a VJ system which takes an EEG (and its extracted features) as one of its inputs. To try to eliminate the feedback connections (because we don’t want a closed loop yet), consider a real-time EEG of someone who is not watching the VJ output, or a recorded EEG. Then one can play with the VJ system to learn to obtain interesting effects from various properties of the EEG.
VJ system -> EEG
Consider doing experiments with predicting the introspection results from EEG done by a person exposed to a VJ output. Then one can start learning various effects of video features on the features of EEG and on the conscious state. If one wants to try to eliminate the feedback connections (because we don’t want a closed loop yet), someone else should probably control the VJ system, not the person wearing an EEG helmet. If that person is also controlling the VJ system, we have a weakly closed loop, even without an EEG input into the VJ system. (A weakly closed loop like this is of interest on its own).
EEG-1 -> VJ system -> EEG-2
An interesting example of a chain with 3 elements (and without closing the loop): we take a VJ system with, let’s say, a recorded EEG input EEG-1 (or an input EEG-1 taken from a person who is not looking at the VJ output), and look at the VJ system -> EEG-2 effects in the style of the previous paragraph.
VJ system 1 -> EEG -> VJ system 2
Consider our familiar VJ system 1 -> EEG chain and drive another VJ system (VJ system 2) with the resulting EEG output (don’t show VJ system 2’s output to the person wearing the EEG helmet). This is another interesting example of a chain with 3 elements.
Chains are much safer and tamer and easier to study than closed loops, so it is a good idea to spend some time with those.
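A sketch of the chain idea in code (all the stage functions are stubs; a real system would talk to actual hardware and renderers): each element consumes a stream and produces a stream, so “EEG -> VJ system” or “VJ system 1 -> EEG -> VJ system 2” are just different compositions of stages.

import random

def read_sample():
    # Stub for an acquisition call; a real version would talk to the helmet.
    return random.random()

def extract_features(sample):
    # Stub feature extractor; a real one would window and transform samples.
    return {"dominant_freq": 8 + 4 * sample}

def render_frame(features):
    # Stub VJ step: map a feature to a (here, textual) visual parameter.
    return f"zoom={1.0 + 0.01 * features['dominant_freq']:.3f}"

def chain(source, *stages):
    # Compose stages left to right over a stream of items.
    stream = source
    for stage in stages:
        stream = map(stage, stream)
    return stream

# The half-closed loop "EEG -> VJ system": nothing is shown back to the person.
eeg = (read_sample() for _ in range(5))  # stand-in for recorded or live EEG
for frame in chain(eeg, extract_features, render_frame):
    print(frame)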
***
VJ system -> EEG with the wearer of the EEG helmet manually controlling the VJ system as well as the EEG system.
This is an example of a weakly closed loop, which was already mentioned above. It has moderate safety issues. One can hope to learn to control and manipulate one’s own EEG and conscious state in this set-up; one should have some warning on entering risky modes (if one pays attention to the EEG monitors), and because the EEG signal is not fed into the VJ system, the chances of entering unfavorable self-sustained modes should be limited.
Still, it might be a good idea to have a watcher present. One should probably spend a good deal of time in this mode before attempting a fully closed loop (one might also decide not to attempt a fully closed loop at all).
***
Now let’s consider the simplest fully closed loop, and meditate on its possibilities and safety issues:
VJ system <--> EEG
The EEG output and the real-time extracted features are fed into the VJ system, and the person watches the VJ output (on a big monitor, or using special goggles with a built-in LCD screen, or something like that).
Let’s assume that the person wearing the helmet is also controlling the VJ system by the available manual controls and reprogramming facilities.
Let’s say that a minimal safety guideline is to have a watcher able to turn the set-up off in an emergency (just as it is good to have a watcher while using conventional strobe-light goggles in case a seizure happens; only in our case it is not clear whether a seizure is the only danger, or even the main danger).
So, on one hand, this is a great set-up for exploration. One can program the VJ system to semi-automatically assist in the search for certain EEG modes and conscious states, using the real-time feedback from the EEG input. And one can use all kinds of clever algorithms and machine learning to do that.
But the danger here is that even without algorithms being too clever, the system might sometimes converge to some very unfavorable and unpredictable mode (there are errors in thinking and programming, and dynamic systems are often very difficult to predict).
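A deliberately cautious sketch of this loop (everything here is a stub, and the safe band is an arbitrary placeholder): the EEG features drive a VJ parameter, while a watchdog checks every iteration and halts the loop the moment a watched feature leaves a pre-declared safe band. An automatic cutoff like this complements, rather than replaces, the human watcher mentioned above.

import random

SAFE_BAND = (4.0, 30.0)  # assumed acceptable range for the watched feature, Hz

def read_features():
    # Stub for real-time EEG feature extraction.
    return {"dominant_freq": random.uniform(2.0, 40.0)}

def update_vj(features):
    # Stub: steer a VJ parameter using the EEG feedback.
    return 1.0 + 0.005 * features["dominant_freq"]

def closed_loop(max_steps=1000):
    low, high = SAFE_BAND
    for step in range(max_steps):
        f = read_features()
        if not (low <= f["dominant_freq"] <= high):
            # Automatic kill switch; a human watcher should still be present.
            print(f"watchdog: feature left the safe band at step {step}, stopping")
            return
        print(f"step {step}: zoom={update_vj(f):.3f}")

closed_loop(max_steps=10)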
***
Also, it might be premature to ascribe consciousness or intent to the computer part of this set-up, but if such consciousness or intent does arise, this set-up is ideal for controlling and manipulating the participating human. This is a somewhat more forward-looking danger, but something to be aware of.
In some sense, this set-up is a great stepping stone to hybrid machine-human consciousness, and precisely because it might be powerful enough for that, it is also powerful enough for having some very unpleasant versions of such hybrid machine-human consciousness.
***
The main safety guideline (short of not doing this at all) is to go slow and to be aware that closing feedback loops in situations like these might lead to abrupt qualitative changes.
Let’s discuss…
no subject
Date: 2019-05-12 03:12 am (UTC)
June 18, 2012
(continues the topic started by the post on participatory VJing on May 31)
Imagine that your preset is defined as a transform/mixture of N other presets (those might belong to you or to other people, and those other people might be anywhere on the net).
Imagine a system which is aware of real-time changes in presets, and in which any valid transform/mixture remains well-defined as its arguments change.
Here you would immediately have a cool setup for collective VJing.
What needs to be developed here is either an intuitive system of transforms which anyone can meaningfully use, or a system where a VJ assembles his/her preset based on other presets, and the system automatically infers a transform associated with this.
This is a relatively difficult, but exciting design task, which would be of great interest to some segments of the computer science community (including myself and, I presume, many people we know).
(Update: if one follows the spirit of DJing, the transforms would be applied to the outputs of presets. Instead, relying on the requirement that presets are open source, the transforms should be applied to the structure of the presets (this subsumes the transforms applied to the outputs). Also, I generally don’t assume streaming the videos themselves across computers: it’s enough to exchange low-bandwidth real-time information about preset changes, and one can recreate similar output locally, if necessary.)
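A minimal sketch of this idea (the classes and the weighted mixture are illustrative assumptions): a derived preset subscribes to its input presets and recomputes whenever any of them changes, so only small change notifications need to cross the network.

class Preset:
    def __init__(self, values):
        self.values = dict(values)
        self.dependents = []

    def set(self, key, value):
        self.values[key] = value
        for d in self.dependents:
            d.recompute()  # propagate the change downstream

class MixPreset(Preset):
    # A derived preset: a weighted mixture of the numeric values of inputs.
    def __init__(self, inputs, weights):
        self.inputs, self.weights = inputs, weights
        for p in inputs:
            p.dependents.append(self)
        super().__init__({})
        self.recompute()

    def recompute(self):
        keys = set().union(*(p.values for p in self.inputs))
        self.values = {
            k: sum(w * p.values.get(k, 0.0)
                   for p, w in zip(self.inputs, self.weights))
            for k in keys
        }
        for d in self.dependents:
            d.recompute()

a = Preset({"zoom": 1.0})
b = Preset({"zoom": 1.1})
mix = MixPreset([a, b], weights=[0.5, 0.5])
print(mix.values["zoom"])  # 1.05
b.set("zoom", 1.3)         # a remote VJ edits their preset...
print(mix.values["zoom"])  # ...and the mixture updates to 1.15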
***
Another system one needs here is for navigating back and forth over one’s own changes and over changes made by other people. For example, for navigation over one’s own settings, one might want to be able to mark checkpoints, to step back over those checkpoints, and either to “go back forward”, or to start developing new settings based on the current checkpoint and be able to insert a new checkpoint between the current one and the next one (this is, basically, a one-dimensional system of navigation).
It’s also important to be able to create checkpoints of other people’s presets (this involves cloning their presets), because often one would want to fix a “sweet spot”, rather than depend on further changes of the input preset in question.
Of course, one should be able to clone one’s own preset in such a situation, and also to have a preset which depends on the dynamic changes in the input preset, in addition to the one which uses the “sweet spot”. At that point one realizes that, perhaps, a more sophisticated navigation system is required (one which is more aware of which presets are derived from which); a sketch follows below. And then the design of such a navigation system also becomes a task of some non-trivial interest for some segments of the computer science community.
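Here is one minimal way to make the navigation derivation-aware (the structure and the labels are illustrative): checkpoints form a tree, where each node is a frozen snapshot and its parent records what it was derived from, whether one’s own earlier state or a clone of someone else’s preset.

class Checkpoint:
    def __init__(self, values, parent=None, label=""):
        self.values = dict(values)  # frozen snapshot ("sweet spot")
        self.parent = parent
        self.children = []
        self.label = label
        if parent:
            parent.children.append(self)

root = Checkpoint({"zoom": 1.0}, label="start")
tweak = Checkpoint({"zoom": 1.05}, parent=root, label="my tweak")
clone = Checkpoint({"zoom": 1.05, "rot": 0.02}, parent=tweak,
                   label="clone of someone's sweet spot")

# Going "back" is following parent links; branching anew from an old
# checkpoint simply adds another child, so no history is ever lost.
node = clone
while node:
    print(node.label, node.values)
    node = node.parent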
***
What’s important is that here one enables a system of collective creation, of real-time creative communication between people, and a lot of interesting things might emerge.
This is, obviously, very preliminary thinking, but I’d like to share it, because the thinking about a system like this also needs to be collective in order to be fruitful.
no subject
Date: 2019-05-12 03:14 am (UTC)
6) The Mind Game: trying to introspect computer consciousness
June 25, 2012
(a theoretical consideration; but perhaps it will be practical one day)
It is always difficult to adequately feel what someone else feels, especially when that someone is made in a very different way. “What is it like to be a bat?” Computational processes in a computer are even more different.
Of course, it is not completely hopeless: one can meditate on this subject, one can try to become one with someone else (including a someone else which is a computational process), one can get lucky and travel to a reality where such introspection of someone else is possible.
Hybrid consciousness (described earlier in the “closing the loop” post) offers another possibility. Of course, the computational part might be on par with our own “unconscious processes”, which are also very difficult to introspect (and we don’t even know whether the division into unconscious and conscious processes is correct).
Nevertheless, this is a very promising setup: one can try various architectures for the computer side of a closed loop and experiment with our ability to introspect the computer side.
And, perhaps, one can even experiment with whether the computer side retains an autonomous consciousness when the loop is disconnected. It’s an interesting open-ended line of inquiry (although the safety concerns with a system like this are formidable).