<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dw="https://www.dreamwidth.org">
  <id>tag:dreamwidth.org,2017-07-05:3235132</id>
  <title>Dataflow matrix machines (by Anhinga anhinga)</title>
  <subtitle>Dataflow matrix machines (by Anhinga anhinga)</subtitle>
  <author>
    <name>Dataflow matrix machines (by Anhinga anhinga)</name>
  </author>
  <link rel="alternate" type="text/html" href="https://dmm.dreamwidth.org/"/>
  <link rel="self" type="text/xml" href="https://dmm.dreamwidth.org/data/atom"/>
  <updated>2024-03-23T01:09:56Z</updated>
  <dw:journal username="dmm" type="personal"/>
  <entry>
    <id>tag:dreamwidth.org,2017-07-05:3235132:82014</id>
    <link rel="alternate" type="text/html" href="https://dmm.dreamwidth.org/82014.html"/>
    <link rel="self" type="text/xml" href="https://dmm.dreamwidth.org/data/atom/?itemid=82014"/>
    <title>Vernor Vinge</title>
    <published>2024-03-23T01:09:56Z</published>
    <updated>2024-03-23T01:09:56Z</updated>
    <category term="technological singularity"/>
    <category term="remember"/>
    <category term="scifi"/>
    <dw:security>public</dw:security>
    <dw:reply-count>5</dw:reply-count>
    <content type="html">Vernor Vinge died at 79 on March 20 (due to long decline from Parkinson's disease):&lt;br /&gt;&lt;br /&gt;&lt;a href="https://en.wikipedia.org/wiki/Vernor_Vinge"&gt;en.wikipedia.org/wiki/Vernor_Vinge&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=dmm&amp;ditemid=82014" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2017-07-05:3235132:69879</id>
    <link rel="alternate" type="text/html" href="https://dmm.dreamwidth.org/69879.html"/>
    <link rel="self" type="text/xml" href="https://dmm.dreamwidth.org/data/atom/?itemid=69879"/>
    <title>GreaterWrong viewer for LessWrong; Conjecture.dev</title>
    <published>2023-02-23T03:44:58Z</published>
    <updated>2023-02-23T04:06:14Z</updated>
    <category term="technological singularity"/>
    <category term="ai safety"/>
    <category term="understanding internals of ai"/>
    <category term="artificial intelligence"/>
    <category term="transformers"/>
    <dw:security>public</dw:security>
    <dw:reply-count>7</dw:reply-count>
    <content type="html">I am reading more and more LessWrong in recent months (mostly, after the Simulator theory by Janus (work done while at &lt;strong&gt;Conjecture&lt;/strong&gt;) has been posted there in September).&lt;br /&gt;&lt;br /&gt;I still think the Simulator theory is probably the single most important research breakthrough of 2022.&lt;br /&gt;&lt;br /&gt;These days LessWrong is dominated by writing related to AI safety (the topic is made particularly acute by the recent progress in LLMs: ChatGPT and even more capable Bing Chat; &lt;strong&gt;no consensus whatsoever, of course&lt;/strong&gt;, but I do think that GPT-3 release in May 2020 is, in some sense, an equivalent of the nuclear fission discovery on 19 December 1938, and that ChatGPT performance (+ Bing Chat clearly drastically enhanced capabilities even compared to that) is, in the same sense, an equivalent of the first working nuclear reactor on 2 December 1942, if one goes by &amp;quot;AI today is what nuclear energy has been back then&amp;quot; analogy).&lt;br /&gt;&lt;br /&gt;So, one thing which might be useful is that there is GreaterWrong alternative viewer (which looks different from LessWrong default viewer and which can be visually tuned in terms of presentation style; also different default front page for the site if one uses GreaterWrong). Which viewer is better might depend on your device (display, browser, etc).&lt;br /&gt;&lt;br /&gt;Another thing, &lt;strong&gt;Conjecture&lt;/strong&gt; people tend to produce some of the best, most interesting articles there.&lt;br /&gt;&lt;br /&gt;I'll put a few links into the comments.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=dmm&amp;ditemid=69879" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2017-07-05:3235132:67676</id>
    <link rel="alternate" type="text/html" href="https://dmm.dreamwidth.org/67676.html"/>
    <link rel="self" type="text/xml" href="https://dmm.dreamwidth.org/data/atom/?itemid=67676"/>
    <title>New Year resolution</title>
    <published>2023-01-08T05:42:26Z</published>
    <updated>2023-01-08T06:42:11Z</updated>
    <category term="understanding internals of ai"/>
    <category term="artificial intelligence"/>
    <category term="transformers"/>
    <category term="technological singularity"/>
    <category term="twitter"/>
    <category term="ai safety"/>
    <dw:security>public</dw:security>
    <dw:reply-count>18</dw:reply-count>
    <content type="html">To read my &lt;em&gt;&lt;strong&gt;https://twitter.com/home&lt;/strong&gt;&lt;/em&gt; more regularly (that's absolutely the best source of info at the moment).&lt;br /&gt;&lt;br /&gt;A small fraction of today's catch: &lt;br /&gt;&lt;br /&gt;New work by Janus&lt;br /&gt;&lt;br /&gt;A new involved take on AI safety/alignment&lt;br /&gt;&lt;br /&gt;&lt;em&gt;&lt;strong&gt;(What's the right way to organize all that information?)&lt;/strong&gt;&lt;/em&gt;&lt;br /&gt;&lt;br /&gt;Links are in the comments (I think the new work by Janus is more important even for alignment, and is just overall more important of the two topics of this post)...&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=dmm&amp;ditemid=67676" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2017-07-05:3235132:65388</id>
    <link rel="alternate" type="text/html" href="https://dmm.dreamwidth.org/65388.html"/>
    <link rel="self" type="text/xml" href="https://dmm.dreamwidth.org/data/atom/?itemid=65388"/>
    <title>Update on AI progress and AI safety</title>
    <published>2022-12-04T16:18:56Z</published>
    <updated>2022-12-04T16:18:56Z</updated>
    <category term="ai safety"/>
    <category term="technological singularity"/>
    <category term="program synthesis"/>
    <category term="artificial intelligence"/>
    <category term="transformers"/>
    <dw:security>public</dw:security>
    <dw:reply-count>8</dw:reply-count>
    <content type="html">AI-safety-wise, the write-up, &lt;a href="https://scottaaronson.blog/?p=6823" rel="bookmark" title="Permanent Link: My AI Safety Lecture for UT Effective Altruism"&gt;My AI Safety Lecture for UT Effective Altruism&lt;/a&gt; by Scott Aaronson is  very nice reasonably objective and theory-friendly overview of the current state of AI safety as a field of science.&lt;br /&gt;&lt;br /&gt;AI-progress-wise, ChatGPT based on roughly speaking GPT-3.5 has been released recently, with people doing tons of  interesting things with it, including meaningful writing and software  generation... This seems to be another major step-up.&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=dmm&amp;ditemid=65388" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2017-07-05:3235132:64931</id>
    <link rel="alternate" type="text/html" href="https://dmm.dreamwidth.org/64931.html"/>
    <link rel="self" type="text/xml" href="https://dmm.dreamwidth.org/data/atom/?itemid=64931"/>
    <title>Conferences; research updates</title>
    <published>2022-11-17T01:44:04Z</published>
    <updated>2022-11-17T01:44:04Z</updated>
    <category term="machine learning"/>
    <category term="anthropic ai"/>
    <category term="conference"/>
    <category term="transformers"/>
    <category term="julia"/>
    <category term="technological singularity"/>
    <category term="ai safety"/>
    <category term="physics"/>
    <category term="understanding internals of ai"/>
    <category term="philosophy"/>
    <category term="artificial intelligence"/>
    <dw:security>public</dw:security>
    <dw:reply-count>14</dw:reply-count>
    <content type="html">This week, Nov 17-18, Thu-Fri, 8am-11:45am Boston time, &lt;b&gt;&amp;quot;Quantum physics and the first-person perspective&amp;quot;&lt;/b&gt;: &lt;a href="https://www.essentiafoundation.org/quantum-physics-and-the-first-person-perspective/seeing/"&gt;www.essentiafoundation.org/quantum-physics-and-the-first-person-perspective/seeing/&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;strong&gt;JuliaCon 2023&lt;/strong&gt;, &lt;a href="https://juliacon.org/2023/"&gt;juliacon.org/2023/&lt;/a&gt;  the call for proposals is posted, deadline Dec 18: &lt;a href="https://pretalx.com/juliacon2023/cfp"&gt;pretalx.com/juliacon2023/cfp&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;I've spent more quality time focusing of two breakthroughs in understanding the nature and the behavior of machine learning models which came from the &amp;quot;penumbra&amp;quot; of &amp;quot;prosaic alignment&amp;quot; start-ups and which &lt;strong&gt;I wrote about in my previous two posts&lt;/strong&gt;. &lt;br /&gt;&lt;br /&gt;&lt;strong&gt;&amp;quot;Grokking is (more or less) solved.&amp;quot;&lt;/strong&gt; I took brief notes between Oct 21 and Oct 23: &lt;a href="https://github.com/anhinga/2022-notes/tree/main/Grokking-is-solved"&gt;github.com/anhinga/2022-notes/tree/main/Grokking-is-solved&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;strong&gt;&amp;quot;Generative autoregressive models are similators.&amp;quot;&lt;/strong&gt; I took extensive notes between Oct 5 and Oct 23: &lt;a href="https://github.com/anhinga/2022-notes/tree/main/Generative-autoregressive-models-are-similators"&gt;github.com/anhinga/2022-notes/tree/main/Generative-autoregressive-models-are-similators&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;I am continuing to develop thoughts related to these topics, I am going to gradually write more about those topics in the comments.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=dmm&amp;ditemid=64931" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2017-07-05:3235132:64434</id>
    <link rel="alternate" type="text/html" href="https://dmm.dreamwidth.org/64434.html"/>
    <link rel="self" type="text/xml" href="https://dmm.dreamwidth.org/data/atom/?itemid=64434"/>
    <title>Generative autoregressive models are simulators</title>
    <published>2022-09-21T07:25:52Z</published>
    <updated>2022-09-21T07:27:56Z</updated>
    <category term="technological singularity"/>
    <category term="physics"/>
    <category term="ai safety"/>
    <category term="philosophy"/>
    <category term="understanding internals of ai"/>
    <category term="artificial intelligence"/>
    <category term="machine learning"/>
    <category term="transformers"/>
    <dw:security>public</dw:security>
    <dw:reply-count>9</dw:reply-count>
    <content type="html">Вот, наконец, кажется возник правильный подход к пониманию природы моделей вроде GPT-3 и разнообразного волшебства, с этим связанного:&lt;br /&gt;&lt;br /&gt;&lt;a href="https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators"&gt;www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Он говорит, что надо перестать думать про эти модели в терминах более старых AI-систем.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=dmm&amp;ditemid=64434" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2017-07-05:3235132:63041</id>
    <link rel="alternate" type="text/html" href="https://dmm.dreamwidth.org/63041.html"/>
    <link rel="self" type="text/xml" href="https://dmm.dreamwidth.org/data/atom/?itemid=63041"/>
    <title>OpenAI posted its approach to alignment research</title>
    <published>2022-08-25T13:43:11Z</published>
    <updated>2022-08-25T13:43:11Z</updated>
    <category term="technological singularity"/>
    <category term="openai codex"/>
    <category term="ai safety"/>
    <category term="understanding internals of ai"/>
    <category term="artificial intelligence"/>
    <dw:security>public</dw:security>
    <dw:reply-count>3</dw:reply-count>
    <content type="html">&lt;a href="https://openai.com/blog/our-approach-to-alignment-research/"&gt;openai.com/blog/our-approach-to-alignment-research/&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&amp;quot;Our approach to aligning AGI is empirical and iterative. We are  improving our AI systems&amp;rsquo; ability to learn from human feedback and to  assist humans at evaluating AI. Our goal is to build a sufficiently  aligned AI system that can help us solve all other alignment&amp;nbsp;problems.&amp;quot;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=dmm&amp;ditemid=63041" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2017-07-05:3235132:46792</id>
    <link rel="alternate" type="text/html" href="https://dmm.dreamwidth.org/46792.html"/>
    <link rel="self" type="text/xml" href="https://dmm.dreamwidth.org/data/atom/?itemid=46792"/>
    <title>Our understanding of major AI risks is exactly at zero</title>
    <published>2021-07-31T19:09:18Z</published>
    <updated>2021-07-31T19:48:52Z</updated>
    <category term="artificial intelligence"/>
    <category term="technological singularity"/>
    <category term="ai safety"/>
    <dw:security>public</dw:security>
    <dw:reply-count>1</dw:reply-count>
    <content type="html">&lt;a href="https://astralcodexten.substack.com/p/updated-look-at-long-term-ai-risks"&gt;astralcodexten.substack.com/p/updated-look-at-long-term-ai-risks&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;The main takeaway is that no scenario is considered as much more likely than others by the best experts, and they all look more or less equally likely except for the &amp;quot;scenario not listed here&amp;quot; (which is rated as somewhat more likely than the listed scenarios).&lt;br /&gt;&lt;br /&gt;Also people seem to be very optimistic for some reason (perhaps, they secretly believe in a benevolent G-d or benevolent aliens keeping an eye of us; otherwise their optimism is difficult to explain).&lt;br /&gt;&lt;p&gt;Scott Alexander summarizes the takeaways interesting for him as follows:&lt;br /&gt;&lt;br /&gt;======= QUOTE =======&lt;/p&gt;&lt;p&gt;1. Even people  working in the field of  aligning AIs mostly assign &amp;ldquo;low&amp;rdquo; probability  (~10%) that unaligned AI will  result in human extinction&lt;/p&gt;&lt;p&gt;2. While  some people are still concerned  about the superintelligence scenario,  concerns have diversified a lot  over the past few years&lt;/p&gt;&lt;p&gt;3. People working in the field don't have a specific unified picture of what will go wrong&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=dmm&amp;ditemid=46792" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2017-07-05:3235132:44860</id>
    <link rel="alternate" type="text/html" href="https://dmm.dreamwidth.org/44860.html"/>
    <link rel="self" type="text/xml" href="https://dmm.dreamwidth.org/data/atom/?itemid=44860"/>
    <title>GitHub Copilot ("we are getting there")</title>
    <published>2021-06-29T16:23:30Z</published>
    <updated>2021-06-29T16:26:46Z</updated>
    <category term="transformers"/>
    <category term="artificial intelligence"/>
    <category term="github copilot"/>
    <category term="openai codex"/>
    <category term="program synthesis"/>
    <category term="technological singularity"/>
    <dw:security>public</dw:security>
    <dw:reply-count>5</dw:reply-count>
    <content type="html">&lt;a href="https://github.blog/2021-06-29-introducing-github-copilot-ai-pair-programmer/"&gt;github.blog/2021-06-29-introducing-github-copilot-ai-pair-programmer/&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&amp;quot;Today, we are launching a technical preview of &lt;a href="http://copilot.github.com" target="_blank" rel="noopener"&gt;GitHub Copilot&lt;/a&gt;,  a new AI pair programmer that helps you write better code. GitHub  Copilot draws context from the code you&amp;rsquo;re working on, suggesting whole  lines or entire functions. It helps you quickly discover alternative  ways to solve problems, write tests, and explore new APIs without having  to tediously tailor a search for answers on the internet. As you type,  it adapts to the way you write code&amp;mdash;to help you complete your work  faster. &lt;p&gt;Developed in collaboration with OpenAI, GitHub Copilot is powered by  OpenAI Codex, a new AI system created by OpenAI. OpenAI Codex has broad  knowledge of how people use code and is significantly more capable than  GPT-3 in code generation, in part, because it was trained on a data set  that includes a much larger concentration of public source code. GitHub  Copilot works with a broad set of frameworks and languages, but this  technical preview works especially well for Python, JavaScript,  TypeScript, Ruby and Go.&amp;quot;&lt;br /&gt;&lt;br /&gt;If you are using Visual Studio Code often, it might make sense to try to sign-up for the technical preview phase...&lt;/p&gt;&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=dmm&amp;ditemid=44860" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
</feed>
