dmm: (Default)
github.com/google/learned_optimization - "Meta-learning optimizers and more with JAX"

This is used by various interesting papers, including the famous "persistent evolution strategies" paper (which I don't understand) and the tempting "Gradients are Not All You Need" paper: arxiv.org/abs/2111.05803

Moreover, it is used by the super-interesting, must-read paper "Practical tradeoffs between memory, compute, and performance in learned optimizers" (arxiv.org/abs/2203.11860), which is being published at the Conference on Lifelong Learning Agents (CoLLAs 2022, Aug 18-24): lifelong-ml.cc/
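
(For a sense of what this family of methods computes, here is a minimal sketch of the vanilla antithetic evolution-strategies gradient estimator which "persistent evolution strategies" extends. This is my own toy JAX illustration, not the learned_optimization API:)

# Toy antithetic evolution strategies (ES) gradient estimate in JAX.
# Illustration only; not the learned_optimization API.
import jax
import jax.numpy as jnp

def es_grad(loss_fn, theta, key, sigma=0.1, n_pairs=64):
    """Estimate d loss / d theta from function values alone (no backprop)."""
    eps = jax.random.normal(key, (n_pairs,) + theta.shape)  # random perturbations
    loss_plus = jax.vmap(lambda e: loss_fn(theta + sigma * e))(eps)
    loss_minus = jax.vmap(lambda e: loss_fn(theta - sigma * e))(eps)
    # Each antithetic pair contributes ((L+ - L-) / (2 sigma)) * eps.
    weights = (loss_plus - loss_minus) / (2.0 * sigma)
    return jnp.tensordot(weights, eps, axes=1) / n_pairs

# On a quadratic, the estimate approaches the true gradient 2 * theta:
theta = jnp.array([1.0, -2.0])
g = es_grad(lambda t: jnp.sum(t ** 2), theta, jax.random.PRNGKey(0))
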
dmm: (Default)
1st International Conference on Automated Machine Learning: automl.cc/

It follows ICML 2022 🇺🇦 icml.cc/ (one can attend virtually as well).

Neural Architecture Search is prominent and includes a competition: sites.google.com/view/zero-cost-nas-competition/home
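
(For a rough idea of what "zero-cost" means here: one scores an untrained architecture from a single minibatch, e.g. by the gradient norm at initialization, instead of training it. A minimal JAX sketch, my own illustration and not the competition's actual interface:)

# Toy "grad-norm" zero-cost NAS proxy in JAX: score an untrained network
# from one minibatch instead of training it. Illustration only.
import jax
import jax.numpy as jnp

def grad_norm_score(apply_fn, params, x, y):
    """A simple zero-cost proxy: gradient norm of the loss at initialization."""
    def loss(p):
        preds = apply_fn(p, x)
        return jnp.mean((preds - y) ** 2)  # squared error, for simplicity
    grads = jax.grad(loss)(params)
    return sum(jnp.sum(g ** 2) for g in jax.tree_util.tree_leaves(grads)) ** 0.5

# Example with a trivial one-layer "architecture" (params = weight matrix):
key = jax.random.PRNGKey(0)
w = 0.1 * jax.random.normal(key, (8, 1))
x = jax.random.normal(key, (32, 8))
y = jnp.ones((32, 1))
score = grad_norm_score(lambda p, inp: inp @ p, w, x, y)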

The most notable keynote is by Jeff Clune, "AI-generating algorithms: the fastest path to AGI?"

dmm: (Default)
Yesterday, today, tomorrow (a 12-hour difference between Singapore and the US East Coast):

math.nie.edu.sg/isdt09/

math.nie.edu.sg/isdt09/programme/9th_ISDT_Program1.pdf

Free Zoom access; I am listening to some of this right now... Very nostalgic...

But... it's just too difficult (and they don't even seem to record the talks).

In any case, that's the kind of "intelligent assistant" I want - a piece of software with which I could discuss a talk and a slide deck like one of those, and which would help me to understand the details of what's going on.
dmm: (Default)
A very intriguing text: medium.com/@jcbaillie/beyond-the-symbolic-vs-non-symbolic-ai-debate-96dffce7270c

🇺🇦 As for the war, Puilo should personally inspect the situation in Chornobaivka 🇺🇦

dmm: (Default)
Anthropic is an organization created about a year ago by former OpenAI people who (I believe) were unhappy about the direction OpenAI was taking.

"Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems. Large, general systems of today can have significant benefits, but can also be unpredictable, unreliable, and opaque: our goal is to make progress on these issues."

They have just published their first major paper aimed at a better understanding of Transformers. I am going to accumulate various links in the comments.
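
(For a sense of the object being reverse-engineered: the basic unit that this line of interpretability work decomposes is the attention head. A generic textbook sketch in JAX, not code from the Anthropic paper:)

# One causal self-attention head in JAX -- the building block that
# transformer-interpretability work tries to reverse-engineer.
import jax
import jax.numpy as jnp

def attention_head(x, w_q, w_k, w_v, w_o):
    """x has shape (seq_len, d_model); the w_* matrices project to/from head space."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v          # project into head space
    scores = q @ k.T / jnp.sqrt(q.shape[-1])
    # Causal mask: each token attends only to itself and earlier tokens.
    scores = jnp.where(jnp.tril(jnp.ones_like(scores)) == 1, scores, -jnp.inf)
    weights = jax.nn.softmax(scores, axis=-1)    # the "attention pattern"
    return (weights @ v) @ w_o                   # project back to d_model

# Tiny usage example with random weights:
ks = jax.random.split(jax.random.PRNGKey(0), 5)
x = jax.random.normal(ks[0], (5, 16))            # seq_len=5, d_model=16
wq, wk, wv = (jax.random.normal(k, (16, 4)) for k in ks[1:4])
wo = jax.random.normal(ks[4], (4, 16))
out = attention_head(x, wq, wk, wv, wo)          # shape (5, 16)
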
dmm: (Default)
agi-conference.org/

(Registration for online participation is free; there is a small fee for attending in person.)

This is not a "first-tier" AI conference, but this year it has a larger number of interesting "external" keynote speakers than usual (perhaps people are starting to feel that "artificial general intelligence" is no longer a remote, far-in-the-future topic, but something becoming acutely relevant): Yoshua Bengio, Francois Chollet, Tomas Mikolov, Josef Urban.

dmm: (Default)
astralcodexten.substack.com/p/updated-look-at-long-term-ai-risks

The main takeaway is that no scenario is considered much more likely than the others by the best experts; they all look roughly equally likely, except for "scenario not listed here," which is rated as somewhat more likely than any of the listed scenarios.

Also, people seem to be very optimistic for some reason (perhaps they secretly believe in a benevolent G-d or benevolent aliens keeping an eye on us; otherwise their optimism is difficult to explain).

Scott Alexander summarizes the takeaways most interesting to him as follows:

======= QUOTE =======

1. Even people working in the field of aligning AIs mostly assign “low” probability (~10%) that unaligned AI will result in human extinction

2. While some people are still concerned about the superintelligence scenario, concerns have diversified a lot over the past few years

3. People working in the field don't have a specific unified picture of what will go wrong

======= END QUOTE =======

dmm: (Default)
github.blog/2021-06-29-introducing-github-copilot-ai-pair-programmer/

"Today, we are launching a technical preview of GitHub Copilot, a new AI pair programmer that helps you write better code. GitHub Copilot draws context from the code you’re working on, suggesting whole lines or entire functions. It helps you quickly discover alternative ways to solve problems, write tests, and explore new APIs without having to tediously tailor a search for answers on the internet. As you type, it adapts to the way you write code—to help you complete your work faster.

Developed in collaboration with OpenAI, GitHub Copilot is powered by OpenAI Codex, a new AI system created by OpenAI. OpenAI Codex has broad knowledge of how people use code and is significantly more capable than GPT-3 in code generation, in part, because it was trained on a data set that includes a much larger concentration of public source code. GitHub Copilot works with a broad set of frameworks and languages, but this technical preview works especially well for Python, JavaScript, TypeScript, Ruby and Go."

If you are using Visual Studio Code often, it might make sense to try to sign up for the technical preview phase...
