AI safety and transparency
Feb. 7th, 2021 07:41 pm

I have been reading quite a bit of (new to me) material on AI safety in recent weeks.
I noticed a couple of things: there are very strong new young people in the field, and there has been quite a bit of technical progress.
Eight years ago the field looked rather hopeless: it mostly consisted of disagreements, and there did not seem to be any routes to technical progress; it was all just talk. So the changes for the better are impressive.
One particularly important theme is work towards better understanding, transparency, and interpretability of neural-like models.
I'll link to the one paper that seemed most interesting and eloquent in this regard, and I'll eventually add more material in the comments.
Evan Hubinger (Nov 2019), "Chris Olah’s views on AGI safety": www.alignmentforum.org/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety