Vernor Vinge
Mar. 22nd, 2024 09:06 pm
en.wikipedia.org/wiki/Vernor_Vinge
Scott Alexander summarizes the takeaways he found most interesting as follows:
======= QUOTE =======
1. Even people working in the field of aligning AIs mostly assign “low” probability (~10%) that unaligned AI will result in human extinction
2. While some people are still concerned about the superintelligence scenario, concerns have diversified a lot over the past few years
3. People working in the field don't have a specific unified picture of what will go wrong
======= END QUOTE =======
"Developed in collaboration with OpenAI, GitHub Copilot is powered by OpenAI Codex, a new AI system created by OpenAI. OpenAI Codex has broad knowledge of how people use code and is significantly more capable than GPT-3 in code generation, in part because it was trained on a data set that includes a much larger concentration of public source code. GitHub Copilot works with a broad set of frameworks and languages, but this technical preview works especially well for Python, JavaScript, TypeScript, Ruby and Go."
If you use Visual Studio Code often, it might make sense to sign up for the technical preview phase...