no subject
Date: 2020-03-16 06:49 am (UTC)
"The question of whether and why we should create general AI is a complicated one and
is the focus of many articles [143, 41, 4, 18, 15]. I will not delve into that issue here as it is better served when it is the sole issue of focus. However, the AI-GA path introduces its own unique set of ethical issues that I do want to mention here."
[...]
"It is fair to ask why should I write this paper if I think AI-GA research is more dangerous, as I am attempting to inform people about it potentially being a faster path to general AI and advocating that more people work on this path. One reason is I believe that, on balance, technological advances produce more benefit than harm. That said, this technology is very different and could prove an exception to the rule. A second reason is because I think society is better off knowing about this path and its potential, including its risks and downsides. We might therefore be better prepared to maximize the positive consequences of the technology while working hard to minimize the risks and negative outcomes. Additionally, I find it hard to imagine that, if this is the fastest path to AI, then society will not pursue it. I struggle to think of powerful technologies humanity has not invented soon after it had the capability to do so. Thus, if it is inevitable, then we should be aware of the risks and begin organizing ourselves in a way to minimize those risks. Very intelligent people disagree with my conclusion to make knowledge of this technology public. I respect their opinions and have discussed this issue with them at length. It was not an easy decision for me to make. But ultimately I feel that it is a service to society to make these issues public rather than keep them the secret knowledge of a few experts."
"There is another ethical concern, although many will find it incredible and dismiss it as the realm of fantasy or science fiction. We do not know how physical matter such as atoms can produce feelings and sensations like pain, pleasure, or the taste of chocolate, which philosophers call qualia. While some disagree, I think we have no good reason to believe that qualia will not emerge at some point in artificially intelligent agents once they are complex enough. A simple thought experiment makes the point: imagine if the mimic path enabled us to simulate an entire human brain and body, down to each subatomic particle. It seems likely to me that such a simulation would feel the same sensations as its real-world counterpart."
"Recognizing if and when artificial agents are feeling pain, pleasure, and other qualia that are worthy of our ethical considerations is an important subject that we will have to come to terms with in the future. However, that issue is not specific to the method in which AI is produced, and therefore is not unique to the AI-GA path. There is an AI-GA-specific consideration on this front, however. On Earth, there has been untold amounts of suffering produced in animals en route to the production of general AI. Is it ethical to create algorithms in which such suffering occurs if it is essential, or helpful, to produce AI? Should we ban research into algorithms that create such suffering in order to focus energy on creating AI-GAs that do not involve suffering? How do we balance the benefits to humans and the planet of having general AI vs. the suffering of virtual agents? These are all questions we will have to deal with as research progresses on AI-GAs. They are related to the general question of ethics for artificial agents, but have unique dimensions worthy of specific consideration."
"Some of these ideas will seem fantastical to many researchers. In fact, it is risky for my career to raise them. However, I feel obligated to let society and our community know that I consider some of these seemingly fantastical outcomes possible enough to merit consideration. For example, even if there is a small chance that we create dangerous AI or untold suffering, the costs are so great that we should discuss that possibility. As an analogy, if there were a 1% chance that a civilization-ending asteroid could hit Earth in a decade or ten, we would be foolish not to begin discussing how to track it and prevent that catastrophe."
"We should keep in mind the grandeur of the task we are discussing, which is nothing short than the creation of an artificial intelligence smarter than humans. If we succeed, we arguably have also created life itself, by some definitions. We do not know if that intelligence will feel. We do not know what its values might be. We do not know what its intentions towards us may be. We might have an educated guess, but any student of history would recognize that it would be the height of hubris to assume we know with certainty exactly what general AI will be like. Thus, it is important to encourage, instead of silence, a discussion of the risks and ethical implications of creating general artificial intelligence."