Artificial intelligence. Fears of an AI pioneer.
Author:

Abstract:
From the enraged robots in the 1920 play R.U.R. to the homicidal computer H.A.L. in 2001: A Space Odyssey, science fiction writers have embraced the dark side of artificial intelligence (AI) ever since the concept entered our collective imagination. Sluggish progress in AI research, especially during the “AI winter” of the 1970s and 1980s, made such worries seem far-fetched. But recent breakthroughs in machine learning and vast improvements in computational power have brought a flood of research funding, along with fresh concerns about where AI may lead us. One researcher now speaking up is Stuart Russell, a computer scientist at the University of California, Berkeley, who with Peter Norvig, director of research at Google, wrote the premier AI textbook, Artificial Intelligence: A Modern Approach, now in its third edition. Last year, Russell joined the Centre for the Study of Existential Risk at Cambridge University in the United Kingdom as an AI expert focusing on “risks that could lead to human extinction.” Among his chief concerns, which he aired at an April meeting in Geneva, Switzerland, run by the United Nations, is the danger of putting military drones and weaponry under the full control of AI systems. This interview has been edited for clarity and brevity.
Year of Publication: 2015
Journal: Science (New York, N.Y.)
Volume: 349
Issue: 6245
Number of Pages: 252
Date Published: 2015
ISSN Number: 0036-8075
URL: https://www.sciencemag.org/cgi/pmidlookup?view=long&pmid=26185241
DOI: 10.1126/science.349.6245.252
Short Title: Science