Describe why Bill Joy thinks GNR technologies (genetics, nanotechnology and robotics) are especially dangerous
Requirements for Essay:
around 1700-1800 words
Use a citation system (Chicago), and use that system consistently throughout the essay.
The essay should include reasons for and against the positions under consideration, including critical reflection on those reasons.
In his article "Why the Future Doesn't Need Us," Bill Joy argues that GNR technologies (genetics, nanotechnology and robotics) are likely, in the twenty-first century, to yield weapons of mass destruction even more dangerous than those produced by the NBC (nuclear, biological and chemical) technologies of the twentieth century. Describe why Joy thinks GNR technologies are especially dangerous, describe his proposed response to the threats they pose, and critically assess whether it is a response that a well-informed person should support, or whether some other response is to be preferred.
This topic refers to the Control Problem.
The following 5 sources may be helpful.
1. Nick Bostrom's book Superintelligence may be helpful for the essay.
2. Stuart Russell's book Human Compatible: Artificial Intelligence and the Problem of Control. Chapter 6 of this book may be helpful.
3. This article will be helpful:
J. Hughes, "Global Technology Regulation and Potentially Apocalyptic Technological Threats"
Published in Nanoethics: The Ethical and Social Implications of Nanotechnology (Wiley, 2007)
In this article, Hughes argues, in response to Bill Joy, that it is not practical to relinquish GNR technologies, and that we should instead create powerful international regulatory bodies to police their safe use.
4. "The Coming Technological Singularity", Vernor Vinge (1993). This is the article in which Vinge introduced the concept of a 'technological singularity'.
5. George Church and Ed Regis, Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves (2012), Epilogue.
This chapter includes a discussion of the dangers posed by synthetic biology and the difficulty of regulating it.
The following 7 articles must be used as references in the essay. In addition to these seven sources, you should also use other sources that you find.
Author(s) and Source:
Bill Joy, 2000. "Why the Future Doesn't Need Us." Research Technology Management, Vol. 43, No. 4
Abstract: Why the future doesn’t need us. Our most powerful 21st-century technologies – robotics, genetic engineering, and nanotech – are threatening to make humans an endangered species
Usage: This article explains Bill Joy's views and serves as the foundation of the essay. Since the question concerns Joy's assessment of GNR technologies and his proposed responses, this article provides the main exposition for the further analysis.
Author(s) and Source:
John Weckert. Source: Metaphilosophy, Vol. 33, No. 3 (April 2002)
"Lilliputian Computer Ethics"
Abstract: This essay considers some ethical issues of nanotechnology and quantum computing, particularly the issue of privacy, and questions related to artificial intelligence, implants, and virtual reality. It then examines the claim that research in this field should be halted.
Usage: The content of this article lends depth to the discussion of Bill Joy's relinquishment argument; its claims will support my argument.
Author(s) and Source:
John G. Messerly, 2003. "I'm glad the future doesn't need us: a critique of Joy's pessimistic futurism." ACM SIGCAS Computers and Society, Vol. 32, No. 6
Abstract: In his well-known piece, “Why the future doesn’t need us,” Bill Joy argues that 21st century technologies—genetic engineering, robotics, and nanotechnology (GNR)—will extinguish human beings as we now know them, a prospect he finds deeply disturbing. I find his arguments deeply flawed and critique each of them in turn.
Usage: Messerly provides a rebuttal of Joy's point of view. This will contribute important points to the counterargument.
Title: Technology and Its Discontents: On the Verge of the Posthuman
Author: Joel Dinerstein
Year of publication: 2006
Journal: American Quarterly
Abstract: “Of power and revenge, the nation’s abstract sense of well-being, its arrogant sense of superiority, and its righteous justification for global dominance. In the introduction to Technological Visions, Marita Sturken and Douglas Thomas declare that “in the popular imagination, technology is often synonymous with the future,” but it is more accurate to say that technology is synonymous with faith in the future—both in the future as a better world and as one in which the United States bestrides the globe as a colossus.”
It supports Joy's standpoint and gives good insight into the idea of us becoming 'posthuman'.
"Interstellar Communication. IX. Message Decontamination Is Impossible"
This is an interesting article, relevant to the control problem as it relates to AI.
"Intelligence Explosion and Machine Ethics", by Luke Muehlhauser and Louie Helm
Abstract: Many researchers have argued that a self-improving artificial intelligence (AI) could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals. If so, and if the AI’s goals differ from ours, then this could be disastrous for humans. One proposed solution is to program the AI’s goal system to want what we want before the AI self-improves beyond our capacity to control it. Unfortunately, it is difficult to specify what we want. After clarifying what we mean by “intelligence,” we offer a series of “intuition pumps” from the field of moral philosophy for our conclusion that human values are complex and difficult to specify. We then survey the evidence from the psychology of motivation, moral psychology, and neuroeconomics that supports our position. We conclude by recommending ideal preference theories of value as a promising approach for developing a machine ethics suitable for navigating an intelligence explosion or “technological singularity.”
McCauley (2007). "AI Armageddon and the Three Laws of Robotics." Ethics and Information Technology 9, 153–164
Abstract: After 50 years, the fields of artificial intelligence and robotics capture the imagination of the general public while, at the same time, engendering a great deal of fear and skepticism. Isaac Asimov recognized this deep-seated misconception of technology and created the Three Laws of Robotics. The first part of this paper examines the underlying fear of intelligent robots, revisits Asimov’s response, and reports on some current opinions on the use of the Three Laws by practitioners. Finally, an argument against robotic rebellion is made along with a call for personal responsibility and suggestions for implementing safety constraints in intelligent robots.