Superintelligence: Paths, Dangers, Strategies
Original Title: Superintelligence: Paths, Dangers, Strategies
ISBN: 0199678111 (ISBN13: 9780199678112)
Edition Language: English
Author: Nick Bostrom
Format: Hardcover, 328 pages
Rating: 3.87 | 11,970 ratings | 1,189 reviews

Book Details: Superintelligence: Paths, Dangers, Strategies
Title: Superintelligence: Paths, Dangers, Strategies
Author: Nick Bostrom
Format: Hardcover
Edition: Anniversary Edition
Pages: 328
Published: September 3rd 2014 by Oxford University Press, USA (first published July 3rd 2014)
Categories: Science, Nonfiction, Philosophy, Technology, Artificial Intelligence, Computer Science, Psychology
Description of Superintelligence: Paths, Dangers, Strategies
Superintelligence asks the questions: what happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful--possibly beyond our control. Just as the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.
But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
Rating of Superintelligence: Paths, Dangers, Strategies
Ratings: 3.87 from 11,970 users | 1,189 reviews

Commentary on Superintelligence: Paths, Dangers, Strategies
This is at the same time a difficult and horrifying book to read. The progress that we may or will see from "dumb" machines into super-intelligent entities can be daunting to take in and absorb, and the consequences can range from the extinction of human life all the way to a comfortable and effortlessly meaningful one. The first issue with the book is the complexity. It is not only the complexity of the scientific concepts included; one can read the book without necessarily fully understanding

An extraordinary achievement: Nick Bostrom takes a topic as intrinsically gripping as the end of human history, if not the world, and manages to make it stultifyingly boring.
As a software developer, I've cared very little for artificial intelligence (AI) in the past. The programs I develop professionally have nothing to do with the subject. They're as dumb as can be, only following strict orders (that is, rather simple algorithms). Privately, I wrote a few AI test programs (with more or less success) and read a few articles in blogs and magazines (with more or less interest). By and large, I considered AI as not being relevant for me. In March 2016 AlphaGo was

The most terrifying book I've ever read. Dense, but brilliant.
A few thoughts:
1. Very difficult topic to write about. There's so much uncertainty involved that it's almost impossible to even agree on the basic assumptions of the book.
2. The writing is incredibly thorough, given the assumptions, but also hard to understand. You need to follow the arguments closely and reread sections to fully understand their implications.
Overall, an interesting and thought-provoking book, even though the basic assumptions are debatable.
P.S. (6 months later) Looking back on this
There's no way around it: a super-intelligent AI is a threat. We can safely assume that an AI smarter than a human, if developed, would accelerate its own development, getting smarter at a rate faster than anything we'd ever seen. In just a few cycles of self-improvement it would spiral out of control. Trying to fight, control, or hijack it would be totally useless; for a comparison, try picturing an ant trying to outsmart a human being (a laughable attempt, at best). But why is a
Is the surface of our planet -- and maybe every planet we can get our hands on -- going to be carpeted in paper clips (and paper clip factories) by a well-intentioned but misguided artificial intelligence (AI) that ultimately cannibalizes everything in sight, including us, in single-minded pursuit of a seemingly innocuous goal? Nick Bostrom, head of Oxford's Future of Humanity Institute, thinks that we can't guarantee it _won't_ happen, and it worries him. It doesn't require Skynet and