In this video, I react to Sam Harris discussing the possibility of losing control over artificial intelligence. He argues that we must be very careful about how we build AI and ensure we always retain a way to shut it down if necessary.
Artificial General Intelligence (AGI) and superintelligent AI may be issues we have to confront in the near future.
Timestamps:
00:00 – Sam Harris’ TED Talk on the dangers of superintelligent AI
00:47 – The dangers of artificial intelligence
01:28 – The emotional response to AI
02:11 – The dangers of technology
02:52 – The inevitability of continued innovation
03:42 – The point of no return for intelligent machines
04:20 – The intelligence explosion
05:05 – The ant analogy: machines stepping over us
05:52 – The value of taking this thought seriously
06:36 – The impossibility of containing a superintelligence
07:24 – The inevitability of building general intelligence into machines
08:07 – The value of intelligence
08:51 – The train of artificial intelligence
09:35 – The summit of intelligence
10:18 – The spectrum of intelligence
11:07 – The speed of a superintelligence
11:48 – The risks of superintelligence
12:28 – The end of human work
13:04 – The inequality of a machine-dominated world
13:45 – The geopolitical implications of superintelligent AI
14:32 – The risks of a superintelligence
15:14 – The reassuring argument that AI is a long way off
15:54 – The non-sequitur of referencing the time horizon for developing AI
16:40 – The emotional response to AI, revisited
17:21 – The mothership arrives in 50 years
17:59 – The implications of a completed neuroscience
18:43 – The need for a Manhattan Project on AI
19:16 – The need to get the initial conditions right for superintelligence
19:58 – The risks of building a god
20:50 – The inevitability of reaching artificial general intelligence