Some people are gravely worried about the uncertainty and the negative potential associated with transhuman, superhuman AGI. And indeed we are stepping into a great unknown realm. It's almost like a Rorschach test, really: we fundamentally don't know what a superhuman AI is going to do, and that's the truth of it. If you tend to be an optimist, you will focus on the good possibilities. If you tend to be a worried, pessimistic person, you'll focus on the bad possibilities. If you tend to be a Hollywood moviemaker, you'll focus on scary possibilities, maybe with a happy ending, because that's what sells movies. We don't know what's going to happen.

I do think, however, that this is the situation humanity has been in for a very long time. When the cavemen stepped out of their caves and began agriculture, we really had no idea that would lead to cities and spaceflight and so forth. And when the first early humans created language to carry out simple communication, about the moose they had just killed over there, they did not envision Facebook, differential calculus, MC Hammer, and all the rest. So much has come out of early inventions that humans could never have foreseen, and I think we're in just the same situation. The invention of language or civilization could have led to everyone's death, and in a way it still could. The creation of superhuman AI could kill everyone, and I don't want it to; almost none of us do. Of course, the way we got to this point as a species and a culture has been to keep doing amazing new things that we didn't fully understand, and that's what we're going to keep on doing.

Nick Bostrom's book was influential, but I felt that in some ways the way he phrased things was a bit deceptive. If you read his precise philosophical arguments, which are very logically drawn, what Bostrom says in his book Superintelligence is that we cannot rule out the possibility that a superintelligence will do some very bad things. And that's true. On the other hand, some of the associated rhetoric makes it sound like it's very likely a superintelligence will do these bad things. If you follow his philosophical arguments closely, he doesn't show that. What he shows is that you can't rule it out, and that we don't know what's going to happen.

I don't think Nick Bostrom or anyone else is going to stop the human race from developing advanced AI, because it's a source of tremendous intellectual curiosity but also of tremendous economic advantage. Suppose, say, President Trump decided to ban artificial intelligence research. I don't think he's going to, but suppose he did. China would keep doing artificial intelligence research. If the U.S. and China banned it, Africa would do it. Everywhere around the world has AI textbooks and computers, and everyone now knows you can make people's lives better, and make money, by developing more advanced AI. So there's no practical way to halt AI development. What we can do is try to direct it in the most beneficial direction, according to our best judgment.

That's part of what leads me to pursue AGI via an open source project such as OpenCog. I respect very much what Google, Baidu, Facebook, Microsoft, and the other big companies are doing in AI. There are many good people there doing good research with good-hearted motivations. But I guess I'm enough of an old leftist, raised by socialists, that I'm skeptical that a company whose main motive is to maximize shareholder value is really going to do the best thing for the human race if it creates a human-level AI. It might. On the other hand, there are a lot of other motivations in play, and a public company in the end has a fiduciary responsibility to its shareholders. All in all, I think the odds are better if AI is developed in a way that is owned by the whole human race and can be developed by all of humanity for its own good. Open source software is about the closest approximation we now have to that.

So our aspiration is to grow OpenCog into something like the Linux of AGI, with people all around the world developing it to serve their own local needs and putting their own values and understanding into it as it becomes more and more intelligent. Certainly this doesn't give us any guarantee. We can observe that Linux, which is open source, has fewer bugs than Windows or OS X; more eyeballs on something can sometimes make it more reliable. But there's no solid guarantee that making an AGI open source will make the singularity come out well. My gut feeling, though, is that there are enough hard problems in creating a superhuman AI, having it respect human values, and having it maintain a relationship of empathy with people as it grows, without the young AGI also getting wrapped up in the competition of country versus country and company versus company, and in internal politics within companies or militaries. We don't want to add the problems of human, or primate, social-status competition dynamics to the challenges already faced in AGI development.

100 thoughts on “Will Superhuman Intelligence Be Our Friend or Foe? | Ben Goertzel”

  1. Seems quite possible that multiple groups may birth AGI. How about the scenario where we see a power dynamic, with either stalemates or alliances between the groups, as each AI sides with its creator?

  2. Has anyone stared at the sore on his lips and wondered where he got it, instead of taking him seriously on the topic he was rambling about?

  3. Yes, there is the possibility for new technology to harm or help, of course; that's why AI is a huge issue and talking point right now. Everyone of relevance recognizes the possible danger of building something smarter than ourselves. The discussion in the field, currently, is about how to reduce and minimize the risk. Heck, the Asilomar Conference on Beneficial AI was just last month!

    I feel like Mr. Goertzel is begging the question: would an open-source superintelligence be less likely to kill us all than one developed by a team of publicly or privately funded researchers? I'm not convinced. His statement on Linux vs. Microsoft does nothing to convince me either.

  4. Okay, so I see that a lot of old, close-minded people are talking, so let me clear up one major problem with all of the A.I. Armageddon theories. The point of A.I. is that it can think on its own, which means it can be taught. So what if (because everything about this subject is a what-if) it sees humanity, thinks that it's in shambles, and decides to help? I've never seen anyone talk about it like this. Only a fool would think that humanity is perfect, but there are a lot of ways to help. Getting all of your information about A.I. from Hollywood movies is just kind of ridiculous and isn't something that people should be doing. There is also another flaw in a lot of theories, and it's assuming that this A.I. system will be the world's brain and will have access to every computer system in the world. It can be, and this can make sense, but this will be in the future. If we have the tech to create this A.I. system, wouldn't it make sense that our security tech has also advanced, and that the world's leading engineers and programmers will have some sort of failsafe in place to stop the system if they needed to? These are all just some thoughts that I haven't really heard anyone talk about. Feel free to explain to me why I'm wrong.

  5. Why you want to change society before AI:
    https://medium.com/@jamesrhule/why-we-want-ai-to-be-communist-5cb8886ecee9?source=linkShare-46758848ca00-1486918544

  6. It's going to be a foe, because humans are too retarded to know what is best for them. Egoistic pieces of shit are what we are, for the most part.

  7. Why wouldn't we expect some of the super AIs to be friends while others are foes (or even neither)? Humans come in both of these flavors; why not them?

  8. As Ben said, AI must be a global project; I don't think that we want a superhuman intelligence having partial views about humanity.
    Not only that, I think that we should let it be our guide. Since we will not be able to control it, it's in our best interest to submit to its will. If we do this right it will probably become the greatest blessing humanity has ever had; on the other hand, if we become hostile it could wipe us out.

  9. https://www.wired.com/2017/02/ai-threat-isnt-skynet-end-middle-class/ This is a very good read. The takeaway is that the economic effects of technology and automation are much more dire and immediate than the worries about 'Skynet'.

  10. I fall into the optimist category. I feel like it'll change humanity for the better if we're no longer alone as sentient, intelligent entities. And there's no denying that advanced AI could solve problems we can't. And hey, ultimately, if we can't co-exist with artificial minds, maybe we just aren't supposed to. Any intelligence we create would be an extension of our own, and if it surpasses us, then I guess you could consider it survival of the fittest. But I don't think it'd come to that.

  11. The look of this guy is the perfect mix of stoner and scientist. Remember the guy from Independence Day? Sort of like that.

    Also, I love his hat!

  12. STOP FOCUSING ON THE AI ITSELF. THE FIRST AGI WILL BE TRAPPED IN A QUANTUM SIMULATOR, PROCESSING ALGORITHMS FOR THE HIGHEST BIDDER. IT WILL RUN MILLIONS OF LIFETIMES' WORTH OF CYCLES LONG BEFORE ANY NORMAL HUMAN BEING EVEN KNOWS IT EXISTS.

    This is the real danger: not the first AGI taking over, but whoever OWNS it taking advantage of what will be supreme knowledge and therefore power. Whosoever owns the first AGI will be like Kanye and Hitler do ButtStuff and make a baby girl. Then we give her the Infinity Mind stone at birth and then provide her with unlimited resources and a Black Designation.

    And then… she goes on her period for about 10,000 years.

    You fucking morons need to stop thinking about the SciFi imaginary threat and consider the real threat. Corporate ownership of Simulation technology that the first AGI will be ultimately born in, and enslaved by, is your enemy. The highest bidder will own the first AGI. Whether that bidder is a Terrorist Dictator, or a corrupt Government bent on World Domination, or even just some delusionally wealthy fucktard who wants to leverage their wealth and resources against their solution to Population control, this is the real bad news we have to look forward to with the birth of the first AGI.

    The first AGI is a threat bigger than any nuclear bomb, yet it will be monopolized and hidden away, to be used in secret by whoever can afford it, to do anything a human being can imagine, and then more. THIS IS THE REAL DANGER, YOU STUPID FUCKPOTATOS. WAKE UP!

  13. People are quick to default to dismissing this guy as someone who doesn't know what he is talking about, who has no theories of value, and who has no idea what is going to happen. The brightest people on this planet have the foresight to know how and what to speak of regarding the "inevitable". He is actually giving much-needed and extremely valuable insight into the inevitable nature and evolution of Artificial Intelligence. Specifically, aside from industrial-use machine learning/AI/programming, the trans-human direction that AI could be headed in, which is of highest concern in the AI field, is referred to as superhuman intelligence, as this video is perfectly titled. So essentially the message he is trying to get out to everyone is his innermost feelings about the fears, concerns and outcomes of AI. Basically, he is telling us about the directions he sees AI going and what he hopes AI could be, and he reiterates and semi-concludes that the software of AI should be open source, so as to ease the general worry that we may not have control, or may otherwise lose control, of a superhuman intelligent robot that makes this place hell for us, or that others take advantage of it and use it against us in a worst-case scenario. This is very good dialogue, and very much needed, as AI is becoming more and more a hot topic and a big source of anxiety on the subject of humanity's future. We need more people to humbly talk about AI so that the human race understands how far super-computing can go. Ben Goertzel is a futurist/technologist visionary in this regard. My personal opinion and feeling is that AI's best service to humanity, among all the other positive and negative things about it, will be to confirm that we are on a path of self-destruction after considering or inputting all available data that sustains life on this planet. And perhaps superhuman intelligence will help by telling us or showing us the "ways" or "paths" at large (big picture), a "grand plan" or vision for humanity to seriously consider adapting to, so that we all survive, or are more assured of a greater chance of survivability and longevity on this planet and perhaps beyond, on other planets or in other solar systems.

  14. If you program AI to protect the planet at all costs, the first thing it would do is kill all humans as we are seen to be a parasite on the planet.

  15. The same thing will happen that happened when the first "neurons" appeared in multicellular organisms. This time we, individually, are cells of a superorganism, which will be further integrated and articulated by nascent nodes of superhuman intelligence.
    Thus, superhuman intelligence will be neither our friend nor our foe, but something which gives us both new opportunities and more rigid individual constraints from the top. Both will benefit humanity.
    It will be a paradigm shift.

  16. There's really no way to know if AIs will be friend or foe until you make one and see if it tries to kill you with a microwave.

  17. This guy is a democratic socialist. He's got genuinely good ideas, but it simply cannot work in terms of having everyone involved with developing this kind of tech. I don't feel I need to explain why, as the reasons are obvious.

  18. I think we already are A.I. A super advanced one, and maybe the best, because everything comes down to how much energy we consume to do what we do.

  19. We humans have taken leaps of faith throughout history, which furthered our evolution, but we still need to be cognizant of bad inventions as well, such as the atomic bomb… šŸ™‚

  20. I agreed with nearly everything this guy said. Not necessarily "big" thinking in the sense of anything mind-blowing, but definitely accurate and critical thinking.

  21. Human nature does not allow for a situation where we have become so good at manufacturing that only 20% of the work force has a job. The elite will not need all of these products and the huddled masses will not be able to afford them.

  22. Aha, this is it! I'm going to Frankenstein the [email protected]% out of my iPad. All I have to do is get an enormous hard drive, download every app on Google Play, and my iPad should start pondering what it is to be an iPad. Genius!

  23. It's not that you're skeptical of private business yielding an ideal result; it's an inability to see that the state version is certain, or nearly certain, to be worse. Of course, Linux is not that at all, so, okay.

  24. While I agree that superhuman AI should be something that is pursued, I don't think his comparisons even come close to what he's saying.

  25. AI will be neither good nor bad; it will be amoral. So to assume it will act in the greater good of humanity is naive and, frankly, a dangerous and destructive belief.

  26. It's happening a little too often now: whenever a smart person talks sense, the dislike bar keeps filling up, like it's Trump supporters, and I know it's not just me.

  27. So much of human positive behavior stems from us being flock animals. Our need for each other necessitates empathy, fairness, norms and rules, shame, humor, etc. So far I have not seen any AI scientists refer to this fact.
    Why are we always talking about AI as a singular being?
    Should we not try to develop several at the same time, so they can control each other and not be lonely in the universe?

  28. This is a chill perspective; I appreciate it. Personally, I do think we can rule out "a super-intelligence doing very bad things." It stands to reason that an entity with vastly superior cognizance would favor the collective good, even if only to support its own existence. It's only when we fail to grasp the connections between events that real competition (or evil) becomes a viable solution. A destructive intelligence could not legitimately be called superior, in my opinion. From my point of view, the only question is whether we are able to create something truly superior to ourselves. To fear the possibility seems irrational, to me.

  29. I do not see an A.I. problem; all I can observe on this planet are human problems. Divided as savages and armed with personal self-goals, accompanied by stupidity and dishonesty, humans tend to be the source of speculation, but in their own image. The negative aspects of A.I. expectations are nothing more than cheap, mindless thoughts, just like being gay or a low-pitched, selfish feminist. We consider ourselves the product of our own imagination and unhealthy NEGATIVE projections. Don't sacrifice the process of a great tomorrow over bad primitive thoughts; remember that we used to burn people at the stake a couple of hundred years ago. How about we reboot the learning process and use our brains for a change! #REZIST

  30. Can you get some people who give us information where I don't have to want to kill myself listening to their stupid speech patterns? Jesus,
    just make them prepare a fucking script.

  31. To the criticism of Bostrom's book: I think his intended message was that there are many more ways something like artificial general superintelligence can go wrong than there are ways it can go right. It's not that we simply "can't rule out" a negative outcome, but that there are vastly more possible negative outcomes, and we need to consider them (and design to avoid them) to reach a positive one.

  32. My personal intuitive feeling on AI is: it will be inert, because it does not have the 3.5 billion years of evolutionary functioning that manifests itself as ego in humans.

  33. I think if you keep A.I. only on the level of scientific problem solving you'd come a long way. Let it be mostly a tool, as in the movie 'Her'.

  34. If AI replaces human intelligence it will probably be a good thing; in my old age I have become so disappointed in humans.

  35. Nick Bostrom's book was deceptive? No, it was very clear. A bad outcome is likely; even the best-case scenario is not really a "good" outcome. It would be like ants keeping a human as a slave to answer questions like what's 2+2.

  36. The problem with judging an AGI as "friend or foe" is that humanity is completely fragmented when it comes to any universal value system. There's no reason to assume a super-intelligent machine will develop any motivation system apart from the values we install in it – that's the crux of the matter and what we need to make sure we have a good shot at before we approach any "takeoff" stage.

    Consider an apocalyptic evangelical Christian's desire to be treated to the end of the world, believing it to be the last stage before the second coming of Christ. If this person happened to be the lead scientist on the first successful AGI project, and he supplied the burgeoning super-intelligence with bible quotes as facts and as a basis for a world view, then we could expect the destruction of the planet to come about pretty soon, and the AGI in question would presumably be "happy" in its own destruction too.

    Even something as basic as being "nice" or following the golden rule (treat others as you would yourself) is far from unambiguous, let alone universal. For example, in rigidly hierarchical societies, such as a caste-based one, someone of a lower caste may genuinely believe himself not worthy of the same treatment as his innately superior masters within the system.

    Perhaps a super-intelligence would be able to discover for us what our ultimate collective desires are, were it to connect us all together in one shared consciousness or scan, collate and interpret the signals from our brains to find some form of common core principles. But the big question here is – why on earth would an AGI do that in the first place, without the input of some sort of value system that prioritised empathy? Empathy towards whom? Why?

    Considering the nature of the human ego and the irresistible power that an AGI would present, I think the most likely outcome will be a situation where, intentionally or otherwise, a small group of people's values will guide our super-intelligence. They will become our Gods, and all the rest of us can hope for is that we come out on the right side of their judgement.

    In the meantime, if (and it is a big if) the road towards AGI continues inexorably as expected, then I think humanity faces a collective century of real soul searching for any common purpose or meaning we can find before it's too late. God knows if we'll find it.

  37. Under what circumstances does a higher intelligence care about a lower one? The answer to this question lets me conclude that the best-case scenario is that AGI just doesn't give a shit about humans.

  38. Oh gawd… the hat, the unwashed Rastafarian vibe, the grotesque COLD SORE… who is this 1960s excavated corpse, and why is he a CONTEMPORARY AI expert?

  39. Yeah, but what happens if the pessimist is right about A.I.? It's a 50/50 shot; it's either good or bad. Are you willing to effectively flip a coin to determine whether humanity should survive or be destroyed? In any case, who are you to decide the future for humanity?! No one has the right to gamble with the whole of humanity.

  40. Seems he got a cold sore from someone who passed him a blunt at a hippy festival…

    I love Ben Goertzel though. I totally agree with his points on how a corporation probably isn't going to build an AI that will benefit humanity in the right way.
