Today's leading minds talk AI with host Byron Reese.

About this Episode

Episode 80 of Voices in AI features host Byron Reese and Charlie Burgoyne discussing the difficulty of defining AI and how computer intelligence and human intelligence intersect and differ.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I'm Byron Reese. Today my guest is Charlie Burgoyne. He is the founder and CEO of Valkyrie Intelligence, a consulting firm with domain expertise in applied science and strategy. He's also a general partner for Valkyrie Signals, an AI-driven hedge fund based in Austin, as well as the managing partner for Valkyrie Labs, an AI credit company. Charlie holds a master's degree in theoretical physics from Georgetown University and a bachelor's in nuclear physics from George Washington University. I had the occasion to meet Charlie when we shared a stage talking about AI, and about 30 seconds into my conversation with him I said, "We've got to get this guy on the show." So strap in; it should be a fun episode. Welcome to the show, Charlie.

Charlie Burgoyne: Thanks so much for having me, Byron. I'm excited to talk to you today.

Let's start with [this]: maybe re-enact a little bit of our conversation when we first met. Tell me how you think of artificial intelligence. What is it? What is artificial about it, and what is intelligent about it?

Sure. The further I get into this field, I start thinking about AI with two different definitions; it's a servant with two masters. It has its private-sector, applied, narrow-band applications, where AI is really all about understanding patterns that we perform and capitalize on every day, and automating those: things like approving time cards and making selections within a retail environment.
And that's really where the real value of AI is right now in the market, and [there's] a lot of people in that space who are developing really cool algorithms that capitalize on the patterns that exist, and largely lie dormant, in data. In that definition, intelligence is really about the cycles we use within a cognitive capability to instrument our life, and it's artificial in that we don't need an organic brain to do it. Now the AI that I'm obsessed with from a research standpoint (as a lot of academics are, and I know you are as well, Byron) is defined much more around the nature of intelligence itself, because in order to artificially create something, we must first understand it in its primitive, unadulterated state. And I think that's where the bulk of the really fascinating research in this domain is going: just understanding what intelligence is, in and of itself.

Now I'll come straight to the interesting part of this conversation, which is this: I've had not quite a hundred guests on the show, and I can count on one hand the number who think it may not be possible to build a general intelligence. From our conversation, you are convinced that we cannot do it. Is that true? And if so, why?

Yes… The short answer is I am not convinced we can create a generalized intelligence, and that view has become more and more solidified the deeper I go into research and familiarity with the field. If you really unpack intelligent decision-making, it's actually much more complicated than a simple collection of gates, a simple collection of empirically driven singular decisions, right? A lot of the neural network scientists would have us believe that all decisions are really the right permutation of weighted neurons interacting with other layers of weighted neurons.
From what I've been able to tell so far with our research, either that is not getting us toward the goal of creating a truly intelligent entity, or it's doing the best it can within the confines of the mechanics we have at our disposal now. In other words, I'm not sure whether the lack of progress toward a true generalized intelligence is due to the fact that (a) the digital environment in which we have tried to create said artificial intelligence is unamenable to that objective, or (b) the nuances inherent to intelligence are things I'm not positive we understand well enough to model, or ever could build a way of modeling.

I'll give you a quick example. Think of any science fiction movie that encapsulates the nature of what AI will eventually be, whether it's Her, or Ex Machina, or Skynet, you name it. There are a couple of big leaps that get glossed over in all science fiction literature and film, and those leaps are really around things like motivation. What motivates an AI? What, truly, at its core, motivates an AI like the one in Ex Machina to leave her creator and to enter into the world and explore? How is that intelligence derived from innate creativity? How are they designing things? How are they thinking about drawings, and how are they identifying clothing that they need to put on? All these different nuances are intelligently derived behaviors. We really don't have a good understanding of that, and we're not really making progress toward an understanding of it, because we've been distracted for the last 20 years with research in fields of computer science that aren't really that closely related to understanding those core drivers.

So when you say a sentence like "I don't know if we'll ever be able to make a general intelligence," ever is a long time. Do you mean that literally?
Tell me a scenario in which it is literally impossible: it can't be done, even if you came across a genie that could grant your wish. Maybe like traveling back in time, it just may not be possible. Do you mean it "may not" be possible in that sense? Or do you just mean on a time horizon that is meaningful to humans?

I think it's on the spectrum between the two, but I think it leans closer toward "not ever possible under any condition." I was at a conference recently and I made this claim, which, admittedly, like any claim on this particular question, is based on intuition and experience, which are totally fungible assets. But I made this claim that I didn't think it was ever possible, and somebody in the audience asked me, "Well, have you considered meditating to create a synthetic AI?" And the audience laughed, and I stopped and said, "You know, that's actually not the worst idea I've been exposed to." That's not the worst potential solution for understanding intelligence: to try to reverse-engineer my own brain with as few distractions from its normal working mechanics as possible. That may very easily be a credible aid to understanding how the brain works.

If we think about gravity, gravity is not a bad analog. Gravity is this force that everybody and their mother past fifth grade understands: you drop an apple, you know which direction it's going to go. Not only that, but as you get experienced you can predict how fast it will fall, right? If you were to see a simulation drop an apple and it takes twelve seconds to hit the ground, you'd know that was wrong; even if the rest of the vector was correct, the scalar is off a little bit. Right? The reality is that we can't create an artificial gravity environment, right? We can create forces that simulate gravity.
Centrifugal force is not a bad way of replicating gravity, but we don't actually know enough about the underlying mechanics that govern gravity to create an artificial gravity using the same mechanics that operate in organic gravity. In fact, it was only a year and a half ago, closer to two years now, that the Nobel Prize in Physics was awarded to the individuals who detected gravitational waves, putting to rest an argument that has been going on since Einstein, truly. So I guess my point is that we haven't really made progress in understanding the underlying mechanics of intelligence, and every step we've taken has proven to be extremely valuable in the industrial sector but has actually opened up more and more unknowns about the actual inner workings of intelligence. If I had to bet today, not only is the time horizon on a true artificial intelligence extremely long-tailed, but I actually think it's not impossible that it's completely impossible altogether.
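As an editorial aside: the twelve-second apple in Charlie's gravity analogy is easy to check with basic kinematics. The sketch below is illustrative only; the 2 m drop height and the weakened-gravity value are assumptions, not figures from the conversation. For an object dropped from rest at height h under constant acceleration g, the fall time is t = sqrt(2h / g).

```python
import math

def fall_time(height_m: float, g: float = 9.81) -> float:
    """Seconds for an object dropped from rest to fall height_m meters,
    ignoring air resistance: h = 0.5 * g * t**2  =>  t = sqrt(2h / g)."""
    return math.sqrt(2.0 * height_m / g)

# An apple dropped from about 2 m on Earth lands in roughly 0.64 s.
print(f"Earth gravity: {fall_time(2.0):.2f} s")

# For that same drop to take ~12 s, gravity would need to be about
# 0.028 m/s^2, hundreds of times weaker than Earth's, which is why a
# twelve-second fall immediately "feels" wrong to any fifth grader.
print(f"g = 0.028 m/s^2: {fall_time(2.0, g=0.028):.1f} s")
```

This is the intuition Charlie describes: we predict the scalar (fall time) accurately from experience without knowing anything about gravity's underlying mechanics.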
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.