Today's leading minds talk AI with host Byron Reese.

About this Episode

Episode 78 of Voices in AI features host Byron Reese and Alessandro Vinciarelli as they discuss AI, social signals, and the implications of a computer that can read and respond to emotions. Alessandro Vinciarelli has a Ph.D. in applied mathematics from the University of Bern and is currently a professor in the School of Computing Science at the University of Glasgow. Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. I'm Byron Reese. Today our guest is Alessandro Vinciarelli. He is a full professor at the University of Glasgow. He holds a Ph.D. in applied mathematics from the University of Bern. Welcome to the show, Alessandro.

Alessandro Vinciarelli: Welcome. Good morning.

Tell me a little bit about the kind of work you do in artificial intelligence.

I work in a particular domain called social signal processing, which is the branch of artificial intelligence that deals with social psychological phenomena. We can think of the goal of this part of the domain as trying to read people's minds, and through this to interact with people in the same way as people do with one another.

So that is like picking up on the subtle social cues that people naturally use, and teaching machines to do that?

Exactly. At the core of this domain are what we call social signals: the nonverbal behavioral cues that people naturally exchange during their social interactions. We are talking here, for example, about facial expressions, spontaneous gestures, posture, the way of speaking: not what people say, but how they say it. The core idea is that, just as we can see facial expressions with our eyes and hear the way people speak with our ears, it is also possible to sense these nonverbal behavioral cues with common sensors such as cameras, microphones, and so on.
Through automatic analysis of the signal and the application of artificial intelligence approaches, we can map the information we extract from images, audio recordings and so on into social cues and their meaning for the people involved in an interaction.

I guess implicit in that is an assumption that there's a commonness of social cues across the whole human race? Is that the case?

Yes. Let's say social signals are the point where nature meets nurture. What does that mean? It means that, on the one hand, this is something intimately related to our body, to our evolution, to our very nature as biological beings. In this sense, we all have the same expressive means at our disposal: we all have the same way of producing speech, the same voice, the same phonetic apparatus. The face is the same for everybody; we have the same disposition of muscles for producing a facial expression. The body is the same for everybody. So, from the way we talk to our bodies, the expressive machinery is the same for all people around the world. However, at the same time, as part of a society, part of a context, we learn from others to express specific meanings, like for example a friendly attitude, a hostile attitude, happiness and so on, in a way that matches the people around us.

To give an example of how this can work: when I moved to the U.K. (I'm originally from Italy) and started to teach at this university, a teaching inspector came to see me and told me, "Well, Alessandro, you have to move your arms a little bit less, because you sound very aggressive. You look very aggressive to the students." You see, in Italy it is quite normal to move the hands a lot, especially when we communicate in front of an audience. However, here in the U.K., when I use my arms (because everybody around the world gestures), I have to do it in a more moderate way, in a more, let's say, British way, in order not to seem aggressive.
So, you see, gestures communicate all over the world; however, the accepted intensity changes from one place to another.

What are some of the practical applications of what you're working on?

Well, it is quite an exciting time for the community working on these topics. If we look at the history of this particular branch of artificial intelligence, the early 2000s were the very pioneering years. Then the community established itself roughly between the late 2000s and three or four years ago, when the technology started to work pretty well. And now we are at the point where we start seeing technologies initially developed at the research level, in the laboratories, being applied in the real world.

To give an idea, think of today's personal assistants that can understand not only what we say and what we ask, but also how we express our request. Think of the many animated characters that can interact with us: virtual agents, social robots and so on. They are slowly entering reality and interacting with people the way people do, through gestures, through facial expressions and so on. We see more and more companies that are involved and active in these domains. For example, we have systems that manage to recognize people's emotions through sensors that can be worn like a watch on the wrist. We have very interesting systems; I collaborate in particular with a company called Neurodata Lab that analyzes the content of multimedia material, trying to get an idea of its emotional content. That can be useful in any type of video-on-demand service. There is a major push toward human-computer interfaces, or more generally human-machine interfaces, that can figure out how we feel in order to intervene and interact with us appropriately. These are a few major examples.

So, there's voice, which I guess you could use over a telephone to determine some emotional state.
And there are facial expressions. And there are other physical expressions. Are there other categories beyond those three that break up the world when you're thinking of different kinds of signals?

Yes, somewhat. The very fact that we are alive and have a body forces us to communicate through that body with nonverbal behavioral cues, as they are called. Even if you try not to communicate, that itself becomes a cue, a form of communication. And there are so many nonverbal behavioral cues that psychologists group them into five fundamental classes.

One is whatever happens with the head. We have mentioned facial expressions, but there are also movements of the head: shaking, nodding and so on. Then we have posture. At this moment we are talking into a microphone, but when you talk to people in person, you tend to face them; you can talk to them without facing them, but the impression would be totally different. Then we have gestures. When we talk about gestures, we mean the spontaneous movements we make. It's not like the OK gesture with the thumb, and it's not like pointing at something; those have a pretty specific meaning. For example, self-touching typically communicates some kind of discomfort. These are the spontaneous movements we make when we speak; from a cognitive point of view, speaking and gesturing form a bimodal unit, something that gets produced together. Then we have the way of speaking, as I mentioned: not what we say, but how we say it, the sound of the voice, and so on. Then there is appearance: everything we can do to change how we look. So, for example, the attractiveness of the person, but also the kind of clothes you wear, the type of ornaments you have, and so on. And the last one is the organization of space. For example, in a company, the more important you are, the bigger your office is.
So space, from that point of view, communicates a form of social verticality. Similarly, we modulate our distances with respect to other people, not only in physical terms but also in social terms: the closer a person is to us from a social point of view, the closer we let them come from a physical point of view. So, these are the five broad categories of social signals that psychologists fundamentally recognize as the most important.

Well, as you go through them, I can see how AI would be used. They're all forms of data that could be measured, so presumably you can train an artificial intelligence on them.

That is exactly the core idea of the domain and of the application of artificial intelligence to these types of problems. The point is that to communicate with others, to interact with others, we have to manifest our inner state through our behavior, through what we do, because we cannot communicate something that is not observable. And whatever is observable, meaning accessible to our senses, is also accessible to artificial sensors. Once you can measure something, once you can extract data about it, that is where artificial intelligence comes into play: at the point where you can extract data, and the data can be automatically analyzed, you can automatically infer information about the social and psychological phenomena taking place.
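To make the pipeline Vinciarelli describes concrete, here is a minimal sketch, not his actual system: invented behavioral features (speech rate, gesture rate, pitch variation) mapped to an invented social-signal label with a simple nearest-centroid rule. Real social signal processing uses far richer features and models; this only illustrates the "measure, then infer" idea.

```python
# Hypothetical sketch: map crude nonverbal-behavior measurements to a
# social-signal label. All feature names and numbers are invented.
import math

# Each training example: (features, label); features are
# (speech rate in words/s, gesture rate per minute, pitch variation).
TRAINING = [
    ((2.0, 25.0, 0.8), "aggressive"),
    ((2.2, 30.0, 0.9), "aggressive"),
    ((1.2, 5.0, 0.3), "calm"),
    ((1.0, 8.0, 0.2), "calm"),
]

def centroids(examples):
    """Average the feature vectors of each label into one centroid."""
    sums, counts = {}, {}
    for feats, label in examples:
        counts[label] = counts.get(label, 0) + 1
        prev = sums.get(label, [0.0] * len(feats))
        sums[label] = [a + b for a, b in zip(prev, feats)]
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def classify(feats, cents):
    """Assign the label whose centroid is nearest in Euclidean distance."""
    return min(cents, key=lambda lab: math.dist(feats, cents[lab]))

cents = centroids(TRAINING)
print(classify((2.1, 28.0, 0.85), cents))  # nearest the "aggressive" centroid
```

The same shape applies whatever the sensor: extract measurable cues, then let a learned model map them to the social meaning.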
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
GigaOm CEO Byron Reese recently sat down with Scott Aaronson to discuss quantum computing. Aaronson is the David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin, where he also directs the UT Quantum Information Center. Prior to UT, he taught Electrical Engineering and Computer Science at MIT. Aaronson's research focuses on the capabilities and limits of quantum computers and computational complexity.

Byron Reese: Welcome Scott.

Scott Aaronson: Thanks, great to join you.

So it seems like you're on a one-man crusade to dispel all the popular notions of quantum computing. Why don't we start with that?

Okay, well, I write a blog, and basically what happens is that every time there's some really outrageous claim about quantum computing that gets into the press, which is often, people start emailing me and ask me to respond to it. So just by circumstance, and because no one else sets out to do this, it became me who did a lot of the responding. And the thing that you often have to respond to is this idea that a quantum computer has all these qubits that are zero and one simultaneously, and therefore can solve really complex problems by looking at every possible combination of those all at once. And that's not true.

So explain why that isn't true, or what is true, however you want to do it.

Well, it takes some time to explain…

Take all the time you want.

All right. So a qubit, which is the quantum version of a bit, can be in what we call a superposition of the zero and one states. So it's neither definitely zero nor definitely one. And the main problem is that people always want to round this down to something they already know. They'll say, "Well, you just mean that the bit is either zero or it's one, and you just don't know which," right? And then you look at it and you see which one. Well, if that's all there was to it, it wouldn't be so interesting.
And this is what the popular articles often do: they switch to saying, "Well, it must be both zero and one simultaneously." Then, well, if I had 1,000 qubits, they could explore, let's say, two to the 1,000th power possibilities simultaneously, and that must be what is producing this enormous speedup that quantum computation promises. That is gesturing toward something in the vicinity of the truth, but the problem is that when you measure a qubit, you only see one result: you see a zero or a one, you don't see both. And what quantum mechanics is, at the core, is a way of calculating the probability that you're going to see one outcome or another when you make an observation.

Now, the key point is that quantum states don't obey the normal rules of probability that we know. A probability is a number from zero to one: you could have a 30% chance of rain tomorrow, but you never have a -30% chance, right? That would be nonsense. But quantum mechanics is based on numbers called amplitudes, which can be positive or negative; in fact, they can even be complex numbers. When you make a measurement, these amplitudes turn into probabilities, so a larger amplitude becomes a larger probability of seeing that outcome; but while a system is isolated, the amplitudes can evolve by rules that are very unfamiliar to everyday experience. That is what pretty much everything you've ever heard about the weirdness of the quantum world boils down to. So what a qubit really is is a bit that has some amplitude for being zero and some amplitude for being one: one of these complex numbers attached to each of the possibilities. If I had 1,000 qubits, likewise, there would be an amplitude that I would have to assign to every possible setting of all 1,000 bits.
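The amplitude-to-probability rule Aaronson describes (the Born rule) fits in a few lines; this is an illustrative sketch with made-up amplitudes, not code from the interview:

```python
# A qubit's state: one complex amplitude per outcome. Amplitudes, unlike
# probabilities, can be negative or even complex.
amplitudes = {"0": complex(0.6, 0.0), "1": complex(0.0, -0.8)}

# Measurement: each amplitude a becomes the probability |a|^2 (Born rule).
probs = {outcome: abs(a) ** 2 for outcome, a in amplitudes.items()}

# A valid quantum state is normalised, so the probabilities sum to 1.
assert abs(sum(probs.values()) - 1.0) < 1e-12

print(probs["0"], probs["1"])  # 0.36 and 0.64, up to float rounding
```

The measurement only ever reveals one of the two outcomes, sampled with these probabilities; the amplitudes themselves are never seen directly.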
So quantum mechanics has been telling us since the 1920s that, just to keep track of the state of, let's say, 1,000 measly particles, there is an immense object beneath the surface: nature is somehow keeping track of this list of two to the 1,000th power complex numbers, if you like, which is more numbers than could be written down in the entire observable universe. But again, the problem is that when you make a measurement, you don't actually see these numbers; you just see a single probabilistic outcome. So you could create what we call an equal superposition over all possible answers to your hard problem; that's actually a very easy thing to do with a quantum computer. The problem is that if you just did that, then when you measure, quantum mechanics tells you that all you're going to see is a random answer. And if you just wanted a random answer, well, you could have picked one yourself with a lot less trouble, right? So the entire hope for getting a speed advantage from a quantum computer is to exploit the way that these amplitudes work differently than probabilities.

The main thing that amplitudes can do that probabilities don't do is that they can interfere with each other. This is most famously illustrated in the double-slit experiment, which we were just discussing before the show. This is the thing where you shoot photons one at a time at a screen with two small slits in it, and you look at where they end up on a second screen behind it. What you find is that there are certain spots where the photon never appears, or almost never appears; and yet, if you close off one of the slits, then the photon can appear in those spots. To say that again: by decreasing the number of paths that the photon could take to reach a certain spot, you can increase the chance that it gets to that spot. This is the thing that violates any conventional understanding of probability.
If the photon were just going through one slit or the other, this would be nonsense, okay? Whereas if you observe which slit the photon is going through, or more generally if the information about which slit it's going through leaks out into the external world in any way, then again the photon can appear in these spots; and if you stop measuring which slit it went through, then it doesn't appear there anymore. So the quantum mechanical explanation is that the photon has some amplitude of reaching the spot via the first slit and some amplitude of reaching it via the second slit, and to find the final amplitude that it gets to a certain place, you have to add up the amplitudes over all the ways it could have reached that spot. Now, if one of those amplitudes is positive and the other one is negative, then those amplitudes can, as we say, interfere destructively and cancel each other out, with the result that the final amplitude is zero, and so that event doesn't happen at all; whereas if you close off one of the slits, then the remaining amplitude is positive or it's negative, and so the photon can appear there. Okay, so that's quantum interference, which, as I said, is behind pretty much every goofy quantum effect you've ever heard about.

Now the idea with the quantum computer is to do something like the double-slit experiment, but on a much more massive scale, where instead of just having one particle, we might have thousands or millions of particles which can all be correlated with each other. The quantum version of correlation is called entanglement. And so the state of 1,000 qubits, as we said, involves two to the 1,000th power amplitudes, and so forth.
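The two-slit arithmetic Aaronson walks through can be checked numerically; the amplitudes below are invented for illustration, but the rule, sum over paths, then square the magnitude, is the one he states:

```python
# Amplitude to reach one spot on the screen via each slit.
amp_slit1 = 0.5    # via slit 1
amp_slit2 = -0.5   # via slit 2: opposite sign

# Both slits open: add amplitudes over all paths, THEN square.
# The positive and negative contributions cancel, so the photon
# never lands at this spot.
both_open = abs(amp_slit1 + amp_slit2) ** 2

# Slit 2 blocked: only one path remains, and the spot becomes reachable.
one_open = abs(amp_slit1) ** 2

print(both_open, one_open)  # 0.0 0.25
```

Ordinary probabilities could never do this: removing a path (closing a slit) raises the chance of arriving, exactly because amplitudes, not probabilities, are what add.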
Now, what you're trying to do with every quantum algorithm is choreograph things in such a way that, for each wrong answer to your computational problem, some of the paths leading to that answer have positive amplitude and others have negative amplitude, so that on the whole they cancel each other out; whereas the paths leading to the right answer should all be in phase with each other: they should all have amplitudes of the same sign, say all positive or all negative. If you can mostly arrange for that to happen, then when you measure the state of your quantum computer, you will see the right answer with a large probability. If you don't see the right answer, you can simply keep repeating the computation until you do.

So nature really gives you a very bizarre hammer that I think can be explained in five or ten minutes, as I did, but it doesn't really compress well into a one-sentence summary. People always want to round it down to "the quantum computer just tries all of the possible answers at once," but the truth is that if you want to see an advantage, you have to exploit this interference phenomenon. It was really only in the 1990s, more than a decade after physicists like Richard Feynman first proposed the idea of quantum computation, that people finally started figuring out: what are some nails that this hammer can hit? What is this interference ability good for? That's what really started quantum computation as a field.

And where are we with it now? Quantum machines exist, right?

Oh yeah, absolutely, so…

And how many are there?

Well, people have been doing experiments in quantum computation since the 1990s, and just within the last three or four years the field has started to see really serious investment, where it is no longer just the sort of academic pursuit it was when I entered it around 2000.
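This choreography can be seen in miniature with one qubit and the standard Hadamard operation (plain Python here, no quantum library; the example is ours, not Aaronson's). Applying it once gives an equal superposition, a coin flip if measured; applying it twice makes the two paths into the "1" outcome carry opposite signs and cancel, so "0" returns with probability 1:

```python
import math

# The Hadamard operation as a 2x2 matrix of amplitudes.
# Note the minus sign: one path picks up a NEGATIVE amplitude.
r = 1 / math.sqrt(2)
H = [[r, r],
     [r, -r]]

def apply(gate, state):
    """Matrix-vector product: evolve the amplitude vector by a gate."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

state = [1.0, 0.0]       # start in |0>
state = apply(H, state)  # amplitudes ~(0.707, 0.707): a coin flip if measured
state = apply(H, state)  # paths to |1> cancel; amplitudes return to (1, 0)

probs = [a * a for a in state]
print([round(p, 6) for p in probs])  # [1.0, 0.0]
```

Measured after one step the outcome is random; after two, interference has concentrated all the probability on the right answer, which is the same trick, scaled up, that real quantum algorithms play.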
It is now Google, IBM, Microsoft, Intel, and a bunch of startup companies, all investing on a scale of hundreds of millions of dollars.

Why are they so expensive?

Well, you're trying to build a technology that's never been built before. There are many different approaches to quantum computing, but if you're doing superconducting qubits, which is maybe the most popular approach today, then at a bare minimum you need to cool everything down to 10 millikelvin or so, so that your chip superconducts and you see the quantum behavior. That means you need an enormous cooling system. Then you need all this custom electronics to send signals to the qubits. Now what people are trying to do is integrate a large number of qubits, and it's a serious operation.

And who's the current record holder right now? How many qubits?

It's a mistake to just look at the raw number of qubits, because there's a tradeoff. What we really care about is the quality of the qubits: a qubit that maintains its quantum state for a long time without leaking it into the environment is a very good qubit; a qubit that just leaks its state really fast is a bad qubit.

Like a discount qubit.

Yeah, exactly. A bad qubit you can't use for very many steps of computation, right? You can maybe do a few steps of quantum computation, but then the qubit will just die; it will just revert to being a classical bit. So you really need to look at quality. If you just cared about the sheer number, there's a startup company called D-Wave that famously has been selling devices with 2,000 qubits. A few people have bought them, but analysis over the past five years has found that the qubits do not seem to be of good enough quality to see a clear speedup over a classical computer, when you do a fair comparison.
What players like Google and IBM are trying to do right now is something similar to what D-Wave did, namely a chip with a large number of integrated superconducting qubits, but now with qubits of much, much higher quality. Roughly, the D-Wave qubits could maintain their coherence for some nanoseconds, while the new generation of superconducting qubits can maintain their quantum coherence on a scale of tens of microseconds. That doesn't sound like a long time, but it's tens of thousands of times longer, and it gives you a lot more room to do interesting quantum operations. So to answer your question: with this new generation of qubits, I think Google and IBM and Rigetti have all built chips on the order of 20 qubits, which work reasonably well, not as well as you would like, and they're right now in a race to scale this up to about 50-70 qubits. They will surely be able to do that; the biggest question is how well these qubits will perform when they're all integrated into one chip. Will they still maintain their quantum coherence long enough for us to actually execute interesting algorithms and see an interesting speedup?

So when you say they become just a plain old ordinary bit, is that like a light bulb burning out and then they're not good for anything, or is that like a reset button?

There's a reset button. You can always re-initialize the qubit, but then again you have a very short time to try to do quantum operations before the qubit leaks into the environment.

So, how big are these machines? If you went into Google and said, "Show me." Because we tell all these stories about how the first computers filled a room.

Yeah, well, I was just out there a few months ago in their lab in Santa Barbara.
It does fill a room; each device is the size of a small room, but almost all of that is just the cooling system and the control electronics. The actual chip where the qubits are is the size of an ordinary computer chip.

Is room-temperature quantum computing, would that ever be a thing?

Conceivably. There are proposals, including optical, photonic quantum computing, that, if one could get them to work at all, could work at room temperature. The superconducting approach, which is maybe the furthest along right now, does currently require very low temperatures, and trapped ions, which is maybe the second approach after superconducting, also requires very low temperatures right now.

Is the development of these machines progressing along a Moore's Law kind of arc? Are they doubling by some measure in some capability every X months?

I think it's too early to identify any Moore's Law pattern. I mean, for god's sakes, we don't even know which technology is going to be the right one. The community has not converged on whether it is going to be superconducting or trapped ions or something else. You can make plots of the number of qubits and the coherence time of those qubits, and you do see a strong improvement. But the number of qubits, let's say, has gone up from one or two to 20; it's kind of hard to see an exponential in those numbers. Actually, for the coherence time, if you plot it over the past 20 years, I think the error rate has been going down more or less exponentially, so there is sort of a Moore's Law there. Basically, the rates of decoherence, unwanted interaction between the qubits and their environment, are still too high. They're still higher than they need to be for us to scale this technology up to, say, millions of qubits, but on the other hand, they are orders of magnitude better than they were 20 years ago when people first started doing these experiments.

That sounds exponential.
Well, yeah, so I think there is something there. But the error rates: we know, first of all, that they can't be pushed down forever, but secondly, they don't have to be. This is actually a very important point. There were physicists in the 1990s, when quantum computing was brand new, who said, "This could never work, not even in principle, because you will never perfectly isolate a qubit from its environment, and if it's not perfectly isolated, then you can only do so many steps until everything is dead. You're never going to be able to scale this up." Now, what changed most people's views was a fundamental discovery in the mid-90s called quantum error correction and quantum fault tolerance, and what it said is that you don't actually need to get the rate of decoherence down to zero; you merely need to make it very small. Initially the estimates said something like a one-in-a-billion chance of an error per qubit per logical operation would suffice; now I think the estimates are more on the order of one in a thousand. As long as the decoherence rate is low enough, what was found is that you can encode the logical qubits you care about across the collective state of many physical qubits, using a clever error-correcting code, in such a way that even if, let's say, 1% of your physical qubits die, go out like that lightbulb, you can detect that, fix it, recover the information you care about from the remaining 99%, and then just keep going. The main engineering goal in this field, since the discovery of fault tolerance, has been to get the physical decoherence rate low enough that you can start using these error-correcting codes, and then push the effective decoherence rate down to zero. So we're not quite there yet.
Basically, if you just look at one or two qubits in isolation, then the Google group and others, I think four or five years ago, got the decoherence rates good enough that, for qubits in isolation, you're already past the threshold for fault tolerance. That itself is fairly recent. But now the problem is that when you try to integrate a large number of qubits into a single chip, let's say 50 of them or 100, then you need much more control electronics, there are many more interactions, and that pushes the rate of decoherence back up. So now the challenge is to keep the decoherence rate low enough to apply these error-correcting codes while integrating a huge number of qubits into a single system.

Where is the United States compared to other countries in terms of investment and accomplishments? Is the majority of the activity in the field here in this country, or are we just a small part of it?

I would say that the U.S. is the leader. The efforts of Google and IBM and Rigetti, and the group in Maryland which is the leader in trapped ions, are all US-based, as are many of the leading theoretical groups, like those at MIT, CalTech, and Berkeley. Canada also happens to be a huge player in quantum computing, particularly the University of Waterloo, which may be the world's biggest center for this field. Besides that, Europe is a big player. In fact, the EU recently launched a $1 billion quantum information funding initiative, so the Netherlands especially, also the UK (which will no longer be a part of the EU), and a bunch of other countries in Europe. China has, for whatever reason, focused much more on quantum communication, which is a different area from quantum computing, and I would say China is now the world leader in quantum communication.
They had a breakthrough last summer where, for the first time, they could send a quantum state up to a satellite and back down to Earth, from one end of China to the other, and it maintained its quantum state, maintained its coherence. That could be useful for various applications, but of course it's a different thing from quantum computing.

That's an entanglement thing. You flip it one way in Shanghai and it instantly flips the same way in Beijing, and in theory, in communications, it can't be hacked?

Okay, well, wait, there's a bunch of things to disentangle there, so to speak. The first thing is that the Chinese did actually demonstrate distributing this quantum entanglement across thousands of miles, which was a distance record for entanglement. But you have to be careful, because entanglement cannot be used to send a signal faster than light. If I have two entangled particles, I can measure one of them, and if I see some outcome, like zero, then instantly I know that the other one is also zero. But it's not like I got to choose that the outcome should be zero; it was going to be zero or one randomly.

But it is faster than the speed of light?

Well, it's instantaneous, but it is not a channel of communication. What it is is a form of correlation, which you can use to produce correlations between faraway particles that you could never have produced classically. That was the famous discovery made by John Bell in the 1960s: there are certain experiments you can do on entangled particles whose results could never be explained by any theory where the particles just agree in advance, "Listen, if anyone asks, I'll be a zero and you'll be a one," right? There's no theory of that kind that explains the results of these experiments. So entanglement is a real phenomenon in the universe, but it's not useful for actually sending instantaneous signals, right?
Einstein's speed limit, that you can't send a signal faster than light, is still upheld. The other thing you alluded to is quantum cryptography, quantum key distribution, which is a different idea that involves getting theoretically unbreakable cryptography by sending qubits across a channel. That actually doesn't require entanglement, and it can be done with current technology. There are even companies that sell quantum cryptography devices today. So far there's been only a very, very small market for them, because first of all, it doesn't work over the standard internet; you need a special communication infrastructure to do it. And the current devices, the ones that use a fiber-optic cable, work over distances of about 10 miles; after about 10 miles, the photons lose their quantum coherence. So it's good enough for the financial district of a city, but not for really connecting the whole world. That's why people were excited when China managed to do this to and from a satellite over thousands of miles. Unfortunately, though, the bit rate I think is still extremely poor.

And it's been argued by some that human consciousness is itself a quantum phenomenon. Does that mean anything to you?

It's an interesting hypothesis, but I don't think there's good evidence at this time for consciousness involving quantum computation, and there are several difficulties that any theory of that kind would have to overcome. The first one is that the brain seems like an incredibly hot and wet and noisy environment. It seems like no place for a qubit.

But the answer to that has been that they do believe now that quantum phenomena are occurring, like in a bird's navigation system and things like that.

Oh yeah, that's right. There is no question that there are quantum effects that are important for biology; that is not in dispute for bird navigation.
Also green plant photosynthesis is a quantum effect, but maybe this is not so surprising, because all of chemistry is based on quantum mechanics, right? So of course when you go to a small enough scale, you're going to see quantum phenomena. What's cool is that evolution was sometimes able to exploit these phenomena, but now the problem is, if you want to explain human thought, the brain is a very large system. This is not on the molecular scale, and once things are resolved to the level of a given neuron either firing a signal across an axon or not firing one, right, then that seems like very much a classical event. It's an event that leaves records in its environment; that's what I mean by that. But to jump in on that one for just a second: within the neuron itself, I mean we don't know how a neuron does what it does, it could be operating at the Planck level, right? The problem is that a neuron does not have anything of nearly high enough energy to probe physics at the Planck scale. Not even the Large Hadron Collider is able to get anywhere near the Planck scale. At scales 20 orders of magnitude bigger than the Planck scale, then yeah, it is not ruled out. There could be all sorts of weird quantum phenomena taking place in a neuron, but then one would have the burden of showing that any of those phenomena were important for consciousness, as opposed to just being like another source of thermal noise, effectively. So that's where that discussion is. If you want to have a quantum account of consciousness, I think that there are further difficulties. The first thing is you have to have a reason why that is needed: what does it help you to explain that was previously unexplained? The answer to that might be along the lines of, there seem to be sorts of problems that the human brain can solve that don't seem to be solvable by a Turing machine. I'm not sure that that's true actually. Now I would say we don't know the answer to that.
I mean the famous 'halting problem' that provably no Turing machine is able to solve, but I can't solve the halting problem either. If I could, then I could immediately win the Fields Medal in math by resolving thousands of famous unsolved math problems. I try to solve math problems, but it's very much hit or miss. So what we know from the work of Gödel and Turing and those people is that you could never build a computer that is a perfect oracle for mathematics. Some people like Roger Penrose have seized on that observation by Gödel and Turing in order to say that the brain, or at least the mathematician's brain, must be doing something that a Turing machine can't, but the obvious problem with that argument is that humans are not perfect oracles for mathematics either, to put it very mildly. To achieve the dreams of AI, a computer would not need to be a perfect mathematician; it would merely have to be as smart as or smarter than us. So, we haven't really talked about what a quantum computer would be used for, what it would be useful for, but that feeds into this debate as well about quantum mechanics and consciousness, because an issue is that the types of problems that we know quantum computers would be good at do not seem like a good fit to what human creativity is good at. To say it very briefly, the main applications that quantum computers are known to have are simulating quantum physics and chemistry, breaking public key encryption systems, and getting probably some modest speedups for optimization and machine learning type problems. But the most dramatic speedups are for breaking public key cryptography and for simulating quantum mechanics, which I hope you agree are not exactly the things that humans evolved toward to help us survive on the savannah.
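The halting-problem argument referenced above rests on a diagonal construction that fits in a few lines (an illustrative sketch added in editing; the function names are invented): suppose some oracle halts(prog) decided whether a program halts, and build a program that does the opposite of whatever the oracle predicts about it.

```python
# Sketch of Turing's diagonal argument. Suppose, for contradiction,
# that halts(prog) correctly decides whether the zero-argument
# program prog() eventually halts.

def paradox(halts):
    def d():
        if halts(d):       # if the oracle claims d halts...
            while True:    # ...then d loops forever;
                pass
        # ...and if the oracle claims d loops, d halts immediately.
    return d

# Any concrete oracle is wrong about its own diagonal program.
# For example, an oracle that always answers "loops forever":
d = paradox(lambda prog: False)
print(d() is None)   # d promptly halts, refuting that oracle: prints True
```

Whatever answer a proposed halts() gives about d, d does the opposite, so no total, correct halting decider can exist.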
Right, well clearly birds don't use quantum effects to navigate, because quantum computing's only good for breaking public key encryption, not for navigating north to south. You're just saying that what we're building machines to do in the Model-T era of quantum computers doesn't seem to be what the brain does; ergo, the brain is not a quantum device. Well, I'm just saying that these are the burdens that this hypothesis has to meet to get taken seriously. You need to show where the quantum effects could actually be used in a computational way in the brain, and then you need to explain what they're for that the brain could not be doing just as easily classically, and what you gain by postulating them. Fair enough. I think the answer to that, like if you really boil it down, is, people say, 'Well, we have consciousness, we experience the world,' and how that comes about does not seem to be a question we know how to ask scientifically, nor do we even know what the answer to that would look like scientifically, and so it seems like this big asterisk in the log book of life. Then you all of a sudden get this theory that has all this other weird stuff going on. You say that's weird too. Maybe the two weirds are paired together, so I think it's an intuitive thing more than anything. Right, well, that does seem to be what the argument boils down to. Consciousness is weird, quantum mechanics is weird, ergo maybe they're related. I mean the problem is, as I was saying, with just a bare quantum computer it doesn't seem a lot easier to understand how that could be conscious than to understand how an ordinary computer could be conscious. It seems like there's a mystery in the one case just as in the other. Regarding consciousness and quantum phenomena, you talked briefly about some of the things that we plan to use quantum machines for, but surely Google and IBM aren't investing all of that money because they want to break public key encryption, right?
Right, that's absolutely right. I think, to be perfectly honest, Google and IBM and the other players are not completely sure themselves what the applications are going to be. They're very excited about the applications to machine learning and optimization. To be honest it's sort of a question mark right now. Even if you had a perfectly functioning quantum computer with perfect coherence and millions of qubits, we're not really sure yet exactly how much speedup it could give you for optimization and machine learning problems. There are algorithms that might give a huge speedup, but we don't really know how to analyze them. We may just have to try them out with the quantum computer and see how they perform, and then there are algorithms that can be analyzed, that do give huge speedups, but only in very, very special situations, where we don't really know yet if they will be relevant to practice or not. What you typically can get for optimization and machine learning is a square root speedup, so you can typically solve those sorts of problems in something like the square root of the number of steps that a classical computer would need, and that is using one of the most famous quantum algorithms, which is called Grover's algorithm, discovered in 1996. A square root speedup is very useful; it sort of doubles the size of the problem instance you can handle if you're trying to do computational optimization. What used to take two to the n steps now only takes two to the n over two. Okay, but that's not an exponential speedup. The exponential speedups that we know about seem to be much more special. They do include breaking essentially all forms of public key encryption that we happen to use today to secure the internet, so that's probably the most famous application of quantum computers.
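The "doubles the size of the problem instance" claim is easy to put in numbers (a quick arithmetic sketch added in editing, not from the interview): brute-force search over 2^n candidates takes about 2^n classical steps, while Grover's algorithm needs only about 2^(n/2) quantum queries.

```python
import math

def classical_steps(n):
    # Brute-force search over 2**n candidates.
    return 2 ** n

def grover_queries(n):
    # Grover's algorithm needs on the order of sqrt(2**n) = 2**(n/2) queries.
    return math.isqrt(2 ** n)

# A budget that brute-forces an n = 20 instance classically
# covers an n = 40 instance with Grover: the instance size doubles.
print(classical_steps(20))   # 1048576
print(grover_queries(40))    # 1048576
```

So for a fixed query budget, a quadratic speedup roughly doubles the exponent you can afford, which is exactly the doubling of instance size described above.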
That's called Shor's algorithm, which was discovered in 1994, but even there, there's a lot of research today on building quantum-proof public key encryption systems, and actually NIST (the National Institute of Standards and Technology) is going to have a competition over the next few years to establish standards for quantum-resistant encryption, and it looks like we may actually migrate to that over the next decade or two. So, that is a solvable problem; I think the current encryption we use is vulnerable. Now, you know, what I think is probably the most important application of quantum computing, at least that we know about today, is actually the first one that Richard Feynman and the others thought of when they proposed this idea in the 1980s, and that's simply to use a quantum computer to simulate quantum mechanics itself. That's something that sounds almost too obvious to mention; it's what a quantum computer does in its sleep, and yet it has an enormous range of applications. If you want to design high temperature superconductors (we talked before about how current superconductors only work at close to absolute zero), well, solving that is a quantum mechanics problem. If you wanted to design higher efficiency solar panels, if you wanted to design better ways of making fertilizer, where it could be done at lower temperatures, these are all, sort of, many-body quantum materials and quantum chemistry problems, where even with the best supercomputers available today, there's only a limited amount that we can learn, because of this exponentiality of amplitudes. So a quantum computer can give you an enormous new window into simulating physics and chemistry, and that is something that can have a lot of industrial applications.
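Why factoring falls to a quantum computer can be sketched via the classical half of Shor's reduction (a sketch added in editing; here the period is found by brute force, which is the exact step the quantum computer accelerates exponentially): factoring N reduces to finding the period r of a^x mod N, after which gcd(a^(r/2) ± 1, N) yields factors.

```python
from math import gcd

def order(a, n):
    # Smallest r > 0 with a**r % n == 1. This brute-force loop is the
    # part Shor's algorithm replaces with an exponentially faster
    # quantum period-finding step.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_period(n, a):
    # Classical post-processing of Shor's algorithm, for a "lucky" base a
    # whose period r is even and gives nontrivial gcds.
    r = order(a, n)
    assert r % 2 == 0, "need an even period; try another base"
    return gcd(a ** (r // 2) - 1, n), gcd(a ** (r // 2) + 1, n)

print(factor_via_period(15, 2))   # period of 2 mod 15 is 4, yielding (3, 5)
```

RSA's security rests on exactly this factoring problem, which is why fast period finding breaks it.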
That's not something that directly affects the end user in the sense that you're going to use it to check your email or play 'Angry Birds' or something, but it is something where improving on any of these sorts of material processes could be billions of dollars of value. Why is it that we don't know more about what we would do with quantum machines? Because it would seem to my limited mind that we know what we're trying to build, we just don't have the physics down on actually building it. We know in theory how it would behave, so is it that we don't know how the machine will work, or that we don't have the imagination at this point, that it's just too soon to have thought it all through? Well, one hypothesis would be that quantum computers only give you a speedup for certain specialized applications, and we have discovered many of those applications. That might be the truth of the matter. A second possibility would be that there are many more applications of quantum computers that haven't been discovered yet, and we just haven't had the imagination to invent the algorithms. I would guess that the truth is somewhere in between those two. People have been thinking about quantum algorithms seriously now for about 25 years, so it's not as long as people have been thinking about classical algorithms, but it's still a significant chunk of time, and there is an immense body of theoretical understanding about quantum algorithms, what they can do, and also what they can't do in various settings. We understand some things about what sorts of tasks seem to be hard even for quantum computers, but some people are disappointed that maybe the most striking quantum algorithms have been in place since the 1990s.
Shor's algorithm, Grover's algorithm, quantum simulation and all of these things have been enormously generalized and applied to all sorts of other problems, but there haven't been that many entirely new families of quantum algorithms discovered. There was maybe one in 2007, something called the HHL algorithm for solving linear systems, and that led to a lot of other developments. The truth is that we're not even very close to understanding the ultimate capabilities of classical algorithms, let alone quantum algorithms. So you've probably heard of the P versus NP question? We can't even rule out that there's a super fast classical algorithm to solve the traveling salesman problem and all these other NP-complete problems, although most of us believe that no such algorithm exists, but it's a measure of how far we are from really understanding algorithms that we can't rule it out. Far less do we understand the ultimate capabilities and limits of quantum algorithms, but there's a lot that we do know, and check back in another few years. I hope that we'll know more. Alrighty, well, that's a good place to leave it. Tell the readers how they can keep up with you and your writing. You mentioned your blog, can you throw out some links? So I'm pretty easy to find, my homepage is www.scottaaronson.com. I write a blog about quantum computing and also all sorts of other things, that's www.scottaaronson.com/blog. If you go to my blog, I've got the links to a bunch of popular articles and lecture notes about quantum computing, and then I have my book, "Quantum Computing since Democritus," which came out in 2013. A reference to 'there's nothing but atoms and the void'? Yeah, that's right. Alrighty, well thanks a bunch, Scott.

via Tumblr: Quantum Computing, Capabilities and Limits: An Interview with Scott Aaronson

When it rains it pours.
It seems, regarding enterprise IT technology innovation, it is common for multiple game-changing innovations to hit the street simultaneously. Yet, if ever the analogy of painting the car while it's traveling down the highway is suitable, it's this time. Certainly, you can take a wait-and-see approach with regard to adoption, but given the association of these innovations with greater business agility, you'd run the risk of falling behind your competitors. Let's take a look at what each of these innovations means for the enterprise and its associated impact on the business. First, let's explore the synergies of some of these innovations. Certainly, each innovation can and does have a certain value by itself; however, when grouped they can provide powerful solutions to help drive growth and new business models.
There are businesses that are already using these technologies to deliver new and innovative solutions, many of which have been promoted in the press and at conferences. While these stories illustrate strong forward momentum, they also tend to foster a belief that these innovations have reached a sufficient level of maturity that solutions built on them are not susceptible to availability problems. This is far from the case. Indeed, these innovations are far from mainstream. Let's explore what adoption means to IT and the business for these various innovations.

Hybrid Cloud
I specifically chose hybrid cloud versus public cloud because it represents an even greater amount of complexity to enterprise IT than public cloud alone. It requires collaboration and integration between organizations and departments that have a common goal but very different approaches to achieving success. First, cloud is about managing and delivering software services, whereas the data center is charged with delivering both infrastructure and software services. However, the complexity and overhead of managing and delivering reliable and available infrastructure overshadows the complexity of software services, resulting in the latter often receiving far less attention in most self-managed environments. When the complexity surrounding delivery of infrastructure is removed, the operations team can focus solely on delivery and consumption of software services. Security is always an issue, but the maturation process surrounding delivery of cloud services by the top cloud service providers means that it is a constantly changing environment. With security in the cloud, there is no room for error, or the applications could be compromised. This, in turn, requires that after each update to the security controls around a service, the cloud team (architects, developers, operations, etc.)
must educate themselves on the implications of the change and then assess how that change may affect their production environments. Misunderstand any of these updates, and the environment could become vulnerable. Hybrid cloud also often means that the team must retain traditional data center skills while also adding skills related to the cloud service provider(s) of choice. This is an often overlooked aspect of assessing cloud costs. Moreover, highly-skilled cloud personnel are still difficult to attract and usually demand higher than market salaries. You could (and should) upskill your own staff, but you will want a few experts as part of the team; relying solely on on-the-job training for public cloud is risky, as an unsecured public cloud may lead to compromising situations for businesses.

Internet-of-Things (IoT)
The issue with IoT is that it is not one single thing, but a complex network of physical and mechanical components. In a world that has been moving to a high degree of virtualization, IoT represents a marked shift back toward data center skills, with an emphasis on device configurations, disconnected states, limitations on the size of data packets being exchanged, and low-memory code footprints. Anyone who was around during the early days of networking DOS PCs will be able to relate to some of the constraints. As with all things digital, security is a highly complex topic with regard to IoT. There are many layers within an IoT solution that invite compromise: the sensor, the network, the edge, the data endpoint, etc. As many of the devices participating in an IoT network may be resource constrained, there's only so much overhead that can be introduced for security before it impairs the purpose. For many, however, when you say IoT, they immediately see only the analytical aspects associated with all the data collected from the myriad of devices.
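A minimal sketch of that kind of sensor analytics (field names and thresholds here are invented for illustration): learn a per-channel baseline from collected readings, then flag readings that drift well outside it.

```python
def learn_baseline(readings):
    # Mean and a simple spread estimate per sensor channel.
    stats = {}
    for channel in readings[0]:
        values = [r[channel] for r in readings]
        mean = sum(values) / len(values)
        spread = max(abs(v - mean) for v in values)
        stats[channel] = (mean, spread)
    return stats

def anomalous(reading, stats, factor=2.0):
    # Flag a reading if any channel drifts well outside its baseline.
    return any(abs(reading[ch] - mean) > factor * spread
               for ch, (mean, spread) in stats.items())

# Hypothetical telemetry history from a healthy machine.
history = [{"temp_c": 40 + i % 3, "vibration": 0.10 + 0.01 * (i % 2)}
           for i in range(100)]
stats = learn_baseline(history)
print(anomalous({"temp_c": 41, "vibration": 0.11}, stats))   # False: normal
print(anomalous({"temp_c": 55, "vibration": 0.30}, stats))   # True: rising temp and vibration
```

Real deployments replace this two-function script with trained models and domain expertise, which is precisely why the skills demands discussed here are nontrivial.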
Sure, analyzing the data obtained from the sensor mesh and the edge devices can yield an understanding of the way things work that was extremely difficult to reach with the coarse-grained telemetry these devices previously provided. For example, a manufacturing device may have signaled issues with a low hum; sensors now reveal that, in tandem with the hum, there's also a rise in temperature and an increase in vibration. After a few short months of collecting data, there's no need to even wait for the hum: the data will indicate the beginning of a problem. Of course, the value discussed in the prior paragraph can only be realized if you have the right skilled individuals across the entire information chain: those able to modify or configure endpoint devices to participate in an IoT scenario, the cybersecurity and infosec experts to limit potential issues due to breach or misuse, and the data scientists capable of making sense of the volumes of data being collected. Of course, if you haven't selected the public cloud as the endpoint for your data, you also have the additional overhead of managing the network connectivity and storage capacity associated with rapidly growing volumes of data.

Artificial Intelligence and Machine Learning (AI/ML)
If you can harness the power of machine learning and AI, you gain insights into your business and industry in a way that was very difficult until recently. While this is seemingly a simple statement, that one word "harness" is loaded with complexity. First, these technologies are most successful when operating against massive quantities of data. The more data you have, the more accurate the outcomes. This means that it is incumbent upon the business to a) find, aggregate, cleanse and store the data to support the effort, b) formulate a hypothesis, c) evaluate the output of multiple algorithms to determine which will best support the outcome you are seeking—e.g.
predictive, trends, etc.—and d) create a model. This all equates to a lot of legwork to get the job done. Once your model is complete and your hypothesis proven, the machine will do most of the work from there on out, but getting there requires a lot of human knowledge engineering effort. A point of caution: do not make business decisions using the outcome of your AI/ML models until you have followed every one of these steps and qualified the outcome of the model against the real world at least two times.

Blockchain
Touted as the technology that will "change the world," yet outside of cryptocurrencies, blockchain is still trying to establish firm roots within the business world. There are many issues with blockchain adoption at the moment; the most prevalent one is velocity of change. There is no single standard blockchain technology. There are multiple technologies, each attempting to provide the foundation for trusted and validated transactional exchange without requiring a centralized party. Buying into a particular technology at this point in the maturity curve will provide insight into the value of blockchain, but will require constant care and feeding, as well as the potential need to migrate to a completely different network foundation at some point in the future. Hence, don't bet the farm on the approach you choose today. Additionally, there are still many outstanding non-technical issues that blockchain value is dependent upon, such as the legality of blockchain entries as a form of non-repudiation. That is, can a blockchain be used as evidence in a legal case to demonstrate intent and validation of agreed-upon actions? There are also issues related to what effect use of a blockchain may have on various partnering contracts and credit agreements, especially for global companies with GDPR requirements. Finally, the value of a blockchain depends on a network large enough to enforce consensus. Who should host these nodes?
Are the public networks sufficient for business, or is there a need for a private network shared among a community with common needs?

Containers, DevOps, & Agile SDLC
I've lumped these three innovations together because, unlike the others, they are more technological in nature and carry elements of the "how" more so than the "what." Still, there is a significant amount of attention being paid to these three topics that extends far outside the IT organization, due to their association with enabling businesses to become more agile. To wit, I add my general disclaimer and word of caution: the technology is only an enabler; it's what you do with it that might be valuable or may have the opposite effect. Containers should be the least impactful of these three topics, as they are simply another way to use compute resources. Containers are smaller and more lightweight than virtual machines but still facilitate a level of isolation between what is running in the container and what is running outside the container. The complexity arises from moving processes from bare metal and virtual machines into containers, as containers leverage machine resources differently than the aforementioned platforms. While it's fairly simple to create a container, getting a group of containers to work together reliably can be fraught with challenges. This is why container management systems have become more and more complex over time. With the addition of Kubernetes, businesses effectively need the knowledge of data center operations in a single team. Of course, public cloud service providers now offer managed container management systems that reduce the requirement for such a broad set of knowledge, but it's still incumbent on operations to know how to configure and organize containers from a performance and security perspective.
DevOps and Agile Software Development Lifecycle (SDLC) really force internal engineering teams to think and act differently if they are transitioning from traditional waterfall development practices. Many businesses have taken the first step of this transition by starting to adopt some Agile SDLC practices. However, because of the need for retraining, hiring, and support of this effort, the interim state many of these businesses are in has been called "wagile," meaning some combination of waterfall and agile. As for DevOps, the metrics have been published regarding the business value of becoming a high-performing software delivery and operations organization. In this age of "software is eating the world," can your organization afford to ignore DevOps, or, if not ignore it, take years to transition? You will hear stories from businesses that have adopted DevOps and Agile SDLC and made great strides in reducing latency, increasing the number of releases they can make in a given time period, and deploying new capabilities and functions to production at a much faster rate with fewer change failures. Many of these stories are real, but even in these businesses, you will still find pockets where there is no adoption and teams still follow a waterfall SDLC that takes ten months to get a single release into production.

Conclusion
Individually, each of these innovations requires trained resources and funding, and can be difficult to move beyond proof-of-concept to completely operationalized production outcomes. Taken in combination, on top of existing operational pressures, these innovations can rapidly overwhelm even the most adept enterprise IT organization. Even in cases where there is multi-modal IT and these innovations are occurring outside the path of traditional IT, existing IT knowledge and experience will be required to support them.
For example, if you want to analyze purchasing trends for the past five years, you will need the support of the teams responsible for your financial systems. All this leads to the really big question: how should businesses go about absorbing these innovations? The pragmatic answer is, of course, to introduce those innovations tied to a specific business outcome. However, as stated, waiting to introduce some of these innovations could result in losing ground to the competition. This means that you may want to introduce some proof-of-concept projects, especially around AI/ML and Agile SDLC, with IoT and blockchain projects where they make sense for your business.

via Tumblr: Hybrid Cloud, IoT, Blockchain, AI/ML, Containers, and DevOps… Oh My!

Today's leading minds talk AI with host Byron Reese. About this Episode: Episode 77 of Voices in AI features host Byron Reese and Nicholas Thompson discussing AI, humanity, social credit, as well as information bubbles. Nicholas Thompson is the editor in chief of WIRED magazine, contributing editor at CBS, co-founder of The Atavist, and also worked at The New Yorker and authored a Cold War era biography. Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript. Transcript Excerpt: Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today my guest is Nicholas Thompson. He is the editor in chief of WIRED magazine. He's also a contributing editor at CBS, which means you've probably seen him on the air talking about tech stories and trends. He also co-founded The Atavist, a digital magazine publishing platform. Prior to being at WIRED he was a senior editor at The New Yorker and editor of NewYorker.com. He also published a book called The Hawk and the Dove, which is about the history of the Cold War. Welcome to the show, Nicholas. Nicholas Thompson: Thanks, Byron. How you doing? I'm doing great. So… artificial intelligence, what's that all about?
(Laughs) It's one of the most important things happening in technology right now. So do you think it really is intelligent, or is it just faking it? What is it like from your viewpoint? Is it actually smart or not? Oh, I think it's definitely smart. I think that the premise of artificial intelligence, if you define it as machines making independent decisions, is very smart right now and soon to get even smarter. Well, it always sounds like I'm just playing what they call semantic gymnastics or something. But does the machine actually make a decision, or does it make a decision no more than your clock makes a decision to advance the minute hand one minute? The computer is as deterministic as that clock. It doesn't really decide anything; it just is a giant clockwork, isn't it? Right. I mean that gets you into about 19 layers of a really complicated discussion. I would say "yes," in a way it is like a clock. But in other ways, machines are making decisions that are totally independent from the instructions or the data that was initially fed to them, finding patterns that humans won't see and couldn't have coded in. So in that way it becomes quite different from a clock. I'm intrigued by that. I mean the compass points to the north. It doesn't know which way north is. That would be giving it too much credit. But it does something that we can't do: find magnetic north. So how is that? Really, is the compass intelligent, by the way you see the world? Is the compass intelligent, by the way I see the world? Well, the compass is… I mean one of the issues here is that artificial intelligence uses two words that have very complicated meanings, and their definitions evolve as we learn more about artificial intelligence. And not only that, but the definition of artificial intelligence and the way it's used changes constantly, both as our technology evolves, as it learns to do new things, and as it develops its brand value.
So back to your initial question, "Is a compass that points to the north intelligent?" It is intelligent in the sense that it's adding information to our world, but it's not doing anything independent of the person who created it, who built the tools and who imagined what it would do. You build a compass, you know that it's going to point north; you put the pieces inside of it, [and] you know it will do that. It's not breaking outside of the box of the initial rules that were given to it, and the premise of artificial intelligence is that it is breaking out of that box. So I'd like to really understand that a little more. Like if I buy a Nest learning thermostat and over time I'm like, "oh I'm too hot, I'm too cold, I'm too cold," and it "figures it out," how is it breaking out of what it knows? Well, what would be interesting about a Nest thermostat (I don't know the details of how a Nest thermostat works, but) a Nest thermostat is looking at all the patterns of when you turn on your heat and when you don't…. If you program in a Nest thermostat and you say please make the house hotter between 6:00 in the morning and 10:00 o'clock at night, that's relatively simple. If you just install a Nest thermostat and then it watches you and follows your patterns and then reaches the same conclusion, it's ended up at the same output, but it's done it in a different way, which is more intelligent, right? Well, that's really the question, isn't it? The reason I dwell on these things is not to kind of count angels dancing on heads of pins, but to me this kind of speaks to the ultimate limit of what this technology can do. Like if it is just a giant clockwork, then you have to come to the question, "Is that what we are? Are we just a giant clockwork?" If we're not and it is, then there are limits to what it can do. If we are and it is, or we're not and it's not, then maybe someday it can do everything we can do. Do you think that someday it can do everything we can do? Yes.
I thought this might be where you were going, and this is where it gets so interesting. That was where, in my initial answer, I was starting to head in this direction, but my instinct is that we are like a giant clock, an extremely complex clock, a clock that's built on rules that we don't understand and won't understand for a long time, and that is built on rules that defy the way we normally program rules into clocks and calculators, but that essentially we are reducible to some form of math, and with infinite wisdom we could reach that: that there isn't a special spiritual unknowable element in the box… Let me pause right there. Let's put a pin in that word "spiritual" for a minute, but I want to draw attention to this: when I asked you if AI is just a clockwork, you said "No, it's more than that," and if I ask you if a human's a clockwork, you say "yeah, I think so." Well, that's because I was taking your definition of clock, right? So I think what you said a minute ago is really where it's at, which is: either we are clocks and the machines are clocks, or we are clocks and they're not clocks, and so on; there are four possibilities there. And my instinct is that if we're going to define it that way, I'm going to define clocks in an incredibly broad sense, meaning mathematical reasoning, including mathematics we don't understand today, and I'll make the argument that both humans and the machines you're creating are clocks. If we're thinking of clocks in a much narrower sense, which is just a set of simple instructions, input/output, then machines can go beyond that and humans can go beyond that too. But no matter how we define the clocks, I'm putting the humans and the machines in the same category. So, depending on what your base definitions are, I either agree that humans and machines both are category A or they're both not category A; there isn't any fundamental difference between the humans and the machines.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com. Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
Voices in AI – Episode 77: A Conversation with Nicholas Thompson

Welcome to the New Year. 2018 was a pivotal year for GigaOm, with significant growth in both our research and new media initiatives, and as we kick-start 2019 I’m excited to attend CES. There I will join nine notable authors who are invited to share their books as part of Gary’s Book Club at the CES event. Each year CTA president and CEO Gary Shapiro promotes what he considers the best technology-related books published in the previous year. This year’s authors include Mark Mueller-Eberstein and Phil Klein, Heidi Forbes Öste, Ph.D., Frances West, Gary Shapiro, Charlie Fink, Kristen Gallerneaux, Kate O’Neill, John Chambers and Diane Brady, Scott Brown and myself. Our collective works explore everything from blockchain, to digital mastery, to disruptive innovation, to AR and VR, to the future of technology and humanity, and much more. I am excited and honored that my latest work, The Fourth Age: Smart Robots, Conscious Computers and the Future of Humanity, is also among the titles featured at CES. Authors will start speaking at 10 am today, January 8, on the CTA Stage and will continue on the half-hour throughout the day and tomorrow, with a book signing following each discussion. I am especially looking forward to learning more about all the books featured this year. I hope you’ll join me. Tuesday, January 8: Mark Mueller-Eberstein & Phil Klein, CTA Stage: 10 AM. Blockchain technology is as fundamental as the Internet and the invention of governance and documentation of property. Learn the implications of being able to agree with certainty, of systems of trust where transactions are visible, and where people, organizations and governance can be more successful and maintain accountability. Heidi Forbes Öste, Ph.D., CTA Stage: 1 PM. Dr. Forbes Öste is a behavioral scientist, author of the best-selling Digital Self Mastery series and Executive Producer of the Evolving Digital Self podcast.
Her groundbreaking work provides a unique perspective on how to survive and thrive in the digital era, integrating behavioral science, wellbeing, and systems thinking. She is a passionate advocate for digital wellbeing as a key element for future-proofing human and organizational systems. Frances West: In this essential blueprint, Frances reveals how putting humans first—and building inclusion into business strategies, technological infrastructure, and organizational processes—can enable companies to bring principle, purpose, and profit into a state of harmonious alignment for sustainable talent acquisition, market expansion, and business differentiation. Byron Reese: “You don’t have to have all the answers — even the experts don’t agree. But The Fourth Age successfully forces us to face just how thorny those questions are… entertaining and engaging…” – The New York Times. Futurist and GigaOm CEO Byron Reese offers fascinating insight into AI, robotics, and their extraordinary implications for our species. Wednesday, January 9: Gary Shapiro. Booth Signings: Tuesday, January 8; Thursday, January 10; Friday, January 11. Gary Shapiro, president and CEO of the Consumer Technology Association, casts his eye toward the future, charting how the innovative technologies of today will transform not only the way business is done but society itself—and how we can use them to remain competitive in a rapidly evolving world. Charlie Fink: Charlie Fink, who covers XR for Forbes, brings thirty-five years of experience as an entertainment and technology executive to what he calls “the greatest business and technology story of our time.” Fink has created a guide to emerging VR & AR that is engaging to professionals, accessible to non-technical readers, and relentlessly entertaining to everyone. The book features original character animation, presented through a free app, “Fink Metaverse,” available in the App Store and Google Play.
Kristen Gallerneaux, CTA Stage: 1 PM. In High Static, Dead Lines, media historian and artist Kristen Gallerneaux weaves a literary mix tape that explores the entwined boundaries between sound, material culture, landscape, and esoteric belief. Essays and fictocritical interludes are arranged to evoke a network of ley lines for the “sonic spectre” to travel through—a hypothetical presence that manifests itself as an invisible layer of noise alongside the conventional histories of technological artifacts. Kate O’Neill, CTA Stage: 3:30 PM. O’Neill defines a new model of business leader — the “tech humanist” — one who develops honest assessments of organizational goals that move far beyond traditional P&L statements, and peers deeper into the consequences of everyday human experience design within our increasingly tech-driven culture. Thursday, January 10: John Chambers & Diane Brady, CTA Stage: 11:30 AM. In Connecting the Dots, former Chairman and CEO of Cisco John Chambers shares his unique strategies for winning in a digital world. From his early lessons and struggles with dyslexia in West Virginia to his bold bets and battles with some of the biggest names in tech, Chambers gives readers a playbook on how to act before the market shifts, tap customers for strategy, partner for growth, build teams, and disrupt themselves. Scott Brown, CTA Stage: 12:30 PM. Scott Brown is an 8x startup founder and coveted messaging coach to founders around the world. He has now released his framework that allows anyone to sell their idea to investors, customers, and the media using ©lean Messaging. About CES: CES® is the world’s gathering place for all who thrive on the business of consumer technologies. It has served as the proving ground for innovators and breakthrough technologies for 50 years – the global stage where next-generation innovations are introduced to the marketplace. As the largest hands-on event of its kind, CES features all aspects of the industry.
Owned and produced by the Consumer Technology Association (CTA)™, it attracts the world’s business leaders and pioneering thinkers. Check out CES video highlights. Follow CES online at CES.tech and on social. About the Consumer Technology Association: the Consumer Technology Association (CTA) is the trade association representing the $321 billion U.S. consumer technology industry, which supports more than 15 million U.S. jobs. More than 2,200 companies – 80 percent are small businesses and startups; others are among the world’s best-known brands – enjoy the benefits of CTA membership, including policy advocacy, market research, technical education, industry promotion, standards development and the fostering of business and strategic relationships. CTA also owns and produces CES® – the world’s gathering place for all who thrive on the business of consumer technologies. Profits from CES are reinvested into CTA’s industry services.

Gary’s Book Club Features 2018’s Best Tech Books on CTA Stage at CES 2019

Today’s leading minds talk AI with host Byron Reese. About this Episode: Episode 76 of Voices in AI features host Byron Reese and Rudy Rucker as they discuss the future of AGI and the metaphysics involved in AGI, and delve into whether the future will be for humanity’s good or ill. Rudy Rucker is a mathematician and a computer scientist, as well as a writer of fiction and nonfiction, with awards for the first two of the books in his Ware Tetralogy series. Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript. Transcript Excerpt. Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today my guest is Rudy Rucker. He is a mathematician, a computer scientist and a science fiction author. He has written books of fiction and nonfiction, and he’s probably best known for his novels in the Ware Tetralogy, which consists of Software, Wetware, Freeware and Realware. The first two of those won Philip K. Dick awards.
Welcome to the show, Rudy. Rudy Rucker: It’s nice to be here, Byron. This seems like a very interesting series you have, and I’m glad to hold forth on my thoughts about AI. Wonderful. I always like to start with my Rorschach question, which is: What is artificial intelligence? And why is it artificial? Well, a good working definition has always been the Turing test. If you have a device or program that can convince you that it’s a person, then that’s pretty close to being intelligent. So it has to master conversation? It can do everything else, it can paint the Mona Lisa, it could do a million other things, but if it can’t converse, it’s not AI? No, those other things are also a big part of it. You’d want it to be able to write a novel, ideally, or to develop scientific theories – to do the kinds of things that we do, in an interesting way. Well, let me try a different tack: what do you think intelligence is? I think intelligence is to have a sort of complex interplay with what’s happening around you. You don’t want the old cliche of the robotic voice or the screen with capital letters on it, not even able to use contractions: “do not help me.” You want something that’s flexible and playful in its intelligence. I mean, even in movies, when you look at the actors, you often will get a sense that this person is deeply unintelligent or this person has an interesting mind. It’s a richness of behavior, a sort of complexity that engages your imagination. And do you think it’s artificial? Is artificial intelligence actual intelligence, or is it something that can mimic intelligence and look like intelligence, but it doesn’t actually have any – there’s no one actually home? Right, well, I think the word artificial is misleading. You asked me before the interview about my being friends with Stephen Wolfram, and one of Wolfram’s points has been that any natural process can embody universal computation.
Once you have universal computation, it seems like in principle you might be able to get intelligent behavior emerging even if it’s not programmed. So then it’s not clear that there’s some bright line that separates human intelligence from the rest of the intelligence. I think when we say “artificial intelligence,” what we’re getting at is the idea that it would be something that we could bring into being, either by designing it or, probably more likely, by evolving it in a laboratory setting. So, on the Stephen Wolfram thread, his view is everything’s computation and that you can’t really say there’s much difference between a human brain and a hurricane, because what’s going on in there is essentially a giant clockwork running its program, and it’s all really computational equivalence; it’s all kind of the same in the end. Do you subscribe to that? Yeah, I’m a convert. I wouldn’t use the word “clockwork” that you use, because that already slips in an assumption that a computation is in some way clunky, with gears and teeth, because we can have things… But it’s deterministic, isn’t it? It’s deterministic, yes, so I guess in that sense it’s like clockwork. So Stephen believes – and you hate to paraphrase something as big as his view on science – but he believes that everything is, not a clockwork, I won’t use that word, but everything is deterministic. But even the most deterministic things, when you iterate them, become unpredictable. And they’re not unpredictable inherently, from a universal standpoint; they’re unpredictable because of how finite our minds are. They’re in practice unpredictable? Correct. So, take a lot of natural processes: when you take Physics I, you say, oh, I can predict, if I fire an artillery shot, where it’s going to land, because it’s going to travel along a perfect parabola, and I can just work it out on the back of an envelope in a few seconds.
And then when you get into reality, well, they don’t actually travel on perfect parabolas; they have this odd-shaped curve due to air friction, which isn’t linear – it depends how fast they’re going. And then you slip into saying, “Well, I really would have to simulate this.” And when you get to predicting something by simulating the process, the event itself is simulating itself already, and in practice the simulation is not going to run appreciably faster than just waiting for the event to unfold. That’s the catch. We can take a natural process, and it’s computational in the sense that it’s deterministic, so you think: well, cool, I’ll just find out the rule it’s using, and then I’ll use some math tricks and I’ll predict what it’s going to do. For most processes, it turns out there aren’t any quick shortcuts; that’s actually it. It was Alan Turing, way back, who proved that you can’t effectively get extreme speed-ups of universal processes. So then we’re stuck with saying: maybe it’s deterministic, but we can’t predict it. And going slightly off on a side thread here, this question of free will always comes up, because we say, “we’re not like deterministic processes, because nobody can predict what we do.” And the thing is, if you get a really good AI program that’s running at its top level, then you’re not going to be able to predict that either. So we kind of confuse free will with unpredictability, but actually unpredictability’s enough.
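The “deterministic but unpredictable in practice” point can be seen in a few lines with the logistic map, a standard chaotic-dynamics example (my choice of illustration, not one Rucker names): the rule is trivial and fully deterministic, yet nearby starting points diverge exponentially, so in practice the only way to know the outcome is to run the iteration itself.

```python
# The logistic map x -> r*x*(1-x) is a one-line deterministic rule.
# With r in the chaotic regime, two starting points that differ by a
# hair end up completely uncorrelated after a few dozen iterations.
def logistic(x, r=3.9, steps=60):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic(0.3)
b = logistic(0.3 + 1e-10)  # perturb the start by one ten-billionth
print(abs(a - b))  # large relative to the 1e-10 starting difference
```

Running the same start twice gives the same answer every time (it is deterministic), but no envelope-back shortcut predicts step 60 from step 0, which is exactly the gap between determinism and predictability described above.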
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com.
Voices in AI – Episode 76: A Conversation with Rudy Rucker

As a first take, the 2018 AWS Re:Invent conference held in Las Vegas seemed slicker, even though bigger, than in previous years. While conference sessions were far too distributed across multiple hotels to feed a coherent view, the big-barn expo exuded a feeling of knowing what it was about. Even the smallest vendors had stands which went beyond the lowest-common-denominator quick-assembly cube, suggesting either (a) the organisers had put more thought into it or (b) the vendors were better-established and (therefore) had more money. All in all, it felt less of a bun fight — more space between stands, less urgency to get from one place to another. It would be too much of an extrapolation to suggest this reflects the state of the cloud marketplace in general, and AWS in particular; however, it does serve as a useful backdrop upon which to paint a picture of an industry maturing beyond its “look at us, over here, we are different!” roots. From the sessions held for analysts, a couple of notably aligned moments stand out: the first involving use of the H-word, met with a smattering of laughter as an AWS representative spoke of embracing (my word) hybrid architectures and deploying (in the form of AWS Snowball Edge) capabilities inside the enterprise boundary. The second, also met with more of an accepting shrug than anything, was a presentation by Keith Jarrett, AWS’ worldwide lead on cloud economics, which accepted, nay endorsed, the fact that AWS’ cloud models wouldn’t always be the cheapest option for everything. Any thoughts of “ah-HA! Got you!” were almost immediately overtaken by, “Well, of course, how could it be?” — unless someone has also invented the perpetual motion machine or some other magical device. At the risk of repeating the obvious, there is no silver bullet/single solution/one-model-to-rule-them-all in technology; there never has been and never will be.
Keith went on to present a series of KPIs around value creation, rather than pure cost. So, with maturity comes the circumspection of understanding one’s place in the world, what one brings to the party, and therefore a level of differentiation based on competence, not capability: in a nutshell, it’s not about “use cloud” but “if you want to use cloud-based services, work with us, as we do it better than anyone else.” We saw this across the AWS portfolio, for example through the repeated theme of ‘frameworks’ — AWS has one for AI (as presented by Swami Sivasubramanian, VP, Amazon Machine Learning), one for IoT (thank you Dirk Didascalou, VP, AWS IoT), and one for more general cloud adoption (hat-tip Dave McCann, VP, AWS Marketplace and Todd Weatherby, VP, AWS Professional Services). It all makes sense — if the platform is (increasingly) a commodity, the differentiator becomes how it is used. We see this over and again: now that Kubernetes is (becoming) the de facto target for containerised applications, for example, to say “we do Kubernetes” is no longer interesting. Nor, for that matter, are the frameworks, from a business perspective — illustrated by the current trend away from DevOps being an end in itself and towards governance models and tooling such as Value Stream Management. What matters most is whether organisations can innovate and deliver faster, harness opportunities, deliver new customer experiences and generate business value more effectively with one provider or another. This is all good news for the enterprise, as the terminology and philosophical underpinnings of cloud computing increasingly align with the more traditional thinking pervading our largest organisations. Across the past ten years, it has been enough to ‘do’ cloud, or ‘do’ open source, in order to create competitive advantage: indeed, upstart organisations (the usual suspects of AirBnB, Uber, indeed Amazon et al) have built their businesses on the basis of rapid time-to-value.
Simply put, older companies, with all their meetings, legacy systems and indeed legacy thinking, have not been able to deliver as quickly as businesses without all that baggage. Indeed, they still can’t. But those old companies are still there, for a number of reasons. First, the new breed have largely tackled the customer-facing elements of business, but there’s only so much of that to go around. It is completely unsurprising that Amazon is opening (albeit automated) shops, and that Uber (together with Toyota) is investing in (driverless) car fleets: someone has to do the infrastructure stuff. Meanwhile, not all customer-oriented business can be done on an ad-hoc basis. Take healthcare, for example, which (thank goodness) has not thrown itself gaily into adopting the heck-why-not-throw-away-the-old-rules-and-see-what-happens business models of the platform economy. And indeed, while big old businesses are still big and old, and therefore unable to act quite so responsively as the youngsters, three things are happening: they are getting better at that whole innovation thing, or indeed learning how to align new models of innovation with their own approaches; the younger companies are having to learn that they can’t get away with avoiding complexity forever; and in parallel, as we have already seen, technology providers such as AWS are maturing to fit the evolving needs and capabilities of both sides. It’s not just the big players: at Re:Invent I was also able to talk to both organisations in Amazon’s partner ecosystem and their customers, notably in a conversation about both AWS and MongoDB with the maker of that quite popular game Fortnite. Where does this leave us? First, AWS is establishing itself not as a cloud player but as a technology provider, and rightly so, moving away from a false debate based on cost and towards one based on value.
Second, AWS recognises that it cannot go it alone, nor does it need to (a historical echo of Microsoft’s attempts to play the better-together card, which worked to an extent but could never be the whole answer). Third, this reflects a more general maturing of the industry’s relationship with business, as attention moves beyond the platform and towards how to get the most out of it in what is, frankly, a highly complex and constantly evolving world. Whatever happens, complexity of all types will continue to constrain our ability to maximise the value we can get from technology. While technological complexity may appear to be a Gordian knot, it is more a Hydra — cut off one head and many more grow back. Understanding this, and trying to tame and align complexity as a platform rather than looking to restrict and present one model above all, holds the key to unlocking future innovation for businesses of all sizes.

AWS Re:Invent 2018 Reflects An Industry Coming Of Age

Today’s leading minds talk AI with host Byron Reese. About this Episode: Episode 75 of Voices in AI features host Byron Reese and Kevin Kelly as they discuss the brain, the mind, what it takes to make AI, and Kevin’s thoughts on its inevitability. Kevin has written books such as New Rules for the New Economy, What Technology Wants, and The Inevitable. Kevin also started Wired magazine, an internet and print magazine of tech and culture. Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript. Transcript Excerpt. Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today I am so excited that we have as our guest Kevin Kelly. You know, when I was writing the biography for Kevin, I didn’t even know where to start or where to end. He’s perhaps best known for, a quarter of a century ago, starting Wired magazine, but that is just one of many, many things in an amazing career [path].
He has written a number of books: New Rules for the New Economy, What Technology Wants, and most recently The Inevitable, where he talks about the immediate future. I’m super excited to have him on the show. Welcome, Kevin. Kevin Kelly: It’s a real delight to be here, thanks for inviting me. So what is inevitable? There’s a hard version and a soft version, and I kind of adhere to the soft version. The hard version is kind of a totally deterministic world in which, if we rewound the tape of life, it all unfolds exactly as it has, and we still have Facebook and Twitter, and we have the same president and so forth. The soft version is to say that there are biases in the world, in biology as well as its extension into technology, and that these biases tend to shape some of the large forms that we see in the world, still leaving the particulars, the specifics, the species to be completely, inherently unpredictable and stochastic and random. So that would say that on any planet that has water and life, you’ll find fish; or, if you rewound the tape of life, you’d probably get flying animals again and again, but a specific bird, a robin, is not inevitable. And the same thing with technology. Any planet that discovers electricity and wires will have telephones. So telephones are inevitable, but the iPhone is not. And the internet’s inevitable, but Google’s not. AI’s inevitable, but the particular variety or character, the specific species of AI, is not. That’s what I mean by inevitable: that there are these biases, built in by the very nature of chemistry and physics, that will bend things in certain directions. And what are some examples of those that you discuss in your book? So, technology’s basically an extension of the same forces that drive life, and a kind of accelerated evolution is what technology is.
So if you ask the question about what are the larger forces in evolution, we have this movement towards complexity. We have a movement towards diversity; we have a movement towards specialization; we have a movement towards mutualism. Those are also happening in technology, which means that, all things being equal, technology will tend to become more and more complex. The idea that there’s any kind of simplification going on in technology is completely erroneous; there isn’t. It’s not that the iPhone is any simpler. There’s a simple interface. It’s like you have an egg: it’s a very simple interface, but inside it’s very complex. The inside of an iPhone continues to get more and more complicated, so there is a drive such that, all things being equal, technology will be more complex next year, and it will be more and more specialized. So, the history of technology in photography was: there was one kind of camera. Then there was a special kind of camera you could use for high speed; maybe there’s another kind of camera that could go underwater; maybe there was a kind that could do infrared; and then eventually we would make a high-speed, underwater, infrared camera. So, all these things become more and more specialized, and that’s also going to be true of AI: we will have more and more specialized varieties of AI. So let’s talk a little bit about [AI]. Normally the question I launch this with – and I heard your discourse on it – is: What is intelligence? And in what sense is AI artificial? Yes. So the big hairy challenge for that question is: we humans collectively as a species, at this point in time, have no idea what intelligence really is. We think we know it when we see it, but we don’t really, and as we try to make artificial synthetic versions of it, we are, again and again, coming up against the realization that we don’t really know how it works and what it is.
The best guess right now is that there are many different subtypes of cognition that collectively interact with each other and are codependent on each other, and together form the total output of our minds and of course other animal minds. So, I think the best way to think of this is that we have a “zoo” of different types of cognition, different types of solving things, of learning, of being smart, and that collection varies a little bit from person to person and a lot between different animals in the natural world, and so… That collection is still being mapped, and we know that there’s something like symbolic reasoning. We know that there’s a kind of deductive logic, that there’s something about spatial navigation as a kind of intelligence. We know that there’s mathematical-type thinking; we know that there’s emotional intelligence; we know that there’s perception; and so far, all the AI that we have been “wowed” by in the last five years is really a synthesis of only one of those types of cognition, which is perception. So all the deep learning neural net stuff that we’re doing is really just varieties of perception, of perceiving patterns, whether they’re audio patterns or image patterns; that’s really as far as we’ve gotten. But there are all these other types, and in fact we don’t even know what all the varieties of types [are]. We don’t know how we think, and I think one of the consequences of trying to make AI is that AI is going to be the microscope that we need to look into our minds to figure out how they work. So it’s not just that we’re creating artificial minds; it’s the fact that that creation, that process, is the scope that we’re going to use to discover what our minds are made of.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com.
Voices in AI – Episode 75: A Conversation with Kevin Kelly MongoDB came onto the scene alongside a number of data management technologies, all of which emerged on the basis of: "You don't need to use a relational database for that." Back in the day, SQL-based approaches became the only game in town, first due to the way they handled storage challenges, and then a bunch of open source developers came along and wrecked everything. So we are told. Having firmly established itself in the market and proved that it can deliver scale (Fortnite is a flagship customer), the company nonetheless needs to move with the times. Having spoken to Seong Park, VP of Product Marketing & Developer Advocacy, several times over the past 6 weeks, I thought it was worth capturing the essence of our conversations. Q1: How do you engage with developers in ways that are the same as, or different from, how you engage with data-oriented engineers? Traditionally these have been two separate groups, treated separately – is this how you see things? MongoDB began as the solution to a problem that was increasingly slowing down both developers and engineers: the old relational database simply wasn't cutting the mustard anymore. And that's hardly surprising, since the design is more than 40 years old. MongoDB's entire approach is about driving developer productivity, and we take an object-focused approach to databases. You don't think of data stored across tables; you think of storing information that's associated, and you keep it together. That's how our database works. We want to make sure that developers can build applications. That's why we focus on offering uncompromising user experiences. Our solution should be as easy, seamless, simple, effective and productive as possible. We are all about enabling developers to spend time on the things they care about: developing, coding and working with data in a fast, natural way.
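To make the "keep associated data together" idea concrete, here is a minimal sketch in plain Python, with invented field names and values, of how an order that a relational design would split across several tables (orders, order items, addresses) can live in a single document-style record:

```python
# A hypothetical "order" document. In a relational schema this data would
# be normalized across orders, order_items, and addresses tables and
# reassembled with joins; in a document model it is stored together.
order = {
    "_id": "ord-1001",           # invented identifiers, for illustration only
    "customer": "Ada Lovelace",
    "shipping_address": {"city": "Glasgow", "postcode": "G12 8QQ"},
    "items": [
        {"sku": "SKU-1", "qty": 2, "unit_price": 9.99},
        {"sku": "SKU-2", "qty": 1, "unit_price": 24.50},
    ],
}

# Everything needed to render or total the order is one lookup away.
total = sum(i["qty"] * i["unit_price"] for i in order["items"])
print(round(total, 2))  # 44.48
```

This mirrors how an application object maps onto a stored document, which is the productivity argument being made above: the shape of the data in the database matches the shape of the data in the code.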
When it comes to DevOps, a core tenet of the model is to create multi-disciplinary teams that can collectively work in small squads, to develop and iterate quickly on apps and microservices. Increasingly, data engineers are a part of that team, along with developers, operations staff, security, product managers, and business owners. We have built capabilities and tools to address all of those groups. For data engineers, we have in-database features such as the aggregation pipeline that can transform data before processing. We also have connectors that integrate MongoDB with other parts of the data estate – for example, from BI to advanced analytics and machine learning. Q2: Database structures such as MongoDB are an enabler of DevOps practices; at the same time, data governance can be a hindrance to speed and agility. How do you ensure you help speed things up, and not slow them down? Unlike other non-relational databases, MongoDB gives you a completely tunable schema – the skeleton representing the structure of the entire database. The benefit here is that the development phase is supported by a flexible and dynamic data model, and when the app goes into production, you can enforce schema governance to lock things down. The governance itself is also completely tunable, so you can set up your database to support your needs, rather than being constrained by structure. This is an important differentiator for MongoDB. Another major factor that reduces speed and agility is scale. Over the last two to three years, we have been building mature tooling that enterprises and operators alike will care about, because it makes it easy to manage and operate MongoDB, and to apply upgrades, patches and security fixes, even when you're talking about hundreds of thousands of clusters.
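The aggregation pipeline mentioned above chains stages (such as a match stage that filters documents and a group stage that accumulates values) over a collection. As a rough illustration of the concept only, not MongoDB's actual engine or API, the effect of two such stages can be mimicked in plain Python over toy, invented data:

```python
from collections import defaultdict

# Toy documents standing in for a collection (invented data).
docs = [
    {"region": "EU", "status": "shipped", "amount": 10},
    {"region": "EU", "status": "pending", "amount": 99},
    {"region": "US", "status": "shipped", "amount": 25},
    {"region": "EU", "status": "shipped", "amount": 5},
]

# Stage 1 -- filter, the role a $match stage plays:
matched = [d for d in docs if d["status"] == "shipped"]

# Stage 2 -- accumulate per key, the role a $group stage with a
# $sum accumulator plays:
totals = defaultdict(int)
for d in matched:
    totals[d["region"]] += d["amount"]

print(dict(totals))  # {'EU': 15, 'US': 25}
```

The point of running such stages inside the database, rather than in application code as here, is that the data is transformed before it ever reaches the application, which is what makes the pipeline useful to data engineers.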
One of the key reasons why we have seen such acceleration in the adoption of MongoDB, not only in the enterprise but also by startups and smaller businesses, is that we make it so easy to get started with MongoDB. We want to make it easy to get to market very quickly, while we're also focusing on driving down cost and boosting productivity. Our approach is to remove as much friction in the system as possible, and that's why we align so well with DevOps practices. In terms of legacy modernization, we are running a major initiative enabling customers to apply the latest innovations in development methodologies, architectural patterns, and technologies to refresh their portfolio of legacy applications. This is much more than just "lift and shift". Moving existing apps and databases to faster hardware, or on to the cloud, might get you slightly higher performance and marginally reduced cost, but you will fail to realize the transformational business agility, scale, or deployment freedom that true legacy modernization brings. In our experience, by modernizing with MongoDB organizations can build new business functionality 3-5x faster, scale to millions of users wherever they are on the planet, and cut costs by 70 percent or more, all by unshackling from legacy systems. Q3: Traditionally you're either a developer or a database person … does this do away with database engineers? Do we need database engineers, or can developers do everything? Developers are now the kingmakers; they are the hardest group of talent to retain. The biggest challenge most enterprises see is finding and keeping developer talent. If you are looking for the best experience in working with data, MongoDB is the answer in our opinion! It is not just about the persistence and the database… MongoDB Stitch, for example, is a serverless platform that drives integration with third-party cloud services and enables event-based programming through Stitch triggers.
Ultimately, it comes down to a data platform that any number of roles can use, in their "swim lanes". With the advent of cloud, it's so easy for customers not to have to worry about things they did before, since they consume a pay-as-you-go service. Maybe you don't need a DBA for a project any more: it's important to allow our users to consume MongoDB in the easiest way possible. But the bottom line is that we're not doing away with database engineers; we're shifting their role to focus on making a higher-value impact. For engineers we have capabilities and features like the aggregation pipeline, which can transform data before processing. Q4: IoT-related question … in retail, you want to put AI into the supermarket environment, whether for video surveillance or inventory management. It's not about distributing across the cloud but into the edge and "fog" computing… At our recent MongoDB Europe event in London, we announced the general availability of MongoDB Mobile as well as the beta for Stitch Mobile Sync. Since we already have a lot of customers on the network edge (you'll find MongoDB on oil rigs, across the IoT, used by airlines, and for the management of fleets of cars and trucks), a lot of these elements are already there. The advantage is how easy we make it to work with that edge data. We're thinking about the experience we provide in terms of working with data – giving people access to what they care about: tooling, integration, and what MongoDB can provide natively on a data platform. Q5: I'm interested to know what proportion of your customer base, and/or data/transaction base, are 'cloud native' versus more traditional enterprises. Indeed, is this how you segment your customers, and how do you engage with the different groups that you target? We'd argue that every business should become cloud native – and many traditional enterprises are on that journey.
Around 70 percent of all MongoDB deployments are on a private or public cloud platform, and from a product portfolio perspective, we work to cover the complete market – from startup programs to self-service cloud services, to corporate and enterprise sales teams. As a result, we can meet customers wherever they are, and whatever their size. My take: better ways exist, but how to preach to the non-converted? Much that we see around us in technology is shaped by the constraints of its time. Relational databases enabled a step up from the monolithic data structures of the 1970s (though of course, some of the latter are still running, quite successfully), in no small part by enabling more flexible data structures to exist. MongoDB took the same idea one step further, doing away with the fixed schema completely. Is the MongoDB model the right answer for everything? No, and that would never be the point – nor are relational models, nor any other data management structures (including the newer capabilities in MongoDB's stable). Given that data management vendors will continue to innovate, what matters more is choosing the right tool for the job, or indeed, being able to move from one model to another if need be. This is more about mindset, therefore. The traditional view of IT has been to use the same technologies and techniques, because they always worked before. Not only does this risk trying to put square pegs in round holes, but it can also mean missed opportunities if the definition of what is possible is constrained by what is understood. I would love to think none of this needs to be said, but in my experience large organisations still look backward more than they look forward, to their loss. We often talk about skills in data science, the shortage of developers and so on, but perhaps the greater gap is in senior executives who get the need for an engineering-first mindset. If we are all software companies now, we need to start acting accordingly.
Five Questions For… Seong Park at MongoDB While the notion of healthcare technology may be in the spotlight with AI, blockchain and all that, the coalface of care requires building an understanding of patient needs and responding in an appropriate way. Today, in many cases, even some of the most common conditions are subject to a dearth of information, or worse, misinformation that results in poor diagnosis and treatment. I learned this when working with a London hospital on care pathways for DVT; I was naturally interested in the work of Live UTI Free, which offers a clear information resource for patients, practitioners and indeed, researchers. Read on to learn from Melissa Kramer, founder, how not all technological innovations need to maximise the use of buzzwords or bandwagons, and what lessons can be learned across healthcare diagnostics and beyond. 1. Let's set some context — what's the purpose behind Live UTI Free? We founded Live UTI Free to address a gap in the sharing of evidence-based information to sufferers of recurrent and chronic urinary tract infection (UTI). To provide some context for why closing that gap is important, 1 in 2 females will experience a UTI in their lifetime, and of those, up to 44% will suffer a recurrence. With each recurrence, the chance of another increases. For many, recurrent UTI is debilitating, and the impact extends to the economy, with billions spent each year on UTI alone. Despite how common UTI is, there has never been an accurate, standard method of UTI testing. Although the impact of this issue is significant on many levels, UTI remains an area of women's health that suffers from steadfastly held misinformation on both sides of the patient/practitioner relationship. We aim to act as a conduit of information between researchers and patients, bridging gaps in knowledge where possible and shedding light on potential avenues for better diagnosis and treatment.
Ultimately, our goal is to use our insights to advance research and development in this space. 2. How do you go about collating and delivering information, or is it 'simply' that even the most straightforward info is difficult to find today? We created our platform because we identified how difficult it was for patients to find straightforward information online, and we wanted to fix this. In order to do so, we first had to collect information from patients themselves, to discover what it was they were looking for and how. We spent more than 6 months interviewing patients and learning about their online behaviour, before we put a single piece of information online. This activity alone meant we had collected more patient-perspective data on the subject than most recent studies. Once we understood the typical patient journey, and where the glitches were, we started to collate scientific evidence and to interpret it into everyday language. We do this with the help of researchers, but the process is hardly straightforward. If we relied on peer-reviewed studies alone, there would be little we could offer our audience in terms of new diagnosis and treatment options. Instead, we've developed our offering via a combination of studies, and direct input from practitioners, researchers and pharmacists. This requires a continuous loop of interviews, academic research, and amendments to the information we provide. And on top of that is another layer of patient feedback that directly shapes what we offer on our site. Long story short: straightforward info, particularly on health topics, is difficult to find. But once you do find it, you also have to make sure it's useful to whoever it's intended for. 3. What mechanisms do you have to do this, beyond the online site, and do you think your user-centric approach has been worthwhile? Aside from the patient interviews mentioned in the last question, we also launched a patient quiz at the same time as launching our site.
The quiz has served two purposes:
Beyond the online site, we have developed a network of scientists, practitioners and other medical professionals. We're also in regular contact with commercial companies that are working on products or services that address specific aspects of recurrent UTI. By maintaining a user-centric approach and fostering relationships with other key stakeholders, we hope to provide value that extends beyond problem-solving for individual patients. We have already begun to steer change for those in our network. 4. What challenges have you faced starting up Live UTI Free, and how have you overcome them? We are, and always have been, acutely aware of the position we hold in between patients and practitioners, and the information that connects the two. Our primary concern revolves around how to achieve our goals while adhering to the ethical standards we've placed upon ourselves. This in itself is a challenge. We look at everything we do through this ethics lens. We question how any potential partnership or revenue opportunity fits within our own ethical guidelines, and we carefully consider data privacy when it comes to our patient quiz, interviews and correspondence we receive. To help overcome this challenge we've put in place a funding policy and community guidelines, as well as implementing an ethics advisory board to help with these decisions. A further challenge has been navigating the line between neutral accuracy and providing information that is actionable for our audience. We don't provide recommendations of any kind, but we know through our research that patients want a workflow, rather than a 'choose your own adventure'. We've partially overcome this by constructing our content in such a way that the user is guided through a logical sequence. The rest is a work in progress, as the scientific study required to truly point someone towards action steps for recurrent UTI is still in the future. When it exists, we'll be ready to relay the information to our audience. 5.
How do you see things moving into the future? The data we have collected via our patient quiz is one of a kind, and we're now starting to use these insights to help guide product R&D for this patient population. We are currently assessing grant opportunities, in collaboration with researchers, with a focus on patient-perspective data. Our reach means we will make a valuable partner in larger research studies and clinical trials, and we're open to discussion in this regard. We plan to launch an evidence-based ecommerce site next year, to bring our many user requests for this to fruition. Live UTI Free will continue as a user-centric patient advocacy organisation, existing to support our fast-growing community, which includes sufferers of chronic and recurrent UTI, practitioners, and researchers. Readers can get in touch if interested in:
Five questions for: Melissa Kramer of Live UTI Free