Today's leading minds talk AI with host Byron Reese

About this Episode
Episode 81 of Voices in AI features host Byron Reese and Siraj Raval discussing how teaching AI to the world can help improve the quality of life for everyone, and what the pitfalls along the way are. Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt
Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today my guest is Siraj Raval. He is the director of the School of AI. He holds a degree in computer science from Columbia University. Welcome to the show, Siraj.

Siraj Raval: Thank you so much for having me, Byron.

I always like to start off with just definitions. What is artificial intelligence, and specifically, what's artificial about it?

That's a great question. So, AI, artificial intelligence, is actually... I like to think of it as a giant circle. I'm a very visual person, so just imagine a giant circle and we'll label that circle AI, okay? Inside of the circle there are smaller circles, and these are the subfields of AI. One of them would be heuristics. These are statistical techniques to try to play games a little better. When Garry Kasparov was defeated by Deep Blue, that was using heuristics. There's another bubble inside of this bigger AI bubble called machine learning, and that's really the hottest area of AI right now; that's all about learning from data. So there's heuristics, there's learning from data - which is machine learning - and there is deep learning as well, which is a smaller bubble inside of machine learning. So AI is a very broad term, and people in computer science are always arguing about what is AI and what isn't AI. But for me, I like to keep it simple. I think of AI as any kind of machine that mimics human intelligence in some way.

Well, hold on a minute though. You can't say artificial intelligence is a machine that mimics human intelligence, because you're just defining the word with what we're trying to get at. So what's intelligence?

That's a great question. Intelligence is the ability to learn and apply knowledge. And we have a lot of it. Well, some of us anyway (just kidding).

That's interesting, because with AlphaGo the emphasis on it being able to learn is a pretty high bar. Something like my cat food dish that refills itself when the cat eats all the food - that isn't intelligent in your book, right? It's not learning anything new. Is that true?

Yeah. So it's not learning. There has to be some kind of feedback, some kind of response to stimulus, whether that's from data or whether that's a statistical technique based on the number of wins versus losses: did this work, did this not work? It's got to have this feedback loop where something external to it affects it, in the same way that we perceive the world: something external to our heads affects how we act in the world.

So [take] the smartest program in the world. Once it's instantiated as a single program, it is no longer intelligent. Is that true? Because it stopped learning at that point. It can be as sophisticated as can be, but in your mind, if it's not learning something new it's not intelligent.

That's a good question. Well, I mean, the point at which it would not need to learn, or there would be nothing for it to learn, would be the point at which, to get 'out there,' it saturates the entire universe.

Well, no. I mean, like, let's take AlphaGo.
Let's say they decide, let's put out an iPhone version of Go, and let's just take the latest and greatest version of this. Let's make a great program that plays Go. At that point it is no longer AI, if we rigidly follow your definition, because it stopped learning; it's now frozen in capability. Yeah, I can play it a thousand times, and in game 1001 it's not doing any better.

Sure. Okay, but to stick to my rigid definition, I've said that intelligence is the ability to learn and apply knowledge.

Right. And it would still be doing the latter part.

Do you think that it's artificial in that it isn't really intelligence, it just looks like it? Is what a computer does actually intelligent, or is it mimicking intelligence? Or is there a difference between those two things?

There are different kinds of intelligences in the world. I mean, think of it like a symphony of intelligences. Our intelligence is really good at doing a huge range of tasks, but a dog has a certain type of intelligence that keeps it more aware of some things than we would be, right? Dogs have superhuman hearing capability. So in that way a dog is more intelligent than us for that specific task. So when we say 'artificial intelligence,' you know, talking about the AlphaGo example, that algorithm is better than any human on the planet for that specific task. It's a different kind of intelligence. 'Foreign,' 'alien,' 'artificial' - you know, all of those words would kind of describe its capability.

You're the Director of the School of AI. What is that? Tell me the mission and what you're doing.

Sure. So I've been making educational videos about AI on YouTube for the past couple of years, and I had the idea, about nine months ago, to have this call to action for people who watch my videos. And I had this idea of saying, 'Let's start an initiative where I'm not the only one teaching but there are other people, and we'll call ourselves The School of AI, and we have one mission, which is to teach people how to use AI technology for the betterment of humanity, for free.' And so we're a non-profit initiative. And since then, we have what are called 'deans.' There are 800 of them spread out across the world, across 400 cities globally. And they're teaching people in their local communities, from Harare, Zimbabwe to Zurich to parts of South America. It's a global community. They're building their local schools, Schools of AI - you know, School of AI Barcelona, what have you - and it's been an amazing, amazing couple of months. It feels like every day I wake up, I look in our Slack channel, I see a picture of a bunch of students in, say, Mexico City, at our school there, with our logo, and it's like, 'Is this real?' But it is real. Yeah, it's been a lot of fun so far.

Put some flesh on those bones. What does it mean to learn... what are people learning to do?

Right. So the guideline that we're following - we're talking about the betterment of humanity - is the 17 Sustainable Development Goals (SDGs) outlined by the United Nations. One of them would be no extreme poverty; another, sustainable action on the climate; things like that. Basically trying to fulfill the basic needs of humans, both in developed and developing countries, so that eventually we can all reach that stage of self-actualization and be able to contribute and create and discover, which is what I think we humans are best at - not doing trivial, laborious, repetitive tasks. That's what machines are good for.
So if we can teach our students - we call them 'wizards' - if we can teach our wizards how to use a technology to automate all of that away, then we can get to a world where all of us are contributing to the betterment and the progress of our species, whether it's in science or art, etcetera.

But specifically, what are people learning to do, like on a day-to-day basis?

One example would be classifying images, and that's a very generic example, but we can use that example to, say, help farmers in parts of South Africa detect plants that are diseased or not diseased. Another example would be anomaly detection - kind of finding the needle in the haystack. What here doesn't fit in with the rest? And that can be applied to fraud detection, right? If you've got thousands and thousands of transactions, and one of them is a fraud, an AI can learn what fraud is better than any human could, because there's just so much data. That's just two; I can give you some more.

There's quite a lot, but I think that... No, but I mean... so it's the idea that there just aren't enough people that have the basic skills to 'do AI,' and you're trying to fill that gap?

That is what it is. And yeah, in that the concepts behind this technology - the mathematical concepts - I don't believe are accessible yet to a wide enough audience. So we at School of AI are trying to broaden that audience and make it accessible not just to developers but eventually to everybody. You know, moms, dads, grandmas, grandpas, people who are just not the most technical people - we're trying to reach them and make this something that everybody does, because we sincerely believe that this is going to be a part of our lives and eventually everybody is going to be implementing AI in some way or another. It doesn't necessarily have to be code. It can be through some application or some kind of 'drag and drop' interface, but it's definitely in the future of work. So yes, that's what it is. And also it's the fact that we are facing so many huge problems, daunting problems, as a species - existential threats. And we think we might not be good enough alone to solve these problems. Climate change, for example: a lot of people think that it's too late to solve climate change, but we have a huge amount of data available, and we think that the answers to some of the hardest problems related to CO2 emissions, and how we can allocate resources toward that goal, lie hidden in that data - and using AI we can find them.
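(As an editorial aside: the fraud-style anomaly detection Siraj describes is easy to sketch. The following is a minimal illustration, not from the episode, using scikit-learn's IsolationForest on synthetic transaction data; the features, figures, and contamination rate are all made-up assumptions.)

```python
# A minimal sketch of fraud-style anomaly detection: fit a model on
# transaction features and flag the points that don't fit in with the rest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "transactions": amount and hour-of-day for normal activity...
normal = np.column_stack([rng.normal(50, 15, 5000), rng.normal(14, 3, 5000)])
# ...plus a handful of anomalous ones (huge amounts at odd hours).
fraud = np.column_stack([rng.normal(5000, 500, 5), rng.normal(3, 1, 5)])
X = np.vstack([normal, fraud])

# contamination is the expected fraction of anomalies - a guess in practice.
model = IsolationForest(contamination=0.001, random_state=0).fit(X)
flags = model.predict(X)  # +1 = looks normal, -1 = anomaly
print(f"flagged {np.sum(flags == -1)} of {len(X)} transactions")
```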
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI - Episode 81: A Conversation with Siraj Raval
HCI is very popular with organizations of all sizes now, but it's not perfect! A compromise must be found between performance, flexibility, usability, and cost. This applies to most types of infrastructure, hyper-converged infrastructure (HCI) included. What's more, it is quite hard to find a single storage infrastructure that covers all the use cases that need to be addressed. HCI is a good solution for small organizations, but the architecture imposes some trade-offs, limiting its potential in larger enterprises where there is a sizeable and established environment to support. Put this way, HCI is the perfect example of the 80/20 rule, whereby 80% of existing workloads are predictably dynamic and a good fit for HCI. The problem is the remaining 20% of your infrastructure, which is expected to grow exponentially with Artificial Intelligence/Machine Learning (AI/ML), Internet of Things (IoT), edge computing projects and more - all of which organizations are evaluating now, and all of which will impact business competitiveness in the coming years.

THE GOOD OF HCI
User-friendliness is what most organizations like about HCI. There is no need to change infrastructure operations: the team can keep using the hypervisor it is accustomed to, with fewer complications around storage management. The infrastructure is simplified thanks to the modular scale-out approach, in which each new node adds more CPU, RAM, and storage. As a result, HCI delivers good TCO figures as well.

THE BAD OF HCI
The limitations of HCI arise from exactly what makes it a good solution for ordinary workloads and virtualization. In fact, it's not really designed to cover all types of workloads (think about big data, for example). In addition, not all applications scale the same way, meaning that your infrastructure sometimes needs different types of resources (e.g. storage-only nodes for capacity). Last but not least, most HCI products on the market focus on edge or core use cases, but are not able to cover both concurrently or efficiently at a reasonable cost. These limitations might be of secondary importance today, but in the long term that could radically change with new, unforeseen performance, capacity, and technology requirements.

THE UGLY OF HCI
The initial investment to adopt HCI is often pretty high, especially if storage and server amortization cycles are different. Since adopting HCI means purchasing all the servers, storage, and networking together, some organizations are forced to start with individual workloads, or to turn to financing options to purchase the entire infrastructure at once. This is merely a financial issue, but I've seen it happen several times with customers of all sizes trying to manage their budget wisely. It can delay and slow down HCI adoption and result in a long transition period that benefits no one.

CLOSING THE CIRCLE
The perfect infrastructure that excels in every aspect does not yet exist. HCI is a good compromise for a lot of workloads but fails with the most demanding ones - and what are usually considered strengths can quickly become weaknesses. Recently, I had the chance to be briefed by DataCore on their HCI solution. I really like their approach, and I think it could address some of the issues I discuss in this blog. I'll be hosting a webinar with them in a couple of weeks and we will be talking about how to deploy hybrid-converged infrastructures. Yes, you read it right: not hyper- but Hybrid-converged, and if you want to learn more about it, sign up and join us.
I’m interested in your opinions and will make the webinar as interactive as possible, with quick polls and questions that you’ll be invited to ask the presenters. Originally posted on Juku.it

The Good, The Bad, and The Ugly of Hyper-Converged Infrastructure (HCI)

Customer experience, or CX, is one of those areas that makes you wonder why it’s being discussed: after all, which organisation would go out of its way to say that customers were not a priority? Nonetheless, talking about customers can be very different to actually improving how they interact with the business, not least because the link between theory and technical practicality will not always be evident. In the case of connectivity, the task is even harder. In principle there should be a connection: if you (as a customer) can’t connect to the service you need, or if it is slow or unresponsive, your experience will suffer. In practice, however, connectivity is often seen as low-level infrastructure, with little value to add beyond linking things up. These challenges made our research on the link between connectivity and CX, conducted in partnership with Colt, all the more fascinating. The top-line finding was that organizations did see a link and, furthermore, were actively looking for ways to improve CX via connectivity. Following the research, I sat down with Keri Gilder, Chief Commercial Officer of Colt Technology Services, to find out what she thought of the findings, and what the provider was doing in response.
Customers in all sectors are demanding much more from their providers – the consumerisation of IT isn’t a new trend but it’s still highly relevant. People look at the flexibility and service they get from consumer-facing companies and are asking why that doesn’t apply to their B2B suppliers. Many telco companies have been slow to adapt to these demands, so the result is that connectivity can be treated as a commodity rather than a differentiator. Our customers are dealing with massive change, from the growth in cloud applications and the changing structure of the workplace, to security challenges and the constant state of digital transformation. This means the network becomes even more critical for those with a focus on delivering the best experience to customers. When customers are dealing with these challenges it’s not good enough to sit back and wait for them to tell us what they need – we need to work together to help shape requirements, acting as advisors instead of just suppliers.
A ‘good’ customer experience can mean different things to different people and sectors, so it’s not a surprise to see people struggling to identify the best course of action. To some degree it’s the obvious things that people expect – delivering quickly and on time, while ensuring they have access to the information they need. But for suppliers it’s also about putting yourself in the customer’s shoes: what challenges are they facing, and what are their customers demanding of them? From there it’s easier to see how to make a difference to their business and, in turn, how you can improve their experience of working with you.
Our customers have always expected connectivity that just works – the challenge we’re seeing now is that it’s much harder to predict network demand for the coming years or even months. CIOs are having to manage capacity requirements for applications or activities that might not even be on their radar and that’s driving a need for flexibility. This shows how connectivity can directly impact customer experience goals – if the network can’t manage these new services or if it doesn’t have the ability to quickly add new locations or services then it’ll be seen as a barrier, rather than a platform for innovation.
We closely track our NPS score – it’s an excellent way for us to measure ourselves as it covers so many aspects of what we provide to customers. But we know it isn’t and shouldn’t be the only measure of good customer experience. It’s also about the other factors identified in the research, like delivering on time and how you respond if something goes wrong. If you don’t deliver on promises, meet expectations or go above and beyond to keep the customer happy, then you won’t score highly. I don’t think it was a surprise to see that people don’t use NPS as a way to measure their suppliers, but if suppliers are getting everything else right, then their NPS score will naturally improve.
We’ve always been focussed on customer experience, and our vision is to be known as the most customer-oriented business in the industry. This means that we need to do much more than provide connectivity to our customers. Whether that’s Enterprise, Capital Markets or Wholesale, it’s about working in partnership with our customers to find out what their goals are and then collaborating to show how we can help achieve them. A crucial part of achieving this comes from listening to our customers and taking the time to understand the challenges they’re facing; one way in which we do this is through Innovation Workshops. These take place in the early stages of an engagement, bringing together multiple stakeholders with Colt experts to fully understand the broader business problems and how we can use technology to solve them. This means we’re providing more than just technology – we’re helping customers with their business objectives. The other aspect is in leading from the front – everyone at Colt has a performance objective relating to customer experience. We also have several internal programs running which don’t just superficially look at customer experience, but are seeing the business invest in new tools and create new processes to ensure we’re going above and beyond what people expect from a connectivity supplier.
Five questions for… Keri Gilder, Chief Commercial Officer, Colt Technology Services. Can Connectivity be linked to Customer Experience?

In my day-to-day job, I talk to a lot of end users. And when it comes to the cloud, there are still many differences between Europe and the US. The European cloud market is much more fragmented than the American one for several reasons, including the slightly different regulations in each country. Cloud adoption is slower in Europe, and many organizations still like to keep data and infrastructure on their premises. The European approach is quite pragmatic, and many enterprises take advantage of the experience gained by similar organizations on the other side of the pond. One similarity is cloud storage or, better, cloud storage costs and the reactions to them. That data is growing everywhere at an incredible pace, often faster than predicted, is nothing new. At first glance, an all-in cloud strategy looks very compelling: low $/GB, less CAPEX and more OPEX, increased agility and more... until, of course, your cloud bill starts growing out of control. As I wrote in one of my latest reports, “Alternatives to Amazon AWS S3,” the $/GB is only the first item on the bill; there are several others, including egress fees, that come after it - an aspect that is often overlooked at the beginning and has unpleasant consequences later. A cloud storage bill can get out of control for more than one reason, so what can you do about it?
OPTIMIZATION
Start by optimizing the cloud storage infrastructure. Many providers are adding additional storage tiers and automations to help with this. In some cases this adds complexity (someone must manage the new policies and ensure they work properly): not a big deal, but probably not a huge saving either. Also, try to optimize the application. That is not always easy, especially if you don't have control over the code and the application wasn't written with the intent to run in a cloud environment. Still, this could pay off in the mid to long term - but are you ready to invest in this direction?

BRING DATA BACK...
A common solution, adopted by a significant number of organizations now, is data repatriation: bringing data back on premises (or to a colocation service provider) and accessing it locally or from the cloud. Why not? At the end of the day, the bigger the infrastructure, the lower the $/GB and, above all, there are no other fees to worry about. When thinking about petabytes, there are several techniques that can lower the $/GB considerably: fat nodes with plenty of disks, multiple media tiers for performance and cold data, data footprint optimizations, and so on, all translating into low and predictable costs. At the same time, if this is not enough, or you want to keep a balance between CAPEX and OPEX, go hybrid. Most storage systems on the market now allow you to tier data to S3-compatible storage systems, and I'm not talking only about object stores - NAS and block storage systems can do the same. I covered this topic extensively in this report, but check with your storage vendor of choice; I'm sure they'll have solutions to help out with this.

...OR GO MULTI-CLOUD
Another option, one that doesn't negate what is written above, is to implement a multi-cloud storage strategy. Instead of focusing on a single cloud storage provider, abstract the access layer and pick what is best depending on the application, the workloads, the cost, and so on, all determined by the needs of the moment. Multi-cloud data controllers are gaining momentum, with big vendors starting to make the first acquisitions (Red Hat with NooBaa, for example), and the number of solutions is growing at a steady pace. In practice, these products offer a standard front-end interface, usually S3-compatible, and can distribute data across several back-end repositories following user-defined policies. This leaves the end user with a lot of freedom of choice and flexibility regarding where to put (or migrate) data, while allowing transparent access to it regardless of where it's stored. Last week, for example, I met with Leonovus, which has a compelling solution that combines what I just described with a strong set of security features. There are several alternatives to the major service providers when it comes to cloud storage: some of them focus on better pricing and lower or no egress fees, while others work on high performance too. As I wrote last week in another blog, going all-in with a single service provider could be an easy choice at the beginning but a huge risk in the long term.

CLOSING THE CIRCLE
Data storage is expensive, and cloud storage is no exception. Those who think they will save money by just moving all of their data to the cloud as-is are making a big mistake. For example, cold data is a perfect fit for the cloud thanks to its low $/GB, but as soon as you begin accessing it over and over again, the costs can rise to an unsustainable level. The back-of-the-envelope sketch below shows how quickly access and egress fees can overtake the base storage price.
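The rates in this sketch are illustrative assumptions, loosely in the range of typical public cloud list prices; they are not any provider's actual price list.

```python
# Back-of-the-envelope cloud storage bill: the per-GB price is only the
# first line item. All rate figures below are illustrative assumptions.
STORAGE_PER_GB = 0.023   # $/GB-month, standard tier (assumed)
EGRESS_PER_GB = 0.09     # $/GB transferred out (assumed)
GET_PER_10K = 0.004      # $ per 10,000 GET requests (assumed)

def monthly_bill(stored_gb, egress_gb, get_requests):
    storage = stored_gb * STORAGE_PER_GB
    egress = egress_gb * EGRESS_PER_GB
    requests = get_requests / 10_000 * GET_PER_10K
    return storage, egress, requests

# 100 TB stored; a quarter of it read back out each month.
storage, egress, requests = monthly_bill(100_000, 25_000, 50_000_000)
print(f"storage ${storage:,.0f} + egress ${egress:,.0f} "
      f"+ requests ${requests:,.0f}")
# Egress alone adds ~$2,250 to a ~$2,300 storage bill - nearly doubling
# it - which is how a 'cheap' $/GB gets out of control.
```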
To avoid dealing with this problem later, it's best to think about the right strategy now. Planning and executing the right hybrid or multi-cloud strategy can surely help to keep costs under control while providing the agility and flexibility needed to preserve the competitiveness of your IT infrastructure and, therefore, of your business. To learn more about multi-cloud data controllers, alternatives to AWS S3, and two-tier storage strategies, please check my reports on GigaOm. And subscribe to the Voices in Data Storage podcast to hear the latest news, market and technology trends, opinions, interviews and other stories from the data and data storage field.

Cloud Storage Is Expensive? Are You Doing it Right?

How can businesses use connectivity to drive improved CX for their customers? GigaOm asked 350+ strategic enterprise decision-makers from North America and Europe to share their experiences. Check out the infographic below and then read the full Research Byte here.

GigaOm Infographic: Connectivity and Customer Experience

Today's leading minds talk AI with host Byron Reese

About this Episode
Episode 80 of Voices in AI features host Byron Reese and Charlie Burgoyne discussing the difficulty of defining AI and how computer intelligence and human intelligence intersect and differ. Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I'm Byron Reese. Today my guest is Charlie Burgoyne. He is the founder and CEO of Valkyrie Intelligence, a consulting firm with domain expertise in applied science and strategy. He's also a general partner for Valkyrie Signals, an AI-driven hedge fund based in Austin, as well as the managing partner for Valkyrie Labs, an AI credit company. Charlie holds a master's degree in theoretical physics from Georgetown University and a bachelor's in nuclear physics from George Washington University. I had the occasion to meet Charlie when we shared a stage talking about AI, and about 30 seconds into my conversation with him I said, we've got to get this guy on the show. And so, I think, 'strap in': it should be a fun episode. Welcome to the show, Charlie.

Charlie Burgoyne: Thanks so much, Byron, for having me. Excited to talk to you today.

Let's start with [this]: maybe re-enact a little bit of our conversation when we first met. Tell me how you think of artificial intelligence. Like, what is it? What is artificial about it and what is intelligent about it?

Sure. So, the further I get down in this field, I start thinking about AI with two different definitions. It's a servant with two masters. It has its private-sector, applied, narrow-band applications, where AI is really all about understanding patterns that we perform and that we capitalize on every day, and automating those - things like approving time cards and making selections within a retail environment. And that's really where the real value of AI is right now in the market, and [there's] a lot of people in that space who are developing really cool algorithms that capitalize on the potential patterns that exist, and largely lay dormant, in data. In that definition, intelligence is really about the cycles that we use within a cognitive capability to instrument our life, and it's artificial in that we don't need an organic brain to do it.
Now, the AI that I'm obsessed with from a research standpoint (a lot of academics are, and I know you are as well, Byron) - that AI definition is actually much more around the nature of intelligence itself, because in order to artificially create something, we must first understand it in its primitive state, in its unadulterated state. And I think that's where the bulk of the really fascinating research in this domain is going: just understanding what intelligence is, in and of itself.

Now I'll come kind of straight to the interesting part of this conversation, which is: I've had not quite a hundred guests on the show. I can count on one hand the number who think it may not be possible to build a general intelligence. According to our conversation, you are not convinced that we can do it. Is that true? And if so, why?

Yes... The short answer is I am not convinced we can create a generalized intelligence, and that's become more and more solidified the deeper and deeper I go into research and familiarity with the field. If you really unpack intelligent decision making, it's actually much more complicated than a simple collection of gates, a simple collection of empirically driven singular decisions, right? A lot of the neural network scientists would have us believe that all decisions are really the right permutation of weighted neurons interacting with other layers of weighted neurons. From what I've been able to tell so far with our research, either that is not getting us towards the goal of creating a truly intelligent entity, or it's doing the best within the confines of the mechanics we have at our disposal now. In other words, I'm not sure whether the lack of progress towards a true generalized intelligence is due to the fact that (a) the digital environment in which we have tried to create said artificial intelligence is unamenable to that objective, or (b) the nuances that are inherent to intelligence... I'm not positive yet that those are things we have an understanding of how to model, nor that we would ever be able to create a way of modeling them.

I'll give you a quick example. Think of any science fiction movie that encapsulates the nature of what AI will eventually be, whether it's Her, or Ex Machina, or Skynet, or you name it. There are a couple of big leaps that get glossed over in all science fiction literature and film, and those leaps are really around things like motivation. What motivates an AI - what truly, at its core, motivates an AI like the one in Ex Machina to leave her creator and to enter into the world and explore? How is that intelligence derived from innate creativity? How are they designing things? How are they thinking about drawings, and how are they identifying clothing that they need to put on? All these different nuances are intelligently derived from that behavior. We really don't have a good understanding of that, and we're not really making progress towards an understanding of that, because we've been distracted for the last 20 years with research in fields of computer science that aren't really that closely related to understanding those core drivers.

So when you say a sentence like 'I don't know if we'll ever be able to make a general intelligence,' ever is a long time. So do you mean that literally? Tell me a scenario in which it is literally impossible - like it can't be done, even if you came across a genie that could grant your wish. It just can't be done. Like maybe time travel, you know - back in time, it just may not be possible.
Do you mean it 'may not' be possible? Or do you just mean on a time horizon that is meaningful to humans?

I think it's on the spectrum between the two, but I think it leans closer towards 'not ever possible under any condition.' I was at a conference recently and I made this claim - which, admittedly, like any claim on this particular question, is based off of intuition and experience, which are totally fungible assets. But I made this claim that I didn't think it was ever possible, and someone in the audience asked me, 'Well, have you considered meditating to create a synthetic AI?' And the audience laughed, and I stopped and I said: 'You know, that's actually not the worst idea I've been exposed to.' That's not the worst potential solution for understanding intelligence: to try and reverse engineer my own brain with as few distractions from its normal working mechanics as possible. That may very easily be a credible aid to understanding how the brain works.

If we think about gravity, gravity is not a bad analog. Gravity is this force that everybody and their mother who's older than, you know, who's past fifth grade understands how it works: you drop an apple, you know which direction it's going to go. Not only that, but as you get experienced you can have a prediction of how fast it will fall, right? If you were to see a simulation drop an apple and it takes twelve seconds to hit the ground, you'd know that that was wrong - even if the rest of the vector was correct, the scalar is off a little bit. Right? The reality is that we can't create an artificial gravity environment, right? We can create forces that simulate gravity - centrifugal force is not a bad way of replicating gravity - but we don't actually know enough about the underlying mechanics that guide gravity such that we could create an artificial gravity using the same techniques, relatively the same mechanics, that are used in organic gravity. In fact, it was only a year and a half ago or so - closer to two years now - that the Nobel Prize in Physics was awarded to the individuals who showed that it is gravitational waves that propagate gravity (not gravitons), putting to rest an argument that has been going on since Einstein, truly.

So I guess my point is that we haven't really made progress in understanding the underlying mechanics, and every step we've taken has proven to be extremely valuable in the industrial sector but has actually opened up more and more unknowns in the actual inner workings of intelligence. If I had to bet today, not only is the time horizon on a true artificial intelligence extremely long-tailed, but I actually think it's not impossible that it's completely impossible altogether.
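(As an editorial aside: Charlie's apple test is easy to make concrete. Here is a minimal sketch, not from the episode, assuming drag-free kinematics near Earth's surface: experience lets an observer sanity-check a simulated fall even without deriving gravity from first principles.)

```python
# A toy version of the apple test: we can't derive gravity from first
# principles, but experience lets us sanity-check a simulation of it.
import math

g = 9.81  # m/s^2, near Earth's surface

def fall_time(height_m):
    """Time for an object to free-fall from rest, ignoring air drag."""
    return math.sqrt(2 * height_m / g)

print(f"apple from 2 m: {fall_time(2.0):.2f} s")  # ~0.64 s
# A 12-second fall would require starting ~706 m up - so a simulation
# showing 12 s from shoulder height is obviously wrong.
print(f"height for a 12 s fall: {0.5 * g * 12**2:.0f} m")
```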
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI - Episode 80: A Conversation with Charlie Burgoyne

Last year, at re:Invent, Amazon AWS launched Outposts and finally validated the concept of hybrid cloud. Not that it was really necessary, but still... At the same time, what was once defined as a cloud-first strategy (the idea of starting every new initiative in the cloud, often with a single service provider) is today evolving into a multi-cloud strategy. This new strategy is based on a broad spectrum of possibilities that range from deployments on public clouds to on-premises infrastructures. Purchasing everything from a single service provider is very easy and solves numerous issues but, in the end, it means accepting a lock-in that doesn't pay off in the long run. Last month I was speaking with the IT director of a large manufacturing company in Italy who described how, over the last few years, his company had enthusiastically embraced one of the major cloud providers for almost every critical company project. He reported that the strategy had resulted in an IT budget that was out of control, even when taking into account new initiatives like IoT projects. The company's main goal for 2019 is to find a way to regain control by repatriating some applications, building a multi-cloud strategy, and avoiding past mistakes like going "all in" with a single provider.

There Is Multi-Cloud and Multi-Cloud
My recommendation to them was not to merely select a different provider for every project but to work on a solution that would abstract applications and services from the infrastructure.
This means you can buy a service from a provider, but you can also decide to go for raw compute power and storage and build your own service instead. That service will be optimized for your needs and will be easy to replicate and migrate across different clouds. Let's take an example. You can buy access to a NoSQL database from your provider of choice, or you can build your own NoSQL DB service starting from products available in the market. The former is easier to manage, whereas the latter is more flexible and less expensive. Containers and Kubernetes can make it easier to deploy, manage and migrate from cloud to cloud. Kubernetes is now available from all major providers in various forms. The core is the same, and it is pretty easy to migrate from one platform to another. And once you are on containers, you'll find loads of ready-made images, and others can be prepared for every need.

Multi-Cloud Storage
Storage, as always, is a little bit more complicated than compute. Data has gravity and, as such, is difficult to move; but there are a few tools that come in handy when you plan for multi-cloud. Block storage is the easiest to move. It is usually smaller in size, and there are now several tools that can help protect, manage and migrate it, both at the application and infrastructure levels. There are plenty of solutions. In fact, almost every vendor now offers a virtual version of its storage appliances that runs in the cloud, as well as other tools to facilitate migration between clouds and on-premises infrastructures. Think about Pure Storage or NetApp, just to name a couple. It's even easier at the application level. Going back to the NoSQL example mentioned earlier, solutions like Rubrik Datos IO or Imanis Data can help with migrations and data management. File and object stores are significantly bigger and, if you do not plan in advance, things could get a bit complicated (but still feasible). Start by working with standard protocols and APIs. Those who choose the S3 API for their object storage needs will find it very easy to select a compatible storage system both in the cloud and for on-premises infrastructures. At the same time, many interesting products now allow you to access and move data transparently across several repositories (the list is getting longer by the day but, just to give you an idea, take a look at Hammerspace, Scality Zenko, Red Hat NooBaa, and SwiftStack 1Space). I recently wrote a report for GigaOm about this topic and you can find more here. The same goes for other solutions. Why would you stay with a single cloud storage back end when you can have multiple ones, get the best out of each, maintain control over data, and manage it all on a single overlaying platform that hides complexity and optimizes data placement through policies? Take a look at what Cohesity is doing to get an idea of what I'm saying here. The sketch below shows why standard APIs make this kind of freedom practical.
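This is a minimal sketch of the idea, assuming boto3; the endpoint URLs, bucket name, and credentials are placeholders, and the alternative endpoints are hypothetical, not real services.

```python
# With S3-compatible storage, switching back ends is mostly a matter of
# changing the endpoint: the application code stays the same.
import boto3

def s3_client(endpoint_url):
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id="EXAMPLE_KEY",         # placeholder
        aws_secret_access_key="EXAMPLE_SECRET",  # placeholder
    )

# The same code path talks to AWS, another provider, or an on-prem store.
targets = [
    "https://s3.amazonaws.com",            # public cloud
    "https://s3.eu-provider.example.com",  # alternative provider (hypothetical)
    "https://objects.datacenter.local",    # on-premises S3-compatible store
]

for endpoint in targets:
    client = s3_client(endpoint)
    client.put_object(Bucket="backups", Key="db/dump.gz",
                      Body=b"...")  # identical call against every back end
```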
The Human Factor of Multi-Cloud
Regaining control of your infrastructure is good from the budget perspective and for the freedom of choice it provides in the long term. On the other hand, working more on the infrastructure side of things requires an investment in people and their skills. I'd put this down as an advantage, but not everybody thinks this way. In my personal opinion, it is highly likely that a more skilled team will be able to make better choices, react quicker, and build optimized infrastructures that positively impact the competitiveness of the entire business; on the other hand, if the organization is too small, it is hard to find the right balance.

Closing the Circle
Amazon AWS, Microsoft Azure and Google Cloud are building formidable ecosystems, and you can decide that it is okay for you to stick with only one of them. Perhaps your cloud bill is not that high and you can afford it anyway. You can also decide that multi-cloud means multiple cloud silos, but that is a very bad strategy. Alternatively, there are several options out there to build your Cloud 2.0 infrastructure and maintain control over the entire stack and your data. True, it's neither the easiest path nor the least expensive at the beginning, but it is the one that will probably pay off the most in the long term and will increase the agility and competitiveness of your infrastructure. This March, on the 26th, I will be co-hosting a GigaOm webinar sponsored by Wasabi on this topic, and there is an interview I recorded not too long ago with Zachary Smith (CEO of Packet) about new ways to think about cloud infrastructures. It is worth a listen if you are interested in a different approach to cloud and multi-cloud. Originally posted on Juku.it

Isn't It Time to Rethink Your Cloud Strategy?

Good AI talent is hard to find. The talent pool for anyone with deep expertise in modern artificial intelligence techniques is terribly thin, yet more and more companies are committing to data and artificial intelligence as their differentiator. The early adopters will quickly find difficulties in determining which data science expertise meets their needs. And the AI talent? If you are not Google, Facebook, Netflix, Amazon, or Apple, good luck. With the popularity of AI, pockets of expertise are emerging around the world. For a firm that needs AI expertise to advance its digital strategy, finding these data science hubs becomes increasingly important. In this article we look at the initiatives different countries are pushing in the race to become AI leaders, and we examine existing and potential data science centers. It seems as though every country wants to become a global AI power. With the Chinese government pledging billions of dollars in AI funding, other countries don't want to be left behind. In Europe, France plans to invest €1.5 billion in AI research over the next four years, while Germany has universities joining forces with corporations such as Porsche, Bosch, and Daimler to collaborate on AI research. Even Amazon, with a contribution of €1.25 million, is collaborating in the AI efforts in Germany's Cyber Valley around the city of Stuttgart. Not one to be left behind, the UK pledged £300 million for AI research as well. Other countries committing money to AI include Singapore, which committed $150 million, and Canada, which not only committed $125 million but also has large data science hubs in Toronto and Montreal. Yoshua Bengio, one of the fathers of deep learning, is from Montreal, the city with the biggest group of AI researchers in the world. Toronto has a booming tech industry that naturally attracts AI money.

Data scientists worldwide. Examining a variety of sources, we find data science professionals spread across the regions where we would expect them.
The graphic below shows the number of members of the site Data Science Central. Since the site is in English, we expect most of its members to come from English-speaking countries; however, it still gives us some insight as to which countries have higher representation. It becomes difficult, then, to determine AI hubs without classifying talent by level. One example of this is India: despite its large number of data science professionals, many of them are employed in lower-skilled roles such as data labeling and processing. So what would be considered a data science hub? The graphic below defines a hub by the number of advanced AI professionals in the country. The countries shown here have AI talent working at companies such as Google, Baidu, Apple and Amazon. However, this omits a large group of talent that is not hired by these types of companies. Matching the previous graph with a study conducted by Element AI, we see some commonalities, but also some new hubs emerging. The same talent centers remain, but more countries are highlighted on the map. Element AI's approach consisted of analyzing LinkedIn profiles, factoring in participation in conferences and publications, and weighting skills highly. As you search for AI talent, we recommend basing your search on four factors: workforce availability, cost of labor, English proficiency, and skill level. Kaggle, one of the most popular data science websites, conducted a salary survey with respondents from 171 countries. The results can be seen below. Salaries are as expected, but show high variability. By aggregating the salary data and the talent pool map, you can decide which countries suit your goals better. The EF English Proficiency Index shows which countries have the highest proficiency in English and can further weed out those that may have a strong AI presence or low cost of labor, but low English proficiency. In the end, you want to hire professionals who understand the problems you are facing and can tailor their work to your specific needs. With a global mindset, companies can mitigate talent scarcity. If you are considering sourcing talent globally, we recommend hiring strong leadership locally, who act as AI product managers that can manage a team. Hire production managers located on-site with your global talent. They can oversee any data science or AI development and report back to the product manager. KUNGFU.AI will continue to study these global trends and help ensure companies are equipped with access to the best talent to meet their needs.

The AI Talent Gap: Locating Global Data Science Centers

Today's leading minds talk AI with host Byron Reese

About this Episode
Episode 79 of Voices in AI features host Byron Reese and Naveen Rao discussing intelligence, the mind, consciousness, AI, and what the day-to-day looks like at Intel. Byron and Naveen also delve into the implications of an AI future. Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I'm Byron Reese. Today I'm excited that our guest is Naveen Rao. He is the Corporate VP and General Manager of the Artificial Intelligence Products Group at Intel. He holds a Bachelor of Science in Electrical Engineering from Duke and a Ph.D. in Neuroscience from Brown University. Welcome to the show, Naveen.

Naveen Rao: Thank you. Glad to be here.

You're going to give me a great answer to my standard opening question, which is: What is intelligence?
That is a great question. It really doesn't have an agreed-upon answer. My version of this is about potential and capability. What I see as an intelligent system is a system that is capable of decomposing structure within data. By my definition, I would call a newborn human baby intelligent, because the potential is there, but the system is not yet trained with real experience. I think that's different than other definitions, where we talk about the phenomenology of intelligence, where you can categorize things, and all of this. I think that's the outcropping of having actually learned the inherent structure of the world.

So, in what sense, by that definition, is artificial intelligence actually artificial? Is it artificial because we built it, or is it artificial because it's not real intelligence? It's like artificial turf; it just looks like intelligence.

No. I think it's artificial because we built it. That's all. There's nothing artificial about it. Intelligence doesn't have to run on biological mush; it can be implemented on any kind of substrate. In fact, there's even research on how slime mold, actually...

Right. It can work mazes...

...can solve computational problems, yeah.

How does it do that, by the way? That's really a pretty staggering thing.

There's a concept that we call gradients. Gradients are just how information gets more crystallized. If I feel like I'm going to learn something by going in one direction, that direction is the gradient. It's sort of a pointer in the way I should go. That can exist in the chemical world as well, and things like slime mold actually use chemical gradients that translate into information processing, and they actually learn the dynamics of a system. Our neurons do that. Deep neural networks do that in a computer system. They're all based on something similar at one level.

So, let's talk about the nematode worm for a minute.

Okay.

You've got this worm, the most successful creature on the planet. Seventy percent of all animals are nematode worms. He's got 302 neurons and exhibits certain kinds of complex behavior. There have been a bunch of people in the OpenWorm Project who have spent 20 years trying to model those 302 neurons in a computer, just to get it to duplicate what the nematode does. Even among them, they say: 'We're not even sure if this is possible.' So, why are we having such a hard time with something as simple as a nematode worm?

Well, I think this is a bit of a fallacy of reductive thinking here - that, 'Hey, if I can understand the 302 neurons, then I can understand the 86 billion neurons in the human brain.' I think that fallacy falls apart because there are different emergent properties that happen when we go from one size of system to another. It's like running a company of 50 people is not the same as running a company of 50,000. It's very different.

But, to jump in there... my question wasn't, 'Why doesn't the nematode worm tell us something about human intelligence?' My question was simply, 'Why don't we understand how a nematode worm works?'

Right. I was going to get to that. I think there are a few reasons for that. One is, the interaction of any complex system - hundreds of elements - is extremely complicated. There's a concept in physics called the three-body problem: if I have two pool balls on a pool table, I can actually 100 percent predict where the balls will end up if I know the initial state and I know how much energy I'm injecting when I hit one of the balls in one direction with a certain force.
If you make that three, I cannot do that in closed form. I have to simulate steps along the way. That is the three-body problem, and it's computationally intractable to solve in closed form. So, you can imagine that when it gets to 302, it gets even more difficult. And what we see in big systems like mammalian brains, where we have billions of neurons rather than 300, is that you actually have pockets of closely interacting pieces in a big brain that interact at a higher level. That's what I was getting at when I talked about these emergent properties. So, you still have that 302-body problem, if you will, in a big brain just as you do in a small brain. That complexity hasn't gone away, even though it seemingly is a much simpler system. The interaction between 302 different things, even when you know precisely how each one of them is connected, is just a very complex matter. If you try to model all the interactions and you're off by just a little bit on any one of those things, the entire system may not work. That's why we don't understand it: because you can't characterize every piece of this, like every synapse... you can't mathematically characterize it. And if you don't get it perfect, you won't get a system that functions properly.

So, are you suggesting by extension that the Human Brain Project in Europe, which really is... You're laughing and nodding. What's your take on that?

I am not a fan of the Human Brain Project, for this exact reason. The complexity of the system is just incredibly high, and if you're off by one tiny parameter, by a tiny little amount, it's sort of like the butterfly effect. It can have huge consequences on the operation of the system, and you really haven't learned anything. All you've learned to do is model some microdynamics of a system. You haven't really gotten any true understanding of how the system really works.

You know, I had a guest on the show, Nova Spivack, who said that a single neuron may turn out to be as complicated as a supercomputer, and it may even operate down at the Planck level. It's an incredibly complex thing.

Yeah.

Is that possible?

It is a physical system - a physical device. One could argue the same thing about a single transistor as well. We engineer these things to act within certain bounds... and I believe the brain actually takes advantage of that as well. So, a neuron... to completely, accurately describe everything a neuron is doing, you're absolutely right, it could take a supercomputer to do so. But we don't necessarily need to abstract a supercomputer's worth of value from each neuron. I think that's a fallacy. There are lots of nonlinear effects and all kinds of crazy stuff happening that really aren't useful to the overall function of the brain. Just like an individual neuron can do very complicated things, when we put a whole bunch of [transistors] together to build a processor, we're exploiting one piece of the way that transistor behaves to make that processor work. We're not exploiting everything in the realm of possibility that the transistor can do.

We're going to get to artificial intelligence in a minute. It's always great to have a neuroscientist on the show. So, we have these brains, and you said they exhibit emergent properties. Emergence is, of course, the phenomenon where the whole of something takes on characteristics that none of its components have. And it's often thought of in two variants.
One is weak emergence, where once you see the emergent behavior, with enough study you can kind of reverse engineer it... 'Ah, I see why that happened.' And one is the much more controversial idea of strong emergence, which may not be discernible: the emergent property may not be derivable from the components. Do you think human intelligence is a weak emergent property, or do you believe in strong emergence?

I do, in some ways, believe in strong emergence. Let me give you the subtlety of that. I don't necessarily think it can be analytically solved, because the system is so complex. What I do believe is that you can characterize the system within certain bounds. It's much like how a human may solve a problem like playing chess. We don't actually pre-compute every possibility. We don't do that sort of brute-force kind of thing. But we do come up with heuristics that are accurate most of the time. And I think the same thing is true for the bounds of a very complex system like the brain. We can come up with bounds on these emergent properties that are accurate 95 percent of the time, but we won't be accurate 100 percent of the time. It's not going to be as beautiful as some of the physics we have that can describe the world. In fact, even physics might fall into this category as well. So, I guess the short answer to your question is: I do believe in strong emergence that will never actually be 100 percent described...

But do you think that, fundamentally, intelligence could, given an infinitely large computer, be understood in a reductionist format? Or is there some break in cause and effect along the way, where it would be literally impossible?

Are you saying it's practically impossible or literally impossible?

...To understand the whole system top to bottom, from the emerging...? Well, to start with, this is a neuron.

Yeah.

And it does this, and you put 86 billion together and voilà, you have Naveen Rao.

I think it's literally impossible.

Okay, I'll go with that. That's interesting. Why is it literally impossible?

Because the complexity is just too high, and the amount of energy and effort required to get to that level of understanding is many orders of magnitude more than what you're trying to understand.

So now, let's talk about the mind for a minute. We talked about the brain, which is physics. To use a definition that most people, I think, wouldn't have trouble with, I'm going to call the mind all the capabilities of the brain that seem a little beyond what three pounds of goo should be able to do... like creativity and a sense of humor. Your liver presumably doesn't have a sense of humor, but your brain does. So where do you think the mind comes from? Or are you going to just say it's an emergent property?

I do kind of say it's an emergent property, but it's not just an emergent property. It's an emergent property that is actually the coordination of the physics of our brain - the way the brain itself works - and the environment. I don't believe that a mind exists without the world. You know, a newborn baby I called intelligent because it has the potential to decompose the world and find meaningful structure within it, in which it can act. But if it doesn't actually do that, it doesn't have a mind. You can see this if you've had kids yourself. I actually had a newborn while I was studying neuroscience, and it was quite interesting to see. I don't think a newborn baby is really quite sentient yet. That sort of emerges over time as the system interacts with the real world.
So, I think the mind is an emergent property of brain plus environment interacting.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 79: A Conversation with Naveen Rao

Just three weeks into 2019, Veeam announced a $500M funding round. The company is privately held, profitable and with a pretty solid revenue stream coming from hundreds of thousands of happy customers. But, still, they raised $500M! I didn't see it coming, but if you look at what is happening in the market, it's not a surprising move. Market valuation of companies like Rubrik and Cohesity is off the chart, and it is pretty clear that while they are spending boatloads of money to fuel their growth, they are also developing platforms that go well beyond traditional data protection.

Backup Is Boring

Backup is one of the most tedious, yet critical, tasks performed in the IT space. You need to protect your data and save a copy of it in a secure place in case of system failure, human error or worse, as in the case of natural disasters and cyberattacks. But as critical as it is, the differentiation between backup solutions is getting thinner and thinner. Vendors like Cohesity got it right from the very beginning. It is quite difficult, if not impossible, to consolidate all your primary storage systems in a single large repository, but if you concentrate backups on a single platform, then you have all of your data in a single logical place. In the past, backup was all about throughput and capacity with very little CPU, and media devices were designed for a few sequential data streams (tapes and deduplication appliances are perfect examples).
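As an aside on the deduplication appliances just mentioned: the core idea behind deduplication is simple to sketch. Split the incoming stream into chunks, hash each chunk, and store a chunk only the first time its hash is seen; each backup is then just a list of hashes. The following is a minimal toy illustration, not any vendor's implementation (real appliances typically use variable, content-defined chunking rather than the fixed-size chunks used here):

# Toy deduplicating store: keeps one copy of each unique chunk and
# represents every backup as a list of chunk hashes (a "recipe").
import hashlib

CHUNK_SIZE = 4096          # fixed-size chunks, purely for simplicity
store = {}                 # hash -> chunk bytes, stored once

def ingest(data: bytes) -> list:
    """Split a stream into chunks, store unseen chunks, return the recipe."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # only stored the first time seen
        recipe.append(digest)
    return recipe

def restore(recipe: list) -> bytes:
    """Rebuild the original stream from its recipe."""
    return b"".join(store[d] for d in recipe)

backup1 = b"A" * 10_000 + b"B" * 10_000
backup2 = b"A" * 10_000 + b"C" * 10_000   # half identical to backup1
r1, r2 = ingest(backup1), ingest(backup2)
assert restore(r1) == backup1 and restore(r2) == backup2
print(f"logical bytes: {len(backup1) + len(backup2)}, "
      f"stored bytes: {sum(len(c) for c in store.values())}")

Because the two backups share half their content, the stored bytes come out well below the logical bytes, which is the whole economic point of these appliances.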
Why are companies like Rubrik and Cohesity so different, then? Well, from my point of view, they designed an architecture that makes it possible to do much more with backups than was possible in the past.

Next-gen Backup Architectures

Adding a scale-out file system to this picture was the real game changer. Every time you expand the backup infrastructure to store more data, the new nodes also contribute CPU power and memory capacity. With all these resources at your disposal, and the data that can be collected through backups and other means, you've just built a big data lake… and with all that CPU power available, you are just one step away from transforming it into a very effective big data analytics cluster!

From Data Protection to Analytics and Management

Starting from this background, it isn't difficult to explain the shift that is happening in the market and why everybody is talking more about the broader concept of data management rather than data protection. Some may argue that it's wrong to associate data protection with data management, and that in this particular case the term data management is misleading and inappropriately applied. But there is much to be said about it, and it could very well become the topic of another post. Also, I suggest you take a look at the report I recently wrote about unstructured data management to get a better understanding of my point of view.

Data Management for Everybody

Now that we have the tool (a big data platform), the next step is to build something useful on top of it, and this is the area where everybody is investing heavily. Even though Cohesity is leading the pack, having started showing the potential of this type of architecture years ago with its analytics workbench, the race is open and everybody is working on out-of-the-box solutions. In my opinion these out-of-the-box solutions, which will be nothing more than customizable big data jobs with a nice, easy-to-use UI on top, will put data management within reach of everyone in your organization. This means that data governance, security and many business roles will benefit from it.

A Quick Solution Roundup

As mentioned earlier, Cohesity is in a leading position at the moment and has all the features needed to realize this kind of vision, but we are just at the beginning and other vendors are working hard on similar solutions. Rubrik, which has a similar architecture, has chosen a different path: it recently acquired Datos IO and started offering NoSQL database data management. Even though NoSQL is growing steadily in enterprises, this is a niche use case at the moment, and I expect that sooner or later Rubrik will add features to manage data collected from other sources. Not long ago I spoke highly of Commvault, and Activate is another great example of their change in strategy. It is a tool that can be a great companion to their backup solution but can also stand alone, enabling the end user to analyze, get insights from, and take action on data. They've already demonstrated several use cases in fields like compliance, security, e-discovery and so on. Getting back to Veeam… I really love DataLabs and what it can theoretically do for data management. While not yet at its full potential, it is an orchestration tool that takes backups, creates a temporary sandbox, and runs applications against them. It is not fully automated yet, and you have to bring your own application (a sketch of what such an application might look like follows below).
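To give a feel for the "bring your own application" model, here is a hedged sketch of the kind of job one might run inside such a sandbox: a crude ransomware check that flags files whose contents look statistically random. The mount point, threshold, and overall shape are hypothetical illustrations of the pattern, not Veeam's API or any shipping product feature.

# Sketch of a "bring your own application" job run against a temporary
# sandbox where a backup copy is mounted (the path below is hypothetical).
# Heuristic: ransomware-encrypted files tend to look like random data, so
# unusually high byte entropy in normally compressible files is a red flag.
import math
import os
from collections import Counter

SANDBOX_MOUNT = "/mnt/datalab-sandbox"   # hypothetical mount point
ENTROPY_THRESHOLD = 7.5                  # bits per byte; 8.0 is pure randomness
SAMPLE_BYTES = 65536                     # only sample the head of each file

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 = constant, 8.0 = random)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

suspicious = []
for root, _dirs, files in os.walk(SANDBOX_MOUNT):
    for name in files:
        path = os.path.join(root, name)
        try:
            with open(path, "rb") as f:
                sample = f.read(SAMPLE_BYTES)
        except OSError:
            continue                      # unreadable file: skip it
        if shannon_entropy(sample) > ENTROPY_THRESHOLD:
            suspicious.append(path)

print(f"{len(suspicious)} high-entropy files (possible encryption):")
for path in suspicious[:20]:
    print(" ", path)

In practice you would also whitelist formats that are legitimately high-entropy (zip archives, JPEGs, video), but even this crude version shows why a sandboxed copy of a backup is such a convenient place to run this kind of analysis.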
If Veeam can make DataLabs ready to use with out-of-the-box applications, it will become a very powerful tool for a broad range of use cases, including e-discovery, ransomware protection, index & search and so on. These are only a few examples, of course, and the list is getting longer by the day.

Closing the Circle

Data management is now key in several areas. We've already lost the battle against data growth and consolidation, and at this point finding a way to manage data properly is the only way to go. With ever larger storage infrastructures under management, and sysadmins who now have to manage petabytes instead of hundreds of terabytes, there is a natural shift towards automation for basic operations, and the focus moves to what is actually stored in the systems. Furthermore, with the increasing amount of data, expanding multi-cloud infrastructures, demanding new regulations like GDPR, and ever-evolving business needs, the goal is to maintain control over data no matter where it is stored. And this is why data management is at the center of every discussion now.

Originally posted on Juku.it

From Data Protection to Data Management and Beyond