Computers are getting smarter and more creative, offering spectacular possibilities to improve the human condition. There’s a call to redefine Artificial Intelligence as Augmented Intelligence, to emphasize the potential of humans working with AI as opposed to being replaced by AI. In this program, AI experts, data scientists, engineers, and ethicists explore how we can work alongside machines, when we should trust machines to make cognitive decisions, and how all of society can benefit from the productive and financial gains promised by advances in AI.
This program is part of the Big Ideas Series, made possible with support from the John Templeton Foundation.
Steve Lohr has covered technology, business, and economics for The New York Times for more than twenty years. In 2013, he was part of the team awarded the Pulitzer Prize for Explanatory Reporting. He is the author of Data-ism, which examines the field of data science and decision-making.
Kathryn Hume is VP Product & Strategy for integrate.ai, a Toronto-based startup that helps large enterprises reinvent customer experiences using artificial intelligence. Prior to joining integrate.ai, Hume was President of Fast Forward Labs.
Vasant Dhar is a Professor at the NYU Stern School of Business and the Center for Data Science, and Editor-in-Chief of the Big Data journal. He is also the founder of SCT Capital Management, a machine learning based investment entity in New York City that implements a systematic process of knowledge discovery to make trading decisions autonomously.
Dr. John R. Smith is an IBM Fellow and Manager of Multimedia and Vision at IBM T. J. Watson Research Center. He leads IBM’s Research & Development on Visual Comprehension including IBM Watson Developer Cloud Visual Recognition, Intelligent Video Analytics, and Video Understanding for Augmented Creativity.
S. Matthew Liao is Arthur Zitrin Professor of Bioethics, Director of the Center for Bioethics, and Affiliated Professor in the Department of Philosophy at New York University. He is the author of The Right to Be Loved, Moral Brains: The Neuroscience of Morality, and over 60 articles in philosophy and bioethics.
Our brief today for this panel is to explore the ways in which artificial intelligence can work for us. Augmented intelligence is the phrase, and it’s the benevolent face of artificial intelligence. But let’s step back for a moment for a little bit of history. J.C.R. Licklider was a psychologist who worked at Harvard and MIT, and he was the first head of the Information Processing Techniques Office at the Advanced Research Projects Agency. He funded some of the basic research behind the personal computer and what became the Internet. And in 1960 he wrote a classic essay, “Man-Computer Symbiosis.” In it he said that the appropriate role for computing was to augment (same word) human knowledge and intelligence rather than supplant it. And so for decades we’ve been debating this issue back and forth, always in a new context. The theme seems familiar, but the context today is drastically different, and that’s what our panel is going to explore.
Our first guest is an IBM Fellow and manager of Multimedia and Vision at the IBM T. J. Watson Research Center. He leads IBM’s research and development on visual comprehension, including its use in IBM Watson. Please welcome John R. Smith.
Next is the Arthur Zitrin Professor of Bioethics and the director of the Center for Bioethics at New York University. He is also the author of Current Controversies in Bioethics and the editor-in-chief of the Journal of Moral Philosophy. Please welcome Matthew Liao.
Our next panelist is vice president of product and strategy for integrate.ai, a Toronto-based startup. Her company uses artificial intelligence to help businesses improve and transform customer experiences, to make those experiences more natural, more human-like. Please welcome Kathryn Hume.
Also joining us is a professor at the NYU Stern School of Business and its Center for Data Science. He is editor-in-chief of the journal Big Data, and he is also the founder of SCT Capital Management, whose machine learning software is used to make automated trading decisions in the financial markets. Please welcome Vasant Dhar.
So John, we’ll start with you. Give us your take on artificial intelligence: what is different now, where are we, and where is it headed? And give us a few examples from the work IBM is doing currently, since you’ve been doing this for decades, however it’s been defined over the years.
Sure. So I’m a research scientist at IBM, and by training I’m an expert in computer vision. At IBM, what this has meant is that I’m teaching Watson to see. But let me take a step back from that. There was a seminal moment in AI in 2011, right at the cusp of this second push in AI, when IBM built a computer system that was able to participate on Jeopardy. Not only was it able to compete against humans, it was able to defeat the world champions at Jeopardy. That really opened the minds of a lot of researchers, and of course a lot of work has continued since then. I think we’re finding ourselves in a period of AI where, again, it seems like amazing things are possible. With Jeopardy, all of the questions were basically dealing with language; there were no images. Since then, though, on the challenges around computer vision, where I do my work, and around language translation, speech recognition, many perceptual tasks, AI has been able to make really great advances. And I think it’s put us in a position where we can think of many different industries and how we can take these capabilities, which are now doing amazing things, and combine them with human expertise to make really significant impact.
Kathryn, you’ve written a bit about this. What do you think is the biggest misperception about artificial intelligence?
Oh God, I think there are all sorts of misperceptions out there. It makes headlines to talk about the end of work, and as announced for this panel, we’re talking about augmented intelligence versus the machines suddenly replacing us all so we can sit around for the rest of our lives and play video games and smoke pot and try to figure out what to do with ourselves. So I think that’s a misperception. My perspective here comes from doing a lot of work with large enterprises that are in the process of trying to adopt and make real applications from the theoretical research coming out of academic units, in things like computer vision: showing computers pictures of a puppy or pictures of a glass of wine, which, without any associated metadata, they can recognize and accurately label.
In doing this work with enterprises, I’ve seen that this is just a lot harder than one would think. It’s not like you can, as in The Matrix, if anybody’s seen The Matrix, where Keanu Reeves puts a little chip in his brain and the next thing you know he’s an expert karate master. We have this impression, when we think about augmented intelligence, that the systems will get so smart that we can put our little chips in and we’ll be fluent in German after one day of work, as opposed to putting in Malcolm Gladwell’s 10,000 hours to learn a new skill. But in practice it takes time, and it often takes an artfully articulated collaboration between man and machine to get going with these systems.
When I’m working with large Wall Street banks or insurance companies or media companies, often I’ll meet with the executive layer, and the impression they have is that they can go from absolutely manual processes to complete Netflix-like automation in three or four months. They ask us to scope out projects where they can just plug in a machine and be ready to go. What I like about Stitch Fix, and I’ll use this as a metaphor to help them understand what this might really look like…
Does everyone know what it is?
I’ll explain it. It’s an e-commerce personal shopping site. Say you’re a woman and you’re sick and tired of shopping; you can sign up for an account, and they ask you to fill out a form with some data about yourself: your height, your weight, your size, some of your taste in clothing. You can go on Pinterest and pick out examples of images of clothing that you might like, and you send this off into the ether. The next thing you know, a month later you get a little box with five items of clothing that Stitch Fix predicts might be something you’d like. So this is partially algorithmic. The first pass Stitch Fix does is to put this into a big algorithmic recommender system, which parses the features you’ve input and outputs recommended items. But then they pass it to a set of personal stylists, who are just 1099 workers, who go through and curate a selection that will likely be of interest to the final consumer of the service.
So basically they’ve artfully combined human intelligence, which gives feedback to those algorithms, with all of the data-oriented algorithmic work. I like this as a metaphor, and as a literal business model, for enterprises that are trying to get started with these tools. Because if we take an example like automating a sales process in a large bank, there’s a lot of know-how and subject matter expertise trapped in the heads of the current employees. The task is to tease out some of those insights and transform them into statistical patterns and systems, always with a feedback loop, so that maybe five years down the line the scales tip to the systems doing more of the work than the people, but certainly not overnight. So the big misperception, I think, is the inflated hyperbole around the machines getting so smart they’re going to take over jobs tomorrow.
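To make that pattern concrete, here is a minimal Python sketch of the human-in-the-loop workflow Hume describes: an algorithmic first pass, a human curation step, and an eventual feedback label. All names, features, and data are hypothetical, not Stitch Fix’s actual system.

```python
# Hypothetical sketch of an algorithm-first, human-curated recommender.
import numpy as np

def algorithmic_first_pass(client_profile, catalog, top_k=20):
    """Rank catalog items by similarity to the client's stated tastes."""
    scores = catalog @ client_profile          # dot-product relevance scores
    return np.argsort(scores)[::-1][:top_k]    # best-scoring candidate items

def stylist_curates(candidate_ids, pick=5):
    """Stand-in for the human step: a stylist narrows 20 items to 5."""
    return candidate_ids[:pick]                # pretend the human chose these

rng = np.random.default_rng(0)
catalog = rng.random((100, 8))                 # 100 toy items, 8 taste features
client = rng.random(8)                         # one client's taste profile

candidates = algorithmic_first_pass(client, catalog)
shipped = stylist_curates(candidates)
print("items shipped:", shipped)
# Feedback loop: whatever the client keeps becomes a new training label.
```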
Vasant, why don’t you pick up on that, since your domain is, you know, a quant hedge fund, if you will.
Sure. So the way I look at the financial landscape is in terms of high-frequency trading, where the machines make very short-term decisions, on one extreme, and on the other hand you have long-term decision making a la Warren Buffett, where you’re looking at factors that the machine really has no basis for picking up on, and that’s a very human…
The management team
Yes. And it’s just much more qualitative, and there’s not enough data, right? So it really boils down to: does the machine have enough data? And in the middle you have short-term trading, where you might hold for days or weeks. On the left-hand side, the high-frequency side, there are lots of examples, lots of repeated instances, and the machine has a basis for learning, and that game was over a long time ago; machines took it over from humans. On the right-hand side, machines really don’t have a basis for making a decision, so that’s an inherently human endeavor, not that humans do particularly well at it. In fact, most humans actually underperform the market, Warren Buffett and a few others being notable exceptions. And in the middle you have this intermediate space where you have enough data and the machine has a basis to learn and trade on its own.
My experience is that humans really aren’t capable of making too many decisions in a day, right? If you actually have enough data, then models of man tend to be better than man; that’s what the research generally tends to show, because human emotions kind of get in the way. So when we talk about augmented intelligence, you have to ask yourself who’s augmenting whom. Is the machine augmenting humans, which has been the traditional decision-support model for fifty years? That makes a lot of sense for problems where you don’t have enough data for the machine to be making the decision. On the other hand, if it’s man augmenting machine, then I feel it’s a matter of time before the machine does better, because you’re training it to do better, and then you should expect it to do better over time.
So to put this all together, the way I look at the world is: imagine predictability on the x-axis and cost per error on the y-axis. Problems that fall on the lower right are very amenable to automated decision making: high predictability, low cost per error. Problems to the left are difficult to automate because there’s low predictability relative to cost per error. Now, interestingly, driverless cars are highly predictable, so you might expect those to be automated, but at the moment the cost of mistakes is very high; they’re also very high on the y-axis. That’s why we’re reluctant to cede control of our transportation to the machine yet: we’re just not sure of the edge cases, and when stuff goes wrong, it could go badly wrong. So I suspect that will happen very gradually.
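A toy rendering of that two-axis framework, with invented thresholds purely to illustrate the quadrants Dhar sketches:

```python
# Illustrative only: scores and cutoffs are made up, not Dhar's model.
def decision_mode(predictability, cost_per_error):
    if predictability > 0.8 and cost_per_error < 0.2:
        return "automate"                      # lower right of the chart
    if predictability < 0.3:
        return "human judgment"                # machine has no basis to learn
    return "machine proposes, human decides"   # the middle ground

for case, p, c in [("high-frequency trading", 0.95, 0.05),
                   ("driverless car", 0.90, 0.90),
                   ("Buffett-style investing", 0.20, 0.80)]:
    print(case, "->", decision_mode(p, c))
```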
And isn’t there another aspect here, where there are categories of decisions? So much of this technology, which we used to call data science, that is, data plus machine-learning algorithms, we now call AI, right? Its principal use was increasing your odds of making a sale: product predictions, targeted advertising, that sort of thing. And that’s fine for decisions where better on average is great. One of my favorite data scientists is a woman named Claudia Perlich; she used to work on Watson at IBM and now works for an ad-targeting firm. And her line is that this is a great time for experimentation in marketing, because what happens if my algorithm’s wrong? Somebody sees the wrong ad. It’s not a false positive for breast cancer. But we’re moving into categories where you’re affecting individuals’ lives: medical diagnosis, hiring decisions, lending decisions. These are high-stakes decisions for individual lives. And that seems to me a different kind of category, isn’t it?
Well, for medical diagnosis, that falls somewhere in the middle. Let’s say diabetes prediction and diagnosis: that falls somewhere in the middle of the spectrum, and machines do reasonably well, but they still make lots of errors, still significant numbers of false positives and false negatives, which can be particularly injurious if you miss something. And so for that reason, the way machines are really used is to make that first cut and categorize people into various levels of risk that humans can then pursue in a more structured fashion.
But it comes down to the fact that the cost of errors there is still pretty high, so you’re not going to cede control of that to the machine. On the other hand, imagine that we now have genomic data available. The trouble with healthcare is that most healthy people have very few points of contact with the healthcare system, so there isn’t enough data about them, whereas for the sick people there’s lots of data, but it’s usually too late to do much about it. So you can imagine that in the future, as we get more data about people, your physician could very well be a machine, or at least a machine could play a much larger role in advising you on your health.
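A hedged sketch of the triage pattern Dhar describes just above, where the model makes the first cut into risk tiers and humans pursue the risky tier. The features, labels, and the 0.7 threshold are illustrative, not clinical guidance.

```python
# Toy triage: the model scores risk; a human handles the costly cases.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((500, 4))                        # invented patient features
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)       # invented "diabetes" label

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]             # a probability, not a verdict

# Route by cost of error: high risk goes to a clinician, not to auto-action.
for_clinician = np.where(risk > 0.7)[0]
routine_monitoring = np.where(risk <= 0.7)[0]
print(len(for_clinician), "patients flagged for human review")
```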
Kathryn mentioned earlier putting chips in brains. Actually, this is an area of research that Matthew has done a lot of work on: brain-machine interfaces, some primitive versions of which we have now. And he explores the future of this in terms of what he calls the control problem, I believe. So elaborate on that.
So I’m a philosopher and bioethicist. In the bioethics community, there’s talk of human cognitive enhancement. One way to amplify our intelligence is to amplify biological intelligence, to get us to become smarter biologically speaking. But there’s another way, which is some sort of symbiosis, where we begin to use computer parts; smartphones are kind of like that. You might think of your smartphone as an extended mind, an extension of your mind. But there’s more now. There are brain-machine interfaces, and there’s something called transcranial stimulation, where you put something over your head. Companies are marketing this to athletes so that they can perform better, learn better, learn quicker. And there’s actually evidence that, while it’s not quite the Matrix style of learning a language right away, you can learn things faster and remember things better using even transcranial stimulation, which is actually a very crude technology.
And there’s something else, called deep brain stimulation, which is much more invasive: you’re basically inserting a thin electrode into your brain, connected to a battery pack, and you can adjust the mode, and it sends electrical current to your brain. About a hundred thousand people in the world today already have DBS, and they use it for things like Parkinson’s disease, epilepsy, even depression. But the interesting thing about DBS right now is that it’s an open-loop system, and what that means is user-controlled: you manipulate it yourself, you adjust the level of electricity.
But DARPA, the Defense Advanced Research Projects Agency, for example, is quite interested in something called a closed-loop system, where the implant itself will automatically monitor your mood, your brain state, and your emotional state. Say you’re a soldier in a war, and all of a sudden you start to panic. What DARPA wants to do is send some sort of automatic electrical signal to calm you down, for example. So it will automatically monitor your emotions and then adjust them on your behalf. And that raises all sorts of questions about whether this is going to solve the control problem, because if the machine’s deciding that for you, then who’s really in control? Who’s really augmenting whom, as Vasant was saying?
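As an illustration only: a closed-loop system in the abstract is just a controller that reads a signal and adjusts its own output without user intervention. Here is a minimal sketch with invented signals, gains, and thresholds; it is not DARPA’s design.

```python
# Toy proportional controller illustrating "closed loop" vs. user-set "open loop".
import numpy as np

def closed_loop_step(measured_stress, current_amplitude, target=0.5, gain=0.4):
    """Raise stimulation when the measured signal runs above target."""
    error = measured_stress - target
    return float(np.clip(current_amplitude + gain * error, 0.0, 1.0))

amplitude = 0.2                                  # open loop: a user would set this
for stress in [0.4, 0.6, 0.9, 0.95, 0.7]:        # simulated sensor readings
    amplitude = closed_loop_step(stress, amplitude)
    print(f"stress={stress:.2f} -> stimulation={amplitude:.2f}")
```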
What worries you about this, and at what level? Because we’re talking about cyborgs, basically, right?
There’s a deeper philosophical question. One issue that’s going to come up: this is going beyond augmentation toward integration, right? And when you have this type of integration, one issue is going to be, is this still going to be me? In this literature it gets a bit science-fictiony. Some people talk about uploading your mental contents: copying all your mental content and uploading it to the cloud, kind of like “Her,” if you’ve seen the movie “Her.” The thing about that is that when you upload the mental contents into the cloud, it’s not going to be you. And here’s a way to think about why it’s not going to be you. Imagine that you make a thousand copies of it and run them in a thousand different locations. Now, you’re supposed to be in only one location, right? So it seems that if a thousand copies are running, all those copies cannot be you. But with the integration route, maybe you can preserve some sort of identity if it’s much more integrated, if our carbon-based cells can interact with isomorphic non-carbon-based cells, sort of talk to each other. A lot of people are working on this particular problem: getting biological cells to talk to digital cells. If that can happen, then we can accomplish some form of integration and it would still be you.
John, let’s bring this back a bit. What is IBM working on, in healthcare or elsewhere, that’s an example of augmented intelligence, either in the marketplace or demonstrated in the labs, that can be done now that you wouldn’t have thought possible five years ago?
So there are many really good examples; I’ll talk about one in healthcare. Today, if we think about a problem like skin cancer: millions of people around the world are affected by skin cancer every year, and melanoma in particular is a very deadly form of it; in the U.S., 10,000 people die from it each year. The computer can work together with the clinician, addressing the fact that people don’t always make good decisions: they have their own biases, they get tired, there’s subjectivity. The computer is objective, and that’s a strong capability it can bring to the clinician, pointing things out. Maybe there’s something the clinician hasn’t seen before, but the computer has the capacity to see thousands or tens of thousands of cases of melanoma and make connections the clinician may miss. It’s really problems like that where we look at the strengths of the computer: the ability to look at massive amounts of data, to continuously learn, to be there without any delay, to interject at the right moment. Ultimately it’s augmentation, scaling that human expertise; it’s not replacing that doctor in the end.
Yeah, this is the comeback to the “automation is destroying jobs” argument…
Can I just add to that? I want to pick up on that. One of the reasons there’s all this excitement about AI is that we’ve made a big dent in solving perception, and the reason that’s profound is that previous generations of AI required you to curate the input into a representation the machine could understand, and then it would run with that. Game playing in the sixties and seventies, for example: you gave the machine a representation of the problem, of the chess board or the backgammon game or whatever, and within the precepts of that it did its thing, did search and all of that cool stuff, and did quite well, but you still had to tell the machine what it was working on. The difference now is that machines take input directly from the environment. They can see, they can hear, they can read, and that makes a big difference, because you sidestep that laborious process of curating the inputs for the machine and then having the machine do the rest of it. Now it’s doing the heavy lifting right from the get-go, right from the source, all the way through.
Now, I am a little less optimistic about the future when I think of the world that way, because one of the basic fortes of human beings is our ability to deal with unstructured data. A lot of the basis of human employment is the ability to look at images however they come, look at handwriting whichever way it comes, and then do some simple logical things on top of that. And it doesn’t matter, by the way, whether these are blue-collar workers or white-collar workers; the machines don’t really care. The only thing that matters is that sufficient amounts of data are available. I think this is what the excitement is about, and where the shifts are going to happen. I was in the Toronto airport the other day, on a United Airlines flight that happened to be a codeshare with Air Canada. So I went to the kiosk: tried my passport, no go; credit card, no go; Global Entry didn’t work. So I need a human, and I go into a line that’s full of humans. And I was thinking, this is silly, right? All you need is a machine just watching everything. Image recognition technology is good enough, and it could just tell Air Canada how many people it needs, and when, to keep things flowing more freely, instead of doing things the old way, where a scientist is making Poisson assumptions about arrivals and doing heavy-duty modeling, and the stuff has gotten broken at the end of the day, as far as I’m concerned. So that’s where a lot of the productivity gains, I feel, will accrue: from this ability to just ingest the raw data and make intelligent decisions with it automatically. That’s going to have a pretty profound impact.
Yeah, I agree with that. In the data science community we call it feature engineering. The standard practice in data science up to the past couple of years has been to have a human come in and curate which aspects of a data set are going to be most highly correlated with the outputs you’re looking for. The standard example I’ll give: say you’ve got a simple model where you’re trying to predict the prices of houses in a given jurisdiction. You go through and ask, is it going to be the square footage? Is it the location? Is it some sort of amenity? And you might note that square footage seems most highly correlated with what you’re looking for, so you focus on that. In these new deep learning, neural-network-oriented systems, there are sometimes so many possible features that could be relevant to predicting what we’re looking for that we remove that part from the equation and focus the engineer’s activity on selecting how many layers in a network might be useful, and which type of architecture, because the connections can be oriented in different ways, will be most effective in deriving the output we’re looking for. That has led to these breakthroughs in things like perception. Jeopardy, as John mentioned earlier, is a type of problem that’s a little like entity extraction or basic question answering: for “Who is the most famous scientist of the twentieth century?” it says Einstein; it goes through Wikipedia and can pull that out. That’s different from truly interpretive, semantic understanding of text, which is harder to encode in a set of steadfast rules or human-selected features, and which is leading to some relatively significant breakthroughs in applications like automated text summarization: we can build systems that make a mathematical representation of a very long piece of text, as well as of each sentence, and then pick out those sentences most closely related to the model of the whole, something that has been a struggle for the research community for a long time.
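A minimal sketch of that extractive-summarization idea, with TF-IDF vectors standing in for the learned representations a neural system would actually use:

```python
# Represent the document and each sentence as vectors; keep the sentences
# closest to the representation of the whole. TF-IDF is a stand-in here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, keep=2):
    vec = TfidfVectorizer()
    sent_vecs = vec.fit_transform(sentences)          # one vector per sentence
    doc_vec = sent_vecs.mean(axis=0).A                # "model of the whole"
    sims = cosine_similarity(sent_vecs, doc_vec).ravel()
    ranked = sorted(range(len(sentences)), key=lambda i: -sims[i])[:keep]
    return [sentences[i] for i in sorted(ranked)]     # keep original order

doc = ["Machine learning systems need data.",
       "Deep networks learn their own features from raw data.",
       "Dinner was nice.",
       "Feature engineering once required human curation of the data."]
print(summarize(doc))
```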
One thing, though, that he mentioned that I want to disagree with is the medical example and considering these computer systems more objective than humans. I don’t think machine-learning systems are objective, in part because of the way they’re trained. There are two camps in machine learning: supervised learning and unsupervised learning. Unsupervised learning is the style we normally think of from a public perception: the machine’s magical ability to discern patterns in data. That exists, and it’s hard, and it’s often used for exploratory analysis of a data set, to get a feel for clusters of information that have something to do with one another, so that you can then build a system. But with a lot of the big breakthroughs in deep learning today, most systems are supervised, which means they require a human to come in and label a set of data, to give it the right answer. The learning is then optimizing the pathways so that when the system sees something it hasn’t been trained on yet, the model has been trained well enough to make an accurate prediction. So that means humans are training them, which means there’s a lot of subjectivity baked into the systems, right?
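A small sketch contrasting the two camps on synthetic data: the supervised model fits human-provided labels, and so inherits whatever subjectivity went into them, while the clustering never sees a label at all.

```python
# Supervised vs. unsupervised on toy data.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=300, centers=2, random_state=0)

supervised = LogisticRegression().fit(X, y)             # learns from labels y
unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)   # never sees y

# The supervised model reproduces whatever judgments produced y;
# the clustering only reflects geometry in the data itself.
print(supervised.predict(X[:5]), unsupervised.labels_[:5])
```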
So I’ll give you an example where this can have ethical consequences. A good friend of mine just published a post on Medium about research that claimed to detect criminality just from someone’s photo. This was done in China: a series of photos of men, where in the top row they all had on their white collars and looked like nice, kind people, and in the bottom row were smug faces with furrowed brows. And you look at that and, imagine it weren’t a machine learning system, just you as a human: who’s the good guy and who’s the bad guy? And lo and behold, at first glance you have this intuition. So the researchers of the paper claimed, isn’t this amazing, these systems can go out and automatically identify criminals just from their faces, when in fact what the AI has done is reveal to us our prejudices: our tendencies to look at somebody and think, yeah, that guy’s scary and that guy’s not. And we do this every day when we’re walking down the street. So if we view AI there as a magnifying glass that illuminates our own human biases, I think there’s a powerful ethical discussion to have. But it means we have to recognize that the systems actually aren’t objective, because they’re concatenating our own human behavior.
So I think that indeed is a risk, and certainly it’s something we have to take a lot of care with. In that particular case, what was the ground truth? I think that’s what it comes down to with a lot of this training: if you’re going to be teaching the computer, is the information you’re teaching it with correct? In the case of skin cancer, what can you do? You can have a gold standard around the pathology reports; you can work hard to get information that is as correct as possible when you’re training the computer. These are things we have to strive for. I fully agree: there was always this notion in machine learning, prior to this resurgence, of garbage in, garbage out, and I think that still applies today with deep learning and so on. You have to take a lot of care in how you train these systems.
In the next five years, is there something you think is going to be achievable that you see as potentially striking? And I’m thinking positive here, right?
So I see more and more decisions getting automated, and humans will do something else. What they’ll do, I don’t know. There’s a lot of talk about how machines won’t replace us, they’ll augment us; I’m just not seeing yet what that will be. I accept that people will do something different: if there are dangerous jobs humans have been doing, and machines do them better, the human operator does something else. But I don’t see exactly how that’s going to liberate us to do more productive things, as has been the case in the past. So I’m agnostic on this whole view that, like previous waves of technology, AI is no different and will just make us more productive. I fear it will worsen inequality: it will still require humans, but the kinds of things we value about humans, like empathy, are not in short supply, so those are not going to be well-compensated kinds of jobs. So I worry that the balance is shifting heavily toward capital, as opposed to labor.
Anyone want to take a shot at the five-year question?
One of the core reasons AI is taking off right now is actually hardware. Graphical processing units, which were historically used for video games, do a good job moving electrons through parallel matrix operations, as opposed to just in a straight line like a central processing unit. That was great for processing images and building video games, given the popularity of video games, and it just so happens that the architecture is also great for the type of mathematics underlying the systems Vasant was just describing. It’s really that, combined with an immense amount of data and faster processing power, that has led to the quantum, no pun intended, leap in the revolution recently. And there are other hardware advances occurring right now. In the quantum computing world, and they just did a whole session here on this, given the entanglement and superposition properties that exist at the atomic level, you can superpose various states of a bit, so it’s not just the ones and zeros of standard deterministic, linear processing. From a probabilistic perspective, and machine learning is all about statistics and probability, you can use one operation to sample across what would be four states if you’ve just got two bits, and up to 2-to-the-N states with N of them. That’s significant, I think, for the types of operations and the hierarchical, complex patterns we look for in machine learning, which could lead to even more powerful perception-oriented capabilities.
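That counting argument can be made concrete: a register of n classical bits is in exactly one of 2^n states, while the state of n qubits carries an amplitude for every one of those 2^n basis states at once. A toy illustration:

```python
# Two qubits in equal superposition: one state vector tracks all four
# basis states simultaneously, where two classical bits hold just one.
import numpy as np

n = 2
amplitudes = np.zeros(2**n, dtype=complex)   # |00>, |01>, |10>, |11>
amplitudes[:] = 0.5                           # equal superposition of all four
print("basis states tracked at once:", len(amplitudes))
probs = np.abs(amplitudes)**2                 # measurement probabilities
print("probabilities:", probs)                # 0.25 each, summing to 1
```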
The second thing: most of the time today, it’s very computationally intensive to train these algorithms. You make the system smart, you bake in the smarts, in a very large cloud computing architecture; Google, Facebook, et cetera, are doing most of this work centrally today. But the big companies, Google and Apple, and I’d assume Facebook in some way as well, are spending a lot of energy right now to push the training of the algorithms out onto mobile devices, which means your device will really start to know you. It will track your data and be personalized to how you write messages, what you like, what pictures you’re interested in, et cetera. And the caveat here, as we think about privacy, is that this could be super creepy if these large companies then got access to all of that data. But from what I know, they actually are making attempts to do this right: they’re using new privacy techniques called differential privacy, and encryption techniques like homomorphic encryption, to make sure that Google never actually gets access to this fundamental personal data.
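A minimal sketch of the differential-privacy idea mentioned here: each report is perturbed with calibrated noise before it leaves the device, so no individual value can be trusted, yet the aggregate stays accurate. The epsilon value is illustrative.

```python
# Laplace mechanism: noise scaled to sensitivity/epsilon hides individuals.
import numpy as np

def privatize(value, sensitivity=1.0, epsilon=0.5):
    """Add Laplace noise calibrated to the query's sensitivity and epsilon."""
    return value + np.random.laplace(scale=sensitivity / epsilon)

true_values = np.random.binomial(1, 0.3, size=10_000)   # users' private bits
noisy_reports = [privatize(v) for v in true_values]      # what leaves devices

# Individually useless, collectively accurate:
print("true mean:", true_values.mean())
print("estimated mean:", round(np.mean(noisy_reports), 3))
```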
And if we think about edge computing: all of our little toasters and shower heads and everything connected to the internet today, all that information has to come back to a centralized server to then make it back to your fridge to tell you what to cook next. But if that processing exists out on the edge, I think it could lead to some startling new applications. So for me, that’s the super exciting area.
I just want to add one thing. I think the area of augmented reality is very exciting. That’s machine learning coming together with some of the processing power Kathryn was talking about. I think that’s a world where augmentation will be really cool, and the world seems to be heading that way, because machine learning has become good enough to facilitate that kind of stuff. The algorithms we’re talking about are good enough, the hardware power is there, the speed is there. So that’s where I would expect to see a lot of augmentation happening.
Yeah, imagine kids taking a field trip to the Great Barrier Reef, presuming we don’t kill the earth with global warming and it still exists, which is another problem we’re not talking about. They could snorkel around with their little glasses on and have the taxonomy of the fish identified in real time. Education could be awesome with augmented reality: you could take a walk through the park with your botany glasses on, you know?
Yeah. Some people are actually creating science labs where you use augmented reality or virtual reality to go in and do experiments in virtual labs, things like that. But just to comment on your question: I already mentioned brain-machine interfaces, and they’re already being used. People who are paralyzed, for example, are being wired to computers, and they can now, or are being trained to, move different things using their minds, to type things on computers, et cetera. There are also people implanting things into their bodies so that they can open doors or pay at checkout counters. There was this guy with an antenna on his head; all of that has actually been done. And there’s a lot more integration coming. DARPA is really at the forefront of a lot of these technologies. There’s something called the Silent Talk program, where they’re trying to get soldiers to be able to talk to each other using their minds, and they’re trying to do that within the next five years. That was their mandate: within five years, they want a closed-loop system where the machine can automatically monitor what’s going on out there and feed that back to the user.
An intelligent, autonomous nervous system for the outside world. The augmented reality stuff is basically “Her” as a guide to the outside world.
That’s exactly right.
So, yeah, I think Kathryn described many of these, what I would call technology enablers. Quantum definitely has enormous potential; it’s a huge direction for IBM, and its impact will be enormous, just in terms of computation. And computation is important: it’s actually one of the primary things driving our ability to learn more complex models and more sophisticated knowledge, and so on. But one of the areas I’m very interested in is the application of AI to creativity. If you think about the spectrum of artificial intelligence: we’ve talked a lot about perception, the ability to see and hear and speak and so on; in the middle maybe we have knowledge and reasoning; but then where is creativity? I think creativity is at the far end of the spectrum, and I think it’s even harder to answer the question “What is creativity?” than it may be to answer “What is intelligence?” We can describe a few attributes of creativity, but it’s hard to really know what is happening, and it’s really entirely a human endeavor today. I like to think of myself as an engineer: it’s all about method. But what creative people do still seems to me to be magic somehow.
We did a project at IBM this last year around horror movies. We actually got Watson to watch hundreds of horror movies, so I like to think we sent it to film school. I also like to think I went from teaching Watson to see to teaching Watson to feel, because what we were able to get Watson to do with these horror movies was not just see objects and scenes and people and transcribe speech, but characterize the content in terms of emotion. Did a scene sound scary? Did a scene look happy? Did it look sad? Through these algorithms we were able to get Watson to make a fairly good assessment of horror movies, and also of horror movie trailers. We learned the patterns for making a horror trailer.
Essentially it came down to three things: the scenes in a horror movie trailer are either suspenseful, scary, or tender. That’s it, but you need all three; the contrast is there. And we were able to take that simple recipe, apply it to a brand new horror movie, and have Watson do a significant part of making the trailer for that movie. We just needed a film editor to come in, and within one day create an entire trailer for the movie.
So it’s a good example of taking something that might be a three-month effort for a production team down to a computer-assisted task that happened in a day. But I think this just scratches the surface. It points to a lot of potential for the computer to be an assistant that augments the creative process and does some of the mundane work. The computer has no problem watching every movie that was ever made, if it comes down to it, and extracting insights; it can really be an aid to the creative person. Certainly in filmmaking, but it certainly goes beyond making movies.
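A hedged sketch of that recipe, assuming some upstream model has already scored each scene for the three moods Smith names; the scoring model itself, and all numbers, are invented, not IBM’s system.

```python
# Score-and-shortlist: pick top scenes per mood, hand the mix to an editor.
def shortlist_scenes(scenes, per_mood=3):
    """scenes: dicts like {'id': 7, 'suspense': .9, 'scary': .2, 'tender': .1}"""
    shortlist = []
    for mood in ("suspense", "scary", "tender"):   # the trailer needs all three
        ranked = sorted(scenes, key=lambda s: -s[mood])
        shortlist.extend(s["id"] for s in ranked[:per_mood])
    return sorted(set(shortlist))                   # the human editor sequences these

scenes = [{"id": i, "suspense": s, "scary": c, "tender": t}
          for i, (s, c, t) in enumerate([(0.9, 0.2, 0.1), (0.1, 0.8, 0.0),
                                         (0.2, 0.1, 0.9), (0.7, 0.6, 0.2)])]
print(shortlist_scenes(scenes, per_mood=1))
```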
There’s a cool trend in the computer vision world called style transfer, and I find it a neat accidental byproduct of the research process. As people were trying to hone the algorithms to do the type of work John mentioned, computer vision and perception, being able to accurately label a picture according to its content, they had to pass the image through a network that twists and turns and transforms the input, gradually getting rid of all the noise so that it can focus on the general representation that can be affiliated with a linguistic term: cat, dog, the glass of wine I’ve been using during the presentation. A group of researchers noticed that the “noise” that got removed just so happened to be what we call artistic style when they put in a painting. So if you put in Van Gogh’s Starry Night and pass it through these algorithms, they will pull out the Starry Night-ness, and then you can impose that on your favorite selfie on your Facebook page. The art-critical community has different views on this. Some people say it’s kitsch; it’s ridiculous that our selfies can become a Rembrandt. On the flip side, it sort of democratizes the skill set around art. There’s an app called Pikazo, and I’ve had some conversations with the CEO of the app, who said that while the critics panned it, he had this influx of emails from people just like you and me saying, oh my god, I always wanted to paint, but I just don’t have time to learn it, and now I feel like I have this agency: I can go and make a Kandinsky, I can make my own Mondrian or whatever. And there are the commercial applications too.
I’ve done a lot of work in startups, and we’ve got no budget, so we can’t afford $50,000 branding agencies. It’s really cool that you don’t have to hire that: you can pull up this app and, for five cents, make a really professional, stylish website. So it’s not quite creativity as we think of it, but the sacrosanctness of genius and art gets challenged, and it forces us to ask these big philosophical questions about what we value in society. From an epistemological and cognitive perspective, it’s also teaching us about, and has inspired some research on, the correlations between how we see and align language with the world we process, how we make art, and what qualifies as style. So I think that’s also a super cool consequence of this moment.
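For the technically curious, a minimal numeric sketch of the style-transfer idea described above (after Gatys et al.): “content” lives in a network’s feature maps, “style” in the correlations between those maps (the Gram matrix). The feature maps below are random stand-ins for a real pretrained network’s activations.

```python
# Content loss compares feature maps; style loss compares Gram matrices.
import numpy as np

def gram(features):
    """Style representation: channel-by-channel correlations."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

rng = np.random.default_rng(0)
content_feats = rng.random((8, 16, 16))   # e.g. from the photo
style_feats = rng.random((8, 16, 16))     # e.g. from Starry Night
result_feats = rng.random((8, 16, 16))    # from the image being optimized

content_loss = np.mean((result_feats - content_feats) ** 2)
style_loss = np.mean((gram(result_feats) - gram(style_feats)) ** 2)
total_loss = content_loss + 1e3 * style_loss   # weighting is illustrative
print(total_loss)   # gradient descent on the image would minimize this
```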
Well, let me just ask John on the creativity side, because you guys have done things with cooking and other projects. Where do you come out on this? Anybody who’s done any real reporting or research on artificial intelligence comes away with an incredible appreciation for what we call general human intelligence, right? Running on only 20 watts, low power. And it feeds into the AI debate, particularly about the brain: is it an inspirational metaphor or is it a roadmap?
Computers do not work like the human brain. They are not 20 watts, and they are very fast but not massively parallel. It’s a very different ability, which can have great impact, certainly for data processing, finding patterns, and making predictions; that’s really its strength. But in terms of replacing that 20 watts of human ability, I think we have a ways to go.
Yeah. It reminds me of that famous line from one of the pioneers of voice recognition, who worked for IBM, Fred Jelinek. The way he explained it: airplanes don’t flap their wings. You can do it differently.
Yeah, that’s right. And you asked, is it a metaphor? Certainly there’s a lot there, because whatever else, biological systems, human cognition and perception, are the best system we know. So we can certainly learn a lot by knowing it better, knowing how it works. It doesn’t necessarily mean that’s how we should build our computers.
So I just want to comment on the creativity. I’m not sure it’s true that machines can’t be more creative than we are. Just to give one example, take AlphaGo. When it was playing Lee Sedol, it made this move, move 37, a move that nobody had ever made. You might think of creativity in terms of being able to come up with solutions, or a conceptual space, that hasn’t been thought of before. And to that extent it does satisfy that criterion of creativity: it came up with ideas that haven’t been thought of before, and I think machines are going to be able to do that. They’re going to be able to locate spaces we haven’t been to before. So, I don’t know what you think about that?
Even when Kasparov played against Deep Blue in chess, he reached a point where he felt the computer was not playing fair, that there was really a person behind it, because whatever move Deep Blue had made just didn’t seem logical to him. But in the end, what was Deep Blue? Deep Blue was a massive search system. There was no intelligence there, in the way we think about a lot of the problems in AI; it was just a brute-force search problem. So sometimes we might ascribe some of these qualities to it, but knowing what was behind the curtain, you wouldn’t say Deep Blue was creative.
I see.
I think we’re talking about two types of creativity here. What Matthew alluded to is more in the realm of reinforcement learning; Kathryn mentioned supervised and unsupervised learning, and there’s also reinforcement learning, where the reward is delayed. DeepMind has been really masterful at pushing the limits of reinforcement learning, which in simple terms boils down to: you have an evaluation function, you’re here, you can evaluate what got you here, but then what’s the remaining path to the end of the game? That’s a recursive kind of problem, and they’ve become really good at solving it. So a machine may make a move and you say, wow, that’s never happened before, that’s creative. But there’s another aspect of creativity that seems to be outside the realm of these kinds of algorithms, which is that we just find ways to do stuff and we can’t quite describe how, such as dealing with each other. Machines are not good at dealing with people at the moment; the human-machine interface is actually quite poor, quite stunted in that sense. A lot of the research efforts in creativity, I think, will make a dent in that arena: getting machines to be more empathic, to have the kinds of qualities we so seamlessly seem to exhibit.
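A compact sketch of that recursive “what’s the remaining path to the end of the game” idea: value iteration on a toy five-state chain whose only reward comes at the end, the delayed reward Dhar mentions. The chain and discount factor are invented for illustration.

```python
# Bellman backups: each state's value is the best of (stay, step toward goal).
import numpy as np

n_states, gamma = 5, 0.9
values = np.zeros(n_states)                 # state 4 is the terminal goal
for _ in range(100):                        # iterate until values stabilize
    for s in range(n_states - 1):
        step_reward = 1.0 if s + 1 == n_states - 1 else 0.0
        values[s] = max(gamma * values[s],                  # stay put
                        step_reward + gamma * values[s + 1])  # move right
print(np.round(values, 3))   # value grows as states near the delayed reward
```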
On the flip side, there’s a film called Tim’s Vermeer, and I think it’s fantastic for two reasons. One is that it actually forces us to question our assumptions about human artistic creativity. It’s a movie about a guy, Tim Jenison, a founder of the video-graphics company NewTek, who has a theory about the 17th-century Dutch painter Vermeer and wants to test it. He surmises that Vermeer used an optical system with two mirrors: one that reflected the world he was trying to depict, and another that shot that reflection down onto a canvas. And then Tim hypothesizes that Vermeer used something like a paint-by-number system, applying pigment until the color gradient between the reflection and the painting disappeared, and the minute it got there, shifting on. He goes through, in a seemingly extraordinarily uncreative way, and makes what were the most realistic, and I’m a huge fan, just tantalizing paintings of the 17th century, paintings that looked radically different from those of Vermeer’s contemporaries. So what I like about that is that it’s less about machine creativity and more about our assumptions of what qualifies as artistic creativity. That has a lot of ramifications for the scientific research community too. I believe the myth of the solitary genius who sits there and solves Fermat’s Last Theorem is not necessarily the way innovation and technology progress.
This is very much back to “Her” and artificial general intelligence, right? There’s stuff going on collectively that we, trapped in our bodies’ subjective viewpoints, don’t often perceive. But if we could perceive it, who knows what might happen. The second cool thing about that film is that to make this happen, Tim had to reverse-engineer what he thought would have been the original scene. So to do it he has to learn about 17th-century glasswork and textiles and woodworking. It’s kind of an engineer’s paradise, because within this one single project lies a whole world: he has to learn all these things in order to solve his problem. So I like it as something that also forces us to question our inherited assumptions about what qualifies as human creativity in the first place, which can often be very scientific and constraint-oriented.
Are you guys really worried that what you’re doing is going to make the world a rougher place for more people?
I think IBM is very clear on this. We’ve outlined principles for AI around three dimensions. The first we call purpose: the role is augmentation, working together with humans on important industry problems. The second is trust: when we build these AI systems, we’ll make it very clear how the models were learned, when AI is being used, what data is behind it, and so on. And the third is skill: it comes back to how these systems are trained, that they will learn from the human experts. It’s about taking all of that human expertise and trying to make a joint computer-human capability that’s even better, that can help scale expertise and solve difficult industry problems. So I think we’re very optimistic about the potential if we follow these three things.
Don’t feel compelled if you don’t have one, but if you’ve got a strong point of view, I’d like to hear it.
Yeah, I worry. I don’t have that optimistic prognosis on employment. I think the world will be a cool place: there will be amazing machines, augmented reality, all kinds of stuff. You know, my wife yesterday showed me a Groupon coupon for data science. Did you expect to see this? I was like, no, that’s really interesting. And that kind of tells you where people are focusing. But not everyone can be a data scientist, and if machines are making these decisions better than us, we’ll be doing something else. I don’t know what that is. It’s hard to say what it’s going to do for employment, but like I said, I worry about inequality.
Okay.
I kind of side with the economists here. There have been multiple technology revolutions historically, and jobs haven’t disappeared. If you look through history, and I come from a history and philosophy background, people have worried about the same things time and time again. There’s this film from 1957, “Desk Set,” a Katharine Hepburn and Spencer Tracy screwball comedy, absolutely fantastic: they were all worried they were going to lose their jobs to the computer, and it didn’t happen. I also do think, though, that the fact that subtasks of white-collar jobs, investment banking, accounting, medicine, et cetera, can be automated pushes a sensitive button in today’s contemporary society. There are a lot of people in the world today who have horrible jobs, jobs they don’t really want, and we don’t think about them; we’re worried about the fancy things. Some people are worried about self-driving cars and what that’s going to do to the trucking industry, and obviously there’s the manufacturing industry. But I think the fact that cognitive tasks are often hyper-specialized, which makes them great candidates for narrow intelligence, leads to some discomfort with the way our society is currently constructed, and that’s what people are reacting to.
So I think this is going to transform our society, and there are going to be transition costs: people are going to lose their jobs, and it’s not so easy to retrain people, especially if you’re in your fifties, to say, oh, you can take a course on data mining; it’s very hard. And so it’s going to hurt a lot of people, and I do think we need a discussion in our society about the inequalities that are going to result. In the long run, maybe it works out, just as when we transitioned from horse and buggies to cars: people lost jobs, et cetera. But this time it’s going to happen even faster, and so we as a society need to be able to deal with that transition. How do we help these people? Do we do something like universal basic income? We do need to have that conversation; otherwise the inequality is going to be hugely exacerbated, and that’s going to be really bad.
And technologists should be part of that debate, as opposed to just the public policy people.
So we have time for some questions. I think there are people with microphones.
My main concern is how we should educate the next generation, like my son, who is 12. Which type of profession should we encourage, knowing that it’s going to be harder in the future?
I’ve written a lot about this topic of education in this age. As we’ve been discussing in this whole debate between artificial general and narrow intelligence, general intelligence is something that really has not been achieved yet, and there’s a lot of value in our being able to do multiple things. So not just focusing on one single thing: the value of a classical liberal arts education, really learning not only one thing but many things, will be valuable in the future. And the other thing, as Matthew just mentioned, is skills transfer. The most valuable skill set one can have today is the ability to learn new things, because as the technology keeps changing, you don’t want to get stuck where you have this one skill, and if you can’t perform that skill, you’re out of a job. If you can go on a MOOC, a massive open online course, and learn a new skill when you’re 25, when you’re 30, when you’re 35, those are going to be the Darwinian fittest for the new economy. And the education system hasn’t solved this at all. I hope the technologists will be in dialogue with the education policymakers to make sure that both general education and skills transfer are focused on in the future.
Another question?
The thing that’s really looming on the horizon that I see is the self-driving car. There’s a lot of money going there, and they really want to make it happen very quickly; one of the things I keep hearing is that it’s going to happen within five years. I feel like it’s going to be shoved down our throats rather than us being allowed to arrive at some sort of organic solution. So how do you see this being implemented?
So maybe I can go. One of the things we haven’t really talked about is how we build ethics into AI, and that’s very prominent in the case of the self-driving car. As philosophers, we talk about trolley problems: what happens when a trolley is headed toward five people? Well, now consider the self-driving car. Imagine the car is driving and it could kill five people, or it could swerve off to the side and kill you. What should the car do, and how should the programmer program that into the car? Would you buy a car that would kill you, even if maybe that’s the right thing to do? And who gets to decide that? So there’s the question of who is going to build these ethics into the AI. That’s a huge area that hasn’t really been discussed well, and that people are starting to discuss now.
My take on that has to do with different geographical locations being a better fit for self-driving cars. Singapore would be great. Self-driving cars work best if we go from all human drivers today to all self-driving tomorrow, because once you’ve got all the cars talking to one another with their sensors, it becomes a very stable system; it loses a lot of complexity. Bangalore is the worst place for self-driving cars: if you’ve ever been to Bangalore, people are honking and driving all over the place, traffic lights or no traffic lights, so it would be super hard. It goes back to what Vasant was saying about the quickness with which you have to make a decision and the level of chaos in your system. So I do think the technology is there, and it’s going to be a policy decision. Who knows, in the States it’ll be Uber that starts, because they’re as aggressive as they are; they’ll just put the fleet out, and God knows what will happen, and then they’ll get sued and pay it off, and the next thing you know, everybody will have self-driving cars, right?
But I think it’ll be a geographically specific rollout: somewhere more like a Singaporean state will probably be the early adopter, and then it’ll gradually spread. I can’t imagine when it’ll make it to a place like India. The interesting point, though, is going to be this middle threshold where you’ve got some human drivers and some self-driving cars. There are, I can’t remember what they call them, something like four levels of self-driving cars, from cruise control to full automation, and right now there’s a lot of work on when the human intervenes, and who’s liable when the human intervenes. It goes into the ethics and IP: is it the carmaker? Do we model this like a software liability problem? So I think right now the issue is really one of policy and ethics.
But there was another point earlier on that is also relevant for self-driving cars: we often hold computers to higher standards than human behavior. In the self-driving car world, there are so many accidents per day that it’s kind of a no-brainer that we should go to the autonomous vehicle paradigm; even if there will be accidents, there will be fewer than there are with human drivers. But it’s so hard to get people to appreciate that. For me, that’s the messy debate in the space right now.
And it’s not all math. Well, I think we’re a little past time here. I want to thank the panelists and all of you for coming.