Yoshua Bengio is known as one of the "three musketeers" of deep learning, the type of artificial intelligence (AI) that dominates the field today.

Bengio, a professor at the University of Montreal, is credited with making key breakthroughs in the use of neural networks, and just as importantly, with persevering with the work through the long cold AI winter of the late 1980s and the 1990s, when most people thought that neural networks were a dead end.

He was rewarded for his perseverance in 2018, when he and his fellow musketeers (Geoffrey Hinton and Yann LeCun) won the Turing Award, which is often called the Nobel Prize of computing.

Today, there's increasing discussion about the limitations of deep learning. Bengio will speak on a similar subject tomorrow at NeurIPS, the biggest and buzziest AI conference in the world; his talk is titled "From System 1 Deep Learning to System 2 Deep Learning." In that context, IEEE Spectrum spoke to Bengio about where the field should go from here.

IEEE Spectrum: What do you think about all the discussion of deep learning's limitations?

Yoshua Bengio: Too many public-facing venues don't understand a central thing about the way we do research, in AI and other disciplines: We try to understand the limitations of the theories and methods we currently have, in order to extend the reach of our intellectual tools. So deep learning researchers are looking to find the places where it's not working as well as we'd like, so we can figure out what needs to be added and what needs to be explored. That work gets picked up by people like Gary Marcus, who put out the message: "Look, deep learning doesn't work." But really, what researchers like me are doing is expanding its reach. When I talk about things like the need for AI systems to understand causality, I'm not saying that this will replace deep learning. I'm trying to add something to the toolbox. What matters to me as a scientist is what needs to be explored in order to solve the problems. Not who's right, who's wrong, or who's praying at which chapel.

Spectrum: How do you assess the current state of deep learning?

Bengio: In terms of how much progress we've made in this work over the last two decades: I don't think we're anywhere close today to the level of intelligence of a two-year-old child. But maybe we have algorithms that are equivalent to lower animals, for perception. And we're gradually climbing this ladder in terms of tools that allow an entity to explore its environment.

One of the big debates these days is: What are the elements of higher-level cognition? Causality is one element of it, and there's also reasoning and planning, imagination, and credit assignment ("what should I have done?"). In classical AI, they tried to obtain these things with logic and symbols. Some people say we can do it with classic AI, maybe with improvements. Then there are people like me, who think that we should take the tools we've built in the last few years to create these functionalities in a way that's similar to the way humans do reasoning, which is actually quite different from the way a purely logical system based on search does it.

The dawn of brain-inspired computation

Spectrum: How can we create functions similar to human reasoning?

Bengio: Attention mechanisms allow us to learn how to focus our computation on a few elements, a set of computations. Humans do that; it's a particularly important part of conscious processing. When you're conscious of something, you're focusing on a few elements, maybe a certain thought, then you move on to another thought. This is very different from standard neural networks, which instead do parallel processing on a big scale. We've had big breakthroughs on computer vision, translation, and memory thanks to these attention mechanisms, but I believe it's just the beginning of a different style of brain-inspired computation.

It's not that we have solved the problem, but I think we have a lot of the tools to get started. And I'm not saying it's going to be easy. I wrote a paper in 2017 called "The Consciousness Prior" that laid out the issue. I have several students working on this, and I know it is a long-term endeavor.

Spectrum: What other aspects of human intelligence would you like to replicate in AI?

Bengio: We also talk about the ability of neural nets to imagine: Reasoning, memory, and imagination are three aspects of the same thing going on in your mind. You project yourself into the past or the future, and when you move along these projections, you're doing reasoning. If you anticipate something bad happening in the future, you change course; that's how you do planning. And you're using memory too, because you go back to things you know in order to make judgments. You select things from the present and things from the past that are relevant.
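The attention mechanisms Bengio describes, learning to focus computation on a few relevant elements rather than weighting everything equally, can be illustrated with a minimal scaled dot-product attention sketch in numpy. This is my own illustrative example, not code from the interview or from "The Consciousness Prior"; the toy keys, values, and query are made up for demonstration:

```python
import numpy as np

def softmax(scores):
    """Numerically stable softmax over the last axis."""
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(query, keys, values):
    """Scaled dot-product attention: the query is compared with every key,
    and the resulting weights concentrate the output on the few most
    relevant values instead of treating all inputs equally."""
    d_k = keys.shape[-1]
    scores = query @ keys.T / np.sqrt(d_k)  # one similarity score per key
    weights = softmax(scores)               # positive weights that sum to 1
    return weights @ values, weights

# Four input elements; the query points strongly at element 2.
keys = np.array([[1., 0., 0.],
                 [0., 1., 0.],
                 [0., 0., 1.],
                 [1., 1., 0.]])
values = np.array([[1., 0.],
                   [0., 1.],
                   [5., 5.],
                   [2., 2.]])
query = np.array([[0., 0., 4.]])

output, weights = attention(query, keys, values)
```

The softmax weights are the "focus": most of the mass lands on the one element whose key matches the query, so the output is dominated by that element's value, which is the selective, one-thing-at-a-time processing Bengio contrasts with large-scale parallel computation.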