 | A | B |
---|---|---|
1 | Tag | Quote |
2 | README: This is a list of "tags" used to describe themes in the transcripts, along with attached quotes. This document does not include all tags, nor all tag-quote pairs, and is sorted alphabetically by tag. The quotes aren't transcribed in a polished way (there are polished transcripts elsewhere). If there's no speaker label in the quote, assume that the interviewee was speaking; the interviewer is usually labeled "VG". Within a tag, each quote is usually from a different interviewee, but sometimes multiple rows are from the same interviewee. | |
3 | General\blame or fault | I worry that in the mid-term we'll achieve something a lot more like a bridge falls down and the engineer looks at the bridge and says, "Ho-ho, that lazy bridge didn't want to stay up." And it's like, the bridge isn't lazy. The bridge is just... You didn't design it correctly. We shouldn't anthropomorphise it. And I feel like a lot of systems right now we look at them and because we tend to think of them as human, or like if a human behaved that way, we'd be like, "Oh, it cheated. It found a loophole. It's trying to do what you asked, but not what you wanted and things like that." And it's like the AI would like nothing better than to optimize whatever the heck it is that you gave it, right? And when I say like, that's not even right. Like the AI doesn't have any of this sort of stuff. |
4 | General\blame or fault | So there is going to be a lot of money and power around it, around activities around AI. So I don't want to blame AI for the bad stuff that happens. |
5 | General\blame or fault | That's true. But that's assuming that humans know exactly what they're optimizing and are perfect at it. So I think that becomes a problem of credit assignment where it's easier to credit or discredit a human than it is to a machine, possibly. If I burned down a house I can easily be like, "Hey, it's your fault. You left the iron on," versus a robot, where if I want to have my suit ironed, I will need... Or I might have the problem that it burned something down and then I'm blaming a black box thing which... It's really quite hard to aim. |
6 | General\blame or fault | But if you just deploy it without ever considering things like what if in edge cases and scenarios where your vehicle actually harms someone, who will be responsible? What will happen? |
7 | General\blame or fault | Similarly, in case of healthcare, if something is misdiagnosed because of your AI tool, who will be responsible? |
8 | General\blame or fault | So there's no way to make sure that it would work all the time, and because of that, we don't know when it's going to not work and what will be the consequences and who will be responsible for that. |
9 | General\dual-use | I think that's a push that the biggest labs in the world are making, right? So it is likely concerning for me that it has a bit of a... That vibe like, "Oh, we need to make it because someone else is going to make it, otherwise it'll need to be..." We need to like... I don't know, I've been familiarized with the gain of function studies for viruses, and I don't think it's the same, so I'm kind of against that type of study so just making a virus extremely potent and extremely transmissive just because it might just happen in nature anyway, and that we want to be able to defend ourself. Things can go wrong, it just feels the likelihood of things going wrong from that lab is higher than natural emergence, but I feel like with the... There is a bit of... I think there is a bit of illusion of, I don't know, kind of God syndrome of some researchers, even like Ilya Sutskever tweeting like, maybe these things are partially conscious, |
10 | General\dual-use | 0:39:33.5 Interviewee: I think it depends on who uses AI. For me I would rather think AI is a neutral technology, but it could possibly be misused by malicious actors. But it can also be used by scientists to accelerate the scientific discovery. So for example, if in the future there is a similar pandemic like COVID and if we can quickly put together all those AI techniques we have to discover new drugs or new vaccines, I think that will be certainly good for our society. So I think it really depends on how you use that technology. Like the nuclear, they can be used for power generation and not necessarily for war. So it really depends on how you're using that technology. |
11 | General\dual-use | 0:48:22.2 Interviewee: AI is one where I would say that... When we say dual-use in biology, we're talking about the very real ability to create something very dangerous directly based on its capabilities, and that I would say is like 50/50, whereas AI is, yes, it's dual-use in that way on like the currently existing things of like spyware, or the like short-term super sophisticated espionage tools and Good autonomous weapons, yeah. In that way, it's dual-use. And then in the existential risk realm, I don't really view it as a dual-use, but, yeah, I do see the point, though, it is a dual-use type... Yeah, I guess my pessimism does cut both ways. I'm bearish on utopia and I'm also bearish on AI destroying us. |
12 | General\engineering-vs-science | So people are working for papers, I don't like it but that's the way it is. And if I could change something, I would have changed the quality of the works, not just tuning some parameter or doing ugly works, taking different parameters, making it larger models to just do something to get a paper. I don't like that, but I'd like to encourage people to do more exciting things. It take more time to come up with something really thoughtful, not thoughtless papers that get published. I like to change that. Tried for papers, but get something in a better conference, so you have to work better on that. |
13 | General\engineering-vs-science | So like research or the research community as a whole, I think it's very important to not focus on the novelty of things. I think there's some communities that understand it and there's some that don't. And I'm coming from a community that doesn't. And going into more of a community that does. If you can follow that 'cause I was... I don't know if that even made sense, but... So I'm going from a community that really cared about something that I don't think is that valuable, which is novelty, and I think the core machine learning researchers think about novelty as a holy grail of science, when I think more of making things work as the holy grail of science, it's borderline engineering, but I would argue that deep learning as a whole is an engineering field. Yeah, so I think that's something that I would say for fellow researchers to focus more on. |
14 | General\media-intuitions | so that people are just more educated about how they work and what they can expect to see, and also not to be kind of scared of... I don't know, science fiction things like Terminators. |
15 | General\media-intuitions | 0:09:58.8 Interviewee: Since we're only learning from data... And I feel like that's a common misconception of neural networks or AI in general, I've come from like... I studied in a very green city, where not many people sort of were exposed to technology and they always had this wrong misconception of AI that they're conscious and they have this Terminator view of AI which has been basically marked into their brain. And whenever I talk to people who don't really know much about computer science or machine learning in general, I first need to explain what a statistical model is, and that we're only learning from data and that we're not really building the next Terminator, it's just, we're building we are learning from AI and we're trying to make predictions based on that data. And it's not really as in the movies, that's the one thing I first try to clear up and even multiple years down the line, I don't think that's gonna change much. |
16 | General\media-intuitions | For example, if the three principles that we see in the movie like, "Oh, you should not do harm to human. You should protect humans," and so on. |
17 | General\media-intuitions | To be very honest, I think risk is sort of overhyped. Yes, AI is improving and we do need certain regulations, but at the same time, I don't think AI is in a state or we have reached where AI will create something of an autonomous bot that will start ruling the world. That thing is not happening. That's just a popular science fiction thing that is now glorified in the media. |
18 | General\media-intuitions | 0:16:01.4 VG: Yeah. It's interesting. What do you think are ways that we might wipe ourselves out that are more likely or something? 0:16:10.5 Interviewee: Other than AGI, I guess nuclear war is a big one that I feel like most people aren't too scared about anymore and I still am. I don't know, biotech, the standard sci-fi tropes on how humanity wipes itself out, I suppose. 0:16:26.8 VG: Yeah. What are some of your standard Sci-Fi books that you draw things from? 0:16:37.0 Interviewee: Other than the famous movies, I guess, just skipping over those... 0:16:41.1 VG: Well, I kinda want all of them. 0:16:42.8 Interviewee: Okay. Like Terminator and the Matrix, I think I've referenced already. That's the very pop culture AGI goes wrong sort of things. I really like Greg Egan, that's sort of more, what is that scenario? That's the scenario where it takes thousands of years, but it's after thousands of years, how good things could be, but it's less AGI and more just fundamental technology and science advancement. Trying to think of other relevant science-fiction. |
19 | General\mention paperclips | 0:27:36.8 Interviewee: So, the kind of the paper clip problem? I think this is already the case, but the question is, is it out of malice? Is it out of misspecification? Is it a fundamental issue? Like, do the people working on computer vision for facial recognition want their stuff to be used in concentration camps? |
20 | General\mention paperclips | paperclip paradox |
21 | General\mention paperclips | 0:12:53 Interviewee: Well, yeah. I agree with that. I don't think... I think there are two separate questions here. So one you're asking about is the paperclip maximizer argument from Nick Bostrom. So like if you have a system and you tell it like "you need to make as many paperclips as you possibly can" then it's going to like destroy the earth to make as many paperclips as possible. |
22 | General\mention paperclips | 0:20:28.3 Interviewee: Okay, so this is a quite big misconception of the public is that, and I think actually, it's rightfully so, because I am actually working on this, is that... So what you said right now is like the famous paperclip example, right? Which is going to turn the whole world in particular, and that's kind of what you said more or less, right? So the problem here is that current system, a lot of them, it's true, they work on a very specific one-dimensional objective. There's a loss function that we're trying to minimize. And it's true, like GPT-3 and all these system, currently they are there, they have only one number, it want to minimize. And this is, if you think about it, is way too simplistic for this very reason, right? Exactly, because if you want to just maximize number of paperclips, you're just going to turn the whole world into the paperclip machine factory. And that's the problem. But the reality is much more complicated. And in fact, we are moving there, like my research was, we're moving away from that. We're trying to understand, we're trying to understand if first of all intelligence can emerge on its own without it being minimized explicitly. |
23 | General\mention paperclips | paperclip maximizer experiment, |
24 | General\mention paperclips | 0:21:54.3 Interviewee: We're just taking one step at a time toward that paper clip argument, right? Yeah, so I don't really follow why that is true. So I'm missing some connection between being able to solve complicated problems and believing you need to be checked on all the time. Okay, so let me try and reconstruct your argument. You're saying that here's something that is capable of solving complicated problems. |
25 | General\power too centralized | 0:18:04.7 Interviewee: Facebook, Google, and exactly that comes into the picture, and all they care is to have more money or in the near future. So that's why I'm worried about it. So if we leave the AI to their hands, then it will be really bad. But if it's somehow create a system that's decentralized, that's not centralized on some big companies, which if you have some sort of a decentralized system, then I'm quite optimistic about AI, but otherwise it would be really bad for us. |
26 | General\power too centralized | 0:22:56.3 Interviewee: Yeah, it's hard to imagine because I'm not sure when we reach that state. I'm not sure whether we will have enough power or not... Right now, as I said, these big companies are still trying to get our information as much as they can, they are scanning our fingerprints, or with this face ID they are taking the image of our faces and they are storing this information. Maybe when the time comes, they will threaten us with our information or stuff like this, so we will not have enough power to counter their attacks, so I think that's why I think it's worrying. 0:23:45.2 VG: Okay, and they here is the AI, or the government? 0:23:48.7 Interviewee: No, this is the big companies. The ones they have the data, yeah. That's the power, right? 0:23:55.5 VG: Okay. So, oh, okay, so when you're talking about humanity losing control to... I thought it was the AI, but instead it's the companies? 0:24:04.7 Interviewee: Yeah. 0:24:06.3 VG: Okay, so the... Okay, I was confused. Alright, so the danger that one has in developing AI is just that the companies that control AI will have control over society. |
27 | General\power too centralized | But going back, backtracking a little bit, that we are giving too much power into the hands of very few. So think about OpenAI, it's a good example. When they released GPT Evolve, they didn't really release it, then they announced that they had GPT2, "Oh, it's so dangerous, we can generate fake news now, that is really good." Meaning that we don't want anybody to do this, so we keep it to ourselves. |
28 | General\power too centralized | 0:06:13.8 Interviewee: Yeah. So, what I am worried about it, if we move to the next part of the question, what I am worried about it, and this is a very interesting question and a very interesting topic for thought altogether, I think the worry, the main worry for me is that, now that AI is sticking, you see it promoted, you see, for instance, companies trying to recruit people who are knowledgeable of machine learning and AI or these methods. And they wish to apply them to their specific sector of business, right? 0:06:58.5 Interviewee: So this is creating a lot of activity. This is catching a lot of interest. It is likely to lead a lot of business. So there is going to be a lot of money and power around it, around activities around AI. So I don't want to blame AI for the bad stuff that happens. But human activities that involve resorting to AI technologies or to AI assisted anything. That could be engineering, medicine, science. It could be many, many domains that could benefit from the assistance of AI tools solutions. So the worry is that, the same problems that we have seen in the past that are associated to power structures, and let's call it some groups of people that want to get in. And very small numbers of people holding the doors. So those same kinds of problems that we have seen in the past. And the past can be in the mid and very long scale. In the last decades or in the last centuries even, that some of those problems could get transported to this new area of activity around AI. So we are in a situation, I think, in which there is still the hope that that doesn't happen, provided that we do some clever things around it. But the concern is that some of those things may be happening already, given that economic activities, and I don't know, corporations, all of those things are in the hands of a small group of people. 0:08:45.5 Interviewee: And it may just happen like before, like you know that... Whatever happens, we say yeah, it will be influenced by those who can decide like where to locate money, where to send resources, what people to hire and whatnot. So it is very generically like this. And I don't want to go into the details of a potentially very negative discourse. But that would be my main worry... Go ahead, sorry. |
29 | General\power too centralized | It's scary if only the biggest players get to decide what we can train these incredibly powerful models to do. I think there's been a lot of discussion in tech recently about the fact that there's developer teams of 40 people over in Silicon Valley changing the lives of people around the world. |
30 | General\racing | 0:45:10.0 Interviewee: Exactly, exactly. And Asia is taking over. This is why America is getting nervous and there will be even less incentive of regulating because of course it holds research back, and Europe just sits there and waits. And that is a good point. So actually, this competition argument, the market competition, the, I don't know, countries' competition, which in my field of high-performance computing was always big, and every time the new Asian super computer was on the number one of the list, America was launching a multi-million... $100 million program to change that to reclaim the number one spot, which was very... Well, very good for the research, of course. And the same thing is happening in AI. It's just happening at a different scale, so it's even larger. So I agree, there will not be a lot of regulation in this competition. That's very true. Am I afraid of it? Not yet. I personally am not really afraid. |
31 | General\racing | I think that's a push that the biggest labs in the world are making, right? So it is likely concerning for me that it has a bit of a... That vibe like, "Oh, we need to make it because someone else is going to make it, otherwise it'll need to be..." |
32 | General\racing | I'm picking on China a lot. So let's say Beijing or Iran is the one to do it or even Washington DC, I don't particularly trust them. If they're the one to do it first and it sort of does result in the whole technological explosion or whatever, then that could be bad, even if it went well and stays under control and listens and it's generally like not the paperclip maximizer or not Skynet. |
33 | General\agree AI will continue unregulated and could be unsafe because of market argument | 0:42:56.4 VG: Yeah, something I'm worried about is I feel like AI is more dangerous than something like nuclear weapons because of its potential for some sort of agency or something where intelligence is special because it can self-modify and in a way that nuclear weapons can't, they just sit there. And so to that extent, I'm like, "Oh, I think that AI might not be able to just be solved by regulation because it will explode. Another thing I'm worried about is this is intensely commercially incentivized whereas nuclear weapons aren't, and so how... 0:43:35.8 VG: Yeah, are we going to have good policy on systems? You can't just shut down the whole thing, people are definitely not going to like that. Even if you're a researcher, you're like, "Well, but some research should be allowed, right?" Yeah, I don't know. 0:43:51.7 Interviewee: That's an excellent point, but yeah, I think the market argument is the killer. It'll drive development. There is no reason to develop nuclear weapons further. They're just good enough. And that's true. So they will not turn into the overlords. Nuclear energy, yes, but that's again not a danger zone. Yes, yes, so I agree on that side. The market is super large. You'll have a hard time regulating that market, because we have a hard time regulating the drug market today. It's going to turn into a similar thing. If there's somebody's willing to pay for it, there will be agents that are willing to execute, illegal or not. And then you could even argue that as we've talked about before, the culture is different in different, very large regions with a lot of market power. You could argue about the regulation in the different areas of Earth. Europe is number one in regulating. They're regulating everything to the extent that it's no fun anymore, and then America is regulating a little bit, but not really, if you think about it, and then Asia, I'm not even sure if there is so much regulation. |
34 | General\agree AI will continue unregulated and could be unsafe because of market argument | Because all of these companies are only gonna develop things that are going to sell more things because that's how these companies work in a capitalist market, and that is the problem. |
35 | General\too much focus on benchmarks or performance | That's why I mentioned the big companies. Right now, people put their... Most of their efforts to increase the accuracy or performance of these artificial intelligence systems. They don't put some effort to understand the underlying mechanisms for these architectures. If you do that, it will be a much safer future for us. But they are not doing this because in the short-term, what will give them money is to put some effort to improve the accuracy of the models. Yeah. |
36 | General\too much focus on benchmarks or performance | 0:24:45.6 Interviewee: Yeah, I think in general, I agree that there is nothing about the long-term and maybe less like tracing, just tracing the state of the art. The one thing which I think is a sign of it, is just more focused on explainability. And in the last years, we've seen a massive increase. I'm personally a recent big fan of the field. And its work when people use it. It's so important, not even to have explainability for them, but it's way more important, and so I can have the explainability. I can see what's happening inside the model. And so I think it's a sign that people are thinking, the long-term, and the fact that right now when we're setting up a new model... 0:25:38.4 Interviewee: Last meeting today, I was like, "Alright guys, we are doing this. But if we're not going to have explainability, if we're not going to have a plan now, we're not doing this. And I remember three months ago, it was like, "No, let's just see whether it performs. And let's see later." And now it was like, "Yeah, you're right." So I think this as a long-term I think about other things like trying to care about that, you know, humanity exists. I didn't see that so much, just because I was in another part of community who develops, who works on that for a while. So I didn't feel that specifically. |
37 | General\too much focus on benchmarks or performance | it's very much kind of performance-oriented, it's very dry and boring. To be honest, they're all just benchmarks to beat, and it's like now. Well, these are all surveillance algorithms. And I just kind of want to think a bit... I'm uncomfortable with many things |
38 | General\too much focus on benchmarks or performance | 0:34:52.0 Interviewee: Yeah, yeah, I agree. I mean the culture... I think first thing to do is probably to try to kind of expand a bit more of the awareness of these things in the community, and then just have people in general talking more about the implication of the work we are all doing. It's a bit just too much attached to status, performance and then making numbers go up. I think even just basic science, scientific training for PhD students is something that I would really like to see and just ethics of science and things like that. I would really like to see that in [university] for instance, it seems crazy that these things don't happen. I don't know, you're in [university], right? 0:35:40.3 Interviewee: I don't know if that's the thing there, but it just seems that people just go out there from undergrad and just start working on their PhD and they just... Okay, "I need to just have this NeurIPS paper. I need to just crunch these numbers because I'm going to get a job at Google and then I can just show to everyone on [company] that I work with [company]." Like, it seems honestly that's kind of... I mean, I'm just trivializing it but it seems that's big part of the community now is trying to do that, and I think we should just work a bit more as... Okay, what we're doing, why we're doing it, what are the implications, and yeah. And that I think would make things like electing people that will deal with governance, for instance, like we lack people that basically become the program chairs and AC at conferences, we should kind of do something like delegates for... I don't know the UN or the EU or whatever. I think that that's... And then I think solutions would come out just having people spending time discussing these things. |
39 | General\too much focus on benchmarks or performance | 0:45:36.7 Interviewee: Definitely important work. I feel like there's a way too little work or I haven't been exposed to much safety research. So ethics research, definitely underexposed to general research that is put out there, I feel like. A lot of people just naturally move into the improvement of AI systems. 0:46:01.8 Interviewee: Simply because... Like I also never had... I think there's an underlying problem with education, and like CS math and physics maybe, that these people usually gravitate towards improving the systems rather than like ethics or safety research. I also never had a course on ethics. I had one seminar on ethics for machine learning, but never like safety or that type of thing in my education. So maybe that would be helpful for educating some of the future researchers in the field. |
40 | General\too much focus on benchmarks or performance | 0:31:13.0 Interviewee: Yeah, I totally understand what you are saying. So it happens to us very often in terms of AI and security, everybody, will not deny they want the system to be secure and robust but if we ask, "okay, do you want to invest more or that has no more details about your product to help you improve robustness and security". They will probably say, "No, no, no, I think my system is robust or secure enough". But I think only when those bad things happen, like a security breach or they have leakage, they will realize, Oh, there's the issue in my system, and they are going to go back and solve it. So it's kind of sad in terms of every time some bad thing happens they will finally acknowledge their systems are not perfect, and I think no system is perfect. 0:32:01.5 Interviewee: So that's why I keep telling that we should be more proactive in terms of we should be the white hat hacker to proactively test all those potential issues or risks, or even inspire human values, if we can, before we actually deploy that system. But that really requires a very different mindset and that requires a human CEO, to be open-minded and acknowledge that there could be some issues or failures of the system or product they are going to deploy. But I totally agree with you so far still the accuracy and performance metrics are the focus of the AI research and technology, and they wanted to be the first in some domains, rather than doing things right and safe and robust. |
41 | General\too much focus on benchmarks or performance | 0:40:25.5 Interviewee: Yeah, so I would really want them to benchmark this trustworthy or these values other than performance metrics. For so far in our community, we have been focusing on accuracy. Like there's a leaderboard, you put your model and if it gives you a better accuracy, that means the model is better. But we know in reality, whether a thing is good or not it's a multi-objective thing. It depends on the values you care about whether it's fair, it's robust, it's explainable in addition to being accurate. So, I will really think, hope that our colleague can pay more attention and have a more detailed profile of the system they are developing in terms of how harmful they are, how fair they are. And actually, some of the efforts are being made. At Google they have these model cards to be transparent. At IBM we also have these fact sheets to just provide more details and ingredients of the model and provide multiple evaluation metrics. I think that's certainly the future I want. I think that's the ideal AI research and community would look like. |
42 | General\too much focus on benchmarks or performance | So yeah, I guess you could say I think AI has the potential to do a lot of good. There's so much potential for forecasting economic stuff for different policies regarding everything from homelessness to global warming to stuff like that. So while AI has the potential for a lot of good, I think that the majority of researchers focus on what is the most flashy thing, what is the coolest looking thing we could do, and all corporations look at it as how can we sell more product? So at end of the day, I think that I don't really think it's enhancing society in a good way that much. I don't think it's necessarily negative, but I'm not like, "Oh, it's so good for society." Yeah. |
43 | General\too much focus on benchmarks or performance | I think the majority of research within the fields of computer vision and NLP are so bogged down and just trying to get the best result on some data set that I don't know that it's necessarily really contributing net positive to society. |
44 | General\too much focus on benchmarks or performance\need more focus on ethics | 0:24:45.6 Interviewee: Yeah, I think in general, I agree that there is nothing about the long-term and maybe less like tracing, just tracing the state of the art. The one thing which I think is a sign of it, is just more focused on explainability. And in the last years, we've seen a massive increase. I'm personally a recent big fan of the field. And its work when people use it. It's so important, not even to have explainability for them, but it's way more important, and so I can have the explainability. I can see what's happening inside the model. And so I think it's a sign that people are thinking, the long-term, and the fact that right now when we're setting up a new model... 0:25:38.4 Interviewee: Last meeting today, I was like, "Alright guys, we are doing this. But if we're not going to have explainability, if we're not going to have a plan now, we're not doing this. And I remember three months ago, it was like, "No, let's just see whether it performs. And let's see later." And now it was like, "Yeah, you're right." So I think this as a long-term I think about other things like trying to care about that, you know, humanity exists. I didn't see that so much, just because I was in another part of community who develops, who works on that for a while. So I didn't feel that specifically. |
45 | General\too much focus on benchmarks or performance\need more focus on ethics | it's very much kind of performance-oriented, it's very dry and boring. To be honest, they're all just benchmarks to beat, and it's like now. Well, these are all surveillance algorithms. And I just kind of want to think a bit... I'm uncomfortable with many things |
46 | General\too much focus on benchmarks or performance\need more focus on ethics | It's a bit just too much attached to status, performance and then making numbers go up. I think even just basic science, scientific training for PhD students is something that I would really like to see and just ethics of science and things like that. |
47 | General\too much focus on benchmarks or performance\need more focus on ethics | I mean, I'm just trivializing it but it seems that's big part of the community now is trying to do that, and I think we should just work a bit more as... Okay, what we're doing, why we're doing it, what are the implications, and yeah. And that I think would make things like electing people that will deal with governance, for instance, like we lack people that basically become the program chairs and AC at conferences, we should kind of do something like delegates for... I don't know the UN or the EU or whatever. I think that that's... And then I think solutions would come out just having people spending time discussing these things. |
48 | General\too much focus on benchmarks or performance\need more focus on ethics | 0:45:36.7 Interviewee: Definitely important work. I feel like there's a way too little work or I haven't been exposed to much safety research. So ethics research, definitely underexposed to general research that is put out there, I feel like. A lot of people just naturally move into the improvement of AI systems. 0:46:01.8 Interviewee: Simply because... Like I also never had... I think there's an underlying problem with education, and like CS math and physics maybe, that these people usually gravitate towards improving the systems rather than like ethics or safety research. I also never had a course on ethics. I had one seminar on ethics for machine learning, but never like safety or that type of thing in my education. So maybe that would be helpful for educating some of the future researchers in the field. |
49 | General\too much focus on benchmarks or performance\need more focus on ethics | 0:31:13.0 Interviewee: Yeah, I totally understand what you are saying. So it happens to us very often in terms of AI and security, everybody, will not deny they want the system to be secure and robust but if we ask, "okay, do you want to invest more or that has no more details about your product to help you improve robustness and security". They will probably say, "No, no, no, I think my system is robust or secure enough". But I think only when those bad things happen, like a security breach or they have leakage, they will realize, Oh, there's the issue in my system, and they are going to go back and solve it. So it's kind of sad in terms of every time some bad thing happens they will finally acknowledge their systems are not perfect, and I think no system is perfect. 0:32:01.5 Interviewee: So that's why I keep telling that we should be more proactive in terms of we should be the white hat hacker to proactively test all those potential issues or risks, or even inspire human values, if we can, before we actually deploy that system. But that really requires a very different mindset and that requires a human CEO, to be open-minded and acknowledge that there could be some issues or failures of the system or product they are going to deploy. But I totally agree with you so far still the accuracy and performance metrics are the focus of the AI research and technology, and they wanted to be the first in some domains, rather than doing things right and safe and robust. |
50 | General\too much focus on benchmarks or performance\need more focus on ethics | 0:40:25.5 Interviewee: Yeah, so I would really want them to benchmark this trustworthy or these values other than performance metrics. For so far in our community, we have been focusing on accuracy. Like there's a leaderboard, you put your model and if it gives you a better accuracy, that means the model is better. But we know in reality, whether a thing is good or not it's a multi-objective thing. It depends on the values you care about whether it's fair, it's robust, it's explainable in addition to being accurate. So, I will really think, hope that our colleague can pay more attention and have a more detailed profile of the system they are developing in terms of how harmful they are, how fair they are. And actually, some of the efforts are being made. At Google they have these model cards to be transparent. At IBM we also have these fact sheets to just provide more details and ingredients of the model and provide multiple evaluation metrics. I think that's certainly the future I want. I think that's the ideal AI research and community would look like. |
51 | General\too much focus on benchmarks or performance\need more focus on ethics | Maybe the culture of scientific community. How we can make the commitment towards the truth, towards personal integrity. It's like constant battling not only in science, but also in society. 0:01:32.4 VG: Interesting. Are you seeing this among your colleagues or among the public? 0:01:36.6 Interviewee: In general, I think the scientific community is doing very good compared to all other communities, when we consider it, but we should be aware about the potential downfalls of, let's say bigger reward. Big money may sway us in a direction of skewing the results. There have been lots of scandals in some domains, directly funded research when you want to have some answers and you get the answers you're paying for, so I think this is what we should be worried about. This is in general science. |
52 | General\too much focus on benchmarks or performance\not enough on beneficial | So yeah, I guess you could say I think AI has the potential to do a lot of good. There's so much potential for forecasting economic stuff for different policies regarding everything from homelessness to global warming to stuff like that. So while AI has the potential for a lot of good, I think that the majority of researchers focus on what is the most flashy thing, what is the coolest looking thing we could do, and all corporations look at it as how can we sell more product? So at end of the day, I think that I don't really think it's enhancing society in a good way that much. I don't think it's necessarily negative, but I'm not like, "Oh, it's so good for society." Yeah. |
53 | General\too much focus on benchmarks or performance\not enough on beneficial | I think the majority of research within the fields of computer vision and NLP are so bogged down and just trying to get the best result on some data set that I don't know that it's necessarily really contributing net positive to society. |
54 | General\too much focus on benchmarks or performance\not enough on understanding | That's why I mentioned the big companies. Right now, people put their... Most of their efforts to increase the accuracy or performance of these artificial intelligence systems. They don't put some effort to understand the underlying mechanisms for these architectures. If you do that, it will be a much safer future for us. But they are not doing this because in the short-term, what will give them money is to put some effort to improve the accuracy of the models. Yeah. |
55 | Questions\(AGI-when)\consciousness | Interviewee: Okay. It's a hard question but I think that the conscience of the human and the ability of the AI, I think there's a kind of gap between these two. But I can't pinpoint the exact point where it is but I think there is a gap. But best of my ability I think that even if the AI can solve a lot of different problems even if it can divide the task into sub-tasks, the task solving is only the part of the human life. We are not machines for solving different tasks. We have other things that not very task specific in our lives. So I don't think that this part of human life will not be simulated or emulated by AI soon enough. I think that's the thing that humans do but machines do not do that. VG: They do not have consciousness or is that kind of thing? Interviewee: The consciousness and... You can say that. I'm not sure about the conscience and the ability of solving problems. These are two different kind of things in life. |
56 | Questions\(AGI-when)\consciousness | Not even that would be able to question, "Do I want to do this? Or do I want to do this other thing instead? Is this good for me? Or are humans going to get rid of me?" I don't see any sort of self-reasoning coming out of a machine. I just don't imagine. I don't foresee it. |
57 | Questions\(AGI-when)\consciousness | 0:24:17.4 Interviewee: So when I described a general artificially intelligent agent, as an agent that can do several tasks, I still imagine that it is something that will be put together by a group of people. And the structure kind of wrapping these several different systems that are good at several different tasks. Well, if you would design such a structure with some fail-safe operations so that we can, I don't know, turn it off, pause it, whatever, at any moment. So again I think this idea of the self-conscious machine that will develop its own plans independent on human input or human will, it's something that has been fed by the media and the science fiction literature and movies. I don't see it happening in this exact way. 0:25:24.6 Interviewee: Just like I see very advanced technologies that can help humans a lot, or that can be used by one group of humans against another group of humans. 0:25:39.1 Interviewee: So I see it like that, as a tool that some humans could use to harm another group of humans, but not something coming conscious out of itself and trying to... |
58 | Questions\(AGI-when)\consciousness | Interviewee: In my opinion, according to the current AI systems, consciousness may not happen. The current AI system, so the computer won't be self-aware what it is doing, what it needs to do, at least to the current design principles that we're using in the AI systems. That is not possible. It may happen. I mean, we need to significantly change some design principles the way we are training the systems, something. So I don't know. That's my belief. I might be wrong, but at least the AI systems that I'm working with, at least if we progress in the similar direction, they won't become conscious. VG: What does consciousness... does it just mean self-awareness? Interviewee: Yeah. Yeah. I mean self-aware, like the AI should be able to explain why it took certain decision, why it took certain- VG: Okay. So if you can explain it, it can explain itself, then it is conscious? Interviewee: Yeah. That's how I see it as a self-awareness consciousness. It is conscious of what it is doing. I don't know I might be wrong, but that's how I see it as consciousness. For example, as humans, we also do lots of things, but probably 90% are subconscious. I mean, we are not conscious of what we are doing. For example, right now my heart is beating 100 times per minute, but I cannot explain why it is beating 100 times per minute because it is subconscious or unconscious of thing. But I can explain the higher level cognitive process. And now, I'm talking with you and I can explain the process of why arrive at those things. |
59 | Questions\(AGI-when)\consciousness | I don't see we can ever get to a point where these models are sort of conscious or know what they're thinking about or are able to take on these type of challenging tasks where you have a CEO, neural network or something. |
60 | Questions\(AGI-when)\consciousness | 0:22:30.6 Interviewee: Yeah, I think it's the same line of argument with consciousness in general AI models. I don't think that we'll ever get to a point where it's that conscious, that it realizes that humans are in control. If you're talking about that systems get shadow of you, meaning, like other AI systems that are deployed in the world or... 0:22:55.1 VG: Yeah, I think it knows what type of system it is, and it knows how... Yeah, what system it is and that it knows that it exists in the world, that it knows that other people interact with it as a system. 0:23:05.0 Interviewee: Yeah, that goes into the line of consciousness and knowing how you're perceiving yourself, how others perceive you, and I feel like that's very, very, very far into the future maybe that that could ever happen. That also plays into emotions maybe in general, like how it sees itself. It's not in this like task to optimize for living longer or not being shut off or something you're gonna just train, then I don't see how that could happen but... |
61 | Questions\(AGI-when)\consciousness | 0:15:53.0 Interviewee: Okay. Yeah, I agree with that. Yeah. What I thought more like, that like CEO, like humans, right? That I know you would, I don't know, have random talks, and he'll be like your friend. It will be just like, this AI will be doing its job and that's it. It won't become your friend. Like you won't be connected somehow to this person. So yeah. That's what I meant, more like the consciousness won't be there, it's probably not possible to do consciousness, right? |
62 | Questions\(AGI-when)\consciousness | 0:09:54.0 Interviewee: Okay. No, I don't think that's possible. Okay. So that's basically capturing human consciousness, human intelligence, lots of things. I mean, you said like a robot scientist, right? Just to replicate the brain of a human, that's what you meant? |
63 | Questions\(AGI-when)\consciousness | 0:05:00.1 Interviewee: Yes, inevitably we or someone else will get there. It's like when you think of it as an emergent system, it took billions of years to evolve the human consciousness and intelligence, and it just takes time and evolutionary pressure to do so. And we have evolutionary pressures, everyone wants to compete with their economic pressures. So that is like the battle in AI and it is rapidly evolving. So we will get there, I don't know when. I don't think it will be in my lifetime that I will experience conscious AI, artificial general intelligence, how some people call it, because we are nowhere near integrating various streams of heterogeneous data and making sense out of it. That's complicated. |
64 | Questions\(AGI-when)\embodiment | I think that, I believe that embodiment is an important aspect of it, but I think... I suspect that you can probably get pretty far without it, but I don't think you can kind of get all the way without embodiment. |
65 | Questions\(AGI-when)\embodiment | Interviewee: I think the AI we are talking about right now, it's in the more technical term. We say that as the deep learning or the model of the neural network. I think that this kind of approach is not the real approach to achieve the strong AI or just as you said the general AI. The consensus type that they can have maybe in the consensus with robots. Asimov's the Three Laws of that. I don't think that this approach should get us there, but I think the field of research is wide enough that one day it would get us there. But the approach we are using right now is not the right approach. This is my opinion. |
66 | Questions\(AGI-when)\embodiment | The question of, how do you create an AI that understands humans fully? Is it has to be one that is in the society of humans, that is learning language in human society in the same way we do. It's a purely philosophical question if it's possible. I imagine, yes. 0:36:30.0 VG: Raising AIs with humans in the way that they're organically using language? 0:36:36.8 Interviewee: Yeah, like Asimov style. |
67 | Questions\(AGI-when)\embodiment | 0:16:27.5 Interviewee: Also, I think another important thing is that a lot of human intelligence is tied to our senses, right? For that to happen, I guess robotics also has to progress in a similar way where cameras can do as good a job as our eyes can. For instance, I used to work on image sensors. Before I started grad school, I worked for Sony and used to work on camera and sensors. And I think one of the things that me and one of my friends that used to hold this discussion is like, our eyes are way better at a lot of stuff than camera. You have cameras for specific tasks and cameras are better than our eyes at those specific tasks. But it's only those specific cameras, but for instance, you need separate cameras for night photography, but any general camera is much worse at seeing stuff in the dark than our eyes are and stuff like that. So that kind of sensory perception also needs to improve along with just like mathematical modeling. So I think looking at those two things in isolation will also not work. So I think a lot of these things need to be tied in if you want to mimic general intelligence. And I think currently that's not the way it happens because currently, the approach is, sure I mean people talk about it, because that's what gets written in articles. But I think currently the way at least companies are looking at it as just automation and just making pattern recognition if you will. And I don't necessarily think that's a bad thing. I just think that the way things are going currently, I don't see that achieving general intelligence. |
68 | Questions\(AGI-when)\embodiment | 0:11:45.0 Interviewee: It's going to be extremely difficult to develop something that is sufficiently reliable and has an understanding of the world that is sufficiently grounded in the actual world without doing some kind of mimicking of human experiential learning. So I'm thinking here reinforcement learning in robots that actually move around the world. 0:12:13.0 VG: Yeah. 0:12:13.9 Interviewee: I think without something like that, it's going to be extremely difficult to tether the knowledge and the symbolic manipulation power that the AIs have to the actual contents of the world. 0:12:29.5 VG: Yep. 0:12:29.9 Interviewee: And there are a lot of extremely, extremely difficult challenges in making that happen. Right now, cutting-edge RL techniques are many orders of magnitude... Require many orders of magnitude too much data to really train in this fashion. RL is most successful when it's being used in like a chess context, where you're playing against yourself, and you can do this in parallel, and that you can... When you can do this over and over and over again. And if you think about an actual robot crossing the street, if an attempt takes 10 seconds, and I think especially early in the learning process, that's an unreasonably small amount of time to estimate. But if an attempt takes 10 seconds and... Let me pull out the calculator for a second. |
69 | Questions\(AGI-when)\embodiment | 0:06:55.1 VG: Yeah, when do you think we're going to-- what kind of systems do you think we're going to have when we cap out on the current scaling paradigm? 0:07:02.0 Interviewee: Well, I think like the ones we have now, but yeah, in 50 years, I don't know. But in like 5 to 10 years, it will just be much bigger versions of this. And so what we have seen is that if you scale these systems, they generalize much better. If that keeps happening, then we would just have much better versions of what we have now. But still it's a language model that doesn't understand the world, and so still it's the component that is very limited in seeing only the training data that is in images on the internet, which is not all of the images that we have in the world, right? So I think the real problem is data, not so much scaling the compute. 0:07:49.7 VG: What if we had a system that has cameras and can process auditory stuff that is happening all around it or something and it's not just using internet data, do you think that would eventually have enough data? 0:08:03.3 Interviewee: Yeah, so that's what I was just saying. If you have something that's embodied in the world in the same way as a human and where humans treat it as another human, sort of like cyborg style, things like that, that's a good way to get lots of very high quality data in the same way that humans get it. What are they called? Androids, right? 0:08:24.9 VG: Yeah. 0:08:25.3 Interviewee: So if we actually had android robots walking around and being raised by humans and then we figured out how the learning algorithms would work in those settings, then you would get something that is very close to human intelligence. A good example I always like to use is the smell of coffee. So I know that you know what coffee smells like, but can you describe it to me in one sentence? 0:08:54.2 VG: Probably not, no. 0:08:55.7 Interviewee: You can't, right? But the same goes for the taste of banana or things like that. I know that you know, so I've never had to express this in words. So this is one of the fundamental parts of your brain; smell and taste are even older than sight and hearing. And so there's a lot of stuff happening in your brain that is just taken for granted. You can call this common sense or whatever you want, but it's like an evolutionary prior that all humans share with each other, and so that prior governs a lot of our behavior and a lot of our communication. So if you want machines to learn language but they don't have that prior, it becomes really, really hard for them to really understand what we're saying, right? |
70 | Questions\(AGI-when)\embodiment | 0:10:32.3 Interviewee: Yeah, maybe. So the real question is, if you just throw infinite data at it, then will it work with current machine learning algorithms? Is I guess what you're asking, right? And so I don't know. I mean, I know that our learning algorithm is very different from a neural net, but I think if you look at it from a mathematical perspective, then gradient descent is probably more efficient than Hebbian learning anyway. So mathematically, it's definitely possible that if you have infinite data and infinite compute, then you can get something really amazing. Sure, we are the proof of that, right? So whether that also immediately makes it useful for us is a different question, I think. 0:11:20.8 VG: Interesting. Yeah, I think I'm trying to probe "do we need something like embodied AI in order to get AGI" or something. And then your last comment was like, whether that makes it useful for us. I'm like, well, presumably we're going to... feeding it a lot of data lets it do grounding, so like relationships between language and what actually exists in the world and how physics works. But presumably, we're going to be training them to do what we want, right? So that it will be useful to us? 0:11:43.5 Interviewee: Well, it depends, right? Can we do that? Probably the way they will learn this stuff is through self-supervised learning, not through us supervising them. We don't know how to specify reward signals and things like that anyway. I'm not sure, if we actually are able to train up these huge systems that are actually intelligent through self-supervised learning, if they are then going to listen to us, right? Why would they? |
71 | Questions\(AGI-when)\embodiment | 0:26:14.1 VG: Interesting, okay. And you don't necessarily see a connection between, like, the current... [you think] if we just push really hard on the current machine learning paradigm for 50 years, we won't have an AGI. We need to do something different for an AGI, which sounds like embodiment / combination with humans, biological merging? 0:26:31.7 Interviewee: So it could be embodiment and combination with humans, but also just better, different learning algorithms. So probably more sparsity is something that scales better. More efficient learning. So the problem with gradient descent is that you need too much data for it. Maybe we need some like Bayesian things where we can very quickly update belief systems. But maybe that needs to happen at a symbolic level. I still think we have to fix symbolic processing happening on neural networks-- so we're still very good at pattern recognition, and I think one of the things you see with things like GPT-3 is that humans are amazing at anthropomorphizing anything. I don't know if you've ever read any Daniel Dennett, but what we do is we take an intentional stance towards things, and so we are ascribing intentionality even to inanimate objects. His theory is essentially that consciousness comes from that. So we are taking an intentional stance towards ourselves and thinking of ourselves as a rational agent and that loop is what consciousness is. But actually we're sort of biological machines who perceive their own actions and over time this became what we consider consciousness. So... where was I going with this? [laughs] What was the question? 0:27:57.2 VG: Yeah, okay. So I'm like, alright, we've got AI, we've got lots of machine learning-- 0:28:00.8 Interviewee: --oh yeah, so do you need new learning algorithms? Yeah. So I think what we need to solve is the sort of System 2, higher-level thinking and how to implement that on the neural net. The neural symbolic divide is still very much an open problem. There are lots of problems we need to solve, where I really don't think we can just easily solve them by scaling. And that's-- like there is very little other research happening actually in field right now. |
72 | Questions\(AGI-when)\embodiment | 0:18:56.0 VG: Yeah, that makes sense. Especially since self-driving cars are, like, robotics and robotics is behind as well. But even GPT and stuff doesn't really have good grounding with anything that's happening in the world and how-- 0:19:09.2 Interviewee: GPT's capabilities are also wildly overstated. You can pull out a lot of good examples out of GPT if you really want to, and you can pull out a lot of crappy ones. But we're not going to just brute force large language models to get our way to general intelligence. That's BS you'll only hear from someone who works at OpenAI who wants their equity to be worth more, quite frankly. The only people who say this are the ones who have an economic incentive to say this, and the people who follow the hype. Otherwise, I don't really know of anyone who thinks GPT is the road to AGI, especially given that we can't scale up any bigger. I mean, this is something, this is my whole push right now, is that the only way that Nvidia is going to come out with new GPUs next week, and they are going to come out with new GPUs next week that will be twice as fast as the ones that came out two years ago, is if they doubled the amount of power. It's not like we're doubling the amount of hardware we have available. |
73 | Questions\(AGI-when)\embodiment | 0:19:32.5 Interviewee: Yeah, so I think I was really moved by something Doug Hofstadter wrote, it's called The Shallowness of Google Translate, and he... Yeah, and he talks about how understanding is truly fundamental to a task like translation, and he describes it as, like, an artistic sort of endeavor, and I don't think you can get to it all the way just with data. So I think perhaps I would agree with you that you can make a translation system that would be very useful in a wide variety of systems, and perhaps that does not require understanding, and I can totally imagine amazing business models around it, so yeah, that part I would agree with, but a translation system that would match a human translator? I... 0:20:21.5 VG: And what are the differences you're expecting between a human translator in terms of output, not in terms of understanding? 0:20:27.9 Interviewee: Right. So if you just think of a sentence, something that really does require understanding things about the world, like a lot of translation requires understanding that something is an object. 0:20:46.4 Interviewee: And has a relationship to things in the world. And yeah, Doug Hofstadter gives a lot of interesting examples, I don't have them off the top of my head, but it makes mistakes that a human wouldn't, just because a human knows that a woman is, like, a person that exists in the world, but GPT does not.
74 | Questions\(AGI-when)\embodiment | 0:26:31.4 Interviewee: So I think just going from these artificial sorts of environments to the real world is a much harder task. And just adapting to the messiness of the real world is a much harder task than we anticipate. So perhaps this is why I think I'm pessimistic: I actually work on these things rather than just looking at what journalists, quote-unquote, say about AI. So the assumptions that we make are, I think, not reflective of the real world, and it's very hard, I think, to make any sort of guarantees about how these systems will work in the future. So it's a... Yeah, I'm optimistic about tasks where the rules are clear, but for anything that requires deploying these things in the real world I'm less optimistic, yeah. This is how I like to think of progress.
75 | Questions\(AGI-when)\embodiment | 0:10:25.0 VG: Yeah, the one I'm interested in is, has enough generalizable reasoning ability to automate all the jobs? 0:12:35.8 Interviewee: And the question is, every job? I don't think every job, but I think most of them, most of the jobs done by humans now. I would say most. At least a majority, I'll go with that. 0:13:07.0 VG: Cool. Do you have an estimate for when we might reach that point? 0:13:11.5 Interviewee: No, I'm not gonna make any kind of prediction on that, just 'cause I would just be pulling a number out of nowhere. 0:13:24.2 Interviewee: I don't know, 40 years, it's just a guess. 0:13:26.0 VG: Wow. Alright, within our lifetimes, solid. So say we have an AI that's at that level in 40 years, do you expect that to affect society much? 0:13:47.0 Interviewee: Yeah, it would, wouldn't it? I have to deal with the consequences of that statement. 'Cause I genuinely wonder, if this thing exists, is it at the level where it can actually be deployed to do any... 'Cause the mechanical part of this means that there are a bunch of jobs that it really can't do in any practical sense. 0:14:34.0 VG: Yeah, unless robotics advances fast enough, which, presumably... I don't know if you think robotics will catch up eventually, but... 0:14:41.6 Interviewee: I don't know enough about robots to have even a half-baked opinion on that topic. I don't know. 0:14:50.0 VG: Yeah, one thing I think about is, let's say OpenAI and DeepMind are forging ahead and they're like, "Cool, GPT, and we're going to continue to have the same amount of algorithmic improvements and hardware improvements, now we're in quantum, now we're in optics, like whatever." And then we have more compute, more data, tons and tons of researchers and money being poured into this space, eventually we get something crazy and then we can take GPT, whatever, and make it a CEO, fill it in for any job. I don't know. 0:15:22.0 Interviewee: Yeah, so this is the first edition of a post-scarcity world, where there's enough robot labor that scarcity of resources isn't a constraint anymore. And what would that society look like? It's a big question. I'm trying to [inaudible] my claim about the timespan now. I don't know, it might just be a failure of imagination, but it's hard for me to imagine that, even if this thing exists, it being something that can actually be deployed in meaningful numbers to solve problems is somewhat hard to figure out. But it might just be a failure of imagination. 0:16:47.7 VG: Yeah, I wonder, because it may be that you don't need to have an AI that's deployed in large numbers if you can have one copy of it solve cancer for us, or solve climate change, or be a CEO of a company, and there's only a few CEO AIs and... 0:17:08.9 Interviewee: Yeah, there are many different ways it could go. One of them being that we cede control of areas, of powerful institutions, to AI and then we just let it do whatever, and then we live with the consequences. That does seem unlikely for political reasons, within my lifetime, but who knows what could happen? Yeah, it's a big question.
76 | Questions\(AGI-when)\embodiment | 0:06:08.5 Interviewee: Yeah, well I'm a reinforcement learning researcher. Reinforcement learning, unlike the other types of machine learning, has an agent in it. So, right. So if you crack open Russell and Norvig and read the first chapter, which says AI is about agents, and there are no agents in the other machine learning paradigms, right? So they're useful in the way that calculus or Python is useful, in that they're things that we might use to build an intelligent agent, but there are no agents in them. So they're not even on the same topic. So reinforcement learning has a plausible argument to be about building a general-purpose agent. And you could imagine if you did end-to-end deep reinforcement learning on an embodied agent... by the way, you said not necessarily embodied agents, like scientists or CEOs, but scientists and CEOs are also embodied; there are only embodied agents, right? There are no unembodied agents out there in the world. Every agent is an embodied agent. And there are some people, I'm one of those people, who think that's fundamental and inexorable. But there are people who aren't and who don't, and that's reasonable too, I guess. But in reinforcement learning you can imagine a deep reinforcement learner where you just piled enough data in, and the network was cleverly enough architected and gradient descended enough, and that is in principle, if you were free of some complexity constraints, sufficient, right? Like, modulo some stuff, right. Reinforcement learning assumes the state space is Markov; that's obviously not true. But in principle, that problem setting, you know, if you could do that fast enough, that would win. I know very few people who think you could do that fast enough. You certainly couldn't do it fast enough for an embodied agent, you know, to just go around in the world and do end-to-end deep reinforcement learning and do all the things that a human does. But I guess it's an empirical question. So, I think one way to summarize what I'm saying is anyone who's doing machine learning who's not doing RL is not doing AI. Some subset of the people who are doing RL are doing AI: the people who are interested in multitask, sort of general intelligence, transfer, that sort of question. And then anyone in the rest of AI who's not doing something that generates actions, 'cause that's what agents do, they generate actions... if they're not directly thinking about the question of how to generate actions, or indirectly thinking about building something that will help something generate actions, they're not doing AI. So those technologies might be useful, they might not be. I happen to think the focus on applications is, like, actively harmful. Because it's like building a better dishwasher, it's just not what intelligence is. And if we put all human capital into building the world's best dishwasher, then that says nothing about general intelligence. The same way, if we build the world's best Twitter classifier, that's got nothing to do with the thing.
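The agent-environment framing the interviewee describes (an agent generating actions, a world responding, under the Markov assumption) is the standard RL interaction loop. A minimal, self-contained sketch; ToyWorld and random_policy are invented here purely for illustration and are not anything the speaker referenced:

```python
# Minimal agent-environment loop: the agent emits actions, the world returns
# observations and rewards. The standard formulation assumes the state is
# Markov, which the interviewee notes is "obviously not true" of the real world.
import random

class ToyWorld:
    """A trivial environment: state is a counter; the episode ends at 10."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        self.state += action                # action in {0, 1}
        reward = float(action)              # reward for acting
        done = self.state >= 10
        return self.state, reward, done

def random_policy(state):
    # A real agent would condition on state; Markov means state alone suffices.
    return random.choice([0, 1])

env, state, done, total = ToyWorld(), 0, False, 0.0
while not done:
    action = random_policy(state)           # agent generates an action
    state, reward, done = env.step(action)  # world responds
    total += reward
print("return:", total)
```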
77 | Questions\(AGI-when)\embodiment | 0:08:57.8 VG: Got it. So even if GPT-like systems, which are not agents, continue to be able to do more and more kinds of tasks, that's not intelligence in any way? 0:09:08.7 Interviewee: No. There's no grounding in any of those systems. If you took all the text that's ever been written in the history of the universe, you can't learn grounding from it, 'cause the grounding data is not in there. And so getting the meaning of text from other text, it's like a closed system, it doesn't contain the thing that you need it to contain. Some people think that a question answering system could be intelligent. I guess, maybe if it answered enough questions and it knew what it meant, which is to say that the symbols were grounded, I guess maybe. I don't know. I think agents are things that take actions, and actions in the world, and that means a robot. You and I are robots, but we're not computers. So, this is an obvious observation that most people don't understand the implication of, but you're not a computer. You're a brain in a body, you're not a brain. There are no brains. There's no, like, brains in the fields or hanging from the trees or flying through the air, there's only bodies that happen to have brains inside them. And so it's not reasonable to talk about a computer without a robot, if we're talking about agents that are intelligent the way that humans are, the way other creatures are. 0:10:30.0 VG: So, if I had an agent that sent emails and exchanged money in order to buy more hardware, that's sort of taking an action, but that doesn't fit the criteria in some way? 0:10:42.4 Interviewee: Yeah. It could, but how general is it? So, my spam filter does that, right? It gets text and then it decides if it's spam or not and it routes the thing one way or another. And so then you need to say, okay, one way to think about intelligence is, there's a couple of dimensions. One dimension is how good are you at stuff? And then another dimension is how wide is the set of things you can do? And all of those things are very limited. They can be amazing. And actually, I think being amazing at stuff is not that interesting, you just need to be about as good on average as a human. But it's the breadth of the thing that you're sort of adequate at that matters. You can imagine now, I'm doing a horrible math analogy that's totally inaccurate and imprecise; my math undergrad lecturers will be upset. But you can imagine something like the area under that volume: there's the volume of things that you're adequate at, and then there's how good you are at those things. And that distinguishes a narrow from a general intelligence. And it's much more about how big that set is and much less about how good you are at doing that thing. And so an agent that does any individual task is just a point. And you know, that's like, then you get the dung beetle. Dung beetles are amazing at dung beetling, right? They can roll balls of dung that are like 500 times their weight. And in that tiny little ecological niche, they are the thing, but that doesn't count. That's not interesting. Right. That's almost exactly the opposite of what intelligence is. 0:12:06.9 VG: What if I had a CEO AI that interacted with other people and had text and images as input and can also do text and talk to people back, and it just interfaces with a lot of humans but is located on the internet, per se. Does that count as intelligence to you?
0:12:25.0 Interviewee: That's a robot, right? Because it's getting video input and it's getting... right. So, what you need to be a robot: it doesn't actually matter that you're built out of metal. What matters is that your sensors are connected to the world and your actuators are connected to the world and you are engaged in an interaction loop with the world. Like, that's the thing, right? So people used to call that situated back in the nineties, when we were all hippies about this sort of thing; they would talk about a situated agent, which is locked in interaction with the world. And a lot of classical philosophy-of-mind results only make sense in that context. Like, where's the meaning of a symbol? It's not actually in the computer program, it's actually in the interaction of the program with the world, because you could keep the program fixed, change the world, and the symbol doesn't make sense anymore. Right. So lots of symbol grounding and meaning things don't actually make sense except for in that interaction with the world. 0:13:13.1 VG: Okay, is this an embodied agent? 0:13:16.3 Interviewee: Well, what we really mean by embodied is just real sensors and actuators. 0:13:21.2 VG: Okay. 0:13:22.7 Interviewee: We just mean plugged into the world, not something virtual and artificial. And so one way to think about that is that the wider the set of tasks that you need to solve, you need to have a single input and output space that's capable of handling all the tasks. And so, what that means is that if you were to just build an emailing agent, then its input space would be like an email text blob and its output would be like spam or not spam or reply or something like that. But then you say, oh, it also needs to be able to juggle, and actually it also needs to be able to write a sonata, and it needs to be able to cook an egg. And so as you increase the set of things that it has to do, which is what we want to do, then the sensorimotor space has to get richer and richer, 'cause it has to cover the union of all of those things. Right? And therefore, very shortly, you're at robot-level sensor complexity, right? You have to be, because that's the reason we have eyes. Eyes are terrible sensors computationally; they're very flexible, but a large fraction of your brain is employed in interpreting them. But you have to do that because otherwise you couldn't do all the things. So it's really about the richness of the sensorimotor space, and it's about that you're plugged into the world, right? You make an action, it changes stuff.
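The "union of sensorimotor spaces" argument above can be shown in a toy form: as the set of tasks grows, the single input/output space has to cover all of them. The task names and their input/output sets below are made-up assumptions, chosen only to echo the speaker's examples:

```python
# Toy illustration: the agent's single interface must cover the union of the
# input and output spaces of every task it is expected to handle.
tasks = {
    "email":  {"inputs": {"text"},            "outputs": {"text"}},
    "juggle": {"inputs": {"vision", "touch"}, "outputs": {"arm_torques"}},
    "cook":   {"inputs": {"vision", "smell"}, "outputs": {"arm_torques"}},
}

inputs_needed = set().union(*(t["inputs"] for t in tasks.values()))
outputs_needed = set().union(*(t["outputs"] for t in tasks.values()))
print(inputs_needed)   # e.g. text, vision, touch, smell (set order may vary)
print(outputs_needed)  # e.g. text, arm_torques
```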
78 | Questions\(AGI-when)\only if we understand the brain | Interviewee: It's possible. I think it would be almost like understanding the human brain and human functioning, which would be a lot of bioinformatics and a lot of biology. So more than... Along with computer science. So I don't have too much understanding of the biology part of it, but I think it's very far away. So on my end I have a feeling we'll never do it, at least in the current computing paradigms. I think it's very far, so almost impossible. VG: Interesting. Okay. Well, in order to have something like general intelligence, we need to basically be copying the human template, which means that we need to know humans extremely well. And so we need to do biology and neuroscience, et cetera. And we also need to have a good understanding of computer science, and both of these things together will get us to AGI. And we wouldn't be able to get AGI just via the computational route, and not with the current paradigm, which is, I guess, lots of deep learning et cetera. Interviewee: Yeah. I think in the human brain there is... I mean, you have memory, you have creativity, you have different kinds of feedback mechanisms, you have reflex actions, so many. We can do these little parts separately, maybe, in computer science, but it's not all together. And then of course there are different interactions between these. You have the different divisions of the brain itself, which are working together. And then there's this hierarchical organization. We have different kinds of inputs as well, like touch and sight and sound. It's multimodal. And we don't have replicas of each of these in computers, again. There are so many challenges.
79 | Questions\(AGI-when)\only if we understand the brain | Interviewee: I think that we would need some kind of biological understanding because we are eventually... I mean, first of all, to even define what we mean by... Like we are saying, we want to create a human brain, let's just take only the brain part of it. And even then you have to limit it to some subset to actually say we're done; we need to define it, otherwise how will we know we are done? So after taking this set of things, let's say, if we remove the complete physical part of it, then it's this thinking and so on. I mean, even what that is, is itself unclear. So then, having defined that, we should be able to replicate it and then be able to test this. So, I mean, you must probably know about things like Turing tests and so on. I think that to even know that we have actually built this, we should know... How do we know?
80 | Questions\(AGI-when)\only if we understand the brain | 0:06:47.7 VG: So I often think of this as like, we've been working on AI for less than 100 years, humankind has advanced a lot in the past 10,000 years, especially in the past 400 or so. And I imagine that if we can continue to pour effort into AI that we'll eventually get somewhere. Maybe 1000 years, maybe 200 years, maybe 50 years. However, do you think we'll ever get to a very capable AI? 0:07:15.4 Interviewee: That's a very difficult question. I think the question is, rather, can we imitate the human brain or something like that? Because if we can or if we can be close to that, and that's really not what we're doing right now. But if we can be close to that, maybe we will have some paradigm shift that will make this thing happen. But yeah, I guess it depends on this kind of thing. But right now, I'm not sure we're heading to that. So. Yeah. 0:07:48.3 VG: Interesting. I know, companies like OpenAI and Deepmind are aiming for general intelligence. But who knows if they'll actually get there? There's plenty of people who aimed for things in the past... 0:08:00.2 Interviewee: Yeah, you can aim for a thing and dream that you can achieve it, but I don't see it being achieved with the things that we have right now. 0:08:09.8 VG: Do you think in like 1000 years, we'll have a general AI? 0:08:17.9 Interviewee: Yeah, I guess maybe, because at that point we will have a better understanding of the brain. And I guess, at that point, we will be able to imitate or improve the brain somehow, and have computers that are more similar to the brain than what they're doing right now. So I guess, yeah, maybe? 0:08:38.9 VG: Maybe in 1000 years? How about in 200 years? I'm just like, "Hmm." 0:08:44.2 Interviewee: I don't know, maybe far in the future, but not close to where we are right now. Yeah.
81 | Questions\(AGI-when)\only if we understand the brain | 0:09:08.7 Interviewee: I don't think so, no. Because what we model, we don't explicitly model the same way it happens in our brain, right? It's all still approximation, and we still don't know for sure how it all happens in our brain, so I think we're still missing many details from it.
82 | Questions\(AGI-when)\only if we understand the brain | 0:10:47.3 Interviewee: I think in the end, it converges to each other in some sense. Because what both branches seek for is some kind of characterization of intelligence. And the goal of deep learning is just to also get to intelligent systems. And so in neuroscience, you observe what you have in real life and try to demystify it in some sense. And deep learning has more of the constructive approach. So we start from small blocks and we try to build systems that get as far as real-life instantiations. But maybe also beyond. And I think that you cannot look at both branches separately because there is so much interaction in between. A lot of deep learning techniques are inspired by neuroscience. And I would guess that a lot of neuroscience will be influenced also by observations in these idealized environments of deep learning research. So I wouldn't say that one pushes forward and the other doesn't keep up, in a sense. So they go together in a sense, I would expect. |
83 | Questions\(AGI-when)\only if we understand the brain | 0:15:55.7 Interviewee: Yeah, maybe it's also a matter of money, because in neuroscience you do not see a short-term benefit from investing money into that research, and this is more reserved for classical academia, but then in deep learning you see all the big companies spending a lot of money on this, and this accelerates it, of course. But is this really fundamental progress or not? This is the question. So if we have all these tiny steps, like proposing new loss functions that eventually don't change the understanding on a fundamental level, one could argue that most of what is regarded as progress will not matter in the future anyway. So yeah, it's hard to quantify this, put this into numbers, I would say.
84 | Questions\(AGI-when)\only if we understand the brain | 0:11:18.6 Interviewee: But at the same time, having human-like capabilities is very, very hard because... Primarily because we do not understand the brain as it exists. Now, of course, there has been a different trajectory where people have sort of said, "Okay, we don't care about what the brain does or how it works, we just want to keep on playing with the permutation and combination of how the layers are arranged, what layers are to be put in, and that is how we come up with a network, and then that just solves the task at hand." So that's one way to do it. And that's perfectly all right. But if you were to mimic the brain as is, we need to have a lot more studies and we need to have a lot more going on on the neuroscience front and on the cognitive psychology front. Vision scientists need to come in and really bridge that gap; that is when I think we can get somewhere close to artificial general intelligence.
85 | Questions\(AGI-when)\only if we understand the brain | 0:26:14.1 VG: Interesting, okay. And you don't necessarily see a connection between, like, the current... [you think] if we just push really hard on the current machine learning paradigm for 50 years, we won't have an AGI. We need to do something different for an AGI, which sounds like embodiment / combination with humans, biological merging? 0:26:31.7 Interviewee: So it could be embodiment and combination with humans, but also just better, different learning algorithms. So probably more sparsity is something that scales better. More efficient learning. So the problem with gradient descent is that you need too much data for it. Maybe we need some like Bayesian things where we can very quickly update belief systems. But maybe that needs to happen at a symbolic level. I still think we have to figure out how to get symbolic processing happening on neural networks-- so far we're still just very good at pattern recognition, and I think one of the things you see with things like GPT-3 is that humans are amazing at anthropomorphizing anything. I don't know if you've ever read any Daniel Dennett, but what we do is we take an intentional stance towards things, and so we are ascribing intentionality even to inanimate objects. His theory is essentially that consciousness comes from that. So we are taking an intentional stance towards ourselves and thinking of ourselves as a rational agent and that loop is what consciousness is. But actually we're sort of biological machines who perceive their own actions and over time this became what we consider consciousness. So... where was I going with this? [laughs] What was the question? 0:27:57.2 VG: Yeah, okay. So I'm like, alright, we've got AI, we've got lots of machine learning-- 0:28:00.8 Interviewee: --oh yeah, so do you need new learning algorithms? Yeah. So I think what we need to solve is the sort of System 2, higher-level thinking and how to implement that on the neural net. The neural-symbolic divide is still very much an open problem. There are lots of problems we need to solve, where I really don't think we can just easily solve them by scaling. And that's-- like there is very little other research actually happening in the field right now.
86 | Questions\(AGI-when)\only if we understand the brain | 0:10:00.3 Interviewee: Oh, this is a tough question. Let's see. At least, based on the method we have taken, I think the way we develop AI now will not lead us to that future, but maybe humans will find some different ways to develop AI. But to my mind, I guess, if first we talk about maybe 50 years or a century, I think that's not very possible, but in the future, this may be a question about some knowledge about our brains. So maybe humans at the moment, we are not... I mean, the investigation or research into our brain is not very clear. 0:11:01.0 Interviewee: So it's quite hard to imagine if the machine can evolve into some stage where the machine can be as complex and as powerful as human brains. So maybe in a century or even in two centuries, I tend not to believe that this will happen. But in the very long run, it's very hard to tell, because I think that the way we think about something and the way the machines do inference or train themselves are totally different. They work in different ways. For example, machines may require very large amounts of data to find some internal principles. But with humans, we are very good at generalization, so I think on this point, then maybe we will achieve that after 1,000 years, but I'm not very optimistic about this. I tend not to accept this, but I can't deny it entirely.
87 | Questions\(AGI-when)\only if we understand the brain | 0:14:24.8 Interviewee: Yeah, I mean again, I don't think this automating-all-jobs story is really... So yeah, I actually really like sort of Michael Jordan's framing here. So he talks about how... Chemical engineering wasn't like a field per se even 60, 70 years back. Chemistry was a field. Then I think chemical engineering is the practice of, I guess, sort of concretizing how you build factories and how you manage risk to produce chemicals at a very large scale. So I feel like... Yeah, I agree with this formulation, that like, AI is this... Yeah, as I said, that's why I like the phrase machine learning. It's just taking a lot of data and perhaps making some predictions and making some decisions using that. Yeah, but I think we are super, super far away from anything that can do even remotely similar to what the human brain does. I think if you talk to... I think neuroscientists, they would tell you that we don't even understand what happens in a single synapse and like...
88 | Questions\(AGI-when)\only if we understand the brain | 0:21:49.8 Interviewee: Yeah, yeah, yeah, oh yeah, I don't think there's anything magical about the brain, so you're right, we already know that there is a system in the world that is capable of this, so there's nothing magical about the brain, so it's possible that we would be able to replicate it. But I just think, because of this AI hype, people substantially underestimate what our brains are capable of and how truly, amazingly difficult a task like actually understanding language is. But yeah, you're right, it would probably involve some sort of embodied agent that can live in the world, talk to people, move around and do these sorts of things. I don't think it's like an agent's going to watch YouTube videos all day and just scrape all the text from the internet; at least I don't think that can possibly be a path towards actually understanding language.
89 | Questions\(AGI-when)\only if we understand the brain | 0:09:54.0 Interviewee: Okay. No, I don't think that's possible. Okay. So that's basically capturing human consciousness, human intelligence, lots of things. I mean, you said like a robot scientist, right? Just to replicate the brain of a human, that's what you meant? |
90 | Questions\(AGI-when)\only if we understand the brain | 0:23:20.0 Interviewee: No, if you... I understand the ideas you have. The [inaudible], that the robot is walking around and getting its own information, but I am not sure how the model would be trained. But that's not even the problem; it's still observations that are just coming from day-to-day life, from human behavior. But I still don't see how it will think something which is radically different, and that I don't see, actually. And I think that's the novelty in the human brain: we are great visionaries, so we can actually come up with, think, visualize something and we can make it real with our smarts, our intuition, our knowledge, a lot of things. I think that vision is very important. So that vision, I am not sure how the robots can learn it. 0:24:24.1 VG: Cool. My next thought is, what if we get really, really good scanning technology in the future and can scan and reconstruct a human brain, do you think that would be smart? 0:24:42.9 VG: Like maybe we can scan the whole thing and then we can make a digital mind just on a computer, but it's just running off of, like, you know, instead of having neurons fire, it's kind of like having a model of neurons fire, etcetera. 0:25:04.7 Interviewee: Yeah, but the human brain is hard to study, but maybe it's possible. I'm not so sure. I mean, because if it is possible, then probably it is also possible to come up with a robot which will replace a human, if we can replicate exactly how the human brain works. So I think these two questions are very related to what you're asking. 0:25:26.4 VG: Yeah. I mean, they're quite, they're different approaches, certainly. One of them is like brain uploading via scanning. And one of them is like use a different method, maybe like gradient descent, which is a quite different method from how humans do it. And so I don't know, they seem like kind of distinct methods for generating artificial intelligence. 0:25:48.4 Interviewee: Yes, but when you say scanning, you have to basically go and scan every cell and every single... I mean, how would I say that... all possible reactions that are happening. I know very little about it, honestly. I think you have a much better background than me, but I have very little understanding of how we would actually scan the brain.
91 | Questions\(AGI-when)\only if we understand the brain | As my background is, my PhD has been in neuroscience, and in neuroscience we are dealing with brains. And the way that people have been dealing with brains in order to understand them was to design tasks, so many tasks, and so many types of recordings from brains, in order to really understand what is going on inside the brain and how behavior is generated. This kind of research needs to be done more often in AI as well. So when we have these kinds of large-scale models, we need to interrogate them more frequently and with a more diverse set of tasks and tools. And only under this sort of investigation can we understand the limitations of these models. We can understand, if we scale a model, which directions of cognitive ability these models are not able to express, right? So we need to study these models better and more deliberately. On the other hand, the other type of research that I think is also connected to my own research is that we need to get more inspired by the human brain and by animal brains. So during evolution, brains have also scaled, right? But they didn't scale uniformly. It was not like we got a mouse brain and just scaled it in all different dimensions to get to the human brain. The human brain scaled compared to the mouse brain, but it scaled differently; it scaled in different dimensions differently. So we need to really understand the scaling underlying brain evolution, throughout evolution, in order to know how exactly we need to scale these models to get different types of abilities in different environments, different environmental pressures, and so on. So, yeah, I think with a combination of all these approaches, we might get a better chance of getting to this kind of artificial intelligence or artificial general intelligence.
92 | Questions\(AGI-when)\only if we understand the brain | I'm not so sure that we'll have a general AI, because this all started with the false analogy that artificial neural networks are analogous to biological neural networks. And the thing is, we don't really understand how the brain really works in depth. We have some idea that there are different parts of the brain and this and that happens, but we don't really know. We cannot replicate it, we cannot create an artificial biological brain. So saying that an artificial neural network simulates the brain, and that if you just keep increasing the number of neurons, the number of layers, and throwing a lot of compute power and a lot of data at it we'll do something, I don't think we'll have a general AI; we really need to understand how the brain works if we are talking about that. Now the thing is, do we really need general AI? Do we need just one AI to solve everything when we can have thousands of things that are designed to do one particular thing? 0:11:25.8 Interviewee: It could be very complicated stuff. Yes, driving a vehicle is a very complicated thing, and if you're creating something that's equal to a driver, a human driver, but it's not doing all the other things that a human is supposed to do, it's just driving, and it is very good at doing that, we are solving one big problem. Similarly in other areas, we can have specific AIs which are very complex and equal to a specialist, a human specialist, that is just doing that job. Now, whether we want to have a single system or not is something the researchers have to decide. I think it won't be decided just by discussion or even writing papers at AI and sociology conferences or in general; I think it will be done when the researchers start saying that no, we cannot create general AI because we don't know the brain. So that's my opinion, based on what I have learned about neural networks and AI, that it's not really possible to create general AI.
93 | Questions\(AGI-when)\only if we understand the brain | 0:14:00.3 Interviewee: I think, definitely. It all depends on whether we understand the brain, or, even if we don't understand the brain, whether we have a good model of consciousness or how intelligence works. Right now, all we're doing is neural networks, and that's not really how the brain works. And even if we do that, I think the way we are seeing the trajectory right now, the amount of compute and data that is required, if we build artificial general intelligence it's going to be intractable to have that kind of data and compute, so there will definitely need to be some kind of paradigm shift when it comes to what kind of compute we are using, or whether it is energy efficient or not, what material is used. It can go all the way to material science. And about data, do we really need that large an amount of data? 0:15:02.7 Interviewee: Another thing is, if we ever understand the brain... because let's say we are able to fully model it, model the interactions and everything, and we can record it some way. If there is a way to record it, then we can create a mathematical model or even a data-driven model, and maybe at that point we'll have a general system. But that also requires a lot of changes, probably, in how we record the brain: it cannot be just MRI, we cannot just have some probes; even if it's very dense, you're just reading some signals of the brain. So there are a lot of things; it's not just statistics and algorithms and collecting data, so many areas of science and engineering get tested. So, if we solve all those problems in all those areas, maybe we'll do that. Maybe within 100 years it's possible. Humans have been accelerating a lot, so it's possible.
94 | Questions\(AGI-when)\paradigm shift needed | Whether we reach it by pouring a bunch of data into deep networks, I kind of doubt it. I also want to take issue a little bit. So I'm glad you phrased it that way. |
95 | Questions\(AGI-when)\paradigm shift needed | 0:05:42.6 Interviewee: I don't see that coming too soon. Deep learning has its own flaws right now. It is not being used in medical operations so much, because of adversarial attacks, not being interpretable, things like that. So, I personally don't think that's coming too soon, but it will come someday. But on the other hand, as for being worried that it will replace people, so people won't have jobs or can't do anything: again, I don't think that's something to be worried about. When people were farming in fields, they would see machines like that. Some machine will come and take your place at harvesting things, so you won't have something to do. But it will change your life, it will level up your life. So you have to work with the machine. And you have a new job.
96 | Questions\(AGI-when)\paradigm shift needed | Where, yeah, so doing those sorts of things in coherent... Doing sort of intelligent things in coherent ways over very long horizons is fundamentally much harder than what things are actually doing right now. 0:13:25.6 Interviewee: It's antithetical, maybe a little bit, to the way that the models are trained and the way that sort of all deep learning stuff works.
97 | Questions\(AGI-when)\paradigm shift needed | And I don't think that with brute-force gradient descent alone we'll be able to build general capabilities. So I think we would need to find out something else.
98 | Questions\(AGI-when)\paradigm shift needed | Interviewee: I just think that there's a lot of... Because I'm currently researching the technology we are using right now. Just like I've said before, I already think that the approach we are currently using is not right. But we are still squeezing power out of deep learning. And that will continue. We will continue squeezing it out for [what] five or 10 years, to the point that we have to switch gears to the real thing. I think that will take some time. And two, for the real thing to really improve to the point that we can apply it to real applications or real scenarios, that would take another long time of... And next we have some identity issues, and adding all that up should be the whole improvement, or the whole journey, of AI for humans to keep pushing forward.
99 | Questions\(AGI-when)\paradigm shift needed | VG: Yeah. So this is like an empirical claim. You're like, "Well, currently we're getting stuff out of scaling things. We're getting novel stuff, but at some point we're going to stop getting novel stuff; it's all going to be the same." And your prediction is that will happen sometime five to 10 years from now. Interviewee: Yeah. VG: Okay, cool. What's driving that intuition? Interviewee: I'm in the computer vision field and we are just trying to solve the basic problem there, that is, image segmentation. Segmentation is a low-level task, or you can say it's low level, but it's the fundamental task of our field. And that field has not been innovated in for a long time. It's like we are just competing for the score, competing for the mIoU score, for the metric. And we cannot see anything very interesting or, say, very revolutionary, something like that. It's not there anymore. And we just improve on previous works, but improvements are improvements. That is something I think we should do, but it's just incremental. It's not something revolutionary that we can build upon to create a new path. I don't see that. So I think that's the... If we've already reached that point right now, then after five or 10 years, maybe other tasks might be suffering from the same kind of scenario that we have right now. So I think that's the basic premise of my prediction.
100 | Questions\(AGI-when)\paradigm shift needed | VG: Okay. So you're saying that image segmentation, that field hasn't seen very much progress recently. The large language models seem to be seeing a lot of progress, and foundation models and communication models seem to see a lot of progress, but you're like, "Well, that will probably track back." Is computer vision not seeing much progress? Interviewee: No. Computer vision itself has seen a lot of progress right now, but not in some of the fundamental tasks. Because the fundamental tasks have been tackled by a lot of different researchers. A lot has been put out in recent years, but we cannot see a giant leap from the existing methods to the current state-of-the-art methods. The recent one is the transformer, which was brought over from the NLP community. In ours, we used convolutional neural networks for a long time; for three or four years we were sticking to CNNs, before the transformer came out, and we are still using them. And now the transformer has come to the [CV 00:28:20] community and we saw a giant jump in the metrics and a flood of different works using transformers in vision tasks. But other than that, we did not see any very revolutionary progress; that's the basic situation right now.
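The metric the interviewee says the field is competing on (read here as mIoU, mean Intersection-over-Union, which the garbled transcript renders as "MRI score") can be shown in a few lines. A minimal sketch; the tiny masks and class count are made-up assumptions for illustration:

```python
# Toy mIoU computation for segmentation masks: per-class intersection / union,
# averaged over the classes present in either mask.
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean IoU over classes that appear in the prediction or the target."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue                      # class absent from both masks
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred   = np.array([[0, 0, 1], [0, 1, 1]])
target = np.array([[0, 1, 1], [0, 1, 1]])
print(mean_iou(pred, target, num_classes=2))  # (2/3 + 3/4) / 2 ~= 0.708
```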