Gary Marcus is happy to help regulate AI for the U.S. government: “I’m interested”

On Tuesday this week, neuroscientist, founder and author Gary Marcus sat between OpenAI CEO Sam Altman and Christina Montgomery, IBM’s chief privacy and trust officer, as all three testified for more than three hours before the Senate Judiciary Committee. The senators largely focused on Altman, both because he runs one of the most powerful companies in the world and because he has repeatedly asked them to help regulate his work. (Most CEOs ask Congress to leave their industry alone.)

Although Marcus has been well known in academic circles for some time, his star has risen recently thanks to his newsletter (“The Road to AI We Can Trust”), a podcast (“Humans vs. Machines”) and his understandable unease about the unchecked rise of AI. In addition to this week’s hearing, for example, he appeared on Bloomberg television this month and was featured in The New York Times Sunday Magazine and Wired, among other outlets.

Because this week’s hearing seemed truly historic in some ways (Senator Josh Hawley characterized AI as “one of the most significant technological innovations in human history,” while Senator John Kennedy was so charmed by Altman that he asked Altman to choose his own regulators), we wanted to talk with Marcus, too, to discuss the experience and to find out what he knows about what happens next.

Are you still in Washington?

I’m still in Washington. I’m meeting with legislators and their staffs and various other interesting people and trying to see if we can make the things I’ve been talking about a reality.

You taught at NYU, and you’ve co-founded several AI companies, including one with the famed roboticist Rodney Brooks. I interviewed Brooks onstage in 2017, and he said then that he didn’t think Elon Musk really understood AI and that he thought Musk was wrong that AI posed an existential threat.

I think Rod and I share skepticism about whether current AI has anything to do with artificial general intelligence at all. There are several issues you need to take apart. One is: are we close to AGI? The other is: how dangerous is the current AI we have? I don’t think the current AI we have is an existential threat, but it is dangerous. In many ways, I think it’s a threat to democracy. That’s not a threat to humanity; it will not destroy all people. But it’s a pretty serious risk.

You sparred not long ago with Yann LeCun, Meta’s chief AI scientist. I wasn’t sure what that flap was about: the true meaning of deep learning neural networks?

So LeCun and I have actually debated a lot of things over many years. We had a public debate moderated by the philosopher David Chalmers in 2017. I’ve been trying to get [LeCun] to have another real debate ever since, and he won’t. He prefers to subtweet me on Twitter and the like, which I don’t think is the most mature way to have a conversation, but because he’s an important figure, I respond anyway.

One thing I think we [currently] disagree about is that LeCun thinks it’s fine to use these [large language models] and that no harm can come of it. I think he’s completely wrong about that. There are potential threats to democracy, ranging from misinformation deliberately produced by bad actors, to accidental misinformation, like the law professor who was accused of sexual harassment he never committed, [to the ability to] subtly shape people’s political beliefs based on training data that the public doesn’t even know anything about. It’s like social media, but even more insidious. You can also use these tools to manipulate people and probably trick them into anything you want. And they can be scaled massively. There are definitely risks here.

You said something interesting about Sam Altman on Tuesday, telling the senators that he didn’t tell them what his worst fear is, which you called “germane,” and redirecting them to him. What he still hasn’t said is anything to do with autonomous weapons, which I spoke with him about as a top concern a few years ago. I found it interesting that weapons didn’t come up.

We covered a lot of ground, but there are plenty of things we didn’t get to, including enforcement, which is really important, and national security and autonomous weapons and things like that. There will be more of [these].

Was there talk of open source versus closed systems?

It hardly came up. It’s obviously a really complicated and interesting question. It’s really not clear what the correct answer is. You want people to be able to do independent science. You might want some kind of licensing around things that are going to be deployed at very large scale, but those carry particular risks, including security risks. It’s not clear that we want every bad actor to get access to arbitrarily powerful tools. So there are arguments for and against, and probably the right answer will include allowing a fair degree of open source but also some limits on what can be done and how it can be deployed.

Any specific thoughts on Meta’s strategy for allowing its language model out into the world so people can tinker with it?

Frankly, I don’t think it’s great that [Meta’s AI technology] LLaMA is out there. I think that was a bit careless. And, you know, that’s literally one of the genies that is out of the bottle. There was no legal infrastructure in place; as far as I know, they didn’t consult anybody about what they were doing. Maybe they did, but the decision process with that, or with Bing, say, is basically just: a company decides we’re going to do this.

But some of the things companies decide could do harm, whether in the near future or in the long run. So I think governments and scientists should increasingly play a role in deciding what goes out there, [through a kind of] FDA for AI, where, if you want to do widespread deployment, you first do a trial. You talk about the costs and the benefits. You do another trial. And eventually, if we’re confident that the benefits outweigh the risks, [you do the] release at large scale. But right now, any company can decide at any moment to deploy something to 100 million customers without any governmental or scientific oversight. There has to be a system in which impartial authorities can step in.

Where would these impartial authorities come from? Doesn’t everyone who knows something about how these things work already work for a company?

I don’t. [Canadian computer scientist] Yoshua Bengio doesn’t. There are many scientists who don’t work for these companies. How we get enough of these auditors, and how we give them an incentive to do it, is a real concern. But there are 100,000 computer scientists with some facet of expertise here. Not all of them work for Google or Microsoft on contract.

Would you like to play a role in this AI agency?

I’m interested. I think whatever we build should be global and neutral, presumably nonprofit, and I think I have a good, neutral voice here that I’d like to share and to use to try to get us to a good place.

How did it feel to sit before the Senate Judiciary Committee? And do you think you will be invited again?

I wouldn’t be shocked if I were invited back, but I have no idea. I was profoundly moved to be in that room. It’s a bit smaller than it looks on television, I suppose. But it felt like everyone was there trying to do what’s best for America, for humanity. Everyone knew the weight of the moment, and by all accounts the senators brought their best game. We knew we were there for a reason and we did our best.




