Earlier this month, Google launched "AI Overviews," a feature that uses artificial intelligence to condense search results into short paragraphs. The summaries have at times given incorrect advice and answers, and sometimes bizarre ones that sound like TikTok challenges, such as putting glue on pizza or eating rocks.
In a blog post Thursday, Google's head of Search, Liz Reid, said users have been more satisfied with their search results since the feature was introduced, but added that the company will reduce how often AI Overviews are shown as answers.
GBH's All Things Considered host Arun Rath discussed the rise of generative AI with Brian Kris, a machine learning researcher and professor at Boston University. What follows is a lightly edited transcript.
Arun Rath: First, tell us how quickly this technology has taken off. We've been hearing about it for a few years now, but suddenly it seems like we're seeing it everywhere, including in search engines.
Brian Kris: Machine learning has been around for a long time, but there has been a recent exponential increase in the power of models and the amount of data being used to train them. So, in the last 5 to 10 years, we've seen incredible progress in the field of generative AI.
“In some ways, I think we'll just have to get used to the fact that these models are sometimes wrong.”
Brian Kris, a machine learning researcher at Boston University
Rath: Let's talk specifically about what's going on with Google. This isn't the first example of generative AI not quite being there yet, but it feels like a bigger issue with Google, because people expect a certain level of reliability in the results they get from Google. It's not good for the brand if Google gives you weird or incorrect results.
Kris: Right? What you said is exactly what I think a lot of people are thinking. Google is a company whose business is giving people information, and people expect that information to be correct. Maybe other companies, like OpenAI, have a little more leeway in terms of things not having to be accurate. This is speculation on my part, but I think Google is rushing products to market in an attempt to compete with OpenAI, so maybe what we're seeing are features and products that don't have all the bugs worked out.
Rath: It seems to be improving a bit. I tried to get it to say something weird just before we spoke, but it didn't work. How serious is the problem, and how quickly do you think it can be resolved?
Kris: Well, I think there's a fundamental problem that's hard to fully solve, which is that models trained on human-generated data are never going to be 100% accurate. So even if you put in safety filters or try to remove erroneous content, you're never going to have a perfect system; you're never going to get rid of all of it. It's going to be an iterative process.
Over time, these things will get a little better, but I don't know if such systems will ever be 100% accurate, and in some ways I think we'll just have to get used to the fact that these models will be wrong sometimes.
Rath: Are those problems built into the system, or are there other approaches to AI that could get around those issues?
Kris: It's built into current systems. The way current systems are trained is that they take data from the internet and, given a piece of text, say from a webpage, they try to predict what the next piece of text is.
So if you feed it the right information and it's predicting the right next thing in the text, that's fine, but if it gets its information from a source that might not be accurate, like Reddit or fake news, it's going to learn that incorrect information.
So unless we can figure out how to purge that kind of training data from these systems, it's going to be hard to remove it completely.
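To make the point about next-token training concrete, here is a minimal, purely illustrative sketch in Python. It is a toy word-counting model, not how Google's or OpenAI's systems are actually built: real systems use large neural networks, but the training objective is the same idea of predicting what comes next, so whatever appears in the training text, right or wrong, is what gets learned.

```python
# Toy "next-token predictor": learns only by counting which word follows
# which in its training text. Whatever dominates that text is what it repeats.
from collections import Counter, defaultdict

training_text = (
    "the capital of france is paris . "
    "the capital of france is marseille . "  # an incorrect "fact" scraped from the web
    "the capital of france is paris . "
)

# Count next-word frequencies for every word seen in training.
follows = defaultdict(Counter)
tokens = training_text.split()
for current_word, next_word in zip(tokens, tokens[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

# The model reproduces whatever its training data said most often, true or not.
print(predict_next("is"))  # -> "paris", only because it appeared more often than "marseille"
```

If the erroneous line had instead been the most common one in the training text, the same code would confidently predict the wrong answer, which is the point Kris makes about bad sources making it into the data.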
Rath: How likely do you think it is that regulation of this technology will be introduced domestically, or in places like Europe, which have been more aggressive about regulating the internet?
Kris: Yes, I think there will be regulation, but I'm not sure what it will look like. If you're going to use these systems for critical applications, such as medical applications, you need some assurances, or there could be serious consequences.
Same thing with self-driving cars, right? Self-driving cars need regulation, otherwise we could have life-threatening problems in the future.
But I would say that whatever you do, you should consult with the people who are building these systems, because they are the ones who best understand how they work, and they are the ones who know best what errors can creep into them. Any time you're designing regulations, it's good to understand exactly what these systems actually do.