Hello. Welcome to Eye on AI.
Leading AI companies OpenAI and Google alternated in the headlines on Monday and Tuesday, announcing a slew of new generative AI products and model updates. On Wednesday, the U.S. government had its moment.
A bipartisan group of senators led by Chuck Schumer of New York released a long-awaited "roadmap" for regulating AI. The 20-page report is the culmination of a nearly year-long series of hearings conducted by the Senate AI Working Group, which held private meetings with more than 170 technology industry leaders, academic researchers, civil rights leaders, and others to better understand how to regulate the technology. The lawmakers called on the U.S. government to start spending at least $32 billion a year on AI "as soon as possible," a figure that does not include spending on AI for national defense. But while acknowledging AI's various harmful applications and currently unfolding impacts, they did not propose any specific regulations. Instead, they punted to Senate subcommittees, suggesting areas where efforts should be focused, such as AI training for workers, AI's relentless energy demands, and AI being used to create election disinformation and child sexual abuse material (CSAM).
Many of the organizations, watchdog groups, and experts who provided insight during the hearing sessions called the outcome disappointing and lacking in vision, saying it delays much-needed regulation. With many countries around the world already having regulations in place, they argue it's a missed opportunity to act.
"The nine 'insight forums' functioned as a stalling tactic," Amba Kak and Sarah Myers West, co-executive directors of the AI Now Institute, an AI policy research group, said in a statement, arguing that public momentum to regulate AI was instead diverted into a closed-door, industry-led process that now benefits the industry.
Alondra Nelson, the former acting director of the White House Office of Science and Technology Policy, who participated in one of the forums on U.S. support for AI innovation, told Fast Company the roadmap was "too shallow to uphold our values" and lacked "urgency and seriousness." Suresh Venkatasubramanian, another former White House official who co-authored the Biden administration's Blueprint for an AI Bill of Rights, a set of principles to guide the development and use of AI systems, said he and other AI ethicists felt "betrayed" after engaging in the discussions in "good faith" despite their concerns about industry influence over regulation.
If there's one thing Schumer made clear, it's that U.S. dominance of AI is the goal. At a press conference, he described the $32 billion as a surge of emergency funding to solidify "U.S. dominance in AI," including "beating China." The money would go toward research and development, infrastructure, outstanding CHIPS and Science Act funding, a series of "AI Grand Challenges" programs, and numerous other government AI-related initiatives.
Delaying regulation certainly helps U.S. tech companies, especially the dominant ones that argue regulation "stifles innovation." By its own account in the report, the AI Working Group was established because AI is "too broad" to "fall neatly within the jurisdiction of a single committee." Yet what this special group ultimately decided is that the subcommittees should be the ones to propose bills, and that while billions of taxpayer dollars can't wait, regulation can. Government funding for science and technology is important; it's how we got the internet in the first place. But prioritizing advantage over safety is a dangerous path.
A number of AI bills have been introduced in subcommittees, but they have stalled. The Senate Rules Committee yesterday advanced three bills aimed at protecting elections from deceptive AI, but they still need to pass the full Senate and move through the House of Representatives, and the election is near at hand. Had the AI Working Group not spent nearly a year arriving at zero concrete regulatory proposals, efforts to mitigate this clear and present danger of AI might have made more progress. In the report, lawmakers also expressed clear support for a national data privacy law, the kind of legislation they are also responsible for introducing and passing.
Schumer told the New York Times: "AI is changing so fast that it's very difficult to regulate. We weren't going to rush this." Pouring $32 billion a year, plus additional defense funding for AI, into the field "as soon as possible" certainly seems like decisive and quick action. At the same time, it has been almost two years since the arrival of the generative AI era made regulation increasingly necessary, and long before that there was already widespread evidence of AI causing real-world harm. The EU enacted a comprehensive AI policy earlier this year, and we are just months away from a heated election in which AI is already being used to deceive and misinform voters. Considering all this, taking a year to propose AI regulations would hardly have been rushing.
Just last week, I reported on how the most comprehensive state AI bill to date came under fire (not to mention opposition from the tech industry) over concerns that the federal government should take the lead. If things continue like this, states shouldn't hold their breath.
With that, here's more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazaro.com
AI in the news
There have been two high-profile departures at OpenAI, while Anthropic makes a big hire. Ilya Sutskever, OpenAI's chief scientist and one of its co-founders, announced on X that he is leaving the company after nearly a decade to work on a project that is "personally meaningful" to him. Just a few hours later, Jan Leike, who co-led OpenAI's superalignment team with Sutskever, shared an even shorter message: "I resigned." The departures follow a series of high-profile exits from the company's safety staff in recent months, raising questions about whether the team is slowly being hollowed out, as Fortune's Christian Hetzner reported. Anthropic, meanwhile, landed on the bright side of this week's AI personnel shakeups: the company announced it has hired Mike Krieger, co-founder and CTO of Instagram and most recently of AI company Artifact, as its new chief product officer.
Stability AI is in talks to sell. The Information reports that the AI company has been in discussions with at least one potential buyer in recent weeks. The four-year-old company was an early leader in generative AI with Stable Diffusion, a model that generates realistic images from text prompts. It has since faced the resignation of its founder and CEO, disputes with investors, an employee exodus, and a severe lack of funding. Stability AI lost more than $30 million in the first quarter and owes its cloud providers about $100 million.
As AI threats to publishers increase, Pulitzer Prize winners leverage custom models for investigative reporting. City Bureau and the Invisible Institute, which won the local reporting award for their "Missing in Chicago" project, trained a custom machine learning model to scrutinize thousands of police misconduct files. The New York Times Visual Investigations desk, which won the international reporting award, similarly trained a model to identify craters from 2,000-pound bombs in areas of the Gaza Strip marked safe for civilians. "We didn't use AI to replace a manual task; we used AI because these are the types of tasks that would be so time-consuming to do manually that [it would distract from] other investigative work," Times reporter Ishaan Jhaveri told Nieman Lab.
While AI's vast number-crunching and pattern-recognition capabilities have been shown to benefit journalism, other aspects of the generative AI revolution are beginning to pose a major threat. Google announced this week that it is rolling out AI Overviews, a rebranded version of its Search Generative Experience (SGE), in the U.S. and soon worldwide. The product returns multi-paragraph answers to users' Google searches, often eliminating the need to click through to websites for information. Based on tests of the feature, publishers are already scrambling to prepare for a 20% to 60% drop in traffic and up to $2 billion in lost revenue, Adweek reported.
Fortune on AI
Google I/O reveals why the search giant is unfazed by OpenAI —Sharon Goldman
Many CIOs are stuck in the AI slow lane, but they’re setting ambitious priorities as if they weren’t. —John Kell
Gen AI looks easy. That's what's so hard about it —Rodney Semmel (commentary)
AI calendar
May 21st-22nd: AI Seoul Summit on AI Safety (Seoul, South Korea)
May 21st-23rd: Microsoft Build in Seattle
June 5th: FedScoop's FedTalks 2024 in Washington, DC
June 25th-27th: 2024 IEEE Conference on Artificial Intelligence in Singapore
July 15th-17th: Fortune Brainstorm Tech in Park City, Utah (register here)
July 30th-31st: Fortune Brainstorm AI Singapore (register here)
August 12th-14th: Ai4 2024 in Las Vegas
Eye on AI numbers
39%
That's the percentage of CEOs who say they currently have good generative AI governance in place. The number isn't exactly encouraging, especially given that 75% of the CEOs surveyed said trustworthy AI would be impossible without effective AI governance within their organizations.
There's also the fact that these CEOs are among those driving AI into their organizations in the first place. According to the survey, conducted by IBM and Oxford Economics and comprising interviews with more than 3,000 CEOs around the world, 61% report that they are pushing their organizations to adopt generative AI faster than some employees are comfortable with. Additionally, 72% said they are currently only piloting and experimenting with generative AI, while 49% expect it to drive growth and expansion by 2026.