Image Credits: TechCrunch
To give AI-focused women academics and others their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews highlighting remarkable women who have contributed to the AI revolution. As the AI boom continues, we'll publish several pieces throughout the year highlighting key work that often goes unrecognized. Read more profiles here.
Sarah Kreps is a political scientist, U.S. Air Force veteran, and analyst focused on U.S. foreign and defense policy. She is a professor of government at Cornell University, an adjunct professor at Cornell Law School, and an adjunct scholar at West Point's Modern War Institute.
Kreps' recent research explores both the opportunities and risks of AI technologies such as OpenAI's GPT-4, particularly in the political realm. In an opinion column for The Guardian last year, she wrote that as more money pours into AI, the AI arms race will intensify, not just between companies but between nations, while AI policy challenges will only become more difficult.
Q&A
Briefly, how did you get your start in AI? What attracted you to the field?
I got my start in emerging technologies with national security implications. I was an Air Force officer working on advanced radar and satellite systems at the time the Predator drone was deployed. I spent four years in that field, so it was natural that, as a Ph.D. student, I was interested in studying the national security implications of emerging technologies. I first wrote about drones, and as the drone debate moved toward questions of autonomy, it inevitably came to involve artificial intelligence.
In 2018, I attended an AI workshop at a Washington, D.C. think tank where OpenAI's new GPT-2 capability was presented. We had just been through the 2016 election and foreign election interference, which was relatively easy to spot because of small tells like grammatical errors from non-native English speakers. Those errors were not surprising given that the interference came from the Russia-backed Internet Research Agency. Seeing that presentation, I immediately realized that these tools could generate credible disinformation at scale and, through micro-targeting, manipulate the psychology of American voters far more effectively than if an individual tried to write the content by hand, where scale will always be an issue.
I reached out to OpenAI and became one of their early academic collaborators in their staged release strategy. My particular research aimed to investigate the potential for misuse, namely whether GPT-2, and later GPT-3, could be credible as a generator of political content. In a series of experiments, I assessed whether the public would find this content credible, and then I also conducted a large field experiment in which we generated AI-written "constituent letters," randomized them with letters from actual constituents, and checked whether legislators would respond at the same rate. That told us whether legislators could be fooled and whether malicious actors could shape the legislative agenda through large-scale letter-writing campaigns.
These questions go to the heart of what it means to be a sovereign democracy, and I concluded unequivocally that these new technologies do represent a new threat to our democracy.
What work (in the AI field) are you most proud of?
I'm very proud of the field experiment I conducted. No one had done anything similar, and we were the first to demonstrate the disruptive potential in the context of the legislative agenda.
But I'm also proud of a tool that unfortunately never made it to market. I worked with several computer science students at Cornell University to develop an application that would process the incoming mail congressional offices receive and allow them to respond to constituents in meaningful ways. We were working on this before ChatGPT, using AI to digest the large volume of email and help time-pressed staff communicate with people in their districts and states. We thought these tools were important not only because constituents are disaffected with politics, but also because the demands on legislators' time keep increasing. Developing AI in this public-interest way seemed like a valuable contribution and interesting interdisciplinary work for political scientists and computer scientists. We ran a number of experiments to assess the behavioral question of how people would feel about AI-assisted responses, and we concluded that perhaps society was not yet ready for something like this. But a few months after we pulled the plug, ChatGPT arrived and AI became so pervasive that I almost wonder why we worried about whether it was ethically dubious or legitimate. Still, I feel it was right to ask hard ethical questions about the legitimate use case.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
As a researcher, I haven't felt those challenges terribly acutely. I was recently out in the Bay Area, and it was literally all guys giving elevator pitches in hotel elevators, a scene that could feel intimidating. My advice is to find mentors (both men and women), develop skills and let those skills speak for themselves, take on challenges, and stay resilient.
What advice would you give to women looking to enter the AI field?
I think there are many opportunities for women. You need to develop skills and have the confidence to succeed.
What are the most pressing issues facing AI as it evolves?
I worry that the AI community has developed too many research initiatives focused on things like "superalignment," which obscures the deeper, or actually the right, question of whose values, or which values, we are trying to align AI with. Google Gemini's troubled rollout showed the caricature that can result from aligning with a developer's narrow set of values, producing (almost laughable) historical inaccuracies in its outputs. I believe those developers' values were sincere, but the episode revealed that these large language models are programmed with a particular set of values that will shape how people think about politics, social relationships, and all sorts of sensitive topics. Those issues are not existential risks, but they do shape the fabric of society and confer considerable power on the big firms (OpenAI, Google, Meta, and so on) that are in charge of those models.
What issues should AI users be aware of?
As AI becomes more ubiquitous, I believe we have entered a world of "trust but verify." It would be nihilistic not to believe anything, but there is a lot of AI-generated content out there, and users really need to be careful about what they instinctively trust. It's worth looking for alternative sources to verify authenticity before assuming everything is accurate. But I think we've already learned that lesson through social media and misinformation.
What is the best way to build AI responsibly?
I recently wrote an article for the Bulletin of the Atomic Scientists, which started out covering nuclear weapons but has expanded to address disruptive technologies like AI. I had been thinking about how scientists can be better public stewards, and I wanted to connect the historical cases to the research I was doing for my book project. In the piece, I outline a set of steps I would endorse for responsible development, and I also speak to why some of the questions AI developers are asking are wrong, incomplete, or misguided.