Last month, New York City came under increased scrutiny after it was revealed that an AI-powered chatbot was providing false information and encouraging small business owners to break laws and violate worker and tenant protections. When asked about the chatbot's shortcomings, which were first reported by the investigative news outlet The Markup, New York City Mayor Eric Adams said that "[a]nytime you use technology, you need to put it into a real environment to work out the kinks."
Weeks later, the chatbot is still live and still offering incorrect advice, and those "kinks" are being worked out at the expense of actual humans.
The "move fast and break things" philosophy Adams invoked may still hold sway among Silicon Valley entrepreneurs, but it is a terrible guideline for the public sector, which is accountable to the public for the consequences of such failures. The New York City chatbot episode perfectly illustrates how prematurely deploying new technology, especially AI, can impose costs on governments and residents that far outweigh the benefits.
Built by Microsoft and released in October as part of the New York City Artificial Intelligence Action Plan (touted as the first of its kind in a major U.S. city), the chatbot is hosted on the city's Small Business Services website. Its goal is to give business owners "access to trusted information" drawn from official city sources to help them "start, run, and grow their businesses." That seems harmless enough. And what business owner wouldn't be drawn to the promise of instant, direct answers instead of tedious clicking around to find the right FAQ, form, or phone number?
Had it been implemented well, the chatbot could have advanced the city's efforts to streamline and improve public services. Instead, it poses a number of problems for the city government and puts residents at risk.
For example, according to The Markup's investigation, the chatbot falsely stated that employers may take workers' tips. On paper, New York City has the strongest labor protections in the United States. But those laws are hard to enforce, especially when a government-sanctioned chatbot is feeding business owners false information about them. Moreover, because wage theft enforcement depends on worker complaints, such misinformation is likely to deter workers from coming forward. When a worker suspects their rights are being violated through withheld tips, an employer can now rebut the claim by pointing to an AI chatbot, like the one New York City deployed, that projects authority and legitimacy.
Protecting workers' rights is already difficult, and technological systems can make it harder still. Research by Data & Society has shown how automated systems can scale up the unpredictability of work through scheduling software, and how tip theft can be automated on platforms like Amazon Flex and Instacart. Amazon, in fact, was fined $60 million by the Federal Trade Commission for this very practice. Existing tip protection and fair scheduling laws can hold employers accountable regardless of the tools they use, but worker protections are only as effective as their enforcement.
A recent report by Data & Society and Cornell University examined a New York City law that requires employers to notify job candidates when they use automated decision tools in hiring or promotion. It found that compliance with the law appeared to be alarmingly low and that the law's usefulness to job seekers was limited.
Spreading false information can also create legal problems for cities and businesses alike. Air Canada recently lost a small claims case brought by a passenger who said the airline's AI chatbot had misled her about its bereavement fare policy. If the entity in question were a government, it could be held liable for providing false information, and workers could in turn sue employers who violated the law after acting on that misinformation.
The public, who interact with government agencies and stand to be adversely affected by the AI systems those agencies deploy, should have the opportunity to weigh in on which technologies government adopts. At the end of the day, it is a matter of trust. If citizens cannot rely on their democratically elected government to accurately tell them their rights, and these technological intermediaries now act as that government's representatives, they will be less likely to trust public institutions at all.
As governments continue to adopt more technology, it is imperative that new tools be thoroughly evaluated and tested before they are released into the world. AI has the potential to dramatically improve many government processes and could enable cities to deliver better services. But when tools are poorly designed, with little attention to how technology is embedded in society, they can shift power relations and people's relationship to their government. In this case, the more likely outcome is a further erosion of trust in public institutions, undermining the very laws and regulations the city is charged with articulating and protecting.
Aiha Nguyen is program director of the Future of Work program at Data & Society, which seeks to better understand emerging disruptions in the workforce resulting from data-centric technological development, and to create new frameworks for understanding those disruptions through evidence-based research and collaboration.