AI startup Anthropic is changing its policies to allow minors to use its generative AI systems, at least in certain situations.
In a post on its official blog Friday, Anthropic announced that it will allow tweens and teens to begin using third-party apps (though not necessarily its own apps) that leverage its AI models, as long as the app developers implement specific safety features and disclose to users which Anthropic technologies they are using.
In a support article, Anthropic lists the safety measures that developers creating AI-powered apps for minors should include: age verification systems, content moderation and filtering, educational resources on "safe and responsible" AI use for minors, and more. The company also says it may make available "technical measures" intended to tailor AI product experiences for minors, such as a "child safety system prompt" that developers targeting minors would be required to implement.
Developers using Anthropic's AI models must also comply with "applicable" child safety and data privacy regulations, such as the Children's Online Privacy Protection Act (COPPA), the U.S. federal law that protects the privacy of children under 13, Anthropic says. The company plans to "regularly" audit apps for compliance, suspend or terminate the accounts of those who repeatedly violate the compliance requirements, and require that developers "clearly state" their compliance on their public-facing websites and documentation.
"There are certain use cases where AI tools can offer significant benefits to younger users, such as test preparation or tutoring support," Anthropic wrote in the post. "With this in mind, our updated policy allows organizations to incorporate our API into their products for minors if they implement certain safety features and disclose to their users that their product leverages an AI system."
Anthropic's change in direction comes as children and teens increasingly turn to generative AI tools for help not only with schoolwork but with personal issues, and as rival generative AI vendors, including Google and OpenAI, explore use cases aimed at children. This year, OpenAI announced a new team to research child safety and a partnership with Common Sense Media to co-create child-friendly AI guidelines. Meanwhile, Google has made its chatbot Bard (since rebranded as Gemini) available to teens in English in select countries.
A poll by the Center for Democracy and Technology found that 29% of children report having used generative AI like OpenAI's ChatGPT to deal with anxiety or mental health issues, 22% to deal with problems with friends, and 16% to deal with conflicts with family members.
Last summer, schools and universities rushed to ban generative AI apps, specifically ChatGPT, over fears of plagiarism and misinformation. Some have since lifted their bans. However, not everyone is convinced of generative AI's potential for good: surveys such as one from the U.K. Safer Internet Centre found that more than half of children (53%) say they have seen peers use generative AI in a negative way, for example creating believable false information or images used to upset someone (including pornographic deepfakes).
There is a growing call for guidelines on children's use of generative AI.
Late last year, the United Nations Educational, Scientific and Cultural Organization (UNESCO) called on governments to regulate the use of generative AI in education, including age limits for users and the introduction of guardrails around data protection and user privacy. “Generative AI has the potential to offer great opportunities for human development, but it also has the potential to cause harm and prejudice,” UNESCO Director-General Audrey Azoulay said in a press release. “It cannot be integrated into education without public involvement and the necessary safeguards and regulations from governments.”