AI algorithms are starting to play a variety of roles in children's digital ecosystems, embedded in the connected toys, apps, and services that children interact with daily. Such AI systems offer children many advantages, such as personalized teaching and learning from intelligent tutoring systems, or content monitoring and filtering algorithms that proactively identify potentially harmful content or contexts before children encounter them.
AI systems in games and entertainment services provide personalized content recommendations, while social robots power interactive characters in ways that make them engaging and human-like. Going forward, AI systems will in all likelihood become even more pervasive in children's applications, simply because of their usefulness in creating compelling, adaptive, and personal user experiences.
Such systems could also play a role in making applications more inclusive and accessible, as researchers explore ways to make them sensitive and adaptive to children's specific needs and abilities.
Some apps now on the market already use AI to help children play more creatively.
There has also been growing effort to regulate AI more responsibly. Regulatory frameworks have emerged that attempt to systematically characterize the risks of AI technologies and to establish methods for identifying and mitigating those risks. Yet a 2018 UNICEF review of 20 national AI strategies found that very little attention had been given to safeguarding children's rights in an algorithm-driven economy and society.
Meanwhile, a separate branch of work has focused more specifically on guiding the design of AI (or digital) technologies for children. The two branches are separate but related, and they sometimes address similar topics in different ways. The result is a confusing array of frameworks and guidelines with somewhat overlapping concerns, which makes it difficult for designers and practitioners to derive concrete design recommendations and standards.
As UNICEF and other organizations emphasize, the evolution of AI technology demands specific attention to children, so that child-specific rights and needs are recognized. The potential impact of artificial intelligence on children deserves special attention, given children's heightened vulnerabilities and the numerous roles AI will play throughout the lifespan of individuals born in the 21st century.
As part of its AI for Children project, UNICEF has developed policy guidance to promote children's rights in government and private-sector AI policies and practices. The policy guidance explores AI systems, considers the ways in which they impact children, and offers nine requirements for child-centered AI.
One of the major challenges in designing age-appropriate AI for children is translating AI policies and design guidelines into practice for AI developers. Understanding how AI systems are used in products for children, and what harms and impacts they carry, is still a new and emerging area of investigation.
On the one hand, a growing body of literature characterizes AI harms across many kinds of applications; on the other, a complementary body of literature focuses on risks to children, and similarly includes primary research, proposed policies, and codes of practice, such as the UK ICO's Age Appropriate Design Code and UNICEF's policy guidance on AI for children. Together, these diverse efforts have created a confusing landscape for designers and practitioners seeking to produce concrete, safe designs for children. Ultimately, there is only one way to analyze the impact these AI tools have on children: to observe and learn.