Most parents experience anxiety when letting their young children use the internet unsupervised. From social media platforms to online gaming, children interact with strangers via chat features or encounter content unsuitable for their age. There is no way to guarantee that the people on the other side of the screen are of a comparable age, and because many of these platforms remain unmonitored, children may be exposed to explicit content.
According to the World Economic Forum, one in three internet users is under 18 years of age. Online child sexual abuse has consequently become a global public-safety concern, producing a generation of victims. The harm is not limited to sexual exploitation and abuse: child-targeted identity theft, phishing, harassment, trolling, exposure to hate speech, incitement to self-harm, and cyberbullying are all on the rise.
Last month, the online security company McAfee released a study titled 'Life Behind the Screens of Parents, Tweens, and Teens', which revealed that children in India between the ages of 10 and 14 adopt mobile phones faster than children elsewhere. One finding is a red flag: 48 per cent of children in India reportedly have private conversations online without knowing the other person's real identity, 11 per cent higher than the figure for children worldwide.
Source: McAfee
Never before have online safety and the presence of online threats been so prominently discussed by governments, the technology sector, law enforcement organisations, and civil society on the global stage. Large tech firms have adopted regulatory measures and principles in response, but these have so far proved insufficient to deal with the crisis.
The first job for any online service is to determine whether a user is an adult (18 and over) or a teen (13–17). Unfortunately, people are not always honest and often enter a false date of birth when they first sign up; misrepresentation of age is a common problem across the industry.
To that end, the tech team at Meta has built an AI tool: an adult classifier. The team trains the model on multiple signals, such as birthday wishes and the ages stated in them (for example, "Happy 21st Bday!"). If a user shares their age on Facebook, the same age is applied to their linked account on Instagram and to linked accounts on other apps. The team also created an "evaluation dataset" to assess the model's performance: reviewers manually label specific data points, such as birthday posts, that they consider strong markers of age.
In 2021, about half of Indian internet users participated in online gaming: of the country's 846 million internet users, 433 million played games, or roughly 35 per cent of the population. The number of online gamers in India is expected to reach 657 million by 2025.
Intel unveiled Bleep, an AI tool that filters offensive or disparaging language in chat while playing games. It was built in collaboration with Spirit AI, a startup specialising in data science and AI engineering; Spirit AI combines natural language understanding (NLU) with millisecond-scale scanning of millions of messages to make the tool effective. Bleep's interface offers a set of sliders so that users can filter hate speech by category, including "Sexually Explicit Language", "Racism and Xenophobia", "Ableism and Body-Shaming", and others.
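The slider mechanism can be sketched in a few lines. This is an assumed illustration, not Intel's implementation: the level names, the 0–3 scale, and the severity scores (which in a real system would come from an upstream NLU model) are all hypothetical.

```python
# Each category gets a user-chosen tolerance: "none" blocks everything
# flagged in that category, "all" lets everything through.
SLIDER_LEVELS = {"none": 0, "some": 1, "most": 2, "all": 3}

def should_bleep(severity: dict[str, int], sliders: dict[str, str]) -> bool:
    """Bleep a message if any category's severity exceeds the allowed level.

    `severity` maps category name -> score 0-3 from an upstream classifier;
    `sliders` maps category name -> the user's tolerance setting.
    Categories without an explicit slider default to "all" (no filtering).
    """
    for category, score in severity.items():
        allowed = SLIDER_LEVELS[sliders.get(category, "all")]
        if score > allowed:
            return True
    return False

sliders = {"Racism and Xenophobia": "none", "Ableism and Body-Shaming": "most"}
print(should_bleep({"Racism and Xenophobia": 1}, sliders))    # True
print(should_bleep({"Ableism and Body-Shaming": 1}, sliders)) # False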
Monitoring and timely intervention also matter. Parents are often unable to tell when they should step in, leaving children alone in front of a digital screen for long stretches. AI can reduce this uncertainty by alerting parents to potentially dangerous patterns and behaviours a child engages in, whether using applications, sending messages, or simply browsing the internet. Take, for instance, Omdena, a company that specialises in using AI for social good.
The Omdena team set itself a task: to find online data of chats between groomers and minors using machine learning algorithms. The team located numerous previously unidentified open data sources, most of them on the dark web. Using existing grooming data and natural language processing, it then built an anti-grooming chatbot and developed techniques for quickly identifying the language patterns that can eventually lead to grooming. Chats can thus be flagged even before grooming occurs.
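In the spirit of the project described above, here is a deliberately simplified sketch of early-warning flagging. Real systems like Omdena's use trained NLP models rather than a fixed phrase list; the patterns, threshold, and function names below are illustrative assumptions.

```python
import re

# A few example risk phrases; a production system would learn such
# patterns from labelled data, not hard-code them.
RISK_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in [
        r"\bdon'?t tell (your )?(mom|dad|parents)\b",
        r"\bour (little )?secret\b",
        r"\bhow old are you\b",
        r"\bsend (me )?a (photo|pic|picture)\b",
    ]
]

def risk_score(messages: list[str]) -> int:
    """Count how many distinct risk patterns appear in a conversation."""
    text = " ".join(messages)
    return sum(1 for p in RISK_PATTERNS if p.search(text))

def flag_conversation(messages: list[str], threshold: int = 2) -> bool:
    """Flag a chat for human review once enough risk patterns co-occur."""
    return risk_score(messages) >= threshold

chat = ["hey, how old are you?", "this is our secret ok", "send me a pic"]
print(flag_conversation(chat))  # True: three patterns matched
```

The point of flagging on co-occurring patterns, rather than single phrases, is to catch escalating conversations early while keeping false positives manageable — any flag still goes to a human moderator.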
Overall, the focus should increasingly shift towards combining AI-based detection with human moderation to protect young children from toxic online content. Treating AI tools as a panacea and an immediate solution to the problem, however, would be a misplaced notion.