Meta Takes Comprehensive Efforts To Protect Teens From Harmful Content On Instagram

Meta’s proactive measures on Instagram aim to safeguard users, particularly teens, from harmful content while embracing technological advancement

By Raunak Bose

Meta (Image via TechFirstNow)

In an effort to protect teens from explicit images and limit the spread of such messages, Meta, the parent company of Instagram, has introduced new features that blur messages containing nudity. The move is particularly significant given growing concern over the addictive nature of social media platforms and their effects on the mental health of young users.

The tech giant’s latest move deploys on-device machine learning to analyse images sent through Instagram’s direct messages. For users under 18, this nudity-protection feature will be on by default, while adult users will be encouraged to enable it as well.

Notably, the feature is compatible with end-to-end encryption: because images are analysed on the device itself, nudity protection works even in chats whose content Meta cannot access. Users can still report inappropriate content, and Meta can take action where necessary.

Meta’s Innovative Approach to Tackling Nudity and Sextortion on Instagram

Beyond addressing nudity, Meta is also using technology to detect accounts involved in sextortion schemes, and it is testing new pop-up messages to warn users who have interacted with suspicious accounts.

Mark Zuckerberg, CEO of Meta (via The Hans India)

The announcement follows earlier steps Meta has taken to improve safety across its platforms. In January, the company said it would restrict teenage users’ access to sensitive content, including posts about suicide, self-harm, and eating disorders.

Meta’s efforts, however, face legal challenges and regulatory scrutiny. Attorneys general from 33 US states, including California and New York, have sued the company for allegedly misleading the public about the safety of its platforms. Likewise, the European Commission has asked Meta to explain how it prevents children’s exposure to illegal and harmful content.

The fast-moving landscape of social media regulation underlines the need for constant innovation in user safety. Meta’s expanded safety features also signal that it acknowledges its responsibility to its vast user base, especially those under 18.

In an ever-changing digital realm, striking a balance between innovation and safety grows more important. Social media opens a new frontier for socializing and self-expression, but moderating content and protecting users becomes correspondingly harder.

Meta’s latest safety features aim to go the extra mile in this regard. The company intends to harness technology and work alongside stakeholders to build a safer space for users of all ages.

Ultimately, the question is how well these initiatives will reduce risks and foster a culture of trustworthy digital presence. Technological progress benefits us not only economically but also by helping keep online communities safe.
