Instagram’s parent company Meta announced new features on Thursday aimed at improving safety for teenagers on the platform. The move comes after growing criticism over young users’ exposure to harmful content, and Meta appears to be trying to step up and take responsibility.
Protecting Teens from Unwanted Nudes
A major focus is protecting teens from receiving nude photos in Instagram direct messages. Meta plans to test technology that detects nudity in photos before a user sends them. For anyone under 18, Instagram will automatically blur messages containing nude content.
The goal is primarily to stop predators from sending unwanted explicit images to minors. Meta emphasized that the technology will work even in encrypted chats, balancing privacy and safety. The company is also encouraging adults to enable the feature on their accounts.
Cracking Down on Sextortion Scams
Beyond unwanted nudes, Instagram has an ongoing problem with scammers exploiting teens through “sextortion”: threatening to expose private photos unless the victim pays. Meta is therefore developing tools to proactively identify accounts tied to these scams.
For example, the company plans to show pop-up warnings when someone interacts with a suspicious account. This could help teens avoid manipulation and extortion before things go too far.
Part of Broader Youth Protection Push
These latest updates build on previous changes intended to improve young users’ wellbeing. Back in January, Meta announced plans to limit teens’ exposure to content about self-harm, eating disorders, and other sensitive topics that could promote dangerous behaviors.
They also committed to defaulting teens into more private account settings on Instagram and Facebook. This aims to shield them from potentially inappropriate contact with unknown adults.
Legal Pressure and Scrutiny Mounting
Meta’s youth safety push comes amid rising legal pressure over youth data privacy and protection. Last October, attorneys general from more than 30 states sued Meta, accusing the company of misleading the public about the risks its platforms pose to teens.
Lawmakers in Europe have also been demanding details on Meta’s approach to shielding minors from illegal and dangerous material online. The pressure is clearly on for Meta to convince regulators it takes child safety seriously.
Is Meta Doing Enough?
While these latest measures seem reassuring, some child-safety advocates argue Meta still isn’t doing enough given the scale of risk on platforms used by billions. They want assurances that Meta will prioritize children’s interests over profitable algorithms.
Others worry the new nudity detection could become an invasion of privacy if abused. Meta says the features are designed not to analyze message content beyond scanning images for nudity.
But with tech this powerful, there’s always potential for unintended consequences or mission creep. As with all attempts to balance safety and speech online, the details and execution will matter tremendously.
For now, Meta deserves credit for trying to address legitimate issues raised by parents, regulators, and society. As Instagram plays a central role in many teens’ lives, we should encourage the company to keep improving the experience on the platform. But we also can’t expect quick fixes to deeply complex wellbeing challenges.