Starting today, Instagram will warn users when they’re about to post a “potentially offensive” caption on a photo or video being uploaded to their main feed, the company has announced. If a user writes something that the service’s AI-powered tools think could be hurtful, the app will show a notification saying the caption “looks similar to others that have been reported.” It will then encourage the user to edit the caption, though they can still post it unchanged.
The new feature builds on a similar AI-powered tool that Instagram introduced for comments back in July. The company says that nudging people to reconsider posting potentially hurtful comments has produced “promising” results in its fight against online bullying.
This is just the latest in a series of measures that Instagram has been taking to address bullying on its platform. In October, the service launched a new “Restrict” feature that lets users shadow ban their bullies, and last year, it started using AI to filter offensive comments and proactively detect bullying in photos and captions.
Unlike Instagram’s other moderation tools, this feature relies on users themselves to recognize when a caption crosses the line. It’s unlikely to stop the platform’s more determined bullies, but hopefully it has a shot at protecting people from thoughtless insults.
Instagram says the new feature is rolling out in “select countries” for now, but it will expand globally in the coming months.