The problem with algorithms is that they still can’t spot certain offensive content that a human would flag. The latest example comes from Instagram, which used an image reading “I will rape you before I kill you, you filthy whore” in a Facebook ad.

Olivia Solon, a reporter for the Guardian, discovered that Instagram had used her screenshot of a threatening email to advertise the app on Facebook, its parent company’s platform. The email’s subject line read “Olivia, you fucking bitch!!!!!!!!”

Solon posted the screenshot on Instagram last year. Explaining the message, she wrote: “This is an email I received this afternoon. Sadly this is all too common for women on the internet. I am sure this is just an idiot rather than any kind of credible threat but it’s still pretty vile.”

Instagram used the image in an ad shown to Solon’s sister, which included the line “See Olivia Solon’s photo and posts from friends on Instagram.” Because the post had received three likes and more than a dozen comments, Instagram’s algorithms apparently classed it as engaging content suitable for an ad.

“We are sorry this happened – it’s not the experience we want someone to have,” Instagram said in a statement. “This notification post was surfaced as part of an effort to encourage engagement on Instagram. Posts are generally received by a small percentage of a person’s Facebook friends.”

Last week, Facebook came under fire for allowing its algorithms to generate ad-targeting categories such as “Jew Hater,” based on interests and phrases found in users’ profiles. The company has since altered the process, adding human review for ad targeting options outside the 5,000 most commonly used terms.