According to the Committee for Children, one in four girls and one in 20 boys in the U.S. will be the victim of sexual abuse. The internet has made it easier than ever to propagate child sexual abuse imagery, and officials are pressing big tech companies to do more to stop it. This month, Google announced the release of a new AI tool, the Content Safety API, which monitors online content and identifies images of child abuse; Google says it helps reviewers find and act on 700 percent more abusive content than before.
Currently, tools like Microsoft PhotoDNA automate the flagging process, but only for images that match material already identified as abusive. Finding newly created abusive content is still a manual process, one that small hosting sites with limited resources would benefit from automating. Google's toolkit still requires human verification, but instead of requiring reviewers to look through every image, it surfaces the highest-risk images for review first.
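The two-tier approach described above can be sketched in a few lines. This is a generic content-moderation triage sketch, not Google's or Microsoft's actual implementation: the hash database, the `classify` callable, and the use of SHA-256 (real systems like PhotoDNA use perceptual hashes that tolerate resizing and re-encoding) are all illustrative assumptions.

```python
import hashlib

# Hypothetical database of hashes of previously flagged images.
# (This value is the SHA-256 of the empty byte string, used only for demo.)
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
}

def exact_hash(data: bytes) -> str:
    """Illustrative stand-in for a perceptual image hash."""
    return hashlib.sha256(data).hexdigest()

def triage(images, classify):
    """Split images into (1) automatic matches against known material
    and (2) a human-review queue of unknown images, ranked by an
    assumed classifier risk score in [0, 1], highest risk first."""
    known, unknown = [], []
    for name, data in images:
        if exact_hash(data) in KNOWN_HASHES:
            known.append(name)                 # matches prior material: auto-flag
        else:
            unknown.append((classify(data), name))
    # Reviewers see the riskiest unknown images first.
    review_queue = [name for _, name in sorted(unknown, reverse=True)]
    return known, review_queue
```

The design point is the split itself: hash matching disposes of known material with no human effort, while the classifier only orders the remaining queue, leaving the final judgment to a person.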
Although applying AI to trawl through vast amounts of content may seem intuitive, federal officials are skeptical. Cases of online child sexual abuse are highly nuanced, a problem an algorithm alone is unlikely to solve.