Social media and other browsing records have long been used by police in criminal abortion investigations, a precedent that sparked new controversy after Dobbs v. Jackson Women’s Health Organization struck down federal abortion protections earlier this year.
In a recent case, police used a teenage girl’s private Facebook conversations to charge her and her mother with violating Nebraska’s abortion ban. Advocates are now raising concerns about the handling of LGBTQ children’s private data, reports The Guardian.
This year alone, lawmakers have introduced 300 anti-LGBTQ bills, including measures that restrict discussion of sexuality and gender in schools; about a dozen have passed. And according to a survey by the Center for Democracy and Technology, one in five LGBTQ students said they or a friend had been outed without their consent as a result of online surveillance of students.
Senators Ed Markey and Elizabeth Warren (both D-Mass.) warned in an April report that the widespread use of surveillance tools in schools could violate students’ civil rights. The senators argued that by flagging and reporting phrases related to sexual orientation and gender identity, monitoring software makes LGBTQ children more likely to face disproportionately high rates of discipline and to be outed to their parents without their consent.
Advocates are increasingly concerned that, as strict abortion laws take effect across the United States, students’ discussions of gender and sexuality, like private health conversations about abortion, will end up in the hands of the police.
Software companies have taken no steps to determine whether student activity monitoring software disproportionately targets students from marginalized groups, leaving schools in the dark.
None of the companies contacted by the senators has analyzed its products for possible discriminatory bias, even though data show that students from marginalized groups, particularly students of color, already face disparities in school discipline, and recent studies indicate that algorithms are more likely to flag language used by people of color and LGBTQ students as problematic.