Accurate, Focused Research on Law, Technology and Knowledge Discovery Since 2002

Lawrence school district using AI to look for ‘concerning behavior’ in students’ activity

LJworld.com (read free): "The Lawrence [Kansas] school district has purchased a new system that uses artificial intelligence to look for warning signs of 'concerning behavior' in the things students type, send and search for on their district-issued computers and other such devices. The purchase of the software system, called Gaggle, comes at a time when questions are growing about how artificial intelligence will affect people's privacy. But school district leaders are emphasizing that the software's main purpose will be to help protect K-12 students against self-harm, bullying, and threats of violence. 'First and foremost, we have an obligation to protect the safety of our students,' Lawrence school board member Ronald 'G.R.' Gordon-Ross told the Journal-World. 'It's another layer of security in our quest to stay ahead of some of these issues.' Gordon-Ross, who is a longtime software developer, said that he respects the 'privacy piece' of the question surrounding the use of monitoring systems. But he also said it's important to keep in mind that the iPads and other devices that the software will monitor are the district's property, even though they're issued to students – 'we're still talking about the fact that they're using devices and resources that don't belong to them.'"

See also from LJ World [read free] – New security system that monitors students' computer use has 'inundated' district with alerts; leader apologizes to staff… "According to information obtained from the district on Friday, there have been 408 'detections' of concerning behavior since Gaggle's districtwide launch on Nov. 20. Of those, 188 have resulted in actual 'alerts.' District spokesperson Julie Boyle said that there are three different priority levels that Gaggle uses to classify the concerning information it detects. The lowest level, 'violations,' includes minor offenses like the use of profanity. Those do not trigger alerts, but the system collects data on them 'in case future review is necessary.' Next is a level called 'Questionable Content,' which triggers a 'non-urgent alert to the building administrators for review and follow-up as necessary.' Finally, Boyle said, there is the most urgent level: 'Potential Student Situations.' This level includes warning signs of suicide, violence, drug abuse, harassment and other serious behavioral or safety problems, and it triggers 'urgent alerts involving an immediate phone call, text, and email to the building administrators.' An alert of this kind is assigned to a staff member for investigation and follow-up."
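The three-tier triage the district describes can be sketched in code. The sketch below is a minimal illustration only, not Gaggle's actual implementation: the keyword lists, class names, and routing rules are all hypothetical (Gaggle's real detection reportedly relies on proprietary AI models, not simple keyword matching).

```python
from dataclasses import dataclass
from enum import Enum


class Priority(Enum):
    VIOLATION = 1      # minor offense (e.g. profanity): logged only, no alert
    QUESTIONABLE = 2   # "Questionable Content": non-urgent alert to admins
    URGENT = 3         # "Potential Student Situation": immediate phone/text/email


# Hypothetical term lists purely for illustration.
URGENT_TERMS = {"suicide", "violence", "drugs", "harassment"}
QUESTIONABLE_TERMS = {"fight", "cheat"}


@dataclass
class Detection:
    text: str
    priority: Priority
    triggers_alert: bool


def triage(text: str) -> Detection:
    """Classify a piece of monitored text into one of three priority levels."""
    words = set(text.lower().split())
    if words & URGENT_TERMS:
        # Most serious tier: alert goes out immediately and is
        # assigned to a staff member for follow-up.
        return Detection(text, Priority.URGENT, triggers_alert=True)
    if words & QUESTIONABLE_TERMS:
        # Mid tier: non-urgent alert for administrator review.
        return Detection(text, Priority.QUESTIONABLE, triggers_alert=True)
    # Lowest tier: recorded in case future review is necessary, no alert.
    return Detection(text, Priority.VIOLATION, triggers_alert=False)
```

This also illustrates why, as the district reported, only some detections (188 of 408) became alerts: the lowest tier is logged without notifying anyone.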
