Protecting the vulnerable, or automating harm? AI’s double-edged role in spotting abuse
June 13, 2025

(The Conversation) – When thoughtfully implemented, AI tools have the potential to enhance safety and efficiency. For instance, predictive models have helped social workers prioritize high-risk cases and intervene earlier.
But as a social worker with 15 years of experience researching family violence – and five years on the front lines as a foster-care case manager, child abuse investigator and early childhood coordinator – I’ve seen how well-intentioned systems often fail the very people they are meant to protect.
Now, I am helping to develop iCare, an AI-powered surveillance camera that analyzes limb movements – not faces or voices – to detect physical violence. I’m grappling with a critical question: Can AI truly help safeguard vulnerable people, or is it just automating the same systems that have long caused them harm?