GWI Australia

Machine learning in child protection – the ethical and practical considerations

The rise of Machine Learning (ML) and Artificial Intelligence (AI) over the past decade presents a dominant disruptive force for almost all services and industries. Here we explore the unique opportunities and challenges these technologies present for child protection and social services agencies and organisations.

Today’s caseworkers face continual issues of heavy caseloads, collating data from multiple sources and taxing administration requirements, all of which combine to reduce the time they spend actively in the field or engaging with those in need. These are classic examples of the type of problem that ML applications can help resolve.

Despite the benefits, the development of AI raises serious ethical and practical questions in highly sensitive fields such as child protection services where decisions could directly impact a child’s safety and well-being.

Some of the critical issues that need to be carefully considered before implementing an ML solution in child protection services include:

  • Database readiness – ML algorithms require a large volume of highly cleansed and structured inputs. Collated data from multiple departments is required, as are specific structures, skills and data quality processes.
  • Integration of bias – ML programs are only as good as the data they’re trained on. If any bias exists in the training data, ML programs will recognise it as a pattern and amplify it. This bias needs to be actively considered in child protection services, where minority groups are often over-represented.
  • Skills gap – The skills and knowledge of data scientists and child protection experts need to be integrated. Both sides need to understand what information they must share; not just to understand practical objectives, but to develop an understanding of the roles each party plays and to consider different philosophies, uses of language and ethical priorities.
  • Tech literacy – Translating scientific jargon into easily understood language is critical for communicating effectively with the public and alleviating concerns. This issue is compounded by the term “AI” being somewhat misleading about the technology’s current capabilities, and by a general scepticism about computers replacing the role of humans.
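
The database-readiness point above can be made concrete with a minimal data-quality check. This is an illustrative sketch only: the field names (`case_id`, `referral_date`, `risk_score`) and the `validate_record` helper are hypothetical, standing in for whatever schema an agency's collated, multi-department dataset actually uses.

```python
# Hypothetical schema for a collated case record; real agencies would
# define this from their own departmental data dictionaries.
REQUIRED_FIELDS = {"case_id": str, "referral_date": str, "risk_score": float}

def validate_record(rec):
    """Return a list of data-quality problems found in one case record.

    ML pipelines need highly cleansed, structured inputs; a check like
    this would run before any record enters a training set.
    """
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in rec:
            problems.append(f"missing field: {field}")
        elif not isinstance(rec[field], expected_type):
            problems.append(
                f"bad type for {field}: {type(rec[field]).__name__}"
            )
    return problems

clean = {"case_id": "C-001", "referral_date": "2023-01-05", "risk_score": 0.42}
dirty = {"case_id": "C-002", "risk_score": "high"}  # missing date, wrong type

print(validate_record(clean))   # no problems
print(validate_record(dirty))   # two problems reported
```

In practice such checks would sit alongside the structures, skills and data-quality processes the list above describes, catching inconsistencies introduced when departments merge their records.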
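
The bias point can likewise be illustrated with a simple audit. The sketch below, using entirely synthetic records and a hypothetical `label_rates_by_group` helper, shows the kind of group-level disparity check an agency might run on historical labels before training: a large gap between groups suggests the data encodes bias a model would learn and amplify.

```python
from collections import defaultdict

def label_rates_by_group(records, group_key="group", label_key="label"):
    """Compute the positive-label rate for each group in the records.

    Comparing these rates across demographic groups is a crude but
    useful first check for bias embedded in historical training data.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for rec in records:
        counts[rec[group_key]][0] += rec[label_key]
        counts[rec[group_key]][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Synthetic, illustrative data: group A was historically flagged
# three times as often as group B.
records = (
    [{"group": "A", "label": 1}] * 30 + [{"group": "A", "label": 0}] * 70
    + [{"group": "B", "label": 1}] * 10 + [{"group": "B", "label": 0}] * 90
)
rates = label_rates_by_group(records)
print(rates)  # a 3x disparity between groups, worth investigating
```

A disparity like this does not by itself prove the data is biased, but it flags exactly the kind of over-representation of minority groups that the list above warns must be actively considered.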

There are already a number of AI solutions being used in child protection, including Spotlight, SAS Analytics for child well-being, Eckerd Connects and the Allegheny family screening tool. These tools enable law enforcement and key stakeholders to collaborate, generate one comprehensive view of a child and cut through an enormous amount of data in critical situations.

Using artificial intelligence and machine learning in child protection services is a very promising area, with proven models that can help save lives. While some problems have been identified and need addressing before implementation, organisations that adopt and utilise this technology will experience benefits such as up-to-date predictions, improved precision and fast analysis of large data sets.

Most importantly, faster and more accurate ways to raise the alarm for children and vulnerable citizens at risk can help to prevent tragedy before it occurs.
