A Pseudonymous Peer-2-Peer Review System for Child Protection On-line

Authors: T. Martin, C. Durbin, M. Pawlewski, and D. Parish
Affiliation: Loughborough University; e-mail: {thomas.2.martin, chris.durbin, mark.pawlewski}@bt.com, d.j.parish@lboro.ac.uk
Pages: 82-89

This paper was originally published in Kierkegaard, S. (2009) Legal Discourse in Cyberlaw and Trade. IAITL.


1. Introduction

This paper looks at the problem of protecting children from on-line stalkers and predators. A recent survey of 1,500 children (aged 10-17) in the United States found that approximately 1 in 7 (13%) received unwanted sexual solicitations, and that 34% communicated online with people they did not know in person (Wolak, Mitchell & Finkelhor, 2006). Such solicitations often took the form of crude or vulgar comments in chat rooms; most victims were not bothered and handled the situation well. However, some victims felt traumatised, and some were targets of aggressive online solicitations (Mitchell, Finkelhor & Wolak, 2007). There is also a growing gap between what children do online and what their parents think they are doing (Lemish, 2008). With the increased importance of the internet in all of our lives, there is more and more pressure on children to be active on-line, and from a younger age. The dangers permeate almost the entire internet, and change rapidly as the technology evolves. Parents are ill-equipped to protect their children through no fault of their own: while they grew up in a society where these threats existed, the threats did not exist in this new form. Today's reduced barriers to communication have made the problem of protecting children much more complex. Children are often taught not to talk to strangers, but with the variety of social interactions available today, teaching a child to block all communications from unknown parties would be challenging for even the most technically minded parent. It is probably undesirable too (Wolak, Finkelhor, Mitchell & Ybarra, 2008).

This area has understandably received a lot of attention. There is a wide variety of content-control software available to prevent children from accessing illicit material. This mainly works by blocking known URLs, but also by dynamically analysing content. While by no means trivial, this problem is limited in that only the content being sent to the child needs analysis. These approaches do not apply to two-way interactions. Firstly, blocking entire sites or protocols is not necessarily desirable, as some safe use may be allowed (or else the child would be motivated to try to work around the blocks). Secondly, the danger a predator poses is not merely in displaying unwanted material to the child, but in arranging a meeting outside the parents' control. This can be done without mentioning anything overtly sexual. Predators are a danger because they can effectively mimic normal child-to-child conversations; if nothing else, one half could simply be copied and pasted from other conversations between actual children. The only difference may be in attempting to meet in person.

This paper discusses the existing approaches to online child protection and the conflicting requirements of the parent and the child in a moderated approach to chat. It presents an idea for a system of anonymous review with various options for added functionality, along with a justification of the system. The penultimate section expands on the two key technical components: pseudonymous messaging and intelligent text analysis.

2. Previous Work

Existing approaches to online child protection typically fall into four broad categories: Block, Review, Filter, and Moderate. To help the reader better understand the problem domain, these approaches and their associated weaknesses are examined below.

Blocking restricts access to protocols and applications deemed "unsuitable", for example peer-to-peer (P2P) networks or Instant Messaging (IM). Operating in a simple deny/permit fashion makes blocking something of a blunt and unwieldy tool. This lack of flexibility restricts its usefulness to situations where something must be prohibited outright.
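To make the deny/permit model concrete, the sketch below (in Python) assumes a hypothetical rule table keyed by destination port; the port numbers shown are illustrative only, and real blockers also inspect traffic to identify protocols running on other ports. It also shows why blocking is blunt: an entire protocol is denied regardless of how it is being used.

    # Illustrative rule table (assumed ports, not a definitive list).
    BLOCKED_PORTS = {
        6881: "BitTorrent (P2P)",
        5222: "XMPP (Instant Messaging)",
    }

    def is_permitted(dest_port: int) -> bool:
        """Deny/permit decision: traffic on a blocked port is refused outright."""
        return dest_port not in BLOCKED_PORTS

    for port in (80, 6881, 5222):
        print(f"port {port}: {'permit' if is_permitted(port) else 'deny'}")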

Reviewing technologies vary in type and application, but the core ethos is to allow the parent to monitor the child's activity. Website histories, messaging logs, emails, and even full replays of video conference sessions may be recorded. This may be impractical if the child is an avid internet user, or in families with multiple children. Reviewing also suffers from problems of privacy (older children are particularly sensitive about their privacy and may be tempted to circumvent the system) and the generation gap: parents may not be able to penetrate youth lingo and slang.

Filtering may be considered a subset of blocking, usually applied to restrict access to websites considered unwholesome in content. Filters generally consist of blacklisted (or whitelisted) URLs, or dynamic blocking of websites based on content, typically by examining sites for a list of proscribed keywords and phrases. Each of these methods has drawbacks. Blacklisting often involves content labelling, where sites labelled as containing certain content are blocked; labelling is performed by the website operator, who may not be aware of the labelling scheme or may neglect to use it. Some providers of filtering software manually review sites, but this approach does not scale, and both the quality of such filtering and its subjective nature have been questioned (National Research Council, 2002).
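As an illustration of the two filtering styles just described, the following sketch combines a static URL blacklist with dynamic keyword scanning of page content; the blacklist entries and keyword list are invented placeholders, not real curated lists.

    from urllib.parse import urlparse

    # Placeholder lists; a real filter would use curated, maintained data.
    BLACKLIST = {"blocked-site.example", "another-blocked.example"}
    PROSCRIBED = {"proscribedword1", "proscribedword2"}

    def should_block(url: str, content: str) -> bool:
        """Block if the host is blacklisted or the page uses a proscribed term."""
        host = urlparse(url).hostname or ""
        if host in BLACKLIST:                    # static URL blacklisting
            return True
        words = set(content.lower().split())
        return bool(words & PROSCRIBED)          # dynamic content scanning

Even this toy version exposes the subjectivity problem noted above: whoever curates the lists decides what counts as unwholesome.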

Many internet forums use moderation to enforce rules, edit posts, and ban disruptive users, trolls, and spammers. Some child-oriented forums, including those of the BBC, operate a process of pre-moderation: each message is examined before it is posted. Moderators are trained to screen messages for signs of bullying, harassment, or anything that may result in a child being exposed to harm. Moderation suffers two key drawbacks: scalability and human bias (subjectivity).
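A minimal sketch of such a pre-moderation queue follows; the looks_harmful stub is a stand-in for the trained moderator's judgement, and the queue makes the scalability drawback visible, since nothing is posted until a human empties it.

    from collections import deque

    pending = deque()  # messages held here, unposted, until reviewed

    def submit(author: str, text: str) -> None:
        """A child posts a message; it is queued rather than published."""
        pending.append((author, text))

    def looks_harmful(text: str) -> bool:
        # Stand-in for human judgement (bullying, harassment, grooming cues).
        return "meet me" in text.lower()

    def moderate(publish, reject) -> None:
        """A moderator drains the queue, approving or rejecting each message."""
        while pending:
            author, text = pending.popleft()
            (reject if looks_harmful(text) else publish)(author, text)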

The system proposed here addresses the issues highlighted above without sacrificing safety. The child can feel their privacy is being maintained: although messages of concern are reviewed, the contents of those messages will not be seen by their parents (thus shielding the child from any embarrassment). In this regard the system may be considered similar to traditional moderation, in that messages may be reviewed but not by the parents, while overcoming moderation's limitations.

3. Conflicting requirements

On-line child protection cannot be solved with technology alone. This paper therefore proposes a system that uses a combination of automation and human judgement to recognise threats. There are many potential pitfalls in trying to solve this problem. One solution might be to give parents comprehensive logs of their child's internet usage, but this would give them too much information to manage effectively, and the logs would be a tempting target for identity theft. If the parent has the power to control exactly what their child does on-line, they may be able to protect them better, but the controls could be overwhelming. Children, for their part, do not want their privacy violated, and will circumvent a system one way or another if it is too invasive. Even if they lack the skill necessary to circumvent it, they could always spend the majority of their internet time away from home (at school, at the library, at friends' houses, etc.). Some level of privacy for the child is therefore needed. Similarly, all access that can be given and kept "safe" should be allowed. It would also be naive to expect children to suddenly migrate to a new "safe" social network, IM network, etc.; any solution must cater for what they already use.

4. Description

The proposed system uses existing technology as a pre-filtering stage to create a prioritised list of 'suspect' chat conversations. These are subsequently analysed using human judgement by a pseudonymous volunteer, who sees an appropriately sanitised version of the data that does not divulge the identity of the child, thus protecting their privacy.
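The sketch below illustrates this pipeline under two loud assumptions: suspicion_score is a crude stand-in for whatever automated pre-filter is actually used, and the regex redaction in sanitise is far simpler than the sanitisation a deployed system would need.

    import re

    def suspicion_score(conversation: str) -> float:
        """Hypothetical pre-filter: crude cue counting, 0.0 (benign) to 1.0."""
        cues = ("how old are you", "our secret", "don't tell", "meet up")
        hits = sum(cue in conversation.lower() for cue in cues)
        return hits / len(cues)

    def sanitise(conversation: str, child_name: str) -> str:
        """Redact identifying details before a volunteer sees the text."""
        text = re.sub(re.escape(child_name), "[CHILD]", conversation, flags=re.I)
        return re.sub(r"\b\d{5,}\b", "[NUMBER]", text)  # phone numbers, etc.

    def triage(conversations: list, child_name: str) -> list:
        """Prioritised list of sanitised 'suspect' chats for human review."""
        scored = [(suspicion_score(c), sanitise(c, child_name))
                  for c in conversations]
        return sorted((s, c) for s, c in scored if s > 0)[::-1]

The volunteer reviewing the output of triage sees only redacted text, ordered by suspicion, which is the division of labour between automation and human judgement that the system relies on.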

The system works as a software client that can be downloaded and run on any PC. The primary user (presumably a parent of at least one child) installs and sets up the client. There are two stages to the setup. First, the parent must record any...
