Facebook Is About To Make Catfishing Problems Even Worse

[Image: scam computer keys showing swindles and fraud. Via Careful Parents]

Over the past week, a number of people have shared articles with me about Facebook’s testing of a new feature that is purported to alert users when it finds that someone is impersonating their account. Once alerted, the user can then report the fraudulent account and pray that Facebook will take it down. However, given my eight years of experience with this problem, I feel qualified to say that this approach will simply not work, for a number of reasons.

  1. Facebook often fails to take down fraudulent profiles: While I have successfully had Facebook take down hundreds of fake profiles (I find several new ones each day), there are certain profiles that it simply does not take down. For instance, I’ve been trying to get Facebook to take down the account of “Trofimov Sergei” (a user who is clearly using a profile photo of me and my son) for over a year now. Yet, no matter how many times I report the account, the profile remains. More disturbing is the fact that if you search for “Trofimov Sergei” on Facebook, you will see dozens of fake accounts by the same name using stolen photos of other men. Most of the deception is done in private communication with the (potential) victims, but every once in a while, you will find a public post where the fraudsters are asking for money for a feigned illness. Luckily, there are many people (often former victims) who do uncover and share their knowledge of these fraudulent accounts in order to contain some of the damage.
  2. Scammers may use photos of your children as their profile photo: After hundreds of reports, Facebook still refuses to take down the account of “Nelson Colbert,” a scammer who is using photos of my children as a profile photo. When you report an impersonation with Facebook’s current reporting tool, you ultimately have to choose one of the following: A) “This timeline is pretending to be me or someone that I know”, or B) “This timeline is using a fake name.” I have been completely unsuccessful when using Option B, and I have had only limited success with Option A: when you choose that option, you are asked to identify the user who is being impersonated, but when I identify myself, Facebook quickly rejects the report since it is clear that I am not the person in the profile photo. I have attempted to use Facebook’s “Report An Underage Child” tool (which, apparently, is only available in Canada after you log out), but this has also been completely unsuccessful. The most unnerving part of this particular profile is that I receive more reports about it from victims than about any other. In fact, there are literally dozens of pages of search results relating to “Nelson Colbert” and this scammer’s involvement in fraudulent activities. Yet it appears that Facebook has made this account untouchable; I suspect that the scammer behind it may have created falsified documentation to get the account validated internally.
  3. Scammers may use your elderly mother’s photo as their profile picture: These criminals often create sophisticated networks of friends and family in their schemes. For instance, scammers created a fake profile using my mother’s photos and named it “Maria Gallart.” I cannot report this profile to Facebook directly; I am only able to flag it to my mother so that she can deal with it. I did so, and as you would imagine, the distress, anxiety, and uncertainty that this caused my nearly 80-year-old mother was not something that she needed, nor something that she necessarily knew how to deal with. And even with my assistance, reporting the fraudulent account from my mother’s account (many times) has not led to it being taken down.
  4. Facebook doesn’t always believe the “real” person in cases of identity fraud: Facebook has taken down my account twice because a scammer reported me as being the fake Alec Couros. In both cases, I had to submit my passport to Facebook via email for verification (which is incredibly problematic for security reasons). I am unsure why I had to do this twice, and I am puzzled as to why my account wasn’t verified either time (even though I have applied for verified status). If Facebook’s proposed system is to succeed, it will have to rely on a secure, consistent, and foolproof way of verifying accounts. To date, the company has failed miserably in this respect.
  5. Facebook’s proposed system could give an advantage to the criminals: Fraudsters have often used photos of me that I have never previously used on Facebook. Based on the incomplete details provided so far about this new alert system, one might assume that if I were to use any of my personal photos after a scammer had done so, I would be the one flagged as an impersonator. Thus, the criminal might easily be regarded as having the authentic profile, which sounds like really bad news.

The Mashable article shared at the beginning of this post states that Facebook is rolling out these features as the company attempts to push its presence into regions of the world where “[impersonation] may have certain cultural or social ramifications” and “as part of ongoing efforts to make women around the world feel more safe using Facebook.” If that is the goal, Facebook’s proposed technology won’t help, and it may very well make things worse for women (or anyone) using the site. Already, Facebook is plagued with identity thieves who adversely affect the safety, comfort, and freedom of many of its users, and the problem will only continue to grow with these types of half-baked efforts. You may not be affected now, but unless Facebook does something to fully address this issue, you almost certainly will be.

(Digital) Identity in a World that No Longer Forgets

[This post was written jointly with Katia Hildebrandt and also appears on her blog.]

In recent weeks, the topic of digital identity has been at the forefront of our minds. With election campaigns running in both Canada and the United States, we see candidate after candidate’s social media presence being picked apart, with past transgressions dragged into the spotlight for the purposes of public judgement and shaming. The rise of cybervigilantism has led to a rebirth of mob justice: what began with individual situations like the shaming of Justine Sacco has snowballed into entire sites intended to publicize bad online behaviour with the aim of getting people fired. Meanwhile, as the school year kicks into high gear, we are seeing evidence of the growing focus on digital identity among young people, including requests for our interning pre-service teachers to teach lessons about digital citizenship.

All this focus on digital identity raises big questions about societal expectations (i.e. that one’s digital identity should be sanitized and mistake-free) and the strategies that are typically used to meet those expectations. When talking to young people about digital identity, a typical approach is to stress the importance of deleting negative artefacts and replacing them with a trail of positive artefacts that will outweigh these seemingly inevitable liabilities. Thus, digital identity has, in effect, become about gaming search results by flooding the Internet with the desired, palatable “self” so that this performance of identity overtakes all of the others.

But our current strategies for dealing with digital identity are far from ideal. From a purely practical perspective, it is basically impossible to erase all “negatives” from a digital footprint: the Internet has the memory of an elephant, in a sense, with cached pages, offline archives, and non-compliant international service providers. What’s more, anyone with Internet access can contribute (positively or negatively) to the story that is told about someone online (and while Europe has successfully lobbied Google for the “right to be forgotten” and to have certain results hidden in search, that system only scratches the surface of the larger problem and raises other troubling issues). In most instances, our digital footprints remain in the control of our greater society, and particularly of large corporations, to be (re)interpreted, (re)appropriated, and potentially misused by any personal or public interest.

And beyond the practical, there are ethical and philosophical concerns as well. For one thing, if we feel the need to perform a “perfect” identity, we risk silencing non-dominant ideas. A pre-service teacher might be hesitant to discuss “touchy” subjects like racism online, fearing future repercussions from principals or parents. A depressed teenager might fear that discussing her mental health will make her seem weak or “crazy” to potential friends or teachers or employers, and thus not get the support she needs. If we become mired in the collapsed context of the Internet and worry that our every digital act might someday be scrutinized by someone, somewhere, then the scope of what we can “safely” discuss online becomes incredibly narrow, limited to the mainstream and the inoffensive.

And this view of digital identity also has implications for who is able to say what online. If mistakes are potentially so costly, we must consider who has the power and privilege to take the risk of speaking out against the status quo, and how this might contribute to the further marginalization and silencing of non-dominant groups.

In a world where forgetting is no longer possible, we might instead work towards greater empathy and forgiveness

Our current strategy for dealing with digital identity isn’t working. And while we might in the future have new laws addressing some of these digital complexities (for instance, legislation is currently being proposed around issues of digital legacy), such solutions will never be perfect, and legislative changes are slow. Perhaps, instead, we might accept that the Internet has changed our world in fundamental ways and recognize that our societal mindset around digital missteps must be adjusted in light of this new reality: perhaps, in a world where forgetting is no longer possible, we might instead work towards greater empathy and forgiveness, emphasizing the need for informed judgment rather than snap decisions.

So what might that look like? The transition to a more forgiving (digital) world will no doubt be a slow one, but one important step is making an effort to critically examine digital artefacts before rendering judgment. Below, we list some key points to consider when evaluating problematic posts or other content.

Context/audience matters: We often use the “Grandma rule” as a test for appropriateness, but given the collapsed context of the online world, it may not be possible to participate fully in digital spaces if we adhere to this test. We should ask: What is the (digital) context and intended audience for which the artefact has been shared? For instance, was it originally posted on a work-related platform? Dating site? Forum? News article? Social network? Was the communication appropriate for the platform in which it was originally posted?

Intent matters: We should be cognizant of the replicability of digital artefacts, but we should also be sure to consider intent. We should ask: Was the artefact originally shared privately or anonymously? Was the artefact intended for sharing in the first place? How did the artefact come to be shared widely? Was the artefact made public through illegal or unethical means?

History matters: In face-to-face settings, we typically don’t unfriend somebody over one off-colour remark; rather, we judge character based on a lifetime of interactions. We should apply the same rules when assessing a digital footprint: Does the artefact appear to be a one-time thing, or is it part of a longer pattern of problematic content or behaviour? Has there been a sincere apology, and is there evidence that the person has learned from the incident? How would we react to the incident in person? Would we forever shame the person, or would we resolve the matter through dialogue?

Authorship matters: Generations of children and teenagers have had the luxury of having their childhoods captured only by the occasional photograph, and legal systems are generally set up to expunge most juvenile records. Even this Teenage Bill of Rights from 1945 includes the “right to make mistakes” and the “right to let childhood be forgotten.” We should ask: When was the artefact posted? Are we digging up posts that were made by a child or teenager, or is this a recent event? What level of maturity and professionalism should we have expected from the author at the time of posting?

Empathy matters: Finally, we should remember to exercise empathy and understanding when dealing with digital missteps. We should ask: Does our reaction to the artefact pass the hypocrite test? Have we made similar or equally serious mistakes ourselves but been lucky enough to have them vanish into the (offline) ether? How would we wish our sons, daughters, relatives, or friends to be treated if they made the same mistake? Are the potential consequences of our (collective) reaction reasonable given the size and scope of the incident?

This type of critical examination of online artefacts, taking into consideration intent, context, and circumstance, should certainly be taught and practiced in schools, but it should also be a foundational element of active, critical citizenship as we choose candidates, hire employees, and enter into relationships. As digital worlds signal an end to forgetting, we must decide as a society how we will grapple with digital identities that are formed throughout the lifelong process of maturation and becoming. If we can no longer simply “forgive and forget,” how might we collectively develop a greater sense of digital empathy and understanding?

So what do you think? What key questions might you add to our list? What challenges and opportunities might this emerging framework provide for digital citizenship in schools and in our greater society? We’d love to hear your thoughts.

Building Online/Blended Course Environments

I was recently invited to speak (via Google Hangout) as part of the #eLearnOnt “Game Changer” series, where I discussed various options for building online and/or blended course environments. I began with traditional LMS options and moved to the more nebulous concept of “Small Tools Loosely Joined” course/environment design.

See the recorded presentation below. As well, here are the presentation slides and a collection of related resources.

Thanks to the hosts for making this a fun experience.

Developing Upstanders in a Digital World

This past Wednesday, I was fortunate enough to co-facilitate a Day of Pink event for a local school division. We invited more than 100 elementary students (mostly 7th graders) to participate. The themes included (digital) citizenship, (digital) identity, anti-bullying, and becoming an upstander (someone who stands up to support the protection, safety, and wise decision-making of others).

We shared a number of scenarios with the children to help them discuss the importance of empathy and responsibility for others. Some of the topics may be seen as controversial, but they are increasingly characteristic of the realities that children face today. One particularly authentic and timely scenario was the recent #cuttingforzayn hashtag, which explicitly encouraged self-harm; the hashtag emerged after Zayn Malik quit One Direction.

This morning, I received these messages via Twitter:

[Image: Twitter messages - upstanders]

It takes a lot of courage for a child to stand up for another. And I would be naive to think that this act resulted simply from our event. Good parenting, positive role models (including teachers), and lifelong experiences encouraging empathy are what allow kids to do the right thing and to make the world better for themselves and for others. Self-harm, suicide, bullying, and other similar issues are difficult to discuss with preteens and teens. Yet, as illustrated by this example, these conversations are necessary and can make a significant difference in the life of a child.

So please have those difficult conversations with your children or your students and continue to try your best to understand the complex world faced by our kids. Our digital world not only includes the problems and pressures that we had as children, but has also expanded to include a complex web of global and social influences that can pose seemingly unpredictable challenges. Events like this may prove to be a catalyst for change, but not without consistent encouragement and positive growth from home and school.


Edtech MOOC, January 2013?

During my sabbatical year (July 1/12 to June 30/13), I plan to focus on Massive Open Online Courses (MOOCs) as one of my key areas of research. To do so, I’m considering planning, organizing, and facilitating a semester-long MOOC focused on educational technology, starting in January 2013. I envision the course being somewhat similar to my EC&I 831 course, but with a more explicit focus on the integration of technology in teaching, learning & professional development (hands-on sessions exploring major categories of tools with a focus on pedagogy & literacy).

I’m thinking that this course would be relevant to teachers, administrators, preservice teachers, teacher educators, librarians, parents, and likely many others hoping to sharpen their understanding of emerging skills and literacies. It would also be great to have newbies involved (people who are fairly new to educational technology and/or those we wouldn’t normally find on Twitter). However, before I get too far along in this, I want to make sure there is interest both from those who would enroll and from those who would help develop & facilitate the experience.

So, is there a need for this sort of thing? Is there anyone willing to help plan the experience? Anyone interested in participating in a course like this? Any thoughts on what we could do to make this successful? I’d love to hear from you.

Edit #1: There seems to be some interest already, so to make sure that I don’t lose any potential collaborators due to the chaos that is Twitter, please fill out this very short form if you are interested in participating. I will contact you very soon to get things started.

Edit #2: I wanted to capture the responses of potential collaborators/participants, so I put together this Storify. I’m really excited about this, and will get back to everyone by early September at the latest.

Edtech Posse Podcast 5.4

Heather Ross is back along with Dean Shareski, Rob Wall and Kyle Lichtenwald. They talk about digital safety and identity, digital residents and digital tourists. Unfortunately, Rick and I had to miss the conversation.

I am going to listen now … and you should too!

Is This Forever?

One of the videos I showed last night during my Media Literacy presentation was the recent “David After Dentist” video. The scene is of a seven-year-old boy who has just left the dentist’s office and is still feeling the effects of sedation. I posted the video to Twitter, and while most people found it quite funny, others were more critical of the scene being posted to YouTube for all to see. The original video (posted below) was uploaded on January 30, 2009, and has already been viewed over 7 million times.

Boing Boing, a highly influential group blog, posted the video on September 3. At that time, there had already been a few remixes; since the Boing Boing mention, the number of remixes has exploded. Two of my favourites are found below:


Chad (Vader) After Dentist

There are dozens more!

How does this relate to media literacy? During his state of sedation, the boy asks, “Is this forever?” While his dad reassures him that it isn’t, in the (digital) media sense, it is forever. Whether the boy likes it or not, he is now an Internet star. The scene will likely follow him into classrooms, into careers, into relationships; it will forever be part of his identity. Whether he comes to see his fame as mostly positive (see Gary Brolsma) or mostly negative (see Ghyslain Raza) is yet to be seen. What is certain is that the distribution of this video, a piece of David’s identity, is no longer in anyone’s full control.

Self Child-Pornography

Here is another interesting case for your digital internship discussions, one that shows that our legal systems are not always equipped to handle issues arising from emerging uses of technologies, especially by teens.

A 15-year-old Ohio girl was arrested on felony child pornography charges for allegedly sending nude cell phone pictures of herself to classmates. Authorities are considering charging some of the students who received the photos as well.

The unnamed student from Licking Valley High School in Newark, Ohio was arrested Friday after school officials discovered the materials and notified police. She spent the weekend in juvenile detention and entered a plea of “deny” on Monday, according to The NewarkAdvocate.com.

Charges include illegal use of a minor in nudity-oriented material and possession of criminal tools. If convicted, the girl could be forced to register as a sexual offender for 20 years, but because of her age, the judge hearing the case has some flexibility in the matter, an official told the Advocate.

Full story here.