What Kind of (Digital) Citizen?

This post was co-written with Katia Hildebrandt and also appears on her blog.

This week (June 5-11) we’ll be hosting a couple of events and activities related to digital citizenship as part of a series of DigCiz conversations. Specifically, we’d like to deepen the discussion around digital citizenship by asking how we might move from a model of personal responsibility (staying safe online) to one that takes up issues of equity, justice, and other uncomfortable concepts. In other words, we want to consider what it might look like to approach digital citizenship in a way that more closely resembles how we often think about citizenship in face-to-face contexts, where being a citizen extends beyond our rights to include our responsibility to be active and contributing members of our communities. Of course, that’s not to say that face-to-face citizenship is by default more active, but we would argue that we tend to place more emphasis on active citizenship in those settings than we do when we discuss its digital iteration.

So…in order to kick things off this week, we wrote this short post to provide a bit more background on the area we’ll be tackling.

Digital Citizenship 1.0: Cybersafety

The idea of digital citizenship is clearly influenced by the idea of “Cybersafety,” which was the predominant framework for thinking about online behaviours and interactions for many years (and still is in many places). This model focuses heavily on what not to do, and it relies on scare tactics designed to instill a fear of online dangers in young people. This video, titled “Everyone knows Sarah,” is a good example of the cybersafety approach to online interactions:

The cybersafety approach is problematic for a number of reasons. We won’t go into them in depth here, but they basically boil down to the fact that students aren’t likely to see PSAs like this one and then decide to go off the grid; the digital world is inseparable from face-to-face contexts, especially for today’s young people who were born into this hyper-connected era. So this is where digital citizenship comes in: instead of scaring kids offline or telling them what not to do, we should support them in doing good, productive, and meaningful things online.

From Cybersafety to Digital Citizenship

Luckily, in many spheres, we have seen a shift away from cybersafety (and towards digital citizenship) in the last several years, and this shift has slowly found its way into education. In 2015, we were hired by our province’s Ministry of Education to create a planning document to help schools and districts with the integration of the digital citizenship curriculum. The resulting guide, Digital Citizenship Education in Saskatchewan Schools, can be found here. In the guide, we noted:

“Digital citizenship asks us to consider how we act as members of a network of people that includes both our next-door neighbours and individuals on the other side of the planet and requires an awareness of the ways in which technology mediates our participation in this network. It may be defined as ‘the norms of appropriate and responsible online behaviour’ or as ‘the quality of habits, actions, and consumption patterns that impact the ecology of digital content and communities.’”

In the Digital Citizenship Guide, we also underlined the importance of moving from a fear- and avoidance-based model to one that emphasizes the actions that a responsible digital citizen should take. For instance, we suggested that schools move away from “acceptable use” policies (which take up the cybersafety model) and work to adopt “responsible use” policies.

Moving Beyond Personal Responsibility

While the move from cybersafety to digital citizenship has helped us to shift the focus away from what not to do online, there is still a tendency to focus digital citizenship instruction on individual habits and behaviours. Students are taught to use secure passwords, to find a healthy balance between screen time and offline time, to safeguard their digital identity. And while all of these skills are important pieces of being a good digital citizen, they revolve around protecting oneself, not helping others or contributing to the wider community.

So we’d like to offer a different model for approaching the idea of citizenship, one that moves beyond the individual. To do this, we have found it helpful to think about citizenship using Joel Westheimer’s framework. Westheimer distinguishes between three kinds of citizens: the personally responsible citizen, the participatory citizen, and the justice-oriented citizen. The table below helps to define each type.

Table taken from Westheimer’s 2004 article, linked above.

Using this model, we would argue that much of the existing dialogue around digital citizenship is still heavily focused on the personal responsibility model. Again, this is an important facet of citizenship – we need to be personally responsible citizens as a basis for the other types. But this model does not go far enough. Just as we would argue that we need participatory and justice-oriented citizens in face-to-face contexts, we need these citizens in online spaces as well.

So here’s our challenge this week: Is there a need to move beyond personal responsibility models of digital citizenship? And if so, how can we reframe the conversation around digital citizenship to aim towards the latter two kinds of citizen? How might we rethink digital citizenship in order to encourage more active (digital) citizenship and to begin deconstructing the justice and equity issues that continue to negatively affect those in online spaces, particularly those who are already marginalized in face-to-face contexts? And what are the implications of undertaking this shift when it comes to our individual personal and professional contexts, especially when it comes to modelling online behaviours and building (digital) identities/communities with our students?

These are big questions, and we certainly don’t have the answers yet – so we’d love to hear from you! Please consider commenting/responding in your own post, or come join us as we unpack these complex topics during the events listed below.

This week’s events:

  • On Tuesday, June 6 at 3 pm EDT, we will be hosting a webinar to discuss this week’s topic. If you are interested in being a panelist, please email us at alecandkatia@gmail.com – we’d love to have you join us! The webinar will take place via Zoom – to join as an attendee, just click this link.
  • On Wednesday, June 7 at 8 pm EDT, we will be moderating a Twitter chat with a number of questions related to this week’s topic. To join, please connect with us on Twitter (@courosa and @kbhildebrandt) and follow the #DigCiz hashtag.

Facebook Is Still Broken …

I received this email a few minutes ago (and a few hours after I noticed that my Facebook account was down).

[Screenshot: “Katia Hildebrandt says…”]

For the fourth time, Facebook has disabled my account because the company doesn’t believe I am who I say I am.

Yes, apparently I’m the one with the fake account.

Not “Obrien Gary Neil” or “Michael Walter” or “Nelson Colbert” or “Trofimov Sergei” or “Anne Landman” or “Dounas Mounir” or “Kyle W. Norman” or one of the hundreds of other fake accounts that I have reported to Facebook for using my images to scam vulnerable women across the globe. No. Once again, Facebook has decided to disable my account for using a fake name.

Despite the fact that I’ve already had to submit my government-issued ID to Facebook in each previous case.

Despite the fact that my account is nearly a decade old and linked to 2000+ Facebook friends.

Despite the fact that I’ve had countless media interviews about the problem.

If it can happen to me, it could certainly happen to you.

I’m starting to feel like a broken record, but I really need your help. Please share so that we can get Facebook’s attention. The reporting system is badly flawed, and as I’ve written previously, Facebook really needs to get it fixed.

Facebook Is About To Make Catfishing Problems Even Worse

Over the past week, I’ve had a number of people share articles with me related to Facebook’s testing of a new feature that purportedly alerts Facebook users when the company detects that someone is impersonating their account. Once alerted, the user is then able to report the fraudulent account and pray that Facebook will take it down. However, given my 8 years of experience with this problem, I feel that I am qualified to say that this approach simply will not work, for a number of reasons.

  1. Facebook often fails to take down fraudulent profiles: While I have successfully had Facebook take down hundreds of fake profiles (I find several new ones each day), there are certain profiles that it simply does not take down. For instance, I’ve been trying to get Facebook to take down the account of “Trofimov Sergei” (a user who is clearly using a profile photo of me and my son) for over a year now. Yet, no matter how many times I report the account, the profile remains. More disturbing is the fact that if you search for “Trofimov Sergei” on Facebook, you will see dozens of fake accounts by the same name using stolen photos of other men. Most of the deception is done in private communication with the (potential) victims, but every once in a while, you will find a public post where the fraudsters are asking for money for a feigned illness. Luckily, there are many people (often former victims) who do uncover and share their knowledge of these fraudulent accounts in order to contain some of the damage.
  2. Scammers may use photos of your children as their profile photo: After hundreds of reports, Facebook still refuses to take down the account of “Nelson Colbert,” a scammer who is using photos of my children as a profile photo. When you report an impersonation in Facebook’s current reporting tool, you ultimately have to choose one of the following: A) “This timeline is pretending to be me or someone that I know”, or B) “This timeline is using a fake name.” I have been completely unsuccessful when using Option B, and I have had only limited success with Option A: when you choose this option, you are asked to identify the user who is being impersonated, but when I identify myself, Facebook quickly rejects the report as it is clear that I am not the person in the profile photo. I have attempted to use Facebook’s “Report An Underage Child” tool (which is apparently only available in Canada after you log out), but this has also been completely unsuccessful. The most unnerving part of this particular profile is that I receive more reports about it from victims than I do about any other. In fact, there are literally dozens of pages of search results that relate to “Nelson Colbert” and this scammer’s involvement in fraudulent activities. Yet, it appears that Facebook has made this account untouchable. I suspect that the scammer behind it may have created falsified documentation to get the account validated internally.
  3. Scammers may use your elderly mother’s photo as their profile picture: These criminals often create sophisticated networks of friends and family in their schemes. For instance, the scammers created a fake profile using my mother’s photos and named her Maria Gallart. I cannot report this profile directly to Facebook; instead I am only able to report it to my mother to deal with it. I did so, and as you would imagine, the distress, anxiety, and uncertainty that this caused my nearly 80-year-old mother was not something that she needed nor something that she necessarily knew how to deal with. And even with my assistance, reporting the fraudulent account from my mother’s account (many times) has not led to the account being taken down.
  4. Facebook doesn’t always believe the “real” person in cases of identity fraud: Facebook has taken down my account twice because a scammer reported me as being the fake Alec Couros. In both cases, I had to submit my passport to Facebook via email for verification (which is incredibly problematic for security reasons). I am unsure of why I had to do this twice, and I am puzzled as to why my account wasn’t verified either time (even though I have applied for verified status). Facebook’s proposed system will have to rely on verifying an account using a secure, consistent, and foolproof system if it is to be successful. To date, the company has failed miserably in this respect.
  5. Facebook’s proposed system could give an advantage to the criminals: Fraudsters have often used photos of me that I have never previously used on Facebook. Based on the incomplete details provided so far about this new alert system, one might assume that if I were to use any of my personal photos after a scammer had done so, I would be the one flagged as an impersonator. Thus, the criminal might easily be regarded as having the authentic profile, which sounds like really bad news.

The Mashable article shared at the beginning of this post states that Facebook is rolling out these features as the company attempts to push its presence into regions of the world where “[impersonation] may have certain cultural or social ramifications” and “as part of ongoing efforts to make women around the world feel more safe using Facebook.” If that is the goal, Facebook’s proposed technology won’t help, and it may very well make things worse for women (or anyone) using the site. Already, Facebook is plagued with identity thieves who adversely affect the safety, comfort, and freedom of many of its users, and the problem will only continue to grow with these types of half-baked efforts. You may not be affected now, but unless Facebook does something to fully address this issue, you almost certainly will be.

r/NextSpace (Reddit)

For a number of years, I’ve enjoyed using Reddit as a source for my daily reading. Reddit, often known as “the front page of the Internet,” is frequently where one can find stories and trends before they go viral in the mainstream. As well, because of the networking and conversational properties of these spaces, I’ve often mused about the potential of Reddit as a space where educational conversations might be hosted and shared. There are several education-related subreddits (specifically themed topics or communities) such as r/education and r/edtech, but these spaces tend to be a bit stagnant.

Just recently, my friend @j0hnburns (and colleagues) took on the idea of developing a new subreddit at r/NextSpace with the goal of creating a space where deeper conversations around edtech-related topics could be hosted and shared. He’s written about the launch, including the overall rationale, how to get started with Reddit, and how to contribute to r/NextSpace.

To help with this launch, I’ve agreed to do an AMA (Ask Me Anything) starting on Monday, March 14th, 8 pm EST (or see your time conversion here). To participate, check out this AMA thread, ask questions (you can post them early if you like), upvote or downvote the questions or comments of others, and I will do my best to respond to whatever gets asked. I know I’m nowhere near as big a draw as those who have led some of the most popular AMAs, but hey, I’d like to help in any way to get this started. Plus, I think I have a lot to share regarding my thoughts on edtech, digital citizenship, digital identity, and other related topics. And of course, an AMA is about what you contribute as well!

So I hope that you will give Reddit and r/NextSpace a try, and hopefully I’ll hear from you at the AMA next week!

Romance Scams Continue And I Really Need Your Help

If you follow me closely, you know that I’ve been discussing romance scams (also known as “catfishing”) for several years now. In short, a romance scam is where criminals will harvest photos from social media and dating site profiles and then use these photos to set up fake profiles on these same sites to enter into online relationships with individuals for the purpose of defrauding victims out of money. A more technical definition of the term romance scam is provided below.

A romance scam is a confidence trick involving feigned romantic intentions towards a victim, gaining their affection, and then using that goodwill to commit fraud. Fraudulent acts may involve access to the victims’ money, bank accounts, credit cards, passports, e-mail accounts, or national identification numbers or by getting the victims to commit financial fraud on their behalf. (Wikipedia)

For at least eight years, scammers have been using my photos, and the photos of my family, to commit these crimes. I hear from new victims on a daily basis as they frequently find the “real” me through my previous writings on the topic. Unfortunately, many victims find out too late, often after they have already sent significant amounts of money to these scammers and/or have developed a significant emotional attachment. These are deeply complex crimes that rely on a victim’s capacity for love, trust, and good will for the execution of fraud.

Today, I read a victim’s report on a Facebook group that is dedicated to raising awareness of these scammers. The post is worth the read in itself as it highlights some of the tactics used in these cases. Relevant in this case is that the victim pointed to several social media profiles that were created with my photos and the photos of my family. I’ve included screenshots of these fake profiles below with some added context.

First, there is the fake profile using my photos and the name Alex Gallart. The use of a similar first name is notable: past victims have told me that once they found my real identity, they would approach the scammers with evidence that I am actually “Alec Couros”. In turn, the scammers would simply say that they use slightly different names or surnames for whatever purpose (e.g., mother’s name, professional name, etc.). In many cases, this additional lie seems to be accepted as plausible.

[Screenshot: the fake “Alec Gallart” profile]

Then there are photos of my real brother, George, used under the fake name John Williams in this case. Scammers will set up networks of fake profiles and communicate with victims from each of these to validate the key profile’s identity.

[Screenshot: the fake “John Williams” profile]

Then, why not throw in photos of my real daughter as well? In this case, the scammers use the fake name of Clara Gallart to set up yet another profile. Alec Gallart seems to be more authentic with each additional connection.

[Screenshot: the fake “Clara Gallart” profile]

Wait. Not real enough for you? How about adding photos of my real mother in yet another fake profile? As we know, grandmas will never lie to you.

[Screenshot: the fake “Maria Gallart” profile]

But I guess that wasn’t enough. A family’s set of fake profiles is pretty convincing, but the scammers felt that they needed to go the extra mile to make Alec Gallart even more believable. The scammers thought it would be great to exploit my father’s death (he passed away in 2013) by including this photo of my children at his burial mound.

[Screenshot: the burial photo posted by the fake “Maria Gallart” profile]

And they also included this photo of my mother remembering my father’s death via Facebook.

[Screenshot: the fake “Maria Gallart” profile’s post remembering my father]

So there you go. The more complex the social network becomes, the more convincing the scam will be.

So, this is where I need your help. This happens to me every single day. But it’s not just happening to me; it’s also happening to thousands of others on a daily basis. Yet Facebook and other social networks simply do not acknowledge that this problem exists. For instance, there is no specific way to report these accounts as romance scammers in Facebook’s reporting tool. In fact, as you can see from my friend Alan’s experience, these fake accounts are often deemed to be in compliance with Facebook’s community standards!

Back in August, Facebook announced 1 billion daily users. I am sure that number pleases the company’s investors, but I wonder how valid the claim is, considering that I’ve personally reported many hundreds of fake accounts and there are logically thousands more that I do not know of. Multiply my experience by the thousands of other people whose profile photos are being used, and I begin to suspect that Facebook is intentionally not addressing this reality because these fake accounts contribute to that impressive user count.

But even if these accounts don’t really put a dent in the one-billion-user figure, there are serious implications here for the identities and well-being of Facebook’s current and future users. A social media site needs to feel safe if one is to connect, share, and communicate with other users, especially with those we do not know so well. Facebook needs to do something about this problem, but I am convinced that it won’t act unless there is some heat on this issue.

So please, share this post widely. I want Facebook to acknowledge this problem. But more so, I want this issue to reach someone at Facebook who can help address it through a revamp of the reporting service and through a number of possible mechanisms to detect the scammers before they can hurt people. I’ve got ideas on how this can be done, but I need to be connected to someone who can make these changes.

If you want to know more about these romance scams, I’ve written several other posts and created several YouTube screencasts. See below. And, thanks for any help you can provide.

The Real Alec Couros

(Digital) Identity in a World that No Longer Forgets

[This post was written jointly with Katia Hildebrandt and also appears on her blog.]

In recent weeks, the topic of digital identity has been at the forefront of our minds. With election campaigns running in both Canada and the United States, we see candidate after candidate’s social media presence being picked apart, with past transgressions dragged into the spotlight for the purposes of public judgement and shaming. The rise of cybervigilantism has led to a rebirth of mob justice: what began with individual situations like the shaming of Justine Sacco has snowballed into entire sites intended to publicize bad online behaviour with the aim of getting people fired. Meanwhile, as the school year kicks into high gear, we are seeing evidence of the growing focus on digital identity among young people, including requests for our interning pre-service teachers to teach lessons about digital citizenship.

All this focus on digital identity raises big questions about societal expectations for digital identity (i.e., that it be sanitized and mistake-free) and about the strategies that are typically used to meet those expectations. When talking to young people about digital identity, a typical approach is to discuss the importance of deleting negative artefacts and replacing them with a trail of positive artefacts that will outweigh these seemingly inevitable liabilities. Thus, digital identity has, in effect, become about gaming search results by flooding the Internet with the desired, palatable “self” so that this performance of identity overtakes all of the others.

But our current strategies for dealing with the idea of digital identity are far from ideal. From a purely practical perspective, it is basically impossible to erase all “negatives” from a digital footprint: the Internet has the memory of an elephant, in a sense, with cached pages, offline archives, and non-compliant international service providers. What’s more, anyone with Internet access can contribute (positively or negatively) to the story that is told about someone online (and while Europe has successfully lobbied Google for the “right to be forgotten” and to have certain results hidden in search, that system only scratches the surface of the larger problem and raises other troubling issues). In most instances, our digital footprints remain in the control of our greater society, and particularly large corporations, to be (re)interpreted, (re)appropriated, and potentially misused by any personal or public interest.

And beyond the practical, there are ethical and philosophical concerns as well. For one thing, if we feel the need to perform a “perfect” identity, we risk silencing non-dominant ideas. A pre-service teacher might be hesitant to discuss “touchy” subjects like racism online, fearing future repercussions from principals or parents. A depressed teenager might fear that discussing her mental health will make her seem weak or “crazy” to potential friends or teachers or employers and thus not get the support she needs. If we become mired in the collapsed context of the Internet and worry that our every digital act might someday be scrutinized by someone, somewhere, the scope of what we can “safely” discuss online is incredibly narrow and limited to the mainstream and inoffensive.

And this view of digital identity also has implications for who is able to say what online. If mistakes are potentially so costly, we must consider who has the power and privilege to take the risk of speaking out against the status quo, and how this might contribute to the further marginalization and silencing of non-dominant groups.

In a world where forgetting is no longer possible, we might instead work towards greater empathy and forgiveness

Our current strategy for dealing with digital identity isn’t working. And while we might in the future have new laws addressing some of these digital complexities (for instance, new laws are currently being proposed around issues of digital legacy) such solutions will never be perfect, and legislative changes are slow. Perhaps, instead, we might accept that the Internet has changed our world in fundamental ways and recognize that our societal mindset around digital missteps must be adjusted in light of this new reality: perhaps, in a world where forgetting is no longer possible, we might instead work towards greater empathy and forgiveness, emphasizing the need for informed judgment rather than snap decisions.

So what might that look like? The transition to a more forgiving (digital) world will no doubt be a slow one, but one important step is making an effort to critically examine digital artefacts before rendering judgment. Below, we list some key points to consider when evaluating problematic posts or other content.

Context/audience matters: We often use the “Grandma rule” as a test for appropriateness, but given the collapsed context of the online world, it may not be possible to participate fully in digital spaces if we adhere to this test. We should ask: What is the (digital) context and intended audience for which the artefact has been shared? For instance, was it originally posted on a work-related platform? Dating site? Forum? News article? Social network? Was the communication appropriate for the platform in which it was originally posted?

Intent matters: We should be cognizant of the replicability of digital artefacts, but we should also be sure to consider intent. We should ask: Was the artefact originally shared privately or anonymously? Was the artefact intended for sharing in the first place? How did the artefact come to be shared widely? Was the artefact made public through illegal or unethical means?

History matters: In face-to-face settings we typically don’t unfriend somebody based on one off-colour remark; rather, we judge character based on a lifetime of interactions. We should apply the same rules when assessing a digital footprint: Does the artefact appear to be a one-time thing, or is it part of a longer pattern of problematic content/behaviour? Has there been a sincere apology, and is there evidence that the person has learned from the incident? How would we react to the incident in person? Would we forever shame the person, or would we resolve the matter through dialogue?

Authorship matters: Generations of children and teenagers have had the luxury of having their childhoods captured only by the occasional photograph, and legal systems are generally set up to expunge most juvenile records. Even this Teenage Bill of Rights from 1945 includes the “right to make mistakes” and the “right to let childhood be forgotten.” We should ask: When was the artefact posted? Are we digging up posts that were made by a child or teenager, or is this a recent event? What level of maturity and professionalism should we have expected from the author at the time of posting?

Empathy matters: Finally, we should remember to exercise empathy and understanding when dealing with digital missteps. We should ask: Does our reaction to the artefact pass the hypocrite test? Have we made similar or equally serious mistakes ourselves but been lucky enough to have them vanish into the (offline) ether? How would we wish our sons, daughters, relatives, or friends to be treated if they made the same mistake? Are the potential consequences of our (collective) reaction reasonable given the size and scope of the incident?

This type of critical examination of online artefacts, taking into consideration intent, context, and circumstance, should certainly be taught and practiced in schools, but it should also be a foundational element of active, critical citizenship as we choose candidates, hire employees, and enter into relationships. As digital worlds signal an end to forgetting, we must decide as a society how we will grapple with digital identities that are formed throughout the lifelong process of maturation and becoming. If we can no longer simply “forgive and forget,” how might we collectively develop a greater sense of digital empathy and understanding?

So what do you think? What key questions might you add to our list? What challenges and opportunities might this emerging framework provide for digital citizenship in schools and in our greater society? We’d love to hear your thoughts.

Developing Upstanders in a Digital World

This past Wednesday, I was fortunate enough to co-facilitate a Day of Pink event for a local school division. We invited more than 100 elementary students (mostly 7th graders) to participate. The themes included (digital) citizenship, (digital) identity, anti-bullying, and becoming an upstander (someone who stands up to support the protection, safety, and wise decision-making of others).

We shared a number of scenarios with the children to help them discuss the importance of empathy and responsibility for others. Some of the topics can be seen as controversial but are increasingly characteristic of the realities that children face today. One particularly authentic and timely scenario was the recent #cuttingforzayn hashtag, which explicitly encouraged self-harm. The hashtag originated after Zayn Malik quit One Direction.

This morning, I received these messages via Twitter:

[Screenshot: Twitter messages from an upstander]

It takes a lot of courage for a child to stand up for another. And I would be naive to think that this act resulted simply from our event. Good parenting, positive role models (including teachers), and life-long experiences encouraging empathy allow kids to do the right thing and to make the world better for themselves and for others. Self-harm, suicide, bullying, and other similar issues are difficult to discuss with preteens and teens. Yet, as illustrated by this example, these conversations are necessary and can make a significant difference in the life of a child.

So please have those difficult conversations with your children or your students and continue to try your best to understand the complex world faced by our kids. Our digital world not only includes the problems and pressures that we had as children, but has also expanded to include a complex web of global and social influences that can pose seemingly unpredictable challenges. Events like this may prove to be a catalyst for change, but not without consistent encouragement and positive growth from home and school.