Engagement in a time of polarization: Topic 3

Last topic was an eye-opener for me on many aspects that I'd never given much thought to, or even known about.

Well, I found the title of the main reading interesting, and that's what I started with (even though I had planned to read other things). I noticed that Caulfield's intros are very engaging and usually start with an interesting story. (On another note, maybe he can let me in on how he makes these great analogies? 🙂)

When he lists the ESCAPE checklist, I'm immediately reminded of the scientific thinking course I took in my freshman year. I was taught to judge sources according to the six criteria so rigorously that, for a while, I stopped using my common sense when making judgments and only followed that "rubric". I liked the concept of "using the network to check the network" that Caulfield talks about, because it shows that effective "fact-checkers" pull out correct information faster thanks to their ability to open other pages, make comparisons, and verify with a variety of sources (from Wikipedia to Google Scholar). This method, in my view, allows a person to refine what they are looking for and to identify the right questions to ask in order to distinguish correct from not-so-correct information.

Reading the four moves of fact-checking and source verification was more engaging and made more sense to me than the ESCAPE model, which was honestly very bookish and uptight. I found that part very useful. The four moves, briefly (they were made into an infographic too – yay):

  • Check for previous work, aka find a reputable source that has already covered the claim
  • Go upstream to the source, aka dig deeper into the references and links within an article/page
  • Read laterally, aka after digging down to the source of the sources, check its reliability, expertise, and agenda
  • Circle back (I found it unexpected that this was one of the points), aka the moves above don't always do the trick, and we might have to go back to square one, searching again with a different keyword

The Jennifer Lawrence example was just right on point.

There's something about recognition, however, that caught my attention. In the example about the Harvard news update, one of the things that helped me identify it as fake was noticing/recognizing that there were too many question marks at the end of the title, which made it look unprofessional, while the picture attached to the article was totally absurd. So here my own recognition played a quick role in at least questioning the displayed information and its authenticity. Maybe recognition in this context means recalling the ESCAPE checklist (or any other list) and following it blindly.
I would add here that gut feeling and initial impressions do play a role in shaping how we look at information and how we go about verifying it.

I then moved on to a video called Rethinking Our Digital Future by Ramesh Srinivasan, who gives a very informative and strong talk.

What was interesting in the first part of his talk is that he shows a map of the fiber-optic cables running around the world to keep us connected, and says that these connections are synonymous with economic and political power. He gives an example: there are only two cables connecting the two major continents of the global south, South America and Africa. So inequality shows up at the most basic level of data transmission.

The way Facebook wants people to be connected is not neutral, according to Srinivasan, due to the nature of sponsored advertisements. Developing countries like Mexico also have 90% of their traffic go through Google, Microsoft, or Facebook data centers, so this data is now "owned" by one of these three, and that gives them the kind of power Chris Gilliard talks about in his post.

Srinivasan also makes a strong point about how these social media platforms reflect their owners'/founders' visions. He gives an example of an AI system that whitened the face of President Obama (meaning that this app's definition of "beauty" is having a white complexion). This very much reminded me of Facebook's "dark posts", as it shows what Zuckerberg's inclinations are. Another story he describes to reflect on biases, one in line with Silicon Valley's segregation policies (previously mentioned by Gilliard), is the search results that came up when he was researching Cameroon, the country (which were mostly US-based fact books and statistics). What is problematic here, in my view, is that a lot of the time we don't notice things like this; we take for granted whatever results come up, without reflecting on the fact that they might be biased or have inclinations, or that search results might direct us into areas we didn't want to get into in the first place.

The next thing he talks about is ontology, which links in greatly with Chimamanda's video on the danger of a single story. He explains that how similar things are expressed in different languages and cultures, and the differences in explaining something even within the same language, are the essence of diversity, and that this essence should be appreciated and recognized on all digital platforms, rather than creating systems that claim to be neutral. This idea entails talking, sharing, and empathizing with diverse people, while understanding that their world might revolve around different coordinates, and that their culture and traditions may be based on history, geography, and stories that just might not translate that simply. So systems need to be built in a way where their culture and nature can be showcased rather than flattened into English words and literal translations, and this could be achieved through a collective practice of engaging with technology. What made me think of the single story in this context was Chimamanda's roommate, who was surprised to learn that she spoke English well: Silicon Valley and the people who run social media platforms often have a single story and feed that story to us daily, with the notion that it is the only "neutral" story out there.

The last thing he talks about links in with what I said in my previous post about culture of use, and he seems to have put it way better: power over our identity in a system that takes in our information and feeds it to algorithms. We have power over what we show.

The third reading is titled The Digital Poorhouse, which talks about how algorithms and data-collection techniques ensure that minorities remain minorities and stay marginalized in the eyes of other users too. This happens when surveillance of minority groups is increased and data collection is more intense. What makes this even more of a problem is that sometimes the information collected is disclosed to the government, corporations, and even the public! This links in with what I said in the previous post about the government arresting political figures over what they tweet/post, except in this case Virginia Eubanks remarks that even their slightest moves and transactions are monitored and scrutinized. (Goodness!)

I couldn’t not cite her on this:

Automated eligibility systems in Medicaid, TANF, and the Supplemental Nutrition Assistance Program discourage families from claiming benefits that they are entitled to and deserve. Predictive models in child welfare deem struggling parents to be risky and problematic. Coordinated entry systems, which match the most vulnerable unhoused people to available resources, collect personal information without adequate safeguards in place for privacy or data security.

These systems are being integrated into human and social services at a breathtaking pace, with little or no discussion about their impacts. Technology boosters rationalize the automation of decision-making in public services—they say we will be able to do more with less and get help to those who really need it. But programs that serve the poor are as unpopular as they have ever been. This is not a coincidence: technologies of poverty management are not neutral. They are shaped by our nation’s fear of economic insecurity and hatred of the poor.

What is appalling is that things like these are introduced in a so-called "democratic" country, and into institutions that affect people's daily lives, like healthcare. I think this form of pervasive surveillance and inhumane, punitive treatment of the poor is what brings about terrorism, crime, and lunatic behavior: it pushes these people far beyond their capacity for tolerance. They are put in digital poorhouses that are, in my view, polarization by design, meant to drive the poor apart even though they share the same sufferings, because the system targets individuals and small microgroups and tailors control accordingly.

Automated decision-making now denies assistance applications and refuses to provide certain sectors of society the help they need and deserve. With decisions made by machines designed by engineers and data analysts, judging the grey areas is impossible, and this transfer of discretion can actually bring about more discrimination and bias, because these systems identify patterns of discriminatory decisions and base their algorithms on them. This goes back to the segregated nature of people in the tech industry, and their political agendas and inclinations; and this in turn reflects how they want to push the poor down and keep the wealthy up – polarize.

On another note, I went on Twitter and picked out a few things to read, also retweeted a couple of things:
– If news outlets and researchers can pull out fake news, then why can't tech companies?
– This thread was interesting, but I couldn't capture what message it was trying to convey – is it that hearing "both sides" already assumes that there are sides and that they are opposing in nature?
– Attempts to answer Bonnie Stewart’s question (this was challenging though)
– Live example of segregation


Engagement in a time of polarization: Topic 2

This is the first time I'm enrolled in a MOOC. I think one of the main reasons I slack off with these things – even though they are very useful – is that I sometimes find committing to timings and assignments hard to juggle with the things I ought to be doing. In this course (Digital Literacy), though, I'm supposed to be enrolled in the MOOC, so yay for me: I get to do something I've always wanted to do.

The first thing I attempted to do was link the title of the MOOC with the course I'm currently taking, and I've come to the conclusion that, since this is a digital literacy course, the MOOC will talk about technology as one of the sources of polarization and how engagement can happen despite that. This links in with what we do in class, in the sense that a lot of the time we talk about how we're different, but also about how these different issues can bring us together or cause us to feel certain things.

I recently came to like Chris Gilliard through the Twitter activity we did a couple of classes ago, so when he was featured in Topic 2, I knew I was going to enjoy the readings and related topics in that module. I started with the Power, Polarization and Tech reading. A few things got me in this reading.

He talks about the "marketplace of ideas" existing in Silicon Valley, which, according to him, is a lie fed by the tech people there. Being in the tech field myself, I want to question this. It takes a whole lot of research and investment to bring forth ideas in the tech industry, some of which have tremendous benefit. Yes, I agree that the extraction of data plays a pivotal role in the design and management of the apps used daily, but it is not the only reason these companies sit for years and spend tonnes of money.

I didn't follow through with the part about cyberlibertarianism, as I didn't really understand what it meant or what message it was passing across.

The promotion of polarization does not, in my opinion, come from social media apps, as Gilliard says. I agree that people are drawn to social media spaces because they can emphasize that they are different, special maybe. But what leads to people ending up on opposing ends rather than united lies in the culture of use of these social media platforms. And the culture of use is not dictated by people like Mark Zuckerberg or Jack Dorsey; it is dictated by us as users. For example: Egypt is a very diverse country, and different people use Facebook differently and for different purposes. A feature like Groups and Pages brings together people who share a common liking. Whether people write hate comments or give each other useful advice is shaped by the way they use Facebook. When there's a political or global issue happening in the world, a lot of people express their solidarity by changing their profile pictures.

There is a concentration of power in holders of data like Zuckerberg and Dorsey, and this is a powerful argument made by Gilliard, because with this tremendous amount of information they can do pretty much anything. (Mental question: Do privacy policies and agreements prevent the people in power from misusing data?)

Gilliard takes another turn in his article by highlighting how Silicon Valley was built on the basis of segregation. But it's also important to note that America's social and political climate throughout the years has been based on segregation, and Silicon Valley is just one example. I'm not justifying it in any way, rather putting the situation into context. I can also add here that, given this argument, it explains how when a minority writes anything that upsets the power structure, it is immediately deleted/banned. But minorities are different in every place. Does this mean that specific bands of minorities have fewer privileges than others?

Another interesting concept I learned is "polarization by design". This means that people like Zuckerberg work in collaboration with political campaigns or people in power to produce "dark posts" – posts that can only be seen by the people targeted.

Taking Gilliard's words about polarization and its tight relationship with power and segregation, I think an example of what he writes about exists in Egypt too, not just the States: the Egyptian government has arrested many activists, journalists, and young people, and many of these arrests were based on the people's Facebook posts and/or tweets. While Facebook claims to be a space where opinions are openly voiced, it appears that powerful forces within the country can access specific people's writing and take aggressive action based on information that Facebook gathers and claims to keep private and safe.

One of the most powerful statements in his article:

Like any abstraction, “Polarization” is fraught with meanings, but in this case, they are about class, poverty, race, gender, sexuality, technology, and power. These structures are filled with concrete instances from culture: content farms spewing out propaganda, police “heat maps” and the placement of cell site simulators in black communities, extractive platforms that benefit from the “engagement” of pitting one group against another, and the other hundreds of outrageous intrusions on our private and social lives that are first and foremost in the service of power. Digital Polarization is the technological mask for the age-old scheme of atomizing populations while making sure the powerful stay on top.

Then I watched the video where Amanda Martinez talks about fake news. It was interesting to learn that fake news takes on many forms and levels, and that the way information is presented can influence the way people feel about and interpret it.

This links in well with Chris Gilliard's text because, in one way or another, social media tech influences what we see on our homepages based on the data collected about what we share, like, and interact with. So a lot of the time we see what we want to see, or what we believe to be "right". What Martinez also hints at, and which is similar to what I wrote earlier, is the audience's critical view of information and their ability to identify the right place for the type of information they need. I mentioned earlier that the culture of use of social media apps is not dictated by their founders but by us. So if there's a common understanding that interaction on social media platforms involves respect for other opinions, respect in how we present information, and critical thinking towards what is being presented (and who is presenting it and how), then part of the issue might be resolved.

Later on, I also read an article by Michael Caulfield which starts with an interesting analogy between cleaning up the environment and cleaning up the information environment. I was appalled by the screenshots Caulfield showed in the article, and by how shallow/commercial Google results can be. The impact of these results is not minor. (Mental question: Is there a reason weak websites top search engines other than money?)

I think cleaning the information environment, as proposed by Caulfield, is an idea that should be implemented worldwide and supported by educational, scientific, and academic institutions – at least in the areas where the cleaning is possible. Wikipedia, Google sites, blogs, and informative videos are all our responsibility if we want to spread high-quality information that answers people's genuine inquiries. Along with that, awareness of how to critically assess information's credibility in terms of source, content, and medium, as mentioned by Martinez, should be raised.
Another point I would add here is developing students' search skills in a way that facilitates the appearance of valuable, better results – in terms of search keywords, for example. (Mental question: how easy is it to reach the top of Google's results when the topic is not so niche?)

Another observation that might not be very related to any of this: I really liked the digital magazine and the way it was laid out. I think it would be nice to try software that helps with designing things like this.

I didn't exactly engage on Twitter or in the discussion forums, but what I did was pick out a few things that Chris Gilliard or Bonnie Stewart were retweeting, read them, and post some thoughts about them using the hashtag.

The first was about a fitness tracker for the brain developed by MIT. (Impressive stuff)
Another was about a wife describing a t-shirt to her husband over the phone and sending him a picture, only for him to find an ad for that t-shirt on his Shazam.
And the stories go on: two people talking about buying snack bars, then finding ads for the bars on Twitter (C.R.E.E.P.Y).
I also responded to one of Bonnie Stewart’s questions.

There's this one thread on Twitter that I would love to know more about, but I'm finding it hard to navigate (ironically, because I'm not good with Twitter). It caught my attention because I rely on IEEE daily – it's almost like my Google search – so I want to understand what's causing the problem.