Last topic was an eye-opener for me on many aspects I'd never given much thought to, or even known about.
Well, I found the title of the main reading interesting, so that's what I started with (even though I had planned to read other things). I noticed that Caulfield's intros are very engaging and usually start with an interesting story (on another note, maybe he can let me in on how he makes these great analogies? 🙂 ).
When he lists the ESCAPE checklist, I'm immediately reminded of the scientific thinking course I took in my freshman year. I was taught to judge sources according to its six criteria so rigidly that, for a while, I stopped using my common sense while making judgments and only followed that "rubric". I liked the idea of "using the network to check the network" that Caulfield talks about, because it shows that effective fact checkers pull out correct information faster thanks to their ability to open other pages, make comparisons, and verify against a variety of sources (from Wikipedia to Google Scholar). This method, in my view, lets a person refine what they are looking for, and identify the right questions to ask in order to distinguish correct from not-so-correct information.
Reading the four moves of fact checking and source verification was more engaging and made more sense to me than the "ESCAPE" model, which was honestly very bookish and uptight. I found that part to be very useful. The four moves, with a brief note on each (they were made into an infographic too – yay):
- Check for previous work aka find a reputable source
- Go upstream to the source aka dig deeper into the references and links within an article/page
- Read laterally aka after digging to the source of the sources, check reliability, expertise, and agenda
- Circle back (I found it unexpected that this was one of the points as I was reading) aka the moves above don't always do the trick, and we might have to go back to square one, searching again with a different keyword.
The Jennifer Lawrence example was just right on point.
There’s something, however, about recognition that caught my attention. In the example about the Harvard news update, one of the things that helped me identify it as fake was noticing/recognizing that there were too many question marks at the end of the title, which made it look unprofessional, while the picture attached to the article was totally absurd. So here my own recognition played a quick role in at least questioning the information displayed and its authenticity. Maybe recognition in this context means recalling the ESCAPE (or any other) checklist and following it blindly to find out.
I would add here that gut feeling and initial impressions do play a role in shaping how we look at the information and how we go about verifying it.
I then moved on to a video called Rethinking Our Digital Future by Ramesh Srinivasan, who gives a very informative and strong talk.
What was interesting in the first part of his talk is that he shows a map of the fiber optic cables running around the world to keep us connected, and says that these connections are synonymous with economic and political power. He goes on to point out that there are only two cables connecting the two major continents of the global south, South America and Africa. So inequality shows up at the most basic level of data transmission.
The way Facebook wants people to be connected is not neutral, according to Srinivasan, due to the nature of sponsored advertisements. Also, developing countries like Mexico have 90% of their traffic go through Google's, Microsoft's, or Facebook's data centers, so this data is now "owned" by one of these three. And this gives them the kind of power that Chris Gilliard talks about in his post.
Srinivasan also makes a strong point about how these social media platforms reflect their owners'/founders' visions. He gives an example of an AI system that whitened the face of President Obama (meaning that this app's definition of 'beauty' is a white complexion). This very much reminded me of Facebook's "dark posts", as it shows what Zuckerberg's inclinations are. Another story he describes to reflect on biases, in line with Silicon Valley's segregation policies (previously mentioned by Gilliard), is the search results that came up when he was researching Cameroon, the country (mostly US-based fact books and statistics). What is problematic here, in my view, is that a lot of the time we don't notice things like this; we take for granted whatever results come up, without reflecting on the fact that they might be biased, or have inclinations, or that search results might sometimes direct us into areas we didn't want to get into in the first place.
The next thing he talks about is ontology, which links in greatly with Chimamanda's video on the danger of a single story. He explains that the way similar things are expressed in different languages and cultures, and the differences in explaining something even within the same language, are the essence of diversity, and that this essence should be appreciated and recognized on all digital platforms, rather than creating systems that claim to be neutral. This idea entails talking, sharing, and empathizing with diverse people, while understanding that their world might revolve around different coordinates, and that their culture and traditions may be based on history, geography, and stories that may not be translatable that simply. So systems need to be built in a way where their culture and nature can be showcased rather than flattened into English words and literal translations, and this could be achieved through a collective practice of engaging with technology. What made me think of the single story in this context is the moment when Chimamanda's roommate was surprised to learn that she spoke English well; Silicon Valley and the people who run social media platforms often have a single story and feed it to us daily, with the notion that it is the only 'neutral' story out there.
The last thing he talks about links in with what I said in my previous post about culture of use, and he seems to have put it far better: power over our identity in a system that takes in our information and algorithmizes it. We have power over what we show.
The third reading is titled The Digital Poorhouse, which talks about how algorithms and data collection techniques ensure that minorities remain minorities and stay marginalized in the eyes of other users too. This happens when surveillance on minority groups is increased and data collection becomes more intense. What makes this even more of a problem is that sometimes the information collected is disclosed to the government, corporations, and even the public! This links to what I said in my previous post about the government arresting political figures over what they tweet/post, except in this case Virginia Eubanks remarks that even their slightest moves and transactions are monitored and scrutinized. (Goodness!)
I couldn’t not cite her on this:
Automated eligibility systems in Medicaid, TANF, and the Supplemental Nutrition Assistance Program discourage families from claiming benefits that they are entitled to and deserve. Predictive models in child welfare deem struggling parents to be risky and problematic. Coordinated entry systems, which match the most vulnerable unhoused people to available resources, collect personal information without adequate safeguards in place for privacy or data security.
These systems are being integrated into human and social services at a breathtaking pace, with little or no discussion about their impacts. Technology boosters rationalize the automation of decision-making in public services—they say we will be able to do more with less and get help to those who really need it. But programs that serve the poor are as unpopular as they have ever been. This is not a coincidence: technologies of poverty management are not neutral. They are shaped by our nation’s fear of economic insecurity and hatred of the poor.
What is appalling is that things like these are introduced in a so-called 'democratic' country, and into institutions that affect people's daily lives, like healthcare. I think this form of pervasive surveillance and inhumane treatment of the poor, in terms of punitive policies, is what brings about terrorism, crime, and lunatic behavior. It pushes these people far beyond their capacity for tolerance. They are put in digital poorhouses that are, in my view, polarization by design, meant to drive the poor apart even though they share the same sufferings, because they target individuals and small microgroups and tailor control accordingly.
Automated decision-making now denies assistance applications and refuses to provide certain sectors of society the help they need and deserve. With decisions made by machines designed by engineers and data analysts, judging the grey areas is impossible, and this transfer of discretion can actually bring about more discrimination and bias, because these systems identify patterns in past discriminatory decisions and base their algorithms on them. And this goes back to the segregated nature of the tech industry, and its political agendas and inclinations; this in turn reflects how they want to push the poor down and keep the wealthy up: polarize.
On another note, I went on Twitter and picked out a few things to read, also retweeted a couple of things:
– If news outlets and researchers can pull out fake news, then why can’t tech companies?
– This thread was interesting, but I couldn’t quite capture the message it was trying to convey. Is it that hearing “both sides” already assumes that there are sides, and that they are opposing in nature?
– Attempts to answer Bonnie Stewart’s question (this was challenging though)
– Live example of segregation