The two concepts from Week 2 that raised the most questions for me were solutionism and big data.
As a computer science student who deeply believes in the social impact of technology, I found the critique of solutionism in “To Save Everything, Click Here” wounding. Yet it is true that technologists are making heavy decisions, lightened by thoughts of riches and heroism. To what extent should we remove friction and difficulty from the human experience, and at what cost? What parts of our lives are essential to the human experience, and what parts should be improved? Although the excerpt’s presentation of an extreme use case of technology was dramatically effective, it would have greatly benefited from a more nuanced differentiation among problems. Some futurist problems are more serious than others: civic gamification (to what degree does incentivization degrade free will?) and user-centred media (how important is the personal dynamism that ideological discomfort fosters?). “Silicon Valley’s great ameliorative experiment” (Morozov) is powerful only if undertaken with careful consideration of the crucial parts of humanity.
Big data is the most obviously manifest example of precarious solutionism. I read “Black Boxes and Walled Gardens” shortly after attending a thirty-hour bootcamp on algorithmic trading, where I learned how and why hedge funds employ algorithms that consider variables as far-fetched as weather to place high-speed trades, and immediately before listening to an episode of Krista Tippett’s podcast “On Being” called “The Moral Reckoning of Tech” (it’s fantastic, check it out here!). Data, and implicitly technology, is socially positioned as neutral. Yet there are human beings who choose whether to classify the relationship between two variables as causal or merely correlational. Who gets to draw causal relationships? Who controls our data? Who gets to make judgements on how to wield data? What happens when someone like Trump has access to our data? Big data has massive impacts, shifts trillions of dollars in a blink, and rings reminiscent of Big Brother with promises of a Brave New World.
Entrepreneur Anil Dash says in “The Moral Reckoning of Tech” that technologists “bake [their] values into the choices [they] make when [they] design these tools”. Yet, dangerously, there is no ethical or philosophical curriculum for these computer science majors beyond, perhaps, a stray arts and humanities course. Still, these technologists control our expression and our consumption on the tools that they make. For example, Facebook has lately been criticized for fake news. Less consciously malicious, but perhaps even more serious, are the liberal ‘bubbles’ Facebook has been purported to create, which left people sure of Trump’s defeat. Now the question is not whether technologists should allow people “to err, to sin, to do the wrong thing...” (Morozov) without technological amelioration and surveillance, but whether user-centred tools should be constructed with moral algorithms, and what algorithms define those morals.
It is terrifying to ask these questions as an aspiring computer scientist who will likely work a corporate job. It must be even more terrifying for computer scientists who have devoted years to their craft and their products, yet questioning is necessary because of technology’s incredible impact. We must ask, and we must keep asking: what are the responsibilities of technologists? Of humanists? Of humans?