5 Activities for Cultivating Tool Criticism Thinking
by Karin van Es
The incorporation of computational tools in humanities research requires that we think critically about how these tools impact knowledge production. These tools, often adopted from the empirical sciences, have profound implications for data processing and interpretation. It is not surprising, then, that David Berry (2012) has called for a ‘third wave’ in the digital humanities, one that makes the underlying digital component the object of scrutiny. Engaging with Berry, Rieder and Röhle (2017) have pointed to the need to develop knowledge about the procedures expressed in the code of our computational tools. They are specifically concerned with the concepts and knowledges mobilized by what they refer to as our ‘digital helpers.’ Investigating the epistemic effects of our computational tools requires expanding our engagement with digital methods to digital methodology and incorporating what we call tool criticism. Tool criticism is
the critical inquiry of knowledge technologies considered or used for various purposes. It reviews the qualities of the tool in light of, for instance, research activities and reflects on how the tool (e.g. its data source, working mechanisms, anticipated use, interface, and embedded assumptions) affects the user, research process and output. (Van Es et al. 2018: 26)
Committed to the doctrine of knowledge as situated, we as humanists need to raise, when using computational tools, a series of questions about the computational transformations executed in the process. Humanists, it could be argued, are particularly suited for this contemporary challenge, as it invites us to apply our hermeneutic skills to our workflow (Dobson 2019). Our critical interpretive work is needed for a better understanding of the stakes of the choices and procedures followed in humanistic data research. Although the issue is gaining traction in academia, an umbrella term that unites scholarly work on the topic is still lacking. In an effort to bring the digital humanities into dialogue with fields such as software studies and critical data studies, we, together with colleagues from the Data School, have been promoting the term tool criticism.
In order to cultivate tool criticism thinking, I have (often in dialogue with Maranke Wieringa) created a couple of exercises which I employ in my teaching. While by no means a finished and well-rounded set of activities, I do think they lay the groundwork for the kind of questioning that needs to be at the heart of tool criticism. While other kinds of skills and knowledges are important and relevant (e.g. coding, statistics, network theory), the activities I discuss below provide a low-barrier means to instill a critical attitude towards the computational tools and technologies we employ.
1. Data walking
In this walkshop, participants learn that for data to exist, they first need to be imagined. The walkshop engages with the notion that knowledge is situated. Careful observation and reflection, in combination with group conversations, help to provoke questions about the social, ethical, and political implications of datafication.
We use the Powell Datawalkshop Process (see http://www.datawalking.org/) with some slight adjustments in the roles: the photographer becomes a documenter (not limited to taking photographs, but rather collecting sound, wifi signals, etc.), and we replace the role of the collector with that of a director (someone who brings discussions back to the point and keeps track of time). Moreover, we don’t define data with the group beforehand, but do so afterwards. Fifteen minutes into the walk, each group is asked to define data. After 30 minutes, they are asked to formulate a research question in relation to their chosen theme. They use this question to guide their observation for the remainder of the walk.
2. Sorting things out
The aim of this exercise is to explore the politics of classification, think through some of the interpretive choices made by those working with data, and reflect on the affordances of the software.
Work through the Excel Pet Name data cleaning tutorial. What classifications and standards are employed? What is made visible (and invisible) in this system? To what end? And with what consequences? Think about what breaks the system.
Incorporate, amongst others, Bowker, Geoffrey C. and Susan Leigh Star. 1999. “Some Tricks of the Trade in Analyzing Classification.” In Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press. 33-50.
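To make these questions concrete, consider a minimal sketch (in Python, with hypothetical names and rules rather than the tutorial’s actual data) of what a cleaning step does: every line encodes a decision about which variation counts as noise and which as meaning.

```python
# Hypothetical pet-name cleaning step: each rule is an interpretive
# choice about what counts as the "same" name.
raw_names = ["Max", "max ", "MAX!!", "Maximus", "Mr. Max", "Máx", "N/A"]

def clean(name):
    name = name.strip().rstrip("!").title()  # choice: case and punctuation are noise
    if name in {"N/A", ""}:                  # choice: missing entries are discarded
        return None
    return name

print([clean(n) for n in raw_names])
# ['Max', 'Max', 'Max', 'Maximus', 'Mr. Max', 'Máx', None]
```

Here ‘max ’ and ‘MAX!!’ are folded into ‘Max’, while ‘Máx’ and ‘Mr. Max’ survive as distinct categories: the standard makes some variation invisible and leaves other variation intact, and an unexpected value like ‘N/A’ shows where the system breaks.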
3. Unpacking data visualization
The aim is to tackle the idea that data are in fact capta – taken, not given. The exercise involves reflecting on how visualizations are situated and partial, and how knowledge is constructed.
Select a data visualization from an (online) newspaper, or select an everyday interface. Analyze its representational logics and politics. What ‘interpretive acts’ have been made in the construction of the data visualization/interface? How is it necessarily partial?
Incorporate, amongst others, Drucker, Johanna. 2011. “Humanities Approaches to Graphical Display.” Digital Humanities Quarterly 5 (1). http://www.digitalhumanities.org/dhq/vol/5/1/000091/000091.html
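As a small illustration of how such interpretive acts work, the following sketch (hypothetical numbers, using Python’s matplotlib) plots one and the same series twice; only the y-axis range differs, yet the visual argument changes entirely.

```python
import matplotlib.pyplot as plt

# The same hypothetical series, framed by two different axis choices.
years = [2015, 2016, 2017, 2018, 2019]
values = [50.2, 50.9, 51.4, 52.0, 52.3]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(years, values, marker="o")
ax1.set_ylim(0, 60)        # interpretive act: zero baseline suggests stability
ax1.set_title("Hardly any change")
ax2.plot(years, values, marker="o")
ax2.set_ylim(50, 52.5)     # interpretive act: cropped axis suggests steep growth
ax2.set_title("Dramatic growth")
plt.tight_layout()
plt.show()
```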
4. Embedded bias/values in algorithms
The aim of this assignment is to engage with the models underlying algorithms and to consider their biases.
Select and research an algorithm. You can pick one from everyday life (e.g. for search, recommendation, or newsfeeds), academic research (e.g. Google search, the ForceAtlas2 layout) or government systems (e.g. SyRI, crime prediction). Although you can’t open the black box of the algorithm, you can come to understand the basic logics or principles of its functioning. Discuss these and reflect on the biases/values embedded in how the algorithm is designed to work.
Incorporate, amongst others, O’Neil, Cathy. 2016. “Bomb Parts: What is a Model?” In Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. UK: Penguin. 15-32.
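A toy scoring function, in the spirit of O’Neil’s argument that a model is an opinion embedded in mathematics, can make this tangible. The sketch below is entirely hypothetical (the weights, features, and zip codes are invented for illustration), but it shows how value judgments end up as code.

```python
# Hypothetical risk-scoring "model": every weight is a design decision,
# not a neutral fact about the applicant.
HIGH_RISK_ZIPS = {"10451", "60621"}  # invented; such lists often proxy for race/class

def risk_score(applicant):
    score = 2.0 * applicant["missed_payments"]   # choice: past behaviour dominates
    score += 1.5 if applicant["zip_code"] in HIGH_RISK_ZIPS else 0.0
    score -= 0.5 * applicant["years_employed"]   # choice: stability is rewarded
    return score

print(risk_score({"missed_payments": 1, "zip_code": "60621", "years_employed": 3}))
# 2.0 -- the output looks objective, but the biases were chosen at design time
```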
5. Playable datasets workshop (with Stefan Werning)
First, the workshop aims to show how changing the game rules produces and exposes different kinds of bias when interpreting data through the ‘lens’ of the respective game. Second, it experiments with the ‘materiality’ of cards as tools and interfaces in order to reassess the material affordances, and the corresponding epistemic implications, of tools in more established computational methods and traditions.
During the workshop, participants play and co-create card games as a medium to explore small- to medium-sized datasets. We provide a sample game that serves as a starting point for subsequent modifications; the combination of game mechanics in this context operates similarly to a layout algorithm like ForceAtlas2 (Jacomy et al. 2014), i.e. mechanics need to fit the structure of the data at hand, but also distinctly (re)frame the types of insights that may be derived from the dataset (see https://www.slideshare.net/stefanwerning/making-data-playable-workshop-slides).
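For readers unfamiliar with what a layout algorithm ‘does’ to a dataset, the sketch below implements only the bare attraction/repulsion principle that force-directed layouts such as ForceAtlas2 build on (it is not the published algorithm, and the network and constants are invented). Changing the two constants visibly reframes the resulting map, much like changing the rules of the card game.

```python
import math, random

# Bare force-directed layout: all nodes repel, linked nodes attract.
edges = {(0, 1), (1, 2), (2, 0), (2, 3)}  # hypothetical tiny network
pos = {n: [random.random(), random.random()] for n in range(4)}

REPULSION, ATTRACTION = 0.01, 0.05        # tuning choices that reshape the picture

for _ in range(200):
    for a in pos:
        fx = fy = 0.0
        for b in pos:
            if a == b:
                continue
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            dist = max(math.hypot(dx, dy), 0.05)
            fx += REPULSION * dx / dist ** 2   # every pair pushes apart
            fy += REPULSION * dy / dist ** 2
            if (a, b) in edges or (b, a) in edges:
                fx -= ATTRACTION * dx          # edges pull together
                fy -= ATTRACTION * dy
        pos[a][0] += fx
        pos[a][1] += fy

print({n: (round(x, 2), round(y, 2)) for n, (x, y) in pos.items()})
```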
References
- Berry, David M. 2012. “Introduction: Understanding the Digital Humanities.” In Understanding Digital Humanities. London: Palgrave Macmillan. 1-20.
- Dobson, James. 2019. Critical Digital Humanities: The Search for a Methodology. Urbana: University of Illinois Press.
- Van Es, Karin, Maranke Wieringa and Mirko Tobias Schäfer. 2018. “Tool Criticism: From Digital Methods to Digital Methodology.” International Conference on Web Studies (WS.2 2018), Paris, October 3–5, 2018.
- Galloway, Alexander. 2012. The Interface Effect. Cambridge: Polity Press.
- Rieder, Bernhard and Theo Röhle. 2017. “Digital Methods: From Challenges to Bildung.” In The Datafied Society: Studying Culture through Data. Eds. Mirko Tobias Schäfer and Karin van Es. Amsterdam: Amsterdam University Press. 109-124.