Democracy, data and deception - an EASST2022 experience

This summer, I went to the EASST (European Association for the Study of Science and Technology) conference in Madrid to present my own work on Governance of Digital Practices, but also to learn what the field of European Science and Technology Studies has been doing on the topic. Governance seems to be an evergreen topic, while “the digital” continues to spark interest among researchers. At a conference with almost a thousand participants, there are bound to be more interesting presentations than anyone could hope to attend, let alone cover in a humble blog post. I will therefore summarize three of my favorite paper presentations on the governance of digital practices from this year’s EASST, dealing with the democratization of science funding, how data really works, and invisible humans.

 

New democratic forces in governance of research in the digital century?

When we think about governance, the classic image is of some governmental body – the state, a father figure, perhaps some leviathan. But reality is often more complex than that. The forces shaping our actions may be distributed among several actors, and sometimes they’re abstracted into diffuse or otherworldly entities like “markets” or invisible hands. Chris Hesselbein, an ethnographer at Politecnico di Milano, touched upon this in one of the opening talks at this year’s EASST conference with his presentation on the use of crowdfunding platforms for research funding.

For most researchers, funding typically comes from large institutions. They can be public, private, or a little bit of both. They can be for-profit companies or non-profit foundations. But regardless of their organizational structure, if you want their money, you have to write a research proposal – which, from time to time, ends up being declined. So what do you do when you need money but can’t find anyone to fund your research project? Inspired by the likes of Kickstarter and GoFundMe, academics have turned to crowdfunding to get their projects off the ground or to get the last little injection of liquid assets to get their dissertations done and dusted.

This practice, dubbed academic crowdfunding, has been seen as an innovative new way of generating both income and interest for research. By making bite-sized text proposals and digestible video content, academics hope to reach out to and convince a different crowd than the typical bureaucratic gatekeepers. So what happens when researchers delegate research funding to the “crowd”? Will the most charismatic researchers get the most money? Will only research on popular diseases find funds? Are we witnessing the advent of influencer academia? If we’re to believe Chris Hesselbein’s research, that’s not likely. It turns out that the success rate for academic crowdfunding is pretty low, and that the projects that make it are generally backed by people closely connected to those looking for the money. So those who were hoping for a revolution in science funding schemes would have gone home from the EASST conference disappointed.

What did we get wrong with data governance?

We live in an age where data is seen as the new oil. On the one hand, it is a valuable resource; on the other, it can have adverse effects on our personal and public environments. That’s why it has been so important for the EU’s data strategy to regulate data carefully (read: GDPR), to maximize profit and minimize harm. In her talk on data reuse, Nanna Bonde Thylstrup, associate professor at Copenhagen Business School and author of The Politics of Mass Digitization, challenged the idea that as long as there’s enough ethics, data can travel freely and unproblematically.

According to Thylstrup, the belief at the core of the EU’s data policy is wrong. Looking at the documents coming from Brussels, she found the assumption that data is something that can be separated from the context it was made in – that you can give birth to data and then set it free to live a life of its own. Inspired by Karen Barad’s work on quantum physics, Thylstrup suggested that data is always entangled – entangled with its creators, entangled with its models, entangled in its use. To make data travel freely, we therefore need large architectures to “un-entangle” it. Still, subsequent use will re-entangle the data, but in new relationships.

So by introducing the notion of data entanglement, Thylstrup suggested, we could stop talking about data re-use – as if data were something to merely be used and re-used – and consider the notion of re-entanglement instead. This also means, Thylstrup argued, that we would have to re-evaluate our data policies. If data can’t just be passed around for anyone to use and re-use, but needs work to be entangled and dis-entangled, who does that work, who pays for it, and who benefits? What would policies that take this alternative stance towards data look like, and would they cure some of the ills of the current data economy? The question is left hanging in the air, but the challenge is clear: rethink data and rethink governance.

How digital are our practices anyway?

Automation has promised to make our lives easier and simultaneously threatened to ruin our economies ever since the advent of industrialization. Even though neither the promises nor the threats have materialized over the last hundred years, the rapid development of advanced digital computing over the last decades has brought new hype around both the good and the bad sides of automation.

In mainstream media, ads and industry conferences, digital tools are predicted to do almost everything, from grocery shopping, to raising your kids, to predicting the future. But if we are to believe Lina Franken, Professor of Computational Social Sciences at the University of Munich, this prediction is both misleading and misled. In her presentation on the relationships between humans and machines, she pointed out that even though both humans and machines take part, we tend to forget the humans. This has led to the idea that as technology develops – and humans are taken further out of the loop – everything will go faster. But Franken reminds us that since humans are present in spite of their invisibility, new technologies sometimes lead to a slowing down, as humans need to adapt their practices. Additionally, Franken observed, even as technologies develop, most of the work will still be done by humans.

So why do we think that digital machines will do everything better and faster than us, and almost independently of us? In her studies, Franken found that when products are presented alongside the machines that made them, only the final versions are shown. All the human work that went into tinkering and experimenting with producing the machine and its results is made invisible. In the end, it seems as if the machine did a larger part of the work than it actually did. In this way, it’s easy to forget how dependent we actually are on human labor. But who’s to blame for making humans invisible? Is it the people developing technologies, those who “buy” them, or the technologies themselves? If we want a truer picture of the labor that goes into the development of knowledge and technology, where do we start looking?

How can we use this to think about governance of digital practices?

Using the insights from this year’s EASST conference, what are the main takeaways for how we think about governance of digital practices? If we want to generalize the analyses of Chris Hesselbein, Nanna Bonde Thylstrup and Lina Franken, I would highlight three main points:

1. Large (and rich) institutions still matter
Although there are initiatives to democratize governance, like crowdfunding, things are resistant to change. We therefore have to acknowledge the importance of established institutions and those who represent them, and not get carried away by idealistic representations. Challenging the leviathan might therefore be a greater task than some would expect. 

2. Policies might fail because they’re wrong about the world
There’s nothing spectacular about policies not accomplishing their goals, but where do we look when they fail? Pointing to bureaucratic inefficiencies, corruption or lack of power is a popular pastime for critics, but sometimes governance efforts fail because they are wrong about the nature of what they seek to govern. In particular, the nature of data continues to elude policymakers and scholars alike. If data flows like oil, it can be put in pipelines and sent off to faraway places for consumption and combustion, but what do we do if data is entangled like quantum particles?

3. We should remain critical of our assumptions
Related to the previous point, it is evident that even professional policymakers are prone to overestimating the power of digital technologies. Designing governing mechanisms therefore means we should be wary of accepting presentations of reality at face value. Who materializes your online order at your door? Who keeps your internet experience free from content you don’t want to see? Who navigates for you when you need to go somewhere new? Is it a machine, or someone with rights that should be protected and ensured by governing bodies? Maybe it’s not as simple as we like to think.

Coming back from the heat of Madrid to the cool breeze by the Vienna Danube, I bring home more questions than answers. Sure, I’ve learned a lot about how things are, but, for me, the most important lesson has been to always stay critical and to look beyond the idealistic, taken-for-granted versions of our reality.