I was doing a web search for hits that contain both pluralism and feminism and found this article by Esther Turnhout, which is adapted from her inaugural lecture as Chair of Science, Technology and Society (STS) at the University of Twente, A better knowledge is possible: Transforming environmental science for justice and pluralism, so I read the lecture instead. It explains a lot of the concerns I have, and I mostly enjoyed reading it. Here are some bits I highlighted, and then some thoughts about how this relates to my work going forward.
As always, this post is perpetually in draft.
TL;DR
The lecture was not without frustration, so I wrote a tongue-in-cheek comment:
Scientists seem to be very solution-oriented, without adequately investigating whether this is the most appropriate problem to solve.
Engineers need the lowest cost, lowest risk solution. Deliver to a brief and don’t think about it too hard. Let your actions do the talking.
Sociologists want to understand the root cause of the problem, which frustrates me as a scientist-engineer because I ask, OK, then what? What do we do about it? Which is frustrating to sociologists because you can’t ask what to do when you don’t even acknowledge or understand what area to do it in yet.
How do we resolve this tension?
Notes from the lecture:
page 6: “nature inclusive alternatives are in fact possible” I just want to state explicitly that I agree with this. Better alternatives are possible and feasible.
page 6: “the domination of people and nature operates not just through physical or material processes, but also through ideas.” I also agree with this. We are in a battle for ideas, and this links to her mention of story later in the lecture and my desire to build an emergent game.
Then there are a few pages outlining the problem of science and knowledge.
page 8: “taking the planet as the object of knowledge production risks injustice.” I think this relates to our approach of “mining for knowledge” as a process of extraction. Science is not neutral. I have had many arguments over this. I think beer was even sloshed across faces once. Scientists, or people following the scientific method, really don’t want to hear that they are not neutral, which, I think, also implies not morally superior. (Yes, I said it.)
page 9: “A second problem is that sustainability science operates on a dangerous illusion of neutrality. Many see neutrality as indispensable for the production of truth, but it is not just unattainable, it is actually harmful.”
But I also have a bone to pick here, as she writes, also on page 9: “It is well known that the carbon removal technologies that Steinberger refers to, including direct air capture, carbon storage, or massive tree planting are extremely unlikely to ever be effective at sufficient scale”. The thing is, those “nature inclusive alternatives that are in fact possible” are pretty much in the same boat – perhaps not technically, perhaps only perceptually, but they face the same challenge. We have to acknowledge that.
page 9: “A third problem is the imperialist hubris of much of sustainability science.” I understand this as, now that our rich white world is in trouble, now it is a problem. Wrecking the whole rest of the planet’s worlds, entire communities, whole tribes sold into slavery, etc etc etc, that was fine. My first impulse is to say, yeah well we didn’t think of it. Crap and all but can we move forward? And the lecture responds that the really crap thing is that we have to now acknowledge we ain’t saviours after all. This is why I hate any “save the planet” slogans. You want to save anything? Then leave, that’d be good enough. What do we really want to do? And that needs reflection… because the honest answer is most often not a nice one.
Quoting Kathryn Yusoff:
“[‘Saving the planet’] indicates a desire to overcome coloniality without a corresponding relinquishing of power. The responsibility for the world is articulated anew as the white man’s burden – a paternalism that is tied to a redemptive narrative of saving the world from harm on account of others.”
page 11: “The subsequent calls for societal change [by scientific expertise] that they make are stunted due to a combination of problematic scientific orthodoxy and political naivety. As a result of a misguided idea of neutrality and a fear of being seen as political, they continue to produce knowledge that lacks actionability, reproduces the status quo, and ignores power, including power – and imperialist and colonial tendencies – within science itself.”
This is why I want to consider distributed contributor communities like OpenStreetMap, but also consider that the same problematic scientific orthodoxy and political naivety may exist therein, and then, importantly, not stop there, but work towards new knowledge platforms that at least start addressing this, somehow.
page 12: the term “citizen science” is addressed in terms of both its good participatory potential and the increasing risk of “citizen science benefit[ing] science because it forms a low-cost tool to increase data collection and analysis capacity”, and the lecture unpacks some causes and drivers for this.
page 13 makes the bold statement that science is an obstacle for transformation, and I shudder and agree all at once. Trauma. This is the source of my trauma.
“These problems persist because of a general lack of recognition of the inevitability and consequences of framing. Clearly, facts do not and cannot speak for themselves. Rather, facts and values entwine in frames. Since framing is inevitable, it is impossible to disentangle facts and values. These frames have consequences; they define not only what the problem is, including what items the problem consists of and how they are related, but also what solutions are possible and rational, and what knowledge is relevant.”
I really struggle with this. A huge part of scientific training is about framing your project adequately; I’d say it can take up half the project. And then at the same time we as scientists are also quick to say that facts speak for themselves. Pick a lane! We do need framings, we do need to figure out a way to break things down into bite-size chunks so we can get the job done, but we also need to acknowledge the limits of those particular chunks of work. I have spent the last few years tying myself in knots over this as I venture into how to address this big problem of … uh … knowledge transformation, because I know very well how a slight shift of framing turns a project on its head. I mean, just last week I had an existential crisis because I shifted my major lens from “collaboration” to “reflection”, and while all the references and topic headings are still relevant, none of the words are right anymore. It’s a mess. Bizarrely, linked data – yes, that’s part of technology – could help here, to tie tiny bits of explicit knowledge together in order to create a fuzzy, tacit collection of wholes. Or holes. Or webs. Or something.
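To make that linked-data hunch slightly more concrete, here is a minimal sketch in Python using rdflib. Everything in it is hypothetical and my own – the example URIs, the framedBy predicate, the claims themselves – none of it comes from the lecture. It only illustrates the idea that frames could become explicit, queryable metadata attached to tiny bits of knowledge, so that shifting a lens from “collaboration” to “reflection” becomes a query over the web rather than a rewrite of everything.

```python
# A minimal sketch, assuming rdflib is installed (pip install rdflib).
# All URIs, predicates, and claims below are made up for illustration.
from rdflib import Graph, Literal, Namespace, RDFS

EX = Namespace("http://example.org/knowledge/")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Tiny bits of explicit knowledge, each a labelled node...
g.add((EX.claim_osm, RDFS.label,
       Literal("Distributed contributor communities like OpenStreetMap work.")))
g.add((EX.claim_framing, RDFS.label,
       Literal("A slight shift of framing turns a project on its head.")))

# ...tied to the frames (lenses) under which they are relevant.
g.add((EX.claim_osm, EX.framedBy, EX.collaboration))
g.add((EX.claim_osm, EX.framedBy, EX.reflection))
g.add((EX.claim_framing, EX.framedBy, EX.reflection))

# Shifting the major lens is then a query over the web of claims,
# not a rewrite of the whole document.
for claim in g.subjects(predicate=EX.framedBy, object=EX.reflection):
    print(g.value(claim, RDFS.label))
```

Whether RDF (or rdflib) is the right tool is beside the point; the point is that the frame stops being implicit in the prose and becomes something the collection of claims can be filtered and recombined by.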
Up to here I’ve been oscillating between fist-pumps of yes, that is exactly the issue, and poorly disguised eye-rolls of, yeah, but, like, whatever. This part is where I got a bit annoyed though, and where I wrote the tongue-in-cheek bit that asks “OK, then what? What do we do about it?”
When I read (page 14) “But it would be a mistake to take such a superficial notion of effectiveness as a sign of success.” I want to yell, there is just no pleasing you!! WTF?! I bust my ass to make things better and you call it superficial and unsuccessful? Fuck you!
But when I calm down, and read “Common frames might provide the glue for connecting specific actors in science and society, but they do so by excluding other perspectives, values, and forms of knowledge.” I agree. This was why I was doing a web search for pluralism. We can’t do the “all on the same page” thing anymore. This is the thing where we start with something amazing, and then, through all that finding of common frames and common ground and common budget and whatever, we get a watered-down thing that, just, is a waste. It’s that design-by-committee thing on steroids. I don’t know what else to do, but the distributed contributor communities – the OpenStreetMap, Linux, and Wikipedia projects – nag at my mind. They work. They’re not perfect, by a long way, but there’s a magic there. What can we learn and improve at increasing orders of complexity? (By the way, this is what I call the metaverse.)
“The blindness created by the false ideal of neutrality allows science to continue to reinforce dominant values, interests, and knowledge systems, because questioning …”
This next bit is what would make me leave the room: “Unfortunately, many scientists fail to fully appreciate their complicity in these bad outcomes and take responsibility for them. They don’t think that this is their problem. This is one reason why the scientific community continues to resist proposals to transform science.” (The footnote says other reasons are more mundane and related to a fear for the loss of authority, funding, or careers.) Excuse me? EXCUSE ME? This is probably how men feel when we call them trash and rich and they go “not all men”. Some scientists are dicks, yes. Most of us are holding on by a thread. And we’re ignorant. YES, yes we are. But this is also incredibly hard, and social science language is all but indecipherable. I have been reading up about stuff in the library since 2007, the start of my PhD, feeling something is very wrong and not being able to say what it is, and I only really started getting a handle on it in 2017. I have been actively trying and it’s taken me more than 10 years, in which I have basically destroyed my scientific career. It is incredibly not useful to say that, as a group, we are responsible for the bad outcomes and are denying our complicity.
OK, we are responsible for the bad outcomes. And when we realise this, as I and so many of my talented colleagues have, we leave science. We start selling clothes or open a bakery or run a bed-and-breakfast or go into finance, and aaaaaall that talent and investment dies. This can’t be the best way to do this. It can’t be.
What do we do AFTER we have taken responsibility for our complicity in the bad outcomes? Tell me that.
Oh no but wait, she says. And again I both shudder and agree.
“We all know examples, …, where the focus on outcomes has enabled practices that actually corrupt underlying objectives, even when they meet targets and indicators. So, it behooves us to approach the question what the desirable future end-point of transformative change can look like with openness and humility.”
But then comes the part where I shudder and don’t agree. She says:
Page 20: “There are promising initiatives to innovate our ideas of economy, society, and nature, … Degrowth, solidarity economics, limitarianism, and agroecology … “
She says these fall victim to a “cultural hegemony [that] continues to ridicule, marginalize, and erase them. They will be seen as left-wing hobbies, not constructive, anti-freedom or communist, irrational, anti-science, or romantic.”
She also says, “If elites feel the need to marginalize and ridicule people and ideas, this is because they feel threatened by them. And this in turn might mean that they might be on to something useful, and that you should consider joining them.” My problem here is that she is also ridiculing the carbon removal technologies on page 9, and that, for the masses standing on the sidelines, these are two groups ridiculing each other in a power struggle.
How do we, the masses on the sidelines, navigate this? Can we, the masses on the sidelines, figure this out somehow? Can we, the contributor … citizen contributors, help ourselves by building tools to figure this out?
page 20: “The endpoint of transformative change should not be defined, and especially not by science.” While I agree with this, I do think we need guiding principles or some sort of guide; just flinging our arms up doesn’t really get anything done. And I think science can help. I think the scientific method is a useful tool, and the body of knowledge, however flawed, has merit, especially when it can be teased apart and recombined and reinterpreted and then tested, by all of us in little overlapping wholes.
page 20: “I do however see it as a core task of my chair to foster the generation of radical alternative imaginaries as well as options for change that are guided by values of justice and pluralism and to join forces with societal actors to put them into practice. Knowledge and learning are an indispensable component of this task. Processes of transformation are complex and they will inevitably create unanticipated effects. It is crucial that we need to build knowledge infrastructures to track these changes and effects. However, we need to do so in ways that counter the current emphasis on key performance indicators, control, and evaluation, and that facilitate emancipation, learning and reflection. Storytelling can be a method to support these knowledge infrastructures. Stories resist hubris and detachment because they are relational, because they combine the emotional, the personal, the factual, and the fictional, and because they are incomplete and open.”
I agree, but we need balance. Just as in emergent game design, where you do have an end goal to level up – a checkpoint, not the end of the game – you have things to guide you, but you are also free to explore and do side-quests. For me, I don’t just want to sit and sing kumbaya in a circle. What comes after listening to and sharing stories? It’s like preparing the food but never cooking and eating it. I want to DO stuff, and that must also be valuable, surely?
Other stuff (post in draft, etc. etc.)
page 15: “Due to this refusal of science to change, knowledge inequities continue to persist. I will give two examples. First, a recent article in Nature states that only 5% of total global research funding in agriculture is relevant for smallholders while smallholders make up 70% of global farmers and are essential for food sovereignty and food security.”
I noticed this too, with urban governance as another example. (O’Brien ref and Brennan ref.)
page 17: “It also requires that research resists problem framings that distract attention away from these structural causes by promoting voluntary measures or by targeting individual behavior or consumer choices. And it means that science has to stop producing and enabling technocratic and unjust solutions.” Agreed.
page 18: “Drawing on the work of Chantal Mouffe, the pluriverse requires that we enable struggles and debates to occur in the public space. Instead of placing science outside politics or allowing science to foreclose the possibility of politics, knowledge production practices need to become part of political dialogue and contestation.” Agreed. How do we do this? How do we operationalise this?
But where we have seen or tried these initiatives, they don’t work. Personally, I think these approaches do not take the pluralist version into account. They are also patronising in that people “should” (oh, the tyranny of should) follow these practices. These initiatives require everyone to do the same thing, and that is just not how humans work. I am not saying these can’t work; I am saying they only work as a pluralist patchwork. And you can’t force people to pick your option. So I think these initiatives also need the engineer’s reality check. I’ve been on both sides, and now I’m stuck in the mud in the middle, and I can see that we can integrate this, I know we can. But it’s messy and violent and we just have to deal with that. Not eco-fascist violent, but maybe like death-by-a-thousand-cuts violent. Soooo I think my project is about developing the tools to deal with that. Which may, you know, include AI-facilitated psychology for revolutionaries. Sometimes we drink from dark waters, Kahlil.