What we play is life, version v5.2 – methodology development, causing yet another total rewrite

The previous post – version 5.1 – was a mess and I found it really hard to work through all that stuff. Part of my frustration is the feeling that I am doing this to get academic research funds, but I don’t really care about that aspect of academic rigour, and that doesn’t do wonders for the motivation. That changed this week. I’m really starting to see the value in it – for designing an actual game, for example – and I’m having fun! My brain hurts, but hey.

I’m currently looking at building something like a participation curiosity scale, inspired by Kahan’s article. I’m a bit embarrassed to say I’ve been quoting Tim Harford left and right since 2017, but only actually read the article he quotes this week, spurred on by Ricardo’s mention of causal approaches and me having a panic about how one measures ‘replacing judgement with curiosity’.

Scientifically literate people, remember, were more likely to be polarised in their answers to politically charged scientific questions. But scientifically curious people were not.

Curiosity brought people together in a way that mere facts did not. 

The article is titled “Science Curiosity and Political Information Processing” by Dan M. Kahan, Asheley Landrum, Katie Carpenter, Laura Helft and Kathleen Hall Jamieson, published in 2017. They refer a lot to their 2015 article, where the actual work was done, and then, even better, Matthew Motta, Dan Chapman, Kathryn Haglin and Dan Kahan published a newer, better method in 2021, titled “Reducing the Administrative Demands of the Science Curiosity Scale: A Validation Study”. How lucky am I?! So I started unpacking this, and that led to a near total rewrite of the entire proposal. More on that below. First, the sisuwords.

Sisu, desenrascar and more

The name “participation curiosity scale” or PCS sounds terribly boring, and seeing as I’m back on the Finnish thing, I thought: I’m basically wanting to develop people’s sisu. This then made me think, of course, of the Portuguese desenrascar, which, despite me still being unable to pronounce it properly, is my favourite word. I finished off yesterday by getting lost in academic articles about these words – how wonderful! And then got lost in the rabbithole this morning of finding similar words in other languages. My, cough, academic interest, cough, in this, if I had to defend myself, would be to name this scale in the local context’s word for this … toughness … term, and of course the bloody English don’t really have a word for it (let me know if you have something). We think “hacking” comes close. This local naming would then link to the bottom-up, contextualised need for this work, which Elinor Ostrom articulated through polycentricity, and Ricardo helpfully linked to this article titled “From policy coherence to 21st century convergence: a whole-of-society paradigm of human and economic development”.

So, first, some fun words (I mostly didn’t add the translations because I want to inspire rabbitholes for the reader too). Then I’ll give an outline of the whole rewrite that got me to the scale, and then some thoughts on how the scale is shaping up. I think I get confused again at the end so it’s probably all wrong, but definitely moving in a good direction.

  • sisu – Finnish. Thesis on the word, titled “Sisu as Guts, Grace, and Gentleness – A Way of Life, Growth, and Being in Times of Adversity”
  • desenrascar – Portuguese. Article: “Positive Psychology Of Portuguese “Desenrascanço” In Multimorbidity: The General Practitioners’ Perspective”
  • jeitinho – Brazilian. Article: “Between simpatia and malandragem: Brazilian jeitinho as an individual difference variable”
  • baklei – Afrikaans. Or the phrase “maak ‘n plan” (I had to ask AI, because I couldn’t really come up with something, despite being Afrikaans. But I don’t disagree – I have heard people say “hy het baie baklei in hom”. But it doesn’t quite do it.)
    I didn’t like the AI’s answer for the Dutch, and the Germans one would think have an excellent word, but AI didn’t get that either. The AI suggested eating bad things:
  • Door de zure appel heen bijten – Dutch. Surely there’s a word for this?
  • Durch die Wand beißen – German. Come on, Germans have funny words for everything. There has to be something for this.
    The other culture with good words for everything is the Japanese. According to AI, they have:
  • ganbaru – Japanese.
  • gumption? – English. AI’s answers were “pluck”, “gumption”, and “grit”. Hack?
  • kækhed – Danish (also AI – chat with my Danish friends pending 🙂 but I dig this word already. To be expected from the Danes, they also have hygge.)
    Then I had to do my other consortium partners:
  • Çalıntı ruh – Turkish. Also İnatçılık but that seems to simply translate to stubborn. I’ll ask my Turks.
  • skärpa – Swedish. (AI answer, to be verified)
    sharpness? That’s new. So of course then we need to complete the set with the Norwegian word too:
  • kjempegøy – Norwegian. (AI answer, to be verified)
    This could be a whole project on its own! How different cultures approach resilience.
    And then I had to stop because work beckons, but I will update when I find more good ones!

The steps before the project’s 5 steps:

I found I have to tell quite the story before I get to the what. Points-wise:

  1. The complex, intertwined environmental, societal and political challenges we face cannot be addressed by simplistic, isolated, top-down solutions (Chan et al 2021). (I may call this the climate crisis, the crisis of democracy, the crisis of expertise, the crisis of participation – but you get the idea. There are crises, and they all make each other worse.)
  2. Responses to sustainability challenges are not delivering results at the scale and speed called for by science, international agreements, and concerned citizens.
  3. Yet there is a tendency to underestimate the large-scale impacts of small-scale, local, and contextualized actions, and particularly the role of individuals in scaling transformations (O’Brien et al 2023). – this is also the Ostrom stuff.
  4. Volunteer contributor groups, like OpenStreetMap, are accessible ways for individuals to get involved in environmental activism. (Basically, I want structure and accountability for this local action. I guess because then it can scale.)
  5. However, participation in these groups is not without challenges (Horsham et al 2024). They typically have highly skewed participation demographics (REF – e.g. gender, income and minority groups) and these groups tend to create their own echo chambers (REF) where certain approaches and political views can be favoured (REF), which hampers the long-term success and health of the group. This bears resemblance to political polarisation, where it has been noted that most efforts to persuade people have little or no effect (Kalla & Broockman 2017).
    (Basically I am doing this work because I have volunteered for a very long time now and I keep seeing this and it frustrates me beyond measure. So now I want to do something about it.)

The TL;DR of the 5 steps

  1. Deep canvassing operationalises Gilligan’s ethic of care, but it has issues
  2. Digital approaches can help address the issues, but getting people to engage with it in the first place is hard
  3. Game design can help get people engaged in a playful way, through considering what motivates them (28 dimensions) and structuring the rules of engagement
  4. Thinking in a metagaming way can help structure the rules of engagement
  5. Bringing all these together into a method of measuring and cultivating participation curiosity – what I’m calling a participation curiosity scale (or a sisu-scale, or a [insert-your-sisuword-here]-scale).

The project in 5 steps

  1. Deep Canvas
    (working title for this section is something like Applying an ethic of care to politically charged scientific questions – combining deep canvassing and science curiosity method tool approaches.)

So then, these Kalla and Broockman guys did something about this, through deep canvassing. For some inspired reason, after talking to Ricardo about how to test things and behaviour change, I remembered one of the books I also keep referencing and embarrassingly hadn’t looked up the original references for until now. The book is by Anand Giridharadas, called “The Persuaders: Winning Hearts and Minds in a Divided Age”. On page 316 it explains the deep canvassing methodology in 8 labour-intensive steps – I noted it in my blog back in 2022. I then had to go on quite a search to find the original article.

Deep canvassing is a one-to-one conversation methodology that has been scientifically proven (Broockman & Kalla 2016) to be able to do two vital things:

  1. Create lasting changes in how (or whether) people vote by shifting the underlying emotions and attitudes that determine our political views
  2. Generate new trust and connection across difference or disagreement.

What I loved about this approach is that it basically operationalises the stuff Carol Gilligan talked about. Check this blogpost from fricking 2018: “Experting in times of crisis: Ethics of care.” And I am still working on it! Impressive. I was calling this jouissance then. Maybe I need to add that to the sisu words. Although I feel like I’ve moved on. I dunno. (I also just realised I’m repeating a lot of the previous post, but hopefully this is getting to a better flow. Hey, this is for me, you don’t have to read all this.)

To me, jouissance is resistance to otherness, a way to encourage relationships of care. The six steps showing how to do this come from Carol Gilligan’s paper “Moral Injury and the Ethic of Care: Reframing the Conversation about Differences” (2014, Journal of Social Philosophy, Vol 45(1), 89-106):

  • Association—the stream of consciousness and the touch of relationship—can unlock dissociation, bringing what is out of awareness back into consciousness. When it does, we have the sensation of discovering something at once familiar and surprising. Something we know, and yet didn’t know that we knew.
  • The ethic of care in its concern with voice and relationships is the ethic of love and of democratic citizenship. It is also the ethic of resistance to moral injury.
  • Listening in a way that creates trust.
  • Lifting, if even only temporarily, taboos: Replacing judgement with curiosity.
  • Gently embracing the intimate.
  • Rebuilding the trust that facilitates our ability to love.

The newer article by Gilligan and Eddy (2021) focused on active listening, and then there’s Staddon et al 2023, whose article titled “The value of listening and listening for values in conservation” clearly places this in a nice context. So I am fairly convinced that the project should be rooted in applying deep canvassing principles to improve the functioning of volunteer-based, scientific contributor groups.

I also came back to the ethic of care thing about why I want this transformation of expertise, which links to Chan’s levers and leverage points.

An ethic of care prioritizes empathy, compassion, and interpersonal relationships, complementing an ethic of justice which focuses on fairness, rights, and moral rules. Care ethics tends to be more contextual and particularized, considering individual circumstances and narratives, whereas justice ethics strives for universality and impartiality. An ethic of care is particularly relevant in situations where any approach is likely to be fraught, or contested. The original work by Gilligan focused on abortion studies, and this project intends to examine an ethic of care approach in the context of politically charged scientific questions.

But then, deep canvassing falls short. This is great, actually, because then my project can contribute to the state of the art. First, deep canvassing is labour intensive: it’s one-on-one, done by trained canvassers, and doesn’t scale well. Secondly, deep canvassing may not work so well in science.

Thinking about how to make deep canvassing work better seemed to start aligning better and better to how the curiosity scale method was being streamlined. I’m still chewing on this. And overall how to get ALL OF THIS IN TEN PAGES. Crikey.


2. Digital
(the title for this section still needs work: Develop a framework for a digital, self-guided methodology to train active listening (??))

Ricardo shared a pre-print about “Deep canvassing using AI”. I haven’t read it yet; it’s next on the todo list. My concern was that human connection can get lost in the compromise, but after reading the curiosity scale methods I’m starting to be more open to some AI prompt assistance. My collaborator in OMI designed a self-guided survey using AI about what the metaverse means to people, and it was so gentle and so fun that several of us agreed it was a really nice use of AI that improved human connection rather than stifling it. So it certainly is possible to do this well.

This section is a work in progress, but will examine the need and/or opportunity of using digital, visual and AI tools, while taking care to improve human connection rather than reduce it. My thinking here went towards: digital is all well and good, and then nobody uses it. There’s a reason deep canvassing is called “door to door” – you literally go bang on people’s doors. Whether it is door to door or digital, there is still the challenge of getting people to engage. And especially if it is about reaching people who aren’t in the room but should be there.


3. Dimensions of play
(Draft section title: Consider the dimensions of play and how those needs are currently met, or not, in volunteer groups. )

Volunteer groups may not be willing to be canvassed. Or, the people who need to be part of the group, because of their skills, or a sociable personality, or plainly because they have a democratic right to be there, are not in the room, so to speak. Hence, look to another area where participation is voluntary: game design.

I think there are two things here. The first is the dimensions of play, from McKechnie-Martin et al’s 2024 article “A Meta-Ethnography of Player Motivation in Digital Games: The 28 Dimensions of Play”, which I’ve crudely listed in this table:

Table 1: The 28 Dimensions of Play (McKechnie-Martin et al 2024), grouped according to what may be favoured by contributor groups, and what needs may be important but not met – to be tested in project.

  • Constructive drivers: Competence, Continuation, Cooperation, Creativity, Expectation, Intimacy, Progression, Strategy, Value
  • Destructive drivers, present, but likely not to appear in self-reporting: Autonomy, Domination, Status
  • Should be more prevalent than it is: Experimenting, Exploration, Fellowship, Idle, Leadership, Sensation, Story, Violence
  • Neutral, not immediately relevant: Competition, Escapism, Expression, Fantasy, Growth, Health, Relaxation, Thrill

And then the second thing is some kind of game-rule, or game-world, design that sets the rules of engagement. Something to help support or explore things like cognitive dualism and complicating the narrative.

I expect an outcome here of either people just plainly not wanting to see the game aspect, or, very realistically, that it is simply not possible for any one intervention to meet different people’s needs. In addition, trying to make existing initiatives work better is hard. People don’t like change. So can we build bridges instead? It occurred to me that HOTOSM – the humanitarian branch of OpenStreetMap – works this way.


4. Metagaming
(Draft section title: Investigate how metagaming activities can complement initiatives, to provide suitable niches for different player types / contributors.)

This then links to Kahila et al’s 2023 article “A Typology of Metagamers: Identifying Player Types Based on Beyond the Game Activities“. And it links to the need to engage with Chan’s Interdependent metatheory (but I am still stuck on that), which treats human action as continually influenced by an interdependent web of factors. Complex, emergent, large scale of influence.

Table 2: Categorization of metagame activities (Kahila et al 2023). A production category, and a curation or accountability category, should be added, along with tailoring the context, to make this more applicable to contributor communities.

  • Consumption: consuming entertainment; consuming information; consuming art
  • Game-enabling: purchasing and maintaining; modifying; organizing
  • Information-seeking: seeking information about game progress, game features, and user-generated content
  • Creating and sharing: creating and sharing entertainment, art, and information
  • Discussion: discussing game progress, game features, and user-generated content
  • Strategizing: planning; reflecting and analyzing; mastering

Kahila et al (2023) considered different ways that people may access information to improve their gameplay experience, distinguishing versatile metagamers, strategizers, and casual metagamers.

My hypothesis is that contributor communities will have a high proportion of strategizers, while casual game communities will have more casual metagamers. So then I want to test whether versatile metagamers can be a good bridge, allowing skill transfer between different communities. Something like that.
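To keep myself honest about what “test” means here, a minimal sketch of how the proportion part of the hypothesis could be checked once survey data is in: a pooled two-proportion z-test comparing the share of strategizers in the two community types. The counts below are made up purely for illustration.

```python
import math

def two_proportion_z(a_hits, a_n, b_hits, b_n):
    """Pooled two-proportion z statistic for H0: p_a == p_b."""
    p_pool = (a_hits + b_hits) / (a_n + b_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / a_n + 1 / b_n))
    return ((a_hits / a_n) - (b_hits / b_n)) / se

# Hypothetical counts: strategizers out of respondents per community type.
z = two_proportion_z(60, 100,   # contributor community: 60/100 strategizers
                     30, 100)   # casual game community: 30/100 strategizers
print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 5% level
```

The same shape of comparison would work for the versatile-metagamer bridge question, just with different group definitions.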

Considering metagaming also extends the participation temporally, beyond the immediate activity of participating in the initiative or game. Consider aspects of passionate pastimes and the serious leisure stuff (Mansourian 2024).

Some outcomes I hope for here are to apply convergence culture to science; map potential connections; consider the trade-offs of having to keep up to date with several different platforms; and consider ways (technical and social) to have people strengthen the diversity of knowledge networks, rather than requiring all the people to straddle all the networks.


5. Curiosity
(draft section title: Design interventions to cultivate scientific participation curiosity.)

I’m having an issue in that I don’t just want people to be curious, I want them to act on it. But seeing as this is a short project, maybe getting them curious about acting on stuff is a good start – what Kasperowski and Kullenberg (2018) term civic technoscience, I think, I have to go check. Fang et al (2019) had a nice-sounding article I still have to read: “How does participation and browsing affect continuance intention in virtual communities? An integration of curiosity theory and subjective well-being”.

So now it’s method time!

Motta et al’s 2021 article explains the things they asked people, and how they chose them. Items were chosen based on two parameters. The first is a discrimination parameter, which captures how well answers to an item correlate with where someone falls on the scale – the “if it walks like a duck and talks like a duck, then it’s probably at least something like a duck” thing. The second is a difficulty parameter, which indicates how many people would have a high level of interest in that thing. If an item is low difficulty, lots of people would probably be doing it.

Motta’s method, for example, asked people if they were interested in science. According to them, this item has a comparatively high discrimination parameter – that is, it is good at discerning who falls where on the scale; if people say they like science, they probably do – and a low difficulty parameter – that is, many people express high levels of interest. They also asked people if they had attended a science lecture in the past year. This is a “difficult” thing, so fewer people would do it, and if people did attend, they very probably at least like science (unless they were there to throw paint or something, I guess). So Motta et al called this a moderately high discrimination parameter and a high difficulty parameter.
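For my own sanity check: these two parameters come from item response theory’s two-parameter logistic (2PL) model. A tiny sketch in Python – the parameter values are numbers I made up to mimic the two example items, not Motta et al’s actual estimates:

```python
import math

def endorse_probability(theta, discrimination, difficulty):
    """2PL item response model: probability that a respondent with latent
    curiosity level theta endorses the item. High discrimination makes the
    curve steep (the item separates people sharply); high difficulty
    shifts it right (fewer people endorse)."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

avg = 0.0  # an average respondent on the latent scale

# Hypothetical parameters, for illustration only:
# "interested in science"     -> high discrimination, low difficulty
# "attended a science lecture" -> moderate discrimination, high difficulty
print(f"interested:       {endorse_probability(avg, 2.0, -1.0):.2f}")
print(f"attended lecture: {endorse_probability(avg, 1.2, 1.5):.2f}")
```

With these toy numbers the average respondent is very likely to endorse the easy interest item and very unlikely to endorse the hard lecture item, which is exactly the pattern Motta et al describe.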

I like the high/low discrimination parameters and high/low difficulty parameters, and tried to apply them to my scale. I need to chat to Ricardo here, because I think I start conflating the measures of the questions with the target audiences – I start grouping people, which is probably not good.

The “participation curiosity scale”, or [insert-your-sisuword-here]-scale

I replaced “discrimination” with “interest”, and “difficulty” with “effort” here, but then started realising that I’m confusing the table. So I repeated the table below and tried to stick to questions. I also got a bit mean at the low effort people.

Participation curiosity scale – rows are effort (difficulty), columns are interest (discrimination).

High effort (fewer people express an interest):
  • High interest: knowledge-related volunteer participation – contributors to Debian, OpenStreetMap, Wikipedia etc. We want these.
  • Medium interest: science curiosity? Want to learn, but unsure about their capability for action. Attend TEDx events.
  • Low interest: these are the people who are not in the room. Should they be? Could be, or was, passionate, but not reflecting as such. Rage quit, burnt out, or actively problematic.

Medium effort:
  • High interest: social volunteer participation – recycling groups, knitting groups, forum moderators, guild masters, TEDx organisers, parkrun core teams. We want these; an at-risk group whose needs are often not met.
  • Medium interest: games, sports, general hobbies with a collaborative component? (Sweet spot for increasing participation?)
  • Low interest: a missed opportunity? parkrun regular attendees, orienteering attendees, casual contributors.

Low effort (many people express high levels of interest):
  • High interest: people who don’t do stuff. Watching TV. The apathetic masses. It’s likely not possible to get them to participate.
  • Medium interest: puzzles like crosswords, Wordle. They’re happily doing stuff, but probably not going to reach out and do more. Passive.
  • Low interest: WhatsApp groups, Reddit comments. The annoying masses. They could be good, but they could also just drain energy. Require game rules to shift behaviour.

Repeating the table, trying to stick to survey questions. I think maybe unlike Motta et al, I am trying to reach this whole table, or realistically, everyone except the low effort people who don’t want to be reached.

Participation curiosity method tools questions – high discrimination means an item is good at discerning who falls where on the scale; low discrimination means a low expected correlation.

High difficulty (fewer people express an interest):
  • High discrimination: questions related to knowledge-related volunteer participation
  • Medium discrimination: questions relating to curiosity and knowledge acquisition
  • Low discrimination: questions about frustration – rage quit, burnt out, or actively problematic (likely reporting about others’ behaviour)

Medium difficulty:
  • High discrimination: questions related to general, social volunteer participation
  • Medium discrimination: questions related to games participation, sports, general hobbies with a collaborative component?
  • Low discrimination: questions relating to casual participation

Low difficulty (many people express high levels of interest): ??

I then would have to overlay this somehow with people’s dimensions of play, and their metagaming activities. In a questionnaire, this would get hairy!

So I’m thinking maybe a decision-tree type questionnaire (if this, then go to that section) that avoids asking irrelevant questions can help. Or asking people to place themselves on a grid, rather than giving binary or multiple choice answers, which covers two dimensions of questions per image. But I’m starting to think the AI thing may be helpful here, as a prompting device. Or having richer, deeper, more interdependent questions – more story-driven, like a mini text-based game.

I could probably develop something through a workshop and then with the help of AI. Because we are funded for the AquaSavvy project, I would have access to a bit of paid help here, and maybe I can convince a few volunteers to help too. In the context of the proposal, though, I am scared about how much I can “promise” or dream about that is outside of my control. On the other hand, this will be a really nice interdisciplinary fun thing to do. Once it’s shaped up I could easily pitch it to game research groups, for example. I think for the proposal I need to outline some stuff; it probably doesn’t need to be super specific. I think they say that if successful, a more detailed experimental plan needs to be submitted.
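To make the decision-tree idea concrete, here’s a minimal sketch in Python. The questions, branch logic and section names are placeholders I invented, not the actual instrument:

```python
# Branching questionnaire: each answer routes the respondent to the
# next relevant node, so irrelevant questions are never shown.
# A value of None marks a terminal section of the survey.
QUESTIONS = {
    "start": {
        "text": "Do you contribute to a volunteer project (OSM, Wikipedia, ...)?",
        "yes": "knowledge_volunteer",
        "no": "hobbies",
    },
    "hobbies": {
        "text": "Do you take part in a group hobby or sport (parkrun, a guild, ...)?",
        "yes": "social_volunteer",
        "no": "casual",
    },
    "knowledge_volunteer": None,
    "social_volunteer": None,
    "casual": None,
}

def route(answers):
    """Walk the tree with a dict of {node: "yes"/"no"} answers and
    return the terminal section the respondent lands in."""
    node = "start"
    while QUESTIONS[node] is not None:
        node = QUESTIONS[node][answers[node]]
    return node

print(route({"start": "no", "hobbies": "yes"}))  # -> social_volunteer
```

Each respondent only ever answers the questions on their own path, so the knowledge volunteers never see the casual-participation questions and vice versa – which is the whole point of the branching.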

Another risk could be that people lie about their participation, but because this is specific (name the communities, like OSM or parkrun or Reddit or WoW or whatever), not a general “do you care about science”, it should be OK. We don’t need to ask about their specific contributions, and we don’t care how often they do it. We do want to ask about frustrations in targeted interviews; we can have an opt-in for that.

Some methodology thinking

In Kahan’s 2017 article they situate their example in the consumption of science films. Here, we instead talk about the quantity and quality (diversity of attendance, conflict management) of volunteer contributor groups working on ecosystem conservation in the two target areas (in the Azores and Turku), both biosphere reserves. Their instrument was disguised as a general “social marketing” survey. They had objective indicators such as film viewing time and post-viewing information search – I particularly like the post-viewing info search thing.

Aim for a sample size of about 3000 participants in each case study (Kahan had that number). Include anyone with an interest in the area, not necessarily limited to its geography. I’m thinking: ask the larger population (e.g. 50 000 people?), before testing: do you do stuff in your free time, or have you done stuff in the past and think about doing stuff again? Or would you like to do stuff? Stuff being any stuff, not just “for good”. Hoping then that the self-selected group ends up at around 3000 or so.
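A quick back-of-envelope check on those recruitment numbers (the 50 000 reach and 3000 responses are the figures above; the alternative response rates are my own guesses):

```python
# Recruitment funnel: from people reached to survey responses.
reached = 50_000
target_responses = 3_000

implied_response_rate = target_responses / reached
print(f"implied self-selection rate: {implied_response_rate:.1%}")

# If the realistic response rate is lower, the initial reach must grow:
for rate in (0.02, 0.04, 0.06):
    print(f"at {rate:.0%} response, reach ~{round(target_responses / rate):,} people")
```

So the plan implicitly assumes about 6% of people reached will self-select in; if that turns out optimistic, the outreach needs to scale up accordingly.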

A really nice link would be to tie the place-based curiosity to a sense of wellbeing – so the metaskills and nature become a thing for overall wellbeing. See Phillips et al 2015. This also links to the need for bottom-up, local-context stuff.

Start with unstructured interviews in each target area, about the group’s health, what issues it’s working on, and what the frustrations, successes, and difficult challenges are.

In parallel, have a survey open to any interested parties, with particular effort to reach the population located in the target areas (this is the “50 000”). Hope to get around 3000 responses, with some – ten to a hundred – opting in for further interviews.

This gives an understanding of the state of the volunteer contributor groups before intervention.

Then, the intervention – digital deep canvassing (metacognition development), as well as, or facilitated through, a simple game to nurture curiosity? Sort of like a talk-through game: you chat with an AI bot about what you experience, what you liked, etc. as you progress.

Risk: the AI bit should be easy, and Mengjie can help through AquaSavvy. The game should be really simple, but I don’t know how to do that yet. I am hoping to just use e.g. Worldle, or a group of games. Perhaps a selection of games that could work – people choose one and have the AI chat open in another window?

This intervention is also monitored and used as data.

Then do the surveys again and see if there has been a shift in responses. Measure the “sisu-scale” difference.

What we are testing is whether we can get more people to contribute – to join the volunteer contributor groups (in person, newsletter signups, forum posts etc). We need to check that this is what the groups want, but I don’t think I’ve ever met a group who didn’t complain about low numbers (or we can set this as a requirement to participate in the study). And then, secondly, whether the challenges progress differently. For a two-year project we cannot promise to solve them, but how conflict is managed can improve after a single conversation.

So that’s what I’ve got so far.
