Getting some thoughts out that have been on my mind, so I can stop worrying about forgetting them and move on with work and other things I’m trying to focus on.

First, my friend Colin has been experimenting with augmented reality glasses for displaying meeting captions so he can read them while still looking around the room. We tried this at my parents’ house over Thanksgiving break, and I was concerned about eye strain – but he says it’s fine. The glasses have a wired HDMI connection and function as if they were an external monitor, so: set the background to black, minimize the captioning window to the bottom of the screen, and you have a large enough window to see teammates, whiteboards, etc. while looking incredibly dorky.

Ian and I got to chatting about this, and started wondering… could we display interpreting that way as well? Remote interpreting, of course. The setup would likely work better with VCO (voice carry-over), where we speak for ourselves instead of signing back, but I would still want a camera on me so the interpreter can continue to get backchannel feedback that I’m understanding them. Or: what would VP (videophone) calls look like if we could see each other and sometimes flip to a glasses-mounted camera so the other person (or interpreter) could look at what we’re looking at – in other words, if we could share artifacts during video calls more easily?

I’ve also been thinking about the ways we’ve described qualitative methodology throughout history. Analytic induction was one I hadn’t read about before; Alexandra Coso Strong’s recent paper popped the term into my awareness (thanks, Alex). I’m somewhat skeptical of its stated aim of “discovering causal universals,” but its description of the refinement process is unashamedly “look, researchers do things in their brains, and create these theoretical structures,” in contrast to “anyone going through this protocol would find a similar theory” – so it might be useful to me.

I struggle with the tension of portraying qualitative analysis as both “repeatable process” and “creative work” (which is inherently not reproducible, not reducible to a protocol via which another person could reliably recreate it). I think I’m trying to find a way to show someone how my steps were reasonable, so they can retrace mine – but the bar isn’t that a different person, locked in a room with only my data and a protocol, would produce the same analysis I did (they’re different people; they’ll combine thoughts in different ways). But at the end of both our processes, we should be able to look at each other’s results and say “yeah, I can see how that makes sense.”

This question is fascinating, and I’d like to understand it more: “Is there a universal hierarchy of the senses?” The data says no – or rather, it’s more complicated than a flat “no”; it’s more like “there isn’t a clear pattern across languages on this.” What implications does this have for my thinking about multimodal learning, especially in terms of using what I personally believe are underdeveloped senses in engineering education pedagogy? (C’mon, folks; engineers gesture and build and move… we learn things by touch, kinesthetically, with proprioception… why not take deliberate pedagogical advantage of that as well?) A very nascent thought, here.

Ah, thinkalouds. I enjoy writing these – it feels like an indulgence. It’s also fun to come back to them years later and see what I was playing with at the time. I hope I don’t forget to follow up on some of these, but I should trust that I’ll come back to them later if they’re important (I want to read more analytic induction material, but right now is Not The Time to prioritize that – grades are due Friday).

Thus goes my brain.