It’s time for another Design, Cognition, and Learning reflection post! As usual, more detailed notes and full citations are available separately. Today's topic is "design problems." We've already established that a definitive, objective, and complete definition of design is impossible (as the Tao Te Ching goes, "the way that can be spoken of is not the eternal way / the name that can be named is not the eternal name"), so we'll skip that part -- but we can talk about what a problem is.

"There are only two critical attributes of a problem. First, a problem is an unknown entity in some situation (the difference between a goal state and a current state)... Second, finding or solving for the unknown must have some social, cultural, or intellectual value. That is, someone believes that it is worth finding the unknown. Finding the unknown is the process of problem solving." -- Jonassen's "Toward a Design Theory of Problem Solving"

Well, that's a start. It doesn't tell us what sorts of problems are out there, how they change, how they come into being, what people do with them -- but I can agree with that. Dorst points out in "The Problem of Design Problems" that solutions and processes are greatly affected by the problems they're trying to solve and the way those problems are presented. (His implication: "we'd better take a look at problems, then.")

That sounded far too straightforward -- and it is. Design isn't algorithmic. The idea of sending A PROBLEM through A PROCESS to get A SOLUTION seems to me to imply staticness in all those things; here's THE PROBLEM and it will not change as it marches through the immutable PROCESS -- I get this image of boxes labeled INNOVATION dropping steadily off an assembly line, and snort.

This is probably why I found myself frustrated by Goel & Pirolli's "The Structure of Design Problem Spaces," which reads as if it were written by (stereotypical) engineers, although it's actually from a cognitive science journal (titled, unsurprisingly, "Cognitive Science"). I mean, they have phrases like "incremental development of artifact" and "predominance of memory retrieval and nondemonstrative inference" in a list of "invariants found in the structure of... design problem spaces." It's a great example of the positivist viewpoint described in Dorst's paper, written in a statement-heavy style: lots of categories, lots of "this is the way it is" mechanics.

Okay, maybe post-positivist. "There is nothing deep about there being three phases rather than n phases," they say sotto voce in a footnote. "Designers use these three phases to talk about their processes. We too found them useful for our purposes. It would certainly have been possible to do a finer-grained or coarser-grained individuation." But still. We're SCIENTISTS! We make TYPOLOGIES! I'd summarize the paper as saying that human designers are an information-processing system with a problem. Take a human with a memory and certain cognitive and physical capabilities, and place them in a situation with a goal and a problem, and you have a design situation.

I mean, I don't disagree with that, but it's like saying... that an artist is a paint-processing system. Take a chef and place them in a kitchen with ingredients, and you have a Cooking Situation. But doesn't that sound like a woefully inadequate description that got truncated before it even really started?

Dorst seems to share that frustration. His paper mentions Schoen, who I read last semester; Schoen is all about metacognition, about designers (and engineers) learning to be aware of their own actions and thoughts. However, I hadn't realized how he fit into this historical context; Schoen's phenomenology was a direct response to the positivistic, rational ("program a robot to do it") paradigm. Design isn't an algorithm; you can't just codify expert knowledge into logical rules. Also, who you are affects how you design; an expert designer is, to a large extent, drawing on themselves as an instrument. With that in mind, what's "design knowledge" and how do you help students develop it, if they're (obviously) not the same person as their teacher or anyone else, and you can't tell them "here are the rules to follow"? Metacognition, says Schoen. Get them to stop and reflect on what they're doing so they can adjust and teach themselves according to who they are.

Which is, in fact, exactly what I love about dancing and dance class. Holly teaches us principles -- rotate the pelvis, extend the arm, curl the spine -- and then gives us time to explore how that fits into our skeletons. Kyler will do a movement and describe and demonstrate the image of effortless collapse he's trying to get, but he's a tall, thin guy, Lily is a petite first-year, and I'm a tall grad student with less hamstring flexibility than a log -- so it's about him trying to get us conscious of our bodies and the way our muscles can move to make shapes about the feelings he's trying to get at. Ethan and Cal also do this with me in the mornings at the fieldhouse: feel your quads kicking into that movement when you lower your hips? Can you pivot before you push off on that turn? Kinesthetic teaching is all about reflection-in-action, and I think engineering classes could benefit from more of that sort of pedagogy (which is why I'm so excited about Janet's work in teaching engineering statics through yoga poses).

Now, Janet's methods will work for some problems but not others. To help think about what sorts of problems work with what tools and situations, Jonassen presents a typology of problem solving:

  • logical (how efficiently can you manipulate this puzzle)
  • algorithmic (apply this formula)
  • story (stories with formulas/procedures embedded, like story problems in math)
  • rule-using (apply rules to a constrained system)
  • decision-making (weigh multiple options; a finite number of answers is possible; personally situated, not abstract)
  • trouble-shooting (hypothesize and test)
  • diagnosis-solution (figure out what is wrong and fix a thing most efficiently)
  • strategic performance (apply tactics in realtime to reach an objective)
  • case analysis (no right answer, identify solutions and defend your position)
  • design (vague goal, no right/wrong answer, act on goals to produce artifact)
  • dilemmas (there is no solution)

It's interesting to think about how these intersect with Bloom's taxonomy and Piaget's developmental stages; little kids who haven't gotten to abstract thinking can't tackle dilemmas, for instance. And algorithmic puzzles only require lower-level Bloom activities ("apply"), whereas case analysis pushes you up to at least analyzing and evaluating.

A final, not particularly connected-to-anything thought before I dash to class: a lot of people describe design problems as "wicked problems," and I had heard that so often in design/engineering circles that I just assumed that the term had originated there. But actually, "Dilemmas in a General Theory of Planning," by Rittel & Webber, is about... policymaking. I did not expect this, and I wonder who transplanted that seed over into the more technical realms, and how they got it to catch on. That sort of cross-over transplanting is a skill I'm trying to develop, so the history of successes of that type is of tremendous interest to me.