One of the really interesting things about my work here in Pensacola is that I get exposure to a wide range of problem-solving strategies. Our goal, of course, is to teach students how to solve problems using logical, repeatable steps; when they get here, many of them have never had to solve any kind of complex problem (or, really, any kind of problem at all involving computers, radios, etc.), so they don’t have a good conceptual framework for how to do it.
Other students come in with the standard-issue trial-and-error methodology already in place: if something’s broken, I’ll just keep making changes until I fix it. This can be a pretty dangerous mindset. It’s one thing to solve your own problems on your own computer this way, but following the same strategy on a network that all 5,000 people aboard your aircraft carrier depend on isn’t likely to have happy results.
To supplant this process, we start with the basics:
- Did you verify the problem? Is the problem you saw the same as the problem that was reported? This is important because users’ problem reports are often imprecise or even flat-out wrong. Even when the problem description is precise, a non-technical user may report related symptoms, not the real underlying problem.
- Once you think you know what the problem is, how could you tell if that was really the problem? This is pretty straightforward: before you start trying to fix the problem, what are you going to do to identify the true root cause?
- Once you’ve identified the problem, how could you fix it? Some problems only have one solution; others have many. Before you try to fix anything, you should be able to identify candidate solutions that might solve the problem and select the appropriate one.
- As you take steps to fix the problem, what tests can you perform to see if your fix is doing what you want? For multi-step solutions, checking your progress along the way is important.
- Did you verify the solution? This is critical, and it’s something students have to be trained to do because most people don’t do it, or if they do, don’t do it in depth.
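The checklist above can be sketched as a simple, repeatable procedure. This is just my illustration, not something from the curriculum; the fault names, diagnostics, and fixes below are all hypothetical.

```python
# A minimal sketch of the five-step troubleshooting checklist.
# Everything here (the fault names, the diagnose/fix callables) is
# hypothetical and exists only to illustrate the shape of the process.

def troubleshoot(reported_problem, observe, diagnose, fixes, verify):
    """Verify the report, find the root cause, apply a candidate fix,
    and verify the result."""
    # 1. Verify the problem: is what we observe the same as what was reported?
    observed = observe()
    if observed != reported_problem:
        # Users' reports are often imprecise; trust the observed symptom.
        reported_problem = observed

    # 2. Identify the true root cause before changing anything.
    cause = diagnose(reported_problem)

    # 3. Select a candidate fix for that cause.
    fix = fixes.get(cause)
    if fix is None:
        return False  # no known fix; escalate rather than guess

    # 4. Apply the fix.
    fix()

    # 5. Verify the solution actually resolved the problem.
    return verify()


# Hypothetical scenario: a user reports "no internet", but the observed
# symptom is a DNS failure caused by a bad resolver entry.
state = {"resolver": "bad"}
ok = troubleshoot(
    reported_problem="no internet",
    observe=lambda: "dns failure",
    diagnose=lambda p: "bad resolver" if p == "dns failure" else "unknown",
    fixes={"bad resolver": lambda: state.update(resolver="good")},
    verify=lambda: state["resolver"] == "good",
)
```

The point of structuring it this way is that every step produces something checkable: if `verify()` comes back false, you know your fix didn’t work before the user has to tell you.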
As students make the transition from ad-hoc problem solving to a more systematic approach, some of them tend to overthink the problem and/or the solution. This is natural because so much of the material they’re learning is completely new. The tendency is to dive in and try to apply all the detailed knowledge you’ve just gotten, but sometimes the problem is simpler than you think.
Our students have a term for this: to “nuc” a problem is to overthink it or go into too much detail. The term comes from “nuc,” the nickname for someone trained in the Navy’s nuclear power program. Nucs are legendary for being extremely well trained, able to master all the minutiae of nuclear reactor operations, and somewhat nerdy. So when a student says that she nuc’d a problem, it means she was looking too deeply for its cause or solution. This tendency is really hard to guard against, and I don’t have a good, repeatable solution for it yet, other than asking them “are you nucing this?” when they seem to be diving too deep.
This is just one of the many fascinating issues that you run into when you’re teaching people using a revolutionary method. Doing what’s never been done before is hard sometimes…