Making Meta-Rationality Available
This free post is about how to learn meta-rationality and how to get credit for it.
It is a draft chapter from my meta-rationality book. If you are newly arrived, check the outline to get oriented. (You might want to begin at the beginning!) If you run into unfamiliar terms, consult the Glossary.
Making meta-rationality available opens important avenues for effective reasoning and action that may otherwise get missed.
These possibilities can be overlooked because meta-rationality is invisible from rationality’s point of view, which is the view of most organizations. Meta-rationality doesn’t fit into rationalism, the typical self-understanding of people doing rational work. Rationalism, as a theory of effective thought and action, includes no place for meta-rationality. It is not part of the ontology.
Meta-rationality is pervasive. It is always part of all rational work, even if only as implicit default choices of ontologies and methods. When it is overlooked, when it is not understood and not engaged deliberately, it is unlikely to be done well. It usually gets done mindlessly or lackadaisically, without adequate thought.
Since no one points at meta-rationality as a category and explains its importance, it may not be seen at all. When it is seen, it is usually viewed as insignificant informal trade-craft, not worth talking about. From the point of view of rationalism, only rational activity—Solving Problems—counts.
Making the meta-rational activity you are already doing visible, to yourself at least, allows you to make deliberate choices about how to use rationality.
This chapter returns to themes introduced at the very beginning of this book, and partially developed in “Rote rationality and unreflected meta-rational choices” at the end of Part Three. You might like to review those quickly, to see how your understanding of them may have changed since you first read them.
This chapter addresses points like these:
“If meta-rationality is so great, why have I never heard of it?”
“So how do I learn to do it? I can’t find any books or training courses, and my more experienced coworkers have never heard of it.”
“Much of my actual work is figuring out how to translate concepts between teams in different disciplines, and several times that’s resulted in significantly increased overall productivity. But I get no credit for that; it goes to the people whose productivity I increased, instead.”
“I can explain ways we could significantly improve our work processes, but my boss says no: we follow standard industry practices unless there’s some extremely good reason. Also, he says that if any changes are going to be made, they have to come from the right level in the management hierarchy, and I’m too junior to even be thinking about it.”
“You have written that institutions, societies, and cultures are falling apart, due to a loss of faith in rationality, and that creating a ‘bridge’ to meta-rationality may be necessary to avoid civilizational collapse. How can individuals and institutions get started?”
This chapter has several sections:
What it means that meta-rationality is “invisible,” and what it may be seen as instead
Several reasons meta-rationality is typically overlooked
How to learn meta-rationality, even though explanations are mainly unavailable now
Rational reconstruction: the strategy of disguising meta-rationality as rationality to make it acceptable to rationalists
How to get credit for meta-rational work
Exercises
Meta-rationality is invisible, and therefore inaccessible, at two levels:
The category is not recognized, and there’s no commonly accepted word for it, or for its aspects and operations.
Individual situations in which meta-rational activity is helpful, or could be helpful, are overlooked as uninteresting and unimportant.
These blindnesses are mutually reinforcing. The category is not recognized because its members are ignored. Instances are not recognized because the ways they are similar—in being meta-rational, or in calling for meta-rationality—are missed, for lack of the category.
Because the category is missing, its forms and aspects and subcategories—the whole ontology of diverse meta-rational activities—are unknown. There are no commonly recognized ways of saying “we have a nebulous mess here, so we need to construct some formal Problems whose Solutions would help manage it,” or “what sorts of formally correct inferences will go wrong here, and how do we install stops on reasoning to prevent them?” We do those things sometimes, but the lack of a vocabulary means we do them less often than we should, and less well.
Let’s make this problem concrete, with simplified fictional examples. Suppose a recently-hired programmer has been assigned to a team that’s starting a project to build a new software system. The newcomer may soon have some complaints:
The team spent weeks flailing around incoherently before finally making some decisions and getting down to work. Man, that was a frustrating waste of time!
That may have been meta-rational work, and critical to the eventual success of the project. The group was reasoning together about how to do the rational work of system construction. If it was frustrating and disorganized and took much longer than expected, that may have been because they weren’t good at it. They didn’t have the conceptual vocabulary of meta-rationality to describe what they were doing as they were doing it. They hadn’t developed meta-rational competence, because that was never explained to them, and they had never been rewarded for it.
On the other hand, even if the team had outstanding meta-rational competence, the time they took to do the meta-rational work may have seemed like time-wasting confusion, “flailing around incoherently,” to someone who wasn’t familiar with meta-rationality.
They spent ages discussing words, and arguing about “what a ‘product’ really is,” and I was rolling my eyes and trying to stop myself from yelling “it doesn’t matter, why are we doing this, let’s start coding! This is supposed to be a software development team, not a late-night college dorm philosophical bull session! A ‘product’ is just an object in the database. It takes ten lines of code to implement that, let’s write those and move on.”
The team may actually have been figuring out an adequate ontology for the system they were going to build. That is the foundation for any well-designed software. Getting it as right as possible at the beginning—and revising it throughout the development process—is a major determinant of success. Engineering the relationship between a representation (a “product” object in a database) and reality (a specific actual eggplant in a supermarket) can be difficult, complex, and critical.
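To make the stakes concrete, here is a minimal hypothetical sketch in Python. Nothing in it comes from a real system; the class names and fields are invented for illustration. The newcomer’s “ten lines of code” treats a product as a single database record, while the team’s ontology discussion might instead distinguish the catalog entry from the physical items it represents, which behave differently in reality.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# The newcomer's version: "a 'product' is just an object in the database."
@dataclass
class Product:
    sku: str
    name: str
    price_cents: int

# One possible outcome of the team's ontology discussion (purely hypothetical):
# the catalog entry and the physical items it stands for are different things.
@dataclass
class ProductType:
    sku: str
    name: str
    price_cents: int            # what the catalog advertises

@dataclass
class StockItem:
    product_type: ProductType
    received_on: date
    expires_on: Optional[date]  # a specific eggplant spoils; its catalog entry does not
    damaged: bool = False       # reality diverges from the representation
```

Whether the system needs to track expiry or damage at all is exactly the sort of question the team was arguing about; the gap between representation and reality shows up later as concrete schema decisions and bugs.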
They argued in circles for a couple weeks about which framework technology to build on. Everyone had strong opinions that their favorite framework was the best. This one was fastest, that one was most stable, another had the best support, someone else’s was written in an exciting new programming language. All true, maybe, but there was no way of deciding which factor was more important. Eventually the team leader just used a random number generator to pick one.
This was a refusal on the team leader’s part to make a meta-rational judgement; presumably from not knowing how to, or even from lacking the concept of “meta-rational judgement.” In the next chapter, we’ll look specifically at how to construct a software system ontology, and how to choose a software technology framework, as examples of meta-rationality in action.
Having seen some examples, let’s discuss the problem of invisibility a bit more abstractly now.
Meta-rational activity may be entirely overlooked, in the same way reasonable activity usually is. (See “Routine is too obvious to notice” in Part Two.) Then, for example, you imagine that a technical project is a Problem with a series of sub-Problems that must get Solved in succession. It’s true that this is one aspect of it. But this hides questions like “Where did this Problem come from? How does it relate to reality? Why is it worth Solving? How was it subdivided into smaller Problems? Are there better ways of doing that? How do we decide what sorts of methods to use to Solve each one? What will be done with a Solution once we’ve got one?”
Alternatively, instead of being entirely overlooked, specific instances may get described accurately at the rational level. For example, “we spent two weeks figuring out what sorts of data objects our inventory software system should keep track of, which was good to get done early in the project” describes an instance of ontology construction, in rational language. The unusual, meta-rational quality of such instances is unlikely to be noticed (other than by people who have developed some meta-rational skills).
When meta-rational activity is noticed (but not as such), it may be misunderstood as circumrationality (“mere technician-level stuff”); as “people skills”; or in cases of undeniable spectacular success as “creative” “intuitive” “genius.” Let’s see how each of those can play out.
In the previous chapter, we saw that Norman Heatley’s immensely valuable, radically innovative work on penicillin was dismissed as technician work, not Real Science. (Recall from Part Three that a “technician” role means advanced circumrationality, such as using sophisticated measuring equipment competently. That doesn’t require full competence in rationality.) Real Science is the discovery of a universal law, and Heatley didn’t do that. He did invent an important chemical separation method, which is work scientists do and technicians don’t, but methods development didn’t count. (This has changed somewhat in the decades since, as it’s become obvious that most scientific progress comes from new methods; but science class still mostly teaches the obsolete ideology of law discovery as the essence of science.) He also designed and built automated laboratory equipment, which was unheard of in academic biology in Britain at the time. That is also not technician work, and should at least count as engineering instead.
In technical organizations, meta-rationality is often confused with irrational people-wrangling work. It’s true that meta-rationality often does require communicating and coordinating with other people, and organizing groups. So does rationality. And meta-rationality can be important when working solo as well; I’ll give some personal examples in the next chapter. It’s true that meta-rationality is often left to “high-touch” non-technical workers (managers and marketing people) in technical organizations. That is often just due to rational workers’ revulsion for anything non-rational. While reluctantly admitting that wrangling irrational people is often necessary, many would rather stay as far from it as possible. Unfortunately, this may result in leaving meta-rational technical decisions to non-technical people, which usually doesn’t work out well.
When meta-rationality undeniably produces valuable results, it is usually attributed to “creative intuition” or even “genius.” These are thought-stopping cliches, which function to prevent inquiry into what specifically was done and how and why. The success is regarded as inherently inexplicable, bordering on mystical. That’s a way to avoid having to understand the role, importance, and details of a non-rational activity. Failure to understand it makes future breakthroughs less likely.
In Part Three, we saw that rational reasoning requires deliberately un-seeing everything not theorized by the formal system you are using, to avoid distraction. Competence in rationality comes with making this a reflex habit, to minimize the effort needed for un-seeing phenomena and motivations from beyond the system’s scope. Unfortunately, those include meta-rational considerations that could provide significant guidance for your rational work. They become invisible.
Because the concept “meta-rationality” has been missing, there’s no collective effort to understand it better. It isn’t discussed, taught, researched, or innovated in as a coherent discipline. It is not part of the STEM curriculum, although meta-rationality is vital in STEM practice.1 That curriculum teaches rationality exclusively, and transmits rationalism as its ideology.
Rationalism denies the value of non-rational activities, including both meta-rationality and circumrationality. This is counterproductive in training future scientists and engineers, who on the job must do some of both.
Rationalism is fond of dual process theories, which lump everything non-rational into a single undifferentiated “other” category. (See “This is not a dual-process theory” in Part Two.) Such theories contrast rationality with irrationality, superstition, fantasy, self-deception, emotionality, aesthetics, unconscious thought, and subjectiveness. All these do have defects or limits. If meta-rationality were any of those things, ignoring it when doing technical work might be wise—but it is none of them. It’s a different non-rational category.
Meta-rationality extends reasoning to encompass nebulosity, which rationality can’t cope with. Nebulosity is uncomfortable because it makes complete certainty, understanding, and control impossible. Rationality is comforting because you can un-see nebulosity and work instead in an imaginary world that excludes it. It’s emotionally difficult to accept that this is illusory, and to take meta-rationality’s broader view that accepts and works with nebulosity.
Opening up to meta-rationality, and to nebulosity, can be a profound emotional shock. Later we’ll discuss a study of promising mid-level executives who were given training in meta-rationality and a view into the overall operation of the large companies they worked in.
Despite being experienced managers, they found what they learned eye-opening. One explained that “it was like the sun rose for the first time. I saw the bigger picture.” They realized that their prior view was narrow and fractured. “I had only thought of things in the context of my span of control.”
Everyone was working on the part of the organization they were familiar with, assuming that another set of people were attending to the larger picture, coordinating the larger system to achieve goals and keeping the organization operating. Except no one was actually looking at how people’s work was connecting across the organization day-to-day.
This new perspective on the design and operation of their organization was described by some participants as “a turning point,” “an epiphany,” “depressing and frustrating,” or even “devastating.” It revealed that there were many opportunities to improve the organization, but also that working within the current design as managers with line responsibilities might be a waste of their efforts.2
These managers, originally trained in administrative rationality at business school, had assumed that their companies worked overall on a rational, systematic basis—and discovered that they didn’t. Every large company is a giant mess, held together by circumrational “glue people” who compensate with reasonableness for the disconnects between the many internal rational systems. The trainees discovered that meta-rationality was lacking, resulting in enormous waste of effort. About half left line management to take on meta-rational organizational restructuring roles instead. That usually implies accepting less highly rewarded jobs; but they thought they could be more useful that way.
There’s an interesting analogy here, originally suggested in “Trouble, repair, breakdown, meaninglessness, and rationality” in Part Two, and which we’ll explore further in “Developing meta-rationality” later in Part Four:
Just as the paralyzed blank stare of reasonableness breaking down resembles the disinterested objectivity of rationality, so too the floundering vertigo of rationality breaking down resembles the groundless open-ended curiosity of meta-rationality.
Classroom instruction can’t teach you to drive, ski, or make breakfast. Reading books is only a little bit helpful. Rationality is unusual in being teachable in the abstract, in the classroom, rather than in the doing of the activity. Meta-rationality is not like that. Currently, meta-rational skill is largely tacit. It is learned through experience and apprenticeship. This is partly inevitable; we shouldn’t expect a perfectly definite account of methods for dealing with nebulosity.
However, there are disciplines best learned with a combination of explicit and implicit means. Medicine, for example: reading and classroom instruction are required, but so are hands-on experience and the practical one-on-one mentoring you get in a hospital residency. I believe meta-rationality can be like that. Explaining it explicitly ought to help, and that’s what this book aims to do.
My explanations are less detailed than I would like, because we simply don’t know all that much about meta-rationality. The topic needs much more empirical study. A chapter late in this Part sketches a research agenda.
In the absence of a coherent existing body of knowledge, I’m drawing together scattered observations from disparate academic and practical disciplines, together with collected anecdotes, case studies, and my own experience. My aim is to popularize these insights, in order to make them relevant to ordinary technical practice—rather than abstractly academic, as the few in-depth studies have mainly been.
Just pointing out that meta-rationality is a distinct, valuable, and mostly-overlooked activity should be helpful in making it visible, and therefore accessible. It will help to draw attention to it, and to devote resources to developing it. This is true at all levels: in your personal work and personal development; in organizational development; and society-wide. This book aims for those broader scopes, as much as for being a manual for individuals.
Meta-rationality is valuable in all rational work, but like medicine, it is more craft than science. Crafts are not formally rational, but they rely on explicable reasoning in part. A craft might be “intuitive” and “creative,” but this is not anti-rational or especially mysterious. We don’t have a scientific theory of what goes on in a theatrical costume designer’s brain, but no one thinks it’s alien or magical.
Since classroom instruction and texts have been mainly unavailable so far, meta-rationality is mostly learned through personal experience and through informal apprenticeship.
Learning meta-rationality from experience requires open-ended curiosity. This is the sort of curiosity that does not expect to find definite answers; it seeks understanding, rather than knowledge in the form of true statements. It is experimental and reasoned, but comfortable with nebulosity. It is willing to be confused, and willing to allow confusion to persist. When a phenomenon stubbornly refuses to make sense, open-ended curiosity neither jumps to judgement, nor rejects it as boring or frightening.
A main mode of learning meta-rationality from experience is reflection-in-action.3 That is observing yourself doing rationality, and considering how it is working, and how well, and why. It is wondering—with open-ended curiosity, and taking into account the context and purposes—how else you might do it. It is also observing and being curious about your own practice of meta-rationality.
Reflection-in-action requires enough cognitive power left over to reflect on rationality while you are doing it. It requires enough mastery of the rationality trance that you can divide your attention between rational problem-solving and watching and thinking about it, without distraction and confusion. Then you allow the rationality to flow while maintaining awareness of context and purpose as well.
It is also good to reflect on what you have done, and how it went, after the fact. That is usually easier, although you may have forgotten significant details already.
Reflection, whether in action or in retrospect, depends on noticing opportunities for meta-rational intervention. That may be in terms of the explicit ontology of meta-rationality: you may notice instances of the categories described in “Opportunities for meta-rational improvement.” Seeing an activity as improving a system’s circumrational interface, or as adding material supports for reasoning, can be helpful. Reflection may also be tacit and vague: for example, you get a feeling that something is off, for reasons unclear, and it’s time to step back and rethink your rational approach.
The role of a mentor in meta-rational apprenticeship is not to convey the theory—that’s what this book is for—but to encourage and support your learning from reflection. It’s helpful even just to point and say “pay attention when something like this happens, it’s significant.” Beyond that, the mentor may say “look here, this is meta-rationality” or “this is an opportunity to apply meta-rationality” (maybe with “of such-and-such a type”), or suggest how.
As a fictional example, imagine a graduate student in data science has been asked to help a team studying the evolution of ecological predator-prey relationships. They’ve collected a large data set of interactions between species, and need to make sense of it.
The student bursts into their advisor’s office with a breakthrough:
Student: I think I’ve got it! An efficient new algorithm for computing the overall ecological drift factor, without having to consider all pairs of species. It’s really cool. It’s kind of complicated. First, it takes the eigenvalues of the predation matrix. Um, I think that’s going to work. Then we construct a kappa-reduced bipartite clique meta-graph—
Advisor: Uh, let me interrupt there for a minute. This sounds very clever, and I think I can see where you’re going with it, and it does seem like it could be made to work. So, what is your purpose in doing this?
Student: It’s a way of avoiding dealing with all possible species interactions…
Advisor: So, do you think you could get a publication out of this new algorithm?
Student: Well, maybe, I guess? I hadn’t thought of that? It’s not all that original…
Advisor: And you want to graduate this year. You wouldn’t want to take the time out to write and submit and revise the paper. So what does that say about your purpose for this algorithm?
Student: Well, I didn’t mean for it to be a publication, I’m just trying to solve the data analysis problem!
Advisor: So, in that case, being clever and original is?
Student: Not necessary? But it’s really cool, it avoids—
Advisor: It avoids considering the n-squared species interactions. Does that matter?
Student: Well, an n-squared algorithm doesn’t scale. It’s no good.
Advisor: How many species are there in the data set?
Student: A bit over a thousand. I forget the exact number.
Advisor: So the total number of interactions?
Student: Well, a million and a bit.
Advisor: And how long does it take to compute a single interaction?
Student: I don’t know?
Advisor: Order of magnitude. How many arithmetic operations?
Student: Uh… well, a hundred maybe, less than a thousand anyway… so the total number of operations would be about a billion? I guess that would take… maybe a minute? At the absolute worst, a few hours, counting non-arithmetic overhead. So you are going to tell me to just consider all possible interactions, and forget about the new algorithm!
Advisor: How long have you spent working on this?
Student: You are going to tell me I’ve been wasting my time for the past week—but your approach doesn’t scale. Not if there were a million species instead of a thousand or so.
Advisor: But there aren’t. If somehow someone asked you to do a million species, working out a new algorithm might be necessary. But they won’t. And your purpose now is?
Student: To graduate. Which means solving the problem I’ve actually got as quickly as possible…
Advisor: Bingo. So, remember what I told you about meta-rationality, and meta-rational maxims. Are there some meta-rational lessons you could take away from this?
Student: “Keep your broader real-world purpose in mind while doing rational work”?
Advisor: Excellent! What else?
Student: Um… “What counts as a Solution to a formal Problem depends on your purpose.” Oh, and “on the specifics of the situation.” No, that can’t be right, a capital-S Solution has to meet the specification of the formal capital-P Problem, which you explained is independent of purpose and context.
Advisor: So?
Student: I guess in this case it’s more like “Keep in mind the possibility that the formal Problem you are trying to Solve may not correspond to the real-world task you have.” The real-world one might be much easier. Or, I suppose, harder, when you have a mess, not a clear-cut Problem.
Advisor: Yup! So, we can look for more and less general maxims to take away. What more specific lessons might be learned here?
Student: “Optimize scaling only when the actual thing is big enough to need it.” And, “Use order-of-magnitude estimates to get a preliminary sense of problem size.”
Advisor: Great! And in the other direction, think how you could generalize your experience as broadly as possible.
Student: “Don’t try to be too clever”?
Advisor: That’s one way of putting it… I’d say “Try the most obvious, stupid-sounding approach first; it works surprisingly often, and usually fails quickly if it doesn’t.”
The value of very general maxims is that they work across domains. You may be working on something completely different in a few years, and you’ll remember the predation project, and think “Oh, wait, I’m being clever here, maybe I should try being stupid first.”
Uh… and about that algorithm. It does sound clever. Your time may not have been wasted. You might consider taking another look at it after you graduate, and see whether you could work it up into a journal article. Things like that often need gestation for a few months, or years. It will be clearer to you if you come back to it with some distance.
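The advisor’s estimate is ordinary back-of-envelope arithmetic. As a minimal sketch, using only the rough figures from the dialogue (and not any real project code):

```python
# Back-of-envelope estimate, using the dialogue's own rough figures.
n_species = 1_000                  # "a bit over a thousand"
interactions = n_species ** 2      # all pairs: about a million
ops_per_interaction = 1_000        # pessimistic end of "a hundred, maybe a thousand"
total_ops = interactions * ops_per_interaction       # about a billion

ops_per_second = 10**8             # deliberately conservative throughput estimate
seconds = total_ops / ops_per_second
print(f"~{total_ops:.0e} operations, roughly {seconds:.0f} seconds of raw arithmetic")
# Interpreter and data-access overhead might stretch this to minutes or a few
# hours, which is still fine for a one-off analysis of a thousand species.
```

The point is not the exact number but the order of magnitude: nowhere near big enough to justify inventing a cleverer algorithm.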
Gradually, meta-rationality becomes a way of being. It involves not just learning methods, but a personal transformation. Upcoming later in Part Four, the chapter “Developing meta-rationality” is about that.
The section after this one explains how to get credit for meta-rational work. One strategy, introduced here, is rational reconstruction: re-presenting meta-rational work as if it were done rationally. To explain that, we need to revisit some history of rationalism. Then we’ll look at some good and bad consequences.
The term comes from logical positivism.4 Remember that, from Part One? It was the final serious attempt to make rationalism work, back in the first half of the twentieth century.
Logical positivists wanted an explanation for how science works, so we could be sure that everything it discovers is true. Unfortunately, it turned out that scientific discovery simply isn’t explicable in rational terms. What happens during the discovery process appeared to be “irrational”—by which they meant non-rational; meta-rational, in my terms.
To salvage the situation, they distinguished the context of discovery from the context of justification.5 Their view was that the function of science was to produce true knowledge. They couldn’t say how it did that; what happens in the context of discovery is mysterious, but (they said) it’s a matter of human psychology, which they declared irrelevant. Anyway, the reasoning of actual people is defective, capricious, nebulous, and inaccessible, so they didn’t want to deal with it.
What matters, they said, is that science produces truths, however it does. What remains is to show that this is true: that science does produce truths. This is an epistemological issue, not a psychological one. So what we need is a formal procedure for verifying that a supposed truth is in fact true (or true with a numerically specified probability). The work done in the context of justification is to provide a proof of correctness, which anyone rational can check.
That is a rational reconstruction of the discovery. We ignore any extraneous or inconvenient details of what actually happened, and explain how the truth could have been discovered rationally. Then anyone can go through the same process and verify that it works.6
So, now you can see why I’ve gone through this history: re-presenting your meta-rational work as if it were rational is a way to make it understandable to rationalists, and then you can get credit for it. More on that in the next section. But first, having introduced this useful concept, let’s look at some more consequences: both good ones and not-so-good ones.
Logical positivism, although a failure in its own terms, is still more-or-less the mainstream, implicit ideology of science.7 As a result, rational reconstruction became the standard for scientific publication. A journal article doesn’t explain what you actually did. It’s especially required to leave out all your meta-rational and circumrational work: how you came up with your hypothesis, how you worked out the experimental design, and all the many false starts, experimental and theoretical breakdowns, and your repairs in response. That stuff is irrelevant to whether what you say you discovered is true.8
Instead, a scientific paper explains what, in retrospect, you rationally could have done to get the results that you did get. Then other scientists can replicate your experiment. So long as you did get the results you say you did, your fictionalization of the process isn’t misleading, because everyone understands that this is how it works.
Truth is important. Insofar as your goal is to discover truths that hold independent of context and purpose, this standard for explanation is valuable. However, for science, there are two downsides to ignoring the context of discovery.
First, science actually has to be done by people, who actually do have to do the meta-rational work, and who actually have to learn how to do it. It would be very helpful if science papers explained the meta-rational aspects they are required to omit. Then you could learn meta-rationality from examples, especially ones relevant to the particular sort of science you want to do. In fact, often scientists are most excited about the meta-rational parts of their work, and try to sneak a couple sentences about it into the Discussion section of the paper. It officially doesn’t belong there, and usually reviewers tell them to take it out. Instead, this gets transmitted only orally, as folk wisdom or trade-craft. That’s inefficient, and excludes anyone who doesn’t have a personal relationship with a meta-rational mentor or colleagues.
Second, it’s a rationalist fallacy that the only goal of science is to accumulate universal true laws of nature. Science also aims at understanding, which is found in the context of discovery and obscured by rational reconstruction. Understanding involves “how” and “why” and “what does it mean” issues that rationality deliberately ignores. (The upcoming chapter on meta-rational epistemology explains this.) Discovery and justification are interwoven, and can’t be separated without damaging understanding.
Although the term “rational reconstruction” mostly only gets used when discussing science, it’s a universal pattern across rational disciplines. The most innovative and exciting rational work almost always has an unusual meta-rational component. It’s normal to pass over or explain that away, in part for the benefit of those who aren’t ready to work meta-rationally themselves.
When meta-rationality aids rationality in Solving a Problem, the Solution can be understood without reference to the meta-rational work. An exciting Problem Solution is one that transfers: many other Problems can be Solved the same way, by analogy. Then the meta-rational insight that led to the first Solution isn’t needed any longer. Ideally, Solving Problems by analogy to the original one gets reduced, for efficiency, to a routine rational procedure. That requires no thought, just following a series of formal instructions.
This is good, if there are mechanisms in place to ensure that the mindless procedure only gets applied when the relevant usualness conditions hold. This is an example of “limiting rational inference,” a topic we’ll look at in detail later in this Part. Otherwise, you crank out Solutions that are formally correct, but that don’t relate properly to reality, and cause trouble. That is an example of “oblivious rationalization,” discussed in “Rote rationality and unreflected meta-rational choices” in Part Three.
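What such a mechanism can look like in code is suggested by this minimal hypothetical sketch; the routine, its calibrated range, and every name in it are invented for illustration, not drawn from any real system. The formula itself is applied mindlessly, but explicit guards encode the usualness conditions under which applying it is known to be sane.

```python
def estimate_delivery_days(distance_km: float) -> float:
    """A routine, mindless procedure: an invented rule of thumb.

    The guards below are the 'stops on reasoning': outside the regime the
    rule was calibrated for, the function refuses rather than returning a
    formally correct but irrelevant number.
    """
    if not (10 <= distance_km <= 2_000):
        raise ValueError(
            "Distance is outside the 10-2000 km range this rule was calibrated on; "
            "stop and rethink instead of cranking out an answer."
        )
    return 1 + distance_km / 500   # one extra day per 500 km, per the invented rule
```

The interesting design question is who writes and maintains the guards; that is meta-rational work, even though the function body itself is rote.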
The next step, as discussed in “Rote rationality,” is degeneration into rationality theater: you know the procedure doesn’t work in your context, or you don’t care whether or not it works, but for institutional reasons you have to perform it anyway.
In the next chapter, we’ll look at examples of some important meta-rational insights about software development (“Agile” and “Domain-Driven Design”) getting gradually rationalized into counterproductive rationality theater.
Competent technical rationality gains significant prestige. Competent meta-rationality rarely gets any, despite its rarity and sometimes-extreme value. Rationalist institutions lump everything non-rational together as uninteresting, low-value, low-status work—and almost all institutions run on rationalism.
As a personal strategy, I recommend mainly avoiding meta-rational work if you aren’t in a context that recognizes and values it. You may be tempted to do it anyway, because you can see that it would make your team, project, or organization more effective. However, if it’s not valued you won’t be able to get meta-rational insights adopted, and you may get actively punished for suggesting them.
There are exceptions. If quietly doing some meta-rational work makes you better at deploying rationality, so you can Solve some difficult Problem, you will get rewarded for the rationality. Here is where rational reconstruction comes in! If you have to justify your work to people who see rational Problem Solving as the only worthwhile form of thinking and acting, you have to present your work as if you did it rationally. How far your fiction must diverge from reality, and how feasible it is to do this at all, varies from case to case.
However, whatever part of your time was spent on the meta-rational work probably won’t get recognized. (“It’s great that you Solved the Problem, but why were you off in space wasting your time all last week?”) Depending on how closely your work is monitored, you may not be able to take the time for meta-rationality.
Also, not all meta-rational improvements can be made legible in rational terms. Likewise, if you enhance the effectiveness of other people’s rationality, for example by developing a better ontology which they adopt, you probably won’t get any credit for that. It doesn’t count as a Solution to a Problem. Your better ontology will get regarded as “obvious common sense” in retrospect.
These rather gloomy recommendations apply only to steadfastly rationalist environments. Some managers do recognize meta-rationality when they see it, and do reward it. I’ve had two bosses who recognized my unusual ability to bring multiple, seemingly disparate fields to bear on projects, resulting in unexpected, valuable products.
Meta-rationality is often recognized only when done by people who already have senior roles. This is sometimes completely explicit: managers at some level in the hierarchy decide how work is to be done, and worker bees are meant to follow mandated procedures. This makes sense to some extent. Meta-rationality does require lots of experience, and also requires being in a position where you can get the information needed to see the big picture. The senior person isn’t any more rational, and may know less about cutting-edge methods of the field, but may “have a feel for things” that finds shortcuts through difficulties, devises or selects better approaches in some way that can’t be explained, and makes projects run smoothly. However, recent management theory emphasizes the value of delegating many decisions about how work is best done to the people doing it.
So, if you wish to develop meta-rational competence, I recommend seeking roles and work environments that recognize, allow, or encourage it. Those are only moderately uncommon. Waiting until you have enough formal seniority is one strategy.
Alternatively, you may do well to create your own work environment, by becoming a consultant or entrepreneur. In those roles, it matters much less how you get your work product than that you do.
A chapter in Part Five, about meta-rationality in organizations, discusses these issues further.
Did it occur to you, reading the discussion of meta-rational reflection in this chapter, that many of the exercises in the book so far have been prompts for you to do that? Some have asked you to recall instances of meta-rationality from your experience or reading. Others have suggested ways of looking out for opportunities for meta-rationality in your work as you do it from now on.
In retrospect, can you remember times when you engaged in meta-rational reflection-in-action?
Have there been times when you have noticed opportunities for meta-rational improvements that were overlooked by others? Were you able to help other people see them? How? Or did they continue to dismiss them? How?
Or have there been times when someone you worked with noticed such an opportunity that you had overlooked, and then you could see it? Can you remember why you missed it?
I described opening up to meta-rationality, and discovering the nebulosity that it reveals, as an emotional shock. Does that fit anything in your experience?
Have you had difficulty in getting credit for meta-rational work? Or seen others have trouble with that? What did you, or they, do about that? Did it work?
Would you consider taking a different job in order to get support for exercising meta-rationality?
A main reason for posting drafts here is to get suggestions for improvement from interested readers.
I would welcome comments at any level of detail: from typos, to improvements in sentence, paragraph, and section-level flow, to better explanations, to overall structural problems.
What is unclear?
What is missing?
What seems wrong, or dubious?
How could I make it more relevant? More practical? More interesting? More fun?
Did you try the exercises? How did that go?