Divergentism

This entry is part 3 of 3 in the series Lexicon

Divergentism is the idea that people are able to hear each other less as they age, and that information ubiquity paradoxically accelerates this process, so that technologically advancing societies grow more divergentist over historical time scales. The more everybody can know, the less everybody can see or hear each other. I first outlined this idea in a December 2015 post, Can You Hear Me Now? Rather appropriately, that post reads a little weirdly now, and is hard to understand, because the title and core metaphor come from a Verizon ad that was airing on television at the time.

Here is how I described the idea then:

Divergentism is the idea that as individuals grow out into the universe, they diverge from each other in thought-space. This, I argued, is true even if, in absolute terms, the sum of shared beliefs is steadily increasing, because the sum of beliefs that are not shared increases even faster on average. Unfortunately, you are unique, just like everybody else.

The opposed, much more natural idea, is convergentism. In my experience, this is the view most people actually hold:

Most people are convergentists by default. They believe that if reasonable people share an increasing number of explicit beliefs, they must necessarily converge to similar conclusions about most things. A more romantic version rests on the notion of continuously deepening relationships based on unspoken bonds between people. 

In the 6+ years since I first blogged the idea, it has turned into one of my conceptual pillars, so I figured it was time to put down a short, canonical account of it. Here is a whiteboard sketch of the idea. The x-axis is time, interpreted as either historical time or individual lifetime, and the y-axis is something like size of collective belief space. The cone represents the divergence.

The core idea remains the same, but I’ve added two corollaries:

First, the divergentism/convergentism dichotomy applies to societies at large, and individual psyches as well, not just the intersubjective level between atomic individuals.

At the societal level, societies understand each other less and less with increasing information ubiquity, at any level of aggregation you might consider, from packs to nations. You might get random spooky entanglements, but by default, society is divergentist. The social universe expands.

This idea is consistent with one in Hitchhiker’s Guide, that the discovery of the Babel Fish, by removing all translation barriers to communication, sparked an era of bloody wars. But conflict in my theory is merely the precursor to a more profound universal mutual disengagement.

Second, at the sub-individual level, where you consider the non-atomicity of the psyche, things are more complex, and I’m fairly sure the psyche by default is not divergentist. It is convergentist. A divergentist psyche is one characterized by a sort of progressive fragmentation of selfhood. A simple example is when you read something you wrote 10 years ago and it feels like it was written by a stranger. Or when somebody quotes something you wrote at you, and you don’t recognize it.

As a thought experiment, imagine you could have different versions of you, at different ages, all together. How much would you agree about things? How well would you understand each other? How easily could you reach consensus? Say all versions of you needed to pick a restaurant for dinner after the All-Yous conference. Would it be easy or hard? How about a book to read together?

I think I’m a psyche-level divergentist, but I think most people are not. Most people grow more integrated over time, not less. In fact, increasing disaggregation of the psyche is usually treated as a mental illness, though I think there is a healthy way to do it.

So to summarize the 3 laws of divergentism:

  1. Most societies diverge epistemically at all scales of aggregation over historical time scales
  2. Most social graphs get increasingly disconnected over societal time scales
  3. Most individuals get increasingly integrated over a lifetime, but some have divergent psyches

I am most confident about the second assertion.

Divergentism is both an idea you can believe or disbelieve, and a basis for an ideological doctrine (hence the –ism) that you can subscribe to or reject. You could capture both aspects with this simple statement: Humans diverge at all levels of thought-space, from the sub-individual to the species level, and this is a good thing. The doctrine part is the last clause.

If you are a divergentist, you hold that the social-cognitive universe is expanding towards an epistemic heat death of universal solipsism, and you are at peace with this thought. You explain contemporary social phenomena in light of this thought. For example, political polarization is just an anxious resistance to divergence forces. Subculturalization and atomization are a natural consequence of it.

Locally, there may be reversals of this tendency, even in very late historical stages. These manifest as what I call mutualism vortices, which are a bit like islands of low entropy in a universe winding down to a heat death: dissipative structures of shared knowing and meaning. But overall, everything is divergent, and the vortices become progressively rarer, just as there are infinitely many primes, but they thin out as you go up the number line.
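The prime analogy is easy to check numerically. A minimal sketch (my own illustration, not part of the original post) that counts primes in successive decades with a basic sieve; the density thins out roughly like 1/ln(n), without ever hitting zero:

```python
def primes_up_to(n):
    """Basic Sieve of Eratosthenes; returns a boolean primality table."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sieve

# Primes never run out, but their density keeps falling --
# like mutualism vortices in an expanding social universe.
sieve = primes_up_to(10 ** 6)
for lo in (100, 1_000, 10_000, 100_000):
    count = sum(sieve[lo:10 * lo])
    print(f"[{lo:>7}, {10 * lo:>8}): {count:>6} primes, density {count / (9 * lo):.3f}")
```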

Tools

This entry is part 2 of 3 in the series Lexicon

There are two kinds of tools: user-friendly tools, and physics-friendly tools. User-friendly tools wrap a domain around the habits of your mind via a user-experience metaphor, while physics-friendly tools wrap your mind around the phenomenology of a domain via an engineer-experience metaphor. Most real tools are a blend of the two kinds, but with a clear bias. The shape of a hammer is more about inertia and leverage than the geometry of your grip, while the shape of a pencil is more about your hand than about the properties of graphite. The middle tends to produce janky tools that are usable by nobody.

Physics-friendly tools force you to grow in a specific disciplined way, while user-friendly tools save you the trouble of a specific kind of growth and discipline. Whether you use the saved effort to grow somewhere else, or merely grow lazier, is up to you. Most people choose a little of both, and grow more leisured, and we call this empowerment. Using a washing machine is easier than washing clothes by hand, and saves your time and energy. Some of those savings go towards learning newer, cleverer, more fun tools, the rest goes to more TV or Twitter.

Physics-friendly tools feel like real tools, and never let you forget that they exist. But if you grow good enough at wielding them, they allow you to forget that you exist. User-friendly tools feel like alert servants, and never let you forget that you exist. If you grow good enough at wielding them, they allow you to forget that they exist. When a tool allows you to completely forget that you exist, we call it mastery. When it allows you to completely forget the tool exists, we call it luxury.

The nature of a tool can be understood in terms of three key properties that locate it in a three-dimensional space. One we have already encountered: physics-friendliness to user-friendliness. The other two dimensions are praxis and poiesis.

The praxis dimension determines how a tool is situated in its environment. The poiesis dimension determines its intrinsic tendencies.

Shell scripting is high praxis, low poiesis. Shell scripts live in the wide world, naturally aware of everything from the local computer’s capabilities to the entire internet. Scripting in a highly sandboxed language like Matlab is low praxis, high poiesis. Matlab scripts are naturally aware of nothing except the little IDE world that contains them.

The shape of the range of a tool in this 3-dimensional space might be called its gamut, by analogy to the color profiles of devices like monitors and printers in 3-dimensional colorspaces (which are variously defined in terms of user-friendly variables like hue/saturation/value, or their more physics-friendly cousins like the CIELAB “L*a*b*” color space).

What we think of as the “medium of the message” is a function of this gamut. Extremely specialized tools, such as, say, wire strippers, have a tiny gamut, but are very precisely matched to their function. They are the equivalent of precise Pantone shades used by color professionals. Other tools, with very large gamuts, like hammers, are not very precisely matched to any particular function, but are roughly useful in almost any functional context.
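To make the geometry concrete, here is a toy sketch of a tool’s gamut as a box-shaped region in the three-dimensional tool space. All the coordinates are invented for illustration; nothing here is a measurement of anything:

```python
from dataclasses import dataclass

@dataclass
class Tool:
    """A tool as a box in the 3D tool space; each axis is an interval
    on a 0-1 scale. The intervals below are made-up illustrations."""
    name: str
    friendliness: tuple  # physics-friendly (0) to user-friendly (1)
    praxis: tuple        # sandboxed (0) to worldly (1)
    poiesis: tuple       # weak (0) to strong (1) intrinsic tendencies

    def gamut(self) -> float:
        """Volume of the box: the tool's 'gamut' in this toy model."""
        return ((self.friendliness[1] - self.friendliness[0])
                * (self.praxis[1] - self.praxis[0])
                * (self.poiesis[1] - self.poiesis[0]))

hammer = Tool("hammer", (0.1, 0.6), (0.1, 0.9), (0.1, 0.9))
strippers = Tool("wire strippers", (0.5, 0.6), (0.4, 0.6), (0.1, 0.2))
print(hammer.gamut(), strippers.gamut())  # sprawling box vs. tiny precise one
```

The only point of the toy model is the contrast: wire strippers occupy a small, precisely placed box, while hammers sprawl across most of the space.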

I am bad at learning new physics-friendly tools. In my entire life, I’ve really only learned three to depths that could be called professional-level (but still well short of self-dissolving mastery): Matlab, LaTeX, and WordPress. Matlab is high poiesis, low praxis. WordPress is the opposite. LaTeX is somewhere in the middle. I’m much better at learning user-friendly tools, but then, so is everybody, and what makes an engineer worth the title is their ability to pick up physics-friendly tools quickly and deeply.

I’ve learned dozens of physics-friendly tools in a very shallow way, up to what might be called hello-world literacy. Deep enough to demystify the nature of the tool, and develop a very rough appreciation of its gamut, but not enough to do anything useful with it. I can do this very quickly, but run into my limits equally quickly. This makes me a decent technology manager and consultant, but not a very good engineer.

In the last couple of years, through the pandemic, I self-consciously tried to change this, and learned several physics-friendly tools in deeper ways. For a while, I was calling myself a “temporarily embarrassed 10x engineer” on my twitter profile, a joke reference to a John Steinbeck line that was mostly lost on people. A more honest assessment is that I’m a 0.1x engineer who might make it to 0.5x with effort.

Most of the tools I learned through the pandemic were tools I’d previously learned to hello-world level, while a few, such as crimping and 3d printing, were entirely new to me. Here is a partial list:

  1. CAD (with OnShape)
  2. Soldering
  3. Electronics prototyping
  4. Embedded programming (with Arduinos)
  5. 3d printer use
  6. Working with a Dremel tool
  7. Python
  8. Animation with Procreate

Right now, I’m trying to pick up a few more — PyTorch (a machine learning framework in Python), 3d design/animation with Blender, and the basics of Solidity, the programming language for Ethereum. I hope to get to amateur levels of competence in at least a dozen tools before I turn 50, spanning perhaps 2-3 different technological stacks and associated tool chains. I have a sort of nominal goal for this middle-aged tool-learning frenzy: converging towards “garage robotics” capabilities. But I’m not very hung up on how quickly I get to the full range of skills needed to build interesting robots (and yes, my current conception of robots includes machine learning and blockchain aspects). It’s going to take me a while to acquire a garage anyway.

This is uncomfortable territory for me because I’m by nature a tool-minimalist. Getting good at even one tool feels like an exhausting achievement for me. That’s why, despite being educated as an engineer, I am primarily a writer. Writing typically requires you to work with only a single, simple toolchain. If you’re good enough, you can limit yourself to just pen and paper, and other people will trip over each other trying to do all the rest for you, like formatting, editing, picking a good font, designing a good cover, getting the right PDF format done, and so forth. I’m not that good, so I have to work with more of the writing toolchain. Fortunately, WordPress empowered writers enough that you can get 90% of the value of a writing life with about 10% of the toolchain mastery effort that old-school print publishing called for, and I am perfectly happy to lazily give up on that last 10%.

So why try to gain competence at dozens of tools? So many that you have to think in terms of “stacks” and “toolchains” and worry about complicated matters like architecture and design strategy? The reason is simply that doing more complex things like building robots takes a higher minimum level of tooling complexity. We do not live in a very user-friendly universe, but we do live in a fairly physics-friendly one. So you need something like a minimum-viable toolchain to do a given thing.

There’s fundamental-limit phenomenology around minimum-viable tooling. A machine that flies has to have a certain minimal complexity, and building one will take tooling of a corresponding level of minimal complexity. You won’t build an airplane with just a screwdriver and a hammer, like in the cartoons you see in Ikea manuals. An episode of Futurama has a gag based on this idea: Professor Farnsworth buys a particle accelerator from Ikea, and its assembly manual calls for a screwdriver, a hammer, and a robot like Bender.

Periodically, there is a bout of enthusiasm in the technology world for getting past the current limits of minimum-viable tooling, and so you get somewhat faddish movements, like the no-code/low-code movements, that move complexity around without fundamentally reducing it. Often, such efforts even lead to tools that are overall harder to use. Even generally lazy people like me, who eagerly await the convenience of more user-friendly tools, end up preferring more “geeky” tools in such cases. This is something like the tool equivalent of a popular science book making an idea much harder to understand by refusing to include even basic middle-school mathematics. So instead of a simple equation like a+b=c, you get pages of impenetrable prose.

Perhaps premature user-friendliness is the root of all toolchain jankiness.

Fundamentally reducing the complexity of tooling required to do a thing requires understanding the thing itself better. Simpler, more user-friendly tooling is the result of improved understanding, not increased concern for human comfort and convenience. You have to get more engineering-friendly to generate such improved understandings before you can get more user-friendly with what you learn. Complex tooling usually gets worse before it gets better.

If you try to skip advancing knowledge, you end up with tools that try to be more user-friendly by becoming less physics-friendly, and the entire experience degrades.

Animation Sublimation

I’ve decided to teach myself the basics of animation this year. Writing hasn’t been as much fun lately but drawing is suddenly becoming more fun. This is probably some sort of sublimation response to writer’s block making me mildly stabby and grumpy 🤬🔪🔪(“I write, therefore I am”).

I’m starting with the rudimentary capabilities of the $10 Procreate app, and am posting gifs approximately daily on Twitter. My initial goal is to make 100 simple animations in the form of gifs a few seconds long. I’ve made 8 so far. You can follow my 100-gif-adventure on this thread. Once I get to 100, hopefully in a few months, I’ll probably upgrade to a more expensive tool and try to make longer things. Maybe 10 one-minute shorts will be the next goal. Here is one of my early attempts with an actual story.

I’ve always harbored animation ambitions, and idle dreams of making a Futurama or Rick and Morty style animated comedy science fiction show, but the tooling is finally getting good enough that individuals can do stuff. One can dream :)

Storytelling — Narrative Wet Bulb Temperature

This entry is part 6 of 12 in the series Narrativium

Telling jokes at a funeral is hard. Even entertaining an urge to do so is perhaps not a decent thing to do. At best, you might get away with telling a poignantly humorous anecdote about the deceased as part of a eulogy. The context of a funeral is simply not appropriate for joke-telling, and it’s not just a matter of social norms and performance expectations of grieving solemnity. People simply wouldn’t be in the mood.

Even if you were a comedian who left instructions for your funeral to be conducted in the form of a comedy festival, if people actually liked you, they’d likely find it somewhat difficult to get into the spirit of the idea.

Jokes at a funeral are a simple example of what we might call poor narrative-context fit (NCF). Not all stories can be told at all times with equal impact. And here I mean any performance with a narrative structure, not just actual fiction; the idea applies to nonfiction works too.

What drives narrative-context fit? I don’t have a general answer, but I have one for a special case: storytelling in a time of generalized crisis, such as we are living through now.

It is no secret that it’s been hard to tell compelling stories in the past few years. Television and cinema have turned into a wasteland of reboots and universe extensions. Thought leadership storytelling has descended from the smarmy heights of TED talks to the barely readable op-ed derps of today. It’s not that there are no good stories being told, but compared to, say, 2000-2017, we’re definitely in a tough market.

A clue about why this is hard can be found in Robert McKee’s description of narrative suspense:

“As pieces of exposition slip out of dialogue and into the background awareness of the reader or audience member, her curiosity reaches ahead with both hands to grab fistfuls of the future to pull her through the telling. She learns what she needs to know when she needs to know it, but she’s never consciously aware of being told anything, because what she learns compels her to look ahead.”

And

Suspense is “curiosity charged with empathy…” Suspense focuses the reader/audience by flooding the mind with emotionally tinged questions that hook and hold attention: “What’s going to happen next?” “What’ll happen after that?” “What will the protagonist do? Feel?”

Suspense is a “what happens next” curiosity you care about that anchors your attention to a period of time leading up to potential resolution. Or to put it another way, suspense literally creates your sense of future time. If you are not feeling suspense about how something in the future might turn out, in a sense, you’re not feeling the future at all. Your consciousness is concentrated in the past and present only, and not in a good way.

No suspense, no story, no future.

Now, extend this logic to the general background of suspense in the environment that a story has to compete with. We do not consume stories against a blank canvas backdrop. Whatever is going on in the world — a pandemic, a space telescope on a fraught deployment journey, a critical election — shapes the suspensefulness of life in general.

In fact, we might frame a hypothesis, which I call the suspense blindness hypothesis: you can’t see past the next big identity-altering thing in your future that’s keeping you in suspense. The most acutely felt “what happens next” thing.

Note that this is a spectator point of view. Suspense only exists if you can’t do much to change the uncertain outcome. You can only watch. If you can act, you’re in the story, not watching it unfold from the sidelines.

When there is a high level of suspense in the general background, it is harder to tell stories because you have to beat that level of suspense. It gets especially hard if you have to tell a story that extends far beyond the temporal horizon created by the suspense blindness. If everybody is waiting for the outcome of a critical election in a year, it’s hard to tell a story spanning the next decade. And this applies equally to a TED talk painting (say) a vision of progress over the next decade, and to a fictional story that plays out over the next decade.

Some of this is merely technical difficulty dealing with storytelling in a forking future. If there is no vague consensus around the future being a certain way, it’s hard to tell stories set in that future. It’s a bit like having to choose a foreground paint color that works against many different background colors, ranging from black to white.

Your only technical recourse is to jump far enough out into the future — a century say — that the stark forking divergences of today can be assumed to have been sorted out. But then the storytelling loses access to the emotional energies of the present.

I came up with a weird metaphor for thinking about this — narrative wet-bulb temperature.

The wet-bulb temperature is a combined measure of heat and humidity that tracks the body’s ability to cool itself. When it rises above around 35C, the body can no longer cool itself through sweating. This is one of the many ways in which climate change is a more serious threat than you might think, since it can drive dangerously high wet-bulb temperatures.
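For the non-metaphoric quantity, there is a standard empirical shortcut: Stull’s (2011) approximation estimates wet-bulb temperature from dry-bulb temperature and relative humidity alone. A minimal sketch, added here for reference (the original post doesn’t include it):

```python
import math

def wet_bulb(T, RH):
    """Stull (2011) empirical approximation to wet-bulb temperature.
    T is dry-bulb temperature in deg C, RH is relative humidity in
    percent; valid roughly for RH between 5% and 99%."""
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH)
            - math.atan(RH - 1.676331)
            + 0.00391838 * RH ** 1.5 * math.atan(0.023101 * RH)
            - 4.686035)

# At 40 C and 75% humidity, the wet-bulb temperature is already
# near the ~35 C limit beyond which sweating stops working.
print(round(wet_bulb(40, 75), 1))
```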

Here’s the metaphor: we tell ourselves stories to regulate the amount of narrative tension we feel in life generally. Felt suspense is one measure of this tension (though it’s a rich mess of many contributing textures, such as cringe, horror, fear, amusement, and mystification). We metaphorically “cool” or “warm” ourselves through stories (where “temperature” maps to a vector of attributes). Like thermoregulation, narrative regulation is a function of context.

Narrative wet-bulb temperature is a measure of how well narrative regulation can work in a given zeitgeist. Beyond some metaphoric equivalent of 35C, perhaps it becomes impossible to tell stories. Perhaps the appropriate scale is a weirdness scale, measured in Harambes. Perhaps above 35H, storytelling is psychophysically impossible.

As with climate, we have some ability to control our environments through the narrative equivalent of air-conditioning. Personal climate control, achieved by gatekeeping information aggressively, can limit exposure to the stresses of the general outdoor zeitgeist (this idea is central to the book I’m writing). But to the extent storytelling is a public act, such “air conditioned” stories can only be heard by those who share your particular cozy climate-controlled headspace.

We appear to have collectively accepted this particular tradeoff, in that we have collectively abandoned public spaces (and by extension, truly public storytelling) and retreated to the cozyweb.

Random Acts of X

The phrasal template random acts of ________ is clearly one of my favorites. I seem to have used it 20+ times on Twitter in the last few years. Here are the actual instances:

  1. random acts of ontology
  2. Random Acts of Web3ing
  3. random acts of policy vandalism
  4. random acts of templing [as in, treating something as a temple]
  5. random acts of patchy, pointillist, impressionist worldbuilding
  6. Random acts of philosophy in the “air game” and random acts of tinkering in the “ground game”
  7. Random Acts of Magical Thinking
  8. random acts of tariffs
  9. random acts of sciencing
  10. random acts of art production
  11. random acts of revenue-generation
  12. Random acts of petrichor
  13. random acts of strategy
  14. random acts of cash-flow management
  15. random acts of consulting
  16. Random Acts of System Integration (RASI)
  17. Random Acts of Product Development
  18. Random Acts of Workflow Improvement and Unnecessary Optionality
  19. random acts of solutionism
  20. Random Acts of Mildly Profitable or Break-Even Teaching
  21. random acts of twitter strategy
  22. Random Acts of Overt Marketing
  23. random acts of garam-masala-ing

At one point I tweeted a prompt inviting people to fill in the blank, and got a whole bunch of responses, some clever, others not so clever.

iirc, the very first example I encountered, sometime in the 90s I think, was “random acts of marketing.” That stuck with me because it seemed like such an apt description of the marketing efforts of most companies.

Random acts of X are a regime of behavior that you might call “bullshit agency” — some fraction of it works, but you don’t know, and to a certain extent don’t care, which fraction. Hence the famous John Wanamaker quote: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”

Random acts of X happen when you act opportunistically, based on circumstantial possibilities and very little thought, and with indifference to whether or not your actions make any sort of larger strategic sense. The randomness in what the immediate circumstances allow or encourage you to do translates into randomness in what you actually end up doing. Noise in, noise out.

This does not mean that the opposite of “random acts of X” is strategy. You can have “random acts of strategy” too, and in fact most strategy fits that description. A CEO goes off on a leadership retreat with a few buddies, enjoys good food, good wine, and whiteboard sessions, and returns with a nice mind-map and strategy notes… and it’s back to the quagmire of operations within a day. That’s random acts of strategy.

Random acts of X regimes are attractive because they allow you to act in very low energy regimes, with low intelligence. And we default to such regimes as a slightly superior alternative to being frozen in inaction and doing nothing at all. The leap of faith underlying random acts of x-ing is belief in a benevolent universe where doing something, anything, beats doing nothing.

Reviewing my tweets, I notice that I use the phrasal template more often to refer to my own behaviors than to comment on others’ behaviors. The template has no particular stable valence for me. Sometimes random-acts-of-x-ing is good, sometimes it is bad.

But looking at my (over)use of the template, I do wonder: what does it take to move such behavior into a non-random regime, without overwhelming it with the artifacts of deterministic planning, and destroying what little energy there is?

The best guide I’ve found so far is Charles E. Lindblom’s classic 1959 management article, The Science of Muddling Through. It is one of the articles I recommend most often to consulting clients (I found it via John Kay’s excellent book, Obliquity).

Muddling through is the act of adding just enough determinism to a default random-acts-of-x situation to get it to make some sort of roughly right directional progress. In Lindblom’s account, muddling through involves a “method of successive limited comparisons” as opposed to a “rational comprehensive” approach.

Muddling through is both a better term, and a better concept, than its degenerate modern descendants like “agile.” The salient feature of Lindblom’s account is that he doesn’t claim muddling through is a “theory” but rather a manner of doing that “greatly reduces or eliminates reliance on theory.”

Still, whether you call it agile and pretend you have a theory, or call it muddling through and admit that you don’t, the problem remains: how do you prevent this regime of behavior from either slipping into useless randomness or getting swamped by the imposition of energy-draining theorizing?

One part of the answer is, as Karl Weick argued, to give up on theory, but not on theorizing. The idea that “what theory is not, theorizing is” has been the linchpin of my consulting work for a decade now, but I’ve never quite clarified the essence of the distinction to myself.

Weick’s idea is similar in spirit to the Eisenhower line that plans are nothing, but planning is everything; or Frederick Brooks’ idea that you should “plan to throw one away” (and Joel Spolsky’s counter-argument that you should not throw one away).

I think the common thread here is that your history of engagement with a problem or question is important, but the specific conceptual scaffoldings you used in generating that history are not. The data matters, the algorithm you used to generate it doesn’t. Be the data, not the algorithm.

This, then, is the solution to the perils of the “random acts of X” regime — better memory. Turn the memoryless random acts of X into memoryful not-so-random acts of X.

This assumes that memory by itself has something like a gradient to it; a historical logic that can bias the context of random-acts-of-x-ing enough that your actions acquire a drift, a direction of muddling through.

This direction is not a True North. It is not a teleological potential induced by a goal, but an etiological potential induced by a history (or more generally, data). A True Past, perhaps. The test of truth is that it creates a coherent future despite the randomness of circumstantial forces. Such an etiological potential is, however, merely necessary, not sufficient. To get past historical determinism, the True Past must only be allowed to frame the random acts of x-ing in the present, not fully specify them. And if your random acts are not capable of blowing up the historical context that contains them, they are not random enough.

I think of it as “fuck around and find out, but never forget.”
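Here is a toy model of that slogan (my own construction, not anything from Lindblom or Weick): a sequence of random acts where, optionally, each act is also biased by the running mean of past acts. Memory alone, with no goal, is enough to induce drift:

```python
import random

def acts(n, memory=0.0, seed=0):
    """Toy contrast between memoryless and memoryful random acts.
    Each act is uniform noise in [-1, 1]; with memory > 0 it is also
    nudged by the running mean of past acts -- a drift induced by
    history (a 'True Past'), not by a goal. Parameters are invented
    for illustration only."""
    rng = random.Random(seed)
    history = []
    for _ in range(n):
        drift = memory * sum(history) / len(history) if history else 0.0
        history.append(rng.uniform(-1, 1) + drift)
    return sum(history)  # net directional progress after n acts

print("memoryless:", round(acts(1000, memory=0.0), 1))
print("memoryful: ", round(acts(1000, memory=0.9), 1))
```

The memoryful walk still forgets nothing and plans nothing; early accidents simply accumulate into a direction of muddling through.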

2021 Ribbonfarm Extended Universe Annual Roundup

This entry is part 15 of 17 in the series Annual Roundups

There is no getting around it: I basically took the year off from this blog, not just in the sense that I wrote much less here than usual (29 posts), but in the sense that all the posts were short ones with self-consciously modest ambitions. In fact, most posts were actively anti-ambitious, since I carefully avoided writing anything with viral potential. The blog basically went underground. For the first time ever, and by design, there was not even a single post that could be called a hit, let alone a viral one.

A big reason was: I had nothing to say in 2021 in blog mode.

And a big reason for that was that the medium of blogging itself is not sure what it wants to say anymore. We are in a liminal passage with blogging, where the medium has no message.

So it’s not just me. It feels like the entire blogosphere (what’s left of it) took the year off to figure out a new identity — if one is even possible — in a world overrun by email newsletters, Twitter threads, weird notebook-gardens on static sites or public notebook apps, and the latest challenger: NFT-fied essays.

All those new media seem to have clear ideas of what they are, or what they want to be when they grow up. But this aging medium doesn’t. And while I have a presence in all those younger media, they don’t yet feel substantial enough to serve as main acts, the way blogging has for so long.

Perhaps there is no main-act medium in the future. Perhaps we are witnessing the birth of a glorious new polycentric media landscape, where the blogosphere will be eaten not by any one successor, but by a collection of media within which blogs will merely be a sort of First Uncle to the rest. The medium through which you say embarrassing things at Thanksgiving, with all the other media cringing. Maybe, just as every unix shell command turned into a unicorn tech company, every kind of once-blog-like content will now be its own medium. Listicles became Twitter, photoblogs became Instagram, and so on.

The entire blogosphere is going through perhaps its most significant existential crisis since the invention of blogging 22 years ago. And I’ve been at this for 15 of those years — this is the 15th annual roundup! Ironically, every couple of years through that period, there has been a round of discussion on “the death of blogging,” but now that it seems to be actually happening, there isn’t an active conversation around it.

If this is the end, it’s a whimper rather than a bang.

One sign that it is real: this is the second roundup I’ve felt compelled to title “extended universe,” because my publishing presence is now simply too scattered for the blog alone to represent it.

But I rather hope not. I think there’s a chance it’s going to be a Doctor Who style regeneration instead, and if so, I’m here for it. If blogs must die, so be it. If there’s a fighting chance of a regeneration, the fight will be worthwhile.

On to the roundup, with embarrassing-uncle commentary on the brave new world.

[Read more…]

Thinking in OODA Loops

I’ve been meaning to turn my OODA loop workshop (which I’ve done formally/informally for corporate audiences for 5+ years) into an online course for years, but never got around to it. So I decided to just publish the main slide deck.

Here’s the link.

This deck is 72 slides, and takes me about 2 hours to cover. It actually began as an informal talk using index cards at the 2012 Boyd and Beyond conference at Quantico, to a hardcore Boydian crowd, so it’s survived that vetting.

The two times I’ve done the full, day-long formal version for large groups, I’ve paired a morning presentation/Q&A session with an afternoon of small group exercises applying the ideas to specific problems the group is facing. More commonly, I tend to just share the deck with consulting clients who want to apply OODA to their leadership challenges. We discuss 1:1 after they’ve reviewed it, and begin applying it in our work together.

In the spirit of John Boyd, whose OG briefing slides are freely available on the web (highly recommended), I’m releasing these slides publicly without any specified licenses, restrictions, or guarantees. There are a lot of random Google images and screenshots from documents in the slides, so use at your own risk.

Feel free to use these slides as part of your own efforts to introduce others to OODA thinking, including as part of paid courses. You can also modify/augment/remix them as you like. Attribution appreciated, but not expected.

Read on, for some notes/guidance on how to design a workshop incorporating this material.

[Read more…]

Jumping into Web3

This entry is part 1 of 1 in the series Into the Pluriverse

I’m kicking off a new blogchain to journal my explorations of Web3: the strange world of NFTs (non-fungible tokens), DAOs (decentralized autonomous organizations), domain names ending in .eth, and so forth. I wasn’t going to get into it quite yet, but events in the last week dumped me unceremoniously into the deep end.

I’m chronicling the play-by-play in an extended Twitter thread. There is also now an NFTs page for ribbonfarm. I’ve already sold two (on mirror.xyz and on OpenSea.io).

As I write this, a 24-hour auction for my third NFT is underway on foundation.app. I’m thinking of it as my first serious minting, since it’s a piece a lot of effort went into: the ribbonfarm map of 2016 (if you’re interested in bidding, you’ll need the MetaMask wallet extension and some ether).

I’m still pretty down in the weeds and haven’t yet begun to form coherent big picture mental models of what’s going on. But I did make this little diagram to try and explain what’s going on to myself… and then made an NFT out of it.

I’ll hopefully have more interesting things to share after I have some time to reflect on and make sense of the rather hectic first week.

Beyond the fun game of making money selling artificially scarce digital objects, the broader point of diving in for me is that it’s clear Web3 is going to drastically transform the way the internet works at very deep levels. Not just in the sense of deeply integrating economic mechanisms within the infrastructure, but also in terms of how content is created, distributed, and presented. If this develops as it promises to, Web2 (what used to be called Web 2.0) activities like blogging and writing newsletters are going to be utterly transformed. So this is as much a discovery journey to figure out the future of ribbonfarm as it is a dive into an interesting new technology.

The highlights of my first week (details in the Twitter thread):

  • Minted and sold 2 NFTs, participated in a 3rd via a minority stake
  • Got myself a couple of .eth domains, including ribbonfarm.eth — which led to an unexpected windfall
  • Set up a Gnosis multi-sig safe for the Yak Collective, and helped kick off plans to turn it into a DAO
  • Entered something called the $WRITE token race to try and win a token for the Yak Collective to start a Web3 publication on mirror.xyz (you can help us get one by voting tomorrow, Wednesday, Nov 10)
  • Signed the Declaration of Interdependence for Cyberspace, my first crypto-signed petition
  • Presumably pissed off about 20% of my Twitter following going by this poll (Web3 is a very polarizing topic)

There’s a lot going on, as I’m discovering. Every hour I spend exploring this, I discover more new things, at every level from esoteric technical things to subtle cultural things.

If you, like me, have been thinking that being roughly familiar with the cryptocurrency tech scene of a few years ago means you “get” most of what’s going on here, you’re wrong. The leap between the 2016-17 state of the art and this is dramatic. There’s a great deal more to understand and wrap your head around.

I’ll update this blogchain with summaries and highlight views as I go along, but the devil really is in the details on this one, so if you’re interested in following along without getting lost, I recommend tracking my twitter thread too.

Ghost Protocols

This entry is part 1 of 3 in the series Lexicon

A ghost protocol is a pattern of interactions between two parties wherein one party pretends the other does not exist. A simple example is the “silent treatment” pattern we all learn as kids. In highly entangled family life, the silent treatment is not possible to sustain for very long, but in looser friendship circles, it is both practical and useful to be able to ghost people indefinitely. Arguably, in the hyperconnected and decentered age of social media, the ability to ghost people at an individual level is a practical necessity, and not necessarily cruel. People have enough social optionality and legal protections now that not being recognized by a particular person or group, even a very powerful one, is not as big a deal as it once was.

At the other end of the spectrum of complexity of ghosted states is the condition of officially disavowed spies, as in the eponymous Mission Impossible movie. I don’t know if “ghost protocol” is a real term of art in the intelligence world, but it’s got a nice ring to it, so I’ll take it. One of my favorite shows, Burn Notice, is set within a ghost protocol situation.

If you pretend a person or entire group doesn’t exist, and they’re real, they don’t go away of course. As Philip K. Dick said, reality is that which doesn’t go away when you stop believing in it.

So you need ways of dealing with live people who are dead to you, and preventing them from getting in your way, without acknowledging their existence. When you put some thought and structure around those ways, you’ve got a ghost protocol.

[Read more…]

MJD 59,514

This entry is part 21 of 21 in the series Captain's Log

This Captain’s Log blogchain has unintentionally turned into an experiment in memory and identity. The initial idea of doing a blogchain without meaningful headlines or fixed themes — partly inspired by twitter and messenger/Slack/Discord modes of writing — was partly laziness. I was tired of thinking up sticky and evocative headlines, plus I was getting wary of, and burned-out by, the unconsciously clickbaity nature of headlined longform.

I couldn’t remember anything of what I’d written here, so I just went back and read the whole series, all 20 parts, and it’s already slipped away from my mind again. Names are extraordinarily strong memory anchors, and without them we barely have textual memories at all. I can recall the gist of many posts written over a decade ago given just the name or a core meme, but for this blogchain, even having re-read it five minutes ago, I couldn’t tell you what it was about. The flip side is, it wasn’t actively painful to reread the way a lot of my old stuff is (which is why I rarely re-read). In some ways it was kinda surprising and interesting to review. The lack of names means a lack of fixed mental models of what posts were about. It’s weird to be able to “cold read” my own posts. It’s like simulated Alzheimer’s or something, and it’s almost scary. It would be terrible to go through life with this level of non-recall.

The amnesiac effect of lack of names is reinforced by the lack of narrative, which is a function of lack of theme (or more concretely, lack of memetic cores). Over the 20 parts so far, I’ve wandered all over the place, with no centripetal force driving towards coherence. The parts were also far enough apart that there was no inertia from being in the same headspace between parts. It’s been a random walk of my mind.

This feels weird. It’s easy to remember at least a few highlights of themed blogchains, even if they lack a proper narrative throughline. I have a (very) vague sense of the ideas I’ve covered in the Mediocratopia or Elderblog Sutra blogchains for instance. Even if there isn’t a necessary order and sequence to the writing, a themed series grows via a web of association. So if you recall one thing, you remember some other things.

But order matters too. We remember things more easily when there is a natural and necessary order to them. This was reinforced for me in this blogchain by a bug. The series plugin I use screwed up and indexed several of the posts out of order, which took me 5 minutes to fix. But reading the posts out of order made zero difference. Since they are not related, either by causation or thematic association, order is neither necessary nor useful. It’s like how chess players have uncanny recall of meaningful board positions that can actually occur in a game, but not of boards with randomly placed pieces. It’s more than a mnemonic effect though. There is intrinsically higher randomness to a record of unnamed thoughts. The only order here is that induced by me and the world getting older.

This all seems like downsides. Recall is far worse, coherence is far worse. For the reader, the readability is far worse. Is there any upside to writing in this way? I’m not sure. It does seem to tap into a sort of atemporal textual subconscious. It also makes for a very passive mode of writing. A name is a boundary that asserts a certain level of active selfhood. A theme is a sort of grain to the interior contents. A narrative is a sequence to the contents. Each of the three elements acts as a filter to what part of the outside world makes it into the writing. When you take down all three, the writing occupies something like an open space where ideas and thoughts can criss-cross willy-nilly. It is homeless writing, with all the attendant unraveling and disintegration of the bodily envelope (I wrote about this in a paywalled post on the Ribbonfarm Studio newsletter).

A named idea space is a space with a wall. A named and themed idea space is a striated space with a wall (in the Deleuze and Guattari sense). A named, themed, and narrativized space is a journey through an arborescence. A nameless, themeless, storyless space develops in a rhizomatic way, reflecting the knots and crooks of the environment. It is not just homeless writing, it is writing where there’s nobody home. It’s the textual equivalent of the “nobody home” affect of far-gone, mentally unravelled homeless people.

Another data point for this effect. I just finished a paper notebook I started just before the pandemic, so it’s taken me about 2 years to fill up. Back in grad school, 20 years ago, I used to be very diligent with paper notes. There was a metacognitive process to it. I’d summarize every session’s notes, and keep a running table of contents. I’d progressively summarize every dozen or so sessions. My notes were easy and useful to review. Now I’m lazy, and I don’t do anything of that sort. It’s just an uncurated stream of consciousness. With just a few pages left in the notebook, I tried to go back and reconstruct a table of contents (thankfully I was at least dating the start pages of each session), but it was too messy, hard, and useless, so I gave up. Progressive summarization and ToC-ing are only useful and possible when you do them in near real time. Naming and headlining work only when you name and headline as you work. So what I have with this latest filled notebook is just one big undifferentiated idea soup that’s nearly impossible to review. It’s worse than Dumbledore’s pensieve. It’s something of a memory black hole. It is recorded, but not in a usefully reviewable way. But arguably, not doing the disciplined thing led to different notes being laid down. I thought and externalized thoughts I would otherwise not have thought at all. I can’t prove this, but it feels true. And while it’s harder to review, perhaps the process of writing made it more transformative?

About the only thing I’ve been able to do with both this blogchain and the paper notebook, in terms of review, is go back (with a red pen or the editor) and underline key terms/phrases, and maybe tabulate them elsewhere into an index. I can trace the evolution of my thought through the index phrases. These nameless memories are indexable, but not amenable to structuring beyond that. It’s the part of your mind that you can Google but not map (this is the real “googling yourself”). These are demon notebooks. They’re dull to review now, but in a few years, perhaps, they will be interesting to revisit as a record of what I was thinking through the pandemic. Maybe latent themes will pop.
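In code terms, such an index is just a bare mapping from phrase to locations, with no structure above it. A toy sketch (the pages and red-pen phrases below are hypothetical, purely for illustration): Googleable, not mappable.

```python
from collections import defaultdict

def index_phrases(pages, phrases):
    """Build a bare index: phrase -> list of page numbers it appears on.
    Searchable, but it implies no map: no themes, no narrative, no
    order beyond page number."""
    index = defaultdict(list)
    for pageno, text in enumerate(pages, start=1):
        for phrase in phrases:
            if phrase.lower() in text.lower():
                index[phrase].append(pageno)
    return dict(index)

# Hypothetical journal pages and underlined phrases.
pages = [
    "loose thoughts on ghost protocols and the silent treatment",
    "divergentism again; the social universe keeps expanding",
    "ghost protocols as a practical necessity of the cozyweb",
]
print(index_phrases(pages, ["ghost protocols", "divergentism"]))
```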

Twitter of course is the emperor of such demon notebooks, though shared with others. I’ve taken to calling the nameless structures that emerge in my tweeting threadthulhus. These blog and paper demon notebooks though, are not threadthulhus. They are more compact and socially isolated. They are lumps of textual dark matter. They are pre-social, more primitive. They lack the identity imposed by mutualism.

With both this blogchain and my unreviewable demon paper notebook, I think I’ve kinda explored what names/headlines, target themes, and narratives do in writing: they alienate you from your own mind by allowing you to create a legible map of your thoughts as you think. Anything you structure with a name/theme/narrative (the alienation triad) is a thing aside from yourself that you can distance from yourself, point to as an object, let go of, and even meaningfully sell or give away to others. Alienation is packaging for separation. Anything that you don’t do those things to remains a part of you. This is not a bad thing. Not everything you can think is ready to be weaned from your mind. Even if you’re willing to share it with the world, it does not mean you are able to separate it from yourself. Just because you make second brains doesn’t mean first brains disappear. Exploring them is a distinct activity.

This sort of writing is arguably indexical writing. Writing as self-authorship. What doesn’t have its own name, theme, and narrative is part of you. In fact, the only thing holding it all together is the fact that you’re writing it. This is a self-reinforcing effect. The act of writing in that mode encourages the least detachable thoughts in your head to emerge and make themselves available to hold and be.

There is a paradox here. The most indexical writing is also the most open-to-the-world writing since it lacks filters. So it is both a self-authoring process and a self-dissolution process. What comes out is both most truly you, and not you at all. Self-authorship and self-dissolution are two sides of the same coin. Being is unbecoming. To be homeless is for there to be nobody home.

You could argue that it is the process of giving names, boundaries, and thematic and narrative structure to thoughts to externalize them that is a highly unnatural and strange process. Like mutilating your brain by carving out chunks of it to push out. I am not sentimental enough about the writing process to actually feel that way, but I kinda get now what angsty poets must feel.

I think this is the key difference between diary-writing or journaling and “writing.” The lack of traumatic separation and self-alienating packaging.

This experiment hasn’t yet run its course, and I might keep it going indefinitely, but I think I finally understand the point of it, and why I unconsciously wanted to do it and why I feel it helps the other writing.

Where do you go from this kind of writing? Well, if you continue down this course — and I already see this happening a bit — you head towards increasingly commodity language. You seek to avoid evocative turns of phrase, stylistic flourishes, and individual signature elements — anything that asserts identity. You seek to make the writing unindexable, not just unmappable. You seek to go beyond individual self-authorship and channel a larger vibe or mood. Or maybe you try to fragment your own mind into a bunch of authorly tulpas. Or maybe you mind-meld with GPT-3 and write in some sort of transhuman words-from-nowhere mode. Ultimately, you get to various sorts of automatic writing. I don’t necessarily want to go there, but it’s interesting to see that that’s where this path leads. This is the death of the author as an authorial stance, as opposed to a critical readerly stance. It’s a direction that naturally ends in a sort of textual suicide. At the level I’m playing it, it’s merely a sort of extreme sport. Textual base-jumping, perhaps. But this direction has strong tailwinds. Increasingly large amounts of public text in the world form a featureless mass that is grist for machine-learning mills, with increasingly no identity of its own.

You might say the natural end point of this kind of writing is when it becomes indistinguishable from its GPT-3 extrapolations and interpolations.

Or going the other way, there are potential experiments in radical namefulness. Everything is uniquely identifiable, memorable, evocative, and nameable, and has a true name. Narrative coherence is as strong as possible. Thematic structure and causal flow is as tight as possible. Un-machine-learnable texts. I’m not sure that kind of text is even possible.