The philosophies of science that I find most compelling, such as Paul Feyerabend’s, tend to argue that methodological anarchy is the characteristic of the most historically impactful science. It is not immediately obvious, but I think this is equivalent to arguing that the best science (and any sort of inquiry conducted with a scientific sensibility) is necessarily permissionless. Anarchic permissionlessness, though, does not equal chaotic lack of structure. It is just that structure emerges from the nature of the research, rather than from generic procedural templates. Investigations always require protocols, even casual ones, but they need not be derived from some abstract high-modernist notion of a uniform “scientific method.”
Do uniformities in the nature of all knowledge justify privileging particular research methods at all? This is an epistemological question to which I have yet to hear a satisfying answer. In some ways it is the most important practical question in the philosophy of science. Whether you think you need permission from an authority figure to do research depends on whether you think certain methods ought to be naturally privileged.
If there are no meaningful uniformities, and all knowledge is contingent and somewhat idiosyncratic, there is arguably no well-posed category deserving of the label of “science” at all, and no fundamental epistemological reason for permission-granting authorities to exist. At most, there might be ethical, risk-management, or resource-scarcity reasons.
If, on the other hand, there are significant uniformities, and a defensible class of privileged methods deserving of the label science, that fact must be demonstrated, not merely assumed casually on the basis of a shallow projection of bureaucratic procedural conceits onto the history of science.
Personally, I land somewhere in the middle of the spectrum of possible answers. I think there is some generality in the nature of knowledge and in the methods for getting at it (and therefore some justification for permission-granting institutions to exist around locally privileged methods), but no universality. We should neither expect every investigation to come packaged with an idiosyncratic, solipsistic epistemology underwriting its validity, nor expect a one-size-fits-all official “scientific” epistemology to underwrite all valid knowledge. We should expect loose affinities among investigations, creating an illegible and shifting landscape of methodologies with constantly contested borders. The philosophy of science is less about discovering the One True Way™ of sciencing, and more about tracking the evolving state of border conflicts between swampy methodological territories.
The older I get, the more I suspect the naive bureaucratic process of “hypothesis, experiment, result” that kids are taught to think of as “science” is at best one territory among many. For most investigations, it is not even wrong, and does far more harm than good. Knowledge does not work that way. In fact, the conservative position is that we do not actually know how knowledge works at all. We just have a collection of investigation hacks of dubious generality that we can learn to tastefully apply in particular domains.
To the extent any officially sanctioned “scientific method” is necessary or sufficient for the production of valid knowledge, it is within the narrow context of what I think of as “ordinary laboratories,” where a certain narrow kind of controlled research can be conducted. Much research, though, happens in what I think of as real-world extraordinary laboratory contexts. These contexts are something like the wilderness of science.
If nobody can really tell you how to science, it’s not really meaningful for anyone to impose scope boundaries on what you can science. At best you can be denied access to some pre-existing knowledge, or to tools you are unable to construct for yourself. The greater the element of the unknown in the research, the truer this is.
If absolutely everything worth knowing is known, authority figures also know the reach of different methods, and can meaningfully police that reach. If absolutely nothing is known, the only method available to a researcher is random experiments with entirely unpredictable effects, and it is hard to dictate scope a priori. Naive regulation attempts to dictate scope regardless of what is known, leading to ill-posed demands such as the currently popular one that AI output be “explainable.” More sophisticated attempts aim to ban entire classes of methods, such as gain-of-function, cloning, or training large machine-learning models, on ethics or risk-management grounds. And of course, people can also object to research on the grounds that it represents a gross misallocation of societal resources (space exploration is a common target of this sort of objection).
The temptation to govern research is understandably strong among those who have to live with the consequences, good or bad, but most knowledge-producing societies have historically understood that some governance must be surrendered, and a corresponding amount of anarchy accepted, for the fruits of research to be available at all. But even where a significant amount of anarchy is officially tolerated, the amount of anarchy actually found in research is very low. Lack of resources explains some of this of course, but is only a small part of the story.
If you set aside the subset of research domains where safety, risk, or resource-scarcity arguments apply, you find vast realms of research being governed where there is neither an epistemological justification nor a practical one. There are also tons of ethically unproblematic, safe, cheap, and ungoverned research questions sitting out there, with nobody paying attention to them.
The thing is, even where there is a degree of wise restraint in the impulse to govern research, the temptation to seek permission can be strong.
Talented but inexperienced researchers tend to focus on what they imagine to be important questions, and insofar as importance is nearly always a matter of social and institutional consensus (with funding on offer further distorting the research impulse), the natural next step is seeking permission from authority figures with visible influence over that consensus. It feels natural to ask their opinion on whether to pursue this or that question, and to defer to their suggestions as though they were binding judgments. This approach can reliably produce good, solid research within mature paradigms, but only within the limits of the vision of governing agents.
Producing imaginative and bold research takes a more methodologically and thematically opinionated approach that starts with what an individual researcher suspects will be personally interesting. If it turns out to be a rich vein rather than a dead-end, importance tends to follow naturally, and institutions and methodological discipline eventually emerge. Even within obviously important domains, such as seeking cures for cancer, interestingness is usually a better starting point.
If you learn to operate in an interestingness-first way, or tend to do so naturally, you will be very open to suggestions and feedback, but won’t seek permission to pursue the curiosities that hook you, or let lack of permission stop you. You will avoid permission-oriented institutions, and seek more laissez-faire contexts for your research. But this does not necessarily mean retreating to the wild margins of non-institutionalized research, or limiting yourself to solitary crackpottery. Structured social contexts aren’t just sources of unwanted permissioning, governance, constraints, and captive resources. They are also sources of stimulation, provocation, catalysis, skeptical scrutiny, and company. And with thoughtful design, these affordances can be made permissionlessly available to all comers. Where governance is sufficiently restrained, and willing to pursue such thoughtful design, the upsides of being in a loosely governed structured research environment far outweigh the costs.
Ironically, it is often autodidacts and independent researchers outside of formal institutions who are most sensitive (though often unconsciously) to the importance of the subtler affordances of structured contexts. One sign is that crackpottery is more often marked by wishful adoption of the cosmetic markers of institutional legitimacy than by wild incoherence. There are more beautifully formatted but subtly wrong crackpot LaTeX documents out there than there are strange websites with Theories of Everything rendered in green Comic Sans.
This brings us to an important practical question. Given that some amount of institutional structure is desirable, how should it be provisioned?
I think there are four important principles worth following.
First: laissez-faire management. In my experience of research institutions (universities and corporate labs), the best research managers strongly resist the temptation to tell researchers what to study or how, even when researchers seem desperate for such micromanagement. They realize that such managerial behavior can quickly harden into a conservative, risk-averse, permission-driven culture. Of course, funding programs require some sense of purpose and direction, but the best research missions manage to establish purpose and direction primarily via inspiration and catalysis rather than strong constraints on scope and methodology.
Second: mindful presence. The best research managers also resist the temptation to withdraw from process entirely, limiting themselves to the role of banker-protector-judge of a chaotic playpen. Instead they are intimately involved, as stewards of the evolving methodological anarchy. They challenge lazy assumptions, constantly test the rigor of unfolding thinking, point out and enable connections, and are generally mindfully present. Perhaps the most important effect of such presence is that it gets researchers to challenge each other continuously, while avoiding self-limiting groupthink. The late Bob Taylor of Xerox PARC famously drove the researchers to greater heights by having them vigorously test each other’s thinking.
Third: context-sensitivity. What counts as a solid research program design in 1960s California is not the same thing as what counts as solid research program design in a 2020s virtual network. The structure has to be thoughtfully entangled with its environment by design, upfront, and allowed to become even more entangled with it over its lifetime.
Fourth: porous boundaries. To the extent you are not certain of your answer to the fundamental question of methodological universality vs. anarchy, you should design an institution to hedge against the possibility that you guessed wrong. Since most traditional research institutions err on the side of being too closed and structured, this generally means being more open to unstructured anarchy than might feel comfortable.
This last point is particularly relevant for permissionless research.
Xerox PARC is often held up as an instance of an institution being “too open” (anyone could wander into the lab and check out the research in progress, as Bill Gates and Steve Jobs famously did), but this is only from the narrow perspective of intellectual property rights and commercial returns. From a broader perspective, it was a good thing for humanity that PARC was so open and permissionless. The computing industry as we know it likely wouldn’t have come into being if it hadn’t been. What’s more, as Henry Chesbrough and others have demonstrated through their work on open innovation, the perception that Xerox “fumbled the future” is wrong. It was actually able to derive a company-resurrecting level of value from the one PARC invention it did commercialize: the laser printer. And the only reason that invention saw the light of day was that the researcher who invented it, Gary Starkweather, moved to PARC from the Xerox Rochester lab (where I worked for 4 years almost a half-century later), which was too closed.
In general, despite naive fears, openness is good for research institutions, so long as there is an element of strategy in the openness, as there actually was at Xerox. Simon Wardley showed that when companies combine openness and a high level of strategic play, they tend to become dominant in their industries. The same is true, mutatis mutandis, of nations and entire cultures. Openness is good.
Laissez-faire management, mindful presence, context-sensitivity, and porous boundaries. That’s the current best formula for how you can be both methodologically anarchic and usefully institutionalized at the same time.
In a virtual, networked environment, where permission cultures have more to do with access to web communities and information channels than with physical buildings and watercooler conversations, the formula leads to modern ideas about open source and working in public, within radically larger research contexts. Where the accessible open-anarchy ecosystem around Xerox PARC was limited to the Bay Area, the accessible open-anarchy ecosystem around an online community of researchers is effectively the whole world. Which means the permissionless research going on outside your porous boundaries is likely more important than what’s going on inside. The relationship between inside and outside begins to asymmetrically favor the outside, to the point that, to many, being on the inside can start to feel like a stifling withdrawal from excitement rather than liberating access to resources and security.
Such people often gravitate to the most public contexts they can survive in, and are unwilling to give up freedom for resources.
To some degree, this describes my thinking in 2011, when I left comfortable institutional research environments (Xerox and before that, academia) behind for a precarious existence based on a blog. So it feels rather ironic that I now find myself managing a formal summer research program (the Summer of Protocols) with institutional backing. It feels like I’m an escaped lunatic who’s been lured back in and handed the keys to the asylum warden’s corner office.
But in a way I dimly saw this in my future over a decade ago. Back around 2010, my boss at Xerox once asked me if I was actually interested in the standard research management track that I was being groomed for. I said no, and at most, I might want to run “something in the ecosystem.” It was an off-the-cuff answer, but now I find myself actually doing something like that. I see my main challenge as somehow letting the permissionless surrounding ecosystem drive the agenda of the weakly permissioned inner world of the program. While I think we’ve done a great job picking people for the program, there are obviously a lot more great people out there who are not in the program, and looping them in somehow is the key to interesting sorts of leveraged impact.
This means figuring out what it means for a research environment to be structured but permissionless.
Two big pieces of the puzzle are, of course, open source and working in public. But there are many more pieces that are as yet unclear. This is a fundamentally new mode of research, at least in this internet-supercharged form, and there’s a lot to figure out. Much of the research management playbook from the golden age of research laboratories and institutes has to be thrown out, but it’s not yet clear what can take its place.
When inexperienced or insecure researchers resist a permissionless research process, it is usually because they unconsciously view the group as a permission-granting peer group rather than as a catalytic resource, and the institutional boundary as a line of defense rather than a blocker of information flow. Experienced researchers simply treat such social contexts as (porous) crucibles suitable for a certain developmental stage in the life of an idea. They reserve the right to decide when to subject an idea to such an environment. Too early, and you risk compromising the imagination and boldness of the idea. Too late and you risk it turning into fragile crackpottery that cannot withstand even sympathetic critical attention, let alone hostile attention.
Under conditions of permissionless methodological anarchy, convergence to a social consensus about what counts as truth is driven by the natural contours of the domain itself. Two independent-minded bird researchers inclined to reject official ideas about Proper Ornithology™ might still converge on similar research methods, such as classifying birds by color and size. Even without formal top-down theories, they might arrive at consensus about subtler things like the importance of classifying toe arrangements and perching behavior. Even researchers widely separated by cultural distance are likely to fruitfully converge where it matters. Ancient Greek and ancient Chinese observers might have named birds differently, but would likely have agreed that some birds fly.
So long as there is open sharing, effective research methods are likely to spread by imitation, and ineffective ones are likely to get discarded. The philosophy underlying such an expectation is simply that external reality exists and people paying disciplined attention to it are likely to converge in interesting and important ways, even if they diverge in other ways and never cohere into an orthodoxy.
This expectation runs counter to the currently popular expectation that a permissionless environment without legitimate authority figures reining in the chaos necessarily leads to a multiverse of solipsistic alt realities. In my experience, this is a vastly overblown fear, attributable more to the impact of malicious and deliberate epistemic-pollution behaviors than to any natural human tendency to solipsism. The presence of bad actors with malicious agendas is a reason to be more careful and skeptical, not more closed and permissioned.
Solipsism is a fun rhetorical posture to adopt (an epistemological LARP), and it is possible to cleverly defend radical subjectivism, but disagreements about whether birds are real or whether the Earth is flat get tedious after a while. Metaphysical arguments about reality versus perception do get at interesting questions sometimes, as in talking about subatomic particles or altered states of consciousness, but generally devolve into word games when we are curious about, say, the behavior of birds.
And on the flip side, when the potential of permissionless research is actually unleashed, without being hamstrung by needless fears, explosive kinds of impact become possible: impacts that are hard or impossible to achieve in highly permissioned contexts.
re “The older I get, the more I suspect the naive bureaucratic process of “hypothesis, experiment, result” that kids are taught to think of as “science” is at best one territory among many. For most investigations, it is not even wrong, and does far more harm than good. Knowledge does not work that way. In fact, the conservative position is that we do not actually know how knowledge works at all. We just have a collection of investigation hacks of dubious generality that we can learn to tastefully apply in particular domains.”
can you say more about alternative ways in which knowledge works? this post does not seem to need to be limited to “science”.
Any good scientist biography seems to showcase idiosyncratic methods.
And yeah, not limited to science. I think the same holds for management and leadership models for example.
Speaking of management models, I and a small team have spent several years within the tech industry seeing first-hand how the need for permission to innovate and experiment, i.e. the concept of higher-ups signing off on work, can have a profound impact on the rate of innovation. Not only the rate: it can also limit the scope of innovation, as you pointed out. Permission to innovate often means innovation can only happen “within the limits of the vision of governing agents.”
One example we faced when working at a global media company: a lower-management stakeholder on a project deemed our innovation, which made it possible to build large-scale applications quickly, a waste of time, saying “I only build one app so I don’t care if it can be used for other apps.”
It’s true that in his eyes the innovations were useless, but viewed against the global business problem they were key innovations.
Another point you shared was about laissez-faire management, and how you find the best research managers strongly resist the temptation to tell researchers what to study or how.
What I’ve experienced is that thinking differently is often seen as a combative affair, so encouraging only the ideas that align with the managers’ own thinking is treated as the strategy for success. This eliminates diversity, and time and time again we have seen examples where a lack of diversity has catastrophic consequences.
We as a group have written a manifesto discussing an idea we call Gardening. Would love to get your and your readers’ thoughts on these ideas.
We too believe in openness and so much of what we do can be found on GitHub.
https://github.com/thousandyears/garden
If there are any questions, I’m happy to reply to any comments you might have; just raise an issue on the GitHub repo.
The quote does make sense. Not from a “philosophy of science” perspective, but via an argument from purity. A scientist would hardly ever theorize, let alone hypothesize and validate.