Can you predict the last word in this sentience? It’s not “sentence”. The last word in sentience research is that most of what our brains do is try to predict the signals they’re about to receive, like the words you read on a page. Prediction shapes our perception, which is why that word appeared as “sentence” the first time you read it.
Our brains implement predictive models at multiple levels, from general worldviews down to detailed patterns. When you read a text, your brain first predicts the language and theme based on its model of the publication. That drives predictions of sentence structure based on a model of grammar, then predictions of how each word should be spelled, and finally detailed predictions of how the characters should appear on the page.
Consider:
- This looks weird.
- This looks wiedr.
- Looks weird this.
- Your mom looks weird.
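If it helps to see those levels spelled out mechanically, here’s a toy sketch in Python. The “levels” are hard-coded stand-ins I made up for illustration, not anything the brain literally computes; the sketch just flags which prediction each of the sentences above breaks.

```python
# A toy sketch, not a claim about how the brain works: three made-up predictive
# "levels" report which one each example sentence violates.
KNOWN_SPELLINGS = {"this", "looks", "weird", "your", "mom"}
EXPECTED_SENTENCE = ("this", "looks", "weird")   # the predicted subject-verb-adjective

def violated_level(sentence: str) -> str:
    words = tuple(sentence.lower().rstrip(".").split())
    if any(w not in KNOWN_SPELLINGS for w in words):
        return "spelling"                         # e.g. "wiedr"
    if set(words) == set(EXPECTED_SENTENCE) and words != EXPECTED_SENTENCE:
        return "grammar"                          # right words, wrong order
    if words != EXPECTED_SENTENCE:
        return "theme"                            # well-formed, just not what you expected
    return "none"                                 # everything as predicted

for s in ["This looks weird.", "This looks wiedr.",
          "Looks weird this.", "Your mom looks weird."]:
    print(f"{s:24} -> prediction violated at: {violated_level(s)}")
```

The point of the toy is only that the same input can surprise you at different depths of the hierarchy: a misspelling, a scrambled word order, or a sentence that’s perfectly grammatical but not the one you saw coming.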
Our brains optimize for predicting incoming signals over our entire lifetime. This is achieved in two ways: doing a good job of predicting inputs right now, and learning new models that will allow us to make great predictions in the future. Did reading the post so far feel unpleasantly confusing? That’s because the content was too unpredictable, and contradicted too many of your existing models of how brains work. Did it feel awesomely mind-blowing? That’s the joy of acquiring a new model that offers a condensed explanation of what you already know, and thus a promise of better predictions to come.
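To make that two-part objective concrete, here’s a minimal sketch using a delta-rule learner. The learner is an illustrative choice of mine, not a model from the predictive processing literature: each step it predicts the incoming signal, registers how surprising the signal was, and nudges its model so future predictions improve.

```python
# A minimal sketch of the two-part objective: predict the incoming signal now,
# and update the model so future predictions improve. The delta-rule learner
# is an illustrative choice, not how brains are claimed to do it.
def run_predictor(signals, learning_rate=0.3):
    prediction = 0.0                         # the "model": a single expected value
    for signal in signals:
        error = signal - prediction          # how surprising this input was
        prediction += learning_rate * error  # learn: nudge the model toward the signal
        yield prediction, abs(error)

stream = [5, 5, 5, 5, 9, 9, 9, 9]            # a world that changes halfway through
for step, (pred, surprise) in enumerate(run_predictor(stream), start=1):
    print(f"step {step}: prediction={pred:.2f}, surprise={surprise:.2f}")
```

The surprise spikes when the stream changes and then shrinks as the updated model catches up: confusing at first, satisfying once the new model clicks.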
In either case, you should learn more about the predictive processing paradigm of cognition from this series of articles or this book review; this blogchain is mostly done covering the established science. Instead, we’re going to forge forward irresponsibly and use predictive processing to explain political polarization, identity, war, and adjunct professorship.
Do you think you know what’s coming?