Mike is a 2013 blogging resident visiting us from his home blog Omniorthogonal.
“I want to see you not through the Machine,” said Kuno. “I want to speak to you not through the wearisome Machine.”
– E M Forster, The Machine Stops
Martin Buber (1878-1965) was a Jewish philosopher best known for integrating traditional Judaic thought with existentialism and other modern influences. His I and Thou is one of those little books that can utterly transform your worldview in just a few pages. It has some of the concentrated linguistic power of poetry or mathematics. Given its mystical religious overtones, that makes it feel somewhat dangerous to me — I can’t entirely embrace what it is saying, but fear that its linguistic spell might overpower my usual defenses.
Introduction
The book turns on the idea that there are different stances an individual can take, and that these stances have correlates in the deep structure of language. In Buber’s scheme, there are two “basic words” a person can speak: I-it, a word and resulting world in which an individual interacts with and experiences individual objects, and I-you, a word that creates the world of relation. (Buber’s translator, Walter Kaufmann, takes some pains to explain that I-you is a much better translation of the original German Ich und Du; “thou” is much too formal a term, suitable for addressing God perhaps, but not an intimate human being).
Buber’s dualistic scheme is oversimplified, of course. Walter Kaufmann provides an entertainingly skeptical prologue, pointing out that there are many more stances available to man than just two, and that it is the oldest trick in the world for philosophers to reduce the available options to two and then promote one of them while denigrating the other:
The straight philosophers tend to celebrate one of the two worlds and deprecate the other. The literary tradition is less Manichean… Ich und Du stands somewhere between the literary and philosophical traditions. Buber’s “It” owes much to matter and appearance, to phenomena and representation, nature and means. Buber’s “You” is the heir of mind, reality, spirit, and will, and his I-You sometimes has an air of Dionysian ecstasy. Even if I-it is not disparaged, nobody can fail to notice that I-You is celebrated.
– Kaufmann, p 18
Buber doesn’t view the I-It world as evil in itself, and acknowledges that it is necessary to sustain life, not something to be scorned. But it is clear that his heart, his aim, his values, all are in the other world of I-you. He says that as humanity progressed through the advancement of material civilization, the It-world was in danger of displacing the world of relation entirely, leaving hollowed-out people incapable of true relationships. “When man lets it have its way, the relentlessly growing It-world grows over him like weeds” (p96). The cultural phenomena that he noticed in the 1920s have only been taken to new extremes since then.
Holism and Soulism
The You encounters me by grace — it cannot be found by seeking. But that I speak the basic word to it is a deed of my whole being, is my essential deed…
The basic word I-You can be spoken only with one’s whole being. The concentration and fusion into a whole being can never be accomplished by me, can never be accomplished without me. I require a You to become; becoming I, I say You.
All actual life is encounter.
Buber’s viewpoint is both holistic and religious. As such, it raises my reductionist hackles. The scientist in me doesn’t want to hear of some level of reality that can’t be broken down into simpler interacting parts. What is this “whole being” that he speaks of? I’m skeptical that it exists, although perhaps that just reflects poorly on me – whole beings can see other whole beings, perhaps I am merely partial, deficient in some wholiness.
In his holism there is a lot of resonance between Buber and the metaphysics of Christopher Alexander (of Pattern Language fame), who is also convinced that “wholeness” is fundamental to the structure of reality. Both of these writers I find maddeningly tantalizing and fascinating, even as I struggle to accept their worldview, which is so opposed to what I have always been taught and largely believe. My reaction to Buber, Alexander, and almost all religion is much the same: torn between skepticism and an insistent, nagging feeling that maybe there is something there after all, something of fundamental importance that must be attended to.
Reductionism may be a true and proper approach to the scientific understanding of the universe, but it fails as a guide through actual life. We don’t interact with other people by analyzing them into components. We all may be composed of physical processes and independent drives, but it simply doesn’t work to relate to other human beings as physical processes. There is clearly something wrong with that approach. I can’t say exactly what that wrongness is, but Buber may provide some clues:
The life of a human being does not exist merely in the sphere of goal-directed verbs. It does not consist merely of activities that have something for their object.
I perceive something. I feel something. I imagine something. I want something. I sense something. I think something. The life of a human being does not consist merely of all this and its like.
All this and its like is the basis of the realm of It.
But the realm of You has another basis.
Whoever says You does not have something; he has nothing. But he stands in relation.
— I and Thou (p54)
Buber’s approach here (and it is really the only mode of religious writing that works for me at all) is apophatic: he describes his mystical (though embodied) ideal by all the things it is not: goal-directed, perceiving or sensing particular objects, possession. It’s something that is not any of those things, though what it is remains essentially elusive.
Buber vs the fragmentary self
Buber was a religious man who took the reality of Thou very seriously. I am not, or not very, and consider the Thou more as a useful fiction. But where I find myself in harmony with Buber is in his quasi-algebraic analysis of the relation between grammar, metaphysical stances, and their parts and symmetries. If “Thou” is a fiction, then “I” is a fiction as well. They take form and tremble on the edge of reality together; they partake of a similar sense of the sacred. Fictional does not mean unreal or trivial or dismissible.
I like to put Buber’s viewpoint up against those of psychologists who emphasize the disunity of the self (Freud, Marvin Minsky, George Ainslie). Their work exposes and theorizes the fragmentary nature of mind, how it is composed of parts that are often in conflict with each other, how such conflicts are settled, and how a largely fictional unitary self is constructed out of these warring mechanisms. Partly they are motivated by scientific curiosity, but there is also a therapeutic motivation. Most of the time the machinery works so well that we aren’t aware of it, but the disordered mind exposes its mechanisms. Ainslie based his work on a theory of addiction, the most obvious case of a mind in conflict with itself.
At first glance these thinkers seem to be polar opposites of Buber. His focus is on the kind of relationship that can only be expressed by a whole being, while they seem to deny that there even is such a thing. What seems whole is actually composed of warring parts; there is nothing solid there to enter into an I-you relationship.
Ainslie’s theory of the self holds that the main reason we have one at all is to mediate between our different urges, and in particular to deal with the fact that our preferences are not consistent over time, and that we need to make bargains and treaties with future versions of ourselves. Without going too much into the details of his theory (which I confess I only barely grasp), this results in a sort of recursive, chaotic process that both requires and produces unpredictability, in part because predictable rewards lead to satiety:
…when a puzzle becomes familiar your mind leaps ahead to the ending, dissipating the suspense and poorly repaying the cost of attending to it in the first place. … you then have to search for new puzzles or gamble on finding more than just new things of the same kind. Durable occasions must either (1) change so that they remain novel (new problems, new faces, new plots, new decor, or, as the style of puzzle becomes familiar, new styles) or (2) be intricate or subtle enough to defy total comprehension. This is the quality a work of art must have to save it from the obsolescence of fashion, and maybe too the quality needed by an enduring personal relationship.
— Ainslie, Breakdown of Will p169
This suggests to me a connection between the mechanical, componentized view of mind and Buber’s holistic view. The process of self-creation is inherently illegible to itself and to others, in order to avoid the kinds of predictability that Ainslie is talking about. Selves resist being characterized in instrumental, reductive terms, and so demand to be understood in a different frame. Essentially we are forced to become unpredictable to ourselves and each other, and this dynamic points the way towards a different cognitive style, one suitable for comprehending the deliberately incomprehensible.
Admittedly this is still a far cry from Buber’s mystical vision, where all boundaries are erased. It doesn’t make the self any more real, but it suggests that insofar as the self is real, it exists in a different way than the mere mechanisms and objects of everyday cognition.
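For the technically inclined, the engine behind Ainslie’s “preferences are not consistent over time” is hyperbolic discounting: a reward of size A available after delay D is valued at roughly A/(1 + kD). Here is a minimal sketch of the preference reversal that curve produces; the discount rate k and the reward sizes and timings are my own illustrative choices, not figures from Ainslie.

```python
# A minimal sketch of the preference reversal produced by hyperbolic discounting.
# The discount rate k and the reward sizes/timings below are illustrative choices,
# not figures from Ainslie.

def hyperbolic_value(amount, delay, k=1.0):
    """Present value of a reward `amount` arriving after `delay` time units."""
    return amount / (1.0 + k * delay)

SMALL, SMALL_AT = 5.0, 5    # smaller-sooner reward, arrives at t = 5
LARGE, LARGE_AT = 10.0, 9   # larger-later reward, arrives at t = 9

for now in (0, 4):
    v_small = hyperbolic_value(SMALL, SMALL_AT - now)
    v_large = hyperbolic_value(LARGE, LARGE_AT - now)
    winner = "larger-later" if v_large > v_small else "smaller-sooner"
    print(f"at t={now}: small={v_small:.2f}, large={v_large:.2f} -> prefer {winner}")

# At t=0 the larger-later reward is preferred; at t=4, with the small reward
# imminent, the ranking flips -- the time-inconsistency that, on Ainslie's
# account, the self exists to bargain with.
```

From far away the larger-later reward wins; as the smaller-sooner reward draws near, the ranking flips, which is exactly the kind of intertemporal conflict Ainslie’s bargaining self exists to manage.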
The Asperger’s-tinged world of the present
Computer people (myself included) tend to have a streak of Asperger’s, and one of its symptoms is difficulty in modeling other people and social situations. In psychological jargon, we have something wrong with our “theory of mind module”. This interpretation of Asperger’s is controversial, because similar failures of mechanism are often blamed for much worse things, like sociopathy. (For the purposes of this post I’m going to assume that there is in fact an epidemic of mild and undiagnosed Aspergerishness in the computer world, although there is certainly controversy about whether this is a legitimate medical fact. But the nature of nerd culture has unmistakable affinities with the actual condition, whether it is real or simply metaphorical.)
The I-you stance that Buber proposes as the most important thing there is, which does not come easily to anyone, is even harder for us. We tend to take comfort in abstraction and systems rather than the presence of others. But only the most severely afflicted check out of social life entirely. All the high-functioning denizens of Silicon Valley nerddom are solving the problem of being a human at the same time they are solving their problems of how to structure the digital world.
Some learn to compensate for their deficiencies and act more or less normal; others manage to turn their idiosyncrasies into fame and fortune. Temple Grandin, who should know, has diagnosed Facebook founder Mark Zuckerberg as an aspie, which doesn’t surprise me a bit. It would be fascinating to have a real ethnographic/psychological study of Silicon Valley cultural practices that could reveal how the Aspie style contributes to everyday interaction. My unsupported intuition is that it does, but in ways that aren’t obvious, since most people here don’t have it but are influenced by those who do.
I worry, though, that as software eats the world, the Aspergerish qualities of nerd culture are crudely invading the sacred space of relationship. This is most obvious with Facebook, with its stupidly reductive notion of friendship and clumsy manipulation of the social fabric. I am ambivalent about how important this is. On the one hand, actual friendship, if it is worth anything at all, is not going to be greatly reduced because Facebook abuses the term. On the other hand, I do believe communication media have the capacity to radically reshape thought, so in fact it is a big deal if large swathes of human interaction are mediated by a socially-retarded private corporation.
What would Martin Buber think of Facebook and digital culture more generally? I can’t presume to say, but I’m guessing the quasi-public, performative nature of it would rub him the wrong way. That’s not to say that actual presence and actual relationship can’t flow through electronic media, just as they can work through earlier forms of writing. But Buber’s model of relationship demands at least a temporary exclusivity of attention, and that is something the Internet certainly does not encourage. It provides us with vast quantities of information, amusement, and distraction, taking the process of I-It encroachment that Buber talked about to a far greater level than he could have imagined.
Despite this, I find myself (to my own surprise) with a greater faith in the human spirit than many of the critics of technology, like Sherry Turkle, Jaron Lanier, or E M Forster. We aren’t going to be reduced to the I-It by our machines. They may be remaking our social fabric, but I’m pretty sure human presence is strong enough to survive and find new ways to relate. This faith may be related to the quasi-aspie experience – if we have to struggle a bit more than average to achieve our humanity, then that experience of struggle can help the rest of humanity break through the electronic miasma.
Buber himself was something of a difficult character who had trouble with everyday relationships. Maybe he himself had some aspie qualities; maybe he could develop his theory of the I-You because it was not as natural to him as it is to the neurotypical. Perhaps he was like the rare fish who developed a theory of water, or like Moses, who led his people to and saw the promised land, but was not permitted to enter.
““thou” is much too formal a term, suitable for addressing God perhaps”
Ironically, “thou” was drastically less formal than “you” for most of their existence; it was chosen in the King James Bible as the suggested term for addressing God, to show intimacy and familiarity.
Shakespeare plays with this, having Katherine resolutely say “you” in the face of a flurry of “thou”s and “Kate”s.
I think there are ways to move from a very asperg-ish state of mind to a very holistic one; here’s an example, although it may end up being longer than your post:
I quite like seeking the “you” in “it”, which is something that strikes me as an intrinsic part of the process of design, even iterative and evolutionary design:
Design can be seen when considering the anthropology of objects, the way that an object becomes part of a life, or the autobiography of objects, in the way it forms an expression of the designer.
Design isn’t really a conversation, but I have a strong suspicion that all effective design actually acts as a response; you have some impression of the user’s life that allows you to make the mental leap, or you design for yourself and people like you.
This is similar to but not identical to the kind of aesthetic wholeness of an elegant-looking piece of kit, or of the various mathematical patterns of classical architecture. This is because these kinds of wholeness can be created without any reference to interaction, to use, except for the minimal interaction of superficial observation. In his earlier work Christopher Alexander talked about the idea of design as fit, as the lock-and-key structural combination of a situation and a piece of design. Another way I like to think of it is as a gearbox; a successful design attaches to its various relationships and brings their arbitrary connections a sense of order and internal coherence. In other words, rather than form following function, form integrates its functions. This is similar to the way that the human being, in seeking meaning, redefines the arbitrariness of their relationships as a distinct personality. We combine the different fields we are interested in, the relationships with people we have got to know, and the attachments of life into something that not only accommodates them without contradiction but actually translates between and connects them, that creates them into a broader force and motion, and combines them structurally into some kind of embracing pattern.
Now so far we have a picture of design as a model, as the capacity of a set of relationships to be transformed into sense by the intentional action of the designer, analogous to his own creation of self-identity, but this still leaves aside the real I-you thing, the capacity of wholes to speak to wholes.
That comes in repeating the action that has created the object: you have put your wholeness into its wholeness, or more prosaically, you have put your processes of self-integration into the integration of something else. Not in reproduction of yourself, precisely, but in recreating the same principle of growing, integrating and balancing in your activity of finding the combination of connections of relationships that makes it something distinct as an object and a kind of justification and naturalisation of its arbitrary conditions.
If this act can be repeated by the object itself, if it can be created such that those people with whom it has a relationship find themselves assisted into coming into coherence with themselves, then the design has succeeded in a kind of meta-design goal. Christopher Alexander slightly grandly referred to the goal of architecture as healing space: that these things create coherence such that the coherence of their surroundings is also enhanced. This is a basic generosity founded upon an assumed common ground of ontogeny, creating spaces that feel more alive and have their own coherences even as they make us feel more alive and “centred”. At the end of the day, any designer who just satisfies themselves with making something that makes sense in its context, without trying to actually enrich the life of the user, is missing out!
This is a roundabout way of saying that I think you can see the wholeness of individuals as analogous to the wholeness sought in a well-founded design process; although it might be true to talk about the various components of a life that are sometimes not brought into a complete unity, into coherent relationships, it is also possible to talk about the properties of successful composition, of the creation of a certain structure and harmony that has its own personality.
I think you are onto something when putting Buber and Alexander together, in that they share a fundamentally similar idea of the value of the shared project of life: that creativity, self-creation and assistance of others all fall into the same framework. This is not about disruptive technology, about the tightly controlled rocket-ship startup more in sync with its own ideas than anyone else is, riding that advantage all the way, but about finding problem spaces that are a total mess in terms of clashing paradigms and bringing some kind of coherence. It’s about peace, and the atomic step of community.
Facebook in contrast is about pure delivery of change, as something to be reacted to. Although on the one hand the designers talk about allowing people to share information, their focus has been on pushing forward the construction of the social graph. This is fundamentally not about helping people share their self-perceptions and integrated stories, but about creating new kinds of connection that others will find ways to integrate. Facebook is so disruptive to social etiquette because its objective is to rewire and reconnect; it is something to be adapted to and to be integrated, not something to help integrate.
This is fundamentally because Facebook is lacking in design, and this is actually one of its strengths; although I deeply respect the view of design as healing, as absorption of difference without collapse, and as the creation of wholes, I also see a lot of value in raw engineering and hacking that runs great plow lines through social structures, or creates extra-dimensional connections between them, like wormholes. This is mainly because disruptive tech, although immediately damaging to community, could conceivably allow it to make optimisations and improvements that communities’ own structure could not, trapped as it is in a present of mutual recognition. Revolutionary technology has power for the ways it leaps blindly beyond foresight, not for the extent to which it embodies vision.
Although Facebook has some deeply unpleasant elements in its attitude to people, which we can see embodied in the trivialising frameworks it uses to mediate interactions, as long as they put more focus into committing code than into settling out usage patterns, they should continue to create features that destabilise their own frameworks, even as they do other people’s.
And this comment is far too long already! But there is another inversion: any revolutionary technology will come to the point where it starts to be reintegrated into society, when it starts becoming part of these activities of being-towards-community. This is parallel but not identical to the technology becoming a utility, especially as it happens in jumps; it’s the world of random conversations across tables in trains. At this point the technology provider should get as much out of the way of these processes as they can, clear out the tripping hazards so that their structure can be inhabited by these actual authentic interactions. They are not the designer of these relationships, the users are, so they should take cues from them and not overplay their own design work. The extent to which these are non-destructively accommodated within the coherence of the broader design of the system will, I think, show its longer-term humanity.
Regarding the “you” in “it”, you might like Richard P. Gabriel’s essay “Designed as Designer”: http://www.dreamsongs.org/DesignedAsDesigner.html
Josh — lots of deep ideas in that! I think they deserve a post of their own somewhere, or several. The image of Facebook as a sort of anti-design engine, trampling through existing social structures and creating new ones…that is very good.
I share your sense that there is something in the field of design or the act of designing that bridges the gap between humans and the world of objects, in both practical ways and at its best (for lack of a better word) spiritual ways. At least from my engineer-nerd perspective, good designers seem to be in touch with something higher than mere problem-solving, something that unites aesthetics with ethics and basic pragmatism. I’d like to be able to do that.
The intersection of design-thinking with software has an interesting history. For a while there was a movement to try to found a new field called “software design”, which largely petered out, although Richard Gabriel’s work that Robin cited is in that vein. There are lots of UX and graphic designers now, of course, but they tend to focus on the more superficial aspects of software, not the deep structure.
Mike –
This is an incredible piece for me, especially coming at the moment it does in my life, having spent most of it trying to define and relay the importance of this very thing – a world that is reducing itself down to I-‘IT’ and thus losing the purpose of life as humanity’s spiritual/emotional/social growth of I-You. And the easy damage and disintegration of all life that results, inadvertently though logically for those of us who believe in the I-You webbing of all life, from that I-IT slide away from what most defines our “humanity”.
Regardless of how uncomfortable the concepts Buber brings up are for you (and understandably so, when you think of religious history in our world), I have to applaud the way you tried to understand Buber, even despite admitting your potential “Aspergerishness” that may indeed be blocking some of that relational methodology of understanding. I have personally come across this in all aspects of my deconstructing the very same dynamics in trying to understand and amplify healthy “human” learning. One of the most profound of those understandings, as a teacher of some 30+ years now, is how this “Aspergerishness” has grown in those around me, especially the males around me who are most able to think easily in logical/rational/linear, computer-machine-like ways. Reductionist ways. Noticeably exacerbated and exaggerated by the increasingly machine-model pedagogies (the disease of “STEM” in educational thinking) and the worldview that seems to maniacally grow with this untethered model, when it comes to relational understanding.
The I-You does indeed become reduced to I-IT, and those most entranced in that mathematical maze still think they see human solutions to complex human problems without ever dealing with human beings in any deep sense. The discomfort of those thus often too “chaotic” and certainly inchoate relationships becomes anathema to those with no real skill sets, or natural protections, to negotiate their way to the “wholeness” of the I-You. Especially because, historically, the I-You has many times been so badly done and developed, indeed in irrational and non-healthy ways. Adversarially. But thus the math problem gets “solved” leaving out all the important variables, and yet claims to be the solution to the far more difficult interpersonal and social complex problems — like the concept of a ruler who nerve-gases his own people and the moral dilemma of the rest of the world standing by trying to “calculate” the risks of acting within their stated moral precepts. Let’s let the computers tell us, and thus rationalize away our own I-You responsibility.
It’s an excellent piece, Mike. Unfortunately I feel it loses some sense of its own important conclusions by deciding,
“We aren’t going to be reduced to the I-It by our machines. They may be remaking our social fabric, but I’m pretty sure human presence is strong enough to survive and find new ways to relate. This faith may be related to the quasi-aspie experience – if we have to struggle a bit more than average to achieve our humanity, then that experience of struggle can help the rest of humanity break through the electronic miasma.”
I love the concept of surviving a trend that actually might be neurologically reorganizing our brains in the exact opposite direction of I-You and towards I-IT, by those very people who have themselves most moved that way, but it isn’t what our world and the research tend to show. Even in my classrooms, I have noticed such an increase in autism spectrum disorders and particularly in “Aspergerishness-rewarding” pedagogy and IT reinforcement, which result in real and painful struggle (and when young, actual tears of frustration in their eyes that soon learn to “dry up”) to try to reach back and be able to reconnect with that fading emotional resonance that defines that connection to another, or all others. I like your cheerful thinking that it is so possible, considering that narrowing direction of facility. My one hope is that the true reduction of irrational and explosive unhealthy emotionality in Aspies I’ve known (and I think I am married to one :-) reduces the “noise” to help them get to that clearer new I-You space, but the work of reteaching their courage to attempt reaching out and actually “touching” with open skin (emotional vulnerability) is painful to see and almost too frustrating to work on eliciting.
Too often, the separation has been done so well, it literally just doesn’t “read” on their monitors. The “connection” has been blown.
That direction, if continued in children at the time of their most critical emotional learning and neuron development, may indeed limit the chances of ever understanding I-You, at all?
I like to hope I am so wrong on this. Your post, Venkat’s sometimes too, give me hope that there might be some chance. But so far, for me, the machines (I-‘IT’) are definitely winning.
Doesn’t mean I’ll give up, since I am one of the ones still able to see those young drying-up-but-still-there frustration-tears at the losing of the possibility of I-You, no matter the ultimately non-commensurate “IT” rewards in our society.
Thanks for your heartfelt comments.
I am interested in your view that the push for more STEM in education is damaging. I’ve always felt that it was a good thing – and still do, but maybe it is pushing out other parts of the curriculum or warping the nature of education? That is, I think it is a very good thing to be able to think in the reductionistic, technical, rationalistic, mechanical mode, but I wouldn’t want that to be anybody’s exclusive take on the world. My impression is that the majority of people could use more rather than less rationalism – but my own little nerdish corner of the world has the opposite problem. You are a teacher and thus on the front lines of where the culture is going; I would be interested to hear more from your perspective.
I am (somewhat uncharacteristically) optimistic because people have been predicting that technology will destroy our humanity for thousands of years. They are partly right! It does disrupt lives and social relationships. But I think that the human spirit is robust enough to survive computers, just as it survived industrialism, mass media, and the invention of movable type. Not that there won’t be a lot of churn and turmoil in the process.
I guess my views of technology are somewhat parallel to Marx’s view of capitalism – while critiquing it, he wasn’t in opposition to it, far from it, in fact he thought it was an inevitable stage of human development, producing much good and much bad, and eventually it would undo itself through its own internal contradictions and lead to the next stage of being. Our present technology destroys and rebuilds human relationships on a basic level, so “all that is holy is profaned; all that is solid melts into air”. Marx’s preferred mode of progress – class-based mass movements and revolution – has been tried and found not to work that well, so we need some other way to learn to live with technology and guide it into the future.
This is ultimately a much longer and more involved conversation for me, Mike, one it seems I have often. I think that’s because on the face of it, everyone has clearly and literally “bought into” the idea that technology has always been progress in a positive direction, or is so primarily positive that it is a rarely examined “default” good. But in fact, though there are many examples of this being true historically, it could easily be argued to the contrary that we are on the brink of species extinction because of this very technology: not only destructive technologies (weapons) that were the driver of much of the rest, but environmental disruption caused by our thoughtless technologies, many designed to enhance human greed over human health.
That could possibly be mitigated by some of what Josh W. writes above and you noted in respect to design: “This is a basic generosity founded upon an assumed common ground of ontogeny, creating spaces that feel more alive and have their own coherences even as they make us feel more alive and “centred”.” But that isn’t the way the dominant “thinking” has gone to date.
My argument is about how you create and educate for that dominant thinking: what areas of cognitive/rational reinforcement you overemphasize and reward, educating for that kind of thinking above a more integrated rational/emotional coherency, which hides some of the implicit and yet un-excavated value systems reinforced by that technology and its chosen “usage”. This adds to an unhealthy imbalance, leading to more “deadening” and destructive technologies and lifestyles instead (far more I-IT than towards I-You), which subsequently limit our ability to imagine our way out of that dominant direction. That is potentially, and actually physiologically, “pruning” the more holistic and uniquely human skill sets of empathy, compassion and emotional intelligence that lead to more integrated world views and thus healthier results and possibilities.
One interesting take on this, as “Adam” notes in his comment to Venkatesh’s Archetypes post, is Iain McGilchrist’s book “The Master and His Emissary: The Divided Brain and the Making of the Western World”.
But my concern is in what I actually see happening to people who most interact with and use these technologies, especially the young boys in my classroom. Philip Zimbardo (of the famous Stanford Prison Experiment) writes and talks about this and gives an interesting TED talk on “The Demise of Guys” that relates to what most concerns me here: http://www.ted.com/talks/zimchallenge.html.
I’ve written about the bigger issues raised by this narrowed understanding of human development (of course STEM cannot be abandoned, but not understanding how multi-disciplinary and not just scientific thinking is necessary is ultimately damaging to all thinking and feeling) for the UN Chronicle here: http://www.un.org/wcm/content/site/chronicle/home/archive/issues2012/dialogueamongcivilizations/bringinghumanpassionintosustainabilityeducation.
The argument that “people have been predicting that technology will destroy our humanity for thousands of years” is true, and now we actually have the technology to do that. Completely. My real argument is that the development of the technology and the over-dominant STEM focus since the Enlightenment have, educationally, far outweighed the development of our human understanding of and connection to each other, which is so critical to balance it for ultimate human health and sustainability and to avoid that “destruction”.
It’s that imbalance that has reached such an unsustainable point, and thus the call for more “STEM” thinking, reinforced again by STEM technology, is not imho going to help us get there without understanding what it not only doesn’t teach, but actively damages in the human brain: the I-You, the more intuitive and relational basis of understanding and of true connecting. It simply isn’t valued or developed by that narrow construct alone, and thus is not at all what I, as an educator, think we should be emphasizing to the exclusion of that more needed balance of arts, humanities and philosophy, to name just a crucial few.
Hope that helps, Mike.
Just a brief note in this complex dispute. I spot a little over-eagerness in the quasi-mystical ambition of bringing whole-being, or unity of emotion and cognition, to people via education. One aims at improving nature in its natural operations. Asperger’s becomes a pars pro toto of a certain failure of our nature, and the aspergerization of society is understood as a call for a return to human nature and human values.
But isn’t education the exact opposite? And since religion has been mentioned, isn’t one, if not the only, admirable trait of religion left to us that it counters our natural habits in exercises like silent meditation for hours or following a rigid ritual path through life? This is not life-style design for inhabitants of Mediocristan who wish to be Artisans. In the modern liberal arts curriculum only a few traces are left, and they are associated with mathematics and philosophy, the disciplines which go against the grain. Math counters our associative memory and our capability to make logical jumps and perceptive interpolations, and philosophy counters our ordinary, conversational language. They are means to slow us down; they deconstruct our natural capabilities and re-synthesize them through work.
Kay –
I didn’t want to not respond to this as I think it is an important point, but am not sure I understood it completely?
“But isn’t education the exact opposite and since religion has been mentioned, isn’t one, if not the only admirable trait of religion left to us, that it counters our natural habits in exercises like e.g. silent meditation for hours or following a rigid ritual path through life?”
My argument is that although historically logical separations seemed to have been necessary between the “sciences” and “religion” to allow the sciences to develop along valid and necessarily rational ways, the “split” was overly emphasized for the development of the “alternative priesthood” as the only “truth”. It’s this either/or dichotomy that defined education so strictly as the learning of the “rational” (see below), and then defined education as the “only” way to develop one’s intelligence, a necessarily alienating constriction of the rest of that intelligence.
I don’t believe that definition of education is complete enough to continue to serve us well in solving the truly complex, completely human problems we face in sustaining our world or species. I believe some form of marriage between rational and emotional intelligences is necessary to transform that either/or thinking. Both are needed, together, and thus in education, the primary means of developing human problem-solving capacity in our current world.
Economics is the field of emotional manipulation and rational calculus, whereas politics is the sphere of emotional calculus and rationalist manipulation. Technology has become the area of religious expectations and magical experiences, backed by science and engineering, which gave us a new world of wonders. I already see all those living Yin-Yang diagrams floating around us. I wouldn’t say those express happy marriages, but who is happily married? They are frosty and psychotic and paradoxical, just like the empathic US tyranny and warmongering we experienced the last couple of days from the rationally as well as emotionally intelligent Kerry/Obama, which was stopped by the commitment to democratic processes in the last instance – the true act of coming to reason.
So what I don’t see is steady growth or sustenance and harmony with the environment but the wild ride of an emotional and empathic animal rationale at the edge of chaos, which expresses its passions in each and any direction. In contrast Buddhism has been organized around a different dyadic pair, placing passion/suffering on the one side and contemplation on the other, with the latter being advocated as a radical alternative: the inner chaos has to go, no matter how productive and powerful it made us, the amour fou of the human being with itself. I-Thou and I-It become yet another denotation of our opposite drives and passions. From the point of view of a master of contemplation they might easily fall under the verdict of what Venkat called introspective incompetence in the thinking cap article of his Tempo blog.
The dyad (ratio | emotion) is not enough. We need at least a triad (ratio | emotion | contemplation) where one suspends the other two. A ribbonfarm-style puzzle would be to ask if we could imagine a 4th item which suspends the other three. Anyway, it gets long.
Maybe our perspectives are different. When I was in school, technology was not emphasized very much; it was still the era when technology obsession was a nerdy, underground subculture of bright kids who had to educate themselves. The technological takeover of culture was still nascent. I found Ted Nelson’s Computer Lib, which was building connections between technology and humanistic pursuits, but also in a sort of underground vein. I know the exact moment this subculture started to become mainstream – when Wired magazine published their first issue, and suddenly technology could be hip and trendy. But I was in grad school by then.
Anyway, I’m not sure how this cultural trend filters down into education nowadays. I have worked with people who are trying to introduce more technology in education (Seymour Papert, Mitch Resnick), but I think their hearts are in the right place – they want to do this in a way that teaches students to be more in control of the technology, that helps them be thinkers and makers. My work there touched explicitly on some of these issues (e.g., how children doing programming think about agency).
I didn’t stay in the education technology field, for a bunch of reasons, one of which is that I didn’t feel qualified to tell teachers what they should do. But I still feel some connection with the constructionist approach. One way of saying this: the problem is not technology, but our relationship with technology, and the best cure is to give students the tools so they become masters of the technology rather than the inverse.
I’m not sure that has anything directly to do with the question of I/It vs I/thou. The problem there seems to be institutional, not technological. I find schools to be inherently dehumanizing institutions; they still have the flavor of industrial mass-production. Good teachers break through the institutional itness, establish genuine personal relationships with their students and their subjects, and make them come alive. I’ve had a few of these excellent teachers, my kids have also had a few, but they are the exception. Technology is probably going to make this problem worse, since the trend there is to use it as a substitute for teachers (as in MOOCs). That’s an utterly different kind of tech in education than the Papertian approach, but probably a much more common one.
Mike, there is a lot that is interesting in your comment above, because of that different “perspective” and history. Your age is perhaps closer to mine than I thought, though you are still younger.
I remember trying hard to understand this “nerd” culture when I was first introduced to it, most obviously at Berkeley. Then, I found it a fascinating but definitely “outsider” culture, and sometimes even an “outsider take” on many accepted conventional ways of thinking, and was drawn to it as I was to the hard sciences, because I found so much in the “humanities” that was disappointingly far less “human” and certainly less revolutionary than I had hoped in coming to Berkeley.
Also at the same time, and not as coincidentally as people seem to think just after the end of the anti-war movement, the “hard sciences” were being promoted as the best promise for our “future” societies, along with the belief that only through them would the best solutions to these more complex “relational” problems inevitably be “discovered” (and that is why I do indeed think the I/it as opposed to I/thou construction of problem-solving is very much relevant to these different ways of framing the problems themselves). As for me, for my entire time there I straddled these two very different ways of thinking – scientific vs. liberal arts – as I wanted to learn more about the technological mind and what it had to offer in relation to these complex, knotty problems.
What I learned there emotionally frightened me quite a bit. Even then, I argued that the way the technology was being taught (not necessarily the disciplines themselves, but the way the mind was reinforced in narrow, rigid and linear-only ways of learning these disciplines), the actual institutionalized educational process, was itself a way of teaching a certain “blindness” to those things the narrow “scientific” definition would not allow. I think this might be your point above. But this actuality worked to constrict social critique of the effects of these “only technical” subjects to those who could truly “understand” them, and often still in only I/it reductionist ways. I/It learning seemed to me a demand of that structure, and anyone who could not see how it literally damaged I/thou, more relational learning, certainly of the “other”, was not paying attention. Women who entered these fields certainly could see it and feel it. They were vocal in complaining about that narrow rigidity and sometimes effective in changing it. Sometimes, rarely, in the more “living” sciences – the Life Sciences – fields in which they began to make real inroads, often by thinking “differently” about them.
Not so much (and rather obviously) in technology and engineering, the “machine” sciences. Though there are some natural reasons for this (the subject matter itself and who is exposed to it early in life), there are many more “constructed” reasons that defined the subject matter along quite narrow lines, kept it there, and kept the “priesthood” of its study on arguably relationship-destructive paths. So much is still being debated along these “lines” today – about the active “misogyny” and “allowable difference” in this culture, no matter what the qualifications of the women or non-Asian minorities who have studied and excelled in its subject matter (http://valleywag.gawker.com/this-is-why-there-arent-enough-women-in-tech-1221929631) – that I don’t think it is necessary to belabor this point. I think it was an aspect of the design of that education itself. Partially consciously, partially unconsciously, by those who had always “owned” knowledge in the fields of science, by those who had designed its parameters and were pretty strict monitors of its “disciplines” and of what was considered valid areas of further research. And what definitely was not allowed.
How this indeed turns out to be related to neuroscience, and what we now know of “wiring” the brain in grooved synapses by patterned reinforcement of how to learn or think “intelligently”, is again for me best discussed in books like Iain McGilchrist’s and many others, about what actual areas of the brain were consistently reinforced by this type of dominantly logical/rational and linear education. What I saw and argued at Berkeley, with the very hardcore physicists and engineers themselves, was that their “picture” seemed to me to only see “half the frame” – the mechanical, the separate, the non-connected and non-relationship aspects of the singular discipline, rather than the obvious-to-me connections and inter-relatedness of all of it. It was an argument I rarely won, and those among that hardline scientific (“nerd”) culture who even partly agreed with me rarely themselves stayed completely comfortable within it.
All to say, Mike, that perspective matters a great deal, but one of the things that McGilchrist most strongly argues is that those trained most efficiently in this way, most dominantly in the logical/rational/linear ways of understanding these subjects and thus most “scientifically” trained, are often the least able to break out to see the bigger connections to the whole. In his admittedly simplified argument, “the essence of the problem is “that the left hemisphere is not aware of what it is not aware of” and that the difficulty we are faced with is giving the right hemisphere “a fair hearing”. Whilst agreeing that beauty, spirit and art are not the sole preserve of the right hemisphere, McGilchrist does see a reductionism not only in science but in popular culture and a loss of “the power of art to alert us to things beyond ourselves, to the transcendent.”” Obviously this is not true of all, and Papert (whom I have read and been exposed to mostly by Alec Resnick (http://alecresnick.tumblr.com), who argues very much along these same lines as you do) makes a great deal of sense when he argues that “it is not the technology but our relationship to the technology.”
I ultimately agree with that statement, but find that the technologists I have known, even those who see this, are often themselves still the least able to see it well enough to mitigate it on the deep emotional-relational learning levels I have seen most damaged by this overwhelmingly quantitative and rigid pedagogy of excelling at “machine” knowledge. There is literally a “gap” in connecting more relational, less mechanical “ways of knowing” and of learning about other human beings, or in validating that type of “knowing”. It is this “gap” that has me questioning the belief that we can actually reach “artificial intelligence” when the intelligence of those trying to model it is itself, to me, so limited in the emotional realms they themselves did not continue to develop “intelligently”. They never considered it a key aspect of their and everyone’s intelligence, at all. They were in fact actively encouraged not to, accepting the premise that all “emotionality”, often gendered, was “irrational” – the worst of scientific sins. Thus that rationality was itself often developed without its balancing empathy.
But we now know that that view alone is emotionally “unintelligent”. Some of us always knew it, as we watched what these great “machine” discoveries were used in pursuit of, and why it was so often the opposite of a greater, more spiritual I/thou relationship. I think that though Papert and others are right that it isn’t technology that necessarily makes the pedagogy or its results this way but our relationship to it, technology’s tendency is to easily reinforce the institutional pedagogy along these narrow directions, and I believe it is still being used to do exactly that. What is most necessary then, aligned and literally “coupled” to it, is active and conscious emotional-intelligence excavation and the teaching of what technology alone cannot address in those classrooms, as emotional intelligence is by definition interpersonal I/thou learning and an entirely different pedagogical way of being, as well as of teaching.
the actual institutionalized educational process, was itself a way of teaching a certain “blindness” to those things the narrow “scientific” definition would not allow.
I sympathize with this a bit. I’ve found myself battling against this tendency of the tech world in various ways, often unconsciously. My computer career has been pretty random, but often driven by an urge to transcend a certain narrowness that was expected of me.
In Artificial Intelligence (a field which I have orbited around but not actually practiced that much) the blindness is institutionalized to a staggering extent and has really held back the field. You can’t build artificial humans if your only model of humans is as some kind of rational and efficient problem solver. People are waking up to this slowly, and there are quite a few people nowadays interested in emotion, narrative, and other aspects of the human mind. The presence of more women in the field has helped.
I don’t know if you can really blame the nerds for the disenchantment of the world, a phenomenon that has been going on for 400 years or so. Science is a big part of modernism, but I really think that capitalism’s displacement of traditional forms of society is more to blame. Of course they go together and can’t be readily teased apart.
The Media Lab (where Papert and I both worked) is not perfect by any means, but they have done a lot to integrate technology with more humanistic pursuits, particularly the arts. John Maeda was the director there for a while and has since gone to RISD, where he is championing “from STEM to STEAM”, which tries to push this approach in education more broadly. No idea how much cultural traction that will get, but it’s an encouraging development.
I sort of detect that you see yourself fighting a losing battle against technology. I think those kinds of battles are almost always a bad idea; the better fight is to try to give people better handles on technology so it is servant rather than master. That’s the Papertian approach, and that reminds me, I should look at the McGilchrist book.
I want to say something about Facebook and whatnot. However, I’m rather uneasy about the terms in which the debate seems to have been framed.
Part of it is this whole notion of aspie-ness, which at times can appear to be “quiet and conscientious” re-cast as a medical diagnosis. However, I don’t really want to get into that.
Then there are all these binary oppositions: I-Thou vs I-It, reductive vs holistic understanding, rationality vs emotionality, science vs religion, STEM vs liberal arts. These abstractions seem a bit baggy and ill-fitting. I prefer specifics.
[At this point, I should probably mention some of my personal investments relating to this genre of distinction. Professionally, I’m a computer programmer with a background in control engineering. However, both my parents and my wife studied English literature at university. Moreover, my father was a first-generation disciple of F R Leavis, C P Snow’s foremost adversary in the “two cultures” spat of the late 1950s.]
There is an idea going around that programming is something that can’t be taught. I’ve not seen much evidence for this. I don’t have a problem with the idea that it can be intellectually tough, and there’s a steep learning curve. I can also well believe that the actual skills of programming aren’t taught well, and that many people going into the software industry realize fairly soon that there is more money and an easier life in getting promoted out of programming. However, as far as I can see, programming is like most other things: if you practice enough, and you figure out how to learn from your mistakes, then you get good.
So when Temple Grandin says that Mark Zuckerberg must be aspie in order to be a good enough programmer to produce a social networking website, I’m not convinced. Whereas to look at the issue of how a person might find out if someone they find attractive is already in a relationship, and to see a problem requiring a technical fix – well, yes, that requires a “differently special” mindset.
This brings me to my beef with Facebook. My concern isn’t particularly that it wants to keep tabs on everyone on the planet in order to sell stuff. Rather, it’s that it is yet another thing endeavoring to deskill everyday life. We should worry less about Facebook devaluing the word “friend” and more that it turns the process of making someone’s acquaintance into a schoolyard interaction: “will you be friends with me?”
It is a more extreme form of deskilling that Philip Zimbardo highlights in the TED talk to which June linked. The reason Zimbardo gives for young men having difficulty interacting with women isn’t that a cultural bias in favour of science and technology has failed to nurture their poetic sides; it’s that they spend their time playing computer games and watching porn, and therefore don’t have the time or motivation to develop their social skills. (As far as I can see, social interaction is like most other things: if you practice enough, etc, etc.)
Zimbardo also says that they get wired for instant gratification. If so, they’re unlikely to do anything non-trivial in the field of technology. See http://richwalker.me/2012/12/28/move-fast-and-make-things-up/
What’s most objectionable about Facebook is precisely that it seeks to transform relationships from intrinsic value to commodity value.
The money it extracts as advertising revenue does not come out of thin air: it comes from milking every word, every “like,” every click or other action online, for all they’re worth. And when you extract something out of something else, the latter becomes lesser as a result (otherwise you could invent perpetual motion machines). The money doesn’t come from thin air, and the commodity-infestation of relationships doesn’t leave them untouched.
The more honest business model would be to just charge a monthly fee and let people communicate as they will, in much the same manner as the telephone, or to run advertising based on the broad demographics of the user base, in much the same manner as TV and radio, without interfering or attempting to predict and control the behavior of each individual in a way that even the intimate persons in one’s life could not and would not do.
As for the remote diagnosis of Zuckerberg, I beg to differ. He’s not an aspie, he’s a narcissist.
First, it’s good to see the word “mysticism” used in its proper technical sense, meaning the branch of religion that is concerned with the direct encounter with the ground of being, unmediated by scriptural or clerical authority. This, rather than the popular but incorrect usage as a sloppy synonym for “mystification,” making mysteries where none exist. Mysticism per se does not require supernatural entities or agency, per a range of mystical philosophers from the Buddha forward, who dismissed the question of theology as secondary to another purpose such as right action and enlightenment.
Paradox is not only a property of mystical philosophy, but also of the physical universe itself. In nearly a century we have not achieved the unification of Einsteinian relativity and the quantum theory. The very nature of light is dualistic, despite our preference for reductive monisms. In biology, evolution is governed by replication (deterministic), variation (random), and selection (chaotic), so all of life as we know it resides at the intersection of these characteristics, irreducible to one alone. Reality has not fit neatly into our preferences, or our faith that our preferences are in some deep way true; paradoxes abound and even multiply.
Mystical philosophy also often makes the claim that the unitary personality is an illusion, beneath which is a myriad of conflicting identities, and above which is the all-encompassing whole. This is not to deny integrity of being, or unity of purpose and action on the part of individuals, but to remind us of the value of non-attachment, and to discourage the over-identification of self that produces selfishness.
What I think Buber is getting at is the distinction between the relationship between persons and objects, and the relationship between persons and persons. The distinction between a person and an object is obvious. An object has no will of its own and no feelings, no capacity for hurt or joy or curiosity. A person has all of these attributes, and further has the capacity to defend him/herself against objectification and other forms of abuse.
In most of the world’s religious traditions it is wrong to treat a person as an object, and in many it is wrong to treat an object as a person. Narratives about false idols, golems, and, in popular culture, Sorcerer’s Apprentices and magic gone bad, make the point about not treating objects as persons.
This should stand as a warning particularly in our times, when many otherwise smart people are putting their faith in new religions that promise eternal life through ascension into The Machine. The Singularity and AI Upload are two variations on the theme, each proposing its own hereafter for the Elect, and remaining strangely silent about what will happen to everyone else.
But the more important point, about not treating persons as objects, has lagged behind: Slavery persisted, and still exists in isolated pockets to this day. Genocides were one of the defining characteristics of the 20th century. The treatment of women and children as property continues, even proudly, in many places, including in countries where modern technology abounds. The sheer convenience and potential profit and unrestrained pleasure in treating “thou” as “it” is a hard addiction to break, perhaps the hardest of all.
But to attribute to technology the ability to either solve for us, or irretrievably worsen, the problem of treating “thou” as “it,” for example through unlimited sources of clean energy or AIs performing all work while humans have unlimited leisure (where have we heard that before?): this is ducking the issue. Ultimately it’s our responsibility to treat persons as persons and put them ahead of objects in our scales of value. Ultimately we will evolve or devolve cognitively and culturally, based on the strengths and weaknesses of our own character. This is wholly up to us.