Until recently, I had never been conceptually attracted to the idea of an afterlife or prior lives, either as thought experiments or as aspirations. And definitely not in any religious sense. This is perhaps because I’ve never been able to imagine interesting versions of those ideas.
What has been piquing my interest over the last year is a particular notion of digital afterlives/prior lives based on persistence of memory rather than persistence of agency or identity. Not only is this kind of immortality more feasible than the other two, it is actually more interesting and powerful.
We generally fail to understand the extent to which both our sense of agency and identity are a function of memory. If you could coherently extend memories either forward or backward in time, you would get a different person, but one who might enjoy a weak sort of continuity of awareness with a person (or machine) who has lived before or might live after. Conversely, if you went blind and lost your long-term memories, you might lose elements of your identity, such as your sense of your race or an interest in painting. Mathematician Paul Erdos understood the link between memory and identity:
When I was a child, the Earth was said to be two billion years old. Now scientists say it’s four and a half billion. So that makes me two and a half billion… I was asked, ‘How were the dinosaurs?’ Later, the right answer occurred to me: ‘You know, I don’t remember, because an old man only remembers the very early years, and the dinosaurs were born yesterday, only a hundred million years ago.’
Erdos’ version, of course, is based on no more than clever wordplay, but I want to consider a serious version: what if you could prosthetically attach to your own mind the memories of somebody who died on exactly the day you were born, serving as a sort of reincarnation for that person? What if you could capture your own lived experiences as raw and transferable memories that could be carried on by somebody else, or a robot, starting the day you died, thereby achieving a sort of afterlife? Or perhaps live on somewhere in the Internet, changing and evolving?
The most interesting and unexpected consequence of any notion of immortality based on the idea of a living memory is that notions of heaven and hell make no sense within it.
Living Memory
Memory is a constantly rewritten living thing, with evolutionary characteristics, shifting boundaries and some continuity. I began to explore the idea of immortality in my post A Beginner’s Guide to Immortality, but didn’t quite grasp the centrality of memory as opposed to what you might call, for lack of a better word, code.
The integrity and coherence of memory is central to the integrity and coherence of identity and agency itself. To paraphrase computing pioneer Frederick Brooks: reveal your thinking and hide your memories, and I will understand nothing about you; hide your thinking and reveal your memories, and I will understand everything about you.
Call this view the memory determinism view of identity and agency. In pure form, memory determinism can be defined as the existential position: If you’d been through what I have, you’d think as I think and do as I do.
If this seems like a very strong statement, think about just how strong the antecedent clause is: if you’d literally been through what I have, it means you would also have made the same path-dependent decisions I did. This means you would have thought and done as I thought and did every step along the way, and started at the same point. The strength of the consequent is entirely captured in the difficulty of achieving the antecedent.
I won’t try to rigorously defend memory determinism in this post, but just assume it for the purpose of exploring its consequences.
Some details matter here.
I am not a neuroscientist, but from what I understand, the way long-term memories are formed is that raw sensory memories are gradually integrated into a coherent narrative and then settle again into a sort of edited sensory memory with an associated narrative. This is why we can, with a little effort, weakly relive memories in a sensory way and also tell stories based on those memories. If you closed your eyes right now, and calmed your mind a bit, you could probably conjure up a ghostly echo of a summer vacation or party from long ago that has settled into long-term memory. You could also narrate the associated events without re-experiencing them.
There is apparently such a thing as a memoirist, a tribe of Proust-like people who actively work on exploring memories. I suppose they get together and eat madeleines and watch Being John Malkovich on occasion. I recently came across a quote from memoirist Andre Aciman that seemed particularly clever: smell is the permanent address of memory. If you don’t instinctively get why that’s a true and important point, you might have a problem with your sense of smell.
I think I’d like to attempt writing a memoir someday, not because I think my life has been, or will have been, particularly remarkable, but because I think trying to capture the essence of your life in a form that can be experienced by another is one of the most interesting technological challenges in the world. So far, we’ve invented only one technology that can be used to take on this challenge: reading and writing. Good memoirs, such as the Diary of Samuel Pepys, enable reliving to some degree.
So for a writer, writing a memoir is the most interesting technical challenge of a career. It is the slightly creepy business of making your life relivable (but not rewritable) by others. But the challenge goes beyond writing. Writing can only memorialize a life in a static form, since it does not allow for a meaningful form of rewriting. It cannot turn a lifetime of memories into an evolvable, rewritable entity that can continue to live beyond the existence of the body that experienced them.
The real challenge for memory-based immortality is making an immortal memory independent of the bodies it passes through. There are two ways to approach this challenge, using finite and infinite game models respectively.
Immortality for Finite Games
The simplest version of this idea is a practical one: an artificial agent that captures enough about you to create a limited form of persistent agency. A blogger friend of mine, tubelite, explored this possibility in depth in this post.
Tubelite’s thought experiment involves a conventional sort of software design: one based on encoding rules and policies for governing actions, and perhaps embodying a critical dataset distilled from a life. The examples he explores are motivated by wanting to enable finite sorts of agency. For example, an investor-agent that perpetuates your investment philosophy after you are dead. In this view, the design of the after-life agent is really the main act, with the life being the startup phase, devoted to perfecting a highly survivable investment philosophy before death-time. This is like the sea squirt: a creature that swims around when young until it finds a good rock to anchor itself to, and then eats its own brain, turning into something more like a plant in a personalized heaven.
But there is something unsatisfying about this kind of eternal afterlife. The basic idea in this approach is to capture an instrumental essence of who you are through some mix of code and a rolled-up information-state at some point in time arbitrarily labeled death, and then to use that essence as the starting point for continued evolution geared towards a particular purpose. As we’ll see, in this view, death is a bottleneck condition between life on earth and life in a designed heaven or hell (implied by the purpose encoded).
Even though the agent design is nominally for an eternal afterlife, any design guided by a specific objective such as investing is necessarily a finite-game design in the sense of James Carse (as regulars have probably noticed, Carse’s Finite and Infinite Games is rapidly becoming one of my core references, so you might want to read it if you plan to continue reading this blog regularly).
An extreme example of such a finite-game immortal is R. Daneel Olivaw, a robot who appears across Asimov’s robot, spacer and Foundation stories, evolving into increasingly powerful forms, swapping many bodies and positronic brains along the way (evolved gods are a staple in Asimov’s writing). R. Daneel Olivaw’s agency is, of course, driven by Asimov’s laws of robotics, which are imperative rather than descriptive in form, and so capable of animating a limited kind of eternal agency. Those laws are game rules that are vast in scope (saving humans and humanity), but still finite.
To get to a design of an afterlife agent that is set up for a true infinite-game conception of a digital afterlife, you have to start with data, not code, and work the problem of memory perpetuation without reference to specific objectives.
This is an easier problem (entities such as corporations, cities, countries and cultures already solve it in limited ways), but one whose solution has broader consequences.
Immortality for Infinite Games
It seems like a trivial and obvious point, but an immortal agent must have a potentially infinite memory. What makes an infinite game infinite is not that it lasts forever, but that it is driven by a desire to keep playing forever rather than winning. This means no finite set of unchanging rules for playing the game of life will do. Changing conditions might require changing game rules in order to continue playing.
The agent part must be capable of being born and dying within the memory as particular finite games begin and end. The immortality is carried through such birth/death bottlenecks by the continuity of memory.
This is actually a very powerful constraint. Much of our sense of identity and agency is attached to finite and unchanging game rules. My sense of identity latent in qualifiers like Indian and American, for example, will be meaningless if applied to a being who exists in times when neither of those qualifiers applies. My thoughts in English and Hindi would make no sense if remembered without translation by a being living in times when neither of those languages exists.
If in this life I desire wealth or fame as a writer, those objectives will not translate to futures where wealth might be meaningless (perhaps it is a post-scarcity economy where everybody is plugged into a Nozick experience machine) and writing has been replaced by telepathy. Even sexual orientation, which creates very basic and powerful identities and motivations, is a finite game. If my memory might in the future be prosthetically attached to a being with no sexuality, or a person with a different sexual orientation or gender, all the associated patterns of agency and identity become ephemeralized.
This means the only meaningful part of a lived mortal life that is suitable raw material for plugging into an immortal life is raw memory. One might argue that all memories are attached to some modality of agency or identity, so that identity/agency-independent memory is essentially the null set, but I don’t think that’s a serious argument (the idea of play is, to me, an existence proof that we are not talking about a null set). So let us stipulate for the sake of argument that it is not.
This is enough that we can attach a name to this stream of memories and call it something more than entropic turbulence in the ocean of infinite memories. We’ve conceived of a meaningful notion of infinite-game immortality based on the persistence of memory. To borrow terminology from the philosophy of mind literature, there is something it is like to be an immortal version of me.
Specific finite-game chapters of agency within this stream might repress or surface particular memories into and out of an evolving Jungian shadow. Eating or sexual memories from one life might be repressed as disgusting in a future life. Self-esteem based on a particular aspect of identity might turn into self-loathing in a future body, but utilize the same memory in both cases.
But the sense of immortality itself will not depend on these particulars.
Heaven and Hell
The way we’ve set up our thought experiment here, it follows that heaven and hell are only meaningful concepts for finite-game immortals (this point is not an original one; Carse implicitly makes it).
To understand why, consider a famous athlete such as a future Tiger Woods, born in 2035. We equip this future Tiger Woods with sensors and brain implants that capture every stimulus he experiences through life, and every long-term memory he lays down. When he dies in 2110, we transplant his 75 year-long living memory into the most appropriate new baby (who might be a girl), or a tabula rasa robot (the question of whether the recipient will instantly age 75 years is an interesting one I’ll get to).
Now this memory-descendant of Future Tiger Woods might live forever through multiple human and robotic bodies, and accumulate such an overwhelming historical memory and ongoing learning of golf that he/she/it becomes unbeatable in a few generations, given his/her/its lifetimes of deliberate practice. He might be beaten in individual games, but he evolves into a true all-time champion. He cannot be dethroned without changing the rules of golf so radically that it becomes a structurally different game.
This is not a kind of memory immortality that interests me, because only one closed world (finite game) is ever being explored by this eternal being: the world of golf. This Tiger Woods is only as long-lived as the game of golf itself. Specifically, this is an immortal Tiger Woods designed to ascend to (and in so ascending, define) golfing heaven.
One could easily conceive finite-game immortals designed for specific hells.
I haven’t thought it through, but I suspect most versions of AIs conceived as capable of a post-Singularity immortality are finite-game constructs designed to create and inhabit specific heavens or hells. So are most visions of utopia and dystopia for humans.
This closed-world/finite-game assumption makes it possible to design simpler immortality software: you can compress the memories radically in golf-focused ways (or in the case of R. Daneel Olivaw, humanity-saving, heaven-creating ways). You do not need to explicitly retain details of specific golf swings that do not matter to golf. Some sort of machine learning algorithm that merely learns a really large vector of implicit and instrumental golf-swing parameters (which need not map directly to meaningful variables) would do the trick.
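To make the contrast concrete before moving on, here is a toy sketch in Python of what such golf-focused compression might look like. The numbers are entirely synthetic and the variable names are my own invention; the point is only that raw swing memories get boiled down to a single weight vector, and everything that does not predict shot quality is thrown away.

```python
import numpy as np

# Hypothetical finite-game compression: thousands of raw swing records are
# reduced to one weight vector that predicts the only thing the golf-heaven
# agent cares about (shot quality). Everything else is discarded.
rng = np.random.default_rng(0)
raw_swings = rng.normal(size=(10_000, 200))   # 200 implicit swing parameters per memory
shot_quality = raw_swings @ rng.normal(size=200) + rng.normal(scale=0.1, size=10_000)

# Fit the instrumental parameter vector by least squares.
weights, *_ = np.linalg.lstsq(raw_swings, shot_quality, rcond=None)

# The "immortal" keeps only `weights` (200 numbers), not the 10,000 memories.
compressed_agent = {"golf_swing_weights": weights}
```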
An aside: it seems to me that any notion of intelligence you might care to consider is simply a matter of enabling a particular variety of finite game immortality. Because notions of intelligence also come with associated notions of perfect intelligence, and therefore heavens won by those perfect intelligences.
Imagine though, that you want to structure the encoding of your memories for transplanting in the most open-ended manner possible. A manner that amplifies the generative potential of lives it is attached to in the future, rather than constraining it to something like golf or Asimov’s laws of robotics.
You’d approach the design very differently.
Googling Past Lives
If the goal of an afterlife mechanism is to perpetuate an evolving body of memories (that serve as prior lives to the future vessel), you get an infinite-game afterlife agent.
In designing such an agent, you’d ignore most proceduralized skill memories completely. Instead, you’d go for a Big Data approach: store as much raw memory as possible, in as unprocessed (or organically processed, with retention of the originals) a form as possible, with the ideal being no structuring at all.
Your uploaded brain at death would ideally be zero percent code, 100% data. Messy, varied, redundant data. Data that exists in various stages of digestion, from raw sensory memories to half-formed stories to settled long-term memories with canonical narratives attached. There would be abstractions weak and strong, well-worn and unfinished too. Code might exist too, but in bracketed, dormant form, rather than as active agency. Elements of the data store would be vulnerable to rewriting depending on their distance from raw sensory memory.
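As a purely illustrative sketch, here is one hypothetical way such a data-not-code store might be represented in Python. The record fields, the digestion stages, and the salience number below are inventions for the sake of illustration, not a claim about how biological memory is actually organized; note that, unlike the golf example earlier, the raw sensory payload is kept rather than compressed away.

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import Optional

class Stage(IntEnum):
    """Stages of 'digestion', from raw sensory capture to settled narrative."""
    RAW_SENSORY = 0
    HALF_FORMED_STORY = 1
    SETTLED_NARRATIVE = 2

@dataclass
class MemoryRecord:
    timestamp: float                    # when the experience happened, on the donor's clock
    sensory_payload: bytes              # unprocessed capture: audio, image, smell proxy...
    stage: Stage = Stage.RAW_SENSORY
    narrative: Optional[str] = None     # attached story, if one has formed
    tags: list[str] = field(default_factory=list)
    salience: float = 0.5               # raised or lowered later by likes and dislikes

    def distance_from_raw(self) -> float:
        """Rewrite vulnerability depends on distance from raw sensory memory;
        expose that distance and leave the rewrite policy to the host."""
        return self.stage / max(Stage)  # 0.0 for raw capture, 1.0 for settled narrative
```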
Note that you would make no particular attempt to “finish” your memory within your lifetime or put your memory affairs in order. Any state you stop at is a good enough state to be transplanted.
An interface to such a memory is not actually that hard to imagine. It is certainly a vastly simpler problem than imagining a human-like AI.
Imagine such a living memory being given to the transplant recipient simply as a searchable collection. You could evolve the interface for the transplant recipient as follows. Let’s call this recipient YouTwo, and the donor YouOne.
- Version 1 would simply be a Google-like searchable app on a smart phone. If YouTwo sees a cat, he/she/it searches for "cat" and sees a reverse chronological stream of cat images and associations from YouOne’s life. YouTwo can edit anything from YouOne’s past that comes up. (A rough code sketch of this interface, covering Versions 1 through 4, appears after this list.)
- Version 2 would automate the linkage as a sort of soft memory surgery. Anything YouTwo experiences would trigger a search whose results could be reviewed in peripheral vision, again in read/write mode. This would be a sort of ghostly/poltergeist haunting of YouTwo by YouOne.
- Version 3 would sensorize the search results, piping them (for instance) into an augmented reality (AR) headset that juxtaposes YouOne memories onto YouTwo experiences in a seamless way. This would not be organic though. If YouTwo meets a friend of YouOne whom he has never seen before, there would be no recognition as such. Just an explicit cue that “you knew this guy in a past life.” You might edit your experience by hitting like and dislike buttons on anything your prosthetic memory retrieves, driving a process of repression and highlighting.
- Version 4 would achieve a certain degree of coarse and organic temporal integration, and start to blur the two memories. YouTwo would interleave living his own life with reliving YouOne’s life. Perhaps for an hour every day, YouTwo simply lies back in an armchair with an AR headset on, and "remembers" random memories pulled up via association with recent experiences of his own, forming weak links in an evolving temporal memory graph over time. In this scenario, the AR headset might flash a pair of images: YouOne’s friend from the past life, and a random stranger encountered that day by YouTwo, juxtaposed in order to form an association. This association would be purely about creating a link between past and present, and not for any instrumental purpose. It would be a sort of assisted Hebbian learning. Version 4 would start to blur past and present in YouTwo’s mind. If he/she sees the friend later, he/she would begin to have doubts: "Is this somebody I’ve seen before, or is this a YouOne memory?" That blurring would be a feature rather than a bug.
- Version 5 and beyond would use neurosurgical implants to create ongoing subconscious temporal integration, and might even try to piggyback on dreaming past lives into current living memory. The grafting code would also attempt to integrate narratives rather than just sensory memories. I won’t go into this, but there are some obvious and non-mysterious ways to blur present and past stories, so you start dissolving the distinction between stories that happened to YouTwo and stories that happened to YouOne. You would also eventually have actuation of things like "gut feelings" (perhaps machine recognition of a face from YouOne’s life would trigger a little squirt of stomach acid or a mild electric shock to YouTwo, creating a new class of sensations correlated with past lives).
- The ultimate form would of course be a realization of the sort of memory integration envisioned in Total Recall (based on Philip K. Dick’s "We Can Remember It for You Wholesale").
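Continuing the MemoryRecord sketch from earlier, here is a toy rendering of Versions 1 through 4 as code. The class, its method names, and its weighting rules are all invented purely to illustrate the search, like/dislike, and assisted-Hebbian ideas above.

```python
from typing import Iterable

class ProstheticMemory:
    """Toy sketch of Versions 1-4: keyword search over YouOne's records,
    like/dislike feedback from YouTwo, and weak Hebbian association links."""

    def __init__(self, records: Iterable[MemoryRecord]):
        self._records = list(records)
        self._links: list[tuple[MemoryRecord, MemoryRecord, float]] = []

    def search(self, query: str, limit: int = 20) -> list[MemoryRecord]:
        # Version 1: YouTwo searches "cat" and gets a reverse-chronological
        # stream of matching YouOne memories; repressed ones sink to the bottom.
        hits = [r for r in self._records
                if query in r.tags or (r.narrative and query in r.narrative)]
        hits.sort(key=lambda r: (r.salience > 0.1, r.timestamp), reverse=True)
        return hits[:limit]

    def feedback(self, record: MemoryRecord, liked: bool) -> None:
        # Version 3: like/dislike buttons drive highlighting and repression.
        delta = 0.1 if liked else -0.1
        record.salience = min(1.0, max(0.0, record.salience + delta))

    def associate(self, past: MemoryRecord, present: MemoryRecord,
                  weight: float = 0.1) -> None:
        # Version 4: a weak, assisted-Hebbian link between a YouOne memory
        # and a fresh YouTwo experience, growing the temporal memory graph.
        self._links.append((past, present, weight))
```

Usage would look something like `hits = pm.search("cat")` followed by `pm.feedback(hits[0], liked=False)` to repress an unwanted association.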
There are plenty of other interesting details to consider and add. For instance, you probably don’t want to flood a 2-month-old’s (pre-verbal) brain with the prior-life memories of a sixty-year-old, or transplant a famous mathematician’s memories into an idiot’s brain. You’d want to achieve some synchronization within a life, bringing up age-appropriate memories through life (creating a sort of continuity and smoothing effect). You’d also attempt to do some matching of personality traits. An introvert’s memories transplanted to an extrovert host would probably just result in incoherence and poor integration. This whole category of engineering detail could be considered a case of handling impedance mismatches.
Possibly, you could test out an entire library of stored memories against a new host and transplant the best match in some sense. And of course, there are other wild possibilities such as implanting the memories of multiple people into one host, or achieving a trading-places sort of memory switch between two living people. But those don’t add much philosophical fodder to the base case. The base case of splicing the end of one living memory into the beginning of another, smoothly and with preservation of rewrite capabilities across the junction, captures much of the challenge.
Why do any of this?
Because I suspect we can. Which means, by the logic of infinite games, we probably should.
The infinite game brings with it the moral imperative to do anything that keeps the game going in a more generative way, and I think seeking memory immortality qualifies. I’d certainly like the memories of some interesting person who died the day I was born grafted onto my mind.
The human body, being a product of evolution, is not perfectly designed. Some parts of our body are under-designed: our teeth and bones aren’t really meant to last more than about 40 years, so we have to start taking strange measures to life-extend those parts.
But some parts of the human body are also over-designed. In particular, I think our memory capacity far exceeds what our lifespans require. Our brains seem to be designed to store and use memories for perhaps several hundred years. Perhaps thousands.
Okay, so we have a good setup: an immortal designed to play an infinite game, and a plausible way to realize such an entity through technology. It isn’t AI, it isn’t artificial sentience, it isn’t a solution to the hard problem of consciousness.
But it is at least a plausible solution to the problem of consciousness transference through continuity of memory.
One useful thing we can do with this setup is reconsider heaven and hell.
Heaven and Hell Redux
One of the basic problems with religious notions of prior and after lives is that they are derived from finite-game instrumental notions of being. So it is perhaps not surprising that religious notions of both heaven and hell are uniformly tedious depictions of states of eternal stasis. Too much time is being filled up with too little substance.
It is not clear that such stasis could be enjoyable. Certainly burning in hellfire for eternity would not be enjoyable, but would living a life of consequence-free hedonism be enjoyable forever? Or winning every game of golf? Or to head East for a moment, would an eternal karmic cycle of life and death be any good?
One of the few classics I’ve read on the subject, Dante’s Inferno, seemed like a tediously contrived way to explore a particular conception of morality, and that ultimately is what most notions of heaven and hell are, whether religious or atheistic, and whether situated on earth or another imagined plane: explorations of morality within specific finite games.
A more interesting treatment, in Chapter 10 of Julian Barnes’ A History of the World in 10½ Chapters, gets at the idea that an afterlife defined by any unchanging condition is not something anybody would want for eternity. Any eternally unchanging set of conditions is a hell by definition: a place you would not want to experience for eternity. The only difference is that some you want to exit within seconds, others within centuries.
Curiously though, life on earth (or more generally, this universe) itself qualifies as a heaven by the natural complementary definition: a place many would want to experience for eternity (both backward and forward in time) in some way.
On the non-religious metaphysical side of things, all the explorations of the idea of an afterlife that I am aware of seem to be explicitly or loosely existentialist in character. You have Sartre’s idea in No Exit that hell is “other people,” Camus’ treatment in The Myth of Sisyphus and Beckett’s Waiting for Godot.
Two things are interesting about these works. First, they don’t take the distinction between pleasant and unpleasant conceptions of an afterlife seriously. Second, they seem to implicitly realize that any unchanging situation is hellish, and explore, in different ways, how humans strive to change and find reasons to live on, in situations that stay the same. Humans don’t really seek eternal bliss or try to avoid eternal pain. Instead, we really seek eternal interestingness.
To me, these works demonstrate one thing: if we have any capacity for immortality at all, it is only a capacity for the infinite game. Finite game heavens and hells are not worth spending eternity in unless you have no evolving memory at all. Finite-game afterlives require eternal amnesia. Like the sea squirt, we’d have to eat our brains the moment we found our heavenly rocks.
I’ve been thinking about immortality for a while, and this post explores many of the things I’ve been thinking about in interesting ways.
The main question I have is: what about "visceral knowledge"? (That’s my name for what philosophers call qualia.)
The experience of remembering a memory is not constant in time. I think this is true for all memories but especially traumatic ones. When I reflect on my life, I view my past not only through the lens of the present but also the lens of past times I’ve reflected. Memories can feel different after many reflections, and you can remember that your experience of them has changed.
So with which lens would my next memory holder view them? As recent memories or as old ones? Both experiences at different times? If they were experienced out of chronological order, I could see the experience being very confusing. I am reminded of Dr. Manhattan from Watchmen, who experiences past, present and future at the same time.
You mention that only the raw memories would be saved, but I have trouble understanding what that means. I understand the visceral experience of the memory would be saved, but we experience memories with various intensity throughout our lives. Would we lose that ability?
I don’t think this kind of immortality would really get at individual qualia transfer. At best the recipient qualia would be similar but distinct from the donor qualia for the same memories, even if both occupy the ‘I’ position.
Really this is just advanced gaslighting across time.
Very much of a tangent, but I saw Erdos once, when I was a Math grad student at Ohio State in the late 70s. He was living out of a suitcase, as he was famous for doing.
Doesn’t it seem extremely odd that the (so it has been asserted) most collaborative mathematician ever (i.e. he had more co-authored papers than anyone else) should have been described as “The Man Who Loved Only Numbers” (title of a pop book about him)?
> if we have any capacity for immortality at all, it is only a capacity for the infinite game
Popular conceptions of heaven and hell often shear against this insight — in fact, as you point out, most of the current canonical heavens are actually more hellish in practice; it seems to be eternal interestingness/novelty that enables true heavens.
With respect to the thrust of your memory proposal, I think in many ways it is too conservative of the discrete concept of a “life”. Why wait until death to make bulk memory transfers? Why fetishize that hard stop discontinuity of agency at all?
Perhaps what we will actually see is the increasing fidelity of real-time memory transfer until human life simply assumes the final stages in your transition outline. Recontextualizing agency and identity within such a frame will be a truly sublime and deep process, as we will need to discover fork and merge workflow strategies for the attention and self-construction processes of human consciousness.
Fascinatingly, this kind of framework seems to erode much of the experience machine objection — if we are a society of soul-threads interactively sharing imagination, perception, and perspective, it is hard to see such a shift as anything but a distillation of the deepest animating properties of our present context.
Hymns to Venkat, March 2114
lol! there’s a sci-fi idea there… immortal ghostly consultant who inhabits the internet and haunts companies by making eerie powerpoints with 2x2s appear randomly in their meetings…
I wanted to say something about the idea of memory as a form of afterlife, as thoughts along these lines have been rattling around in my head ever since the other month I read comments from Kevin and Darcey about the fear of death. However, before I get around to that, there’s some criticism I need to get out of the way first.
There are immediate practical problems with a technologically assisted afterlife. Thanks to the miracle of modern medicine, you can reasonably expect to live long enough to enjoy years of senility; so if you were intending on having the intervention take place at the point of death then this might be rather too late. On the other hand, if you plan on having things done when you’re still compos mentis then you run into the problem described by Mark Dominus: http://blog.plover.com/2010/09/.
More fundamentally, if your criterion of success is that life be infinite, you are on to a loser. You’ll need to cope not just with, say, the end of oil, but also with the death of the sun: a life of a few billion years is every bit as finite as one of threescore years and ten. Even if you manage to deal with the big, obvious catastrophes, there are any number of freakishly improbable misfortunes that could wipe you out and which become, in the limit, dead certainties. Silicon and copper may have certain advantages over flesh and bone, but invulnerability isn’t one of them.
Anyway, back to the fear of death. Personally, I have a hard time understanding this. Fear of dying? Well, that makes sense, since any accident or illness serious enough to kill you is liable to be fairly unpleasant. But for a long time I didn’t realize that there really were people who found the mere fact of oblivion unsettling. I may be unusual in this: back when the threat of nuclear annihilation was more palpable, my thoughts were that it would be a bit of a shame, but that as a species we’d had a pretty good innings. However, I don’t think I’m that unusual: people generally aren’t as afraid of death as is sometimes imagined.
Tying this back to the theme of memory: when someone dies it is sometimes said that they “live on in our memories”. This happens to be the only kind of afterlife that makes much sense to me personally; but it is, I suspect, the afterlife that people actually care most about. One reason for caring about the manner of your death is that it affects how you are remembered. Also, people go to a lot of trouble to ensure that they are remembered well: from the super-rich throwing themselves into philanthropic works that they hope will be their abiding legacies, to people lower down the social scale putting enough money away to pay for a decent funeral without leaving their loved ones out of pocket. Whereas posthumous infamy, like Jimmy Savile’s, is some kind of hell (albeit a rather unsatisfactory one).
I’ve got a couple of asides here. One is that "Thinking, Fast and Slow" can at one level be read as Daniel Kahneman giving the world a good memory of Amos Tversky. The other is that, if you believe Bryan Cantrill (https://www.youtube.com/watch?v=-zRN7XLCRhc&33m03s), Larry Ellison’s only philanthropic effort, other than making a large donation to Stanford University so he could avoid admitting wrong-doing in an options back-dating scandal ("you stay classy"), is the creation of the Larry Ellison Institute for the Prolonging of Life ("namely his own"): it’s as if Ellison suspects people won’t look back on him fondly, and so he wants to put that day off as long as possible.
Living on in the memory of others is clearly a finite form of afterlife, since ultimately memories fade. For reasons I’ve already given, this is unavoidable. But I’d argue that it is actually a good thing anyway. Wishing never to be forgotten amounts to wishing that other people’s grief will be unending. Digesting experiences, having them lose their rawness, involves a certain amount of forgetting. Even Nietzsche said as much.
I feel I ought to say something about computers, since the kind of memory I’ve just been talking about isn’t hi-tech, and it might be inferred from what I’ve said that computers don’t fundamentally change anything. That’s not quite it. The world is your exocortex, so it would make no sense to say that how it is populated makes no difference to who you are or what you think. However, databases and search engines are clearly a refinement of earlier, paper-based technologies of bureaucratic data processing: so when the digerati tell us how happy we should be that we will all be on file forever, a degree of caution is warranted.
Larry Ellison becomes an immortal Oracle database which commands its company until it busts. After that, it is said, he haunts sailors by infecting their high-tech equipment. Other people say it’s just a sailor’s yarn, and the monstrous narcissism of the undead Ellison is just an excuse for the sloppiness of tech vendors who invent those stories. In fact the Ellison bot still sits on his island and scares only birds and lizards he likes to hunt.
Wise words.
Rumor has it that the bot’s firmware is based on the MindForth system Arthur T Murray developed back when everyone thought he was crazy. Lizards, you say. Yes … I suppose that would be the thing … fits the pattern. Thanks for the update. Someone really ought to tell David Icke the good news.
That’s a funny thought.
Pretty interesting and a topic that has really interested me for many years. Probably since a combination of Being John Malkovich and Ghost In The Shell (and later to a lesser extent Harry Potter).
I agree that memory is the true expression of immortality, and I would go further and state that all we are is a collection of memories. If we could verbatim duplicate those memories, would our consciousness remain in the new body? That’s an interesting question.
I have always thought, however, that this is not a new idea at all; we just haven’t recognised it as such. One of the most important evolutions was the evolution of parenting, and I believe that the reason for this is exactly what you are proposing. Parenting is in fact the primitive mode of memory replication. By being a good parent we can hopefully convince our children to maintain our memories within them, both nostalgically (eg "remember that story dad told us") and intrinsically (eg "so glad dad taught me how to prepare food"). I don’t just agree that it’s something we should be aiming to develop… I believe it’s inevitably the next evolutionary step, further refining this process until we achieve a quasi-immortality (immortality in the mind if not in the flesh).
I don’t think I’d go that far and claim memories alone can create consciousness. Interesting thought though.
Parenting is definitely too weak a form of continuity though.
I will confess I didn’t read this through very carefully, but this is possibly because a lot of it was counter-intuitive to me, and perhaps I did not quite agree with the premise. Memory is not just data. It is governed by emotion. There is the imprint of sensory images in our consciousness and sub-consciousness, and the various narratives we make of them that exist simultaneously, often contradictory, depending upon our state of mind. The formation of narratives is an unreliable process. The subjectiveness and unreliability of memory is of course an old theme in literature.
Check out Fun Theory [http://lesswrong.com/lw/y0/31_laws_of_fun/] – it’s very similar to what you hint at in terms of eternal interestingness.
Hi Venkatesh,
Thank you for this post. What you’re describing as a possibility for immortality by memory continuity is so interesting to me, because you’re approaching many of the ideas I’m interested in, but from a completely different framework. I’m wondering how familiar you are with Vedanta and yoga philosophy? What you describe as your hypothetical “version 3” is basically what we think humans actually already are.
You say: “Version 3 would sensorize the search results, piping them (for instance) into an augmented reality (AR) headset that juxtaposes YouOne memories onto YouTwo experiences in a seamless way. This would not be organic though. If YouTwo meets a friend of YouOne whom he has never seen before, there would be no recognition as such. Just an explicit cue that “you knew this guy in a past life.” You might edit your experience by hitting like and dislike buttons on anything your prosthetic memory retrieves, driving a process of repression and highlighting.”
Well, this is basically exactly what we believe is already happening now in the human perceptual apparatus. According to the teachings of Vedanta, the karana sarira, or the causal or "seed" body that reincarnates and forms another physical body, does exactly this. It is a bundle of the raw data of past lives without the metanarratives, exactly as you describe. These decompressed past life impressions are called samskaras. They create grooves or patterns of behavior in our current lives and form our seamless concept of reality by influencing our selective attention in filtering sensory information and helping us build a narrative that allows us to form a self-identity as an individual. ("This is enough that we can attach a name to this stream of memories and call it something more than entropic turbulence in the ocean of infinite memories.") What you call "like and dislike buttons" is called "raga-dvesha", literally "likes and dislikes" or "attraction and repulsion," depending on translation. This is recognized to be the fundamental process of the mind, and the likes and dislikes bundle to form the "I" concept or the ego. The ego or self-assertive principle is the temporal self-identity (the mistake of an infinite being identifying with a finite game–i.e. the body and mind or birth and death) and is considered to be the root cause of the ignorance–because this narrative-making self-concept is exactly what filters out the "raw data" and therefore limits our knowledge. This blending of new and old impressions is so seamless that we normally fail to recognize any confirmation bias, but it’s influential enough that there is a continuous flow of karma from one life to another.
You say, “One of the basic problems with religious notions of prior and after lives is that they are derived from finite-game instrumental notions of being.”
and,
“to head East for a moment, would an eternal karmic cycle of life and death be any good?”
Actually, I don’t think either of these statements apply when it comes to Vedanta. Karma doesn’t have to be eternal. It’s not about heaven and hell. Though there is the concept that if you do enough bad actions you will end up on a demonic loka for a lifetime, and if you do enough good actions you’ll end up in the realm of the devas, both of these are said to be only temporary until the good or bad karma is exhausted, and then you’ll incarnate into a human body, in which you’ll again have the chance to become liberated from the cycle of births and deaths by (guess what?) not identifying with the finite (the body and mind, or upadhis) and realizing your true nature as infinite and unlimited consciousness.
I realize you’re an atheist and a materialist, so this all probably sounds quite spooky-mooky, but it’s interesting to me to see how much you may be influenced by your culture without realizing it. Or maybe you do realize it, you’re just reframing these ideas in a way that is pragmatic, applicable technologically, and will appeal to your readers?
Either way, your writing is fantastic and I really enjoy exploring some of these ideas in a totally new way. Keep up the amazing work and thanks for thinking like you do.
Best,
Lilith
I am broadly familiar with that tradition since I grew up around it, but don’t see it as more than a loosely analogous trail of metaphysical thought. I don’t think it is anything like a description of physical reality. No exact correspondences of the sort you are suggesting. At best a loose connection, such as that between alchemy and chemistry. I don’t think the notion of reincarnation hangs together at all, either as metaphysics or experiential evidence (in the sense of testimony of subjective experiences of memories of past lives being accepted as reliable), let alone as science. Any more than the idea of transmuting elements in the alchemical sense.
Atheist and materialist are just labels in the end that don’t convey much, as is ‘spooky mooky.’ I don’t set much store by them. I do, however, set a lot of store by very precise models of argumentation and evidence that we describe with the label ‘science.’ I am perfectly happy to play with highly speculative metaphysics all day, be it Vedanta or Jungian or Taoist. That’s a very useful mode of thought and reflection for many questions. But I am pretty conservative with respect to accepting specific ideas as descriptions of reality.
I am quite aware of how much I am influenced by my culture, but there is definitely no attempt to "reframe" Indian ideas in ways that are "pragmatic, applicable technologically and will appeal to [my] readers." There is a real distinction here between a scientific sensibility and a metaphysical one that applies to what you see as "exactly the same." In short, what you and I are talking about are not exactly the same thing, in ways that go far deeper than any individual intellectual tastes I may have or rhetorical choices I may have made.
Hey Venkat,
Thanks for the reply. I was under the impression that your post was philosophical and in the realm of abstraction rather than science, since there isn’t any science of memory continuity (yet) and you’re not referring to any concrete form of memory transfer, except calling it ‘digital.’
I don’t believe I used the words you put in quotes–“exactly the same”–but my point was that you describe a model for immortality that strikingly resembles an ancient Vedantic model.
Of course there is major difference in that (it seems) you’re interested in future technological development and Vedantins believe this is already how consciousness works without the need for any auxiliaries–but I’m surprised you don’t find it interesting to note the similar shape of the model, regardless of whether you believe listening to seers and rishis is a reasonable method of truth-seeking.
Actually, yoga is a practice, not so much in the realm of metaphysics. And the reason why this is important and relevant to what you wrote is because yogis have been investigating the nature of consciousness and playing with methods of hacking the psyche for thousands of years. You use the language of software development to talk about the brain or the mind, but as I’m sure you know, the brain is many orders of magnitude more complex than anything we can simulate with computers, and doesn’t seem to function in a linear fashion or compare much to how a computer works at all, especially when it comes to memory storage. I doubt using a software model to describe human memory or identity is much more scientific than referring to the discoveries of those who have been closely observing and documenting how the mind and body work for thousands of years through such methods as prolonged meditation, breath control, sensory deprivation, and logic applied to identity (Vedantic self-inquiry).
I hope I wasn’t offensive in labeling you a materialist, but I do find it useful in terms of efficiency to label schools of thought when having interdisciplinary type conversations. I think your writing and your topics transcend spiritual or material frames of reference, which is why I’m writing to you, but I find that most people I know with backgrounds in the sciences have a materiality bias. Meaning, if we can’t measure something we feel comfortable ignoring the possibility that it exists. We agree that information exists, though it is immaterial, but insist it must be recorded materially to be of consequence.
If units of (human) information can be independent of one’s brain or mind (you seem to be willing to consider this even talking about memory continuity across multiple hosts) and are immaterial in the sense that they’re patterns or sequences independent of the material they’re recorded in, it’s not much more of a stretch to consider that there may be a method (or several) of information transfer happening already between hosts in ways we’re not yet aware of.
Neither a neuron nor an electrical impulse is a unit of consciousness or memory, obviously. "Digital" only refers to numbers, which could be represented by any material. So if we do not know what a unit of consciousness would look like or how it would be transferred, how is your way of modeling ‘reincarnation’ or ‘immortality’ any less metaphysical than mine?
Imagine such a thing as pure information unmediated by substance. It requires no hardware to exist; it is already connected to itself everywhere. We can contain it within boundaries for a time, but those boundaries are impermanent and porous; it’s never fully autonomous or restrained. It can’t be broken up into individual parts, but because we cannot observe it holistically, we identify it with objects (people) and define it by those objects.
But if an object is destroyed, its information remains in some way. What we call the seed body or causal body, is the signature or metapattern or non-physical code that remains in formation but can move freely through immaterial (information) space and become identified with a different set of physical objects.
Forgive me if this is uninteresting. For me scientific approaches to truth are nearly as fascinating as spiritual approaches, but I realize the reverse is rarely the case for scientists.
I think a claim that one approach is comparable to alchemy and the other to chemistry is a bit flawed though, considering we don’t have much in terms of a hard science of consciousness. That analogy could be true decades down the road in retrospect, but it’s a bit early to be sure.
Thanks again for your inspiring writing. ॐ
This is a longer and more complex conversation than is possible to conduct in the comments to this post. Suffice it to say that I have rather heterodox views of ancient mystic traditions and metaphysical ideas and don’t really buy their own accounts of themselves. I get where you’re coming from though.
And no, I am neither offended nor flattered by labels like “materialist” (or “philosophical” or “abstract” for that matter). They are too monolithic to be of much use in my thinking/writing.
This whole post strikes me as a Transhumanist fever dream aimed at imagining the most likely way to cheat death via technology. It’s bizarre in both its ambitions and reliance on vaporware. A few points:
Your conflation of identity with memory conjured up the particle/wave dichotomy, only here substitute instead fixed/fluid. Whatever self or soul may really be, its radical reduction to a data set (admittedly highly disorganized) transmissible through some imagined up- and then download process doesn’t really capture the essence of experience, which is permanently fluid, to craft an obvious oxymoron. Further, our analog version of (quasi-)persistent memory (memoir is your example) would require significant compression not only in its digitized form but in the replay experience by a potential YouTwo, who, BTW, would surely not understand context any more than we now understand the minds of, say, medieval alchemists. Without that compression, one would potentially lose oneself in simple re-experience. That’s one reason why, in popular fiction, enlightenment is always nearly instantaneous; real time takes too long even for immortals.
Extension of human capacity taken to the illogical extreme presented by Transhumanism suggests Theseus’s paradox. It’s not clear whether you take Asimov’s starting point, the maintenance and service of humanity, or would prefer emergence of a new manifestation of identity and agency. I suspect the latter, which usually stems from two things: deep-seated misanthropy and sheer boredom. Yoking yourself to synthetic memory, however, may well forestall new experiences in actuality that in turn become more data/memories lost in the disorganization of the mind. Although it’s impossible to know, considering how it’s all hypothetical, I suggest that way lies madness.
Hello Venkat,
Your articles are thought provoking and I appreciate your time in creating and sharing them with the world.
If memories were transferred to another person, either fed at age-appropriate times or searchable, how would that affect the “next person’s” individual development? It could be seen as a kind of super-parenting, but if the next person has not made a mistake or had an experience from which to draw wisdom, how would the next person know what to search for from the memory store?
If the next person wanted to put their hand on a hot plate, they might know to search “will this hurt me”, but could the transferred memories contain a richness to convey insights or realizations? When I read these articles, I have many eye opening AHA moments because my thoughts have not followed ideas down the same conceptual paths. I would think the transferred memories of the highest value would be the AHA moments, but that takes a set of life experiences and circumstances to make the person receptive to the new perspective.
On the idea of parenting being a poor memory transfer method, I agree in its current form. It seems that parenting today tries to steer the child down a broad corridor in hopes they will “turn out right”. However, the child repeats old historical errors because the conceptual gains from the parent are not being conveyed, or not conveyed effectively. Is this a parenting issue, or is the child not “taught how to learn” conceptual lessons, only rote ones?
As an aside, if people learn (or would hopefully learn) their strongest lessons through failure, how well equipped is the straight A student set to learn life lessons if they have not been exposed to a broad spectrum of failure? Could we somehow incorporate lessons such that the student is set up to fail, where they get the concept but without the deep emotional scars that are possible from learning the lesson the hard way by stumbling into it while living their life?
I would like to thank not only Venkat, but the commenters as well, for providing insights and new points of view.
I think the solution to the specific issue you point out is actually very simple and already exists: push over pull, recommendations over searches. So your prosthetic memory would sort of share screen space on your Google Glass and throw up (say) warning flags if a hot plate came into the field of view.