Moore’s Law was first proposed in 1965, then again in revised form in 1975. Assuming an 18-month average doubling period for transistor density (it was ~1 year early on, and lately has been ~3 years), there have been about 40 doublings since the first IC in 1959. If you ever visit Intel headquarters in Santa Clara, the public museum there showcases this evolution.
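As a quick sanity check on that arithmetic (a minimal sketch, assuming a 2020 vantage point, since this post dates to the M1 launch, and taking the 18-month average at face value):

```python
# Sanity check on the doubling arithmetic: roughly 40 doublings
# between the first IC (1959) and ~2020, assuming an 18-month
# (1.5-year) average doubling period for transistor density.
start_year = 1959        # first integrated circuit
vantage_year = 2020      # assumed "now" for this post
doubling_period = 1.5    # years per doubling (18-month average)

doublings = (vantage_year - start_year) / doubling_period
density_factor = 2 ** doublings   # cumulative transistor-density gain

print(f"{doublings:.1f} doublings")   # ≈ 40.7
print(f"density factor ≈ {density_factor:.1e}")
```

About a trillion-fold density gain over the run, which is why no other price/performance curve really compares.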
The future of Moore’s law seems uncertain, but it looks like we’ll at least get to 1-3 nanometer chips in the next decade (we were at 130nm at the beginning of the century, and the first new computer I bought had a 250nm Celeron processor). Beyond 1-3nm, perhaps we’ll get to different physics with different scaling properties, or quantum computing. Whatever happens, I think we can safely say Gen X (1965-80) will have had lives nearly exactly coincident with Moore’s Law (we’ll probably die off between 2045-85).
While there have been other technologies in history with spectacular price/performance curves (interchangeable parts technology for example), there is something special about Moore’s Law, since it applies to a universal computing substrate that competes with human brains.
GenXers are Moore’s Law people. We came of age during its heyday. From 1978 to 1992 or so, the personal computer (pre-internet) was growing up along with us. The various 6502-based home computers, and the 8088, 286 “AT”, 386, 486, and Pentium were milestones of my childhood and teenage years. During that period, performance was synonymous with frequency, so there was a single number to pace our own adolescence. Those computers were underpowered enough that we could feel the difference from power increases even with simple applications. Today, you have to design stress tests with hungry apps to detect the performance limits of new computers.
After the Pentium, things got complicated, and growth was no longer a simple function of frequency. There was register size, watts, core count, RISC vs. CISC…
Life also got complicated for X-ers, and growth was no longer about growing taller and buying higher-frequency computers. Moore’s Law shifted regimes from micrometers to nanometers (in a decade, it should be in the picometer regime).
There’s an Apple event going on today, featuring Apple’s own silicon for the Mac for the first time: the M1, a 5nm chip. But Moore’s Law is not in the spotlight. Apple’s design is.
I think some of the message of the silicon medium rubbed off on us Gen X’ers. We got used to the primary thing in our lives getting better and cheaper every single year. We acquired exponential-thinking mindsets. Thinking in terms of compounding gains came naturally to us. For me personally, it has shown up most in my writing. At some level, I like the idea of producing more words per year (instructions per cycle, IPC?) with less effort (watts). This is why anytime a new medium appears that seems to make it easier to pump up quantity — Twitter, Roam Research — I jump on it. Quantity has a quality all its own, as Stalin supposedly said. We are lucky to live in an age when we can expect the fundamental tradeoffs of writing to change several times in a single lifetime. A few centuries ago, you could live an entire lifetime without writing technology changing at all.
But like Moore’s Law, I too am slowing down. The natural instinct when you feel yourself slowing down is to switch gears from quantity to quality. I think this is a mistake, at least for me. Quantity is still the most direct road to quality, as the parable of the pottery class suggests. But as with semiconductors, it doesn’t just happen. You have to consciously develop the next “process node” (like the upcoming jump from 5nm to 3nm), work out the kinks in it, increase “yield rate” (the number of usable chips you get out of a wafer of silicon, a function of defects, design, etc.), and then architect for that scale. Each jump to a new process node takes longer, and you face new tradeoff curves.
But each jump begins the same way: stripping away complexity from your current process and going back to the basics of words each time. You can’t add volume to complexity. You can only add complexity to volume.
For writing, sometimes the new “process node” is a whole new medium (moving from blogs to Twitter threads); other times, it is primarily a matter of rearchitecting the way you write, like with my blogchains and now this headline-free Julian-date-numbered shtick. It’s always about pushing quantity, not quality. Right now, I’m trying to engineer my next “process.” I don’t think I’ll ever produce the sheer volume of words I used to 10 years ago, but I suspect I can be vastly more efficient with my energy if I arrange the toolchain right. More words per year per watt, that’s the thing to shoot for.
There’s a sweet spot, though, unless you want to develop several different brands and talk about different stuff in probably different voices.