
Topic

taste

5 pieces

The consolation of taste

· 177 words

Allergic to the term "assistant." Just got an email from Typefully on their new "editorial assistant," and it's filled with all the expected hedges ("we didn't just slap AI onto this," etc.), but it's all anchored in a wrong premise about writing: that writers have a voice, a vibe, a signature style. I think this really accelerated with the whole "taste" discourse. As in, if AI does everything, what's left? Well, my taste!? This is a very lazy thing to anchor your identity in. Technically, every person has some combination of sources they can point to, likely from lazily curating their inputs, and they call that "taste." But it's something like a false pride. And so these tools just play you further into that illusion: that you have your taste, and your taste is great, and if only you had some algorithm that could capture it. Testimonial (in essence): "It turns my unstructured thoughts into absolutely sick bangers, written exactly as I would." But is your voice that predictable? That's another assumption: that your voice is unchanging.

Makers and the Managerial Goon Loop

· 390 words

Paul Graham’s idea of makers/managers is helpful when thinking about AI agents. The cost of being unreasonably productive is that all your time will go into management. I’ve heard people celebrate this, as if elevating above the work itself and only making high-leverage decisions based on taste is the place we want to be. Disagree. Without actually being in the weeds and making thousands of unbearably slow decisions, you won’t develop taste, and (probably) won’t be a great manager either. I guess the ideal (for me) is to be in maker mode as often as possible, and then let my synthetic managers come in to process my deep work. (Currently have a “proseOS” where I can riff 5k words into a daily note, and then agents come in to route my logs to different interfaces). Ideally, you build the manager once and forget about it. But realistically, a maker can find fun in making manager bots and management apps, and it’s quite easy to slip into a managerial goon loop. What I mean is, similar to masturbating with no intention of ever finishing (aka gooning), it’s very possible to make your own task manager app, and a writing app, and an idea Kanban linked to Obsidian, and why not a new personal website, and a 1,000 day calendar because you can, and seriously anything you can think of, and it’s very possible to just numb out over how unbelievable it is that code, markdown, and interface are now liquids that shape around your every intention, but actually, you never quite finish anything. PKM procrastination is timeless, except now it’s multiplied to new levels. The brute velocity of execution means you’re bound to make many little mistakes, which eventually compound into your own megamachine that traps you with endless bugs and feature ideas and system decay. This is all quite dramatic. I love Claude Code and insist everyone IRL and IFL try it. 
But now that it’s shockingly trivial to build your own personal software for free, I imagine there will be all sorts of unanticipated psychic costs. For one, it’s dangerous if building your own tools is equal to or more fun than the work the tools are for. I’m sure that wears off. But I generally think this all leads to both extremes: individuals who are unbelievably prolific, and individuals stuck in a goon loop who feel unbelievably prolific.
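The "agents route my logs to different interfaces" step above could be sketched as a tiny manager bot. To be clear, this is a minimal illustration, not the actual proseOS: the tags, destination names, and note layout are all invented assumptions.

```python
# Hypothetical "manager" sketch: split a daily note into blocks and
# route each tagged block to a destination bucket. All tag names and
# destinations here are made up for illustration.
from collections import defaultdict

# Assumed mapping from in-note tags to destination "interfaces".
ROUTES = {"#essay": "drafts", "#task": "kanban", "#idea": "idea-log"}

def route_note(note: str) -> dict:
    """Bucket the blank-line-separated blocks of a note by their tag."""
    buckets = defaultdict(list)
    for block in note.strip().split("\n\n"):
        tag = next((t for t in ROUTES if t in block), None)
        if tag:
            buckets[ROUTES[tag]].append(block)
    return dict(buckets)

note = """Riffing on taste this morning. #essay

Fix the broken link router. #task

What if canons are generational? #idea"""

print(route_note(note))
```

The point of the sketch is the shape of the loop: the maker dumps everything into one note, and a dumb deterministic router (or, in practice, an agent) handles the management pass afterward.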

→ source

Taste as effort

· 170 words

Will had a point that intelligence is just one vector of human cognition, and things like taste and judgment aren't captured by machines. I made a solid counterpoint. Let's say an agent decides to read/re-read Paradise Lost for 5,000 hours straight. It has more than a surface-level understanding of it from its training data. It is looping over it, and maybe it had unique interactions with online communities and individuals around Paradise Lost, which it brought to its own extensive studies. After those 200+ days of study, this agent will have a singular understanding of Paradise Lost unlike any other AI/human, which is the essence of taste.

The core point here is that taste is not a preference; it is earned through sustained, intense effort. An LLM does not have taste because it read each work only once at a blazing pace. It turns each work into a statistical pattern, but doesn't truly understand it because it hasn't recursively looped over it with force and singular intention.

Invisible canon

· 1030 words

Every generation needs to find its invisible canon to solve its crises:

The last 2 years have been a deep dive into essay composition, but I want to think harder about taste. Of course, I believe fundamentals come first. If you don’t have the fluency to express thoughts, then it doesn’t matter what your taste is. Taste without articulation is something like a status trap. People take pride in sitting at the intersection of three particular aesthetics, using it as a razor to justify their artistic decisions, an excuse to avoid the militaristic discipline required to learn the fundies.

I’m sure there are proper terms for this, but I’m going to riff on taste and derive it all from scratch. Could be fun to read back on this in 10 years.

Yes, anyone can have a taste developed through circumstance, but that’s “narrow taste.” Algorithms make it easier to fall into taste traps. You see the same thing over and over; you are a Substack psychographic; confident in your uniqueness, but you’ve been force fed the same slop as 1.2 million other people.

And then there’s “wide taste,” which is a lifelong practice of reading from odd, competing, singular, idiosyncratic silos. Only by being well-read can you actually build proper maps of a culture. There really isn’t a shortcut to cultivate taste, it takes tremendous time and effort; without it you’ll only be able to cling to feeble, flimsy opinions.

But it’s not enough to read widely; there’s “discerning taste,” the ability to selectively pluck out a small percent of the things you’ve read and deem them as special. 

Ultimately there are questions on what to read, and well-read people tend to point to old books, the canon, but that feels like outsourcing your discernment. What good is the canon? Sure, if it's survived for centuries, there's probably something to it, but it risks turning you into a homogenized intellectual if that's your only source (and yet also, it helps to know the classics so you can speak that language, but it's probably best to supplement with 50% non-canonical sources).

The question behind the question is this: what is the point of a serious reading habit? I’d argue that you read to understand the range of ways that words can move you, and to accumulate ideas and lenses that help you navigate the circumstances of your life and generation. The western canon might have some overlap, but not all Great Books are the books you need. The western canon is helpful as a history of literature, a record of how the species burst through with original linguistic concepts and forms. That matters! That’s worth studying if you want to understand your heritage, your species, the norms of older times, and the outer limits of language.

But from a perspective of “renaissance” or “revival,” to surface old ideas to help our current situation, that’s a very different canon. So the word “canon” is flexible. You hear people making “personal canons” all the time now, which are, effectively, just the books you like. There are also "tech canons" and even the "China tech canon." But you could argue that as society mutates, each generation has its own invisible canon, some combination of obscure books, that if discovered could help it navigate the narrow passage of its time.

Can AI have taste in this kind of canon creation? Maybe a culture progressively rots if each generation is unable to find the scattered canon that’s destined for it, and maybe AI can help reverse our fumblings. The question then is, what do humans lose? What matters in the act of canon creation? The orientation (the thesis on what’s worth finding), the mapping (selecting the books), the reading (digesting old books), or the synthesis (making new things from old readings)?

I asked an AI about what we lose, and here's what it said, which I don't buy:

But Taste—true, earned taste—is a byproduct of the inefficiency of finding those things yourself. When you hunt for the “Generational Canon” manually, you have to wade through trash. You have to read ten books that don’t resonate to find the one that vibrates in your hand. That wasted time isn’t waste; it’s calibration. It provides the contrast necessary for “discerning taste.” If an AI hands you a perfect platter of 10/10 bangers that align perfectly with your soul, you lose the ability to detect why they are good. You become a connoisseur who has never tasted a bad wine, which is to say, you aren’t a connoisseur at all; you’re just a consumer of high-quality inputs.

I think there is enough discernment and active reading within a book that helps with calibration. i.e., I'd rather read through the right recommended book 5 times than wastefully read 4 books that were trash just so I could find the right book and read it once. My gut says that the beginning and end of the workflow are most important: orientation and synthesis. The mapping work is for specialized canon makers, which could be humans or agents. Even when AI provides you a map, there's still research to do on each book, and discernment on where to plunge.

The reading part is more nuanced. Of course, when you don't read, you can't synthesize. But maybe AI can assist us in finding the right things in a given book. As in, maybe Infinite Jest is just so thick that I'm going to procrastinate on starting it for a decade. But maybe there's a 50-page excerpt in the middle that is hyper-relevant to the month I'm in. I'm open to having AI summarize the beginning and end, so that I can dive in and experience the right passage at the right time. This doesn't replace reading the full thing, and maybe that will happen in a future stage of my life. This feels like a middle ground—I'm not saying I want to extract summaries and factoids for other purposes; I do want to immerse in the text for 10-20 hours, I just don't have 100-200 hours in that given month, and so in this case AI is doing what a college professor does: curate.

Could AI capture the intangibles of quality?

· 234 words

Will AI ever be able to capture the intangibles of quality?

Davey sent me a voice note, loosely around whether it would be possible for AI to handle all of the branches of quality. I’m skeptical that it would work, and even if so, I think there’s value in having humans read essays and make these decisions. Still, he triggered three questions in me:

  1. Might unconscious machines actually be able to better determine cultural transcendence than humans? I’ve made a team of judges that is well-rounded, but it’s limited to the people I know and trust. The categories are good, but is it really representative of the whole Internet? How would I know? In the future, you could have scrapers read every Substack post in real-time and create a living map of cultural vectors, and then simulate every new essay against past/present/future vectors. (Or, better yet, the bots could read Substack, understand the psychographics of readers, and then elect human judges to still keep humans in the loop.)

  2. Might some element of essay evaluation, if it wants to be “perfect and total,” require a machine with simulated consciousness? This got me thinking about the taste category. I think that you could potentially map the canon, and then have it make conclusions that only a lifelong reader could come to. But there is an element of ‘somatic reaction’ that would probably not translate. Even if a machine had some sense of qualia (which I think it can), it would likely be significantly different from a human’s.

  3. Even if machines could do the entirety of evaluation, and create anthologies of human-written essays (and machine-written essays, but in a separate collection), might there still be value in including humans in the process? It could be valuable both for determining the winner and for the culture that emerges from involving humans in that process. I like to think that if we ever have a “best machine essays of 2028,” humans will play a critical role in the eval of that.
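The "simulate essays against cultural vectors" idea in question 1 could be sketched, very loosely, as similarity scoring. Everything below is an assumption for illustration: the topic names and vectors are made up, and a real system would embed scraped posts with an actual embedding model rather than hand-written numbers.

```python
# Toy sketch: score a new essay's (pretend) embedding against a map of
# cultural trend vectors using cosine similarity. All vectors and topic
# names are fabricated placeholders.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical "living map" of cultural vectors distilled from scraped essays.
culture_map = {
    "ai-and-craft": [0.9, 0.1, 0.3],
    "taste-discourse": [0.2, 0.8, 0.5],
}

new_essay_vec = [0.85, 0.2, 0.35]  # pretend embedding of a new essay

scores = {name: cosine(new_essay_vec, vec) for name, vec in culture_map.items()}
best = max(scores, key=scores.get)
print(best)
```

The sketch only shows the geometry of the idea; the hard (and contested) part is whether a map built this way captures "cultural transcendence" or just statistical adjacency.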