Through the uncanny valley of AI

Knut Melvær

I prompted an eslint plugin into existence in a day. So why do I feel weird about it? On the strange mix of empowerment and nihilism in the AI era.


“What will ‘developer marketing’ even mean at the end of 2026?”

As someone with “developer marketing” in my job title, this question is partly existential[1]. Not since I was digging myself deep into the academic world of the humanities in my mid-20s have I felt this strange mix of urgency, creativity, frustration, and intellectual fatigue.

Especially since Claude Code and Opus 4.5 (aka Clopus) entered the scene. The last time I felt this energy was when stuff like Express.js, Flexbox, and React came about and got everyone excited and frustrated.

My former colleague and developer-education extraordinaire, Simeon, told me he increasingly views his job as teaching agents how to use PlanetScale. When I talk to some of the brilliant engineers I know, they report that even with Opus and Claude Code, it’s a mixed bag: some get a lot of mileage out of it for certain things, while for more specialized problems, or ones less represented in the training data, it doesn’t work as well.

They also tell me about the experience of skill decay, and how it removes the joy of figuring out a problem in code, while others tell me it removes everything they don’t like about coding.

I can relate to all of it. And it’s making my head spin.

When my head spins, it has always helped to write about it.

This is what you are reading now.

It feels like we’re traveling through an uncanny valley between two opposing mountainsides. Looking to our left, there is all this amazing opportunity to be discovered; looking to our right, there is pain, disruption, and uncertainty.

The side of hyper-creativity

On the sunny side of this valley, you walk the paths from idea to proof of concept with the greatest ease. You find yourself doing stuff with programming languages and technology that you didn’t before, because the cost of entry was too darn high.

In a day (while doing my day job), I was able to prompt into life Sanity Lint, a schema-aware eslint plugin that wraps a colleague’s Rust code in WASM. This would have taken me at least a week or two in the before days. It even set up the npm publishing pipeline for me, which I never want to do manually ever again. It feels very empowering.

I had it build a private finance-planner tool on top of a CSV of transactions.

I started making something akin to Clawd Bot myself over the holidays, but got stuck in the bureaucracy of trying to get a phone number to work with Twilio (yes, it would have been trivial to set up with Signal, but I wanted to just iMessage it).

Clopus feels so powerful that I almost feel guilty for not having more ideas to constantly throw against the large language model wall.

I can offload a lot of the boring bits of my day-to-day marketing work, like gathering and organizing research, project reporting, and so on, and focus on the creative parts.

And with MIRIAD (an agent orchestration tool I’ve been using, vibe-coded by Simen Svale), I can deploy a small team of specialized agents and have them figure out a lot of the stuff between them, instead of me running around with markdown files, doing context management for the clankers.

In this mode, I find that what I bring to the table is providing the right cues and input to make the LLM figure it out quicker and better.

I also provide the “taste,” that is, knowing what good looks like.

When this work with LLMs “clicks,” it feels exhilarating and powerful.

It’s democratizing (for those who have the means to pay for tokens) in allowing more people to create with technology that only a couple of years ago felt beyond their reach.

The side of creative nihilism

But there is the other side of the valley, where the shadows lurk. It’s the sneaking feeling that everything I spent two and a half decades learning is now… pointless? At least practically. It’s not like AI took away the joy I once felt when I figured out how looping over arrays works, or when I finally grasped the mental model of what a callback is.

But if anyone in principle can make something like Sanity Lint (the eslint plugin I mentioned) if they need it, why should I bother?

It also feels like so much of the engineering we’re doing around the LLMs is temporary. We’re basically just figuring out how to put markdown files with instructions in the right place at the right time, and finding new ways of teaching a somewhat outdated model how to write React anno 2026. First it was “custom GPTs” and “projects,” then MCPs and “sub-agents,” and this month, it’s all about “skills.”

So it feels a bit pointless to dive too much into these techniques, because they will certainly be gone within a year, hopefully folded into abstractions where LLMs are orchestrated to understand the semantic field of your ask, and just load into context what they need to know about.

AI feels like toothpaste you can’t get back in the tube, but it was built by breaking the (Western) cultural contract of intellectual ownership of your creations. It’s a bargain we didn’t consciously make.

Before the AI hike: some backstory

Back in the early 2010s, I was on a PhD fellowship trying to bring the digital humanities into the study of religion. My research asked if the huge corpus of digitized Norwegian newspapers from the 1700s until today (yes, really) could be used to tease out how the public used and thought about concepts like “religious” and “spiritual.”

Starting out, I had to look to computational linguistics for methods to run through the 70,000 articles that mentioned these concepts. I taught myself Python and became pretty good at regex. I installed undocumented Java applications to run statistical models. There was a lot of banging my head against the wall.

But all the methodologies were pretty much founded on the same principles: identify the word (aka the “token”), decide how many words before and after it you want to look at, and run the usual suspects of statistical methods (t-tests, chi-square, ANOVA, and so on) in various creative and layered ways.
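The core recipe fits in a few lines. Here’s a minimal sketch of that windowed counting, with a toy corpus and window size standing in for the real thing:

```python
from collections import Counter

def collocates(sentences, target, window=5):
    """Count words co-occurring with `target` within +/- `window` tokens."""
    counts = Counter()
    for tokens in sentences:
        for i, token in enumerate(tokens):
            if token == target:
                lo, hi = max(0, i - window), i + window + 1
                # The statistics (t-tests, chi-square, ...) then run on
                # these raw co-occurrence counts
                counts.update(t for t in tokens[lo:hi] if t != target)
    return counts

# Toy stand-in for 70,000 digitized newspaper articles
corpus = [
    "the spiritual seeker left the established church".split(),
    "a religious tradition rooted in the church".split(),
]
print(collocates(corpus, "church", window=3).most_common(5))
```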

For someone not trained in computational linguistics, discovering that these methods boiled down to counting word occurrences and calculating averages and variances felt like a dead end.

From means to fields…

And then I found folks working with kernel density estimation to map semantic fields of a corpus. Going from deductive statistics to inductive statistics let you explore a bunch of text in new visual ways. Instead of hunting for confidence levels and p-values, I could now look at my corpus as a two-dimensional map of a landscape.

The semantic landscape of War and Peace. Rendered by David McClure
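The technique itself is simple to sketch. Here’s a minimal version, assuming you’ve already boiled your terms down to two-dimensional coordinates; the points below are random stand-ins, not corpus data:

```python
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

# Random stand-ins for 2D coordinates of term occurrences
rng = np.random.default_rng(42)
cluster_a = rng.normal(loc=(2.0, 2.0), scale=0.5, size=(200, 2))
cluster_b = rng.normal(loc=(5.0, 4.0), scale=0.8, size=(300, 2))
points = np.vstack([cluster_a, cluster_b]).T  # gaussian_kde wants (n_dims, n_points)

# Estimate a smooth density surface over the scattered points
kde = gaussian_kde(points)
xs, ys = np.mgrid[0:7:100j, 0:7:100j]
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

# The "landscape": hills where terms cluster, valleys in between
plt.contourf(xs, ys, density, levels=20)
plt.title("Toy semantic landscape")
plt.show()
```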

…and into the multi-dimensional vector space

And then Google published word2vec, introducing the idea that you could translate a corpus into multi-dimensional vectors, and then use the vector offsets between the embeddings of two words to find similar relationships.

The famous example is that this model could predict that if you took the vector offset from man to king and applied it to woman, it would take you close to queen. This worked for other patterns too: small→smaller matched big→bigger, and Norway→Oslo would give you France→Paris.
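You can still reproduce this with gensim and the pre-trained Google News vectors (a sketch; the model is around 1.6 GB on first download):

```python
import gensim.downloader as api

# Pre-trained Google News word2vec vectors (~1.6 GB on first run)
kv = api.load("word2vec-google-news-300")

# king - man + woman ≈ queen
print(kv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

# Oslo - Norway + France ≈ Paris
print(kv.most_similar(positive=["Oslo", "France"], negative=["Norway"], topn=1))
```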

The kicker was that these patterns emerged from just training on the dataset itself. In other words, you could dig up relationships inherent in language itself in a way we couldn’t easily do before.

It gave you a different lens on culture and human communication.

A Venn diagram comparing the semantic neighborhoods of “spirituell” (spiritual) and “religiøs” (religious) in Norwegian Wikipedia using word2vec embeddings. The overlap reveals shared mystical and ascetic concepts, while the distinct regions show how “spiritual” clusters with psychological and New Age terminology, and “religious” with institutional and tradition-bound language.

This is when I burned out. Not because of the research, but because of the brutal and unforgiving nature of academic institutions.

I don’t know how the computational linguists who’d spent entire careers counting words felt when embeddings arrived. But I suspect it rhymes with what I’m feeling now.

Because that was when I left academia and made technology my full-time job.

Climbing the strata of abstractions

And now I’m here again. With my profession being redefined by many metric tonnes of words cast into multi-dimensional vector space to be coaxed into prediction machines that force a conversation.

And like in my academic days, it feels like we will have to figure out what the new layers of abstraction are.

Working with AI is a new skill set. And doing so without losing your professional self-worth takes a mindset shift.

We’re getting new jargon and methodologies. Getting stuff done with AI is easier and less frustrating when you understand what it really is and what the constraints are. Right now, it’s useful to know about context windows and how to work around them, just as it was useful to really understand memory allocation back in the day (it still is in some lines of programming, of course).
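The most common workaround is the obvious one: split your material into chunks that fit the budget. A toy sketch, using the rough heuristic of about four characters per token (a real pipeline would count with the model’s own tokenizer):

```python
def chunk_for_context(text, budget_tokens=8_000, overlap_tokens=200):
    """Split text into overlapping chunks that fit a context budget.

    Uses the rough ~4 characters/token heuristic; a real pipeline
    would count with the model's own tokenizer.
    """
    size = budget_tokens * 4
    step = size - overlap_tokens * 4  # overlap preserves continuity at the seams
    return [text[i:i + size] for i in range(0, len(text), step)]

doc = "word " * 50_000  # a document too big for one prompt
for i, chunk in enumerate(chunk_for_context(doc, budget_tokens=2_000)):
    print(f"chunk {i}: ~{len(chunk) // 4} tokens")
```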

Yes, giving over to vibe-coding will corrode some of your development skills (because you’re not rehearsing them anymore), but it will also give you time to learn new skills, still to be uncovered.

Yes, using AI to generate content will make some of your writing skills atrophy, but it will also allow you to develop a sharper editorial eye and a deeper understanding of what makes content resonate with humans.

And what will be forever true: human institutions and innovations will remain messy, predictable, unevenly distributed, creative, and formulaic.

I’m still in the valley. The view from here is strange. But I’ve hiked out of valleys before.

Notes

  1. Yes, existential in a fairly narrow sense, and certainly from a privileged position.
