Fear and Skepticism in AI


Thomas Ptacek penned an article asserting that his AI skeptic friends are all nuts, and I see his point. I think it’s pretty clear that in 2025 the rise of AI hasn’t been exactly even, but LLMs-as-coding-assistants is an area in which these models excel. That’s due in no small part to the syntactic rigidity of programming languages, coupled with the fact that Stack Overflow was a terrific source of training data that soon found itself almost entirely supplanted by the models it informed. Let’s face it: an awful lot of us would have seen significantly less utility from Stack Overflow if they’d disabled copy and paste in the browser. Now it’s barely an afterthought. What happens when AI consumes its own training sources? That corpus took decades to gather, and days to consume and refine into an LLM. That’s not a sustainable pattern.

I can’t shake the feeling that a lot of AI skepticism is rooted in fear for one’s own skillset. I get it. I feel it. What does it mean when AI becomes good enough to write convincingly in my voice, with similar levels of insight into the potential downstream consequences of various tech industry happenings? There’s a natural defensive reaction of “well, it’ll never get to that level of insight” and other protestations, but when I really think about it, is that being driven by anything more than fear for my professional future? It’s a troubling question.

The first time I saw GenAI write code, it banged together a script that hit a bunch of AWS pricing API endpoints to display NAT Gateway pricing. It was transformative, so I showed it to a few folks at work. The immediate reaction from a senior engineer was one of criticism: “Sure, it’ll replace a junior dev, but there’s no way it can replace a senior engineer.” It was an instinctive defensive reaction, and it’s one I’m seeing an awful lot of in the industry.
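For the curious, the general shape of that script is something you can sketch with boto3’s Pricing API. The following is my own rough approximation rather than the generated code; the service code, filters, and output format are my assumptions:

```python
# Rough approximation of the kind of script GenAI produced for me.
# The productFamily filter and field names are my assumptions, not the
# original output.
import json

import boto3

# The Price List API is only available in a couple of regions;
# us-east-1 is one of them.
pricing = boto3.client("pricing", region_name="us-east-1")

paginator = pricing.get_paginator("get_products")
pages = paginator.paginate(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "productFamily", "Value": "NAT Gateway"},
    ],
)

for page in pages:
    for raw in page["PriceList"]:
        product = json.loads(raw)  # each entry is a JSON document as a string
        attrs = product["product"]["attributes"]
        for term in product["terms"].get("OnDemand", {}).values():
            for dim in term["priceDimensions"].values():
                print(
                    attrs.get("location", "unknown region"),
                    attrs.get("usagetype", ""),
                    dim["pricePerUnit"].get("USD", "?"),
                    "per",
                    dim["unit"],
                )
```

Nothing in there is hard, exactly; the point was watching it materialize in seconds instead of an afternoon of squinting at pricing JSON.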

Think about what a constructive form of skepticism would look like, instead of a knee-jerk defensive response. The specifics will vary, but I can tell you the answer isn’t going to come from a place of fear, and it isn’t going to arrive within five seconds of seeing a new AI capability.

This certainly isn’t helped by a near-mindless race to the bottom. Take today’s LLMs. Five years ago you’d have paid virtually any price for one. Today they’re priced at “about twenty bucks a month,” to the point where Anthropic’s $200-a-month plan feels relatively spendy. This has anchored the value perception of these tools, and led to scenarios where fabulous amounts of money have been invested without much of a plan for what recouping that investment would look like. OpenAI or Anthropic releases a massively upgraded model or feature, and the other matches it within weeks. Then something like DeepSeek comes out and is nearly as good for a small fraction of the cost. It seems to me that creating these models might not be a terrific business.

Perhaps it’s a customer acquisition strategy, wherein they expand the offering and raise the price later, but I don’t think that holds water. Local inference with LLMs running on my laptop is both good enough and fast enough to be a reasonable replacement for many tasks, with the significant upsides of being both free and private.
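To make that concrete, here’s a minimal sketch of what local inference looks like in practice, assuming an Ollama server running on its default port with a small model already pulled (say, via `ollama pull llama3.2`); the model name and prompt are illustrative, not a recommendation:

```python
# Minimal local-inference sketch: Ollama exposes an OpenAI-compatible API
# on localhost, so the standard openai client works unchanged.
from openai import OpenAI

# The api_key value is ignored by Ollama, but the client library requires one.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama3.2",  # assumes this model has already been pulled locally
    messages=[
        {"role": "user", "content": "Explain why NAT Gateway pricing surprises people."},
    ],
)

print(response.choices[0].message.content)
```

That’s the entire integration surface: an OpenAI-compatible endpoint that happens to live on localhost, with no per-token bill attached.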

What does the future look like here? Today this is clearly an accelerator for people writing code, not a replacement. When does that tip over, if at all? I can’t see the future, but my gut tells me it’s time to rethink our career trajectories. I’ve seen this pattern before, when people were reflexively anti-cloud. Whenever a new, displacing technology comes along, there are vocal detractors. I’ve yet to see a scenario where they’ve been proven correct.

When the tide starts rising, best learn to swim.