It's hard to believe that the job market has become even more hostile since I was last shoved into it three years ago. Then, employers were much more open to remote workers; LLMs weren't invading all of the spaces where technology meets creativity; AI models weren't scraping your resume and cover letter for specific phrases before unceremoniously redirecting your application to the digital garbage bin.

Unfortunately, like so many people these days, I work at the intersection of tech and wordsmithing: specifically, I'm a technical writer by trade. This is one of the first roles that venture capital-funded Silicon Valley companies want to get rid of. Why? Because it's cheaper to burn through the world's limited resources to "write" your documentation with an LLM. It'll get most of the way there, so what's the harm?

Beyond the obvious impact on the environment, this is just bullshit. What an LLM can give you is its best guess at the words you want to see on the screen. If it's trained on a company's internal data, you might get luckier than you would with one out of the box. But when B2B tech companies are trying to iterate at ever-higher speeds, the robot isn't going to be able to keep up.


In my most recent job, I was encouraged to use the company's LLM to search for information to include in monthly patch notes. Theoretically, this model had access to all of the company's code repositories and therefore could give me accurate information about what had changed over the month.

Predictably, that isn't how it worked. The first time, the model repeatedly informed me that May 2025 hadn't happened yet, even though it was already June. The next time I resorted to trying it, the LLM limited its first five results to the month I'd asked for...then reached back four years for irrelevant information.

And yet, for some reason, my colleagues continued crowing about how useful it was. Why didn't I use it in my daily writing? It would make updating the documentation seamless! Think about how it could automate the API spec!

They didn't believe me when I told them the robot made more work for me.


Beyond the "hallucinations" we all know and love from generative AI models, they just don't keep up with new information well. If we think of a request to an LLM as creating a word cloud, the most-used terms are going to be "larger" than the rest. This means that when you're looking to document a change to software that occurred in the last week, its associated information isn't going to be highly represented in that word cloud yet. So what does the LLM do? It provides outdated information.
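To make that concrete, here's a toy sketch in Python. Everything in it is invented: the endpoint names, the mention counts, the pretense that a model is a literal frequency counter. But it captures the failure mode: when the overwhelming majority of the text a model has seen describes the old behavior, its "best guess" is the old behavior.

```python
from collections import Counter

# Hypothetical corpus: the old endpoint has years of docs, tickets, and
# chat logs behind it; last week's replacement has barely been mentioned.
corpus_mentions = Counter({
    "GET /v1/reports": 4200,  # documented and discussed for years
    "GET /v2/reports": 3,     # shipped last week
})

# A purely frequency-driven guess picks the most common phrasing,
# which is exactly the outdated one.
best_guess, count = corpus_mentions.most_common(1)[0]
print(f"Best guess: {best_guess} ({count} mentions)")
# Best guess: GET /v1/reports (4200 mentions)
```

Real models are vastly more sophisticated than a frequency counter, of course, but the bias is the same: the new thing is a whisper in a corpus that's still shouting about the old thing.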

Despite the billions of dollars wasted and the hours of my life ripped away listening to tech bros drone on about how generative AI is going to revolutionize everything, in my experience it's done nothing but dramatically lessen my job prospects. Companies will only realize their documentation or customer service is lacking once their sales numbers start to slip. Doing this work well requires a human touch: sitting down with the developers and asking, "Okay, what exactly is this meant to do?"

But the tech gets them most of the way there, so does it really matter?