I put an essay I'm working on through Google's Gemini. I have access to the 2.5 Pro model through my student email address. I've used it before, pretty much exclusively to spit out boilerplate Python code and a deeply inefficient bit of JavaScript for the portfolio page on my neocities site. I'm not an absolutist on LLMs broadly; I think those programming contexts are at least somewhat compelling. There remain a host of ethical and ecological problems with the training and usage of these models, but I don't think the tools are strictly evil in the abstract.
Back in the real world: I gave this essay to Gemini with a prompt asking it to clean up my phrasing and make the essay more readable. I think it did an almost-okay job at that. This is the practical endpoint of these AI tools for creating anything of substance. Gemini made the essay more readable, sure, but largely because the result looked like 90% of the text on the internet nowadays. It painted over my actual voice. It took my essay and turned it into one of the thousands of em dash-laden nothingburgers these tools love to spit out.
I don't know whether my original work is objectively superior to what Gemini gave me. I really don't. I don't think of myself as a particularly good writer. All I know is this: that original essay is mine, and I'd rather produce a bad, genuine essay than a deeply generic and dishonest one.
A weighted average of all the text on the internet masquerading as an intelligent assistant is not a replacement for real, thoughtful editing. LLMs save time, sure, but they do not produce meaningful writing or prose. I'm not letting machines do the thinking and creating for me, and I suggest you don't either.
I don't think I'll use Gemini again.
6/14/25