The AI problem is still at an earlier stage at my job, but I've already seen code flagged as questionable in a review and then justified with what amounted to "the AI generated this, it wasn't me". I really don't like where this is going.
AI will see a sharp decline in usage as a plot device
Today I was looking for some new audiobooks again, and I was scrolling through curated^1^ lists for various genres. In the sci-fi genre, there is a noticeable uptick in AI-related fiction books. I have noticed this for a while already, and it's getting more intense. Most seem to be about "what if AI, but really powerful and scary" and singularity-related scenarios. While such fiction themes aren't new at all, it appears to me that there's a wave of it now, although it's also possible that I am just more cognisant of it.
I think that's another reason that will make your prediction come true: sooner or later demand for this sub-genre will peak, as many people eventually grow bored with it as a fiction theme, as happened with e.g. vampires and zombies.
(^1^ Not sure when "curation" is even human-sourced these days. The overall state of curation, genre-sorting, tagging and algorithmic "recommendations" in commercial books and audiobooks is so terrible... but that's a different rant for another day.)
If someone creates the world's worst playlist, that would play right after RMS's free software song.
For the part on generative AI skills as job requirement: just came across this, and it's beautiful. Made even better by the answer post from an audiobook narrator.
Amazon publishes Generative AI Adoption Index and the results are something! And by "something" I mean "annoying".
I don't know how seriously I should take the numbers, because it's Amazon after all and they want to make money with this crap, but on the other hand they surveyed "senior IT decision-makers"... and my opinion of that crowd isn't the highest either.
Highlights:
- Prioritizing spending on GenAI over spending on security. Yes, that is not going to cause problems at all. I do not see how this could go wrong.
- The junk chart about "job roles with generative AI skills as a requirement". What the fuck does that even mean, what is the skill? Do job interviews now include a section where you have to demonstrate promptfondling "skills"? (Also, the scale of the horizontal axis is wrong, but maybe no one noticed because they were so dazzled by the bars being suitcases for some reason.)
- Cherry on top: one box to the left they list "limited understanding of generative AI skilling needs" as a barrier for "generative AI training". So yeah...
- "CAIO". I hate that I just learned that.
I'm not sure I want to know, but what is the connection between beef tallow and fascism? Is it related to the whole seed oil conspiracy? Or is it one of those imagined ultra manly masculine man things for maxxing the intake of meat? (I'm losing track of all the insane bullshit, there's just too much.)
The myth of the "10x programmer" has broken the brains of many people in software. They appear to think that it's all about how much code you can crank out, as fast as possible. Taking some time to think? Hah, that's just a sign of weakness, not necessary for the ultra-brained.
I don't hear artists or writers and such bragging about how many works they can pump out per week. I don't hear them gluing their hands to the pen of a plotter to increase the speed of drawing. How did we end up like this in programming?
Update on my comment from yesterday: it seems I fell for satire (?). (I don't know the people involved, so no idea, but it seems plausible.)
I hate this position so much, claiming that it's because "the left" wanted "too much". That's not only morally bankrupt, it's factually wrong too. And also ignorant of historical examples. It's lazy and rotten thinking all the way through.
Oh! Wasn't aware of that podcast. Yeah, could be!
Warning: you might regret reading this screenshot of elno posting a screenshot. (cw: chatbots in sexual context)
oh noooo no no no
...but that brings me back to questions about "what does interaction with LLM chatbots do to human brains".
EDIT: as pointed out by Soyweiser below, the lower reply in the screenshot is probably satire.
It doesn't have to be IMO, in particular when it's an older work.
I don't mind at all to rewatch e.g. AI-themed episodes of TNG, such as the various episodes with a focus on Data, or the one where the ship computer gains sentience (it's a great episode actually).
On the other hand, a while ago I stopped listening to a contemporary (published in 2022) audiobook halfway through; it was a utopian AI sci-fi story. The theme of "AI could be great and save the world" just bugged me too much in relation to the current real-world situation. I couldn't enjoy it at all.
I don't know why I feel so differently about these two examples. Maybe it's simply because TNG is old enough that I do not associate it with current events, and the first time I saw the episodes was so long ago. Or maybe it's because TNG is set in a far-future scenario, clearly disconnected from today, while the audiobook is set in a present-day scenario. Hm, it's strange.
(and btw queer loneliness is an interesting theme, wonder if I could find an audiobook involving it)