this post was submitted on 04 May 2025
18 points (100.0% liked)

TechTakes

1842 readers
123 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] sailor_sega_saturn@awful.systems 6 points 8 hours ago* (last edited 8 hours ago) (1 children)

A minor controversy over the Touhou 20 demo containing generative AI background textures has been making the rounds on ~~the weeaboo parts of~~ social media: https://www.reddit.com/r/touhou/comments/1kf4h6e/touhou_20_spellcard_backgrounds_might_be/

https://imgur.com/a/curious-th20-textures-xq8mm5i

I have made it my whole life without really figuring out what Touhou is all about, but it seems like at least the English fandom isn't really a fan of generative AI. A lot of them are disappointed (or hoping it was an accident due to using stock-art websites). Touhou 19 had an arguably anti-AI afterword.

[–] bitofhope@awful.systems 4 points 8 hours ago

Disappointing if true. The omake for TH19 hits the nail on the head on why genAI as it exists today is antithetical to the ethos of the series.

[–] BlueMonday1984@awful.systems 5 points 18 hours ago

New piece from Brian Merchant: Four bad AI futures arrived this week, taking a listicle-ish approach to some of the horrific things AI has unleashed upon us.

[–] BlueMonday1984@awful.systems 8 points 23 hours ago (3 children)

Ran across a Bluesky thread which caught my attention - it's nothing major, it's just about how gen-AI painted one rando's view of the droids of Star Wars:

Generative AI has helped me to understand why, in Star Wars, the droids seem to have personalities but are generally bad at whatever they're supposed to be programmed to do, and everyone is tired of their shit and constantly tells them to shut up

Threepio: Sir, the odds of successfully navigating an asteroid field are 3720 to one!

Han Solo (knowing that Threepio just pulls these numbers out of Reddit memes about Emperor Palpatine's odds of getting laid): SHUT THE FUCK UP!!!!!!

"Why do the heroes of Star Wars never do anything to help the droids? They're clearly sentient, living things, yet they're treated as slaves!" Thanks for doing propaganda for Big Droid, you credulous ass!

With that out of the way, here's my personal sidenote:

There's already been plenty of ink spilled on the myriad effects AI will have on society, but it seems one of the more subtle effects will be on the fiction we write and consume.

Right off the bat, one thing I'm anticipating (which I've already talked about before) is that AI will see a sharp decline in usage as a plot device - whatever sci-fi pizzazz AI had as a concept is thoroughly gone at this point, replaced with the same all-consuming cringe that surrounds NFTs and the metaverse, two other failed technologies turned pop-cultural punchlines.

If there are any attempts at using "superintelligent AI" as a plot point, I expect they'll be lambasted for shattering willing suspension of disbelief, at least for a while. If AI appears at all, my money's on it being presented as an annoyance/inconvenience (as someone else has predicted).

Another thing I expect is audiences becoming a lot less receptive towards AI in general - any notion that AI behaves like a human, let alone thinks like one, has been thoroughly undermined by the hallucination-ridden LLMs powering this bubble, and thanks to said bubble's widespread harms (environmental damage, mass theft, AI slop, misinformation, etcetera) any notion of AI being value-neutral as a tech/concept has been equally undermined.

With both of those in mind, I expect any positive depiction of AI is gonna face some backlash, at least for a good while.

(As a semi-related aside, I found a couple of people openly siding with the Mos Eisley Cantina owner who refused to serve R2 and 3PO [Exhibit A, Exhibit B])

[–] blakestacey@awful.systems 8 points 21 hours ago

I've noticed the occasional joke about how new computer technology, or LLMs specifically, have changed the speaker's perspective about older science fiction. E.g., there was one that went something like, "I was always confused about how Picard ordered his tea with the weird word order and exactly the same inflection every time, but now I recognize that's the tea order of a man who has learned precisely what is necessary to avoid the replicator delivering you an ocelot instead."

Notice how in TNG, everyone treats a PADD as a device that holds exactly one document and has to be physically handed to a person? The Doylist explanation is that it's a show from 1987 and everyone involved thought of them as notebooks. But the Watsonian explanation is that a device that holds exactly one document and zero distractions is the product of a society more psychologically healthy than ours....

[–] nightsky@awful.systems 4 points 20 hours ago (1 children)

AI will see a sharp decline in usage as a plot device

Today I was looking for some new audiobooks again, and I was scrolling through curated^1^ lists for various genres. In the sci-fi genre, there is a noticeable uptick in AI-related fiction. I have noticed this for a while already, and it's getting more intense. Most seem to be about "what if AI, but really powerful and scary" and singularity-related scenarios. While such themes aren't new at all, it appears to me that there's a wave of them now, though it's also possible that I am just more cognisant of it.

I think that's another reason your prediction will come true: sooner or later demand for this sub-genre will peak, as many people eventually become bored with it as a fiction theme as well. As happened with e.g. vampires and zombies.

(^1^ Not sure when "curation" is even human-sourced these days. The overall state of curation, genre-sorting, tagging and algorithmic "recommendations" in commercial books and audiobooks is so terrible... but that's a different rant for another day.)

[–] blakestacey@awful.systems 5 points 18 hours ago (1 children)

Back in the twenty-aughts, I wrote a science fiction murder mystery involving the invention of artificial intelligence. That whole plot angle feels dead today, even though the AI in question was, you know, in the Commander Data tradition, not the monstrosities of mediocrity we're suffering through now. (The story was also about a stand-in for the United States rebuilding itself after a fascist uprising, the emotional aftereffects of the night when shooting the fascists was necessary to stop them, queer loneliness and other things that maybe hold up better.)

[–] nightsky@awful.systems 4 points 17 hours ago

That whole plot angle feels dead today

It doesn't have to be IMO, in particular when it's an older work.

I don't mind at all rewatching e.g. AI-themed episodes of TNG, such as the various episodes with a focus on Data, or the one where the ship computer gains sentience (it's a great episode, actually).

On the other hand, a while ago I stopped listening to a contemporary (published in 2022) audiobook halfway through; it was a utopian AI sci-fi story. The theme of "AI could be great and save the world" just bugged me too much in relation to the current real-world situation. I couldn't enjoy it at all.

I don't know why I feel so differently about these two examples. Maybe it's simply because TNG is old enough that I do not associate it with current events, and the first time I saw the episodes was so long ago. Or maybe it's because TNG is set in a far-future scenario, clearly disconnected from today, while the audiobook is set in a current-day scenario. Hm, it's strange.

(and btw queer loneliness is an interesting theme, wonder if I could find an audiobook involving it)

[–] Soyweiser@awful.systems 3 points 23 hours ago* (last edited 21 hours ago) (1 children)

That is a bit weird, as iirc the robots in Star Wars are not based on LLMs; the robots in SW can think and can be sentient beings, but are often explicitly limited. (And according to Lucas this was somewhat intentional, to show that people should be treated equally; whether this was the initial intent is unclear, as Lucas does change his mind a bit from time to time. The treatment of robots as slaves in SW is considered bad.) What a misreading of the universe and the point. Also, time flows the other way, folks: LLMs didn't influence the creation of the robots in SW.

Also if the droids were LLMs, nobody would use them to plot hyperspace lanes past stars. Somebody could send a message about Star Engorgement and block hyperspace travel for weeks.

But yes, the backlash is going to be real.

E: ow god im really in a 'take science fiction too seriously' period. (more SW droids are not LLMs stuff)

[–] BlueMonday1984@awful.systems 6 points 21 hours ago (1 children)

E: ow god im really in a ‘take science fiction too seriously’ period.

People taking sci-fi too seriously was how LessWrong and the AI bubble happened, I'd say you're pretty far from taking it too seriously :P

[–] Soyweiser@awful.systems 3 points 21 hours ago* (last edited 20 hours ago)

looks up from recording my new mathematically speaking 'a podcast for the new thinking man' podcast

Phew, I was worried for a moment there.

E: Apologies, it is real, I should have googled it. I know nothing of the podcast, I just tried to make a 'this is what a Rationalist/logic bro would name their podcast' joke. Ow god, he even has an episode about conflict theory (but in contrast to Scott's post on conflict theory he actually talks about a historical mathematician, so not the same thing, but that was a moment of double take). There is also an Adam Allred who is a 'masculinity speaker' or something, who is also into AI, MAGA and everything else of course, but not sure if they are the same person (nope, different people; turns out if you are called Adam Allred you are forced to make a podcast). But the math-podcast Adam seems to be a good guy who is pro-LGBT/BLM etc. (He did get his twitter account hacked, which is now spamming people.)

[–] Soyweiser@awful.systems 6 points 22 hours ago (1 children)

Unrelated to my recent posts on science fiction, and not sure if this is something I should ask here publicly, but it is the easiest place I could think of. @dgerard@awful.systems is RationalWiki dead or not?

[–] dgerard@awful.systems 7 points 18 hours ago (1 children)

it's very tired and shagged out after a long squawk. Just today I did [REDACTED] to alleviate the DDOS scraper bots and it's ticking along nicely right now. Keeping an eye on things.

[–] Soyweiser@awful.systems 7 points 18 hours ago

Ah, so that is good news; I was worried it might be offline due to legal troubles, but it is technical troubles. Thanks for all your (and everybody else involved) service and all that.

[–] fullsquare@awful.systems 7 points 1 day ago* (last edited 1 day ago) (1 children)

Derek Lowe comes in with another sneer at techbro optimism, at a collection of AI startup talking points wearing the skins of people and saying that definitely all medicine is solved, just throw more compute at it: https://www.science.org/content/blog-post/end-disease (it's two weeks old, but it's not like any of you read him regularly). more relevantly he also links all his previous writing on this topic, starting with a 2007 piece about techbros wondering why nobody had brought SV Disruption™ to pharma: https://www.science.org/content/blog-post/andy-grove-rich-famous-smart-and-wrong

interesting to see that he reaches some pretty much compsci-flavoured conclusions despite not having a compsci background. still not exactly there yet, as he leaves open some possibility of AGI

[–] Soyweiser@awful.systems 5 points 1 day ago (1 children)

it’s not like any of you read him regularly

Of course not he is a capitalist pigdog! A traitor to the cause! Bla bla. ;)

I posted his work here before, despite thinking he isn't totally correct in his stance on capitalism stuff. He seems to be a good source on the whole medicinal chemistry field, and quite skeptical and hype-resistant. (Prob also why I could make the self-deprecating joke above.) He also wrote negatively about the hackers who do the homemade-meds thing.

[–] fullsquare@awful.systems 4 points 20 hours ago (1 children)

He wrote also negatively about the hackers who do homemade meds thing.

i've heard about them before and got reminded of their existence against my will recently. (do you know that somebody made a recommendation engine for peertube? can you guess which CCC talk from last winter was on top of the pile in their example?)

you know, i think they have a bit of that techbro urge to turn every human activity into a series of marketable ESP32 IoT-enabled widgets, except that they don't do it to woo VCs; they say they do it for the betterment of humanity, but i think they're doing it for clout. because lemmy has only communist programmers and no one else, not much later i stumbled upon an essay on how trying to make programming languages easier in some ways is doomed to fail, because the task of programming itself is complex and much more than just writing code, and if you try, you get monstrosities like COBOL. i'm not in IT but it seems to me that this take is more common among fans of C and has little overlap with the type of techbros from above.

so in some way, they are trying to cobolify backyard chemistry. the thing that is stupid about it is that it has been done before, and it's a very useful tool, and also it does something completely opposite to what they wanted to do. it's called solid-phase peptide synthesis, and it replaces a synthetic process that previously was done in liquid phase (that is, like you usually do, in normal solutions in normal flasks). (there's also a way to make synthetic DNA/RNA in a similar way. both have the limitation that only a certain number of aminoacids/bases is actually practical). the thing about SPPS is that it can be automated: you can just type in the sequence of a peptide you want to get, and the machine handles part of the rest.

what you gotta give them is that automated synthesis allows for rapid output of many compounds. but it's also hideously expensive, uses equally expensive reagents, and requires constant attention and maintenance from two, ideally more, highly trained professionals in order to keep it running, and even then syntheses can still fail. in order to figure out what went wrong you need analytical equipment that costs about as much as the first machine, and then you have to unfuck that failed synthesis in the first place, which is something a non-chemist won't be able to do. and even when everything goes right, the product is still dirty and has to be purified using yet other equipment. and even when it works, scaleup requires a completely different approach (the older one), because it just doesn't scale well above early-phase research amounts.

what i meant to say is that while automation of this kind is good, because it allows humans to skip mind-numbingly repetitive steps (and to focus on "the everything else" aspect of research, like experiment planning, or the parts of synthesis that the machine can't do - which tend to be harder and so more rewarding problems), this all absolutely does not lead to the deskilling of synthesis that this bozo in the camo vest wanted; i'd say it's exactly the opposite. there's also the entire aspect of how they don't do analysis or purification of anything, and this alone i guess will kill people at some point

[–] Soyweiser@awful.systems 4 points 20 hours ago (3 children)

on how trying to make programming languages easier in some ways is doomed to fail

This is prob right, but the 'in some ways' part does a lot of work here. Think the issue is that some complexity can be removed without problem, and some absolutely cannot. And the problem of figuring out which is which is hard. (Which if you squint, seems to be similar to the chemistry stuff you describe here). With software it (as far as I can tell) is also quickly that bigger projects need bigger teams, and that adds a lot of communication problems, and as a non-stacking process you can't just add more programmers to make stuff go faster (compared to, for example, building a building, which can be sped up a lot more with just more workers) as these communication problems remain. From what I heard, this, and the problem of maintaining software on a large scale, is what Java was trying to fix. Which is why all programmers love Java. It is a language for enterprise-scale projects. (On that note, this is also why a lot of people hate Java for the wrong reasons; much of the hated stuff makes sense if you recall it is made for enterprise-scale projects/teams etc. It is an attempt to make those projects easier; let's leave it in the middle whether that attempt worked. Do think it is amusing that Minecraft of all things was initially coded in Java by a single person.)
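The non-stacking point is essentially Brooks's law: pairwise communication channels grow quadratically with head count, so adding people adds coordination overhead faster than it adds hands. A toy illustration (numbers only, nothing domain-specific):

```python
def channels(n: int) -> int:
    """Pairwise communication paths in a team of n people: n*(n-1)/2."""
    return n * (n - 1) // 2

# Doubling the team size roughly quadruples the coordination overhead.
for size in (2, 5, 10, 20):
    print(f"{size:>2} people -> {channels(size):>3} channels")
```

Which is why ten bricklayers lay bricks about five times faster than two, but ten programmers do not ship five times faster than two.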

Interesting that our community seems to attract a few outspoken chemistry people. Not something I know much about; I know somebody who does something with crystal-chemistry machines, and when he gets technical about it I'm happy if I understand about 30% :).

[–] pikesley@mastodon.me.uk 5 points 5 hours ago (1 children)

@Soyweiser @fullsquare

"With software it (as far as I can tell) is also quickly that bigger projects need bigger teams, and that adds a lot of communication problems, and as a non-stacking process you can’t just add more programmers to make stuff go faster"

I bought two copies of that Fred Brooks book so I could read it twice as fast

[–] cstross@wandering.shop 4 points 5 hours ago

@pikesley @Soyweiser @fullsquare IIRC the coordination problem afflicts all engineering disciplines: with large tech projects like the LHC and ITER, costs scale as something like the fourth power of the energies they're working with, and a large part of the reason is that managing the project is insanely difficult. I'd love to see some numbers for how the management complexity of large software projects (eg. operating systems, LLMs) compares to this.

[–] fullsquare@awful.systems 2 points 7 hours ago* (last edited 6 hours ago)

Think the issue is that some complexity can be removed without problem, and some absolutely cannot. And the problem of figuring out which is which is hard. (Which if you squint, seems to be similar to the chemistry stuff you describe here).

Well, i'm not exactly sure about it, but what i can do is describe how this process works in terms of operations, and you can draw your own conclusions. There's not that much complexity in peptides in the first place, because synthetically all you have to do is make a lot of amide bonds, and this is a solved problem. A slightly bigger problem is to do it in a controlled way, which is why protecting groups are used, but this has also been around for decades.

The trick is to bind the thing you want to get to a resin, which makes it always insoluble, and therefore your product always stays in the reactor. This can be an actual dedicated automated reactor or a syringe with a filter. We start with

Resin-NH-Fmoc

Fmoc is a protecting group that falls off when flushed with a base, so we do so, wash resin, and get

Resin-NH2

Then we can add coupling agent and protected aminoacid, for example leucine, then wash again, and this gets us

Resin-NH-Leu-Fmoc

then repeat. All operations are: add reagent or wash solvent, stir, wait, drain, repeat. Deprotection, solvent, coupling, solvent, repeat. It's all very amenable to automation and it was explicitly designed this way. When all is done, the resin is treated with acid, which releases the peptide, and because the resin can be washed there are no leftover reagents.
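Sketched as code (purely illustrative; the operation names paraphrase the steps above and are not any real instrument's command set), the automated part really is just a fixed loop per residue:

```python
def spps_ops(sequence):
    """List the machine operations to assemble `sequence` (first residue
    attaches to the resin) via the Fmoc cycle described above."""
    ops = []
    for aa in sequence:
        ops += [
            "flush with base (Fmoc falls off)",  # deprotection
            "wash with solvent",
            f"add coupling agent + Fmoc-{aa}",   # coupling
            "wash with solvent",
        ]
    ops.append("treat resin with acid (release peptide)")
    return ops

ops = spps_ops(["Leu", "Gly", "Ala"])  # 4 operations per residue + cleavage
```

The entire "program" is the sequence; everything else is the same four operations over and over, which is exactly what makes it automatable.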

Of course it can all be done by hand, and this allows for things like putting a couple of aminoacids on resin at a big scale, then splitting it into a couple of batches and attaching different things on top of that in parallel. The machine can't do this (at least a machine like we have). The machine can instead run all of this while hot, which makes it fast, but sometimes things break this way; also, the machine can run unattended for more than one shift (when it's not broken). Sometimes things fail to work anyway, and it's the job of a specialist to figure it out and unfuck it. Sometimes the peptide folds on itself in such a way that the -NH2 end is hidden inside and the next residue can't be attached. This can be fixed by gluing two aminoacids together in a flask and then using the pair instead of a single one in the machine, bypassing that problematic step. Or in a couple of other ways, and picking the right one requires knowing what you are doing.

Solution-phase synthesis looks different, because every step requires purification after it, which is sometimes a thing you can wing and sometimes not. The advantage is that when you need lots of product, you can just use bigger flasks, while a bigger machine (and large amounts of resin) gets prohibitively expensive. Ozempic was made in solution (at least once), for example. Again, doing things by hand gets you extra flexibility, because the machine can only make a peptide from start to finish in a single run, but if it's done in solution instead, you can start from, say, five points and then put the pieces together (which starts to look like convergent synthesis, and this also makes it better for large scale). The machine can't do that (unless the pieces are provided, but at that point most of the work is done).

Looking back at these people: even when the operations are simplified, there's no deskilling of operators like they aimed for, it's just throughput that increases. They also don't have the benefit of that "keeping the important thing always in the reactor" trick.

[–] fullsquare@awful.systems 3 points 19 hours ago* (last edited 19 hours ago) (1 children)

i can't find that essay now, but i think it was written in latex, and besides cobol it also complained about java, ada (in a military context), and a kind of non-programming where a block diagram made by non-programmers was turned into an executable

[–] Soyweiser@awful.systems 3 points 18 hours ago

it was written in latex

Of course it was. (I enjoy LaTeX myself but still, it is a type of person)

[–] BlueMonday1984@awful.systems 11 points 1 day ago

In other news, SoundCloud's become the latest victim of the AI scourge - artists have recently discovered the TOS has allowed their work to be stolen for AI training since early 2024.

SoundCloud's already tried to quell the backlash, but they're getting accused of lying in the replies and the QRTs, so it's safe to say it's not working.

[–] Soyweiser@awful.systems 11 points 1 day ago* (last edited 1 day ago) (1 children)

Artist notices that his horror creations get listed by AI bots as real. Decides to troll. It works. 2 hours, 1 source. We are so cooked.

[–] antifuchs@awful.systems 7 points 1 day ago (1 children)

The self-fulfilling prophecy machine will ensure that engorgement will become a household term

[–] Soyweiser@awful.systems 6 points 1 day ago

Yes, of course this stuff isn't new; google bombing (or looking for the specific words where there are suddenly more black than white people in a google image search) has been a thing for a while now. But this has the added authority of Google's AI saying it. And it also needs just 1 source (which was always a fiction account), which is what makes it scary.

The whole Trust Me, I'm Lying multi-step program of getting something into the media can be tossed out. It is now even easier to lie online.

[–] BlueMonday1984@awful.systems 8 points 1 day ago

Quick update from Brian Merchant: he's looking for stories about AI screwing people over.

If you've got any, send them to AIKilledMyJob@pm.me

[–] rook@awful.systems 16 points 2 days ago (5 children)

Here’s a fun one… Microsoft added copilot features to sharepoint. The copilot system has its own set of access controls. The access controls let it see things that normal users might not be able to see. Normal users can then just ask copilot to tell them the contents of the files and pages that they can’t see themselves. Luckily, no business would ever put sensitive information in their sharepoint system, so this isn’t a realistic threat, haha.
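What's described is the classic confused-deputy failure: authorisation is checked against the agent's own identity rather than the requesting user's. A minimal sketch (all names and the ACL are hypothetical, not Microsoft's actual code):

```python
# Hypothetical ACL: which identities may read which SharePoint file.
ACL = {"salaries.xlsx": {"hr", "copilot-service"}}

def can_read(identity: str, doc: str) -> bool:
    return identity in ACL.get(doc, set())

def ask_copilot_broken(user: str, doc: str) -> str:
    # BUG: the assistant authorises with ITS OWN service identity,
    # so any user can read anything the service account can see.
    return "contents" if can_read("copilot-service", doc) else "denied"

def ask_copilot_fixed(user: str, doc: str) -> str:
    # FIX: the assistant must act with the caller's permissions.
    return "contents" if can_read(user, doc) else "denied"
```

An intern asking the broken assistant for salaries.xlsx gets the contents; the fixed one denies it, the same as a direct file request would.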

Obviously Microsoft have significant resources to research and fix the security problems that LLM integration will bring with it. So much money. So many experts. Plenty of time to think about the issues since the first recall debacle.

And this is what they’ve accomplished.

https://www.pentestpartners.com/security-blog/exploiting-copilot-ai-for-sharepoint/

[–] dgerard@awful.systems 3 points 1 day ago (1 children)

Is that this one or a different instance of the same bug?

[–] rook@awful.systems 4 points 1 day ago

I think that these are different products? I mean, the underlying problem is the same, but copilot studio seems to be “configure your own llm front-end” and copilot for sharepoint seems to be an integration made by the sharepoint team themselves, and it does make some promises about security.

Of course, it might be exactly the same thing with different branding slapped on top, and I’m not sure you could tell without some inside information, but at least this time the security failures are the fault of Microsoft themselves rather than incompetent third party folk. And that suggests that copilot studio is so difficult to use correctly that no-one can, which is funny.

[–] rwg@aoir.social 6 points 2 days ago

@rook @BlueMonday1984 wow. Why go to all the trouble of social engineering a company when you can just ask Copilot?

[–] Soyweiser@awful.systems 4 points 1 day ago

Abusing privileged identities like this is apparently a thing the younger hackers are quite good at, so this will all be fun.

[–] ysegrim@furry.engineer 5 points 2 days ago

@rook @BlueMonday1984 Maybe they have asked CoPilot to write the code that restricts access for CoPilot?

(Sometimes this future feels like 2001: A Space Odyssey, just as a farce. And without benevolent aliens.)

[–] rook@awful.systems 13 points 2 days ago (4 children)

They’re already doing phrenology and transphobia on the pope.

(screenshot of a Twitter post with dubious coloured lines overlaid on some photos of the pope’s head, claiming a better match for a “female” skull shape)

[–] mountainriver@awful.systems 3 points 1 day ago (1 children)

I've never looked into how they do the phrenology, but I was immediately struck by the "female" skull having the larger forehead. So they're saying women are big-brained?

[–] Soyweiser@awful.systems 4 points 1 day ago

The Male vs Female skull. Science!

[–] bitofhope@awful.systems 6 points 1 day ago

I think this mostly proves that Leo XIV is a moe anime character.

[–] Soyweiser@awful.systems 9 points 2 days ago

Painting a cross on the skull of the pope and then claiming this is wrong is a whole new kind of heresy.

[–] froztbyte@awful.systems 7 points 2 days ago

.....I was unprepared for reading this post

[–] BlueMonday1984@awful.systems 7 points 2 days ago

Got a nice and lengthy sneer from film blog That Final Scene: the uncanny valet is not your friend (and other AI stories)

Beyond being an utter castigation of AI bros' "attempts" at aping art, it's also wonderfully written from start to finish. Go check it out.

[–] gerikson@awful.systems 11 points 2 days ago (1 children)

New eugenics conference just dropped

https://www.lesswrong.com/posts/8ZExgaGnvLevkZxR5/attend-the-2025-reproductive-frontiers-summit-june-10-12

"Chatham House rules" so they can happily be racist without anyone pointing fingers at them.

[–] Architeuthis@awful.systems 10 points 2 days ago

the genomic emancipation of humanity

ffs, the euphemisms keep piling on today.
