istewart

joined 7 months ago
[–] istewart@awful.systems 9 points 2 days ago

As I noted on the YouTube video, this is doubly heinous because a lot of CA community college instructors are "freeway flyers" - working at multiple campuses, sometimes almost 100 miles apart, just to cobble together a full-time work schedule for themselves. Online, self-paced, forum-based class formats were already becoming popular even before the pandemic, and I've been in such classes where the professor indicated that I was one of maybe 3 or 4 students who bothered to show up to in-person office hours. I have to wonder if that format will end up being a hard requirement at some point. The bottom rung of the higher-education ladder is already the most vulnerable, and this just makes it worse.

[–] istewart@awful.systems 6 points 6 days ago (2 children)

I have to agree. There are already at least two high-profile failure stories with consequences that are going to stick around for years.

  1. The Israeli military's use of "AI" targeting systems as an accountability sink in service of a predetermined policy of ethnic cleansing.
  2. The DOGE creeps wanting to rewrite bedrock federal payment systems with AI assistance.

And sadly, more are to come. The first story is likely to keep getting hands-off treatment in most US media for a few more years yet, but the second is almost certainly going to generate Tacoma Narrows Bridge-level legends of failure and necessary restructuring once professionals are back in command - the kind of thing that gets put into college engineering textbooks as a dire warning of what not to do.

Of course, it's up to us to keep these failures in the public spotlight and framed appropriately. The appropriate question is not, "how did the AI fail?" The appropriate question is, "how did someone abusively misapply stochastic algorithms?"

[–] istewart@awful.systems 11 points 1 week ago

Would you invest in commercial real estate, knowing there was a non-zero chance your tenants might come in one day to discover a thoroughly intoxicated JD Vance in a compromising position with the break-room furniture?

[–] istewart@awful.systems 9 points 2 weeks ago (1 children)

Mesa-optimization... that must be when you rail some crushed-up Adderall XRs, boof some modafinil for good measure, and spend the night making sure your kitchen table surface is perfectly flat with no defects, abrasions, deviations, contusions...

[–] istewart@awful.systems 15 points 2 weeks ago* (last edited 2 weeks ago)

Couldn't help myself; there are seldom more perfect opportunities to use this one.

[–] istewart@awful.systems 11 points 2 weeks ago (1 children)

Another thread worth pulling is that biotechnology and synthetic biology have turned out to be substantially harder to master than anticipated, and they never seemed to be the primary area of expertise for most of these people anyway. I don't have a copy of any of Kurzweil's books at hand to check his predicted timelines for that stuff, but they're surely way off.

Faulty assumptions about the biological equivalence of digital neural-network algorithms have done a lot of unexamined heavy lifting in driving the current AI bubble and in keeping the harder stuff on the fringes of the conversation. That said, I don't doubt that a few refugees from the bubble-burst will try to inflate the next bubble on the back of speculative biotech, and I've already seen a couple of signs of that.

[–] istewart@awful.systems 7 points 2 weeks ago (2 children)

"This Is What Yudkowsky Actually Believes" seems like a subtitle that would get heavy use in a future episode of South Park about Cartman dropping out after one semester at community college.

[–] istewart@awful.systems 9 points 2 weeks ago (2 children)

Just had a video labeled "auto-dubbed" pop up in my YouTube feed for the first time. Not sure whether the author chose that or YouTube applied it automatically. Too bad - it looks like a fascinating problem to see explained, but I'm not going to trust an AI feature I'm seeing for the first time to explain it. (And perhaps more crucially, I'm a bit afraid of what anime fans will have to say about this.)

[–] istewart@awful.systems 6 points 2 weeks ago (5 children)

Notwithstanding the subject matter, I feel like I've always gotten limited value from these Oxford-style university debates. KQED used to run a series called Intelligence Squared US that crammed the format into an hour, and I shudder to think what that's become in the era of Trump and AI. It seems like a format developed to be the intellectual equivalent of intramural sports, complete with a form of scoring. But that contrivance renders it devoid of nuance, and it also means the format can be used to platform and launder ugly bullshit, since each side has to be strictly pro- or anti-whatever.

Really, it strikes me as a forerunner of the false certainty and point-scoring inherent in Twitter-style short-form discourse. In some ways, the format was unconsciously pared down and plopped online without any inquiry into its weaknesses. I'd be interested to know if anyone feels differently.

[–] istewart@awful.systems 7 points 4 weeks ago (2 children)

There aren't really many options besides Springer and self-publishing for a book like that, right? I've gotten some field-specific article compilations from CRC Press, but I guess that's just a Taylor & Francis imprint these days.

[–] istewart@awful.systems 7 points 4 weeks ago

Considering Tesla's well-documented issues with functional door handles, this may be more accurate than you think.
