scruiser

joined 2 years ago
[–] scruiser@awful.systems 4 points 1 week ago (5 children)

> which I estimate is going to slide back out of affordability by the end of 2026.

You don't think the coming crash is going to drive compute costs down? I think the VC money for training runs drying up could drive costs down substantially... but maybe the crash hits other parts of the supply chain and the cost of GPUs and compute goes back up.

> He doubles down on copyright despite building businesses that profit from Free Software. And, most gratingly, he talks about the Pareto principle while ignoring that the typical musician is never able to make a career out of their art.

Yeah this shit grates so much. Copyright is so often a tool of capital to extract rent from other people's labor.

[–] scruiser@awful.systems 10 points 1 week ago (4 children)

I have two theories on how the modelfarmers (I like that slang, it seems more fitting than "devs" or "programmers") approached this...

  1. Like you theorized, they noticed people doing lots of logic tests, including twists on standard logic tests (that the LLMs were failing hard on), so they generated (i.e. paid temp workers to write) a bunch of twists on standard logic tests. And here we are, with the model able to solve a twist on the duck puzzle but not really any better in general.

  2. There has been a lot of talk of synthetically generated datasets (since they've already robbed the internet of all the text they could). Simple logic puzzles could actually be procedurally generated, including the notation diz noted (something like the toy sketch below). The modelfarmers have over-generalized the "bitter lesson" (or maybe they're just lazy/uninspired/looking for a simple solution they can pitch to the VCs and business majors) and think more data, a deeper network, more parameters, and more training will solve anything. So you get the buggy attempt at logic notation, learned from synthetically generated logic notation. (Which still doesn't quite work, lol.)
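
For anyone wondering what "procedurally generated logic puzzles" would even look like in practice, here's a toy sketch of my own (in Python; nothing about the labs' actual data pipelines is public, and every name in it is made up) that churns out endless transitive-ordering puzzles with a prose prompt plus a crude formal notation, the sort of thing you could dump into a training set by the millions:

```python
# Toy illustration only: procedurally generate "who is tallest?" puzzles
# with a prose prompt, a crude formal notation, and the ground-truth answer.
# Nothing here reflects any lab's real pipeline.
import random

NAMES = ["Alice", "Bob", "Carol", "Dave", "Erin"]

def generate_puzzle(n=3, seed=None):
    rng = random.Random(seed)
    people = rng.sample(NAMES, n)  # hidden true ordering, tallest first
    clues, notation = [], []
    for taller, shorter in zip(people, people[1:]):
        clues.append(f"{taller} is taller than {shorter}.")
        notation.append(f"Taller({taller}, {shorter})")
    return {
        "prompt": " ".join(clues) + " Who is the tallest?",
        "notation": " ∧ ".join(notation) + f" ⊢ Tallest({people[0]})",
        "answer": people[0],
    }

if __name__ == "__main__":
    for i in range(3):
        print(generate_puzzle(seed=i))
```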

I don't think either of these approaches will actually work for letting LLMs solve logic puzzles in general; they'll just patch individual cases (approach 1) and make the hallucinations more convincing (approach 2). For all their talk of reaching AGI... the approaches the modelfarmers are taking suggest a mindset of just reaching the next benchmark (to win more VC money, and maybe market share?), not of creating anything genuinely reliable, much less "AGI". (I'm actually on the far optimistic end of sneerclub in that I think something useful might be invented that outlasts the coming AI winter... but if the modelfarmers just keep scaling and throwing more data at the problem, I doubt they'll even manage that much.)

[–] scruiser@awful.systems 9 points 1 week ago

With a name like that and lesswrong to springboard its popularity, BayesCoin should be good for at least one cycle of pump and dump/rug-pull.

Do some actual programming work (or at least write a "white paper") on tying it into a prediction market on the blockchain and you've got rationalist catnip; they should be all over it, and you could get a few cycles of pumping and dumping in before the final rug pull.

[–] scruiser@awful.systems 11 points 1 week ago (6 children)

I feel like some of the doomers are already setting things up to pivot when their most recent major prophecy (AI 2027) fails:

From here:

> (My modal timeline has loss of control of Earth mostly happening in 2028, rather than late 2027, but nitpicking at that scale hardly matters.)

It starts with some rationalist jargon to say the author agrees, just one year later...

> AI 2027 knows this. Their scenario is unrealistically smooth. If they added a couple weird, impactful events, it would be more realistic in its weirdness, but of course it would be simultaneously less realistic in that those particular events are unlikely to occur. This is why the modal narrative, which is more likely than any other particular story, centers around loss of human control the end of 2027, but the median narrative is probably around 2030 or 2031.

Further walking the timeline back, adding qualifiers and exceptions that the authors of AI 2027 somehow didn't explain before. Also, the reason AI 2027 didn't have any mention of Trump blowing up the timeline by doing insane shit is that Scott (and maybe some of the other authors, idk) likes glazing Trump.

> I expect the bottlenecks to pinch harder, and for 4x algorithmic progress to be an overestimate...

No shit, that is what every software engineer blogging about LLMs (even the credulous ones) says, even allowing that LLMs are getting better at raw code writing! Maybe this author is more in touch with reality than most lesswrongers...

> ...but not by much.

Nope, they still have insane expectations.

Most of my disagreements are quibbles

Then why did you bother writing this? Anyway, I feel like this author has set themselves up to claim credit when it's December 2027 and none of AI 2027's predictions are true. They'll exaggerate their "quibbles" into successful predictions of problems in the AI 2027 timeline, while overlooking the extent to which they agreed.

I'll give this author +10 bayes points for noticing Trump does unpredictable batshit stuff, and -100 for not realizing the real reason Scott left any mention of that out of AI 2027.

[–] scruiser@awful.systems 7 points 1 week ago

> Doom feels really likely to me. […] But who knows, perhaps one of my assumptions is wrong. Perhaps there's some luck better than humanity deserves. If this happens to be the case, I want to be in a position to make use of it.

This line actually really annoys me, because they are already set up to move the end date of their doomsday prediction as needed while still maintaining their overall doomerism.

[–] scruiser@awful.systems 7 points 1 week ago* (last edited 1 week ago) (4 children)

Mesa-optimization? I'm not sure who in the lesswrong sphere coined it... but yeah, it's one of their "technical" terms that doesn't actually have any academic publishing behind it, so, jargon.

Instrumental convergence.... I think Bostrom coined that one?

The AI alignment forum has a claimed origin here. Is anyone on the article here from CFAR?

[–] scruiser@awful.systems 8 points 1 week ago* (last edited 1 week ago)

Center For Applied Rationality. They hosted "workshops" where people could learn to be more rational. Except their methods weren't really tested. And pretty culty. And reaching the correct conclusions (on topics such as AI doom) was treated as proof of rationality.

Edit: they still host them, present tense. I had misremembered news of some other rationality-adjacent institution shutting down as being about them; nope, they are still going strong, offering regular 4-day ~~brainwashing sessions~~ workshops.

[–] scruiser@awful.systems 14 points 1 week ago* (last edited 1 week ago) (1 children)

I can use bad analogies also!

  • If airplanes can fly, why can't they fly to the moon? It is a straightforward extension of existing flight technology, and plotting airplane max altitude from 1900-1920 shows exponential improvement in max altitude. People who are denying moon-plane potential just aren't looking at the hard quantitative numbers in the industry. In fact, with no atmosphere in the way, past a certain threshold airplanes should be able to get higher and higher and faster and faster without anything to slow them down.

I think Eliezer might have started the bad airplane analogies... let me see if I can find a link... and I found an analogy from the same author as the 2027 ~~fanfic~~ forecast: https://www.lesswrong.com/posts/HhWhaSzQr6xmBki8F/birds-brains-planes-and-ai-against-appeals-to-the-complexity

Eliezer used a tortured metaphor about rockets, so I still blame him for the tortured airplane metaphor: https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem

[–] scruiser@awful.systems 18 points 1 week ago* (last edited 1 week ago)

This isn't debate club or men-of-science hour; this is a forum for making fun of idiocy around technology. If you don't like that, you can leave (or post a few more times for us to laugh at before you're banned).

As to the particular paper that got linked, we've seen people hyping LLMs misrepresent their research as much more exciting than it actually is (all the research advertising "deceptive" LLMs, for example) many, many times already, so most of us weren't going to waste time tracking down the actual paper (and not just the marketing release) to pick apart the methods. You could say (raises sunglasses) our priors on it being bullshit were too strong.

[–] scruiser@awful.systems 14 points 2 weeks ago* (last edited 2 weeks ago)

As to cryonics... neither the LLM doomers nor the accelerationists have any need for a frozen purgatory when the techno-rapture is supposedly just a few years away.

As for the rest of the shiny futuristic dreams, they have given way to ugly practical realities:

  • no magic nootropics, just Scott telling people to take Adderall and other rationalists telling people to microdose LSD

  • no low-hanging fruit in terms of gene editing (as epistaxis pointed out over on reddit), so they're left with eugenics and GeneSmith's insanity

  • no Drexler nanotech, so they are left hoping (or fearing) that the god-AI can figure it out (which is also a problem for ever reviving cryonically frozen people)

  • no exocortex, just overpriced Google Glasses and a hallucinating LLM "assistant"

  • no neural jacks (or neural lace or whatever the cyberpunk term for them is), just Elon murdering a bunch of lab animals and trying out (temporary) hope on paralyzed people

The future is here, and it’s subpar compared to the early 2000s fantasies. But hey, you can rip off Ghibli’s style for your shitty fanfic projects, so there are a few upsides.

[–] scruiser@awful.systems 5 points 1 month ago

I can already imagine the lesswronger response: Something something bad comparison between neural nets and biological neurons, something something bad comparison with how the brain processes pain that fails at neuroscience, something something more rhetorical patter, in conclusion: but achkshually what if the neural network does feel pain.

They know just enough neuroscience to use it for bad comparisons and hyping up their ML approaches but not enough to actually draw any legitimate conclusions.

[–] scruiser@awful.systems 6 points 1 month ago (1 children)

Galaxy-brain insane take (free to any lesswrong lurkers): they should develop the use of IACUCs for LLM prompting and experimentation. This is proof lesswrong needs more biologists! Lesswrong regularly repurposes comp sci and hacker lingo and methods in inane ways (I swear, if I see the term red-teaming one more time...), and biological science has plenty of terminology to steal and repurpose that they haven't touched yet.
