scruiser

joined 2 years ago
[–] scruiser@awful.systems 8 points 4 days ago

This week's South Park makes fun of prediction markets! Hanson and the rationalists can be proud their idea has gone mainstream enough to be made fun of. The episode actually does a good job highlighting some of the issues with the whole concept: the twisted incentives, the insider trading, and the way it fails to actually produce good predictions (as opposed to just vibes and degenerate gambling).

[–] scruiser@awful.systems 5 points 5 days ago (3 children)

and the person who made up the "math pets" allegation claimed no such source

I was about to point out that I think this is the second time he's claimed math pets had absolutely no basis in reality (and someone countered with a source that forced him to walk it back), but I double-checked the posting date and this is the example I was already thinking of. Also, we have supporting sources that didn't say as much directly but implied it heavily: https://www.reddit.com/r/SneerClub/comments/42iv09/a_yudkowsky_blast_from_the_past_his_okcupid/ or, like, the entire first two thirds of the plot of Planecrash!

[–] scruiser@awful.systems 6 points 1 week ago

So we Americans do get some "grabbed guns and openly fought" in the history of our Revolutionary War, but it's taught in a way that doesn't link it to any modern movements that armed themselves. And the people most willing to lean into guns and Revolutionary War imagery/iconography tend to be far right wing (and against movements for workers' rights or minorities' rights or such).

[–] scruiser@awful.systems 10 points 1 week ago (2 children)

So, to give the first example that comes to mind: in my education from elementary school through high school, the (US) Civil Rights Movement of the 1950s and 1960s was taught with a lot of emphasis on passive nonviolent resistance, downplaying just how disruptive the protests had to be to be effective and completely ignoring armed movements like the Black Panthers. Martin Luther King Jr.'s interest in and advocacy for socialism was ignored. The level of organization and careful planning by some of the organizations wasn't properly explained. (For instance, Rosa Parks didn't just spontaneously decide not to move her seat one day; organizers planned it and picked her in order to advance a test case, but I don't think any of my school classes explained that until high school.) Some of the force the federal government had to bring to bear against the Southern states (i.e. Federal Marshals escorting Ruby Bridges) was properly explained, but the full scale is hard to visualize. So the overall misleading impression someone could develop or subconsciously absorb is that rights were given to Black people through democratic processes after they politely asked for them, with just a touch of protests.

Someone taking the way their education presents the Civil Rights protests at face value, without further study, will miss the role of armed resistance, miss the level of organization and planning behind pivotal acts, and miss just how disruptive protests had to get to be effective. If you are a capital owner benefiting from the current status quo (or well-paid middle class who perceive themselves as more aligned with the capital owners than with other people who work for a living), then you have a class interest in keeping protests orderly and quiet and harmless and non-disruptive. That vents off frustration in a way that ultimately doesn't force any kind of change.

This hunger strike and other rationalist attempts at protesting AI advancement seem to suffer from this kind of mentality. They aren't organized on a large scale, and they don't have coherent demands they agree on (which is partly a symptom of the fact that the thing they are trying to stop is so speculative and uncertain). Key leaders like Eliezer have come out strongly against any form of (non-state) violence. (Which is a good thing, because their fears are unfounded, but if I actually thought we were doomed with p=.98 I would certainly be contemplating vigilante violence.) (Also, note from the "nuke the datacenters" comments that Eliezer is okay with state-level violence.) Additionally, the rationalists often have financial and social ties to the very AI companies they are protesting, further weakening their ability to engage in effective activism.

[–] scruiser@awful.systems 8 points 1 week ago (4 children)

The way typical US educations (idk about other parts of the world) portray historical protests and activist movements has been disastrous to people's ability to actually succeed in their activism. My cynical assumption is that this is exactly as intended.

[–] scruiser@awful.systems 3 points 1 week ago

So if I understood NVIDIA's "strategy" right, their use of companies like CoreWeave is drawing in money from other investors and private equity? Does this mean that, unlike many of the other companies in the current bubble, they aren't going to lose money on net, because they are actually luring investment from other sources into companies like CoreWeave (which is used to buy GPUs and thus goes to NVIDIA), while leaving the debt/obligations in the hands of companies like CoreWeave? If I'm following right, this is still a long-term losing strategy (assuming some form of AI bubble pop or deflation, which we are all at least reasonably sure of), but the expected result for NVIDIA is more of a massive drop in revenue as opposed to a total collapse of the company under a mountain of debt?

[–] scruiser@awful.systems 7 points 1 week ago (1 children)

Side note: the way I've seen "clanker" used, it refers to the AIs themselves, not their users. I've mostly seen the term in the context of Star Wars memers eager to put their anti-droid memes and jokes to IRL use.

[–] scruiser@awful.systems 3 points 2 weeks ago

It's like a cargo-cult version of bootstrapping or Monte Carlo methods.

[–] scruiser@awful.systems 8 points 2 weeks ago (1 children)

That thread gives me hope. A decade ago, a random internet discussion in which rationalists came up would probably mention "quirky Harry Potter fanfiction" with mixed reviews, whereas all the top comments on that thread are calling out the alt-right pipeline and the racism.

[–] scruiser@awful.systems 5 points 2 weeks ago

I'm at least enjoying the many comments calling her out, but damn, she just doubles down even after being given many, many examples of him being a far-right nationalist monster who engaged in outright attempts to subvert democracy.

[–] scruiser@awful.systems 10 points 2 weeks ago (2 children)

The Oracle deal seemed absurd, but I didn't realize how absurd until I saw Ed's compilation of the numbers. Notably, it means that even if OpenAI meets its projected revenue numbers (which are absurdly optimistic, like bigger than Netflix and Spotify and several other services combined), paying Oracle (along with everyone else it has promised to buy compute from) will keep it losing money until 2030, meaning it has to raise even more money.

I've been assuming Sam Altman has absolutely no real belief that LLMs will lead to AGI and has instead been cynically cashing in on the sci-fi hype, but OpenAI's choices don't make any long-term sense if AGI isn't coming. The obvious explanation is that at this point he simply plans to grift and hype (while staying technically within the bounds of legality) to buy a few years of personal enrichment. And to even ask what his "real beliefs" are gives him too much credit.

Just to remind everyone: the market can stay irrational longer than you can stay solvent!

[–] scruiser@awful.systems 3 points 3 weeks ago (1 children)

This feels like a symptom of liberals having a diluted, incomplete understanding of what made past protest movements succeed or fail.

 

So, lesswrong Yudkowskian orthodoxy is that any AGI without "alignment" will bootstrap to omnipotence, destroy all mankind, blah, blah, etc. However, there has been the large splinter heresy of accelerationists who want AGI as soon as possible and aren't worried about this at all (we still make fun of them because what they want would result in some cyberpunk dystopian shit in the process of trying to reach it). However, even the accelerationists don't want Chinese AGI, because [insert standard sinophobic rhetoric about how they hate freedom and democracy, or have world-conquering ambitions, or simply lack the creativity, technical ability, or background knowledge (i.e. lesswrong screeds on alignment) to create an aligned AGI].

This is a long running trend in lesswrong writing I've recently noticed while hate-binging and catching up on the sneering I've missed (I had paid less attention to lesswrong over the past year up until Trump started making techno-fascist moves), so I've selected some illustrative posts and quotes for your sneering.

  • Good news: China actually has no chance at competing at AI (this was posted before DeepSeek was released). Well, they are technically right that China doesn't have the resources to compete in scaling LLMs to AGI, because it isn't possible in the first place

China has neither the resources nor any interest in competing with the US in developing artificial general intelligence (AGI) primarily via scaling Large Language Models (LLMs).

  • The Situational Awareness essays make sure to get their Yellow Peril fearmongering in! Because clearly China is the threat to freedom and the authoritarian power (pay no attention to the techbro techno-fascists)

In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers?

  • More crap from the same author
  • There are some posts pushing back on having an AGI race with China, but not because they are correcting the sinophobia or the delusion that LLMs are a path to AGI, but because it will potentially lead to an unaligned or improperly aligned AGI
  • And of course, AI 2027 features a race with China that either the US can win with an AGI slowdown (and an evil AGI puppeting China) or both lose to the AGI menace. Featuring "legions of CCP spies"

Given the “dangers” of the new model, OpenBrain “responsibly” elects not to release it publicly yet (in fact, they want to focus on internal AI R&D). Knowledge of Agent-2’s full capabilities is limited to an elite silo containing the immediate team, OpenBrain leadership and security, a few dozen US government officials, and the legions of CCP spies who have infiltrated OpenBrain for years.

  • Someone asks the question directly: Why Should I Assume CCP AGI is Worse Than USG AGI?. Judging by upvoted comments, the lesswrong orthodoxy that all AGI leads to doom is the most common opinion, and a few comments even point out the hypocrisy of promoting fear of Chinese AGI while saying the US should race for AGI to achieve global dominance, but there are still plenty of Red Scare/Yellow Peril comments

Systemic opacity, state-driven censorship, and state control of the media means AGI development under direct or indirect CCP control would probably be less transparent than in the US, and the world may be less likely to learn about warning shots, wrongheaded decisions, reckless behaviour, etc. True, there was the Manhattan Project, but that was quite long ago; recent examples like the CCP's suppression of information related to the origins of COVID feel more salient and relevant.

 

I am still subscribed to slatestarcodex on reddit, and this piece of garbage popped up in my feed. I didn't actually read the whole thing, but basically the author correctly realizes Trump is ruining everything in the process of going after "DEI" and "wokism", but instead of accepting the blame that rightfully falls on Scott Alexander and the author, he deflects and blames the "left" elitists. (I put "left" in quote marks because the author apparently thinks establishment Democrats are actually leftist; I fucking wish.)

An illustrative quote (of Scott's that the author agrees with)

We wanted to be able to hold a job without reciting DEI shibboleths or filling in multiple-choice exams about how white people cause earthquakes. Instead we got a thousand scientific studies cancelled because they used the string “trans-” in a sentence on transmembrane proteins.

I don't really follow their subsequent points; they fail to clarify what they mean... Insofar as "left elites" actually refers to centrist Democrats, I do think the establishment Democrats deserve a major share of the blame, in that their status-quo neoliberalism has been rejected by the public while the Democratic establishment refuses to consider genuinely leftist ideas, but that isn't the point this author is going for... the author is actually upset about Democrats "virtue signaling" and "canceling" and DEI, so they don't actually have a valid point; if anything, the opposite of one.

In case my angry disjointed summary leaves you any doubt the author is a piece of shit:

it feels like Scott has been reading a lot of Richard Hanania, whom I agree with on a lot of points

For reference the ssc discussion: https://www.reddit.com/r/slatestarcodex/comments/1jyjc9z/the_edgelords_were_right_a_response_to_scott/

tl;dr: the author is trying to shift the blame for Trump fucking everything up while keeping up the exact anti-progressive rhetoric that helped propel Trump to victory.

 

So despite the nitpicking they did of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested with racists. The post's author doesn't even count Scott Alexander as "racist" (although they do at least note his HBD sympathies) and still identifies a full 8 racists. They mention a talk discussing the Holocaust as a eugenics event (and added an edit apologizing for their simplistic framing). The post's author is painfully careful and apologetic in distinguishing what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining that the post uses the word "controversial" in the title, complaining about the usage of the term "racist", complaining about the threat to their freeze peach and the open discourse of ideas posed by banning racists, etc.).

 

This is a classic Sequences post: (mis)appropriated Japanese phrases and cultural concepts, references to the AI-box experiment, and links to other Sequences posts. It is also especially ironic given Eliezer's recent switch to doomerism, with his new refrains of "shut it all down" and "AI alignment is too hard" and "we're all going to die".

Indeed, with developments in NN interpretability and a use case of making LLMs not racist or otherwise horrible, it seems to me like there is finally actually tractable work to be done (that is at least vaguely related to AI alignment)... which is probably why Eliezer is declaring defeat and switching to the podcast circuit.
