200fifty

joined 2 years ago
[–] 200fifty@awful.systems 1 points 1 year ago (2 children)

ngl his stuff always felt a bit cynical to me, in that it seemed to exist more to say "look, video games can have a deep message!" than it did to just have such a message in the first place. Like it existed more to gesture at the concept of meaningfulness rather than to be meaningful itself.

[–] 200fifty@awful.systems 1 points 1 year ago (1 children)

> Anyone can copy it, recreate with it, reproduce with it

Ew... stay away from my content, you creep!

[–] 200fifty@awful.systems 2 points 1 year ago

If you think of LLMs as being akin to lossy text compression of a set of text, where the compression artifacts happen to also result in grammatical-looking sentences, the question you eventually end up asking is "why is the compression lossy? What if we had the same thing but it returned text from its database without chewing it up first?" and then you realize that you've come full circle and reinvented search engines
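The "what if the compression were lossless?" endpoint of that analogy can be sketched in a few lines; the corpus and `search` function here are purely illustrative, not any real system:

```python
# Hypothetical sketch: skip the lossy "model" step entirely and just
# return stored text verbatim for matching queries -- i.e. a search engine.
corpus = {
    "doc1": "the cat sat on the mat",
    "doc2": "language models compress text lossily",
}

def search(query: str) -> list[str]:
    """Return ids of documents containing any query term, unmodified."""
    terms = query.lower().split()
    return [doc_id for doc_id, text in corpus.items()
            if any(term in text for term in terms)]
```

Nothing gets "chewed up" on the way out, which is exactly the full-circle point.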

[–] 200fifty@awful.systems 0 points 1 year ago (2 children)

Even with good data, it doesn't really work. Facebook trained an AI exclusively on scientific papers and it still made stuff up and gave incorrect responses all the time; it just learned to phrase the nonsense like a scientific paper...

[–] 200fifty@awful.systems 1 points 1 year ago

"I know not with what technology GPT-6 will be built, but GPT-7 will be built with sticks and stones" -Albert Einstein probably

[–] 200fifty@awful.systems 0 points 1 year ago (1 children)

I think they were responding to the implication in self's original comment that LLMs were claiming to evaluate code in-model and that calling out to an external python evaluator is 'cheating.' But actually as far as I know it is pretty common for them to evaluate code using an external interpreter. So I think the response was warranted here.

That said, that fact honestly makes this vulnerability even funnier because it means they are basically just letting the user dump whatever code they want into eval() as long as it's laundered by the LLM first, which is like a high-school level mistake.
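For illustration, the anti-pattern being mocked looks roughly like this; `run_llm_math_tool` and the stand-in model are hypothetical names, not any real product's code:

```python
# Hypothetical sketch of the mistake described above: the tool trusts
# whatever code the model emits and feeds it straight to eval().
def run_llm_math_tool(user_prompt: str, llm) -> str:
    code = llm(user_prompt)   # model output is steered by the user's prompt
    return str(eval(code))    # so this is user-controlled code execution

# A well-behaved model response works fine...
benign_llm = lambda prompt: "2 + 2"

# ...but a user who talks the model into emitting arbitrary Python
# gets it executed, "laundered" through the LLM exactly as described.
coaxed_llm = lambda prompt: "__import__('os').getcwd()"
```

The fix, of course, is sandboxing the interpreter rather than trusting the model to only produce safe expressions.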

[–] 200fifty@awful.systems 0 points 1 year ago (2 children)

ok but for real... it's not great for finding actual answers to queries, but I find like 800x more interesting results with search.marginalia.nu than any other search engine. It's the only search engine that I find actively fun to just browse around on recreationally.

[–] 200fifty@awful.systems 0 points 2 years ago* (last edited 2 years ago) (2 children)

> When I was a kid [Net Nanny](https://en.wikipedia.org/wiki/Net_Nanny) was totally and completely lame, but the whole millennial generation grew up to adore content moderation. A strange authoritarian impulse.

Me when the mods unfairly ban me from my favorite video game forum circa 2009

(source: first HN thread)

[–] 200fifty@awful.systems 0 points 2 years ago (1 children)

For real though, we must have reached Peak Ad at some point, or at least we're deep into the realm of diminishing returns. This can't go on forever, right? I mean there's a finite number of things that need to be advertised and a finite number of people with a finite amount of time and patience to look at ads. How long until it all collapses?

[–] 200fifty@awful.systems 2 points 2 years ago (1 children)

I like how the assumption seems to be that the thing users object to about "websites track your browsing history around the web in order to show you targeted ads" is... the "websites" part
