YourNetworkIsHaunted

joined 1 year ago
[–] YourNetworkIsHaunted@awful.systems 10 points 12 hours ago (1 children)

What does the “better” version of ChatGPT look like, exactly? What’s cool about ChatGPT? [...] Because the actual answer is “a ChatGPT that actually works.” [...] A better ChatGPT would quite literally be a different product.

This is the heart of recognizing so much of the bullshit in the tech field. I also want to make sure that our friends in the Ratsphere get theirs for their role in enabling everyone to pretend there's a coherent path between the current state of LLMs and that hypothetical future where they can actually do things.

That's probably true, but it also speaks to Ed Zitron's latest piece about the rise of the Business Idiot. You can explain why Wikipedia disrupted previous encyclopedia providers in very specific terms: crowdsourcing production to volunteer editors cuts costs massively and allows the product to be delivered free (which also increases the pool of possible editors and improves quality), and the strict adherence to community standards and sourcing guidelines prevents the worst losses of truth and credibility that you might expect.

But there is no such story that I can find for how Wikipedia gets disrupted by Gen AI. At worst it becomes a tool in the editors' belt, but the fundamental economics and structure just aren't impacted. If you're a business idiot, though, you can't actually explain it either way, and so of course it seems plausible.

At least the lone crypto bro is getting appropriately roasted. They're capable of learning.

[–] YourNetworkIsHaunted@awful.systems 14 points 1 day ago (1 children)

As the BioWare nerd I am, it makes my heart glad to see the Towers of Hanoi doing their part in this fight. And it seems like the published paper undersells how significant this problem is for the promptfondlers' preferred narratives. Given how simple it is to scale the problem complexity in these scenarios, it seems likely that there isn't a viable scaling-based solution here. No matter how big you make the context window or how many steps the system is able to process, it's going to get outscaled by simply increasing some Ns in the puzzle itself.
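The scaling point is easy to make concrete (a minimal sketch of the puzzle itself, not of anything in the paper): the standard recursive solution to an n-disc Tower of Hanoi takes exactly 2^n − 1 moves, so every disc you add doubles the length of the answer the model has to produce.

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Recursively solve Tower of Hanoi, returning the list of moves."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, dst, aux, moves)   # park n-1 discs on the spare peg
    moves.append((src, dst))             # move the largest disc to the target
    hanoi(n - 1, aux, src, dst, moves)   # restack the n-1 discs on top of it
    return moves

# Minimum move count is 2**n - 1, so bumping n doubles the required output:
for n in (5, 10, 20):
    print(n, len(hanoi(n)))  # 5 -> 31, 10 -> 1023, 20 -> 1048575
```

No fixed context window wins that race: the puzzle's output length grows exponentially in n while the window grows (at best) linearly with compute.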

Diz and others with a better understanding of what's actually under the hood have frequently referenced how bad transformer models are at recursion, and this seems like a pretty straightforward way to demonstrate that, and one I'd expect to be pretty consistent.

[–] YourNetworkIsHaunted@awful.systems 7 points 1 day ago (1 children)

That would be the best way to actively catch the cheating happening here, given that the training datasets remain confidential. But I also don't know that it would be conclusive or convincing unless you could be certain that the problems in the private set were similar to the public set.

In any case, either you're double-dipping for credit in multiple places or you absolutely should get more credit for the scoop here.

[–] YourNetworkIsHaunted@awful.systems 15 points 1 day ago (3 children)

The thing that galls me here even more than other slop is that there isn't even some kind of horrible capitalist logic underneath it. Like, what value is this supposed to create? Replacing the leads written by actual editors, who work for free? You already have free labor doing a better job than this, why would you compromise the product for the opportunity to spend money on compute for these LLM not-even-actually-summaries? Pure brainrot.

As usual, the tech media fails to consider the possibility that part of the reason Anthropic is poaching people with promises of more money and huffable farts is to get this exact headline and shake loose another round of funding from the VCs.

🎶 We didn't start the fire

We just tried to profit

From our own new market

We didn't start the fire

Though I see why we might've

I did not ignite it 🎶

I also appreciate how many of the "transformative" actions are just "did a really good thing... with AI!"

HR reduced time-to-hire by 30%! How? They told Jerry to stop hand-copying each candidate's resume (I sleep). Also we tried out an LLM for... something (Real shit).

Like, these are not examples of how AI adoption can benefit your organization and why being on board is important. They're split between "things you can do to mitigate the flaws in AI" and "things that would be good if your organization could do" and an implication that the two are related.

[–] YourNetworkIsHaunted@awful.systems 12 points 5 days ago (2 children)

A) "Why pay for ChatGPT when you could get a math grad student (or hell an undergrad for some of the basics) to do it for a couple of craft beers? If you find an applied math student they'd probably help out just for the joy of being acknowledged." -My wife

B) I had not known about the clusterfuck of 2016, but I can't believe it was easier for the entire scientific establishment to rename a gene than to get Microsoft to add an option to disable automatic date detection, a feature that has never been useful enough to justify how much it messes things up. I mean, I can believe it, but it's definitely on the list of proofs that we are not in God's chosen timeline.

[–] YourNetworkIsHaunted@awful.systems 12 points 5 days ago (1 children)

Possibly OT, but fits in with the "finance ruins everything" motif we've got going here:

My wife and I have been playing Stardew Valley again, and now the algorithms occasionally find us things like this

The continuing presence of stories like this is making me reevaluate my assessment that GenAI will never be good enough to replace creatives, not because I now expect the tech to get better but by adjusting down the level of competency that is apparently permissible. Like, anyone in a vaguely creative sphere who wants to start phoning shit in as aggressively as possible should probably do it if they aren't already.
