helopigs

joined 1 year ago
[–] helopigs@lemmy.world 0 points 3 weeks ago (1 children)

yeah, I think the OP's take is really naive

the tools and models will get a lot better, but more importantly, the end products that succeed will make measured, judicious use of AI.

there has always been slop, and people will always misuse tools and create abominations, but the heights of greatness that are possible are increasing with AI, not decreasing

[–] helopigs@lemmy.world 9 points 1 month ago (7 children)

I think 10x is a reasonable long-term goal, given continued improvements in models, agentic systems, and tooling, plus proper use of them.

It's already close for some use cases; for example, understanding a new code base with the help of the Cursor agent is kind of insane.

We've only had these tools for a few years, and I expect software development will be unrecognizable in ten more.

[–] helopigs@lemmy.world 4 points 1 month ago

Essentially, yes. Great point! I think it needs more features to function more like a social network (transitive topic-based sharing, for one)

[–] helopigs@lemmy.world 5 points 1 month ago (2 children)

Hah, I designed one as well!

I think the flow of information has to be fundamentally different.

In mine, people only receive data directly from people they know and trust in real life. This makes scaling easy, and makes it impossible for centralized entities to broadcast propaganda to everyone at once.
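
Roughly, the flow looks like this (just a toy Python sketch of the idea, not the actual design; all the names here are made up):

```python
from collections import defaultdict

class TrustNetwork:
    """Toy model: posts only travel along explicit real-life trust edges."""

    def __init__(self):
        self.trusts = defaultdict(set)   # person -> people they trust in real life
        self.feeds = defaultdict(list)   # person -> posts delivered to them

    def add_trust(self, person, trusted_peer):
        # "person" knows "trusted_peer" in real life and chooses to receive from them
        self.trusts[person].add(trusted_peer)

    def publish(self, author, post):
        # No global broadcast channel: the post is delivered only to people
        # who explicitly trust the author.
        for person, peers in self.trusts.items():
            if author in peers:
                self.feeds[person].append((author, post))


net = TrustNetwork()
net.add_trust("alice", "bob")        # alice knows and trusts bob in real life
net.add_trust("carol", "bob")
net.publish("bob", "dinner at mine on friday")
net.publish("megacorp", "BUY NOW")   # an account nobody trusts in person

print(net.feeds["alice"])  # [('bob', 'dinner at mine on friday')]
print(net.feeds["carol"])  # [('bob', 'dinner at mine on friday')]
# megacorp's post reached nobody, because no trust edge points at it
print(sum(author == "megacorp" for feed in net.feeds.values() for author, _ in feed))  # 0
```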

I described it at freetheinter.net if you're interested

[–] helopigs@lemmy.world 1 points 3 months ago

the issue is that foreign companies aren't subject to US copyright law, so if we hobble US AI companies, our country loses the AI war

I get that AI seems unfair, but there isn't really a way to prevent AI scraping (domestic or foreign) short of removing all public content from the internet

[–] helopigs@lemmy.world 1 points 3 months ago

Sorry for the late reply - work is consuming everything :)

I suspect that we are (like LLMs) mostly "sophisticated pattern recognition systems trained on vast amounts of data."

As for the claim that LLMs have "no true understanding": I don't think there's a definition of "true understanding" that would cleanly separate humans from LLMs. It seems clear that LLMs can extract the information contained within language and use it to answer questions and inform decisions (with adequately tooled agents). I think that acquiring and using information is what's relevant, and that's solved.

Engaging with the real world is mostly a matter of tooling. Real-time learning and more comprehensive multi-modal architectures are just iterations on current systems.

I think it's quite relevant that machines have essentially passed the Turing Test. It's our instinct to gatekeep intellect, moving the goalposts each time one is passed in order to affirm our own relevance and worth, but LLMs have our intellectual essence, and they will continue to improve rapidly while we stagnate.

There is still progress to be made before we're obsolete, but I think it will be just a few years, and then it's just a question of cost efficiency.

Anyways, we'll see! Thanks for the thoughtful reply

[–] helopigs@lemmy.world 3 points 3 months ago

niche communities are still struggling due to the chicken-and-egg problem (and reddit dominance), but it's improving

if there is a party, it's about lemmy's inevitable growth amidst reddit enshittification

[–] helopigs@lemmy.world 1 points 3 months ago (2 children)

relative to where we were before LLMs, I think we're quite close

[–] helopigs@lemmy.world 30 points 4 months ago

the lengths Trump has gone to in removing barriers to committing atrocities likely correspond to the extent to which he intends to commit them

[–] helopigs@lemmy.world 1 points 5 months ago

Peer to peer.

I've spent a bit of time developing some related ideas, but haven't had time to start building anything yet.

It's a bit rough still, but I'd love some feedback! https://freetheinter.net/

[–] helopigs@lemmy.world 7 points 5 months ago

we have to use trust from real life. it's the only thing that centralized entities can't fake

[–] helopigs@lemmy.world 4 points 5 months ago

I think we have to build systems that use real-life interpersonal trust networks so that centralized entities cannot just outspend and bot their way to prominence.
