backgroundcow

joined 2 years ago
[–] backgroundcow@lemmy.world 10 points 1 week ago (3 children)

What we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data

Prove to me that this isn't exactly how the human mind -- i.e., "real intelligence" -- works.

The challenge with asserting how "real" the intelligence-mimicking behavior of LLMs is isn't to convince us that it "just" is the result of cold, deterministic statistical algorithms running on silicon. This we know, because we created them that way.

The real challenge is to convince ourselves that the wetware electrochemical neural unit embedded in our skulls, which evolved through a fairly straightforward process of natural selection to improve our odds of surviving, isn't relying on statistical models whose inner working principles are, essentially, the same.

All these claims that human creativity is so outstanding that it "obviously" will never be recreated by deterministic statistical models that "only" interpolate knowledge picked up from observing human output into new contexts: I just don't see it.

What human invention, piece of art, or idea was so truly, undeniably, completely new that it cannot have sprung from something that came before it? Even the bloody theory of general relativity--held up as one of the pinnacles of human intelligence--has clear connections to what came before. If you read Einstein's works, he is actually very good at explaining how he worked it out in increments from earlier models and ideas ("what happens to a meter stick in space?", etc.): i.e., he was very good at using the tools we have to systematically carry our understanding from one domain into another.

To me, the argument in the linked article reads a bit as "LLM AI cannot be 'intelligence' because when I introspect I don't feel like a statistical machine". This seems about as sophisticated as the "I ain't no monkey!" counter-argument against evolution.

All this is NOT to say that we know that LLM AI = human intelligence. It is a genuinely fascinating scientific question. I just don't think we have anything to gain from the "I ain't no statistical machine" line of argument.

[–] backgroundcow@lemmy.world 9 points 2 weeks ago

That's perfect. You already know your lines!

[–] backgroundcow@lemmy.world 2 points 3 weeks ago

After years of sysvinit experience, the transition to setting up my own systemd services has been brutal. What finally clicked for me was that I had a habit of building mini-services out of shell scripts, and systemd goes out of its way to deliberately break those: it wants a single stable process to monitor, and if it sniffs out that you are doing something sketchy that forks in ways it disapproves of, it will shut the whole thing down.
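For anyone hitting the same wall, a minimal sketch of what finally worked for me (the service and script names here are made up): keep the script running as a single foreground process and let systemd track it directly.

```ini
# /etc/systemd/system/mini-service.service  (hypothetical example)
[Unit]
Description=Example shell-script mini-service
After=network.target

[Service]
# Type=simple: systemd treats the script itself as the main process.
# The script must NOT daemonize or fork itself into the background.
Type=simple
ExecStart=/usr/local/bin/mini-service.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

If the script genuinely has to fork, `Type=forking` plus a `PIDFile=` tells systemd which child to monitor; without that hint it may decide your forked children are strays and clean them up.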

[–] backgroundcow@lemmy.world 4 points 4 weeks ago

It is fully possible, quite likely even, for models to both be "more accurate than humans" on average while at the same time suffer occasional "accuracy collapses".
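A toy illustration with made-up numbers: a model that is near-perfect on most tasks but collapses on a few can still beat a consistently mediocre baseline on average.

```python
# Hypothetical accuracies over 100 tasks.
model_accuracies = [0.99] * 95 + [0.10] * 5   # 5% of tasks hit an "accuracy collapse"
human_accuracies = [0.80] * 100               # humans: consistently decent

model_mean = sum(model_accuracies) / len(model_accuracies)
human_mean = sum(human_accuracies) / len(human_accuracies)

print(f"model mean accuracy: {model_mean:.3f}")   # higher on average...
print(f"human mean accuracy: {human_mean:.3f}")
print(f"model worst case:    {min(model_accuracies):.2f}")  # ...yet far worse in the tail
```

Both statements are true at once: "more accurate than humans on average" and "occasionally catastrophically wrong".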

[–] backgroundcow@lemmy.world 2 points 1 month ago

Crush both apples with the blunt side of the knife. Divide applesauce equally.

[–] backgroundcow@lemmy.world 1 points 1 month ago

I very much understand wanting to have a say against our data being freely harvested for AI training. But this article's call for a general opt-out of interacting with AI seems a bit regressive. Many aspects of this and other discussions about the "AI revolution" remind me of the Mitchell and Webb sketch on the start of the bronze age: https://youtu.be/nyu4u3VZYaQ

[–] backgroundcow@lemmy.world 25 points 2 months ago

John Oliver had a segment on this that may help convince people that it is real: https://youtu.be/3kEpZWGgJks

[–] backgroundcow@lemmy.world 7 points 2 months ago* (last edited 2 months ago)

What you describe is more or less the Nordic economic model, except for the basic income. Corporate abuse is low, because it is not unthinkable to "not work" in response to such abuse, but also because unions are strong. Nevertheless, a lot of people still work a lot, so it doesn't completely resolve the work/life-balance oddity the OP is posting about.

[–] backgroundcow@lemmy.world 3 points 2 months ago* (last edited 2 months ago)

These two are not interchangeable or really even comparable though?

For GNU Make, yes they are. These are fully comparable tools for writing sophisticated dynamic build systems. "Plain make", not so much.

[cmake] makes your build system much, much more robust, far easier to maintain, much more likely to work on other systems than your own, and far easier to integrate with other dependent projects.

This is absolutely incorrect. I assume (although I have never witnessed it) that a true master of cmake could use it to create a robust, maintainable, transferable build system. Very much like there are people able to carve delicate ice sculptures with a chainsaw. But in no way do these properties follow from the choice of cmake as a build system (as your post insinuates); rather, the phrase we are looking for here is: despite using cmake.

I apologize for my inflammatory language. I may just have a bit of PTSD from having to build a lot of other people's software through multiple layers of meta build systems. And cmake comes back, time and time again, as introducing loads of obstacles.

[–] backgroundcow@lemmy.world 27 points 2 months ago (3 children)