Perspectivist

joined 3 weeks ago
[–] Perspectivist@feddit.uk 1 points 19 minutes ago

Buddhists had probably figured out a lot of things about the workings of the human mind way before science did.

[–] Perspectivist@feddit.uk 2 points 1 hour ago (1 children)

Emojis are for chatting/texting. I don't even want to see them here.

[–] Perspectivist@feddit.uk 1 points 8 hours ago

The Samsung default one. It's funny when I'm at the hardware store buying supplies for work in the morning and a phone rings - there are like 5 builders who all check their phones.

[–] Perspectivist@feddit.uk 1 points 17 hours ago* (last edited 17 hours ago) (1 children)

I hear you - you're reacting to how people throw around the word “intelligence” in ways that make these systems sound more capable or sentient than they are. If something just stitches words together without understanding, calling it intelligent seems misleading, especially when people treat its output as facts.

But here’s where I think we’re talking past each other: when I say it’s intelligent, I don’t mean it understands anything. I mean it performs a task that normally requires human cognition: generating coherent, human-like language. That’s what qualifies it as intelligent - not generally, like a human, but as a narrow/weak intelligence. The fact that it often says true things is almost accidental. It's a side effect of having been trained on a lot of correct information, not the result of human-like understanding.

So yes, it just responds with statistical accuracy, but that is intelligent in the technical sense. It’s not understanding. It’s not reasoning. It’s just really good at speaking.

[–] Perspectivist@feddit.uk 8 points 18 hours ago (1 children)

Things were different during the pandemic because there was a real risk of vastly overwhelming the healthcare system. Airborne diseases never went away, but the difference is that if you catch one now, you can get treated - which wasn’t always the case back then. That's why we urged people to wear masks and get vaccinated - so that we wouldn't all get sick at the same time.

[–] Perspectivist@feddit.uk 11 points 19 hours ago* (last edited 17 hours ago) (1 children)

I think a huge issue online is just how incredibly mean people can be to each other - and the fact that they don’t even see themselves as mean. They’ve built a story around how their behavior is justified, so they keep doing it, completely oblivious to the fact that they’re part of the problem.

[–] Perspectivist@feddit.uk 10 points 19 hours ago (1 children)

> that no one’s doing anything about

The Ocean Cleanup

[–] Perspectivist@feddit.uk 0 points 19 hours ago (1 children)

A linear regression model isn’t an AI system.

The term AI didn’t lose its value - people just realized it doesn’t mean what they thought it meant. When a layperson hears “AI,” they usually think AGI, but while AGI is a type of AI, it’s not synonymous with the term.

[–] Perspectivist@feddit.uk 0 points 19 hours ago (3 children)

I’ve had this discussion countless times, and more often than not, people argue that an LLM isn’t intelligent because it hallucinates, confidently makes incorrect statements, or fails at basic logic. But that’s not a failure on the LLM’s part - it’s a mismatch between what the system is and what the user expects it to be.

An LLM isn’t an AGI. It’s a narrowly intelligent system, just like a chess engine. It can perform a task that typically requires human intelligence, but it can only do that one task, and its intelligence doesn’t generalize across multiple independent domains. A chess engine plays chess. An LLM generates natural-sounding language. Both are AI systems and both are intelligent - just not generally intelligent.

[–] Perspectivist@feddit.uk 1 points 22 hours ago* (last edited 22 hours ago) (5 children)

What does history have to do with it? We’re talking about the definition of terms - and a machine learning system like an LLM clearly falls within the category of Artificial Intelligence. It’s an artificial system capable of performing a cognitive task that’s normally done by humans: generating language.

[–] Perspectivist@feddit.uk 1 points 23 hours ago

The chess opponent on Atari is AI too. I think the issue is that when most people hear "intelligence," they immediately think of human-level or general intelligence. But an LLM - while intelligent - is only so in a very narrow sense, just like the chess opponent. One’s intelligence is limited to playing chess, and the other’s to generating natural-sounding language.

[–] Perspectivist@feddit.uk 1 points 1 day ago* (last edited 23 hours ago) (15 children)

AI is an extremely broad term which LLMs fall under. You may avoid calling them that, but it's the correct term nevertheless.

 

Now how am I supposed to get this to my desk without either spilling it all over or burning my lips trying to slurp it here? I've been drinking coffee for at least 25 years and I still do this to myself at least 3 times a week.

146
submitted 6 days ago* (last edited 6 days ago) by Perspectivist@feddit.uk to c/til@lemmy.world
 

A kludge or kluge is a workaround or makeshift solution that is clumsy, inelegant, inefficient, difficult to extend, and hard to maintain. Its only benefit is that it rapidly solves an important problem using available resources.
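For a concrete illustration, here's a hypothetical sketch of what a kludge often looks like in code. The scenario and all names (`client`, `request_report`, `download_report`) are made up, not from any real API:

```python
import time

def fetch_report(client, report_id):
    """Request a report, then download it once the backend has written it."""
    client.request_report(report_id)
    # KLUDGE: the backend sometimes hasn't finished writing the file yet,
    # and (in this made-up scenario) there's no status endpoint to poll.
    # Sleeping a fixed 5 seconds rapidly "solves" the problem with what's
    # available - but it's clumsy, fragile, and silently breaks the moment
    # generation ever takes longer than 5 seconds.
    time.sleep(5)
    return client.download_report(report_id)
```

The proper fix (polling for readiness, or a completion callback) would take longer to build - which is exactly why kludges get shipped.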

 

I’m having a really odd issue with my e‑fatbike (Bafang M400 mid‑drive). When I’m on the two largest cassette cogs (lowest gears), the motor briefly cuts power ~~once per crank revolution~~ when the wheel magnet passes the speed sensor. It’s a clean on‑off “tick,” almost like the system thinks I stopped pedaling for a split second.

I first noticed this after switching from a 38T front chainring to a 30T. At that point it only happened on the largest cog, never on the others.

I figured it might be caused by the undersized chainring, so I put the original back on, swapped the original 1x10 drivetrain for a 1x11, and went from a 36T largest cog to a 51T. But no - the issue still persists. Now it happens on the two largest cogs. Whether I’m soft‑pedaling or pedaling hard against the brakes doesn’t seem to make any difference. It still “ticks” once per revolution.

I’m out of ideas at this point. Torque sensor, maybe? I have another identical bike with a 1x12 drivetrain and an 11–50T cassette, and it doesn’t do this, so I doubt it’s a compatibility issue. Must be something sensor‑related? With the assist turned off everything runs perfectly, so it’s not mechanical.

EDIT: Upon further inspection, the moment the power cuts out seems to sync perfectly with the wheel speed magnet passing the sensor on the chainstay, so I'm like 95% sure a faulty wheel speed sensor is the issue here. I have a spare part on order, so I can't confirm yet - but unless there's a second update to this, the new sensor solved it.

EDIT2: I figured it out. It wasn't the wheel sensor itself, but it was related: I added a second spoke magnet for that sensor on the opposite side of the wheel and the problem went away. Apparently at low speeds the time between pulses got too long and the controller cut power to the motor. I also used the Eggrider app to tweak the motor settings so the controller knows there are two magnets instead of one: under "Bafang basic settings", I changed "Speed meter signal" from 1 to 2.
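For anyone curious about the timing, here's a rough back-of-the-envelope sketch of why the second magnet helps. The wheel circumference and the controller timeout are my own assumptions for illustration - I don't know Bafang's actual cutoff value:

```python
# Why a second spoke magnet doubles the pulse rate at a given speed.
WHEEL_CIRCUMFERENCE_M = 2.25  # assumed rollout for a 26x4" fat tire
CONTROLLER_TIMEOUT_S = 2.0    # hypothetical no-pulse cutoff, not Bafang's real value

def pulse_interval_s(speed_kmh, magnets):
    """Seconds between speed-sensor pulses at a given road speed."""
    speed_ms = speed_kmh / 3.6
    revolution_time_s = WHEEL_CIRCUMFERENCE_M / speed_ms
    return revolution_time_s / magnets

for magnets in (1, 2):
    interval = pulse_interval_s(3.5, magnets)  # grinding uphill at 3.5 km/h
    print(f"{magnets} magnet(s): {interval:.2f} s between pulses, "
          f"cut-out: {interval > CONTROLLER_TIMEOUT_S}")
```

With those numbers, one magnet gives ~2.3 s between pulses at 3.5 km/h - long enough for a controller to decide the wheel has stopped - while two magnets halve that to ~1.2 s.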

 

I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, the sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. That it often seems like they do is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
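To make "statistical pattern machine" concrete, here's a toy word-level bigram model - a vastly simplified stand-in for what an LLM does with neural networks over billions of documents. The corpus and code are purely my own illustration:

```python
import random
from collections import defaultdict

# Count which word follows which in a tiny corpus, then "continue the
# prompt" by sampling from those observed continuations. An LLM does the
# same continue-the-text job, just with subword tokens and a huge network.
corpus = "the cat sat on the mat the cat ate the fish".split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def continue_prompt(word, length=5):
    out = [word]
    for _ in range(length):
        candidates = next_words.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))  # pick a plausible next word
    return " ".join(out)

print(continue_prompt("the"))  # e.g. "the cat sat on the mat"
```

Nothing here knows whether "the cat sat on the mat" is true - it just continues the prompt with statistically plausible words, which is exactly the sense in which an LLM's factual accuracy is a side effect of its training data.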

 

I was delivering an order for a customer and saw some guy messing with the bikes on a bike rack using a screwdriver. Then another guy showed up, so the first one stopped, slipped the screwdriver into his pocket, and started smoking a cigarette like nothing was going on. I was debating whether to report it or not - but then I noticed his jacket said "Russia" in big letters on the back, and that settled it for me.

That was only the second time in my life I’ve called the emergency number.
