this post was submitted on 30 Jun 2025
244 points (100.0% liked)

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] JennyLaFae@lemmy.blahaj.zone 13 points 2 days ago (1 children)

They're forcing employees to use these tools because the employees are training them through use.

They know it doesn't work. We're being used to train it.

[–] Architeuthis@awful.systems 11 points 2 days ago (3 children)

Training a model on its own slop supposedly makes it suck more, though. If Microsoft wanted to milk their programmers for quality training data, they should probably be banning Copilot, not mandating it.

At this point it's an even bet that they are doing this because Copilot has groomed the executives into thinking it can do no wrong.

[–] Cataphract@lemmy.ml 10 points 2 days ago

Huh, I've been following the AI chat-psychosis articles and videos being produced now, but I never made the connection that the executives are also being manipulated and are pushing AI as their "savior". It makes total sense in that light.

[–] HedyL@awful.systems 8 points 2 days ago

At this point it's an even bet that they are doing this because Copilot has groomed the executives into thinking it can do no wrong.

This, or their investors (most likely both).

[–] o7___o7@awful.systems 3 points 2 days ago

What if we made a human centipede with a homeopathic quantity of human in it?

[–] Architeuthis@awful.systems 98 points 4 days ago (14 children)

Liuson told managers that AI “should be part of your holistic reflections on an individual’s performance and impact.”

who talks like this

[–] sp3ctr4l@lemmy.dbzer0.com 27 points 3 days ago* (last edited 3 days ago)

People who work at Microsoft.

Source: me. I used to, and was driven moderately insane by their highly advanced and pervasive outbreak of corpospeak.

They are impressed by LLMs because LLMs reproduce their inane dialect.

[–] V0ldek@awful.systems 26 points 3 days ago (2 children)

A company that forces you to write a "Connect" every half-year, where you reflect on your performance and Impact™: (click here for the definition of Impact™ in Microsoft® SharePoint™)

[–] QueenMidna@lemmy.ca 6 points 2 days ago

Gods, I hated writing my Connect

[–] Part4@infosec.pub 13 points 3 days ago
[–] Saleh@feddit.org 62 points 3 days ago (2 children)

By creating a language only they are able to speak and interpret, the managerial class is protecting its existence and self-reproduction, keeping people of other classes out, or letting them in only after they pass through a proper reeducation camp, e.g. an MBA program.

[–] explodicle@sh.itjust.works 10 points 3 days ago (2 children)
[–] korydg@awful.systems 10 points 2 days ago

Gotta come to Latin's defense here -- you can write proper literature, science, poetry in Latin, and people did so for thousands of years. This stuff? Nah. (I am not literate in Latin.)

[–] Ensign_Crab@lemmy.world 5 points 2 days ago

More like a version of Esperanto with proprietary extensions to make it incompatible with standard Esperanto.

[–] Korhaka@sopuli.xyz 10 points 3 days ago

People who add no value to society...

[–] Pyr_Pressure@lemmy.ca 18 points 3 days ago

Business grads who think it makes them sound smart. I have to deal with way too many of them. It's infuriating, because behind it all I know just how dull most of them truly are.

[–] TinyTimmyTokyo@awful.systems 24 points 4 days ago

I have no doubt that a chatbot would be just as effective at doing Liuson's job, if not more so. Not because chatbots are good, but because Liuson is so bad at her job.

[–] sailor_sega_saturn@awful.systems 62 points 4 days ago (6 children)

Before LLMs came along, no one cared what tools I did or didn't use at work. Hell will freeze over before I let a text predictor write code for me, even if that eventually costs me a job. I'm the sort who can't stand any kind of auto-completion or other typing "help", much less spending all my time reviewing LLM output.

[–] EldenLord@lemmy.world 4 points 2 days ago

Wait until you experience AI "assistants" that extract info from your e-mails and texts to hallucinate wildly wrong appointments and summaries, which then get scattered throughout your calendar and notes apps.

At this point any respectable workplace should just throw the whole Microsoft nonsense out and hire a sysadmin who knows what an immutable Linux distro is.

[–] Rhaedas@fedia.io 32 points 4 days ago

LLMs are the next wave of popups here in the second quarter of the 21st century. I've become skilled at dismissing all the requests to let AI help me in whatever I'm actively doing. I about lost it recently when Excel threw one at me at work. NO, I DON'T WANT YOUR HELP!

A better guided search in a help feature, I don't mind. But stop pushing it into everything; just have a way to get to it (and have it WORK when I use it!)

[–] HedyL@awful.systems 58 points 4 days ago (12 children)

FWIW, I work in a field that is mostly related to law and accounting. Unlike with coding, there are no simple "tests" to check whether an AI's answer is correct. Of course, you could test it in court, but that is not something I would recommend (lol).

In my experience, chatbots such as Copilot are less than useless in a context like ours. For the more complex and unique questions (which are most of the questions we deal with every day), they simply make up smart-sounding BS (including a lot of nonexistent laws etc.). In the rare cases where a clear answer is already available in the legal commentaries, we want to quote it verbatim from the most reputable source, just to be on the safe side. We don't want an LLM to rephrase it, hide its sources, and possibly introduce new errors. We don't need "plausible deniability" regarding plagiarism or anything like that.

Yet we are being pushed to "embrace AI" as well; we are being told we need to "learn to prompt", etc. This is frustrating. My biggest fear isn't to be replaced by an LLM, not even by someone who is a "prompting genius" or whatever. My biggest fear is to be replaced by a person who pretends that the AI's output is smart (rather than filled with potentially hazardous legal errors), because in some workplaces, that is apparently what's expected.

[–] scruiser@awful.systems 13 points 3 days ago (4 children)

Unlike with coding, there are no simple "tests" to check whether an AI's answer is correct.

For most actual, practical software development, writing tests is an entire job in and of itself, and a tricky one: covering even a fraction of the use cases and complexity the software will face when deployed is really hard. So simply letting LLMs brute-force their code through a bunch of tests won't actually get you good working code.
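To make that concrete, here's a toy sketch of my own (nothing from any real codebase): a buggy function that sails through its only test, which is exactly the kind of thing an LLM iterating against a weak suite will happily converge on.

```python
def parse_percentage(s: str) -> float:
    """Convert a string like '42%' to 0.42."""
    # Buggy: int() chokes on '42.5%', ' 42%', '+42%', locale commas...
    return int(s.rstrip("%")) / 100

def test_parse_percentage():
    # The only case the suite checks. Brute-forcing code through this
    # test converges on the broken implementation above just fine.
    assert parse_percentage("42%") == 0.42

test_parse_percentage()        # passes, green checkmark
# parse_percentage("42.5%")    # ValueError: the suite never asked
```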

AlphaEvolve kind of did the brute-force-through-evaluations thing, but it was testing very specific, well-defined, well-constrained algorithms for which very specific evaluations could be written, and it used an evolutionary algorithm to guide the trial-and-error process. The paper doesn't say exactly, but that probably meant generating code hundreds, thousands, or even tens of thousands of times to produce relatively short sections of code.

I've noticed a trend where people assume other fields have problems LLMs can handle, while the genuinely competent experts in those fields know exactly why LLMs fail at the key pieces.

[–] buddascrayon@lemmy.world 6 points 2 days ago (1 children)

I think they meant that coders can run their code and find out if it works or not, but lawyers would have to stand in front of a judge or some other legally powerful entity and discover the hard way that the LLM-outputted statements were essentially gobbledygook.

[–] zogwarg@awful.systems 6 points 1 day ago

But code that doesn't crash isn't necessarily code that works. And even with code written by humans, we sometimes find out the hard way, and it can impact an arbitrarily large number of people.

[–] paequ2@lemmy.today 35 points 4 days ago (4 children)

I work in a field that is mostly related to law and accounting... My biggest fear is to be replaced by a person who pretends that the AI’s output is smart

Aaaaaah. I know this person. They're an accountant. They recently learned about AI. They're starting to use it more at work. They're not technical. I told them about hallucinations. They said the AI is rarely wrong. When they're not 100% convinced, they ask the AI to cite its source.... 🤦 I told them it can hallucinate the source! ... And then we went back to "it's rarely wrong though."

[–] buddascrayon@lemmy.world 7 points 2 days ago

My sister does this. She apparently uses ChatGPT to write small bits of code for output on her company's website. Since I left the IT field, she lords it over me that I "don't know how good it is cause I don't have the need to use it for work". I just roll my eyes and wait for the day when her GPT code fails and crashes the corporate site.

So glad I'm not in IT anymore, to tell the truth, cause it's looking more and more like an AI-driven shitshow every day.

[–] HedyL@awful.systems 24 points 4 days ago (3 children)

And then we went back to “it’s rarely wrong though.”

I often wonder whether the people who claim that LLMs are "rarely wrong" somehow have access to an entirely different chatbot. The chatbots I tried were rarely correct about anything except the most basic questions (to which the answers could be found everywhere on the internet).

I'm not a programmer myself, but for some reason I got the chatbot to fail even in that area. I took a perfectly fine JSON file, removed one comma on purpose, and then asked the chatbot to fix it. The chatbot came up with a number of things that were supposedly "wrong" with it. Not one word about the missing comma, though.
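For what it's worth, that's the kind of check a dumb, deterministic parser nails instantly, which makes the chatbot's miss even funnier. A minimal reconstruction of my own, using nothing but Python's standard library:

```python
import json

# A valid snippet with the comma after "test" deliberately removed
broken = '{"name": "test" "version": 1}'

try:
    json.loads(broken)
except json.JSONDecodeError as e:
    # The parser pinpoints the exact problem and location instantly
    print(f"{e.msg} at line {e.lineno}, column {e.colno}")
    # -> Expecting ',' delimiter at line 1, column 17
```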

I wonder how many people either never ask the chatbots any tricky questions (with verifiable answers) or, alternatively, never bother to verify the chatbots' output at all.

[–] dgerard@awful.systems 22 points 3 days ago (15 children)

AI fans are people who literally cannot tell good from bad. They cannot see the defects that are obvious to everyone else. They do not believe there is such a thing as quality, they think it's a scam. When you claim you can tell good from bad, they think you're lying.

[–] diz@awful.systems 17 points 3 days ago* (last edited 3 days ago)

I was writing some math code, and, not being an idiot, I used an open-source math library for something called "QR decomposition". It's efficient, it supports sparse matrices (matrices where many entries are 0), etc.

Just out of curiosity I checked where some idiot vibecoder would end up. The AI simply plagiarizes from shit sample snippets that exist purely to teach people what QR decomposition is. The result is actually unusable, because it's numerically unstable.
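If anyone wants to see the instability rather than take my word for it, here's a quick demo of mine (not the actual vibecoded output): classical Gram-Schmidt, i.e. the textbook-snippet approach, versus NumPy's LAPACK-backed Householder QR on a nearly rank-deficient matrix.

```python
import numpy as np

def gram_schmidt_qr(A):
    """Classical Gram-Schmidt, the kind of tutorial snippet that gets
    regurgitated. Fine for teaching, numerically unstable in practice."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

# Laeuchli matrix: columns that are nearly linearly dependent
eps = 1e-8
A = np.array([[1.0, 1.0, 1.0],
              [eps, 0.0, 0.0],
              [0.0, eps, 0.0],
              [0.0, 0.0, eps]])

Q_gs, _ = gram_schmidt_qr(A)
Q_hh, _ = np.linalg.qr(A)  # LAPACK Householder QR

# Orthogonality loss: ~machine epsilon for a stable method. Classical
# Gram-Schmidt comes out around 1 here, i.e. Q isn't orthogonal at all.
print(np.linalg.norm(Q_gs.T @ Q_gs - np.eye(3)))
print(np.linalg.norm(Q_hh.T @ Q_hh - np.eye(3)))
```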

Who in the fuck even needs this shit to be plagiarized, anyway?

It can't plagiarize a production-quality implementation, because you can count those on the fingers of one hand; they're complex as fuck, and you can't just blend a few together to pretend you didn't plagiarize.

The answer is: the people who are peddling the AI. They are the ones who ordered plagiarism with extra plagiarism on top. These are not coding tools; they are demos to convince investors to buy the actual product, which is the company's stock. There's a little bit of tool functionality (you can ask them to refactor code), but that's just you misusing a demo to try to get some value out of it.

And to that end, the demos take every opportunity to plagiarize something, and to talk about how the "AI" wrote the code from scratch based on its supposed understanding of fairly advanced math.

And in coding, plagiarism is counterproductive. Many open-source libraries can be used in commercial projects. You get upstream fixes for free. You don't end up with bugs, or worse, security exploits, that may have been fixed upstream since the training cut-off date.

No one in their fucking right mind would willingly want their product to contain copy-pasted snippets from stale open-source libraries, passed through some sort of variable-renaming copyright-laundering machine.

Except of course the business idiots who are in charge of software at major companies, who don't understand software. Who just failed upwards.

They look at plagiarized lines and count them as improved productivity.

[–] Saleh@feddit.org 22 points 3 days ago

I have the same worries in engineering. We had a presentation from some AI "consultancy" firm telling us that now is the time to stop hesitating and start doing things with LLMs, with some examples of companies "they" had found in our industry. When I asked whether they knew of any company willing to take on the legal risk if its designs turn out hazardous, there was the sound of crickets. And just with that, LLMs are completely useless for any design task: if I still have to check the design for adherence to all relevant laws, norms and other standards, I might as well do the design myself.

That is not to say there aren't useful tools that fall under what is called "AI" these days. But those tools are designed for specific purposes, by people who understand the specific purpose and its caveats.

[–] MonkderVierte@lemmy.zip 17 points 3 days ago* (last edited 2 days ago)

Translation: they want to ramp up the useless-feature rate, and quality control can go from bad to shit.

[–] Archangel1313@lemmy.ca 34 points 4 days ago (2 children)

We should expect some enterprising Microsoft coder to come up with an automated AI agent system that racks up chatbot metrics for them — while they get on with their actual job.

Lol!
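In that spirit, a purely hypothetical sketch; the `client` object, its `send` method, and the prompts are all invented stand-ins, not any real Copilot API:

```python
import random
import time

# Entirely hypothetical: prompts, client, and method names are invented
# stand-ins for whatever metrics-counting chatbot endpoint applies.
FILLER_PROMPTS = [
    "Summarize this file.",
    "Suggest a better variable name.",
    "Explain this regex.",
]

def farm_engagement(client, hours: float = 8.0) -> None:
    """Ping the chatbot at plausible intervals so the usage dashboard
    lights up, while the human gets on with their actual job."""
    deadline = time.time() + hours * 3600
    while time.time() < deadline:
        client.send(random.choice(FILLER_PROMPTS))  # response ignored
        time.sleep(random.uniform(300, 900))        # every 5-15 minutes
```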
