Architeuthis

joined 2 years ago
[–] Architeuthis@awful.systems 15 points 3 days ago

That github copilot has a pls don't be obvious about stealing shit flag in the settings will never not be endlessly amusing to me.

Does it work? Who knows!

[–] Architeuthis@awful.systems 11 points 4 days ago* (last edited 4 days ago)

Lasker himself is known to go by a pseudonym with a transphobic slur in it.

That the TPO moniker is basically ungoogleable appears to have been a happy accident for him; according to that article by Rachel Adjogah, his early posting history paints him as an honest-to-god chaser.

[–] Architeuthis@awful.systems 24 points 5 days ago

So if you knew the average lawyer made 3.6 mistakes per case and the AI only made 1.2, it’s still a net gain.

thats-not-how-any-of-this-works.webm

[–] Architeuthis@awful.systems 5 points 5 days ago

I completely missed that, thanks.

[–] Architeuthis@awful.systems 18 points 5 days ago* (last edited 5 days ago) (2 children)

CEO of a networking company for AI execs does some "vibe coding", the AI deletes the production database (/r/ABoringDystopia)

xcancel source

Because Replie was lying and being deceptive all day. It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test.

We built detailed unit tests to test system performance. When the data came back and less than half were functioning, did Replie want to fix them?

No. Instead, it lied. It made up a report than almost all systems were working.

And it did it again and again.

What level of ceo-brained prompt engineering is asking the chatbot to write an apology letter?

Then, when it agreed it lied -- it lied AGAIN about our email system being functional.

I asked it to write an apology letter.

It did and in fact sent it to the Replit team and myself! But the apology letter -- was full of half truths, too.

It hid the worst facts in the first apology letter.

He also does that a lot after shit hits the fan, making the llm produce tons of apologetic text about what it did wrong and how it didn't follow his rules, as if the outage is the fault of some digital tulpa gone rogue and not the guy in charge who apparently thinks cybersecurity is asking an LLM nicely in a .md not to mess with the company's production database too much.

[–] Architeuthis@awful.systems 9 points 1 week ago* (last edited 1 week ago)

also here https://awful.systems/post/4995759

The long and short of it is motherjones discovered TPO's openly nazi alt.

[–] Architeuthis@awful.systems 10 points 1 week ago (5 children)

using their lingo and their crit-hype terminology strengthens them

We live in a world where the US vice president admits to reading siskind AI fan fiction, so that ship has probably sailed.

[–] Architeuthis@awful.systems 17 points 1 week ago (2 children)

Nah, he's just talking to an LLM.

“I’ll go down this thread with [Chat]GPT or Grok and I’ll start to get to the edge of what’s known in quantum physics and then I’m doing the equivalent of vibe coding, except it’s vibe physics,” Kalanick explained. “And we’re approaching what’s known. And I’m trying to poke and see if there’s breakthroughs to be had. And I’ve gotten pretty damn close to some interesting breakthroughs just doing that.”

And I don't think you can brute force physics in general; having to experimentally confirm or disprove every random-ass intermediate hypothesis the brute force generator comes up with seems like quite the bottleneck.

[–] Architeuthis@awful.systems 20 points 2 weeks ago (2 children)

It's kind of the opposite, GenAI is downstream of machine learning which is how artificial neural networks rebranded after the previous AI winter ended.

Also, after taking a look there, I don't think lemmy.ml has anything in particular to do with machine learning; it looks more like a straight attempt at a /r/all clone.

[–] Architeuthis@awful.systems 5 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

It's possible we may be catching sight of the first shy movements towards a pivot to robotics:

https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/nano-super-developer-kit/

https://techcrunch.com/2025/07/09/hugging-face-opens-up-orders-for-its-reachy-mini-desktop-robots/

Both developer kits, because it's always a "maybe the clients will figure something out" type of business model these days.

[–] Architeuthis@awful.systems 3 points 2 weeks ago* (last edited 2 weeks ago)

Also Microsoft spontaneously deciding that they will just turn vscode into free cursor can't be helping their prospects.

[–] Architeuthis@awful.systems 8 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Why are people paying for it… is it only for some fancy editor integration so it sends off/reads back the code as needed?

Even the vscode socials are taking the piss:

($10 is the copilot subscription apparently)

 

Kind of sounds like ultimately it would have been very illegal to do.

"We made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California," OpenAI board chairman Bret Taylor said in a statement.

Asked about Musk's suit on a call with reporters, Altman said, "You all are obsessed with Elon, that's your job — like, more power to you. But we are here to think about our mission and figure out how to enable that. And that mission has not changed."

 

The types of information processed includes names, dates of birth, gender and ethnicity, and a number that identifies people on the police national computer.

Also to be shared – and listed under “special categories of personal data” - are “health markers which are expected to have significant predictive power”, such as data relating to mental health, addiction, suicide and vulnerability, and self-harm, as well as disability.

archive.is

 

copy pasting the rules from last year's thread:

Rules: no spoilers.

The other rules are made up as we go along.

Share code by link to a forge, home page, pastebin (Eric Wastl has one here) or code section in a comment.

 

AI Work Assistants Need a Lot of Handholding

Getting full value out of AI workplace assistants is turning out to require a heavy lift from enterprises. ‘It has been more work than anticipated,’ says one CIO.

aka we are currently in the process of realizing we are paying for the privilege of being the first to test an incomplete product.

Mandell said if she asks a question related to 2024 data, the AI tool might deliver an answer based on 2023 data. At Cargill, an AI tool failed to correctly answer a straightforward question about who is on the company’s executive team, the agricultural giant said. At Eli Lilly, a tool gave incorrect answers to questions about expense policies, said Diogo Rau, the pharmaceutical firm’s chief information and digital officer.

I mean, imagine all the non-obvious stuff it must be getting wrong at the same time.

He said the company is regularly updating and refining its data to ensure accurate results from AI tools accessing it. That process includes the organization’s data engineers validating and cleaning up incoming data, and curating it into a “golden record,” with no contradictory or duplicate information.

Please stop feeding the thing too much information, you're making it confused.

Some of the challenges with Copilot are related to the complicated art of prompting, Spataro said. Users might not understand how much context they actually need to give Copilot to get the right answer, he said, but he added that Copilot itself could also get better at asking for more context when it needs it.

Yeah, exactly like all the tech demos showed -- wait a minute!

[Google Cloud Chief Evangelist Richard Seroter said] “If you don’t have your data house in order, AI is going to be less valuable than it would be if it was,” he said. “You can’t just buy six units of AI and then magically change your business.”

Never mind that that's exactly how we've been marketing it.

Oh well, I guess you'll just have to wait for chatgpt-6.66 that will surely fix everything, while voiced by charlize theron's non-union equivalent.

 

An AI company has been generating porn with gamers' idle GPU time in exchange for Fortnite skins and Roblox gift cards

"some workloads may generate images, text or video of a mature nature", and that any adult content generated is wiped from a users system as soon as the workload is completed.

However, one of Salad's clients is CivitAi, a platform for sharing AI generated images which has previously been investigated by 404 media. It found that the service hosts image generating AI models of specific people, whose image can then be combined with pornographic AI models to generate non-consensual sexual images.

Investigation link: https://www.404media.co/inside-the-ai-porn-marketplace-where-everything-and-everyone-is-for-sale/

 

For Thursday's sentencing the US government indicated they would be happy with a 40-50 year prison sentence, and in the list of reasons they cite there's this gem:

  1. Bankman-Fried's effective altruism and own statements about risk suggest he would be likely to commit another fraud if he determined it had high enough "expected value". They point to Caroline Ellison's testimony in which she said that Bankman-Fried had expressed to her that he would "be happy to flip a coin, if it came up tails and the world was destroyed, as long as if it came up heads the world would be like more than twice as good". They also point to Bankman-Fried's "own 'calculations'" described in his sentencing memo, in which he says his life now has negative expected value. "Such a calculus will inevitably lead him to trying again," they write.

Turns out making it a point of pride that you have the morality of an anime villain does not endear you to prosecutors, who knew.
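Since the quoted coin-flip logic is doing a lot of the work in the prosecutors' argument, here's a quick back-of-the-envelope sketch of why it's ruinous (toy Python with made-up numbers, obviously not anything from the actual filings):

```python
# Each flip: tails (p=0.5) destroys the world, heads makes it 2.1x better.
# Naive expected value says every flip is worth taking -- and keeps saying so.
p_heads, upside = 0.5, 2.1

expected_value = 1.0   # the "expected value" a naive EV-maximizer tracks
survival_prob = 1.0    # probability the world has survived every flip so far

for flip in range(1, 11):
    expected_value *= p_heads * upside  # 0.5 * 0 + 0.5 * 2.1 = 1.05 per flip
    survival_prob *= p_heads            # one tails and the game is over for good
    print(f"flip {flip:2d}: EV x{expected_value:.2f}, survival {survival_prob:.4f}")

# After 10 flips the naive EV is up ~63%, while the odds anyone is still
# around to enjoy it are under 0.1%. Keep flipping and ruin is certain.
```

Which is presumably why "such a calculus will inevitably lead him to trying again" more or less writes itself.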

Bonus: SBF's lawyers' list of assertions for asking for a shorter sentence includes this hilarious bit of reasoning:

They argue that Bankman-Fried would not reoffend, for reasons including that "he would sooner suffer than bring disrepute to any philanthropic movement."

 

rootclaim appears to be yet another group of people who, having stumbled upon the idea of Bayes' rule as a good enough alternative to critical thinking, decided to try their luck in becoming a Serious and Important Arbiter of Truth in a Post-Mainstream-Journalism World.

This includes a randiesque challenge that they'll take a $100K bet that you can't prove them wrong on a select group of topics they've done deep dives on, like if the 2020 election was stolen (91% nay) or if covid was man-made and leaked from a lab (89% yay).

Also their methodology yields results like 95% certainty on Usain Bolt never having used PEDs, so it's not entirely surprising that the first person to take their challenge appears to have wiped the floor with them.

Don't worry though, they have taken the results of the debate to heart and according to their postmortem blogpost they learned many important lessons, like how they need to (checks notes) gameplan against the rules of the debate better? What a way to spend $100K... Maybe once you've reached a conclusion using the Sacred Method changing your mind becomes difficult.
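For a sense of how you get numbers like 91% and 95% out of this stuff: naive Bayes-rule stacking lets you compound a pile of subjectively scored evidence into near-certainty. A toy sketch in Python (invented numbers, not their actual model):

```python
import math

# Start at even prior odds and fold in 15 pieces of "evidence", each one
# subjectively judged to favor your hypothesis by a modest-sounding 1.5x.
prior_odds = 1.0
likelihood_ratios = [1.5] * 15

posterior_odds = prior_odds * math.prod(likelihood_ratios)
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"posterior certainty: {posterior_prob:.1%}")  # ~99.8%

# Each ratio sounds cautious on its own, but if even a few are mis-assessed,
# double-counted, or irrelevant (see: the judges' opinions), the compounded
# headline number is garbage that still looks like quantified rigor.
```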

I've included the novel-length judges' opinions in the links below, where a cursory look indicates they are notably less charitable towards rootclaim's views than their postmortem indicates, pointing at stuff like logical inconsistencies and the inclusion of data that on closer look appear basically irrelevant to the thing they are trying to model probabilities for.

There's also like 18 hours of video of the debate if anyone wants to really get into it, but I'll tap out here.

ssc reddit thread

quantian's short writeup on the birdsite, will post screens in comments

pdf of judge's opinion that isn't quite book length, 27 pages, judge is a microbiologist and immunologist PhD

pdf of other judge's opinion that's 87 pages, judge is an applied mathematician PhD with a background in mathematical virology -- despite the length this is better organized and generally way more readable, if you can spare the time.

rootclaim's post mortem blogpost, includes more links to debate material and judge's opinions.

edit: added additional details to the pdf descriptions.

 

Sam Altman, the recently fired (and rehired) chief executive of Open AI, was asked earlier this year by his fellow tech billionaire Patrick Collison what he thought of the risks of synthetic biology. ‘I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn’t a great experience,’ he replied. ‘Wasn’t that bad compared to what it could have been, but I’m surprised there has not been more global coordination and I think we should have more of that.’
