TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
I hate that I'm so terminally online that I found out about the rumor that Musk and Stephen Miller's wife are bumping uglies through a horror-fic parody account
https://mastodon.social/@bitterkarella@sfba.social/114593332907413196
Midnight Pals is pretty great.
New article from Brian Merchant: An 'always on' OpenAI device is a massive backlash waiting to happen
Giving my personal thoughts on the upcoming OpenAI Device^tm^: I think Merchant's correct to expect mass-scale backlash against the Device^tm^ and public shaming/ostracisation of anyone who decides to use it, especially considering it's an explicit repeat of the widely clowned-on Humane AI Pin.
Headlines of Device^tm^ wearers getting their asses beaten in the street to follow soon afterwards. As Brian's noted, a lot of people would see wearing an OpenAI Device^tm^ as an open show of contempt for others, and between AI's public image becoming utterly fouled by the bubble and Silicon Valley's reputation going into the toilet, I can see someone treating a Device^tm^ wearer as an opportunity to take their well-justified anger at tech corps out on someone who openly and willingly bootlicks for them.
Part of me wonders if this is even supposed to be a profitable hardware product or if they're sufficiently hard-up for training data that "put always-on microphones in as many pockets as possible" seems like a good strategy.
It's not, both because it's kinda evil and because it's definitely stupid, but I can more easily see it as an attempt to solve the data problem than I can see anyone thinking this is actually a good or useful product to create.
What is solving the data problem supposed to look like, exactly? A somewhat higher score on their already incredibly suspect benchmarks?
The data part of the whole hyperscaling thing seems predicated on the belief that the map will magically become the territory if only you map hard enough.
I fully agree, but as data availability is one of the primary limits that hyperscaling is running up against I can see the true believers looking for additional sources, particularly sources that aren't available to their competitors. Getting a new device in people's pockets with a microphone and an internet link would be one such advantage, and (assuming you believe the hyperscaling bullshit) would let OpenAI rebuild some kind of moat to keep themselves ahead of the competition.
I don't know, though. Especially after the failure of at least 2 extant versions of the AI companion product I just can't imagine anyone honestly believing there's enough of a market for this to justify even the most ludicrously optimistic estimate of the cost of bringing it to market. It's either a data thing or a straight-up con to try and retake the front page for another few news cycles. Even the AI bros can't be dumb enough for it to be a legit effort.
When I get a minute, I intend to do a back of the napkin calc to figure out how many words 100 million of these things would hear on an average day.
100 million sounds like a target that was naively pooped out by some other requirement, like "How much training data do we need to scale to GPT-5 before the money runs out, assuming the dumbest interpolation imaginable?"
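A first pass at that napkin calc, with the caveat that every number here is a guess rather than data (the 20,000 words/day figure is my own rough assumption about ambient speech within earshot, and the tokens-per-word ratio and corpus size are ballpark figures commonly thrown around for frontier models):

```python
# Back-of-the-napkin: how many words would 100 million always-on
# microphones overhear per day? All inputs are rough assumptions.
DEVICES = 100_000_000          # the hypothetical 100M-unit target
WORDS_HEARD_PER_DAY = 20_000   # guess: spoken words within earshot of one device

words_per_day = DEVICES * WORDS_HEARD_PER_DAY
print(f"{words_per_day:,} words/day")  # 2,000,000,000,000 words/day

# For scale: frontier training sets are reported in the low tens of
# trillions of tokens, so at ~1.3 tokens/word this haul would rival
# one such corpus in under a week (under these assumptions).
TOKENS_PER_WORD = 1.3
days_for_10t_tokens = 10e12 / (words_per_day * TOKENS_PER_WORD)
print(f"~{days_for_10t_tokens:.1f} days to overhear 10T tokens")
```

Of course most of that audio would be silence, TV noise, and duplicated small talk, so the usable yield would be far lower, but the napkin math shows why the "microphones in every pocket" theory isn't crazy as a data play.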
Loose Mission Impossible Spoilers
The latest Mission Impossible movie features a rogue AI as one of the main antagonists. On the other hand, the AI's main powers are lies, fake news, and manipulation; it only gets as far as it does because people let fear make them manipulable, and it relies on human agents to do a lot of its work. So in terms of promoting the doomerism narrative, I think the movie could actually be taken as opposing the conventional doomer narrative in favor of a calm, moderate, internationally coordinated response (the entire plot could have been derailed by governments agreeing on mutual nuclear disarmament before the AI subverted them) against AIs that ultimately have only moderate power.
Adding to the post-LLM-hype predictions: I think after the LLM bubble pops, "Terminator"-style rogue-AI movie plots don't go away, but take on a different spin. Rogue AIs' strengths are going to be narrower, their weaknesses more comical and absurd, and idiotic human actions more of a factor. For weaknesses it will be less "failed to comprehend love" or "cleverly constructed logic bomb breaks its reasoning" and more "forgets what it was doing after getting drawn into too long a conversation". For human actions it will be less "its makers failed to anticipate a completely unprecedented sequence of bootstrapping and self-improvement" and more "its makers disabled every safeguard and granted it every resource it asked for in the process of trying to make an extra dollar a little bit faster".
Interesting (in a depressing way) thread by author Alex de Campi about the fuckery by Unbound/Boundless (crowdfunding for publishing, which segued into financial incompetence and stealing royalties), whose latest incarnation might be trying to AI their way out of the hole they’ve dug for themselves.
From the liquidator’s proposals:
We are also undertaking new areas of business that require no funds to implement, such as starting to increase our rights income from book to videogaming by leveraging our contacts in the gaming industry and potentially creating new content based on our intellectual property *utilizing inexpensive artificial intelligence platforms*.
(emphasis mine)
They don’t appear to actually own any intellectual property anymore (due to defaulting on contracts) so I can’t see this ending well.
Original thread, for those of you with bluesky accounts: https://bsky.app/profile/alexdecampi.bsky.social/post/3lqfmpme2722w
currently reading https://arxiv.org/abs/2404.17570
this is PsiQuantum, who are hot prospects to build an actually-quantum computer
no, they have not yet factored 35
but they are seriously planning qubits on a wafer and they think they can make a chip with 1M noisy qubits
anyone know more about this? does that preprint (from last year) pass sniff tests?
(my interest is journalistic, and also the first of these companies to factor 35 gets all the VC money ever)