[–] schnapsidee@feddit.de 7 points 2 years ago (3 children)

Decisions like this just prove how massive the market for a self-hostable alternative is. They're not banning it because it's a bad tool, they're banning it because they're concerned about what happens to the source code their engineers paste into it.

There are already a bunch of OSS attempts, and it likely won't take long until something of comparable quality to ChatGPT is available for companies to host on their own hardware.

[–] saplyng@kbin.social 4 points 2 years ago (1 children)
[–] schnapsidee@feddit.de 2 points 2 years ago

As I said, there are some self-hostable alternatives, but nothing even remotely enterprise-ready yet. I'm keeping a pretty close eye on this because my boss wants to train a support chatbot on company data and run it on our own hardware. (And an alternative to Copilot would be great too, as that's banned for internal use.) There are some great tools to tinker around with, but I haven't found anything that I would call production-ready.
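For anyone wondering what the tinkering stage looks like, here's a minimal sketch of local inference with an openly licensed, instruction-tuned model via Hugging Face transformers (flan-t5-base is purely an example, not what we're actually evaluating):

```python
# Minimal sketch: local inference with an openly licensed model.
# Once the weights are downloaded, nothing leaves the machine.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

prompt = "Answer the customer politely: How do I reset my password?"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```

The gap between this and "production ready" is everything around it: retrieval over company documents, access control, monitoring, and evaluating answer quality.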

[–] Mon0@kbin.social 3 points 2 years ago (1 children)

No, this just proves what everybody who has actually worked with ChatGPT already knows. It's a nice tool if you want to write a story, but everything else is just a waste of time. Contrary to media belief, 99% of ChatGPT's answers to business-related questions (including coding) are partially or completely wrong.
You really can't trust the answers ChatGPT gives you at all.
And coding… Copilot is already not great at coding (though very useful for auto-completion), but ChatGPT is actually worse. ChatGPT fails even on easy coding tasks in most languages, and even its JS solutions are mostly horrible.

Sure, the code itself is also a problem, but here and now the biggest problem is devs who just believe whatever ChatGPT prints out, so you end up with a PR full of yesteryear's code, including deprecated extensions and packages.

But self hosted models would be awesome nonetheless.

[–] yske@kbin.social 2 points 2 years ago

if you want to write a mediocre story, anyway

agreed otherwise

[–] eight_byte@feddit.de 2 points 2 years ago* (last edited 2 years ago)

Companies are also banning ChatGPT because it's unclear from where the code it spits out was stolen and how it's licensed. Copying and pasting code from AI tools is an enormous risk for a software company.

[–] MargotRobbie@lemmy.world 2 points 2 years ago

Well, of course. ChatGPT has already leaked Samsung Semiconductor's internal information, and Apple is infamous for being secretive about its designs.

[–] MentalEdge@sopuli.xyz -5 points 2 years ago* (last edited 2 years ago) (3 children)

How to neuter your own ability to compete: ban your workers from using the latest tool for boosting employee performance.

[–] ulu_mulu@lemmy.world 4 points 2 years ago (2 children)

Leaking industry secrets is a much bigger concern than boosting productivity a little bit.

We're talking about very specialized engineering work; it's not something you can totally rely on a bot to do, though it might help sometimes. It's fully understandable for specialized companies to want to ban GPT internally until there's a way for them to host a totally internal one.

[–] quirzle@kbin.social 1 points 2 years ago

> We're talking about very specialized engineering work,

We're not, though. This isn't a policy preventing them from talking about specific company IP (which is almost certainly covered by existing NDAs already). This prevents them from using it internally at all.

I use ChatGPT at work all the time, usually for getting very specific information about products I have to integrate with, quickly parsing new API documentation, and learning about unfamiliar processes at a conceptual level before I have to dive deeper for a project. It's more the context around which I'll be building the specialized IP. It's the sort of stuff I could learn via Googling (or sometimes Stack Exchange), but I can learn it faster and in a more targeted manner by asking the chatbot detailed questions.

[–] MentalEdge@sopuli.xyz 0 points 2 years ago (1 children)

On this I agree entirely. The potential for corporate espionage because of unwitting employees using an LLM through unofficial means is huge.

At the very least, the corporation itself, not the individual employee, would have to be the customer, so that watertight terms could be negotiated.

[–] ulu_mulu@lemmy.world 1 points 2 years ago* (last edited 2 years ago)

I don't think being a customer would work either; language models are still being trained, and no one knows exactly how user queries are used. That's a big no-no for any company that has to protect its secrets.

A self-hosted instance is a much better solution, if not the only "safe" one from that point of view. We'll get there.
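To make that concrete, a self-hosted instance is essentially the same local model put behind an internal endpoint, so queries never leave the company network. A rough sketch (Flask and flan-t5-base are my illustrative picks here, nothing more):

```python
# Sketch of a self-hosted inference endpoint: the model and every
# query stay inside the company network. Flask and flan-t5-base are
# illustrative choices only.
from flask import Flask, jsonify, request
from transformers import pipeline

app = Flask(__name__)
generator = pipeline("text2text-generation", model="google/flan-t5-base")

@app.route("/ask", methods=["POST"])
def ask():
    prompt = request.get_json()["prompt"]
    answer = generator(prompt, max_new_tokens=128)[0]["generated_text"]
    return jsonify({"answer": answer})  # answered locally, never sent upstream

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8000)  # bind to the internal interface only
```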

[–] RupeThereItIs@kbin.social 2 points 2 years ago (1 children)

It's a MASSIVE security risk. What you tell ChatGPT is not private; if you knowingly or unknowingly tell ChatGPT secret information, you have no control over where that information may go. Especially for a company like Apple that lives and breathes surprise product releases.

[–] MentalEdge@sopuli.xyz 1 points 2 years ago* (last edited 2 years ago)

This is true, but if you understand that queries don't necessarily need to also become training data, what you tell it could absolutely be kept secret, provided the necessary agreements and changes were made. Nothing about an LLM means you can't make it forget things you've told it. What you can't make it forget, without re-training it from the ground up with that piece of information omitted, is what you told it in the training data.

But queries do not suffer this limitation.
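That split is easy to demonstrate: a query is a forward pass that reads the weights but never writes them, while a training step explicitly updates them. A toy PyTorch sketch (the tiny linear model is a stand-in, obviously not a real LLM):

```python
# Toy demonstration: a query (forward pass) leaves the weights
# untouched, while a training step changes them.
import torch

model = torch.nn.Linear(4, 2)  # stand-in for an LLM
before = model.weight.detach().clone()

# "Query": a forward pass under no_grad -- the weights cannot change.
with torch.no_grad():
    _ = model(torch.randn(1, 4))
print(torch.equal(before, model.weight))  # True: the query changed nothing

# "Training": one gradient step -- the weights do change.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss = model(torch.randn(1, 4)).sum()
loss.backward()
optimizer.step()
print(torch.equal(before, model.weight))  # False: training rewrote them
```

So a provider can use your queries for training, but nothing forces them to; that's exactly the kind of thing those negotiated terms would have to pin down.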