JumpyWombat

joined 1 week ago
[–] JumpyWombat@lemmy.ml 0 points 2 days ago

I don’t think the controversy about Trump is concerning in any way. The AI itself could be interesting, though.

[–] JumpyWombat@lemmy.ml 3 points 2 days ago (2 children)

My main issue with iCloud is that it’s American and that they may open my data to institutional monitoring upon request. It’s great in general, but it’s not designed for privacy.

[–] JumpyWombat@lemmy.ml 5 points 2 days ago (8 children)

To my knowledge Proton doesn’t sell your data and there have been no leaks so far. That’s also true for a lot of its competitors, though.

Note: I use Proton for some things.

[–] JumpyWombat@lemmy.ml 13 points 2 days ago (6 children)

Which is great, but you’re limited to smaller models with slower response times (provided you have a GPU, of course).
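For instance, a minimal sketch of what "smaller model, locally" looks like in practice, assuming the Hugging Face `transformers` library is installed; the model name is just an illustrative pick, not anything from the thread:

```python
# Minimal local-inference sketch: a small model that fits on modest hardware.
# The model choice is an assumption for illustration only.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
out = generator("Running models locally means", max_new_tokens=40)
print(out[0]["generated_text"])  # slower and less capable than a hosted frontier model
```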

[–] JumpyWombat@lemmy.ml 11 points 2 days ago (1 children)

That, or a data leak.

Consider also that if they send 10,000 mails, some will happen to be perfectly aligned by pure chance.
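Back-of-envelope version of that point (the per-mail coincidence rate is purely an assumption for illustration):

```python
# If even 1 in 1,000 mails looks "perfectly aligned" by coincidence,
# a 10,000-mail campaign still yields roughly 10 spooky-looking hits.
emails = 10_000
p_coincidence = 0.001  # assumed rate, not a measured figure
print(emails * p_coincidence)  # expected coincidental matches: 10.0
```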

[–] JumpyWombat@lemmy.ml -4 points 2 days ago* (last edited 2 days ago)

Are hiring managers actually less likely to hire women if they ask for market-rate pay, as opposed to men when they do the same?

If instead of giving passive-aggressive replies you spent a moment reflecting on what I wrote, you would understand that ChatGPT reflects reality, including any bias. In short, the answer is yes with high probability.

[–] JumpyWombat@lemmy.ml -1 points 3 days ago (2 children)

LLMs do not give the correct answer, just the most probable sequence of words based on their training.

That kind of study (and there are hundreds) highlights two things:

1- LLMs can be incorrect, biased, or give fake information (the so-called hallucinations).
2- The previous point stems from the training material, proving the existence of bias in society.

In other words, an LLM recommending lower salaries for women is proof that there is a gender gap.
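To make the "most probable sequence of words" point concrete, here is a minimal sketch (assuming `transformers` and `torch` are installed; GPT-2 and the prompt are illustrative choices, not from the studies mentioned) showing that the model only produces a probability distribution over next tokens, which is where training-data bias lives:

```python
# An LLM outputs probabilities over the next token, not a verified fact.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("A fair salary for this role is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]      # scores for the next token only
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}: {p:.3f}")   # the "answer" is whatever is most probable
```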

[–] JumpyWombat@lemmy.ml 6 points 3 days ago

I almost got fired once when a close colleague spread rumors to pin her failures on me. I never found out precisely what she said, but it was "extremely bad" and in the realm of harassment (not sexual, but still...).

Management was not sure and did not involve HR to make it formal. I felt under scrutiny for a while, so I kept all communication to a minimum: strictly professional, not even an emoji or a joke about the weather, all in writing when possible, and including other people whenever possible. It was horrible and stressful. I considered quitting or asking to be moved, since interacting with her was part of my daily routine, but I feared it could be seen as an admission of guilt.

Eventually she was fired in a round of layoffs and that was the end of it. Later I discovered from some colleagues that they never believed that shit, but nobody stepped in to say anything.

[–] JumpyWombat@lemmy.ml 2 points 3 days ago

That's the reason. At the time HTTPS was not a thing.

[–] JumpyWombat@lemmy.ml 7 points 3 days ago (4 children)

Possibly one of the few survivors of that period: http://www.milliondollarhomepage.com/

[–] JumpyWombat@lemmy.ml 1 points 3 days ago

You sort of described RAG. It can improve alignment, but the training is hard to overcome.

See Grok, which bounces from “woke” results to “full Nazi” without hitting the midpoint Musk wants.
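For what "sort of described RAG" means, here is a minimal toy sketch (the documents, keyword matching, and prompt wording are all assumptions for illustration; a real setup would use embedding search and an actual model call):

```python
# Toy RAG: retrieve context, then prepend it to the prompt.
docs = {
    "refund": "Refunds are allowed within 30 days of purchase.",
    "shipping": "Orders ship within 2 business days.",
}

def retrieve(query: str) -> str:
    # Naive keyword match standing in for an embedding/vector search.
    return next((text for key, text in docs.items() if key in query.lower()), "")

def build_prompt(query: str) -> str:
    context = retrieve(query)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the refund policy?"))
# The retrieved context steers the answer, but the model's trained weights
# (and whatever bias they encode) still decide how that context is interpreted.
```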
