"There is no evidence that this didn't happen."
This is the same way religions "argue".
There is also no evidence that this did happen.
So I assume that it's wrong until undeniably proven otherwise by the scientific method.
"There is no evidence that this didn't happen."
This line of reasoning is the same way religions "argue".
There is also no evidence that this did happen.
So I assume that it's wrong until undeniably proven otherwise by the scientific method.
For dipping your toes into a new topic, I think it's perfectly fine. It provides useful pointers for further "research" (in a sense that would meet your requirement) and manages to give mostly accurate overviews. But sure, to really dive into a subject, LLMs like ChatGPT and co. are low-level assistants at best, and one should go through the material oneself.
Perplexity does a good job as an LLM-search-engine combo.
I wear a bicycle helmet to lower my risk of injury and increase my chances of survival.
And how the religion that this cross represents truly respects all people and their dignity. Regardless of origin, sexual orientation, gender, and so on.
Now tell me again about how peaceful religions are.
Yes. But if a machine has proven to work reliably, it will usually do so for its lifetime, while humans are prone to a multitude of errors. Especially in the medical field.
USA is namba waan!
I trust a good machine much more than any human.
That's such a fucking stupid idea.
Care to elaborate why?
From my point of view, there's no problem with that. Or let's put it this way: the potential risks depend heavily on the specific setup.
In what way does a diagnosis spare you such situations?