This is all your fault for sending those dmails :/
Oh, and while I'm at it: do not trust any food recommendation written in English. Good or bad.
If you travel to Japan, honestly just... Skip Kyoto. It is so full of tourists (national and international), you cannot possibly imagine unless you've seen it.
Sure, there's a lot of impressive temples there. But so is the rest of the country.
We were lucky enough to spend 4 weeks in Japan earlier this year, and if I could do the trip again, I would straight-up skip Kyoto and Osaka.
Rent a car, drive in some random direction. You'll be a lot happier, and it will actually be your trip. By far the best memories come from places not in any travel guide.
Depends - was the assault comment directed at assailants or victims?
Thanks for the recommendation! That looks interesting indeed.
This entire topic is probably a sinkhole of complexity. It's great to have somewhere to look for inspiration!
Yeah those are good points. Also noticed the CDN thing, it's a bit annoying for a privacy-first project... But should be an easy fix 😄
Stirling's backend is Java. So, yeah, heavy and slow sounds about right.
The one exception here: it's great to have it installed on your parents' PC when you're the one doing the updates once in a while, whenever you're around. Rock solid in between, no nagging, and if something did break, it's easy to roll back.
Ah, thanks for mentioning. Yep, they have a docker image; as mentioned, a nixpkg will be available soonTM; and frankly, you can just build / download the release artifacts and put them on any static host.
Please read the title of the post again. I do not want to use an LLM. Selfhosted is bad enough, but feeding my data to OpenAI is worse.
Yep, that's the idea! This post basically boils down to "does this exist for HASS already, or do I need to implement it?" and the answer, unfortunately, seems to be the latter.
Thanks, had not heard of this before! From skimming the link, it seems that the integration with HASS mostly focuses on providing wyoming endpoints (STT, TTS, wakeword), right? (Un)fortunately, that's the part that's already working really well 😄
However, the idea of just writing a stand-alone application with Ollama-compatible endpoints, but not actually putting an LLM behind it is genius, I had not thought about that. That could really simplify stuff if I decide to write a custom intent handler. So, yeah, thanks for the link!!
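For anyone curious what that looks like: a minimal sketch of the idea, i.e. a server that speaks Ollama's `/api/chat` shape but answers from plain rules instead of an LLM. All names, the example intents, and the non-streaming response format are my assumptions, not code from any actual project:

```python
# Hypothetical sketch: an "Ollama-compatible" endpoint with no LLM behind it.
# Follows the shape of Ollama's POST /api/chat (assumption: the client
# accepts a single non-streaming JSON response).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def handle_intent(text: str) -> str:
    # Rule-based "intent handler" standing in for the LLM.
    # Example rules only; a real handler would dispatch to HASS services.
    if "light" in text.lower():
        return "Turning on the lights."
    return "Sorry, I don't know that command."


class FakeOllama(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        user_text = body["messages"][-1]["content"]
        reply = {
            "model": body.get("model", "fake-intent-handler"),
            "message": {"role": "assistant", "content": handle_intent(user_text)},
            "done": True,
        }
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(reply).encode())


# To run locally (assumption: reusing Ollama's default port 11434):
# HTTPServer(("127.0.0.1", 11434), FakeOllama).serve_forever()
```

Point HASS's "Ollama" integration at that address and every conversation request goes through your own code, no model weights involved.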
well... at least he realizes that was bullshit...?