Codeberg has been stable enough for my small usage. It does have CI, Woodpecker, which requires manual approval. I haven't used their CI yet, though.
TFLOPS is a generic measurement, not actual utilization, and not specific to a given type of workload. Not all workloads saturate a GPU equally, and AI models depend on CUDA/tensor cores: the generation and count of those cores determine how well the card is optimized for AI workloads and how much of those TFLOPS it can actually put to use for your task. And yes, AMD uses ROCm, which I didn't feel I needed to specify since it's a given (and it's years behind CUDA in capability). The point is that these cards are not equal, and there are major differences here alone.
I mentioned memory type since the cards you listed use different kinds (HBM vs GDDR), so you can't just compare capacity alone and expect equal performance.
And again, for your specific use case of a large MoE model you'd need to solve the GPU-to-GPU communication issue (ensuring both the connectivity and sufficient speed so you don't get bottlenecked).
I think you're going to need to do an actual analysis of the specific setup you're proposing. Good luck.
The table you're referencing leaves out CUDA/tensor cores (count and generation), which are a big part of these GPUs, and it also doesn't factor in the type of memory. From the comments it looks like you want to use a large MoE model. You aren't going to be able to just stack raw power and expect to run it without major performance deterioration, if it runs at all.
Don't forget your MoE model needs all-to-all communication for expert routing
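To make that concrete, here's a minimal sketch of the kind of all-to-all exchange an MoE layer performs on every forward pass, using PyTorch's `all_to_all_single`. The sizes, the even token split, and the two-process launch are toy assumptions just to show where the GPU-to-GPU traffic happens:

```python
# Minimal sketch of the all-to-all exchange an MoE layer does for expert routing.
# Assumes a multi-GPU box and a launch like: torchrun --nproc_per_node=2 this_file.py
import os
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")     # GPU-to-GPU collectives ride NVLink/PCIe here
local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
torch.cuda.set_device(local_rank)
world = dist.get_world_size()

# Pretend each rank has routed an equal slice of its tokens to experts on every rank.
tokens_per_rank, hidden = 4, 8              # toy sizes; real MoE batches are far larger
send = torch.full((world * tokens_per_rank, hidden), float(dist.get_rank()), device="cuda")
recv = torch.empty_like(send)

# Every rank sends a chunk to every other rank and receives one back.
# Without fast GPU-to-GPU links, this step is exactly where the bottleneck shows up.
dist.all_to_all_single(recv, send)

dist.destroy_process_group()
```

Real routing sends uneven token counts per expert on every step, which makes the interconnect matter even more than this toy version suggests.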
- Custom DNS servers specified on the device to circumvent the Pi-hole
- DNS over HTTPS or TLS
- Hotspot from an approved device
- Alternative YouTube frontends
These are just off the top of my head. Best case scenario, the blocking works and the teen never tries to bypass it; they'll still just move on to "wasting" time on something else. This is treating the symptom, not the root cause.
Pi-hole can set up "groups" with different blocklists. You assign clients by IP or MAC address, so it doesn't matter what the DHCP server is, as long as there's a static IP or static MAC address. My Pi-hole server doesn't have DHCP set up and I'm able to do this fine.
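If you'd rather script it than click through the web UI, here's a rough sketch of what that client-to-group assignment looks like in Pi-hole's gravity database. This assumes the Pi-hole v5+ gravity.db schema, and the IP, group name, and comment are just example values:

```python
# Rough sketch: assign a client to an existing Pi-hole group via gravity.db.
# The usual route is the web UI under Group Management; this shows what gets stored.
import sqlite3

DB = "/etc/pihole/gravity.db"   # default location, adjust if yours differs
CLIENT = "192.168.1.50"         # example client IP (a MAC like aa:bb:cc:dd:ee:ff also works)
GROUP = "teen-devices"          # example group name created beforehand

con = sqlite3.connect(DB)
cur = con.cursor()

# Register the client if it isn't known yet.
cur.execute("INSERT OR IGNORE INTO client (ip, comment) VALUES (?, ?)",
            (CLIENT, "managed device"))

# Link the client to the group.
cur.execute(
    'INSERT OR IGNORE INTO client_by_group (client_id, group_id) '
    'SELECT c.id, g.id FROM client AS c, "group" AS g WHERE c.ip = ? AND g.name = ?',
    (CLIENT, GROUP),
)
con.commit()
con.close()
# Pi-hole picks up the change after something like `pihole restartdns reload-lists`.
```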
Though from personal experience, this just becomes a game of cat and mouse, and if you have a motivated teenager they will find a way to circumvent it. For example, Android can rotate MAC addresses, and IP addresses are trivial to spoof as well.
Haven't used all of those, but my recommendation would be to just start trying them. Start small, get a feel for it, and then expand usage or try a different backup solution. You should be able to do automatic backups with any of them, either directly or by setting up your own timer/cron jobs (which is how I do it with rsync).
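For what it's worth, here's a minimal sketch of the kind of script you could point a cron job or systemd timer at. The paths, host, and log file are hypothetical placeholders, and it assumes rsync is installed:

```python
#!/usr/bin/env python3
# Minimal sketch: an rsync push you could run from a cron job or systemd timer.
# Source, destination, and log paths are hypothetical placeholders.
import subprocess
import sys
from datetime import datetime

SRC = "/home/me/documents/"                   # trailing slash: copy contents, not the folder
DST = "backup-host:/srv/backups/documents/"   # any rsync-reachable target works

result = subprocess.run(
    ["rsync", "-a", "--delete", SRC, DST],    # -a preserves perms/times, --delete mirrors removals
    capture_output=True,
    text=True,
)

with open("/home/me/backup.log", "a") as log:
    log.write(f"{datetime.now().isoformat()} exit={result.returncode}\n")
    log.write(result.stderr)

sys.exit(result.returncode)
```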
I submitted a response, but if I may give some feedback, the second portion brings up:
I am willing to pay a substantial amount for hardware required for self-hosting.
This seemed out of place because there were no other value-related questions (IIRC), such as:
- I believe self-hosting saves me money in the short term
- I believe self-hosting saves me money in the long run
I'm sure you could also think of more, but I think it's pretty important, because between cloud service providers and any non-free apps you'd want to use, it can be quite costly compared to the cost of some hardware and the time it takes to set things up.
The rest of my responses don't change, but if you're wanting to understand the impact of money in all of this, I think some more questions are needed.
Best of luck!
I use VSCodium and it is available on the AUR (vscodium / vscodium-bin). Supposedly some plugins aren't available for it, but I don't use a ton of plugins, and the ones I used in VS Code were available in VSCodium when I switched.
Why Not Use…?
I am aware that there are many other git “forge” platforms available. Gitea, Codeberg, and Forgejo all come to mind. Those platforms are great as well. If you prefer those options instead of SourceHut that’s fine! Switching to any of those would still be a massive improvement over GitHub.
Unfortunately, I find the need to have an account in order to contribute to projects a deal breaker. It causes too much friction for no real gain. Email based workflows will always reign supreme. It’s the OG of code contributions.
I've been using Codeberg (a public Forgejo instance) and it felt more familiar coming from GitHub/GitLab. SourceHut wasn't bad, but it did feel quite a bit different, and I admittedly didn't get too far past that. I do like the idea of contributing without an account, though. I know creating a patch file is a Git feature, but having a forge support it is neat.
Semi-related: I do look forward to federation in Forgejo, which I think helps with the "needing an account" issue somewhat. I think it's less unreasonable to expect someone to have an account on any federated forge than to have an account on the specific forge my project is on.
Good article, though. It did help SourceHut make more sense to me than the first time I looked at it.
On one hand it is nice to see companies give back.
On the other hand, their revenue was $249 million in 2023 and their income after expenses/taxes was $8 million.
- 120,000 / 249,000,000 = 0.048% of revenue.
- 120,000 / 8,000,000 = 1.5% of their income.
It just seems like a small amount to give back for how much they are bringing in.
You're not connected to Wi-Fi or a VPN, from the looks of it. Jellyfin is hosted on your local network, so any device you want to access it from needs to be connected to that network. The most direct way is to connect via Wi-Fi. If you want access from outside your house, you'll need to look into opening a remote connection via something like a Cloudflare Tunnel.
Ah, my apologies!