tatterdemalion

joined 2 years ago

Well yeah, ideally colonization would not have happened, but it did. Would you rather the indigenous people had received no consolation at all?

[–] tatterdemalion@programming.dev 3 points 20 hours ago (2 children)

I just want to say that ethnicity is not necessarily related to genetics in any way; it's about your heritage and upbringing. A person can be born and raised in Japan without having indigenous parents. In this way, they are ethnically Japanese if they choose to identify with the culture they were raised in.

So it's not suddenly racist to appreciate someone's ethnicity. It's racist to assume someone's ethnicity based on how they look.

And what you say about ethnicity being irrelevant to the government is quite controversial. For example, there are various treaties in effect between colonizers and indigenous peoples. Do you think such treaties are not valuable because they are based on ethnicity?

[–] tatterdemalion@programming.dev 15 points 23 hours ago

Honestly, who fucking cares if DOGE is actually efficient. Elon is an appointed executive branch official who hasn't been confirmed by the Senate. He is wielding power unconstitutionally and needs to be removed immediately.

[–] tatterdemalion@programming.dev 21 points 1 day ago (3 children)

Doubt it. Why would a maintainer intentionally self-sabotage their own API stability? That's cutting off one's nose to spite the face.

[–] tatterdemalion@programming.dev 20 points 2 days ago* (last edited 2 days ago)

He's also stealing our tax dollars through government contracts. Trump does the same thing.

His last special, SuperNature, had some jokes making fun of trans women and the people who defend them.

I won't lie and say I didn't laugh at the final punchline. But yeah, these jokes are pretty counterproductive and just reinforce bigotry.

https://youtu.be/WID6w4_gtwo

Agreed. Take a look at the cachestat tool to measure how well the page cache is working for cargo builds.

https://www.brendangregg.com/blog/2014-12-31/linux-page-cache-hit-ratio.html

[–] tatterdemalion@programming.dev 0 points 6 days ago* (last edited 6 days ago) (1 children)

I definitely cannot get behind the "no recursion" rule. There are plenty of algorithms where the iterative equivalent is significantly harder and less natural. For example, post-order DFS.
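
To make that concrete, here's a rough Rust sketch (the toy Node type is mine, not from any article). The recursive version reads exactly like the definition of post-order; the iterative one needs an explicit stack plus the reversed root-right-left trick:

```rust
// Toy binary tree, just to compare the two traversals.
struct Node {
    value: i32,
    left: Option<Box<Node>>,
    right: Option<Box<Node>>,
}

// Recursive post-order: the structure of the algorithm is the structure of the code.
fn post_order(node: &Node, visit: &mut impl FnMut(i32)) {
    if let Some(left) = &node.left {
        post_order(left, visit);
    }
    if let Some(right) = &node.right {
        post_order(right, visit);
    }
    visit(node.value);
}

// Iterative post-order via the two-stack (reversed root-right-left) trick:
// correct, but the deferred-visit bookkeeping obscures the intent.
fn post_order_iter(root: &Node, visit: &mut impl FnMut(i32)) {
    let mut stack = vec![root];
    let mut deferred = Vec::new();
    while let Some(node) = stack.pop() {
        deferred.push(node.value);
        if let Some(left) = &node.left {
            stack.push(left);
        }
        if let Some(right) = &node.right {
            stack.push(right);
        }
    }
    // Reverse of root-right-left is left-right-root, i.e. post-order.
    while let Some(value) = deferred.pop() {
        visit(value);
    }
}

fn main() {
    let tree = Node {
        value: 1,
        left: Some(Box::new(Node { value: 2, left: None, right: None })),
        right: Some(Box::new(Node { value: 3, left: None, right: None })),
    };
    post_order(&tree, &mut |v| print!("{v} ")); // 2 3 1
    println!();
    post_order_iter(&tree, &mut |v| print!("{v} ")); // 2 3 1
    println!();
}
```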

I guess maybe it's justified when lives depend on it. But they should be testing and fuzzing their code anyway, right?

EDIT: I can't even find where the NASA PDF mentions recursion.

I got whooshed then. Maybe because I only skimmed the article to try to figure out what their point was.

Basically just tmux + Helix + fish shell.


I ask because it would be nice to use the "I2P mixed mode" features of qbittorrent, but I want to keep my clearnet traffic on the VPN.

Background

I have I2PD running only on my home gateway for better tunnel uptime.

To ensure that torrent traffic never escapes the VPN tunnel, I have configured qbittorrent to use only the VPN's WireGuard interface.

Problem

I think this means qbittorrent's I2P traffic will flow into the VPN tunnel, but then the VPN host won't know how to route back to my home gateway where the SAM bridge is running.


I've configured my i2pd proxy correctly, so things are somewhat working. I was able to visit notbob.i2p. But sometimes Firefox really likes to replace "http" with "https" when I click on a link or even enter the URL manually into the bar. I have "HTTPS-only mode" turned off, and I also have "browser.fixup.fallback-to-https" and "network.stricttransportsecurity.preloadlist" set to "false".

I tried spying on the HTTP traffic in web dev tools, and I see the request gets NS_ERROR_UNKNOWN_HOST. This does not happen with the xh CLI HTTP client, so Firefox is doing something weird with name resolution. I made sure to turn off Firefox's DNS over HTTPS setting as well, but it didn't seem to make a difference.

I assume that name resolution needs to happen in i2pd. How can I force Firefox to let that happen?

Update: Chrome works fine.

Update: I started fresh and simplified the setup and it seems fixed. I'm not entirely sure why. The only things I've changed from default are DoH and the manual HTTP proxy.


I was just reading through the interview process for RED, and they specifically forbid the use of a VPN during the interview. I don't understand this requirement; it seems like it would just leak your IP address to the IRC host, which could potentially be used against you in a honeypot scenario. Once they have your IP, they could link it with the credentials you use on the tracker, regardless of whether you use a VPN while torrenting.


I'm preparing for a new PC build, and I decided to try a new atomic OS after having been with NixOS for about a year.

First I tried Kinoite, then Bazzite, but even though KDE has a lot of features, I found it incredibly buggy, and performance was generally poor, especially in Firefox. I don't really have time to diagnose these issues, so I figured I would put in just a little more effort and migrate my Sway config to Fedora Sway Atomic.

I'm glad I did. The vanilla install of Fedora Sway is awesome. No bloat and very usable. I haven't noticed any bugs. Performance is excellent. And it was very straightforward to apply my Sway config on top without losing the nice menu bar, since Fedora puts its Sway config in /usr/share/sway.

I'm also quite happy with the middle ground of using an OSTree-based Linux plus Nix and Home Manager for my user config. I always thought that configuring the system-level stuff in Nix was the hardest part with the least payoff, while a declarative config for my dev tools and desktop environment was the most productive part.

I originally tried NixOS because I wanted bleeding-edge software without frequent breakage, and I bought into the idea of a declarative OS configuration with versioned updates and rollback. It worked out well, but I would be lying if I said it wasn't a big time investment to learn NixOS. I feel like there's a sweet spot: container images for the base OS layer, then Nix and Home Manager for the stuff that's closer to your actual workflows.

I might even explore building my own OS image on top of Universal Blue's Nvidia image.

Hope this path forward stays fruitful! I urge anyone who's interested in immutable distros to give this a try.


I've never felt the urge to make a PL until recently. I've been quite happy with a combination of Rust and Julia for most things, but after learning more about BEAM languages, Lean 4, Zig's comptime, and some newer languages implementing algebraic effects, I think I at least have a compelling set of features I would like to see in a new language. All of these features are inspired by actual problems I have programming today.

I want to make a language that achieves the following (non-exhaustive):

  • significantly faster to compile than Rust
  • at minimum, has better performance than Python
  • processes can be hot-reloaded like on the BEAM
  • most concurrency is implemented via actors and message passing
  • built-in pub/sub buses for broadcast-style communication between actors (see the Rust sketch after this list)
  • runtime is highly observable and introspective, providing things like tracing, profiling, and debugging out of the box
  • built-in API versioning semantics with automatic SemVer violation detection and backward compatible deployment strategies
  • can be extended by implementing actors in Rust and communicating via message passing
  • multiple memory management options, including GC and arenas
  • opt-in linear types to enable forced consumption of resources
  • something like Jane Street's OCaml "modes" for simpler borrow checking without lifetime variables
  • generators / coroutines
  • Zig's comptime that mostly replaces macros
  • algebraic data types and pattern matching
  • more structural than nominal typing; some kind of reflection (via comptime) that makes it easy to do custom data layouts like structure-of-arrays
  • built-in support for multi-dimensional arrays, like Julia, plus first-class support for database-like tables
  • standard library or runtime for distributed systems primitives, like mesh topology, consensus protocols, replication, object storage and caching, etc.
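
To give a flavor of the actor and pub/sub items above, here's a rough approximation in plain Rust using threads and channels. Everything here (Bus, Event, the method names) is hypothetical and just illustrates the intended semantics; in the language itself this would be built into the runtime:

```rust
use std::sync::mpsc::{channel, Receiver, Sender};
use std::thread;

// Hypothetical event type for a broadcast-style bus.
#[derive(Clone, Debug)]
enum Event {
    PriceUpdate { symbol: String, price: f64 },
}

// Toy pub/sub bus: each subscriber gets its own mailbox (channel),
// and publish clones the event to every subscriber.
struct Bus {
    subscribers: Vec<Sender<Event>>,
}

impl Bus {
    fn new() -> Self {
        Bus { subscribers: Vec::new() }
    }

    fn subscribe(&mut self) -> Receiver<Event> {
        let (tx, rx) = channel();
        self.subscribers.push(tx);
        rx
    }

    fn publish(&self, event: Event) {
        for sub in &self.subscribers {
            // A real runtime would reap disconnected actors; here we just ignore errors.
            let _ = sub.send(event.clone());
        }
    }
}

fn main() {
    let mut bus = Bus::new();

    // Two "actors", each a thread that owns its mailbox and reacts to messages.
    let actors: Vec<_> = (0..2)
        .map(|id| {
            let mailbox = bus.subscribe();
            thread::spawn(move || {
                for event in mailbox {
                    println!("actor {id} got {event:?}");
                }
            })
        })
        .collect();

    bus.publish(Event::PriceUpdate { symbol: "ABC".into(), price: 42.0 });

    drop(bus); // closing all senders ends the actors' receive loops
    for a in actors {
        a.join().unwrap();
    }
}
```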

I think with this feature set, we would have a pretty awesome language for working on data-driven systems, which seem to be increasingly common today.

One thing I can't decide yet, mostly due to ignorance, is whether it's worth it to implement algebraic effects or monads. I'm pretty convinced that effects, if done well, would be strictly better than monads, but I'm not sure how feasible it is to incorporate effects into a type system without requiring a lot of syntactical overhead. I'm hoping most effects can be inferred.

I'm also nervous that if I add too many static analysis features, compile times will suffer. It's really important to me that compile times stay short enough to keep development productive.

Anyway, I'm just curious if anyone thinks this would be worth implementing. I know it's totally unbaked, so it's hard to say, but maybe it's already possible to spot issues with the idea, or suggest improvements. Or maybe you already know of a language that solves all of these problems.


Who are these for? People who use the terminal but don't like running shell commands?

OK, sorry for throwing shade. If you use one of these, honestly, what features do you use that make it worthwhile?


More specifically, I'm thinking about two different modes of development for a library (private to the company) that's already relied upon by other libraries and applications:

  1. Rapidly develop the library "in isolation" without being slowed down by keeping all of the users in sync. This causes more divergence and merge effort the longer you wait to upgrade users.
  2. Make all changes in lock-step with users, keeping everyone in sync for every change that is made. This will be slower and might result in wasted work if experimental changes are not successful.

As a side note: I believe these approaches are similar in spirit to the microservices vs. monolith continuum.

Speaking from recent experience, I keep finding that users of my library have built towers upon obsolete APIs, because there have been multiple phases of experimentation that necessitated large changes. So with each change, large amounts of code need to be rewritten.

I still think that mode #1 was justified during the early stages of the project, since I wanted to identify all of the design problems as quickly as possible through iteration. But as the API is getting closer to stabilization, I think I need to switch to mode #2.
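
One pattern that has softened the pain of mode #2 for me, sketched here in Rust with hypothetical names: keep the old entry point as a deprecated shim over the new one, so downstream code keeps compiling (with a warning) and can migrate incrementally instead of all at once.

```rust
pub struct Config;

/// New, preferred entry point.
pub fn parse_config(_path: &str) -> Config {
    Config
}

/// Old name kept as a thin shim over the new one, so downstream code
/// keeps compiling (with a deprecation warning) while it migrates.
#[deprecated(since = "0.9.0", note = "use `parse_config` instead")]
pub fn load_config(path: &str) -> Config {
    parse_config(path)
}
```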

How do you know when is the right time to switch? Are there any good strategies for avoiding painful upgrades?

48 points, submitted 2 years ago* (last edited 2 years ago) by tatterdemalion@programming.dev to c/general@lemmy.world

I just commented on this post and it got removed very quickly. Then I noticed that all of the comments had been removed and the post is locked.

I cannot understand why this happened, as the comments section had seemed pretty reasonable to me.

This seems like bad moderation, and I'm now less inclined to post or comment in the world news community. What should I do?

I tried messaging a mod who seems to be online and actively posting, but I got no response.
