yboutros

joined 2 years ago
[–] yboutros@infosec.pub 3 points 6 days ago

The actual box for Linux should be:

`dmesg -l err`

`sudo journalctl`

Then search the errors along with your distribution's name.
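If you'd rather script that check, here's a minimal sketch (assumes a systemd-based distro; both commands may need elevated privileges depending on your setup):

```python
import subprocess

# Kernel ring buffer, "err" severity and above (same as `dmesg -l err`)
dmesg_errs = subprocess.run(
    ["dmesg", "--level=err"], capture_output=True, text=True
).stdout

# Persistent journal at priority "err", current boot only
journal_errs = subprocess.run(
    ["journalctl", "-p", "err", "-b"], capture_output=True, text=True
).stdout

print(dmesg_errs or "no kernel errors")
print(journal_errs or "no journal errors")
```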

[–] yboutros@infosec.pub 3 points 3 weeks ago

Real chads flush tiny screw bits down their landlord's toilet

[–] yboutros@infosec.pub 4 points 1 month ago

Better yet, name Canada North Mexico

[–] yboutros@infosec.pub 2 points 1 month ago

Ahh, jumped one step too far from dozen -> doesn't -> don't

[–] yboutros@infosec.pub 4 points 1 month ago* (last edited 1 month ago) (3 children)

Oml I thought it was Clown~~s~~ ~~don't~~ doesn't even faze [me]

[–] yboutros@infosec.pub 1 points 2 months ago

Hell you could erase the circles and just color in the whole page at that point

[–] yboutros@infosec.pub 1 points 3 months ago

Somewhere in the world there's a billionaire who is passionate about the same fight you're passionate about. That's the billionaire you want to work with

[–] yboutros@infosec.pub 5 points 3 months ago (3 children)

Wouldn't it make more sense to kill him instead of just yourself? Strictly hypothetically speaking

 

Meaning, VMs with Xen and hardware virtualization support

The system VM/Qube for USB devices is isolated, the network VM/Qube is separate and isolated, the windowing system and the OS housing the qubes are isolated...

And being able to configure all of those with Nix would be a wet dream come true

[–] yboutros@infosec.pub 1 points 4 months ago

Thanks for the feedback! I also asked a similar question on the AI Stack Exchange thread and got some helpful answers there

It was a great project for brushing up on seq2seq modeling, but I decided to shelve it since someone released a polished website doing the same thing.

The idea was that chords are the vocabulary of music composition, and measures play the role of sentences/paragraphs: a measure is a sequence of chords, and a piece is a sequence of measures

I think it's a great project because the limited vocab size and max sequence length are much smaller than what's typical for transformers applied to LLM tasks, like digesting novels for example. So on consumer-grade hardware (12 GB VRAM) it's feasible to train a couple of different model architectures in tandem

Additionally, nothing inherently sounds bad in music composition; it's up to the musician to find a creative way to make it sound good. So even if the model is poorly trained, as long as it doesn't output EOS immediately after BOS and the sequences are unique enough, it's pretty hard to get output that doesn't work.
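For a sense of scale, here's a minimal sketch of the kind of model that fits in that budget (all sizes are made-up placeholders, not the repo's actual architecture):

```python
import torch
import torch.nn as nn

# Hypothetical sizes: a chord vocabulary is tiny next to an LLM's,
# and a tune is only a few dozen measures, so everything stays small.
VOCAB_SIZE = 512   # distinct chord symbols plus BOS/EOS/PAD
MAX_LEN = 64       # max chords per sequence
D_MODEL = 256

embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
model = nn.Transformer(
    d_model=D_MODEL, nhead=8,
    num_encoder_layers=4, num_decoder_layers=4,
    batch_first=True,
)
to_logits = nn.Linear(D_MODEL, VOCAB_SIZE)

src = torch.randint(0, VOCAB_SIZE, (32, MAX_LEN))  # batch of chord ids
tgt = torch.randint(0, VOCAB_SIZE, (32, MAX_LEN))
logits = to_logits(model(embed(src), embed(tgt)))  # (32, MAX_LEN, VOCAB_SIZE)
```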

It's also fairly easy to gather data from a site like iRealPro

The repo is still disorganized, but if you're curious, the main script is scrape.py

https://github.com/Yanall-Boutros/pyRealFakeProducer

[–] yboutros@infosec.pub 3 points 4 months ago

Underrated comment

Everyone's conspiring, folks. What's hard to measure is who's conspiring

[–] yboutros@infosec.pub 9 points 5 months ago (1 children)

Laplacian edge detection? Beautiful meme
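For anyone who hasn't seen it: the whole joke is one convolution. A minimal sketch (the image and threshold here are placeholders):

```python
import numpy as np
from scipy.signal import convolve2d

# 3x3 Laplacian kernel: approximates the second spatial derivative,
# so it spikes wherever pixel intensity changes abruptly, i.e. at edges
kernel = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]])

image = np.random.rand(64, 64)      # placeholder for a grayscale image
response = convolve2d(image, kernel, mode="same")
edges = np.abs(response) > 0.5      # threshold keeps the strong edges
```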

[–] yboutros@infosec.pub 38 points 6 months ago (8 children)

I wish more guys just said they didn't know something instead of clearly not knowing what they're talking about and running their mouth based on vibes

 

When training a transformer on positionally encoded embeddings, should the tgt output embeddings also be positionally encoded? If so, wouldn't the predicted/decoded embeddings also be positionally encoded?
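For context, the setup I'm asking about looks roughly like this (a sketch; the helper and sizes are placeholders): positional encoding gets added to the decoder's input embeddings, while the training target stays raw token ids.

```python
import math
import torch
import torch.nn as nn

def sinusoidal_pe(seq_len, d_model):
    # Fixed sinusoidal positional encoding from "Attention Is All You Need"
    pos = torch.arange(seq_len).unsqueeze(1)
    div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

d_model, vocab, seq = 256, 512, 32
embed = nn.Embedding(vocab, d_model)
tgt_ids = torch.randint(0, vocab, (8, seq))   # (batch, seq)

# Positional encoding is added on the decoder *input* side...
tgt_in = embed(tgt_ids) + sinusoidal_pe(seq, d_model)

# ...while the loss is computed against the shifted raw ids, so the
# model predicts token logits, not positionally encoded embeddings.
```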
