bUt iTs Ai InTeGrAtEd
AFAIK it's using a backup of RARBG and adding new content too.
It's the one with .to
Media is better indexed on RARBG (which doesn't exist anymore, but the biggest copy is TheRarbg, which works similarly).
Big reason why I switched to Kopia; Borg just doesn't cut it anymore…
I would be careful with some of these providers depending on your usage.
You are potentially sending a ton of info to them…
I have access to Bing Chat Enterprise through my company, and only because it's the Enterprise version am I half confident in using it with more restrictive data.
The Copilot frontend is heavy and sucks, though, so I have a proxy from the GPT API to Bing Chat.
I had hoped the GPT4All Bing provider would support login, but sadly it doesn't, so I essentially had to reimplement it all myself.
The GPT services out there use something called 'tools'.
They get presented to the model, and the model can 'call' a tool with arguments; the tool can then extract some data and feed it into the context for the model to continue.
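A minimal sketch of that loop in Python (the tool name, arguments, and registry here are hypothetical; real services like the OpenAI API describe tools with JSON schemas, but the dispatch idea is the same):

```python
import json

# Hypothetical tool registry: these descriptions get presented to the
# model, which may answer with a "tool call" naming one of them.
TOOLS = {
    "get_weather": {
        "description": "Look up the current weather for a city.",
        "parameters": {"city": "string"},
    },
}

def get_weather(city: str) -> str:
    # Stand-in implementation; a real tool would query some API.
    return f"Sunny in {city}"

def handle_tool_call(call_json: str) -> str:
    """Dispatch a model-emitted tool call; the returned string would be
    appended to the context so the model can continue."""
    call = json.loads(call_json)
    if call["name"] == "get_weather":
        return get_weather(**call["arguments"])
    raise ValueError(f"unknown tool: {call['name']}")

# The model might emit something like this as its "call":
print(handle_tool_call('{"name": "get_weather", "arguments": {"city": "Berlin"}}'))
```

The key point is that the model never executes anything itself; it only emits structured text, and your code decides what actually runs.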
I found that the models which can run on a normal PC (or even a laptop) are okay, but not super great (around or a bit worse than ChatGPT-3).
The good stuff (e.g. Nous-Capybara 31B or the Mistral/Mixtral ones) needs more memory and compute.
Yeah, it needs those rules, e.g. for port-forwarding into the containers.
But it doesn't really 'nuke' existing ones.
I have simply placed my rules at a higher priority than the default ones. That's very simple in nftables, and it's good not to have rules mixed between nftables and iptables in unexpected ways.
You should filter as early as possible anyway, to reduce resource usage on e.g. connection tracking.
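For illustration, a minimal nftables sketch (table, chain, and address are assumed names for the example) that hooks in at `priority raw`, which runs before connection tracking, so dropped packets never create conntrack entries:

```
table inet early {
  chain prerouting {
    # "raw" priority (-300) is processed before conntrack (-200),
    # so anything dropped here never consumes a conntrack entry
    type filter hook prerouting priority raw; policy accept;
    ip saddr 203.0.113.0/24 drop
  }
}
```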
Have a friend with a Gen 1; super easy to hack and not used in some time…
The others need a modchip, which can be a bit pricey and is more fiddly.
Was just a matter of time
Copilot is weird and can give out very strange responses that have little to do with your conversation.
And of course it might just grab context depending on what you do (e.g. clicking the Copilot button might already do that).
I found it works best as a plain GPT model if you disable the fancy stuff like search. It too easily loses track of what happened or completely goes off the rails.
(I believe disabling search is a beta feature in some regions, but it's a hidden flag you can theoretically set; I made a Tampermonkey script to add a button.)
I hate the slow Copilot UI, so I relay requests from a different GPT interface.
DNS filtering only gets you so far. It's far, but certainly not the end of the road. More complex and differently designed systems won't be bothered much.
Encrypted DNS, or simply hosting the legitimate stuff on the same domain, cannot really be blocked entirely, or at least makes your life difficult.
Better quality releases and more active users, with far fewer leechers, as those get thrown out.
Though there are many site admins with a complex here too… your experience can vary.
And of course you need to contribute to the community; most trackers will grant you buffer both for uploading and for keeping your torrents seeding. You want something, you have to give back.
If a tracker doesn't give you multiple ways to build your buffer, other than just uploading, it's usually a shit tracker.
And some are just super hard or impossible to get into. Start small, wait for open signups, or just join new trackers; they might get bigger over time.
Don't publicly beg for invites; you can humiliate yourself in private chats if you are into that.
BTRFS or ZFS, and then you can just roll back to an earlier snapshot.
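For example (dataset, subvolume, and snapshot names here are made up for the sketch):

```
# ZFS: take a snapshot, then roll the dataset back in place
zfs snapshot tank/data@before-upgrade
zfs rollback tank/data@before-upgrade

# BTRFS: snapshot a subvolume; restoring means replacing the
# subvolume with the snapshot and remounting
btrfs subvolume snapshot /data /snapshots/data-before-upgrade
```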
Law enforcement is exactly that: enforcement.
If there are no laws in place to force providers to do that, it can't be done via these means.
And if you cannot enforce because the provider is outside the jurisdiction, then you can't either.
And if you start forcing blocks, users will adapt by either changing provider or simply evading the block.
Just keep it seeding?
Of course, if you want both, the best space saving would be to use the same file.
I have multiple servers, so it doesn't really matter anyway; one machine downloads and seeds via its SSD, and the other is just for storage on HDDs. Though I could set up tiered storage in this scenario to seed more with the same amount of SSD storage.
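Using the same file usually means hard-linking the download into the library, so both names share one copy of the data. A sketch with hypothetical paths (note hard links only work within a single filesystem):

```python
import os

# Hypothetical layout: the torrent client seeds from downloads/,
# the media library reads from library/.
src = "downloads/movie.mkv"
dst = "library/movie.mkv"

os.makedirs("downloads", exist_ok=True)
os.makedirs("library", exist_ok=True)
open(src, "wb").close()  # stand-in for the downloaded file

os.link(src, dst)  # both names now point at the same inode/data

# Same inode means the same underlying blocks: no extra space used,
# and deleting one name leaves the other intact.
print(os.stat(src).st_ino == os.stat(dst).st_ino)
```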
You can downsample from Blu-ray, which gives you the least loss.
But if you only have some good h264 version and want space savings, you can also re-encode that, while probably losing a small amount of quality, depending on your settings.
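One common way to do such a re-encode is ffmpeg with HEVC (filenames assumed; CRF and preset are a matter of taste, roughly 20–24 is a common quality range, higher meaning smaller files):

```
# Re-encode video to HEVC, copy the audio streams untouched
ffmpeg -i input.mkv -c:v libx265 -crf 22 -preset slow -c:a copy output.mkv
```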
I don't see why someone would need this deal anyway… most of it is already available, and most of the new stuff probably is too, even without API access.
I also expect the Fediverse to be crawled and used for training; that's just the thing about publicly available stuff, it gets used, whether we like it or not…
I found TheRarbg to have better results compared to 1337x.
And often 1337x is not accessible… probably because of Cloudflare.
So I understand you just want some local storage system with some fault tolerance.
ZFS will do that. Nothing fancy; just volumes as either a block device or a ZFS filesystem.
If you want something fancier, maybe even distributed, check out storage cluster systems with erasure coding: less storage wasted than with pure replication, though it comes at a reconstruction cost if something goes wrong.
MinIO comes to mind, though I never used it… my requirements seem to be so rare that these tools only get close :/
AFAIK you can add more disks and nodes more or less dynamically with it.
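For example (pool layout, device paths, and dataset names are assumptions for the sketch):

```
# Create a mirrored pool, then a filesystem dataset and a raw
# block device (zvol) on top of it
zpool create tank mirror /dev/sda /dev/sdb
zfs create tank/media            # ZFS filesystem, mounted at /tank/media
zfs create -V 100G tank/vmdisk   # block device at /dev/zvol/tank/vmdisk
```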
Yeah, just use kubectl and pipe stuff around with bash to make it work; pretty easy.
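A sketch of the kind of piping I mean (namespace and the "dump all pod logs to files" task are hypothetical; this needs a live cluster to run):

```
# Dump the logs of every pod in a namespace into one file per pod
for pod in $(kubectl get pods -n myapp -o name); do
  kubectl logs -n myapp "$pod" > "${pod#pod/}.log"
done
```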