• 0 Posts
  • 30 Comments
Joined 1 year ago
Cake day: June 5th, 2023

  • Luxury! My homeserver has an i5 3470 with 6GB of RAM (yes, it’s a cursed 4+2 setup)! </badMontyPythonReference>

    Interesting, I also run Nextcloud and pihole, as well as vaultwarden, jellyfin, paperless-ngx, gitea, vscode-server, and a minecraft server (every now and then).

    You’re right that such a system really does show its age, but only when doing multiple intensive tasks at the same time. I try not to back up my photos to Nextcloud while running minecraft, for example, as the image identification task pins my CPU at 100%. So yes, I agree, you’re probably not doing anything out of the ordinary on your setup.

    The point I was trying to make still stands though, as that pi 2B could run more than I would’ve expected beforehand. I believe it once even ran jellyfin, a simple file server, samba, and a webserver with a simple HTML website. Jellyfin worked just fine, as long as the pi didn’t have to transcode (never got hardware transcoding to work).

    It is funny that you should run out of memory, seeing as everything fits (albeit just barely) on my machine in 1/5 the memory. Would the overhead of running VMs account for such a large difference?


  • Coming from someone who started selfhosting on a pi 2B (similar-ish specs), you’d be surprised. If you don’t need anything fast or fancy, that 1GB will go a long way, and plenty of selfhosted apps require very little CPU. The only real problem I faced was that all HTTPS-related network tasks were limited to ~3MB/s, as that is how fast my pi could encrypt the data (presumably: I just saw my webserver utilising the entire CPU and figured this was the most likely explanation).
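
    (If you want to sanity-check the encryption-bottleneck theory on your own hardware, a rough sketch like this works; it assumes the cryptography package is installed, and raw AES throughput is only an upper bound for real TLS, which also does handshakes and MACs.)

    ```python
    # Rough AES throughput test, as an upper bound for CPU-bound HTTPS speed.
    # Assumes `pip install cryptography`; AES-256-CTR and the 1 MiB chunk
    # size are arbitrary choices.
    import os
    import time

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)             # AES-256 key
    nonce = os.urandom(16)           # CTR counter block
    chunk = os.urandom(1024 * 1024)  # 1 MiB of dummy plaintext

    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()

    total = 0
    start = time.perf_counter()
    while time.perf_counter() - start < 3:  # run for ~3 seconds
        encryptor.update(chunk)
        total += len(chunk)
    elapsed = time.perf_counter() - start

    print(f"AES-256-CTR: {total / elapsed / 1e6:.0f} MB/s")
    ```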


  • It depends what you’re optimising for. If you want a single (relatively small) download to be available on your HDD as fast as possible, then your current setup might be better (optimising for lower latency). However, if you want to max out your internet speed at all times and increase your HDD speeds by making the copy sequential (optimising for throughput), then the setup with the catch drive will be better. Keep in mind that an HDD’s sequential write performance is significantly higher than its random write performance, so copying a large file in one go will be faster than copying a whole bunch of chunks in a random order (like torrents do). You can check the difference for yourself by doing a disk benchmark and comparing your drive’s sequential and random write speeds.
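
    If you don’t have a benchmarking tool handy, a quick-and-dirty sketch like this shows the gap (the file size, chunk size, and path are arbitrary choices; a proper tool like fio would also bypass the page cache, which this doesn’t):

    ```python
    # Compare sequential vs torrent-style random writes on the same disk.
    import os
    import random
    import time

    PATH = "benchmark.tmp"  # place this on the HDD you want to test
    CHUNK = 1024 * 1024     # 1 MiB pieces
    CHUNKS = 256            # 256 MiB total

    def write_chunks(order):
        buf = os.urandom(CHUNK)
        start = time.perf_counter()
        with open(PATH, "wb") as f:
            f.truncate(CHUNK * CHUNKS)  # pre-size the file
            for i in order:
                f.seek(i * CHUNK)
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())        # force the data onto the disk
        return CHUNK * CHUNKS / (time.perf_counter() - start) / 1e6

    sequential = write_chunks(range(CHUNKS))
    shuffled = list(range(CHUNKS))
    random.shuffle(shuffled)            # random piece order, like a torrent
    rand = write_chunks(shuffled)

    os.remove(PATH)
    print(f"sequential: {sequential:.0f} MB/s, random: {rand:.0f} MB/s")
    ```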


  • Maxy@lemmy.blahaj.zone to Selfhosted@lemmy.world · Data HDD with SSD catch drive · 2 months ago

    qBittorrent has exactly the option you’re looking for. I believe it’s called “incomplete download path” in the settings; it lets you store incomplete downloads at a temporary path and moves them to their regular location when the download finishes. Aside from the download speed improvement, this will also lead to less fragmentation on your HDD (which might be part of the reason why it is so slow when downloading directly to it). Pre-allocating space could have the same effect, but I would recommend only using one of these two solutions at a time (pre-allocating space on your SSD would only waste space).
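
    (If you’d rather flip this through the Web API than the GUI, something like the following should work; the host, credentials, and SSD path are placeholders, and I’m assuming the temp_path/temp_path_enabled preference keys, which is what the incomplete-downloads option maps to as far as I know.)

    ```python
    # Enable the incomplete-downloads path via qBittorrent's Web API.
    # Assumes the WebUI is enabled and `requests` is installed.
    import json
    import requests

    HOST = "http://localhost:8080"  # placeholder WebUI address

    s = requests.Session()
    s.post(f"{HOST}/api/v2/auth/login",
           data={"username": "admin", "password": "adminadmin"},  # placeholders
           headers={"Referer": HOST})

    s.post(f"{HOST}/api/v2/app/setPreferences",
           data={"json": json.dumps({
               "temp_path_enabled": True,
               "temp_path": "/mnt/ssd/incomplete",  # the SSD catch drive
           })})
    ```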


  • It’s possible for a certain hardware/software setup not to support a certain codec. For example, my jellyfin client (Finamp) uses the iOS native decoders (afaik), which means opus files are practically broken. My music library (8000+ songs) contained exactly 1 lossy file, which just so happened to be an opus file. I decided to spend the extra ~20MB to standardise my entire library to flac files, ensuring I could play every song on all my devices.

    Edit cause I posted too soon: you are generally correct; only in very specific circumstances will you encounter compatibility issues like this one in the modern world. This is 100% apple being apple, and you can expect pretty much every other (reasonably modern) device to support all codecs you might encounter in the wild.
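
    (In case anyone wants to hunt down stray lossy files in their own library, a sketch like this is how I’d do it; the library path and extension list are assumptions, adjust to taste.)

    ```python
    # Count file types in a music library and list any lossy files.
    from collections import Counter
    from pathlib import Path

    LIBRARY = Path.home() / "Music"                    # assumed library path
    LOSSY = {".mp3", ".aac", ".m4a", ".ogg", ".opus"}  # extensions to flag

    files = [p for p in LIBRARY.rglob("*") if p.is_file()]
    print(Counter(p.suffix.lower() for p in files))

    for p in files:
        if p.suffix.lower() in LOSSY:
            print("lossy:", p)
    ```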


  • To add to the audio compression point: it isn’t possible to further compress an mp3 file without losing quality. Your options are:

    1. Recompress to a lossy codec (mp3, aac, opus). This will lead to smaller file sizes if you set the bitrate lower than that of the input file, but it will always worsen the quality, no matter the bitrate.
    2. Recompress to a lossless format (flac easily being the best one). Going from a lossy to a lossless format will increase the file size (sometimes by quite a substantial amount) while keeping the same quality. There is very little reason for you to do this.
    3. Keep the original files (my recommendation).

    If you’re willing to spend some extra time learning about audio compression, you can download lossless files and compress those directly to whatever format and bitrate you want. The quality will be better than option 1 above, as the audio is only lossily compressed once instead of twice.
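
    Compressing a lossless source yourself is a one-liner with ffmpeg, for example (the filenames and bitrate here are just examples; requires ffmpeg built with libopus):

    ```python
    # Encode a lossless source straight to opus, so it's only lossily
    # compressed once. Requires ffmpeg (with libopus) on the PATH.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "input.flac",  # lossless source
        "-c:a", "libopus",             # opus encoder
        "-b:a", "128k",                # pick whatever bitrate you like
        "output.opus",
    ], check=True)
    ```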


  • I’ve been running some external drives on my server for about a year now. In my experience, hard drives with an external power supply suffer less from random disconnects. The specific PC also makes quite a large difference in reliability. My server is just a regular desktop and has very little trouble staying connected to and powering my 3 external drives. My seedbox is an old laptop, and it has had almost constant problems with random disconnects and power issues. Maybe test how well your Framework does with some external drives before committing to the plan?


  • I believe SSDs don’t actually experience wear when reading data, only when writing. Loading more data from SSDs shouldn’t cause any premature failure. Overwriting more data each update could cause the drive to fail slightly earlier, but if that’s really that big of a concern, you’d be best off moving to Debian stable (no updates means no SSD writes).

    If SSD wear prevention is really that big of a concern, you might be interested in profile-sync-daemon (https://wiki.archlinux.org/title/Profile-sync-daemon). It reduces writes to hard drives by keeping your browser profile in RAM, and only periodically syncing it to disk.

    Though I must add that SSDs wearing out really isn’t that much of an issue with modern drives. With normal usage, a drive will become obsolete long before it actually wears out.


  • Not OP (OC? Not the person you were helping, you get what I mean), but are you sure you meant df -h? fd -H seems more useful to me when trying to find a specific file in a dotfolder, though even that didn’t work on my system. fd ignores ~/.config by default, so you need to use fd -u (which is an alias for fd -I -H) to find the correct files.

    Anyways, from your description it seems like the correct file would be ~/.config/kwinrc, which exists on my system.


  • Ah, it looks like we have a small misunderstanding. I thought you were talking about uncompressed video, which is enormous. This is only used in HDMI cables, for example. Uncompressed 1080p60 video is roughly 2.99 Gbit/s, or about 1.22 terabytes per hour.
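
    For reference, the back-of-the-envelope math (a sketch assuming 8 bits per channel, 3 channels, and no chroma subsampling or blanking intervals):

    ```python
    # Uncompressed 1080p60 bitrate, assuming 24 bits per pixel (8-bit RGB).
    width, height, fps, bits_per_pixel = 1920, 1080, 60, 24

    bits_per_second = width * height * bits_per_pixel * fps
    print(f"{bits_per_second / 1e9:.2f} Gbit/s")                   # ≈ 2.99 Gbit/s
    print(f"{bits_per_second * 3600 / 8 / 1024**4:.2f} TiB/hour")  # ≈ 1.22 TiB/hour
    ```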

    A remux is “uncompressed” in the sense that it isn’t recompressed, or in this case transcoded. A remux is still compressed, just to a lesser degree than a transcode. This means the files are indeed larger, but the quality is also better than transcodes.

    To clarify the article’s confusing statement: they claim that remuxes can reduce size by throwing away some audio streams, while keeping the original video. This is true, but the video itself hasn’t gotten any smaller: you are simply throwing away other information.
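
    Concretely, the kind of remux the article is describing amounts to a stream copy that keeps the video untouched and drops the extra audio tracks, something like this (filenames and stream indices are examples):

    ```python
    # Remux: copy the original video bit-for-bit, keep only one audio track.
    # Requires ffmpeg on the PATH; nothing is re-encoded.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "movie.mkv",
        "-map", "0:v:0",   # keep the first video stream as-is
        "-map", "0:a:0",   # keep only the first audio stream
        "-c", "copy",      # stream copy, no transcoding
        "movie.remux.mkv",
    ], check=True)
    ```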


  • Yes, some minor formatting changes occur when opening a docx file in libreoffice. Hardly sounds like a deal breaker to me. And yes, you do get a pop-up when saving to docx in libreoffice (with the toggle to disable the pop-ups right there in the message). Microsoft office does the exact same thing when saving to an odt file, though.

    Once again, if you have to collaborate with office-users (and you cannot deal with the horror of having a different amount of space between the items), just use office online. How many times do I have to repeat myself?

    > Let me guess you’re someone who works in IT and never had a typical “office job” that includes spending 90% of your time writing reports and pushing spreadsheets around.

    1. No, I do not work in IT, nor do I aspire to work in IT. I’m just a regular PC-user, who just so happens to have other opinions than you do. HOW DARE I?!?
    2. Wouldn’t IT-workers of all people know what the more optimized editors are?

    > This is why you don’t get it, you’re not the typical user of MS Office and you don’t share the same use cases the OP, the article author and myself share.

    1. The article you shared was talking about gaming, the adobe creative suite, virtual machines, electrical engineers, labs, architects and sysadmins/developers. Please don’t try to claim that the article author and OP ever had “the same use cases”.
    2. I guess you are finally correct though, I’m indeed not the typical user of MS Office (thank god). The typical user pays $70 a year just to edit word docs, while calling the family tech support each time they try to add a horizontal page in word. If your use case is being locked into a proprietary office solution, where you have to provide a reason before microsoft allows you to shut down your onedrive, and where all your documents are saved in a mythical “cloud”, then I am glad that our use-cases differ.
    3. I hope you see the irony of you using markdown in a comment describing why I am “out of touch” for using markdown.

    If you want to use windows, that’s fine. But please don’t share such blatantly ignorant articles, and don’t try to defend them when multiple people point out why it is wrong about so many things.

    I probably won’t reply to your next reaction (should there be any) unless you come up with some actual arguments, instead of “the line spacing is broken, you’re out of touch, not me”.