I’ve pirated every video converter known to man (UniConverter, WinX, VideoProc, Aiseesoft, Tipard, etc.) and even tried open-source tools like ffmpeg and HandBrake, and I can’t get hardware acceleration to work, unless I just don’t understand how it’s supposed to work. I have a Radeon RX 470 graphics card and plenty of processing power.

An example: when I convert a video to HEVC without acceleration, I get around 100 FPS and a 2-3 minute render time, but all my CPU cores max out.

However, when I turn on acceleration or use the AMD HEVC encoder (in ffmpeg or HandBrake), the frame rate drops to 10-15 FPS, the CPU barely goes over 10%, and the GPU jumps to 100%, which is fine, but then it tells me it’ll take about 20 minutes to render a 20-minute TV episode!?

This is driving me crazy. Can someone provide some insight on this? I’d be forever grateful. Thanks!

  • FBJimmy@lemmus.org · 1 month ago

    Based on how you’re seeing the load move from 100% CPU to 100% GPU, I would suggest that it is “working” to some extent.

    I don’t have any experience with that GPU, but here are a few things to keep in mind:

    1. When you use a GPU for video encoding, it isn’t ‘accelerating’ what you were doing without it. You’re switching from a software implementation of an HEVC encoder running on your CPU to a hardware implementation of an HEVC encoder built into your GPU. Hardware and software encoders are very different from one another and they won’t combine forces; it’s one or the other.

    2. Video encoders have literally hundreds of configuration options, and how you configure the encoder has a massive impact on encoding time. Getting results I’m happy with for archiving usually means encoding slower than real-time on my 5800X CPU; if you’re getting over 100 fps on your CPU, I would guess it’s set up with some very fast settings, which I wouldn’t recommend for anything other than real-time transcoding. Conversely, it’s possible you have slower settings configured for your GPU (see the example commands after this list).

    3. Video encoding is very difficult to do “well” in hardware. Generally speaking, software is better suited to the sorts of algorithms involved. A GPU can speed up an encode, but the result won’t be as good in terms of quality versus file size: for the same quality a GPU encode will be bigger, and for the same file size it will be lower quality.
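
    To make points 1 and 2 concrete, here are the kinds of ffmpeg invocations involved (the flags below are just an illustration, not your exact settings; the AMF/VAAPI options depend on your OS and drivers):

        # Software (CPU) HEVC encode with libx265; -preset trades speed for compression efficiency
        ffmpeg -i input.mkv -c:v libx265 -preset medium -crf 22 -c:a copy out_cpu.mkv

        # Hardware (GPU) HEVC encode on Windows via AMD AMF
        ffmpeg -i input.mkv -c:v hevc_amf -quality quality -c:a copy out_amf.mkv

        # Hardware (GPU) HEVC encode on Linux via VAAPI
        ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mkv \
               -vf format=nv12,hwupload -c:v hevc_vaapi -qp 22 -c:a copy out_vaapi.mkv

    On the libx265 side, the -preset scale (ultrafast through veryslow) is the single biggest speed lever; hardware encoders expose far fewer knobs like that, which is part of why the two behave so differently.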

    I guess this is a roundabout way of suggesting that if you’re happy with the quality of your 100 fps CPU encodes, stick with them!

    • N0x0n@lemmy.ml · 1 month ago

      This is probably the best answer you will get, OP! I have done some encodes (BD -> SVT-AV1), and everything FBJimmy said matches what I gathered in my own research on how to get the best quality/speed trade-off without losing too much fine detail.

      This won’t make you happy if what you want is GPU encoding, because GPU encoding is really meant for on-the-fly encoding (streaming via Twitch, YouTube, whatever…). It sounds like a nice idea, but CPU software encoding is far more efficient in terms of quality per file size.

      It seems you aren’t looking for maximum-quality encoding so much as speedy encoding? If that’s the case, then yeah, GPU encoding is probably the better fit here, but I can’t help much with that, sorry…

      Most of the encodes I have done with ffmpeg to AV1 ran at around 20 fps. Yes, it’s slow, but I get near-“lossless” quality with an acceptable file size to serve over Jellyfin. Also, I have never heard anyone say that 80-90% CPU utilization is bad for your CPU as long as your temps are all right (over 80°C would be a bit alarming). Sure, if you’re doing video encoding every day your CPU will wear over time, but that’s practically what it was built for… processing! And like everything, the more you use it, the more it wears out (the same goes for your GPU…)
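
      For reference, my AV1 encodes used something roughly like this (the preset and CRF numbers here are just illustrative, tune them to your own quality/speed target):

        # CPU AV1 encode with SVT-AV1; lower -preset values are slower but compress better
        ffmpeg -i input.mkv -c:v libsvtav1 -preset 5 -crf 28 -c:a copy output.mkv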

      But I can understand your determination and hope you find your way around it. I’m also stubborn when I want something to work the way I want.

    • Rodrigo_de_Mendoza@lemmy.dbzer0.com (OP) · 1 month ago

      It doesn’t help that I don’t have a very good grasp of the hardware mechanics of it. Thanks for trying to clarify for me! The thing that concerns me most about using the CPU for everything is that in most software I try, including HandBrake, if I let the CPU do all the processing, each CPU core goes to 100%, which is not good for the system for long periods of time, and I literally have hundreds of DVDs/Blu-rays I want to reprocess. I’ve always been told around 55-65% on each core is acceptable when processing video. Any additional information you can provide would be most appreciated.

      • we_avoid_temptation@lemmy.zip · 1 month ago

        each CPU core goes to 100%, which is not good for the system for long periods of time

        If you don’t have effective cooling, maybe, but I’ve never heard of any reason to keep core utilization under any specific percentage. Are your temps an issue?

        • Rodrigo_de_Mendoza@lemmy.dbzer0.com (OP) · 1 month ago

          No, not so far. No crashes or anything like that. Someone somewhere just told me a good range for video rendering was between 65-75% core usage.

          • catloaf@lemm.ee · 1 month ago

            I can think of no logical explanation for that. Maybe if you wanted to use CPU encoding and use the system at the same time. But given how many cores systems have these days, percentages don’t mean much. As long as you leave a few cores available, you’ll be able to use the system.

            If you don’t care about that, let it go to 100%.
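
            If you do want headroom for using the machine at the same time, one way (on Linux, just as an example) is to lower the encode’s priority or pin it to a subset of cores instead of chasing a utilization percentage:

              # Run the encode at the lowest CPU priority so the desktop stays responsive
              nice -n 19 ffmpeg -i input.mkv -c:v libx265 -preset slow -crf 22 out.mkv

              # Or pin ffmpeg to cores 0-5 and leave the rest free for everything else
              taskset -c 0-5 ffmpeg -i input.mkv -c:v libx265 -preset slow -crf 22 out.mkv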

          • Kissaki@lemmy.dbzer0.com · 1 month ago

            That’s bullshit. There’s no reason to limit or target a specific or non-maximum CPU core usage.

            That would only make sense to work around hardware faults or cooling issues, never as a general guideline.

          • Paula_Tejando@lemmy.eco.br · 1 month ago

            A good range for CPU utilization is 100%. Same for memory. Anything less and you’re wasting your computer: power is still flowing through the components and aging them either way, just without much benefit.