Pretraining Problem

  • #9099
    Ismail111
    Participant

      Hi guys! I’ve never created a video with DeepFaceLab before. I heard that if I do pretraining first, every deepfake video I make afterwards will be faster and easier to train. When I open the DeepFaceLab folder, I run the 6) train SAEHD file directly, but I get an error at the last step because of the values I entered.

      My system specs:

      Graphics card: RTX 3060
      CPU: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30 GHz
      RAM: 16 GB

      Can you tell me what path I should follow and what values I should enter?

      #9102
      deepfakeclub
      Participant

        I always start with the default values.

        Also, if you are a beginner, I would not recommend trying to do all the complicated stuff at once.

        This is what I would recommend:

        Train using the default values (see the example prompts at the end of this post).

        Delete the model that is already in the model folder.

        Create a completely new one, or import a pretrained model that you can find somewhere; this site offers really good community-made models.

        If that didn’t help, check your workspace folder and see if everything is correct in there.
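
        Here is roughly what a fresh SAEHD run with the defaults accepted and pretraining turned on looks like. The exact prompt wording, value ranges, and defaults differ between builds, so treat every number below as an illustration, not a requirement:

        [4] Batch_size ( ?:help ) : 4
        [128] Resolution ( 64-640 ?:help ) : 128
        [y] Place models and optimizer on GPU ( y/n ?:help ) : y
        [n] Enable pretraining mode ( y/n ?:help ) : y

        Once it gets past initialization and shows the preview window, you can just let it pretrain and worry about the other options later.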

        #9103
        Ismail111
        Participant

          I deleted the previous model and created a new one. I tried SAEHD training again, but I got the same error. I can’t figure out why I’m getting this error. I’m about to go crazy…

          [n] Enable pretraining mode ( y/n ?:help ) : y
          Initializing models: 80%|##################################################4 | 4/5 [01:31<00:22, 22.88s/it]
          Error: OOM when allocating tensor with shape[131072,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
          [[node src_dst_opt/ms_inter_AB/dense1/weight_0/Assign (defined at C:\Users\Ersin\Desktop\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series_build_11_20_2021\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:37) ]]
          Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn’t available when running in Eager mode.
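
          If I’m reading the error right, the single tensor it fails to allocate is not even that big; multiplying out the shape from the message gives about 128 MiB, so it looks like the card is already nearly full by the time the optimizer is being set up:

          # Rough size of the tensor that failed to allocate:
          # shape [131072, 256], float32 = 4 bytes per value
          print(131072 * 256 * 4 / 2**20, "MiB")  # -> 128.0 MiB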

          #10304
          mrthong
          Participant

            Lower your batch size
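
            When the trainer asks for it, try something like half of whatever you used before. The default shown in brackets and the exact prompt text depend on your build and model settings, so this line is only an illustration:

            [8] Batch_size ( ?:help ) : 4

            Depending on the build there is also a "Place models and optimizer on GPU" prompt; answering n keeps the optimizer state (the src_dst_opt tensors in your error) in system RAM instead of VRAM, trading some speed for headroom.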
