Search Results
Topic: Command prompt
On my previous Windows 10 laptop I could run DeepFaceLab with no problems. On my new laptop, which runs Windows 11 Home with an NVIDIA RTX 4070, every time I try to start extracting a movie the program opens a Windows\System32 command prompt window and nothing else happens. I tried both the RTX3000 and the DirectX12 versions, but both have this issue. How can I solve this?
Hello!
I'm trying to add a pretrained model for SAEHD. In this tutorial "https://www.deepfakevfx.com/tutorials/deepfacelab-2-0-pretraining-tutorial/" (timestamp 1:15) it says that the .pak file of the downloaded pretrain faceset should be dropped into the "pretrain faces" folder. However, when I open the downloaded folder there is no .pak file to use, only .dat, .npy, .jpg and .txt files. For reference, I downloaded "DF-UDT WF 384" from this site. Any help on what I'm missing here would be appreciated!
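If a downloaded faceset only contains loose aligned files rather than a faceset.pak, one option is to pack them yourself: drop them into workspace\data_src\aligned and run the stock "4.2) data_src util faceset pack.bat" (if your build includes it), or call the packer directly. A minimal sketch, assuming DeepFaceLab's bundled samplelib.PackedFaceset exposes a pack(samples_path) helper; run it with the build's own Python from the _internal\DeepFaceLab folder, and adjust the path:

    # Sketch: pack a folder of loose aligned DFL jpgs into faceset.pak.
    # Assumes the bundled samplelib.PackedFaceset.pack(samples_path) API.
    from pathlib import Path
    from samplelib import PackedFaceset

    aligned_dir = Path(r"C:\DeepFaceLab\workspace\data_src\aligned")  # hypothetical location of the loose files
    PackedFaceset.pack(aligned_dir)  # writes faceset.pak next to the images
    print("packed:", aligned_dir / "faceset.pak")

The resulting faceset.pak can then be dropped into the pretrain folder the tutorial refers to.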
Topic: How to merge DFM file?
I downloaded the DFM file in DeepFaceLive and copied it to the workspace/model directory in DeepFaceLab. I can only see my own trained Quick96 model using 7) merge Quick96.bat, but I can’t find any models using 7) merge AMP.bat or 7) merge SAEHD.bat. Why is that?
I’m experiencing a persistent issue with DeepFaceLab where it freezes when attempting to train using the SAEHD or AMP models. The program gets stuck after loading data, without consuming any CPU, memory, or GPU resources. This issue occurs even when pretraining is disabled. Interestingly, the Quick96 and XSeg models work perfectly fine.
Troubleshooting Steps Taken:
Data Check: Verified the training data quality and paths, ensuring they are not the cause of the issue.
Environment Verification: Confirmed DeepFaceLab is using its built-in CUDA and Python environment (see the quick GPU check sketch after this list).
GPU Driver Check: Installed the latest GPU drivers for my RTX 4060 Ti.
Resource Monitoring: Monitored GPU, CPU, and RAM usage, observing no activity when the program freezes.
DeepFaceLab Logs: Checked DeepFaceLab logs, but no obvious errors were found.
train.py Configuration: Modified batch_size and resolution in train.py to reduce memory usage, but the problem persists.
Pretraining Disabled: Disabled pretraining for SAEHD model, issue remains.
Software Conflict Check: Closed unnecessary software and services, still unable to resolve issue.
Dependency Analysis: Used pip list to check and tried updating/downgrading some DeepFaceLab dependency libraries, but issue remains.
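One more quick way to narrow this down is to confirm that the bundled TensorFlow can actually initialize the RTX 4060 Ti outside of training. A minimal sketch, assuming the RTX3000 build ships TensorFlow 2.x; run it with the bundled interpreter (e.g. _internal\python-3.6.8\python.exe):

    # Sketch: check that DFL's bundled TensorFlow sees and can use the GPU.
    import tensorflow as tf

    print("TF version:", tf.__version__)
    print("GPUs visible:", tf.config.list_physical_devices("GPU"))

    # A tiny op forced onto the GPU; if this also hangs, the freeze is likely in the
    # driver/CUDA layer rather than in the SAEHD/AMP model code.
    with tf.device("/GPU:0"):
        a = tf.random.normal([1024, 1024])
        b = tf.matmul(a, a)
    print("matmul ok, checksum:", float(tf.reduce_sum(b)))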
Possible Causes (Speculated):
Hardware Incompatibility: A potential compatibility issue between my RTX 4060 Ti and DeepFaceLab, especially with SAEHD and AMP models.
DeepFaceLab Code Bug: A potential bug within the SAEHD or AMP model code in DeepFaceLab, leading to the freeze.
Software Conflict: A potential software conflict, specific to my system environment, impacting only the SAEHD and AMP models.
DeepFaceLab Version Issue: A potential bug within my specific DeepFaceLab version.
Seeking Help On:
RTX 4060 Ti Compatibility: Are there any known compatibility issues with RTX 4060 Ti and DeepFaceLab’s SAEHD/AMP models?
DeepFaceLab Code Analysis: If someone is familiar with DeepFaceLab code, could you help analyze the SAEHD and AMP model code, focusing on:
Model initialization.
Data loading and preprocessing.
Loss function calculation.
Gradient update.
Code related to hardware resource allocation and instruction sets.
Software Conflicts: Are there any known software conflicts specific to DeepFaceLab, especially impacting only the SAEHD and AMP models?
Dependency Issues: Any possible dependency issues that might cause this freezing behavior? I can provide a list of dependency versions using pip list.
System Information:
GPU: RTX 4060 Ti 16GB
CPU: i7 Processor
RAM: 32GB
Operating System: Windows 11
DeepFaceLab Version: DeepFaceLab_NVIDIA_RTX3000_series
CUDA: DeepFaceLab’s built-in CUDA
Python: DeepFaceLab’s built-in Python
Additional Information:
I have tried everything listed above but the issue persists. Any insights or suggestions would be greatly appreciated.
Topic: Video card performs poorly
Hello there
Yesterday I built my new PC with a 4070 Super, but in DeepFaceLab it performs worse than the 1080 Ti in the video example. What should I do about it?
Thanks!
Topic: is not a dfl image file
Hi. I trained my src faceset in DeepFaceLab, but I found some problems with the src images,
so I edited the aligned jpg files in Adobe Photoshop.
I edited about 2,000 of them,
but now they can't be used in DeepFaceLab. I found a similar issue on GitHub:
https://github.com/iperov/DeepFaceLab/issues/5276
The comment there reads "u can convert the files with irfan view or xnviewmp",
so I tried converting the 2,000+ jpg files, but DFL still says "is not a dfl image file".
When I try to pack data_src, it says "is not a dfl image file",
and when I try to train SAEHD, the same thing happens. Is this error occurring because the images have no DFL metadata?
I don't have the original files anymore.
How can I solve this problem? Please help me. I spent a lot of time editing in Photoshop and converting jpg files, so I'm very sad now.
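The error usually means what the poster suspects: re-saving an aligned jpg in an external editor strips the DFL face metadata embedded in the file, and a format converter cannot regenerate it. A minimal sketch to count how many files are affected, assuming the build's bundled DFLIMG.load()/has_data() API; run it from the _internal\DeepFaceLab folder with the bundled Python:

    # Sketch: report how many aligned jpgs are missing DFL metadata
    # (the cause of "is not a dfl image file").
    from pathlib import Path
    from DFLIMG import DFLIMG

    aligned_dir = Path(r"C:\DeepFaceLab\workspace\data_src\aligned")  # adjust to your workspace
    bad = []
    for p in sorted(aligned_dir.glob("*.jpg")):
        dflimg = DFLIMG.load(p)
        if dflimg is None or not dflimg.has_data():
            bad.append(p.name)
    print(f"{len(bad)} files have no DFL metadata")

For future edits, the "4.2) data_src util faceset metadata save.bat" / "restore.bat" pair (if your build includes them) exists precisely so the metadata can be saved before editing externally and re-attached afterwards.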
I am trying to learn how to use DeepFaceLab. I have been following the setup guide found at this link.
When it opens, it says "[new] No saved models found. Enter a name of a new model". At that part I just enter "New",
then I select the GPU.
Once it starts running, I am met with the error below. Any ideas? Thanks!
Initializing models: 100%|###############################################################| 5/5 [00:01<00:00, 4.27it/s]
Loading samples: 0it [00:00, ?it/s]
Error: No training data provided.
Traceback (most recent call last):
  File "C:\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "C:\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\models\ModelBase.py", line 193, in __init__
    self.on_initialize()
  File "C:\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\models\Model_Quick96\Model.py", line 240, in on_initialize
    generators_count=src_generators_count ),
  File "C:\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 48, in __init__
    raise ValueError('No training data provided.')
ValueError: No training data provided.
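This ValueError is raised by SampleGeneratorFace when it finds zero usable samples, which almost always means the aligned folders are empty, i.e. faces have not yet been extracted for src and dst. A quick stdlib check (the install path below is just an example):

    # Sketch: count extracted faces in both aligned folders; "No training data provided."
    # is raised when one of them contains no usable samples.
    from pathlib import Path

    workspace = Path(r"C:\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\workspace")  # adjust to your install
    for side in ("data_src", "data_dst"):
        aligned = workspace / side / "aligned"
        count = len(list(aligned.glob("*.jpg"))) if aligned.exists() else 0
        print(f"{aligned}: {count} aligned faces")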
Hi,
I ran into this error while trying to export my SAEHD model to dfm:
Traceback (most recent call last):
  File "D:\DFL\WF_640\_internal\DeepFaceLab\main.py", line 416, in <module>
    arguments.func(arguments)
  File "D:\DFL\WF_640\_internal\DeepFaceLab\main.py", line 193, in process_exportdfm
    ExportDFM.main(model_class_name = arguments.model_name, saved_models_path = Path(arguments.model_dir))
  File "D:\DFL\WF_640\_internal\DeepFaceLab\mainscripts\ExportDFM.py", line 22, in main
    model.export_dfm ()
  File "D:\DFL\WF_640\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 1028, in export_dfm
    ['out_face_mask','out_celeb_face','out_celeb_face_mask']
  File "D:\DFL\WF_640\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 346, in new_func
    return func(*args, **kwargs)
  File "D:\DFL\WF_640\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\graph_util_impl.py", line 281, in convert_variables_to_constants
    variable_names_denylist=variable_names_blacklist)
  File "D:\DFL\WF_640\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\convert_to_constants.py", line 1282, in convert_variables_to_constants_from_session_graph
    variable_names_denylist=variable_names_denylist))
  File "D:\DFL\WF_640\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\convert_to_constants.py", line 1106, in _replace_variables_by_constants
    None, tensor_data)
  File "D:\DFL\WF_640\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\convert_to_constants.py", line 389, in convert_variable_to_constant
    tensor_data.numpy.shape)
  File "D:\DFL\WF_640\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 528, in make_tensor_proto
    "Cannot create a tensor proto whose content is larger than 2GB.")
ValueError: Cannot create a tensor proto whose content is larger than 2GB.
From what I've read online, this is a common issue when using a large model (not just in the deepfake world, but in every project using TensorFlow), and the solutions usually involve code tweaking that seems way over my Python level, or simply not having a model larger than 2GB.
I am surprised, since I used the 640 pretrained model available here: https://www.deepfakevfx.com/pretrained-models-saehd/df-ud-wf-640-7394/
But I can't find a similar topic in this forum. Has anyone already run into a similar issue?
Do I have to restart my project using a <2GB model? Thanks!
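The 2GB ceiling comes from protobuf: when export_dfm freezes the graph, the weights are serialized into tensor protos, and protobuf refuses anything over 2GB, so very large SAEHD configurations can fail here. A rough way to see how heavy the model actually is, assuming (as DFL builds normally do) that the weights are stored as *.npy files in the model folder:

    # Sketch: sum the saved weight files to gauge model size against protobuf's 2 GB limit.
    from pathlib import Path

    model_dir = Path(r"D:\DFL\WF_640\workspace\model")  # adjust to your model folder
    total = sum(f.stat().st_size for f in model_dir.glob("*.npy"))
    print(f"total weight size: {total / 2**30:.2f} GiB (protobuf limit is 2 GiB)")

If it really is over the limit, the practical options are the ones already mentioned in the post: a smaller architecture, or starting from a pretrained model under 2GB.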
During installation I received the log message:
Can not open output file : The system cannot find the path specified.: C:\Program Files\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\Lib\site-packages\tensorflow\include\external\cudnn_frontend_archive\_virtual_includes\cudnn_frontend
I’m not sure if this is related but when I run “extract images from video data_src.bat” I get the following error and the command fails:
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Program Files\\DeepFaceLab\\DeepFaceLab_NVIDIA_RTX3000_series\\_internal\\_e\\u\\AppData\\Roaming\\NVIDIA\\ComputeCache_ALL'
This is a clean install, so why would these files be missing, and how can I resolve it?
I'm running Windows 10 with an NVIDIA Quadro RTX 6000.
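Both messages point at files under _internal that a clean install should contain, which suggests the self-extracting archive did not finish writing into C:\Program Files (a protected, space-containing location). Re-extracting the build to a simple path such as C:\DeepFaceLab is the usual fix; if only the compute-cache folder is missing, a minimal sketch (using the path from the error verbatim) can create it by hand:

    # Sketch: recreate the missing redirected NVIDIA compute-cache folder, then re-run the .bat.
    # If other files under _internal are missing too, re-extract the whole build instead.
    import os

    cache_dir = (r"C:\Program Files\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series"
                 r"\_internal\_e\u\AppData\Roaming\NVIDIA\ComputeCache_ALL")
    os.makedirs(cache_dir, exist_ok=True)
    print("created:", cache_dir)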
I have a problem with training SAEHD.
I followed all the steps in the video until I got to the part where I train SAEHD. I keep getting the error message "Error: No training data provided."
The full output is below:
Running trainer.
Choose one of saved models, or enter a name to create a new model.
[r] : rename
[d] : delete
[0] : new - latest
: new
new
Loading new_SAEHD model...
Choose one or several GPU idxs (separated by comma).
[CPU] : CPU
[0] : NVIDIA GeForce RTX 3070 Laptop GPU
[0] Which GPU indexes to choose? :
0
Press enter in 2 seconds to override model settings.
[] Session name ( ?:help ) :
[0] Autobackup every N hour ( 0..24 ?:help ) :
0
[24] Maximum N backups ( ?:help ) :
24
[n] Write preview history ( y/n ?:help ) :
n
[4] Number of samples to preview ( 1 – 16 ?:help ) : 2
2
[n] Use old preview panel ( y/n ) :
n
[0] Target iteration :
0
[n] Retrain high loss samples ( y/n ?:help ) :
n
[n] Flip SRC faces randomly ( y/n ?:help ) :
n
[y] Flip DST faces randomly ( y/n ?:help ) :
y
[8] Batch_size ( ?:help ) : 4
4
[n] Use fp16 ( y/n ?:help ) :
n
[8] Max cpu cores to use. ( 1 – 256 ?:help ) :
8
[y] Masked training ( y/n ?:help ) :
y
[n] Eyes priority ( y/n ?:help ) :
n
[n] Mouth priority ( y/n ?:help ) :
n
[y] Uniform yaw distribution of samples ( y/n ?:help ) : n
[y] Blur out mask ( y/n ?:help ) :
y
[y] Place models and optimizer on GPU ( y/n ?:help ) : n
[y] Use AdaBelief optimizer? ( y/n ?:help ) :
y
[y] Use learning rate dropout ( n/y/cpu ?:help ) : n
n
[SSIM] Loss function ( SSIM/MS-SSIM/MS-SSIM+L1 ?:help ) :
SSIM
[5e-05] Learning rate ( 0.0 .. 1.0 ?:help ) :
5e-05
[y] Enable random warp of samples ( y/n ?:help ) :
y
[0.0] Random hue/saturation/light intensity ( 0.0 .. 0.3 ?:help ) :
0.0
[n] Enable random downsample of samples ( y/n ?:help ) :
n
[n] Enable random noise added to samples ( y/n ?:help ) :
n
[n] Enable random blur of samples ( y/n ?:help ) :
n
[n] Enable random jpeg compression of samples ( y/n ?:help ) :
n
[none] Enable random shadows and highlights of samples ( none/src/dst/all ?:help ) :
none
[0.0] GAN power ( 0.0 .. 10.0 ?:help ) :
0.0
[0.0] Background power ( 0.0..1.0 ?:help ) :
0.0
[0.0] Face style power ( 0.0..100.0 ?:help ) :
0.0
[0.0] Background style power ( 0.0..100.0 ?:help ) :
0.0
[none] Color transfer for src faceset ( none/rct/lct/mkl/idt/sot/fs-aug ?:help ) : fs-aug
fs-aug
[n] Random color ( y/n ?:help ) : y
[n] Enable gradient clipping ( y/n ?:help ) : y
[n] Enable pretraining mode ( y/n ?:help ) :
n
Initializing models: 100%|###############################################################| 5/5 [00:05<00:00, 1.07s/it]
Loaded 32961 packed faces from C:\zthers\DeepFaceLab_NVIDIA_RTX3000_series\workspace\data_src\aligned
Loading samples: 0it [00:00, ?it/s]
Error: No training data provided.
Traceback (most recent call last):
  File "C:\zthers\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\mainscripts\Trainer.py", line 106, in trainerThread
    debug=debug)
  File "C:\zthers\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\ModelBase.py", line 246, in __init__
    self.on_initialize()
  File "C:\zthers\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 892, in on_initialize
    generators_count=dst_generators_count
  File "C:\zthers\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 48, in __init__
    raise ValueError('No training data provided.')
ValueError: No training data provided.
Exception in thread Thread-4:
Traceback (most recent call last):
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "C:\zthers\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\mplib\__init__.py", line 38, in host_thread
    result.append(shuffle_idxs.pop())
IndexError: pop from empty list
