DeepFaceLab
==RHEL 8 Installation==
<ref>https://linuxconfig.org/how-to-install-ffmpeg-on-redhat-8</ref>
<ref>https://pub.dfblue.com/pub/2019-10-25-deepfacelab-tutorial</ref>
<ref>https://pypi.org/project/opencv-python/</ref>
As of this writing, pip3.6 will install TensorFlow 2.3.0, which does not support ConfigProto<ref>https://stackoverflow.com/questions/58726388/how-can-i-fix-attributeerror-module-tensorflow-has-no-attribute-configproto?noredirect=1&lq=1</ref>. The fix is to install an older version; in this case I simply used the version listed in the Stack Overflow post.
<pre>
mkdir build && cd build
sudo dnf groupinstall "Development Tools"
sudo dnf install git
git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg && cd ffmpeg
./configure --disable-x86asm
make
sudo make install
cd ../
git clone https://github.com/nagadit/DeepFaceLab_Linux.git && cd DeepFaceLab_Linux/scripts
chmod +x *
git clone https://github.com/iperov/DeepFaceLab.git
sudo pip3.6 install --upgrade pip
pip3.6 install --user colorama
pip3.6 install --user numpy
pip3.6 install --user scikit-build
pip3.6 install --user opencv-python-headless
pip3.6 install --user tqdm
pip3.6 install --user ffmpeg-python
pip3.6 install --user tensorflow==1.14
pip3.6 install --user pillow
pip3.6 install --user scipy
</pre>
Optionally, for NVIDIA GPU support:
<pre>
pip3.6 install --user tensorflow-gpu==1.13.2
pip3.6 install --user tensorflow-auto-detect
</pre>

==Usage==
*NOTE: As of this writing, OpenCL is not fully supported due to the missing ConfigProto conversion for newer versions of TensorFlow. This means no AMD or Intel GPU support.
My use case was a prank, which was an excuse to play around with the technology.
*Downloaded three video conference calls in which the target of the prank was prominent.
*Using kdenlive, I removed all video containing other people, merged the remainder into one video, then removed all instances of the target covering their face. In the end I had almost 30 minutes of video.
*I downloaded the destination video from YouTube. It was an interview with someone the target doesn't like, onto whom the target's face will be placed. I will also play around with head swapping, but the destination subject has a lot more hair than the target.
*The destination was a very short clip, but it had other people in it. I cut out anything with other people and will add them back in after the swap.
*I ran the following to get started:
<pre>
./env.sh
./1_clear_workspace.sh
</pre>
*I copied the source video to build/DeepFaceLab_Linux/scripts/workspace/data_src.mp4
*I copied the destination video to build/DeepFaceLab_Linux/scripts/workspace/data_dst.mp4
*At this point I extracted the frames from the source using the defaults. This ran at 0.99x, so it took slightly longer than the video's length.
<pre>./2_extract_image_from_data_src.sh</pre>
*Then I kicked off the facial extraction from the source, using defaults.
<pre>./4_data_src_extract_faces_S3FD.sh</pre>
On my Dell OptiPlex 9020M with an i5-4590T and no video card, face extraction ran at ~3.22 s/it. With 51,368 frames at 3.22 seconds each, that works out to roughly 46 hours total, which is consistent with being at ~37% after 17 hours.
*Now extract the frames from the destination. In my case I edited the destination video to only contain the target face.
<pre>
./3_extract_image_from_data_dst.sh
</pre>
*Now extract the faces. At this point I moved the process to my workstation at the office, since it has a GPU; however, it still ran only on the CPU, an AMD Ryzen 5 3400G with eight threads (its integrated Vega graphics is an AMD GPU, so it falls under the OpenCL limitation noted above). It wasn't much faster: 2.90 s/it vs 3.22 s/it. A device-listing check is sketched after the next block.
<pre>
./5_data_dst_extract_faces_S3FD.sh
</pre>
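To see which devices this TensorFlow build can actually use, here is a minimal check (a sketch; device_lib is TensorFlow's own device-listing helper, and the file name is just an example):
<pre>
# list_devices.py (example name): print the devices TensorFlow 1.14 can see.
# On a setup without a supported NVIDIA GPU, only a CPU device is reported.
from tensorflow.python.client import device_lib

for device in device_lib.list_local_devices():
    print(device.name, device.device_type)
</pre>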
*Now we can work on training.<ref>https://mrdeepfakes.com/forums/thread-1-1-sfw-guide-deepfacelab-2-0-guide-recommended-up-to-date</ref> After training for several days I had a good start. However, I noticed that the skin tones were not matching up, so I went back and edited the source material to make the match easier. I also noticed that using the whole face was messing up the hair on the target, so after rerunning the source steps I can begin training again using partial face instead.
<pre>
./6_train_SAEHD_no_preview.sh

...

==---------- Model Options -----------==
== ==
== resolution: 256 ==
== face_type: f ==
== models_opt_on_gpu: False ==
== archi: df-u ==
== ae_dims: 256 ==
== e_dims: 256 ==
== d_dims: 256 ==
== d_mask_dims: 84 ==
== masked_training: True ==
== eyes_prio: True ==
== uniform_yaw: False ==
== lr_dropout: cpu ==
== random_warp: False ==
== gan_power: 0.0 ==
== true_face_power: 0.0 ==
== face_style_power: 0.0 ==
== bg_style_power: 0.0 ==
== ct_mode: none ==
== clipgrad: False ==
== pretrain: False ==
== autobackup_hour: 6 ==
== write_preview_history: False ==
== target_iter: 50000 ==
== random_flip: True ==
== batch_size: 4 ==
== ==
</pre>

==Messing Around==
<ref>https://www.tensorflow.org/api_docs/python/tf/compat/v1/ConfigProto</ref><ref>https://github.com/tensorflow/tensorflow/issues/18538</ref>
Notes on working around the missing ConfigProto under newer TensorFlow: the file to edit and the tf.compat.v1 replacements are listed below.
<pre>
DeepFaceLab_Linux-master/scripts/DeepFaceLab/core/leras/nn.py
tf.compat.v1.ConfigProto
tf.compat.v1.Session()
</pre>
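A minimal sketch of what that substitution looks like in practice (this is not the exact DeepFaceLab code; the gpu_options line is only an illustrative setting):
<pre>
# Sketch only: on TensorFlow 2.x the 1.x symbols live under tf.compat.v1,
# so references to tf.ConfigProto() and tf.Session() can be pointed at the
# compat aliases instead.
import tensorflow as tf

config = tf.compat.v1.ConfigProto()          # replaces tf.ConfigProto()
config.gpu_options.allow_growth = True       # illustrative option, not required
sess = tf.compat.v1.Session(config=config)   # replaces tf.Session(config=config)
</pre>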