DeepFaceLab
Revision as of 08:42, 20 September 2020
RHEL 8 Installation
[1] [2] [3] As of this writing, pip3.6 installs tensorflow 2.3.0, which does not support ConfigProto[4]. The fix is to install an older version; in this case I just used the version listed in the Stack Overflow post.
mkdir build && cd build
sudo dnf groupinstall "Development Tools"
sudo dnf install git
git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg && cd ffmpeg
./configure --disable-x86asm
make
sudo make install
cd ../
git clone https://github.com/nagadit/DeepFaceLab_Linux.git && cd DeepFaceLab_Linux/scripts
chmod +x *
git clone https://github.com/iperov/DeepFaceLab.git
sudo pip3.6 install --upgrade pip
pip3.6 install --user colorama
pip3.6 install --user numpy
pip3.6 install --user scikit-build
pip3.6 install --user opencv-python-headless
pip3.6 install --user tqdm
pip3.6 install --user ffmpeg-python
pip3.6 install --user tensorflow==1.14
pip3.6 install --user pillow
pip3.6 install --user scipy
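The per-package pip3.6 installs above could equally be captured in a requirements file (a sketch; only tensorflow is pinned, matching the commands):

```
# requirements.txt (sketch) -- install with: pip3.6 install --user -r requirements.txt
colorama
numpy
scikit-build
opencv-python-headless
tqdm
ffmpeg-python
tensorflow==1.14
pillow
scipy
```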
Usage
My use case was a prank, which was an excuse to play around with the technology.
- Downloaded three video conference calls where the target of the prank was prominent.
- Using kdenlive, I removed all video containing other people, merged the remainder into one video, then removed all instances of the target covering their face. In the end I had almost 30 minutes of video.
- I downloaded the destination video from YouTube. It was an interview with someone the target doesn't like, onto which the target's face will be placed. I will also play around with head swapping, but the destination has a lot more hair than the target.
- The destination was a very short clip, but it had other people in it. I cut out anything with other people and will add them back in after the swap.
- I ran the following to get started:
./env.sh
./1_clear_workspace.sh
- I copied the source video to build/DeepFaceLab_Linux/scripts/data_src.mp4
- Copied destination video to build/DeepFaceLab_Linux/scripts/data_dst.mp4
- At this point I extracted the frames from the source using defaults. This ran at 0.99x, so it took slightly longer than the video's length.
./2_extract_image_from_data_src.sh
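A quick sanity check on that rate (shell arithmetic; the ~30 minutes of edited source and the 51,368 extracted frames are the figures from this log):

```shell
# ~30 minutes of edited source video yielded 51,368 frames.
video_secs=$(( 30 * 60 ))
frames=51368
echo "source is ~$(( frames / video_secs )) fps"          # ~28 fps
# At 0.99x realtime, extraction takes video length / 0.99:
echo "extraction: ~$(( video_secs * 100 / 99 )) seconds"  # ~1818 s, just over 30 min
```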
- Then I kicked off the facial extraction from the source, using defaults.
./4_data_src_extract_faces_S3FD.sh
On my Dell Optiplex 9020M with an i5-4590T and no video card, I was able to extract faces at ~3.22 s/it across 51,368 frames. After 17 hours I was at ~37%.
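Those numbers line up; a back-of-the-envelope estimate of the total runtime, using the figures above (integer shell arithmetic, so the rate is kept in hundredths of a second):

```shell
# Figures from the run above: 51,368 frames at ~3.22 s/frame on CPU.
frames=51368
centisecs_per_frame=322   # 3.22 s, in hundredths to stay integer
total_secs=$(( frames * centisecs_per_frame / 100 ))
echo "total: ~$(( total_secs / 3600 )) hours"                 # ~45 hours
echo "after 17h: $(( 17 * 3600 * 100 / total_secs ))% done"   # ~37%
```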
- Now extract the frames from the destination. In my case I edited the destination video to only contain the target face.
./3_extract_image_from_data_dst.sh
- ↑ https://linuxconfig.org/how-to-install-ffmpeg-on-redhat-8
- ↑ https://pub.dfblue.com/pub/2019-10-25-deepfacelab-tutorial
- ↑ https://pypi.org/project/opencv-python/
- ↑ https://stackoverflow.com/questions/58726388/how-can-i-fix-attributeerror-module-tensorflow-has-no-attribute-configproto?noredirect=1&lq=1