Introduction to the DeepFace Module

Introduction

Hello! In this article I will be introducing the DeepFace module.


What is the DeepFace module?

DeepFace is a lightweight face recognition and facial attribute analysis (age, gender, emotion and race) framework for Python.

In simple terms, DeepFace can analyse a variety of facial attributes without the need to train your own models.
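For example, besides the attribute analysis covered in this article, DeepFace can also compare two faces with a single call. Below is a minimal sketch; the image paths are placeholders and the exact fields of the result may vary slightly between versions:

from deepface import DeepFace

# Verify whether two images contain the same person (the paths are placeholders)
result = DeepFace.verify(img1_path="person1.jpg", img2_path="person2.jpg")

print(result["verified"])  # True if the faces are judged to belong to the same person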


Preparing the environment

First we need to create and activate the virtual environment that we will be using for this example. This can be done via:

python3 -m venv env
source env/bin/activate



Installing the dependencies

Create a requirements.txt file and add the following:

# requirements.txt
deepface


Then install via:

pip install -r requirements.txt



Writing the source code

Create a main.py file and import the following modules:

import argparse

from deepface import DeepFace


Next we need to write the main block of the script:

if __name__ == "__main__":
    # Parse the path to the input image from the command line
    ap = argparse.ArgumentParser()
    ap.add_argument("-i", "--image", required=True, help="Path to input image")
    args = vars(ap.parse_args())

    img_path = args["image"]

    # Run the facial attribute analysis on the supplied image
    face_analysis = DeepFace.analyze(img_path=img_path)

    print("gender:", face_analysis["gender"])
    print("age:", face_analysis["age"])
    print("dominant_race:", face_analysis["dominant_race"])
    print("dominant_emotion:", face_analysis["dominant_emotion"])


The above code accepts the path to an image file as a command-line argument and then passes that image to DeepFace for analysis.
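By default, analyze runs every attribute model (emotion, age, gender and race). If you only need a subset, it also accepts an actions parameter. A small sketch reusing the img_path variable from the script above (the exact action names may differ slightly between versions):

# Only run the age and gender models instead of all four
face_analysis = DeepFace.analyze(img_path=img_path, actions=["age", "gender"])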

The code can then be run via:

python main.py -i lena.jpg


Please note that the first time you run this script, the pretrained models will need to be downloaded, which can take some time.

If you were to print the whole of face_analysis, you would get the following output:

{'emotion': {'angry': 0.09911301312968135, 'disgust': 1.032224883346089e-06, 'fear': 2.6556044816970825, 'happy': 0.01839055767050013, 'sad': 65.46446681022644, 'surprise': 0.0007067909336910816, 'neutral': 31.761714816093445}, 'dominant_emotion': 'sad', 'region': {'x': 177, 'y': 77, 'w': 68, 'h': 68}, 'age': 31, 'gender': 'Woman', 'race': {'asian': 0.18712843253856495, 'indian': 0.08294145721779508, 'black': 0.007420518965146703, 'white': 90.12329519529911, 'middle eastern': 3.5380205385697208, 'latino hispanic': 6.061198178601156}, 'dominant_race': 'white'}
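As you can see, the emotion and race keys contain a score for every class, so you can also drill into the raw probabilities. A small sketch based on the structure shown above:

# face_analysis is the dictionary shown above
emotion_scores = face_analysis["emotion"]

# Print the emotion classes from most to least likely
for emotion, score in sorted(emotion_scores.items(), key=lambda item: item[1], reverse=True):
    print(f"{emotion}: {score:.2f}%")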


If you were to print only the gender/age/race/emotion fields, as the script above does, you would get the following output:

gender: Woman
age: 31
dominant_race: white
dominant_emotion: sad


Feel free to try the example with a variety of your own images.
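One thing to keep in mind: if DeepFace cannot find a face in the image, analyze will raise an error (in the versions I have used, a ValueError). A defensive sketch using the enforce_detection parameter, which relaxes the face detection check:

try:
    face_analysis = DeepFace.analyze(img_path=img_path)
except ValueError:
    # No face could be detected; retry without enforcing face detection
    face_analysis = DeepFace.analyze(img_path=img_path, enforce_detection=False)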


Conclusion

In this article I have introduced the DeepFace module. I have had experience training my own models, but I thought this module was very helpful: it can be used with just a few lines of code, without the need to train anything yourself.

Feel free to try it out, and let me know if there are any other useful modules you would recommend.

The code can be found at: https://github.com/ethand91/simple_deepface_example

Happy Coding!


Like my work? I post about a variety of topics, so if you would like to see more, please like and follow me.
Also, I love coffee.
