
pyramidbox_lite_server_mask

Module Name: pyramidbox_lite_server_mask
Category: face detection
Network: PyramidBox
Dataset: WIDER FACE Dataset + Baidu Face Dataset
Fine-tuning supported or not: No
Module Size: 1.2 MB
Latest update date: 2021-02-26
Data indicators: -

I.Basic Information

  • Application Effect Display

    • Sample results:


  • Module Introduction

    • PyramidBox-Lite is a lightweight model based on PyramidBox, which was proposed by Baidu at ECCV 2018. The model is robust to interference such as lighting and scale variation. This module is based on PyramidBox, trained on the WIDER FACE Dataset and the Baidu Face Dataset, and can be used for mask detection.

II.Installation
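
  • A minimal sketch of the installation step (it assumes PaddlePaddle and PaddleHub are already installed; the version pin used in the Release Note section is optional):

    • $ hub install pyramidbox_lite_server_mask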

III.Module API Prediction

  • 1、Command line Prediction

    • $ hub run pyramidbox_lite_server_mask --input_path "/PATH/TO/IMAGE"
    • If you want to call the Hub module through the command line, please refer to: PaddleHub Command Line Instruction
  • 2、Prediction Code Example

    • import paddlehub as hub
      import cv2
      
      mask_detector = hub.Module(name="pyramidbox_lite_server_mask")
      result = mask_detector.face_detection(images=[cv2.imread('/PATH/TO/IMAGE')])
      # or
      # result = mask_detector.face_detection(paths=['/PATH/TO/IMAGE'])
  • 3、API

    • def face_detection(images=None,
                         paths=None,
                         batch_size=1,
                         use_gpu=False,
                         visualization=False,
                         output_dir='detection_result',
                         use_multi_scale=False,
                         shrink=0.5,
                         confs_threshold=0.6)
      • Detect all faces in an image and judge whether each face is wearing a mask.

      • Parameters

        • images (list[numpy.ndarray]): image data, with ndarray.shape in the format [H, W, C] and BGR channel order;
        • paths (list[str]): image paths;
        • batch_size (int): batch size;
        • use_gpu (bool): whether to use GPU; if so, set the CUDA_VISIBLE_DEVICES environment variable first;
        • visualization (bool): whether to save the visualized results as image files;
        • output_dir (str): directory in which to save the output images;
        • use_multi_scale (bool): whether to detect across multiple scales;
        • shrink (float): the scale by which the input image is resized before detection;
        • confs_threshold (float): the confidence threshold.

        NOTE: provide the input through exactly one of images and paths.

      • Return

        • res (list[dict]): detection results, one dict per input image (see the usage sketch after this list)
          • path (str): path of the input image
          • data (list): detection results; each element in the list is a dict with
            • label (str): 'MASK' or 'NO MASK'
            • confidence (float): confidence of the prediction
            • left (int): x coordinate of the upper-left corner of the detection box
            • top (int): y coordinate of the upper-left corner of the detection box
            • right (int): x coordinate of the lower-right corner of the detection box
            • bottom (int): y coordinate of the lower-right corner of the detection box
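
      • A minimal usage sketch that ties the call and the documented return structure together (the image path is a placeholder; visualization and output_dir are optional):

        • import paddlehub as hub
          import cv2
          
          mask_detector = hub.Module(name="pyramidbox_lite_server_mask")
          # visualization=True also writes annotated images to output_dir
          results = mask_detector.face_detection(
              images=[cv2.imread('/PATH/TO/IMAGE')],
              visualization=True,
              output_dir='detection_result')
          
          # one dict per input image; 'data' holds the per-face detections
          for res in results:
              for face in res['data']:
                  print(face['label'], face['confidence'],
                        face['left'], face['top'], face['right'], face['bottom'])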
    • def save_inference_model(dirname)
      • Save the model to a specified path

      • Parameters

        • dirname: model save path

IV.Server Deployment

  • PaddleHub Serving can deploy an online face detection service.

  • Step 1: Start PaddleHub Serving

    • Run the startup command:

    • $ hub serving start -m pyramidbox_lite_server_mask
    • The serving API is now deployed, and the default port is 8866.

    • NOTE: If a GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
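
    • For example, a minimal sketch of starting the service with a GPU (assuming GPU index 0 is to be used):

    • $ export CUDA_VISIBLE_DEVICES=0
      $ hub serving start -m pyramidbox_lite_server_mask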

  • Step 2: Send a prediction request

    • With the server configured, use the following lines of code to send a prediction request and obtain the result:

    • import requests
      import json
      import cv2
      import base64
      
      
      def cv2_to_base64(image):
        data = cv2.imencode('.jpg', image)[1]
        return base64.b64encode(data.tobytes()).decode('utf8')
      
      # Send an HTTP request
      data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]}
      headers = {"Content-type": "application/json"}
      url = "http://127.0.0.1:8866/predict/pyramidbox_lite_server_mask"
      r = requests.post(url=url, headers=headers, data=json.dumps(data))
      
      # print prediction results
      print(r.json()["results"])
  • Gradio APP support

    Starting with PaddleHub 2.3.1, the Gradio app for pyramidbox_lite_server_mask can be accessed in the browser at http://127.0.0.1:8866/gradio/pyramidbox_lite_server_mask.

V.Paddle Lite Deployment

  • Save model demo

    • import paddlehub as hub
      pyramidbox_lite_server_mask = hub.Module(name="pyramidbox_lite_server_mask")
      
      # save model in directory named test_program
      pyramidbox_lite_server_mask.save_inference_model(dirname="test_program")
  • Transform the model

    • The model downloaded from PaddleHub is a prediction model. To deploy it on a mobile device, it can be transformed with the OPT tool provided by Paddle Lite; for more information, please refer to the OPT tool documentation.
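
    • A command-line sketch of the transformation, assuming paddle_lite_opt has been installed via pip (pip install paddlelite) and that the model was saved to test_program as in the demo above; the model and parameter file names are assumptions and may differ between PaddleHub versions, so check the contents of test_program first:

    • # the file names passed below are assumptions; adjust them to the actual files in test_program
      $ paddle_lite_opt \
          --model_file=test_program/model.pdmodel \
          --param_file=test_program/model.pdiparams \
          --optimize_out_type=naive_buffer \
          --optimize_out=pyramidbox_lite_server_mask_opt \
          --valid_targets=arm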
  • Deploy the model with Paddle Lite

VI.Release Note

  • 1.0.0

    First release

  • 1.3.2

    Remove the Fluid API

  • 1.4.0

    Fix a bug in save_inference_model

  • 1.5.0

    Add Gradio APP support.

    • $ hub install pyramidbox_lite_server_mask==1.5.0