65 changes: 48 additions & 17 deletions README.md
@@ -1,25 +1,56 @@
train-detector
==============

This repository contains scripts that will help train a license plate detector
for a particular region. Your trained region detector can then be used in
OpenALPR.

The license plate region detector uses the Local Binary Pattern (LBP)
algorithm. In order to train the detector, you will need many positive and
negative images. This repository already contains a collection of negative
images. You will need to add your own positive images.

To get started, you will first need many cropped plate images containing
positive license plate matches. Please see the "eu" positive image folder in
this repository to understand the types of plate images required.

The [Plate Tagger Utility](https://github.com/openalpr/plate_tagger) is
helpful for tagging the plate locations. After tagging the plates, run the
"crop_plates.py" script to extract the crops from the input images at your
target aspect ratio:

```
python3 -m venv td
source td/bin/activate
python -m pip install --editable .
python3 crop_plates.py --input_dir /tmp/pool --out_dir ~/work/ANPR-RevenueNSW/data/cropped_plates
```

After you've collected many positive plate images (hundreds to thousands),
the next step is to train the detector. First, configure the training script
to use the correct dimensions.

Edit the prep.py script and change the WIDTH, HEIGHT, and COUNTRY variables to
match the country you are training for. The width and height should be
proportional to the plate size (slightly larger is OK); a total pixel area of
around 650 seems to work best. Also adjust the paths to your OpenCV tools if
they differ on your system.
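
As a rough sketch of what this configuration might look like (the values and
tool paths below are illustrative assumptions, not requirements), the
variables near the top of prep.py could be set as follows for a 2:1 plate at
roughly 650 pixels of area:

```
# Illustrative values only -- adjust WIDTH/HEIGHT/COUNTRY for your region
WIDTH = 36        # detector window width in pixels
HEIGHT = 18       # detector window height; 36 * 18 = 648, close to the ~650 target
COUNTRY = 'nsw'   # region code used to name the output (example value)

# Paths to the OpenCV tools (assumed locations -- point these at your install)
SAMPLE_CREATOR = '/usr/local/bin/opencv_createsamples'
TRAINCASCADE = '/usr/local/bin/opencv_traincascade'
```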

Once you are ready to start training, enter the following commands:

```
rm ./out/*  # clear the out folder in case it has data from previous runs
python3 prep.py neg
python3 prep.py pos
python3 prep.py train
```

Copy the command printed by the train step onto the command line and run it.
You should adjust numStages to a smaller value (usually 12 stages works well,
but it will depend on your input images). You may also need to reduce numPos
in order to complete the training.
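
As an illustrative sketch only (the sample counts below are assumptions and
the paths come from the comments in prep.py; use the command that prep.py
actually prints as your starting point), the adjusted command might look like
this:

```
# Example only: the numbers are placeholders -- paste the command printed by
# "python3 prep.py train" and then lower numStages/numPos as needed
opencv_traincascade -data ./out/ -vec ./positive/vecfile.vec -bg ./negative/negative.txt \
    -w 36 -h 18 -numPos 900 -numNeg 2000 \
    -maxFalseAlarmRate 0.45 -featureType LBP -numStages 12
```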


Copy the out/cascade.xml file to your OpenALPR runtime directory
(runtime_data/region/[countrycode].xml). You should now be able to use the
region for plate detection.
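
As a minimal sketch, assuming a default Linux install location for OpenALPR's
runtime data and an example country code of "nsw" (both assumptions; adjust
the path and code for your system and region):

```
# Assumed runtime_data location and region code -- substitute your own
cp out/cascade.xml /usr/share/openalpr/runtime_data/region/nsw.xml
```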
107 changes: 14 additions & 93 deletions crop_plates.py
@@ -4,7 +4,7 @@
import sys
import json
import math
import cv, cv2
import cv2
import numpy as np
import copy
import yaml
@@ -22,16 +22,12 @@
parser.add_argument( "--zoom_out_percent", dest="zoom_out_percent", action="store", type=float, default=1.25,
help="Percent multiplier to zoom out before cropping" )

parser.add_argument( "--plate_width", dest="plate_width", action="store", type=float, required=True,
help="Desired aspect ratio width" )
parser.add_argument( "--plate_height", dest="plate_height", action="store", type=float, required=True,
help="Desired aspect ratio height" )

options = parser.parse_args()


if not os.path.isdir(options.input_dir):
print "input_dir (%s) doesn't exist"
print("input_dir (%s) doesn't exist")
sys.exit(1)


@@ -40,82 +36,6 @@



def get_box(x1, y1, x2, y2, x3, y3, x4, y4):
height1 = int(round(math.sqrt((x1-x4)*(x1-x4) + (y1-y4)*(y1-y4))))
height2 = int(round(math.sqrt((x3-x2)*(x3-x2) + (y3-y2)*(y3-y2))))

height = height1
if height2 > height:
height = height2

# add 25% to the height
height *= options.zoom_out_percent
#height += (height * .05)

#print "Height: %d - %d" % (height1, height2)


points = [(x1,y1), (x2,y2), (x3,y3), (x4,y4)]
moment = cv.Moments(points)
centerx = int(round(moment.m10/moment.m00))
centery = int(round(moment.m01/moment.m00))


training_aspect = options.plate_width / options.plate_height
width = int(round(training_aspect * height))

# top_left = ( int(centerx - (width / 2)), int(centery - (height / 2)))
# bottom_right = ( int(centerx + (width / 2)), int(centery + (height / 2)))

top_left_x = int(round(centerx - (width / 2)))
top_left_y = int(round(centery - (height / 2)))

return (top_left_x, top_left_y, width, int(round(height)))

def crop_rect(big_image, x,y,width,height):
# Crops the rectangle from the big image and returns a cropped image
# Special care is taken to avoid cropping beyond the edge of the image.
# It fills this area in with random pixels

(big_height, big_width, channels) = big_image.shape
if x >= 0 and y >= 0 and (y+height) < big_height and (x+width) < big_width:
crop_img = img[y:y+height, x:x+width]
else:
#print "Performing partial crop"
#print "x: %d y: %d width: %d height: %d" % (x,y,width,height)
#print "big_width: %d big_height: %d" % (big_width, big_height)
crop_img = np.zeros((height, width, 3), np.uint8)
cv2.randu(crop_img, (0,0,0), (255,255,255))

offset_x = 0
offset_y = 0
if x < 0:
offset_x = -1 * x
x = 0
width -= offset_x
if y < 0:
offset_y = -1 * y
y = 0
height -= offset_y
if (x+width) >= big_width:
offset_x = 0
width = big_width - x
if (y+height) >= big_height:
offset_y = 0
height = big_height - y

#print "offset_x: %d offset_y: %d, width: %d, height: %d" % (offset_x, offset_y, width, height)

original_crop = img[y:y+height-1, x:x+width-1]
(small_image_height, small_image_width, channels) = original_crop.shape
#print "Small shape: %dx%d" % (small_image_width, small_image_height)
# Draw the small image onto the large image
crop_img[offset_y:offset_y+small_image_height, offset_x:offset_x+small_image_width] = original_crop


#cv2.imshow("Test", crop_img)
return crop_img

count = 1
yaml_files = []
for in_file in os.listdir(options.input_dir):
@@ -128,7 +48,7 @@ def crop_rect(big_image, x,y,width,height):
for yaml_file in yaml_files:


print "Processing: " + yaml_file + " (" + str(count) + "/" + str(len(yaml_files)) + ")"
print("Processing: " + yaml_file + " (" + str(count) + "/" + str(len(yaml_files)) + ")")
count += 1


@@ -142,7 +62,7 @@ def crop_rect(big_image, x,y,width,height):
# Skip missing images
full_image_path = os.path.join(options.input_dir, image)
if not os.path.isfile(full_image_path):
print "Could not find image file %s, skipping" % (full_image_path)
print("Could not find image file %s, skipping" % (full_image_path))
continue


@@ -151,16 +71,17 @@ def crop_rect(big_image, x,y,width,height):
for i in range(0, len(cc)):
cc[i] = int(cc[i])

box = get_box(cc[0], cc[1], cc[2], cc[3], cc[4], cc[5], cc[6], cc[7])


img = cv2.imread(full_image_path)
crop = crop_rect(img, box[0], box[1], box[2], box[3])
mask = np.zeros(img.shape[0:2], dtype=np.uint8)
points = np.array([[[cc[0],cc[1]],[cc[2], cc[3]], [cc[4],cc[5]], [cc[6],cc[7]]]])

cv2.drawContours(mask, [points], -1, (255,255,255), -1, cv2.LINE_AA)

# cv2.imshow("test", crop)
# cv2.waitKey(0)
res = cv2.bitwise_and(img,img,mask = mask)
rect = cv2.boundingRect(points) # returns (x,y,w,h) of the rect
cropped = res[rect[1]: rect[1] + rect[3], rect[0]: rect[0] + rect[2]]
out_crop_path = os.path.join(options.out_dir, os.path.basename(yaml_without_ext) + ".jpg")
cv2.imwrite(out_crop_path, cropped )

out_crop_path = os.path.join(options.out_dir, yaml_without_ext + ".jpg")
cv2.imwrite(out_crop_path, crop )

print "%d Cropped images are located in %s" % (count-1, options.out_dir)
print("%d Cropped images are located in %s" % (count-1, options.out_dir))
2 changes: 0 additions & 2 deletions positive/.gitignore

This file was deleted.

67 changes: 35 additions & 32 deletions prep.py
@@ -10,7 +10,7 @@

WIDTH=36
HEIGHT=18
COUNTRY='us'
COUNTRY='nsw'

#WIDTH=52
#HEIGHT=13
@@ -21,10 +21,12 @@
#COUNTRY='br'

#constants
OPENCV_DIR= '/home/mhill/projects/alpr/libraries/opencv/bin'
SAMPLE_CREATOR = OPENCV_DIR + '/opencv_createsamples'
OPENCV_DIR = "/usr/local/Cellar/opencv/4.2.0_3/bin/"
SAMPLE_CREATOR = "/usr/local/Cellar/opencv@2/2.4.13.7_7/bin/opencv_createsamples"
TRAINCASCADE = "/usr/local/Cellar/opencv@2/2.4.13.7_7/bin/opencv_traincascade"

BASE_DIR = '/home/mhill/projects/alpr/samples/training/'

BASE_DIR = './'

OUTPUT_DIR = BASE_DIR + "out/"
INPUT_NEGATIVE_DIR = BASE_DIR + 'raw-neg/'
@@ -44,12 +46,15 @@


def print_usage():
print "Usage: prep.py [Operation]"
print " -- Operations --"
print " neg -- Prepares the negative samples list"
print " pos -- Copies all the raw positive files to a opencv vector"
print " showpos -- Shows the positive samples that were created"
print " train -- Outputs the command for the Cascade Training algorithm"
usage = '''
Usage: prep.py [Operation]
-- Operations --
neg -- Prepares the negative samples list
pos -- Copies all the raw positive files to a opencv vector
showpos -- Shows the positive samples that were created
train -- Outputs the command for the Cascade Training algorithm
'''
print(usage)

def file_len(fname):
with open(fname) as f:
@@ -71,7 +76,7 @@ def file_len(fname):


if command == "neg":
print "Neg"
print("Neg")

# Get rid of any spaces
for neg_file in os.listdir(INPUT_NEGATIVE_DIR):
Expand All @@ -97,7 +102,7 @@ def file_len(fname):
f.close()

elif command == "pos":
print "Pos"
print("Pos")
info_arg = '-info %s' % (POSITIVE_INFO_FILE)

# Copy all files in the raw directory and build an info file
@@ -133,52 +138,50 @@ def file_len(fname):

if filename.endswith(".txt"):
continue
try:
img = Image.open(OUTPUT_POSITIVE_DIR + filename)

# get the image's width and height in pixels
width, height = img.size
f.write(filename + " 1 0 0 " + str(width) + " " + str(height) + '\n')
try:
img = Image.open(OUTPUT_POSITIVE_DIR + filename)
# get the image's width and height in pixels
width, height = img.size
f.write(filename + " 1 0 0 " + str(width) + " " + str(height) + '\n')

total_pics = total_pics + 1
except IOError:
print "Exception reading image file: " + filename
total_pics = total_pics + 1
except IOError:
print("Exception reading image file: " + filename)

f.close()




# Collapse the samples into a vector file
execStr = '%s/opencv_createsamples %s %s %s -num %d' % (OPENCV_DIR, vector_arg, width_height_arg, info_arg, total_pics )
print execStr
execStr = '%s %s %s %s -num %d' % (SAMPLE_CREATOR, vector_arg, width_height_arg, info_arg, total_pics )
print(execStr)

os.system(execStr)
#opencv_createsamples -info ./positive.txt -vec ../positive/vecfile.vec -w 120 -h 60 -bg ../negative/PentagonCityParkingGarage21.jpg -num 100


elif command == "showpos":
print "SHOW"
execStr = '%s/opencv_createsamples -vec %s -w %d -h %d' % (OPENCV_DIR, VEC_FILE, WIDTH, HEIGHT )
print execStr
print("SHOW")
execStr = '%s -vec %s -w %d -h %d' % (SAMPLE_CREATOR, VEC_FILE, WIDTH, HEIGHT )
print(execStr)
os.system(execStr)
#opencv_createsamples -vec ../positive/vecfile.vec -w 120 -h 60
elif command == "train":
print "TRAIN"
print("TRAIN")

data_arg = '-data %s/' % (OUTPUT_DIR)
bg_arg = '-bg %s' % (NEGATIVE_INFO_FILE)

try:
num_pos_samples = file_len(POSITIVE_INFO_FILE)
num_pos_samples = file_len(POSITIVE_INFO_FILE)
except:
num_pos_samples = -1
num_pos_samples = -1
num_neg_samples = file_len(NEGATIVE_INFO_FILE)

execStr = '%s/opencv_traincascade %s %s %s %s -numPos %d -numNeg %d -maxFalseAlarmRate 0.45 -featureType LBP -numStages 13' % (OPENCV_DIR, data_arg, vector_arg, bg_arg, width_height_arg, num_pos_samples, num_neg_samples )
execStr = '%s %s %s %s %s -numPos %d -numNeg %d -maxFalseAlarmRate 0.45 -featureType LBP -numStages 13 -precalcValBufSize 0 -precalcIdxBufSize 0' % (TRAINCASCADE, data_arg, vector_arg, bg_arg, width_height_arg, num_pos_samples, num_neg_samples )

print "Execute the following command to start training:"
print execStr
print("Execute the following command to start training:\n%s" % execStr)
#opencv_traincascade -data ./out/ -vec ./positive/vecfile.vec -bg ./negative/negative.txt -w 120 -h 60 -numPos 99 -numNeg 5 -featureType LBP -numStages 8
#opencv_traincascade -data ./out/ -vec ./positive/vecfile.vec -bg ./negative/negative.txt -w 120 -h 60 -numPos 99 -numNeg 5 -featureType LBP -numStages 20
elif command == "SDFLSDFSDFSDF":