@@ -13,17 +13,24 @@ So we don't use any parsers here.

2. Configure your model

-   See [Tensorrt Model Config](#ConfigSection)
+   See [Tensorrt Model Config](#ConfigSection)

-3. Build the `fastrt` executable
+3. (Optional) Build <a name="step3"></a>`third party` libs
+
+   See [Build third_party section](#third_party)
+
+4. Build the <a name="step4"></a>`fastrt` executable

```
mkdir build
cd build
-cmake -DBUILD_FASTRT_ENGINE=ON -DBUILD_DEMO=ON ..
+cmake -DBUILD_FASTRT_ENGINE=ON \
+      -DBUILD_DEMO=ON \
+      -DUSE_CNUMPY=ON ..
make
```
-4. Run <a name="step4"></a>`fastrt`
+
+5. Run <a name="step5"></a>`fastrt`

put `model_best.wts` into `FastRT/`
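The new `-DUSE_CNUMPY=ON` flag presumably compiles in cnpy support (built in the optional step 3) so the demo can dump tensors to `.npy` files. A minimal sketch of that write path, assuming a float buffer already holds the TensorRT output; the helper name and filename are illustrative, not the repo's actual API:

```
#include <cstddef>
#include "cnpy.h"

// Hypothetical helper: persist a TensorRT output buffer as a .npy file
// so it can later be diffed against PyTorch. Names are placeholders.
void dump_embedding(const float* embedding, size_t dim) {
    // cnpy::npy_save(filename, data, shape, mode) writes a standard .npy
    cnpy::npy_save("trt_embedding.npy", embedding, {1, dim}, "w");
}
```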
@@ -35,20 +42,20 @@ So we don't use any parsers here.
./demo/fastrt -d // deserialize 'xxx.engine' file and run inference
```

-5. Verify the output with PyTorch
+6. Verify the output with PyTorch


-6. (Optional) Once you verify the result, you can enable FP16 for a speedup
+7. (Optional) Once you verify the result, you can enable FP16 for a speedup
```
mkdir build
cd build
cmake -DBUILD_FASTRT_ENGINE=ON -DBUILD_DEMO=ON -DBUILD_FP16=ON ..
make
```

-then go to [step 4](#step4)
+then go to [step 5](#step5)

-7. (Optional) Build the TensorRT model as shared libs
+8. (Optional) Build the TensorRT model as shared libs

```
mkdir build
@@ -65,7 +72,7 @@ So we don't use any parsers here.
make
```

-then go to [step 4](#step4)
+then go to [step 5](#step5)

### <a name="ConfigSection"></a>`Tensorrt Model Config`
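Step 6 asks you to verify the output against PyTorch but leaves the mechanics open. With cnpy wired in, one hedged way is to dump both embeddings to `.npy` (via `numpy.save` on the Python side) and diff them in C++; the filenames and the expected tolerance here are illustrative:

```
#include <cmath>
#include <cstdio>
#include "cnpy.h"

// Hypothetical comparison of a TensorRT-dumped embedding against a
// PyTorch reference saved with numpy.save(). Filenames are placeholders.
int main() {
    cnpy::NpyArray trt = cnpy::npy_load("trt_embedding.npy");
    cnpy::NpyArray ref = cnpy::npy_load("torch_embedding.npy");
    const float* a = trt.data<float>();
    const float* b = ref.data<float>();
    float max_diff = 0.f;
    for (size_t i = 0; i < trt.num_vals; ++i)
        max_diff = std::fmax(max_diff, std::fabs(a[i] - b[i]));
    std::printf("max abs diff: %g\n", max_diff); // ~1e-4 is typical for FP32
    return 0;
}
```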
@@ -213,5 +220,14 @@ static const int EMBEDDING_DIM = 0;
sudo docker run --gpus all -it --name fastrt -v /home/YOURID/workspace:/workspace -d trt7:cuda102
// then put the repo into `/home/YOURID/workspace/` before you get into the container
```
-
+
* [Installation reference](https://github.com/wang-xinyu/tensorrtx/blob/master/tutorials/install.md)
+
+### Build <a name="third_party"></a> third party
+
+* for reading/writing numpy arrays
+
+```
+cd third_party/cnpy
+cmake -DCMAKE_INSTALL_PREFIX=../../libs/cnpy -DENABLE_STATIC=OFF . && make -j4 && make install
+```
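cnpy's API is small; a round-trip sketch of the read/write pattern this enables (array contents are illustrative):

```
#include <vector>
#include "cnpy.h"

int main() {
    // write: save a float vector as a 1x4 array readable by numpy.load()
    std::vector<float> feat = {0.1f, 0.2f, 0.3f, 0.4f};
    cnpy::npy_save("feat.npy", feat.data(), {1, feat.size()}, "w");

    // read: load it back and access the raw buffer
    cnpy::NpyArray arr = cnpy::npy_load("feat.npy");
    const float* data = arr.data<float>(); // dimensions are in arr.shape
    return data[0] == 0.1f ? 0 : 1;
}
```

Link against the installed copy in `libs/cnpy` (e.g. `-lcnpy -lz`); cnpy depends on zlib for the `.npz` formats.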