@@ -24,3 +24,110 @@ PaddleX incorporates multiple pipelines, each containing several modules, and ea
</tbody>
</table>
<b>Note: The above accuracy metrics refer to Top-1 Accuracy on the [ImageNet-1k](https://www.image-net.org/index.php) validation set.</b>
+
+## Object Detection Module
+<table>
+<thead>
+<tr>
+<th>Model Name</th>
+<th>mAP (%)</th>
+<th>Model Size (M)</th>
+<th>Model Download Link</th></tr>
+</thead>
+<tbody>
+<tr>
+<td>PP-YOLOE_plus-L</td>
+<td>52.8</td>
+<td>185.3 M</td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b2/PP-YOLOE_plus-L_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-YOLOE_plus-L_pretrained.pdparams">Trained Model</a></td></tr>
+<tr>
+<td>PP-YOLOE_plus-M</td>
+<td>49.7</td>
+<td>83.2 M</td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b2/PP-YOLOE_plus-M_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-YOLOE_plus-M_pretrained.pdparams">Trained Model</a></td></tr>
+<tr>
+<td>PP-YOLOE_plus-S</td>
+<td>43.6</td>
+<td>28.3 M</td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b2/PP-YOLOE_plus-S_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-YOLOE_plus-S_pretrained.pdparams">Trained Model</a></td></tr>
+<tr>
+<td>PP-YOLOE_plus-X</td>
+<td>54.7</td>
+<td>349.4 M</td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b2/PP-YOLOE_plus-X_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-YOLOE_plus-X_pretrained.pdparams">Trained Model</a></td></tr>
+<tr>
+<td>RT-DETR-H</td>
+<td>56.3</td>
+<td>435.8 M</td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b2/RT-DETR-H_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/RT-DETR-H_pretrained.pdparams">Trained Model</a></td></tr>
+<tr>
+<td>RT-DETR-L</td>
+<td>53.0</td>
+<td>113.7 M</td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b2/RT-DETR-L_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/RT-DETR-L_pretrained.pdparams">Trained Model</a></td></tr>
+<tr>
+<td>RT-DETR-R18</td>
+<td>46.5</td>
+<td>70.7 M</td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b2/RT-DETR-R18_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/RT-DETR-R18_pretrained.pdparams">Trained Model</a></td></tr>
+<tr>
+<td>RT-DETR-R50</td>
+<td>53.1</td>
+<td>149.1 M</td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b2/RT-DETR-R50_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/RT-DETR-R50_pretrained.pdparams">Trained Model</a></td></tr>
+<tr>
+<td>RT-DETR-X</td>
+<td>54.8</td>
+<td>232.9 M</td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b2/RT-DETR-X_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/RT-DETR-X_pretrained.pdparams">Trained Model</a></td></tr>
+</tbody>
+</table>
+<b>Note: The above accuracy metrics refer to mAP(0.5:0.95) on the [COCO2017](https://cocodataset.org/#home) validation set.</b>
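+
+Any of the models above can be swapped in by name, trading accuracy (mAP) against model size as the table shows. As a minimal sketch of local inference, assuming the standard PaddleX `create_model` interface (the model name, input image, and output directory below are illustrative placeholders):
+
+```python
+from paddlex import create_model
+
+# Load one of the object detection models listed above by name
+# (PP-YOLOE_plus-S is used here purely as an example).
+model = create_model("PP-YOLOE_plus-S")
+
+# Run prediction on a local image; the path is a placeholder.
+output = model.predict("example_image.jpg", batch_size=1)
+
+# Inspect and save each result.
+for res in output:
+    res.print()
+    res.save_to_img("./output/")
+    res.save_to_json("./output/res.json")
+```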
+
+## Text Detection Module
+<table>
+<thead>
+<tr>
+<th>Model Name</th>
+<th>Detection Hmean (%)</th>
+<th>Model Size (M)</th>
+<th>Model Download Link</th></tr>
+</thead>
+<tbody>
+<tr>
+<td>PP-OCRv4_mobile_det</td>
+<td>77.79</td>
+<td>4.2 M</td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b2/PP-OCRv4_mobile_det_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv4_mobile_det_pretrained.pdparams">Trained Model</a></td></tr>
+<tr>
+<td>PP-OCRv4_server_det</td>
+<td>82.69</td>
+<td>100.1 M</td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b2/PP-OCRv4_server_det_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv4_server_det_pretrained.pdparams">Trained Model</a></td></tr>
+</tbody>
+</table>
+<b>Note: The above accuracy metrics are evaluated on PaddleOCR's self-built Chinese dataset, covering street scenes, web images, documents, and handwritten scenarios, with 500 images for detection.</b>
+
+## Text Recognition Module
+<table>
+<thead>
+<tr>
+<th>Model Name</th>
+<th>Recognition Avg Accuracy (%)</th>
+<th>Model Size (M)</th>
+<th>Model Download Link</th></tr>
+</thead>
+<tbody>
+<tr>
+<td>PP-OCRv4_mobile_rec</td>
+<td>78.20</td>
+<td>10.6 M</td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b2/PP-OCRv4_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv4_mobile_rec_pretrained.pdparams">Trained Model</a></td></tr>
+<tr>
+<td>PP-OCRv4_server_rec</td>
+<td>79.20</td>
+<td>71.2 M</td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b2/PP-OCRv4_server_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv4_server_rec_pretrained.pdparams">Trained Model</a></td></tr>
+</tbody>
+</table>
+<b>Note: The above accuracy metrics are evaluated on PaddleOCR's self-built Chinese dataset, covering street scenes, web images, documents, and handwritten scenarios, with 11,000 images for text recognition.</b>
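+
+The detection and recognition models above are usually used together: detection finds the text regions, recognition reads each region. As a rough sketch, assuming the PaddleX `create_pipeline` interface, the two stages can be run end to end through the OCR pipeline rather than calling each module separately (the pipeline name, image path, and output directory are illustrative placeholders):
+
+```python
+from paddlex import create_pipeline
+
+# Build the OCR pipeline, which chains a text detection model with a
+# text recognition model from the tables above (e.g. PP-OCRv4_mobile_det
+# followed by PP-OCRv4_mobile_rec).
+pipeline = create_pipeline(pipeline="OCR")
+
+# Predict on a local document image; the path is a placeholder.
+output = pipeline.predict("example_document.png")
+
+# Print the detected boxes and recognized text, and save a visualization.
+for res in output:
+    res.print()
+    res.save_to_img("./output/")
+```
+
+Choosing between the mobile and server variants is the accuracy-versus-size trade-off visible in the tables: the server models score higher but are much larger (100.1 M vs. 4.2 M for detection, 71.2 M vs. 10.6 M for recognition).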