
refine en docs (#3308)

* refine docs

* fixed errors

* refine en docs
学卿 9 months ago
parent commit b2d2a92e94

+ 2 - 2
docs/module_usage/tutorials/cv_modules/face_feature.en.md

@@ -401,7 +401,7 @@ Similar to model training and evaluation, the following steps are required:
 * Specify the mode as model inference prediction: `-o Global.mode=predict`
 * Specify the path to the model weights: `-o Predict.model_dir="./output/best_model/inference"`
 * Specify the path to the input data: `-o Predict.input="..."`
-Other related parameters can be set by modifying the fields under `Global` and `Predict` in the `.yaml` configuration file. For details, please refer to [PaddleX Common Model Configuration File Parameter Description](../../instructions/config_parameters_common.md).
+Other related parameters can be set by modifying the fields under `Global` and `Predict` in the `.yaml` configuration file. For details, please refer to [PaddleX Common Model Configuration File Parameter Description](../../instructions/config_parameters_common.en.md).
 
 #### 4.4.2 Model Integration
 The model can be directly integrated into the PaddleX pipeline or into your own project.
@@ -413,4 +413,4 @@ The face feature module can be integrated into the PaddleX pipeline for [<b>Face
 2. <b>Module Integration</b>
 
 The weights you produced can be directly integrated into the face feature module. You can refer to the Python example code in [Quick Integration](#III.-Quick-Integration) and only need to replace the model with the path to the model you trained.
-</details></details>
\ No newline at end of file
+</details></details>
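For reference, the module integration described in this hunk amounts to a few lines of Python (a sketch based on the Quick Integration pattern these docs reference; pointing `create_model` at a path follows the note above about replacing the model with your trained weights, and the input filename is a placeholder):

```python
from paddlex import create_model

# Sketch: load the fine-tuned face feature weights instead of the built-in model
model = create_model("./output/best_model/inference")  # path from the predict step above
output = model.predict("face_demo.jpg", batch_size=1)  # placeholder input image
for res in output:
    res.print()
    res.save_to_json("./output/res.json")
```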

+ 1 - 5
docs/module_usage/tutorials/cv_modules/human_keypoint_detection.en.md

@@ -64,10 +64,6 @@ for res in output:
     res.save_to_json("./output/res.json")
 ```
 
-```bash
-{'res': {'input_path': 'keypoint_detection_002.jpg', 'kpts': [{'keypoints': [[175.2838134765625, 56.043609619140625, 0.6522828936576843], [181.32794189453125, 49.642051696777344, 0.7338210940361023], [169.46002197265625, 50.59111022949219, 0.6837076544761658], [193.3421173095703, 51.91969680786133, 0.8676544427871704], [164.50787353515625, 55.6519889831543, 0.8232858777046204], [219.7235870361328, 90.28710174560547, 0.8812915086746216], [152.90377807617188, 95.07806396484375, 0.9093065857887268], [233.1095733642578, 149.6704864501953, 0.7706904411315918], [139.5576629638672, 144.38327026367188, 0.7555014491081238], [245.22830200195312, 202.4243927001953, 0.706590473651886], [117.83794403076172, 188.56410217285156, 0.8892115950584412], [203.29542541503906, 200.2967071533203, 0.838330864906311], [172.00791931152344, 201.1993865966797, 0.7636935710906982], [181.18797302246094, 273.0669250488281, 0.8719099164009094], [185.1750030517578, 278.4797668457031, 0.6878190040588379], [171.55068969726562, 362.42730712890625, 0.7994316816329956], [201.6941375732422, 354.5953369140625, 0.6789217591285706]], 'kpt_score': 0.7831441760063171}]}}
-```
-
 <details><summary>👉 <b>The result obtained after running is: (Click to expand)</b></summary>
 
 ```bash
@@ -266,7 +262,7 @@ A single command can complete the data validation:
 python main.py -c paddlex/configs/keypoint_detection/PP-TinyPose_128x96.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/keypoint_coco_examples
-````
+```
 
 After executing the above command, PaddleX will validate the dataset and summarize its basic information. If the command runs successfully, it will print `Check dataset passed !` in the log. The validation result file is saved at `./output/check_dataset_result.json`, and related outputs are saved in the `./output/check_dataset` directory under the current directory. This includes visualized sample images and sample distribution histograms.
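Because the validation summary is plain JSON, it can also be inspected programmatically (a minimal sketch; the exact fields inside the file are not shown in this hunk and may vary):

```python
import json

# Sketch: load and pretty-print the dataset validation summary written by check_dataset
with open("./output/check_dataset_result.json", "r", encoding="utf-8") as f:
    result = json.load(f)
print(json.dumps(result, indent=2, ensure_ascii=False))
```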
 

+ 5 - 5
docs/pipeline_usage/tutorials/cv_pipelines/face_recognition.en.md

@@ -9,7 +9,7 @@ Face recognition is a crucial component in the field of computer vision, aiming
 
 The face recognition pipeline is an end-to-end system dedicated to solving face detection and recognition tasks. It can quickly and accurately locate face regions in images, extract facial features, and retrieve and compare them with pre-established features in a feature database to confirm identity information.
 
-<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/face_recognition/01.png"/>
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/pipelines/face_recognition/02.jpg"/>
 <b>The face recognition pipeline includes a face detection module and a face feature module</b>, with several models in each module. Which models to use can be selected based on the benchmark data below. <b>If you prioritize model accuracy, choose models with higher accuracy; if you prioritize inference speed, choose models with faster inference; if you prioritize model size, choose models with smaller storage requirements</b>.
 
 
@@ -200,7 +200,7 @@ In the above Python script, the following steps are performed:
 <td><code>str</code>|<code>list</code></td>
 <td>
 <ul>
-<li><b>str</b>: The root directory of the images, data organization method refers to <a href="#2.3-构建特征库的数据组织方式">Section 2.3 Data Organization Method for Building Feature Library</a></li>
+<li><b>str</b>: The root directory of the images, data organization method refers to <a href="#23-data-organization-for-building-the-feature-library">Section 2.3 Data Organization Method for Building Feature Library</a></li>
 <li><b>List[numpy.ndarray]</b>: List of numpy.array type base library image data</li>
 </ul>
 </td>
@@ -212,7 +212,7 @@ In the above Python script, the following steps are performed:
 <td><code>str|list</code></td>
 <td>
 <ul>
-<li><b>str</b>: The path to the annotation file, the data organization method is the same as when building the feature library, refer to <a href="#2.3-构建特征库的数据组织方式">Section 2.3 Data Organization Method for Building Feature Library</a></li>
+<li><b>str</b>: The path to the annotation file, the data organization method is the same as when building the feature library, refer to <a href="#23-data-organization-for-building-the-feature-library">Section 2.3 Data Organization Method for Building Feature Library</a></li>
 <li><b>List[str]</b>: List of str type base library image annotations</li>
 </ul>
 </td>
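In practice, the two parameters documented above are usually passed together as a gallery root directory plus an annotation file, as in this sketch (the gallery paths are placeholders; the same call appears in the face recognition practical tutorial later in this commit):

```python
from paddlex import create_pipeline

# Sketch: build a feature library from a gallery directory and its annotation file
pipeline = create_pipeline(pipeline="face_recognition")
index_data = pipeline.build_index(
    gallery_imgs="face_demo_gallery",               # str form: root directory of gallery images
    gallery_label="face_demo_gallery/gallery.txt",  # str form: annotation file path
)
index_data.save("face_index")  # persist the library for later predict(..., index=...) calls
```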
@@ -1046,12 +1046,12 @@ SubModules:
   Detection:
     module_name: face_detection
     model_name: PP-YOLOE_plus-S_face
-    model_dir: null #可修改为微调后人脸检测模型的本地路径
+    model_dir: null # Can be modified to the local path of the fine-tuned face detection model
     batch_size: 1
   Recognition:
     module_name: face_feature
     model_name: ResNet50_face
-    model_dir: null #可修改为微调后人脸特征模型的本地路径
+    model_dir: null # Can be modified to the local path of the fine-tuned face feature model
     batch_size: 1
 ```
 

+ 6 - 6
docs/pipeline_usage/tutorials/cv_pipelines/general_image_recognition.en.md

@@ -510,10 +510,10 @@ The parameters of the above method are described as follows:
 The general image recognition pipeline example of PaddleX requires a pre-built index library for feature retrieval. If you wish to build an index library with your private data, you need to organize the data as follows:
 
 ```bash
-data_root             # 数据集根目录,目录名称可以改变
-├── images            # 图像的保存目录,目录名称可以改变
+data_root             # The root directory of the dataset; the directory name can be changed
+├── images            # The directory for storing images; the directory name can be changed
 │   │   ...
-└── gallery.txt       # 索引库数据集标注文件,文件名称可以改变。每行给出待检索图像路径和图像标签,使用空格分隔,内容举例: “0/0.jpg 脉动
+└── gallery.txt       # The annotation file for the gallery dataset; the filename can be changed. Each line provides the path and label of an image to be retrieved, separated by a space. Example content: "0/0.jpg Pulse"
 ```
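With data organized as above, the index library can then be built from the root directory and the annotation file (a sketch mirroring the face recognition example elsewhere in this commit; the pipeline name `PP-ShiTuV2` and all paths are assumptions/placeholders):

```python
from paddlex import create_pipeline

# Sketch: build the index from the data_root/ layout shown above
pipeline = create_pipeline(pipeline="PP-ShiTuV2")  # assumed pipeline name
index_data = pipeline.build_index(
    gallery_imgs="data_root",
    gallery_label="data_root/gallery.txt",
)
index_data.save("./index_dir")
```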
 
 ## 3. Development Integration/Deployment
@@ -1010,16 +1010,16 @@ SubModules:
   Detection:
     module_name: text_detection
     model_name: PP-ShiTuV2_det
-    model_dir: null #可修改为微调后主体检测模型的本地路径
+    model_dir: null # Can be modified to the local path of the fine-tuned mainbody detection model
     batch_size: 1
   Recognition:
     module_name: text_recognition
     model_name: PP-ShiTuV2_rec
-    model_dir: null #可修改为微调后图像特征模型的本地路径
+    model_dir: null # Can be modified to the local path of the fine-tuned image feature model
     batch_size: 1
 ```
 
-Subsequently, refer to the command line method or Python script method in [2.2 Local Experience](#22-本地体验) to load the modified production line configuration file.
+Subsequently, refer to the command line method or Python script method in [2.2 Local Experience](#22-local-experience) to load the modified production line configuration file.
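In script form, loading the modified configuration looks roughly like this (a sketch; the config filename, test image, and index directory are placeholders):

```python
from paddlex import create_pipeline

# Sketch: load the edited pipeline config and query it against a previously saved index
pipeline = create_pipeline(pipeline="./my_path/general_image_recognition.yaml")
output = pipeline.predict("test_image.jpg", index="./index_dir")
for res in output:
    res.print()
    res.save_to_img("./output/")
```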
 
 ## 5. Multi-Hardware Support
 

+ 4 - 3
docs/pipeline_usage/tutorials/cv_pipelines/human_keypoint_detection.en.md

@@ -12,7 +12,7 @@ PaddleX's Human Keypoint Detection Pipeline is a Top-Down solution consisting of
 <img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/pipelines/human_keypoint_detection/01.jpg"/>
 <b>The Human Keypoint Detection Pipeline includes pedestrian detection and human keypoint detection modules</b>, with several models available. You can choose the model based on the benchmark data below. <b>If you prioritize model accuracy, choose a model with higher accuracy; if you prioritize inference speed, choose a model with faster inference speed; if you prioritize storage size, choose a model with a smaller storage size</b>.
 
-<summary> 👉Model List Details</summary>
+<details><summary> 👉Model List Details</summary>
 <b>Pedestrian Detection Module:</b>
 <table>
 <tr>
@@ -76,6 +76,7 @@ PaddleX's Human Keypoint Detection Pipeline is a Top-Down solution consisting of
 </tr>
 </table>
 <b>Note: The above accuracy metrics are based on the COCO dataset AP(0.5:0.95), with detection boxes obtained from ground truth annotations. All model GPU inference times are based on NVIDIA Tesla T4 machines with FP32 precision, and CPU inference speeds are based on Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.</b>
+</details>
 
 ## 2. Quick Start
 
@@ -566,14 +567,14 @@ SubModules:
   ObjectDetection:
     module_name: object_detection
     model_name: PP-YOLOE-S_human
-    model_dir: null #可修改为微调后行人检测模型的本地路径
+    model_dir: null # Can be modified to the local path of the fine-tuned human detection model
     batch_size: 1
     threshold: null
     img_size: null
   KeypointDetection:
     module_name: keypoint_detection
     model_name: PP-TinyPose_128x96
-    model_dir: #可修改为微调后关键点检测模型的本地路径
+    model_dir: # Can be modified to the local path of the fine-tuned keypoint detection model
     batch_size: 1
     flip: False
     use_udp: null
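After filling in the two `model_dir` fields, the edited config can be used like any other pipeline (a sketch; the config path is a placeholder, and the demo filename is borrowed from the module tutorial earlier in this commit):

```python
from paddlex import create_pipeline

# Sketch: run the human keypoint detection pipeline from the edited config
pipeline = create_pipeline(pipeline="./my_path/human_keypoint_detection.yaml")
output = pipeline.predict("keypoint_detection_002.jpg")
for res in output:
    res.print()
    res.save_to_img("./output/")
    res.save_to_json("./output/res.json")
```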

+ 4 - 4
docs/pipeline_usage/tutorials/cv_pipelines/object_detection.en.md

@@ -362,7 +362,7 @@ If you are satisfied with the pipeline's performance, you can directly integrate
 Before using the general object detection pipeline locally, please ensure that you have completed the installation of the PaddleX wheel package according to the [PaddleX Local Installation Guide](../../../installation/installation.en.md).
 
 #### 2.2.1 Command Line Experience
-You can quickly experience the effect of the object detection pipeline with a single command. Use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_object_detection_002.png),并将  `--input` replace with the local path for prediction.
+You can quickly experience the effect of the object detection pipeline with a single command. Use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_object_detection_002.png), and replace `--input` with the local path for prediction.
 
 ```bash
 paddlex --pipeline object_detection \
@@ -372,7 +372,7 @@ paddlex --pipeline object_detection \
         --device gpu:0
 ```
 
-For the description of parameters and interpretation of results, please refer to the parameter explanation and result interpretation in [2.2.2 Integration via Python Script](#222-python脚本方式集成).
+For the description of parameters and interpretation of results, please refer to the parameter explanation and result interpretation in [2.2.2 Integration via Python Script](#222-integration-via-python-script).
 
 The visualization results are saved to `save_path`, as shown below:
 
@@ -1204,7 +1204,7 @@ You can choose the appropriate method to deploy the model production line based
 If the default model weights provided by the general object detection production line do not meet your accuracy or speed requirements in your scenario, you can try further <b>fine-tuning</b> the existing model using <b>your own specific domain or application scenario data</b> to improve the recognition performance of the general object detection production line in your scenario.
 
 ### 4.1 Model Fine-Tuning
-Since the general object detection production line includes an object detection module, if the performance of the model production line is not as expected, you need to refer to the [Secondary Development](../../../module_usage/tutorials/cv_modules/object_detection.en.md#四二次开发) section in the [Object Detection Module Development Tutorial](../../../module_usage/tutorials/cv_modules/object_detection.en.md) to fine-tune the object detection model using your private dataset.
+Since the general object detection production line includes an object detection module, if the performance of the model production line is not as expected, you need to refer to the [Secondary Development](../../../module_usage/tutorials/cv_modules/object_detection.en.md#iv-custom-development) section in the [Object Detection Module Development Tutorial](../../../module_usage/tutorials/cv_modules/object_detection.en.md) to fine-tune the object detection model using your private dataset.
 
 ### 4.2 Model Application
 After completing the fine-tuning training with your private dataset, you will obtain a local model weight file.
@@ -1218,7 +1218,7 @@ SubModules:
   ObjectDetection:
     module_name: object_detection
     model_name: PicoDet-S
-    model_dir: null #可修改为微调后模型的本地路径
+    model_dir: null # Can be modified to the local path of the fine-tuned model
     batch_size: 1
     img_size: null
     threshold: null
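Once `model_dir` points at your fine-tuned weights, the edited file loads the same way as the default pipeline (a sketch; the config path is a placeholder, and the demo image is the test file referenced earlier in this diff):

```python
from paddlex import create_pipeline

# Sketch: load the edited object detection config and run the demo image
pipeline = create_pipeline(pipeline="./my_path/object_detection.yaml")
output = pipeline.predict("general_object_detection_002.png")
for res in output:
    res.print()
    res.save_to_img("./output/")
```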

+ 15 - 15
docs/practical_tutorials/face_recognition_tutorial.en.md

@@ -465,7 +465,7 @@ python main.py -c paddlex/configs/modules/face_detection/PP-YOLOE_plus-S.yaml \
 The prediction results can be generated under `./output` through the above instructions, and the prediction result of `cartoon_demo.jpg` is as follows:
 <center>
 
-<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/practical_tutorials/face_detection/04.jpg" width="600"/>
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/practical_tutorials/face_recognition/04.jpg" width="600"/>
 
 </center>
 
@@ -722,12 +722,12 @@ SubModules:
   Detection:
     module_name: face_detection
     model_name: PP-YOLOE_plus-S_face
-    model_dir: "path/to/your/det_model" # 使用卡通人脸数据微调的人脸检测模型
+    model_dir: "path/to/your/det_model" # # Face detection model fine-tuned with cartoon face data
     batch_size: 1
   Recognition:
     module_name: face_feature
     model_name: ResNet50_face
-    model_dir: "path/to/your/rec_model" # 使用卡通人脸数据微调的人脸特征模型
+    model_dir: "path/to/your/rec_model" # Face feature model fine-tuned with cartoon face data
     batch_size: 1
 ```
 
@@ -735,15 +735,15 @@ Subsequently, in your Python code, you can use the production line as follows:
 
 ```python
 from paddlex import create_pipeline
-# 创建人脸识别产线
+# Create a face recognition pipeline
 pipeline = create_pipeline(pipeline="my_path/face_recognition.yaml")
-# 构建卡通人脸特征底库
+# Build a cartoon face feature database
 index_data = pipeline.build_index(gallery_imgs="cartoonface_demo_gallery", gallery_label="cartoonface_demo_gallery/gallery.txt")
-# 图像预测
+# Predict the cartoon demo image
 output = pipeline.predict("cartoonface_demo_gallery/test_images/cartoon_demo.jpg", index=index_data)
 for res in output:
     res.print()
-    res.save_to_img("./output/") # 保存可视化结果图像
+    res.save_to_img("./output/") # Save the result to an image
 ```
 
 If a cartoon face is detected but recognized as "Unknown0.00", lower the retrieval threshold by modifying `rec_threshold` in the configuration file (for example, from 0.4 to 0.3) and try again. If faces are still misrecognized, replace the best weights with the weights from the last training round, or try recognition model weights trained with different hyperparameters.
@@ -761,11 +761,11 @@ This section takes service-oriented deployment as an example and guides you thro
 
 ```python
 from paddlex import create_pipeline
-# 创建人脸识别产线
+# Create a face recognition pipeline
 pipeline = create_pipeline(pipeline="face_recognition")
-# 构建卡通人脸特征底库
+# Build a cartoon face feature database
 index_data = pipeline.build_index(gallery_imgs="cartoonface_demo_gallery", gallery_label="cartoonface_demo_gallery/gallery.txt")
-# 保存卡通人脸特征底库
+# Save the cartoon face feature database
 index_data.save("cartoonface_index")
 ```
 
@@ -786,7 +786,7 @@ paddlex --get_pipeline_config face_recognition --save_path ./
 ```yaml
 pipeline_name: face_recognition
 
-index: ./cartoonface_index # 本地特征底库目录,使用第(1)步中构建好的特征底库
+index: ./cartoonface_index # Local feature database directory, using the feature database constructed in step (1)
 det_threshold: 0.6
 rec_threshold: 0.4
 rec_topk: 5
@@ -795,12 +795,12 @@ SubModules:
   Detection:
     module_name: face_detection
     model_name: PP-YOLOE_plus-S_face
-    model_dir: "path/to/your/det_model" # 使用卡通人脸数据微调的人脸检测模型
+    model_dir: "path/to/your/det_model" # Face detection model fine-tuned with cartoon face data
     batch_size: 1
   Recognition:
     module_name: face_feature
     model_name: ResNet50_face
-    model_dir: "path/to/your/rec_model" # 使用卡通人脸数据微调的人脸特征模型
+    model_dir: "path/to/your/rec_model" # Face feature model fine-tuned with cartoon face data
     batch_size: 1
 ```
 
@@ -825,7 +825,7 @@ import requests
 
 API_BASE_URL = "http://0.0.0.0:8080"
 
-infer_image_path = "cartoonface_demo_gallery/test_images/cartoon_demo.jpg" # 测试图片
+infer_image_path = "cartoonface_demo_gallery/test_images/cartoon_demo.jpg" # Test image path
 
 with open(infer_image_path, "rb") as file:
     image_bytes = file.read()
@@ -847,4 +847,4 @@ print("\nDetected faces:")
 pprint.pp(result_infer["faces"])
 ```
 
-After executing the example code, you can view the inference results of the service deployment in the output log and the saved inference images respectively.
\ No newline at end of file
+After executing the example code, you can view the inference results of the service deployment in the output log and the saved inference images respectively.

+ 1 - 1
docs/practical_tutorials/face_recognition_tutorial.md

@@ -457,7 +457,7 @@ python main.py -c paddlex/configs/modules/face_detection/PP-YOLOE_plus-S.yaml \
 通过上述指令可在`./output`下生成预测结果,其中`cartoon_demo.jpg`的预测结果如下:
 <center>
 
-<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/practical_tutorials/face_detection/04.jpg" width="600"/>
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/practical_tutorials/face_recognition/04.jpg" width="600"/>
 
 </center>
 

+ 44 - 34
docs/practical_tutorials/instance_segmentation_remote_sensing_tutorial.en.md

@@ -36,41 +36,31 @@ After experiencing the pipeline, determine if it meets your expectations (includ
 PaddleX provides 15 end-to-end instance segmentation models. Refer to the [Model List](../support_list/models_list.en.md) for details. Benchmarks for some models are as follows:
 
 <table>
-<thead>
-<tr>
-<th>Model List</th>
-<th>mAP(%)</th>
-<th>GPU Inference Time(ms)</th>
-<th>Model Size(M)</th>
-</tr>
-</thead>
-<tbody>
 <tr>
-<td>Mask-RT-DETR-H</td>
-<td>48.8</td>
-<td>61.40</td>
-<td>486</td>
+<th>Model</th><th>Model Download Link</th>
+<th>Mask AP</th>
+<th>GPU Inference Time (ms)<br/>[Normal Mode / High-Performance Mode]</th>
+<th>CPU Inference Time (ms)<br/>[Normal Mode / High-Performance Mode]</th>
+<th>Model Size (M)</th>
+<th>Description</th>
 </tr>
 <tr>
-<td>Mask-RT-DETR-X</td>
-<td>47.5</td>
-<td>45.70</td>
-<td>257</td>
+<td>Mask-RT-DETR-H</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/Mask-RT-DETR-H_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/Mask-RT-DETR-H_pretrained.pdparams">Trained Model</a></td>
+<td>50.6</td>
+<td>172.36 / 172.36</td>
+<td>1615.75 / 1615.75</td>
+<td>449.9 M</td>
+<td rowspan="5">Mask-RT-DETR is an instance segmentation model based on RT-DETR. By adopting the high-performance PP-HGNetV2 as the backbone network and constructing a MaskHybridEncoder encoder, along with introducing IOU-aware Query Selection technology, it achieves state-of-the-art (SOTA) instance segmentation accuracy with the same inference time.</td>
 </tr>
 <tr>
-<td>Mask-RT-DETR-L</td>
+<td>Mask-RT-DETR-L</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/Mask-RT-DETR-L_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/Mask-RT-DETR-L_pretrained.pdparams">Trained Model</a></td>
 <td>45.7</td>
-<td>37.40</td>
-<td>123</td>
+<td>88.18 / 88.18</td>
+<td>1090.84 / 1090.84</td>
+<td>113.6 M</td>
 </tr>
-<tr>
-<td>Mask-RT-DETR-S</td>
-<td>40.9</td>
-<td>32.40</td>
-<td>57</td>
-</tr>
-</tbody>
 </table>
+
 > <b>Note: The above accuracy metrics are mAP(0.5:0.95) on the [COCO2017](https://cocodataset.org/#home) validation set. GPU inference time is based on an NVIDIA V100 machine with FP32 precision.</b>
 
 In summary, models nearer the bottom of the table offer faster inference, while those nearer the top offer higher accuracy. This tutorial uses the `Mask-RT-DETR-H` model as an example to complete the full model development process. Choose a suitable model based on your actual usage scenario, train it, evaluate the model weights within the pipeline, and finally apply them in real-world scenarios.
@@ -92,7 +82,7 @@ tar -xf ./dataset/intseg_remote_sense_coco.tar -C ./dataset/
 When verifying the dataset, you only need one command:
 
 ```bash
-python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-H.yaml \
+python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-H.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/intseg_remote_sense_coco
 ```
@@ -166,7 +156,7 @@ Data conversion and data splitting can be enabled simultaneously. For data split
 Before training, ensure that you have verified the dataset. To complete PaddleX model training, simply use the following command:
 
 ```bash
-python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-H.yaml \
+python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-H.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/intseg_remote_sense_coco \
     -o Train.num_classes=10
@@ -204,7 +194,7 @@ After completing the model training, all outputs are saved in the specified outp
 After completing model training, you can evaluate the specified model weight files on the validation set to verify the model's accuracy. To perform model evaluation using PaddleX, simply use the following command:
 
 ```bash
-python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-H.yaml \
+python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-H.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/intseg_remote_sense_coco
 ```
@@ -324,7 +314,7 @@ Epoch Variation Results:
 Replace the model in the production line with the fine-tuned model for testing, e.g.:
 
 ```bash
-python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-H.yaml \
+python main.py -c paddlex/configs/modules/instance_segmentation/Mask-RT-DETR-H.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="output/best_model/inference" \
     -o Predict.input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/remotesensing_demo.png"
@@ -341,15 +331,35 @@ The prediction results will be generated under `./output`, and the prediction re
 
 If the general instance segmentation pipeline meets your requirements for inference speed and accuracy, you can proceed with development integration/deployment.
 
-1. Directly apply the trained model in your Python project by referring to the following sample code, and modify the `Pipeline.model` in the `paddlex/pipelines/instance_segmentation.yaml` configuration file to your own model path:
+1. If you need to use the fine-tuned model weights, you can obtain the pipeline configuration file for instance segmentation and load it for prediction. You can execute the following command to save the configuration file in `my_path`:
+
+```bash
+paddlex --get_pipeline_config instance_segmentation --save_path ./my_path
+```
+
+Fill in the local path of the fine-tuned model weights in the `model_dir` of the pipeline configuration file. If you want to directly apply the general instance segmentation pipeline in your Python project, you can refer to the example below:
+
+```yaml
+pipeline_name: instance_segmentation
+
+SubModules:
+  InstanceSegmentation:
+    module_name: instance_segmentation
+    model_name: Mask-RT-DETR-S
+    model_dir: null # Replace this with the local path to your trained model weights
+    batch_size: 1
+    threshold: 0.5
+```
+
+Then, in your Python code, you can use the pipeline as follows:
 
 ```python
 from paddlex import create_pipeline
-pipeline = create_pipeline(pipeline="paddlex/pipelines/instance_segmentation.yaml")
+pipeline = create_pipeline(pipeline="my_path/instance_segmentation.yaml")
 output = pipeline.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/remotesensing_demo.png")
 for res in output:
     res.print() # Print the structured output of the prediction
-    res.save_to_img("./output/") # Save the result visualization image
+    res.save_to_img("./output/") # Save the result as a visualized image
     res.save_to_json("./output/") # Save the structured output of the prediction
 ```
 For more parameters, please refer to the [General Instance Segmentation Pipeline User Guide](../pipeline_usage/tutorials/cv_pipelines/instance_segmentation.en.md).

+ 1 - 0
docs/practical_tutorials/instance_segmentation_remote_sensing_tutorial.md

@@ -82,6 +82,7 @@ PaddleX 提供了 15 个端到端的实例分割模型,具体可参考 [模型
 <td>237.5 M</td>
 </tr>
 </table>
+
 > <b>注:以上精度指标为 [COCO2017](https://cocodataset.org/#home) 验证集 mAP(0.5:0.95),GPU 推理耗时基于 NVIDIA  V100 机器,精度类型为 FP32。</b>
 
 简单来说,表格从上到下,模型推理速度更快,从下到上,模型精度更高。本教程以 `Mask-RT-DETR-H` 模型为例,完成一次模型全流程开发。你可以依据自己的实际使用场景,判断并选择一个合适的模型做训练,训练完成后可在产线内评估合适的模型权重,并最终用于实际使用场景中。

+ 68 - 43
docs/practical_tutorials/object_detection_fall_tutorial.en.md

@@ -36,60 +36,61 @@ After the trial, determine if the pipeline meets your expectations (including ac
 PaddleX provides 37 end-to-end object detection models. Refer to the [Model List](../support_list/models_list.en.md) for details. Here's a benchmark of some models:
 
 <table>
-<thead>
 <tr>
-<th>Model List</th>
+<th>Model</th><th>Model Download Link</th>
 <th>mAP(%)</th>
-<th>GPU Inference Time(ms)</th>
-<th>CPU Inference Time(ms)</th>
-<th>Model Size(M)</th>
+<th>GPU Inference Time (ms)<br/>[Normal Mode / High-Performance Mode]</th>
+<th>CPU Inference Time (ms)<br/>[Normal Mode / High-Performance Mode]</th>
+<th>Model Storage Size (M)</th>
+<th>Description</th>
 </tr>
-</thead>
-<tbody>
 <tr>
-<td>RT-DETR-H</td>
-<td>56.3</td>
-<td>100.65</td>
-<td>8451.92</td>
-<td>471</td>
+<td>PicoDet-L</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PicoDet-L_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PicoDet-L_pretrained.pdparams">Trained Model</a></td>
+<td>42.6</td>
+<td>14.68 / 5.81</td>
+<td>47.32 / 47.32</td>
+<td>20.9 M</td>
+<td rowspan="2">PP-PicoDet is a lightweight object detection algorithm for full-size, wide-angle targets, considering the computational capacity of mobile devices. Compared to traditional object detection algorithms, PP-PicoDet has a smaller model size and lower computational complexity, achieving higher speed and lower latency while maintaining detection accuracy.</td>
 </tr>
 <tr>
-<td>RT-DETR-L</td>
-<td>53.0</td>
-<td>27.89</td>
-<td>841.00</td>
-<td>125</td>
+<td>PicoDet-S</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PicoDet-S_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PicoDet-S_pretrained.pdparams">Trained Model</a></td>
+<td>29.1</td>
+<td>7.98 / 2.33</td>
+<td>14.82 / 5.60</td>
+<td>4.4 M</td>
 </tr>
 <tr>
-<td>PP-YOLOE_plus-L</td>
+<td>PP-YOLOE_plus-L</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PP-YOLOE_plus-L_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-YOLOE_plus-L_pretrained.pdparams">Trained Model</a></td>
 <td>52.9</td>
-<td>29.67</td>
-<td>700.97</td>
-<td>200</td>
+<td>33.55 / 10.46</td>
+<td>189.05 / 189.05</td>
+<td>185.3 M</td>
+<td rowspan="2">PP-YOLOE_plus is an upgraded version of the high-precision cloud-edge integrated model PP-YOLOE, developed by Baidu's PaddlePaddle vision team. By using the large-scale Objects365 dataset and optimizing preprocessing, it significantly enhances the model's end-to-end inference speed.</td>
 </tr>
 <tr>
-<td>PP-YOLOE_plus-S</td>
+<td>PP-YOLOE_plus-S</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PP-YOLOE_plus-S_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-YOLOE_plus-S_pretrained.pdparams">Trained Model</a></td>
 <td>43.7</td>
-<td>8.11</td>
-<td>137.23</td>
-<td>31</td>
+<td>12.16 / 4.58</td>
+<td>73.86 / 52.90</td>
+<td>28.3 M</td>
 </tr>
 <tr>
-<td>PicoDet-L</td>
-<td>42.6</td>
-<td>10.09</td>
-<td>129.32</td>
-<td>23</td>
+<td>RT-DETR-H</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/RT-DETR-H_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/RT-DETR-H_pretrained.pdparams">Trained Model</a></td>
+<td>56.3</td>
+<td>115.92 / 28.16</td>
+<td>971.32 / 971.32</td>
+<td>435.8 M</td>
+<td rowspan="2">RT-DETR is the first real-time end-to-end object detector. The model features an efficient hybrid encoder to meet both model performance and throughput requirements, efficiently handling multi-scale features, and proposes an accelerated and optimized query selection mechanism to optimize the dynamics of decoder queries. RT-DETR supports flexible end-to-end inference speeds by using different decoders.</td>
 </tr>
 <tr>
-<td>PicoDet-S</td>
-<td>29.1</td>
-<td>3.17</td>
-<td>13.36</td>
-<td>5</td>
+<td>RT-DETR-L</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/RT-DETR-L_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/RT-DETR-L_pretrained.pdparams">Trained Model</a></td>
+<td>53.0</td>
+<td>35.00 / 10.45</td>
+<td>495.51 / 495.51</td>
+<td>113.7 M</td>
 </tr>
-</tbody>
 </table>
+
 > <b>Note: The above accuracy metrics are based on the mAP(0.5:0.95) of the [COCO2017](https://cocodataset.org/#home) validation set. GPU inference time is measured on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.</b>
 
 In summary, lighter models in the table offer faster inference, while larger ones offer higher accuracy. This tutorial uses the PP-YOLOE_plus-S model as an example to complete the full model development process. Choose a suitable model based on your actual usage scenario, train it, evaluate the model weights within the pipeline, and finally deploy them in real-world scenarios.
@@ -111,7 +112,7 @@ tar -xf ./dataset/fall_det.tar -C ./dataset/
 To verify the dataset, simply use the following command:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PP-YOLOE_plus-S.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PP-YOLOE_plus-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/fall_det
 ```
@@ -185,7 +186,7 @@ Data conversion and data splitting can be enabled simultaneously. For data split
 Before training, ensure that you have validated your dataset. To complete the training of a PaddleX model, simply use the following command:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PP-YOLOE_plus-S.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PP-YOLOE_plus-S.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/fall_det \
     -o Train.num_classes=1
@@ -223,7 +224,7 @@ After completing model training, all outputs are saved in the specified output d
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. To evaluate a model using PaddleX, simply use the following command:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PP-YOLOE_plus-S.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PP-YOLOE_plus-S.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/fall_det
 ```
@@ -335,7 +336,7 @@ Changing Epochs Results:
 Replace the model in the production line with the fine-tuned model for testing, for example:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PP-YOLOE_plus-S.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PP-YOLOE_plus-S.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="output/best_model/inference" \
     -o Predict.input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/fall.png"
@@ -350,16 +351,40 @@ The prediction results will be generated under `./output`, and the prediction re
 
 ## 7. Development Integration/Deployment
 If the General Object Detection Pipeline meets your requirements for inference speed and precision in the production line, you can proceed directly with development integration/deployment.
-1. Directly apply the trained model in your Python project by referring to the following sample code, and modify the `Pipeline.model` in the `paddlex/pipelines/object_detection.yaml` configuration file to your own model path:
+
+1. If you need to use the fine-tuned model weights, you can obtain the pipeline configuration file for object detection and load it for prediction. You can execute the following command to save the configuration file in `my_path`:
+
+```bash
+paddlex --get_pipeline_config object_detection --save_path ./my_path
+```
+
+Fill in the local path of the fine-tuned model weights in the `model_dir` of the pipeline configuration file. If you want to directly apply the general object detection pipeline in your Python project, you can refer to the example below:
+
+```yaml
+pipeline_name: object_detection
+
+SubModules:
+  ObjectDetection:
+    module_name: object_detection
+    model_name: PicoDet-S
+    model_dir: null # Replace this with the local path to your trained model weights
+    batch_size: 1
+    img_size: null
+    threshold: null
+```
+
+Then, in your Python code, you can use the pipeline as follows:
+
 ```python
 from paddlex import create_pipeline
-pipeline = create_pipeline(pipeline="paddlex/pipelines/object_detection.yaml")
+pipeline = create_pipeline(pipeline="my_path/object_detection.yaml")
 output = pipeline.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/fall.png")
 for res in output:
     res.print() # Print the structured output of the prediction
-    res.save_to_img("./output/") # Save the visualized image of the result
+    res.save_to_img("./output/") # Save the result as a visualized image
     res.save_to_json("./output/") # Save the structured output of the prediction
 ```
+
 For more parameters, please refer to [General Object Detection Pipeline Usage Tutorial](../pipeline_usage/tutorials/cv_pipelines/object_detection.en.md).
 
 2. Additionally, PaddleX offers three other deployment methods, detailed as follows:

+ 1 - 0
docs/practical_tutorials/object_detection_fall_tutorial.md

@@ -91,6 +91,7 @@ PaddleX 提供了 37 个端到端的目标检测模型,具体可参考 [模型
 <td>113.7 M</td>
 </tr>
 </table>
+
 > <b>注:以上精度指标为 <a href="https://cocodataset.org/#home" target="_blank">COCO2017</a> 验证集 mAP(0.5:0.95)。GPU 推理耗时基于 NVIDIA Tesla T4 机器,精度类型为 FP32, CPU 推理速度基于 Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz,线程数为8,精度类型为 FP32。</b>
 
 简单来说,表格从上到下,模型推理速度更快,从下到上,模型精度更高。本教程以PP-YOLOE_plus-S模型为例,完成一次模型全流程开发。您可以依据自己的实际使用场景,判断并选择一个合适的模型做训练,训练完成后可在产线内评估合适的模型权重,并最终用于实际使用场景中。

+ 64 - 42
docs/practical_tutorials/object_detection_fashion_pedia_tutorial.en.md

@@ -36,60 +36,61 @@ After the trial, determine if the pipeline meets your expectations (including ac
 PaddleX provides 37 end-to-end object detection models. Refer to the [Model List](../support_list/models_list.en.md) for details. Below are benchmarks for some models:
 
 <table>
-<thead>
 <tr>
-<th>Model List</th>
+<th>Model</th><th>Model Download Link</th>
 <th>mAP(%)</th>
-<th>GPU Inference Time(ms)</th>
-<th>CPU Inference Time(ms)</th>
-<th>Model Size(M)</th>
+<th>GPU Inference Time (ms)<br/>[Normal Mode / High-Performance Mode]</th>
+<th>CPU Inference Time (ms)<br/>[Normal Mode / High-Performance Mode]</th>
+<th>Model Storage Size (M)</th>
+<th>Description</th>
 </tr>
-</thead>
-<tbody>
 <tr>
-<td>RT-DETR-H</td>
-<td>56.3</td>
-<td>100.65</td>
-<td>8451.92</td>
-<td>471</td>
+<td>PicoDet-L</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PicoDet-L_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PicoDet-L_pretrained.pdparams">Trained Model</a></td>
+<td>42.6</td>
+<td>14.68 / 5.81</td>
+<td>47.32 / 47.32</td>
+<td>20.9 M</td>
+<td rowspan="2">PP-PicoDet is a lightweight object detection algorithm for full-size, wide-angle targets, considering the computational capacity of mobile devices. Compared to traditional object detection algorithms, PP-PicoDet has a smaller model size and lower computational complexity, achieving higher speed and lower latency while maintaining detection accuracy.</td>
 </tr>
 <tr>
-<td>RT-DETR-L</td>
-<td>53.0</td>
-<td>27.89</td>
-<td>841.00</td>
-<td>125</td>
+<td>PicoDet-S</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PicoDet-S_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PicoDet-S_pretrained.pdparams">Trained Model</a></td>
+<td>29.1</td>
+<td>7.98 / 2.33</td>
+<td>14.82 / 5.60</td>
+<td>4.4 M</td>
 </tr>
 <tr>
-<td>PP-YOLOE_plus-L</td>
+<td>PP-YOLOE_plus-L</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PP-YOLOE_plus-L_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-YOLOE_plus-L_pretrained.pdparams">Trained Model</a></td>
 <td>52.9</td>
-<td>29.67</td>
-<td>700.97</td>
-<td>200</td>
+<td>33.55 / 10.46</td>
+<td>189.05 / 189.05</td>
+<td>185.3 M</td>
+<td rowspan="2">PP-YOLOE_plus is an upgraded version of the high-precision cloud-edge integrated model PP-YOLOE, developed by Baidu's PaddlePaddle vision team. By using the large-scale Objects365 dataset and optimizing preprocessing, it significantly enhances the model's end-to-end inference speed.</td>
 </tr>
 <tr>
-<td>PP-YOLOE_plus-S</td>
+<td>PP-YOLOE_plus-S</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PP-YOLOE_plus-S_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-YOLOE_plus-S_pretrained.pdparams">Trained Model</a></td>
 <td>43.7</td>
-<td>8.11</td>
-<td>137.23</td>
-<td>31</td>
+<td>12.16 / 4.58</td>
+<td>73.86 / 52.90</td>
+<td>28.3 M</td>
 </tr>
 <tr>
-<td>PicoDet-L</td>
-<td>42.6</td>
-<td>10.09</td>
-<td>129.32</td>
-<td>23</td>
+<td>RT-DETR-H</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/RT-DETR-H_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/RT-DETR-H_pretrained.pdparams">Trained Model</a></td>
+<td>56.3</td>
+<td>115.92 / 28.16</td>
+<td>971.32 / 971.32</td>
+<td>435.8 M</td>
+<td rowspan="2">RT-DETR is the first real-time end-to-end object detector. The model features an efficient hybrid encoder to meet both model performance and throughput requirements, efficiently handling multi-scale features, and proposes an accelerated and optimized query selection mechanism to optimize the dynamics of decoder queries. RT-DETR supports flexible end-to-end inference speeds by using different decoders.</td>
 </tr>
 <tr>
-<td>PicoDet-S</td>
-<td>29.1</td>
-<td>3.17</td>
-<td>13.36</td>
-<td>5</td>
+<td>RT-DETR-L</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/RT-DETR-L_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/RT-DETR-L_pretrained.pdparams">Trained Model</a></td>
+<td>53.0</td>
+<td>35.00 / 10.45</td>
+<td>495.51 / 495.51</td>
+<td>113.7 M</td>
 </tr>
-</tbody>
 </table>
+
 > <b>Note: The above accuracy metrics are mAP(0.5:0.95) on the COCO2017 validation set. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.</b>
 
 In summary, lighter models in the table offer faster inference, while larger ones offer higher accuracy. This tutorial takes the PicoDet-L model as an example to complete a full model development process. You can judge and select an appropriate model for training based on your actual usage scenarios. After training, you can evaluate the suitable model weights within the pipeline and ultimately use them in practical scenarios.
@@ -111,7 +112,7 @@ tar -xf ./dataset/det_mini_fashion_pedia_coco.tar -C ./dataset/
 To verify the dataset, simply use the following command:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-L.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PicoDet-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_mini_fashion_pedia_coco
 ```
@@ -183,7 +184,7 @@ Data conversion and data splitting can be enabled simultaneously. The original a
 Before training, please ensure that you have validated the dataset. To complete PaddleX model training, simply use the following command:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-L.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PicoDet-L.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/det_mini_fashion_pedia_coco \
     -o Train.num_classes=15
@@ -221,7 +222,7 @@ After completing model training, all outputs are saved in the specified output d
 After completing model training, you can evaluate the specified model weight file on the validation set to verify the model accuracy. To evaluate a model using PaddleX, simply use the following command:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-L.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PicoDet-L.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/det_mini_fashion_pedia_coco
 ```
@@ -341,7 +342,7 @@ Epoch Variation Results:
 Replace the model in the pipeline with the fine-tuned model for testing, e.g.:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-L.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PicoDet-L.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="output/best_model/inference" \
     -o Predict.input="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/application/object_detection/FashionPedia_demo.png"
@@ -357,11 +358,32 @@ The prediction results will be generated under `./output`, and the prediction re
 ## 7. Development Integration/Deployment
 If the General Object Detection Pipeline meets your requirements for inference speed and precision in your production line, you can proceed directly with development integration/deployment.
 
-1. Directly apply the trained model in your Python project by referring to the following sample code, and modify the `Pipeline.model` in the `paddlex/pipelines/object_detection.yaml` configuration file to your own model path:
+1. If you need to use the fine-tuned model weights, you can obtain the pipeline configuration file for object detection and load it for prediction. You can execute the following command to save the configuration file in `my_path`:
+
+```bash
+paddlex --get_pipeline_config object_detection --save_path ./my_path
+```
+
+Fill in the local path of the fine-tuned model weights in the `model_dir` of the pipeline configuration file. If you want to directly apply the general object detection pipeline in your Python project, you can refer to the example below:
+
+```yaml
+pipeline_name: object_detection
+
+SubModules:
+  ObjectDetection:
+    module_name: object_detection
+    model_name: PicoDet-S
+    model_dir: null # Replace this with the local path to your trained model weights
+    batch_size: 1
+    img_size: null
+    threshold: null
+```
+
+Then, in your Python code, you can use the pipeline as follows:
 
 ```python
 from paddlex import create_pipeline
-pipeline = create_pipeline(pipeline="paddlex/pipelines/object_detection.yaml")
+pipeline = create_pipeline(pipeline="my_path/object_detection.yaml")
 output = pipeline.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/application/object_detection/FashionPedia_demo.png")
 for res in output:
     res.print() # Print the structured output of the prediction

+ 66 - 42
docs/practical_tutorials/object_detection_fashion_pedia_tutorial.md

@@ -37,60 +37,61 @@ PaddleX 提供了两种体验的方式,一种是可以直接通过 PaddleX whe
 PaddleX 提供了 37 个端到端的目标检测模型,具体可参考 [模型列表](../support_list/models_list.md),其中部分模型的benchmark如下:
 
 <table>
-<thead>
 <tr>
-<th>模型列表</th>
+<th>模型</th><th>模型下载链接</th>
 <th>mAP(%)</th>
-<th>GPU 推理耗时(ms)</th>
-<th>CPU 推理耗时(ms)</th>
-<th>模型存储大小(M)</th>
+<th>GPU推理耗时(ms)<br/>[常规模式 / 高性能模式]</th>
+<th>CPU推理耗时(ms)<br/>[常规模式 / 高性能模式]</th>
+<th>模型存储大小 (M)</th>
+<th>介绍</th>
 </tr>
-</thead>
-<tbody>
 <tr>
-<td>RT-DETR-H</td>
-<td>56.3</td>
-<td>100.65</td>
-<td>8451.92</td>
-<td>471</td>
+<td>PicoDet-L</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PicoDet-L_infer.tar">推理模型</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PicoDet-L_pretrained.pdparams">训练模型</a></td>
+<td>42.6</td>
+<td>14.68 / 5.81</td>
+<td>47.32 / 47.32</td>
+<td>20.9 M</td>
+<td rowspan="2">PP-PicoDet是一种全尺寸、棱视宽目标的轻量级目标检测算法,它考虑移动端设备运算量。与传统目标检测算法相比,PP-PicoDet具有更小的模型尺寸和更低的计算复杂度,并在保证检测精度的同时更高的速度和更低的延迟。</td>
 </tr>
 <tr>
-<td>RT-DETR-L</td>
-<td>53.0</td>
-<td>27.89</td>
-<td>841.00</td>
-<td>125</td>
+<td>PicoDet-S</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PicoDet-S_infer.tar">推理模型</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PicoDet-S_pretrained.pdparams">训练模型</a></td>
+<td>29.1</td>
+<td>7.98 / 2.33</td>
+<td>14.82 / 5.60</td>
+<td>4.4 M</td>
 </tr>
 <tr>
-<td>PP-YOLOE_plus-L</td>
+<td>PP-YOLOE_plus-L</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PP-YOLOE_plus-L_infer.tar">推理模型</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-YOLOE_plus-L_pretrained.pdparams">训练模型</a></td>
 <td>52.9</td>
-<td>29.67</td>
-<td>700.97</td>
-<td>200</td>
+<td>33.55 / 10.46</td>
+<td>189.05 / 189.05</td>
+<td>185.3 M</td>
+<td rowspan="2">PP-YOLOE_plus 是一种是百度飞桨视觉团队自研的云边一体高精度模型PP-YOLOE迭代优化升级的版本,通过使用Objects365大规模数据集、优化预处理,大幅提升了模型端到端推理速度。</td>
 </tr>
 <tr>
-<td>PP-YOLOE_plus-S</td>
+<td>PP-YOLOE_plus-S</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PP-YOLOE_plus-S_infer.tar">推理模型</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-YOLOE_plus-S_pretrained.pdparams">训练模型</a></td>
 <td>43.7</td>
-<td>8.11</td>
-<td>137.23</td>
-<td>31</td>
+<td>12.16 / 4.58</td>
+<td>73.86 / 52.90</td>
+<td>28.3 M</td>
 </tr>
 <tr>
-<td>PicoDet-L</td>
-<td>42.6</td>
-<td>10.09</td>
-<td>129.32</td>
-<td>23</td>
+<td>RT-DETR-H</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/RT-DETR-H_infer.tar">推理模型</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/RT-DETR-H_pretrained.pdparams">训练模型</a></td>
+<td>56.3</td>
+<td>115.92 / 28.16</td>
+<td>971.32 / 971.32</td>
+<td>435.8 M</td>
+<td rowspan="2">RT-DETR是第一个实时端到端目标检测器。该模型设计了一个高效的混合编码器,满足模型效果与吞吐率的双需求,高效处理多尺度特征,并提出了加速和优化的查询选择机制,以优化解码器查询的动态化。RT-DETR支持通过使用不同的解码器来实现灵活端到端推理速度。</td>
 </tr>
 <tr>
-<td>PicoDet-S</td>
-<td>29.1</td>
-<td>3.17</td>
-<td>13.36</td>
-<td>5</td>
+<td>RT-DETR-L</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/RT-DETR-L_infer.tar">推理模型</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/RT-DETR-L_pretrained.pdparams">训练模型</a></td>
+<td>53.0</td>
+<td>35.00 / 10.45</td>
+<td>495.51 / 495.51</td>
+<td>113.7 M</td>
 </tr>
-</tbody>
 </table>
+
 > <b>注:以上精度指标为 <a href="https://cocodataset.org/#home" target="_blank">COCO2017</a> 验证集 mAP(0.5:0.95)。GPU 推理耗时基于 NVIDIA Tesla T4 机器,精度类型为 FP32, CPU 推理速度基于 Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz,线程数为8,精度类型为 FP32。</b>
 
 简单来说,表格从上到下,模型推理速度更快,从下到上,模型精度更高。本教程以 PicoDet-L 模型为例,完成一次模型全流程开发。你可以依据自己的实际使用场景,判断并选择一个合适的模型做训练,训练完成后可在产线内评估合适的模型权重,并最终用于实际使用场景中。
@@ -112,7 +113,7 @@ tar -xf ./dataset/det_mini_fashion_pedia_coco.tar -C ./dataset/
 在对数据集校验时,只需一行命令:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-L.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PicoDet-L.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_mini_fashion_pedia_coco
 ```
@@ -184,7 +185,7 @@ python main.py -c paddlex/configs/object_detection/PicoDet-L.yaml \
 在训练之前,请确保您已经对数据集进行了校验。完成 PaddleX 模型的训练,只需如下一条命令:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-L.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PicoDet-L.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=./dataset/det_mini_fashion_pedia_coco \
     -o Train.num_classes=15
@@ -222,7 +223,7 @@ PaddleX 中每个模型都提供了模型开发的配置文件,用于设置相
 在完成模型训练后,可以对指定的模型权重文件在验证集上进行评估,验证模型精度。使用 PaddleX 进行模型评估,只需一行命令:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-L.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PicoDet-L.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/det_mini_fashion_pedia_coco
 ```
@@ -342,7 +343,7 @@ python main.py -c paddlex/configs/object_detection/PicoDet-L.yaml \
 将产线中的模型替换为微调后的模型进行测试,如:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-L.yaml \
+python main.py -c paddlex/configs/modules/object_detection/PicoDet-L.yaml \
     -o Global.mode=predict \
     -o Predict.model_dir="output/best_model/inference" \
     -o Predict.input="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/application/object_detection/FashionPedia_demo.png"
@@ -357,10 +358,33 @@ python main.py -c paddlex/configs/object_detection/PicoDet-L.yaml \
 
 ## 7. 开发集成/部署
 如果通用目标检测产线可以达到您对产线推理速度和精度的要求,您可以直接进行开发集成/部署。
-1. 直接将训练好的模型应用在您的 Python 项目中,可以参考如下示例代码,并将`paddlex/pipelines/object_detection.yaml`配置文件中的`Pipeline.model`修改为自己的模型路径:
+
+1. 若您需要使用微调后的模型权重,可以获取 object_detection 产线配置文件,并加载配置文件进行预测。可执行如下命令将结果保存在 `my_path` 中:
+
+```bash
+paddlex --get_pipeline_config object_detection --save_path ./my_path
+```
+
+将微调后模型权重的本地路径填写至产线配置文件中的 `model_dir` 即可,若您需要将通用目标检测产线直接应用在您的 Python 项目中,可以参考如下示例:
+
+```yaml
+pipeline_name: object_detection
+
+SubModules:
+  ObjectDetection:
+    module_name: object_detection
+    model_name: PicoDet-S
+    model_dir: null # 此处替换为您训练后得到的模型权重本地路径
+    batch_size: 1
+    img_size: null
+    threshold: null
+```
+
+随后,在您的 Python 代码中,您可以这样使用产线:
+
 ```python
 from paddlex import create_pipeline
-pipeline = create_pipeline(pipeline="paddlex/pipelines/object_detection.yaml")
+pipeline = create_pipeline(pipeline="my_path/object_detection.yaml")
 output = pipeline.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/application/object_detection/FashionPedia_demo.png")
 for res in output:
     res.print() # 打印预测的结构化输出