yongsheng yuan 1 year ago
Parent
Current commit
80163c42b7

+ 2 - 2
docs/module_usage/tutorials/cv_modules/anomaly_detection.md

@@ -23,8 +23,8 @@
 
 After installing the wheel package, a few lines of code are enough to run inference with the image anomaly detection module. You can switch freely among the models under this module, and you can also integrate the module's model inference into your own project.
 Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/uad_grid.png) to your local machine.
-```bash
-from paddlex.inference import create_model 
+```python
+from paddlex import create_model 
 
 model_name = "STFPM"
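 # A minimal sketch of how this snippet likely continues (the hunk is truncated
 # here): the predict/save calls below are assumptions carried over from the
 # parallel semantic segmentation and vehicle detection snippets in this commit.
 model = create_model(model_name)
 output = model.predict("uad_grid.png", batch_size=1)
 for res in output:
     res.print(json_format=False)
     res.save_to_img("./output/")
     res.save_to_json("./output/res.json")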
 

+ 2 - 2
docs/module_usage/tutorials/cv_modules/anomaly_detection_en.md

@@ -21,7 +21,7 @@ The above model accuracy indicators are measured from the MVTec_AD dataset.
 Before quick integration, you need to install the PaddleX wheel package. For the installation method of the wheel package, please refer to the [PaddleX Local Installation Tutorial](../../../installation/installation_en.md). After installing the wheel package, a few lines of code can complete the inference of the unsupervised anomaly detection module. You can switch models under this module freely, and you can also integrate the model inference of the unsupervised anomaly detection module into your project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/uad_grid.png) to your local machine.
 
 ```python
-from paddlex.inference import create_model 
+from paddlex import create_model 
 
 model_name = "STFPM"
 
@@ -40,7 +40,7 @@ For more information on the usage of PaddleX's single-model inference API, pleas
 If you seek higher accuracy from existing models, you can leverage PaddleX's custom development capabilities to develop better unsupervised anomaly detection models. Before using PaddleX to develop unsupervised anomaly detection models, ensure you have installed the PaddleDetection plugin for PaddleX. The installation process can be found in the [PaddleX Local Installation Tutorial](../../../installation/installation_en.md).
 
 ### 4.1 Data Preparation
-Before model training, you need to prepare the corresponding dataset for the task module. PaddleX provides a data validation function for each module, and **only data that passes the validation can be used for model training**. Additionally, PaddleX provides demo datasets for each module, which you can use to complete subsequent development based on the official demos. If you wish to use private datasets for subsequent model training, refer to the [PaddleX Semantic Segmentation Task Module Data Annotation Tutorial](/docs_new_en/data_annotations/cv_modules/semantic_segmentation_en.md).
+Before model training, you need to prepare the corresponding dataset for the task module. PaddleX provides a data validation function for each module, and **only data that passes the validation can be used for model training**. Additionally, PaddleX provides demo datasets for each module, which you can use to complete subsequent development based on the official demos. If you wish to use private datasets for subsequent model training, refer to the [PaddleX Semantic Segmentation Task Module Data Annotation Tutorial](../../../data_annotations/cv_modules/semantic_segmentation_en.md).
 
 #### 4.1.1 Demo Data Download
 You can use the following commands to download the demo dataset to a specified folder:

+ 2 - 2
docs/module_usage/tutorials/cv_modules/face_detection.md

@@ -23,7 +23,7 @@
 After installing the whl package, a few lines of code are enough to run inference with the face detection module. You can switch freely among the models under this module, and you can also integrate the module's model inference into your own project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/face_detection.png) to your local machine.
 
 ```python
-from paddlex.inference import create_model 
+from paddlex import create_model 
 
 model_name = "PicoDet_LCNet_x2_5_face"
 
@@ -41,7 +41,7 @@ for res in output:
 If you seek higher accuracy from existing models, you can leverage PaddleX's custom development capabilities to develop better face detection models. Before developing face detection models with PaddleX, be sure to install the PaddleDetection plugin for PaddleX; the installation process is described in the [PaddleX Local Installation Tutorial](../../../installation/installation.md)

 ### 4.1 Data Preparation
-Before model training, you need to prepare the corresponding dataset for the task module. PaddleX provides a data validation function for each module, and **only data that passes validation can be used for model training**. In addition, PaddleX provides a demo dataset for each module, and you can complete subsequent development based on the official demo data. If you wish to use a private dataset for subsequent model training, refer to the [PaddleX Object Detection Task Module Data Annotation Tutorial](/data_annotations/cv_modules/object_detection.md).
+Before model training, you need to prepare the corresponding dataset for the task module. PaddleX provides a data validation function for each module, and **only data that passes validation can be used for model training**. In addition, PaddleX provides a demo dataset for each module, and you can complete subsequent development based on the official demo data. If you wish to use a private dataset for subsequent model training, refer to the [PaddleX Object Detection Task Module Data Annotation Tutorial](../../../data_annotations/cv_modules/object_detection.md).

 #### 4.1.1 Demo Data Download
 You can use the following command to download the demo dataset to a specified folder:

+ 1 - 1
docs/module_usage/tutorials/cv_modules/face_detection_en.md

@@ -21,7 +21,7 @@ Face detection is a fundamental task in object detection, aiming to automaticall
 Before quick integration, you need to install the PaddleX wheel package. For the installation method of the wheel package, please refer to the [PaddleX Local Installation Tutorial](../../../installation/installation_en.md). After installing the wheel package, a few lines of code can complete the inference of the face detection module. You can switch models under this module freely, and you can also integrate the model inference of the face detection module into your project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/face_detection.png) to your local machine.
 
 ```python
-from paddlex.inference import create_model 
+from paddlex import create_model 
 
 model_name = "PicoDet_LCNet_x2_5_face"
 

+ 4 - 4
docs/module_usage/tutorials/cv_modules/human_detection.md

@@ -49,7 +49,7 @@
 After installing the wheel package, a few lines of code are enough to run inference with the human detection module. You can switch freely among the models under this module, and you can also integrate the module's model inference into your own project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/human_detection.jpg) to your local machine.
 
 ```python
-from paddlex.inference import create_model 
+from paddlex import create_model 
 
 model_name = "PP-YOLOE-S_human"
 
@@ -77,7 +77,7 @@ for res in output:
 
 ```bash
 cd /path/to/paddlex
-wget https://bj.bcebos.com/v1/paddledet/data/widerperson_coco_examples.tar -P ./dataset
+wget https://paddle-model-ecology.bj.bcebos.com/paddlex/data/widerperson_coco_examples.tar -P ./dataset
 tar -xf ./dataset/widerperson_coco_examples.tar -C ./dataset/
 ```
 #### 4.1.2 Data Validation
@@ -234,7 +234,7 @@ python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
 ```
 Similar to model training, the following steps are required:

-* Specify the path to the model's `.yaml` configuration file (here, `PP-YOLOE-S_human``.yaml`)
+* Specify the path to the model's `.yaml` configuration file (here, `PP-YOLOE-S_human.yaml`)
 * Set the mode to model evaluation: `-o Global.mode=evaluate`
 * Specify the validation dataset path: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the fields under `Global` and `Evaluate` in the `.yaml` configuration file. For details, refer to the [PaddleX Common Model Configuration File Parameter Description](../../instructions/config_parameters_common.md).
@@ -243,7 +243,7 @@ python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
   <summary>👉 <b>More Details (Click to Expand)</b></summary>
 
 
-When evaluating the model, you need to specify the model weight file path. Each configuration file has a built-in default weight save path; if you need to change it, simply set it by appending a command-line argument, e.g. `-o Evaluate.weight_path=``./output/best_model/best_model/model.pdparams`.
+When evaluating the model, you need to specify the model weight file path. Each configuration file has a built-in default weight save path; if you need to change it, simply set it by appending a command-line argument, e.g. `-o Evaluate.weight_path=./output/best_model/best_model/model.pdparams`.

 After model evaluation completes, an `evaluate_result.json` file is produced, which records the evaluation results; specifically, it records whether the evaluation task completed successfully, and the model's evaluation metrics, including AP.
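A minimal sketch of inspecting that artifact, assuming it lives under the default `./output/` directory (the document specifies nothing about its schema beyond completion status and metrics such as AP):

```python
import json

# Load the evaluation artifact produced by `-o Global.mode=evaluate`
# and print it; field names are intentionally not assumed here.
with open("./output/evaluate_result.json", "r", encoding="utf-8") as f:
    result = json.load(f)
print(result)
```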
 

+ 2 - 2
docs/module_usage/tutorials/cv_modules/human_detection_en.md

@@ -50,7 +50,7 @@ After installing the wheel package, you can perform human detection with just a
 
 
 ```python
-from paddlex.inference import create_model 
+from paddlex import create_model 
 
 model_name = "PP-YOLOE-S_human"
 
@@ -78,7 +78,7 @@ You can download the demo dataset to a specified folder using the following comm
 
 ```bash
 cd /path/to/paddlex
-wget https://bj.bcebos.com/v1/paddledet/data/widerperson_coco_examples.tar -P ./dataset
+wget https://paddle-model-ecology.bj.bcebos.com/paddlex/data/widerperson_coco_examples.tar -P ./dataset
 tar -xf ./dataset/widerperson_coco_examples.tar -C ./dataset/
 ```
 

+ 1 - 1
docs/module_usage/tutorials/cv_modules/mainbody_detection.md

@@ -40,7 +40,7 @@
 After installing the whl package, a few lines of code are enough to run inference with the mainbody detection module. You can switch freely among the models under this module, and you can also integrate the module's model inference into your own project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_object_detection_002.png) to your local machine.
 
 ```python
-from paddlex.inference import create_model 
+from paddlex import create_model 
 
 model_name = "PP-ShiTuV2_det"
 

+ 1 - 1
docs/module_usage/tutorials/cv_modules/mainbody_detection_en.md

@@ -40,7 +40,7 @@ Mainbody detection is a fundamental task in object detection, aiming to identify
 After installing the wheel package, you can perform mainbody detection inference with just a few lines of code. You can easily switch between models under this module, and integrate the mainbody detection model inference into your project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_object_detection_002.png) to your local machine.
 
 ```python
-from paddlex.inference import create_model 
+from paddlex import create_model 
 
 model_name = "PP-ShiTuV2_det"
 

+ 1 - 1
docs/module_usage/tutorials/cv_modules/semantic_segmentation.md

@@ -46,7 +46,7 @@
 
 After installing the wheel package, a few lines of code are enough to run inference with the semantic segmentation module. You can switch freely among the models under this module, and you can also integrate the module's model inference into your own project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_semantic_segmentation_002.png) to your local machine.
 
-```bash
+```python
 from paddlex import create_model
 model = create_model("PP-LiteSeg-T")
 output = model.predict("general_semantic_segmentation_002.png", batch_size=1)
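 # A minimal sketch of the rest of this block, assuming it matches the English
 # counterpart below, which continues with a loop over the results:
 for res in output:
     res.save_to_img("./output/")
     res.save_to_json("./output/res.json")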

+ 9 - 11
docs/module_usage/tutorials/cv_modules/semantic_segmentation_en.md

@@ -42,12 +42,12 @@ Semantic segmentation is a technique in computer vision that classifies each pix
 </details>
 
 ## III. Quick Integration
-> ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to the [PaddleX Local Installation Guide](../../../installation/installation.md)
+> ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to the [PaddleX Local Installation Guide](../../../installation/installation_en.md)
 
 
 Just a few lines of code can complete the inference of the Semantic Segmentation module, allowing you to easily switch between models under this module. You can also integrate the model inference of the Semantic Segmentation module into your project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_semantic_segmentation_002.png) to your local machine.
 
-```bash
+```python
 from paddlex import create_model
 model = create_model("PP-LiteSeg-T")
 output = model.predict("general_semantic_segmentation_002.png", batch_size=1)
@@ -56,15 +56,15 @@ for res in output:
     res.save_to_img("./output/")
     res.save_to_json("./output/res.json")
 ```
-For more information on using PaddleX's single-model inference API, refer to the [PaddleX Single Model Python Script Usage Instructions](../../instructions/model_python_API.md).
+For more information on using PaddleX's single-model inference API, refer to the [PaddleX Single Model Python Script Usage Instructions](../../instructions/model_python_API_en.md).
 
 ## IV. Custom Development
 
-If you seek higher accuracy, you can leverage PaddleX's custom development capabilities to develop better Semantic Segmentation models. Before developing a Semantic Segmentation model with PaddleX, ensure you have installed PaddleClas plugin for PaddleX. The installation process can be found in the custom development section of the [PaddleX Local Installation Tutorial](https://github.com/AmberC0209/PaddleX/blob/docs_change/docs_new/installation/installation.md).
+If you seek higher accuracy, you can leverage PaddleX's custom development capabilities to develop better Semantic Segmentation models. Before developing a Semantic Segmentation model with PaddleX, ensure you have installed the PaddleSeg plugin for PaddleX. The installation process can be found in the custom development section of the [PaddleX Local Installation Tutorial](../../../installation/installation_en.md).
 
 ### 4.1 Dataset Preparation
 
-Before model training, you need to prepare a dataset for the task. PaddleX provides data validation functionality for each module. **Only data that passes validation can be used for model training.** Additionally, PaddleX provides demo datasets for each module, which you can use to complete subsequent development. If you wish to use private datasets for model training, refer to [PaddleX Semantic Segmentation Task Module Data Preparation Tutorial](/docs_new_en/data_annotations/cv_modules/semantic_segmentation_en.md).
+Before model training, you need to prepare a dataset for the task. PaddleX provides data validation functionality for each module. **Only data that passes validation can be used for model training.** Additionally, PaddleX provides demo datasets for each module, which you can use to complete subsequent development. If you wish to use private datasets for model training, refer to [PaddleX Semantic Segmentation Task Module Data Preparation Tutorial](../../../data_annotations/cv_modules/semantic_segmentation_en.md).
 
 #### 4.1.1 Demo Data Download
 
@@ -118,8 +118,6 @@ The specific content of the verification result file is:
 }
 ```
 
-</details>
-
 The verification results above indicate that `check_pass` being `True` means the dataset format meets the requirements. Explanations for other indicators are as follows:
 
 * `attributes.num_classes`: The number of classes in this dataset is 2;
@@ -247,14 +245,14 @@ You need to follow these steps:
 * Set the mode to model training: `-o Global.mode=train`
 * Specify the training dataset path: `-o Global.dataset_dir`
 
-Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to train using the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Configuration Parameters Documentation](../../instructions/config_parameters_common.md).
+Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to train using the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Configuration Parameters Documentation](../../instructions/config_parameters_common_en.md).
 
 <details>
   <summary>👉 <b>More Details (Click to Expand)</b></summary>
 
 * During model training, PaddleX automatically saves model weight files, with the default path being `output`. To specify a different save path, use the `-o Global.output` field in the configuration file.
 * PaddleX abstracts the concepts of dynamic graph weights and static graph weights from you. During model training, both dynamic and static graph weights are produced, and static graph weights are used by default for model inference.
-* When training other models, specify the corresponding configuration file. The mapping between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list.md).
+* When training other models, specify the corresponding configuration file. The mapping between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
 
 After model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
@@ -279,7 +277,7 @@ Similar to model training, follow these steps:
 * Set the mode to model evaluation: `-o Global.mode=evaluate`
 * Specify the validation dataset path: `-o Global.dataset_dir`
 
-Other related parameters can be set by modifying the `Global` and `Evaluate` fields in the `.yaml` configuration file. For more details, refer to the [PaddleX Common Configuration Parameters Documentation](../../instructions/config_parameters_common.md).
+Other related parameters can be set by modifying the `Global` and `Evaluate` fields in the `.yaml` configuration file. For more details, refer to the [PaddleX Common Configuration Parameters Documentation](../../instructions/config_parameters_common_en.md).
 
 <details>
   <summary>👉 <b>More Details (Click to Expand)</b></summary>
@@ -328,5 +326,5 @@ The document semantic segmentation module can be integrated into PaddleX pipelin
 
 2. **Module Integration**
 
-The weights you produce can be directly integrated into the semantic segmentation module. You can refer to the Python sample code in [Quick Integration](#quick-integration) and just replace the model with the path to the model you trained.
+The weights you produce can be directly integrated into the semantic segmentation module. You can refer to the Python sample code in [Quick Integration](#iii-quick-integration) and just replace the model with the path to the model you trained.
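A minimal sketch of that substitution, assuming a hypothetical local weights path (substitute wherever your training run saved its inference weights):

```python
from paddlex import create_model

# Hypothetical path to your trained model; replace with your own output dir.
model = create_model("./output/best_model/inference")
output = model.predict("general_semantic_segmentation_002.png", batch_size=1)
for res in output:
    res.save_to_img("./output/")
    res.save_to_json("./output/res.json")
```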
     

+ 39 - 5
docs/module_usage/tutorials/cv_modules/small_object_detection.md

@@ -19,7 +19,7 @@
    <th >GPU Inference Time (ms)</th>
    <th >CPU Inference Time (ms)</th>
    <th >Model Size (M)</th>
-    <th >IntroductionVisDrone</th>
+    <th >Introduction</th>
   </tr>
   <tr>
     <td>PP-YOLOE_plus_SOD-L</td>
@@ -28,7 +28,7 @@
     <td>57.1</td>
     <td>1007.0</td>
     <td>324.93</td>
-    <td rowspan="3">PP-YOLOE_plus small object detection model trained on VisDrone</td>
+    <td rowspan="3">PP-YOLOE_plus small object detection model trained on VisDrone. VisDrone is a benchmark dataset for unmanned aerial vehicle (UAV) visual data; because its targets are small and pose inherent challenges, it is used for training and evaluating small object detection tasks</td>
     
   </tr>
   <tr>
@@ -58,7 +58,7 @@
 After installing the whl package, a few lines of code are enough to run inference with the small object detection module. You can switch freely among the models under this module, and you can also integrate the module's model inference into your own project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/small_object_detection.jpg) to your local machine.
 
 ```python
-from paddlex.inference import create_model 
+from paddlex import create_model 
 
 model_name = "PP-YOLOE_plus_SOD-S"
 
@@ -152,7 +152,41 @@ python main.py -c paddlex/configs/smallobject_detection/PP-YOLOE_plus_SOD-S.yaml
 
 **(1) Dataset Format Conversion**

-Small object detection does not support dataset format conversion.
+Small object detection supports converting datasets in `VOC` and `LabelMe` formats to `COCO` format.
+
+Parameters related to dataset validation can be set by modifying the fields under `CheckDataset` in the configuration file. Example descriptions of some parameters in the configuration file are as follows:
+
+* `CheckDataset`:
+  * `convert`:
+    * `enable`: Whether to perform dataset format conversion. Small object detection supports converting `VOC` and `LabelMe` format datasets to `COCO` format. Default is `False`;
+    * `src_dataset_type`: If dataset format conversion is performed, the source dataset format must be set. Default is `null`, with optional values `VOC`, `LabelMe`, `VOCWithUnlabeled`, and `LabelMeWithUnlabeled`;
+For example, if you want to convert a `LabelMe` format dataset to `COCO` format, taking the following `LabelMe` format dataset as an example, you need to modify the configuration as follows:
+
+```bash
+......
+CheckDataset:
+  ......
+  convert:
+    enable: True
+    src_dataset_type: LabelMe
+  ......
+```
+Then execute the command:
+
+```bash
+python main.py -c paddlex/configs/smallobject_detection/PP-YOLOE_plus_SOD-S.yaml \
+    -o Global.mode=check_dataset \
+    -o Global.dataset_dir=./path/to/your_smallobject_labelme_dataset
+```
+Of course, the above parameters can also be set by appending command-line arguments. Taking a `LabelMe` format dataset as an example:
+
+```bash
+python main.py -c paddlex/configs/smallobject_detection/PP-YOLOE_plus_SOD-S.yaml \
+    -o Global.mode=check_dataset \
+    -o Global.dataset_dir=./path/to/your_smallobject_labelme_dataset \
+    -o CheckDataset.convert.enable=True \
+    -o CheckDataset.convert.src_dataset_type=LabelMe
+```
 
 **(2) Dataset Splitting**
 
@@ -198,7 +232,7 @@ python main.py -c paddlex/configs/smallobject_detection/PP-YOLOE_plus_SOD-S.yaml
 </details>
 
 ### 4.2 Model Training
-Model training can be completed with a single command. Taking the training of PP-ShiTuV2_det here as an example:
+Model training can be completed with a single command. Taking the training of PP-YOLOE_plus_SOD-S here as an example:
 
 ```bash
 python main.py -c paddlex/configs/smallobject_detection/PP-YOLOE_plus_SOD-S.yaml \

+ 37 - 3
docs/module_usage/tutorials/cv_modules/small_object_detection_en.md

@@ -27,7 +27,7 @@ Small object detection typically refers to accurately detecting and locating sma
     <td>57.1</td>
     <td>1007.0</td>
     <td>324.93</td>
-    <td rowspan="3">PP-YOLOE_plus small object detection model trained on VisDrone</td>
+    <td rowspan="3">PP-YOLOE_plus small object detection model trained on VisDrone. VisDrone is a benchmark dataset specifically for unmanned aerial vehicle (UAV) visual data, which is used for small object detection due to the small size of the targets and the inherent challenges they pose.</td>
     
   </tr>
   <tr>
@@ -57,7 +57,7 @@ Small object detection typically refers to accurately detecting and locating sma
 After installing the wheel package, you can complete the inference of the small object detection module with just a few lines of code. You can switch models under this module freely, and you can also integrate the model inference of the small object detection module into your project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/small_object_detection.jpg) to your local machine.
 
 ```python
-from paddlex.inference import create_model 
+from paddlex import create_model 
 
 model_name = "PP-YOLOE_plus_SOD-S"
 
@@ -151,7 +151,41 @@ After completing the dataset verification, you can convert the dataset format or
 
 **(1) Dataset Format Conversion**
 
-Small object detection does not support data format conversion.
+Small object detection supports converting datasets in `VOC` and `LabelMe` formats to `COCO` format.
+
+Parameters related to dataset validation can be set by modifying the fields under `CheckDataset` in the configuration file. Examples of some parameters in the configuration file are as follows:
+
+* `CheckDataset`:
+  * `convert`:
+    * `enable`: Whether to perform dataset format conversion. Small object detection supports converting `VOC` and `LabelMe` format datasets to `COCO` format. Default is `False`;
+    * `src_dataset_type`: If dataset format conversion is performed, the source dataset format needs to be set. Default is `null`, with optional values `VOC`, `LabelMe`, `VOCWithUnlabeled`, `LabelMeWithUnlabeled`;
+For example, if you want to convert a `LabelMe` format dataset to `COCO` format, taking the following `LabelMe` format dataset as an example, you need to modify the configuration as follows:
+
+```bash
+......
+CheckDataset:
+  ......
+  convert:
+    enable: True
+    src_dataset_type: LabelMe
+  ......
+```
+Then execute the command:
+
+```bash
+python main.py -c paddlex/configs/smallobject_detection/PP-YOLOE_plus_SOD-S.yaml \
+    -o Global.mode=check_dataset \
+    -o Global.dataset_dir=./path/to/your_smallobject_labelme_dataset
+```
+Of course, the above parameters also support being set by appending command line arguments. Taking a `LabelMe` format dataset as an example:
+
+```bash
+python main.py -c paddlex/configs/smallobject_detection/PP-YOLOE_plus_SOD-S.yaml \
+    -o Global.mode=check_dataset \
+    -o Global.dataset_dir=./path/to/your_smallobject_labelme_dataset \
+    -o CheckDataset.convert.enable=True \
+    -o CheckDataset.convert.src_dataset_type=LabelMe
+```
 
 **(2) Dataset Splitting**
 

+ 4 - 4
docs/module_usage/tutorials/cv_modules/vehicle_detection.md

@@ -45,7 +45,7 @@
 After installing the wheel package, a few lines of code are enough to run inference with the vehicle detection module. You can switch freely among the models under this module, and you can also integrate the module's model inference into your own project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_detection.jpg) to your local machine.
 
 ```python
-from paddlex.inference import create_model 
+from paddlex import create_model 
 
 model_name = "PP-YOLOE-S_vehicle"
 
@@ -140,7 +140,7 @@ python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
 
 **(1) Dataset Format Conversion**

-Face detection does not support dataset format conversion.
+Vehicle detection does not support dataset format conversion.

 **(2) Dataset Splitting**
 
@@ -225,7 +225,7 @@ python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
 ```
 Similar to model training, the following steps are required:

-* Specify the path to the model's `.yaml` configuration file (here, `PP-YOLOE-S_vehicle``.yaml`)
+* Specify the path to the model's `.yaml` configuration file (here, `PP-YOLOE-S_vehicle.yaml`)
 * Set the mode to model evaluation: `-o Global.mode=evaluate`
 * Specify the validation dataset path: `-o Global.dataset_dir`
 Other related parameters can be set by modifying the fields under `Global` and `Evaluate` in the `.yaml` configuration file. For details, refer to the [PaddleX Common Model Configuration File Parameter Description](../../instructions/config_parameters_common.md).
@@ -234,7 +234,7 @@ python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
   <summary>👉 <b>More Details (Click to Expand)</b></summary>
 
 
-When evaluating the model, you need to specify the model weight file path. Each configuration file has a built-in default weight save path; if you need to change it, simply set it by appending a command-line argument, e.g. `-o Evaluate.weight_path=``./output/best_model/best_model/model.pdparams`.
+When evaluating the model, you need to specify the model weight file path. Each configuration file has a built-in default weight save path; if you need to change it, simply set it by appending a command-line argument, e.g. `-o Evaluate.weight_path=./output/best_model/best_model/model.pdparams`.

 After model evaluation completes, an `evaluate_result.json` file is produced, which records the evaluation results; specifically, it records whether the evaluation task completed successfully, and the model's evaluation metrics, including AP.
 

+ 5 - 5
docs/module_usage/tutorials/cv_modules/vehicle_detection_en.md

@@ -39,18 +39,18 @@ Vehicle detection is a subtask of object detection, specifically referring to th
 **Note: The evaluation set for the above accuracy metrics is PPVehicle dataset mAP(0.5:0.95). GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 </details>
 
-## III. Quick Integration  <a id="quick"> </a> 
+## III. Quick Integration
 > ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to the [PaddleX Local Installation Guide](../../../installation/installation_en.md)
 
 After installing the wheel package, you can complete the inference of the vehicle detection module with just a few lines of code. You can switch models under this module freely, and you can also integrate the model inference of the vehicle detection module into your project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_detection.jpg) to your local machine.
 
 ```python
-from paddlex.inference import create_model 
+from paddlex import create_model 
 
 model_name = "PP-YOLOE-S_vehicle"
 
 model = create_model(model_name)
-output = model.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_detection.jpg", batch_size=1)
+output = model.predict("vehicle_detection.jpg", batch_size=1)
 
 for res in output:
     res.print(json_format=False)
@@ -205,7 +205,7 @@ Other related parameters can be set by modifying the `Global` and `Train` fields
 * During model training, PaddleX automatically saves model weight files, defaulting to `output`. To specify a save path, use the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
 * When training other models, specify the corresponding configuration file. The correspondence between models and configuration files can be found in the [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md).
-After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically```markdown
+After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically
 Similar to model training, the following steps are required:
 
 * Specify the `.yaml` configuration file path of the model (here it is `PP-YOLOE-S_vehicle.yaml`)
@@ -269,4 +269,4 @@ Similar to model training and evaluation, the following steps are required:
 Other related parameters can be set by modifying the fields under `Global` and `Predict` in the `.yaml` configuration file. For details, please refer to [PaddleX Common Model Configuration File Parameter Description](../../instructions/config_parameters_common_en.md).
 
 #### 4.4.2 Model Integration
-The weights you produced can be directly integrated into the vehicle detection module. You can refer to the Python example code in [Quick Integration](#quick), simply replace the model with the path to your trained model.
+The weights you produced can be directly integrated into the vehicle detection module. You can refer to the Python example code in [Quick Integration](#iii-quick-integration), simply replace the model with the path to your trained model.

+ 2 - 2
docs/module_usage/tutorials/ocr_modules/layout_detection.md

@@ -26,7 +26,7 @@
 After installing the whl package, a few lines of code are enough to run inference with the layout detection module. You can switch freely among the models under this module, and you can also integrate the module's model inference into your own project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/layout.jpg) to your local machine.
 
 ```python
-from paddlex.inference import create_model 
+from paddlex import create_model 
 
 model_name = "PicoDet-L_layout_3cls"
 
@@ -248,7 +248,7 @@ python main.py -c paddlex/configs/structure_analysis/PicoDet-L_layout_3cls.yaml
 The model can be directly integrated into PaddleX pipelines or into your own project.

 1. **Pipeline Integration**
-The layout detection module can be integrated into PaddleX pipelines such as the [General Table Recognition Pipeline](../../../pipeline_usage/tutorials/ocr_pipelies/table_recognition.md) and the [Document Scene Information Extraction Pipeline v3 (PP-ChatOCRv3)](../../..//pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md). Simply replace the model path to update the layout detection module's model. In pipeline integration, you can deploy your model with high-performance deployment and service-oriented deployment.
+The layout detection module can be integrated into PaddleX pipelines such as the [General Table Recognition Pipeline](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition.md) and the [Document Scene Information Extraction Pipeline v3 (PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md). Simply replace the model path to update the layout detection module's model. In pipeline integration, you can deploy your model with high-performance deployment and service-oriented deployment.

 1. **Module Integration**
 The weights you produce can be directly integrated into the layout detection module. You can refer to the Python example code in [Quick Integration](#三快速集成), simply replacing the model with the path to your trained model.

+ 2 - 2
docs/module_usage/tutorials/ocr_modules/layout_detection_en.md

@@ -26,7 +26,7 @@ The core task of structure analysis is to parse and segment the content of input
 After installing the wheel package, a few lines of code can complete the inference of the structure analysis module. You can switch models under this module freely, and you can also integrate the model inference of the structure analysis module into your project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/layout.jpg) to your local machine.
 
 ```python
-from paddlex.inference import create_model 
+from paddlex import create_model 
 
 model_name = "PicoDet-L_layout_3cls"
 
@@ -250,7 +250,7 @@ Other related parameters can be set by modifying the fields under `Global` and `
 The model can be directly integrated into PaddleX pipelines or into your own projects.
 
 1. **Pipeline Integration**
-The structure analysis module can be integrated into PaddleX pipelines such as the [General Table Recognition Pipeline](../../../pipeline_usage/tutorials/ocr_pipelies/table_recognition_en.md) and the [Document Scene Information Extraction Pipeline v3 (PP-ChatOCRv3)](../../..//pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md). Simply replace the model path to update the layout area localization module. In pipeline integration, you can use high-performance deployment and service-oriented deployment to deploy your model.
+The structure analysis module can be integrated into PaddleX pipelines such as the [General Table Recognition Pipeline](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md) and the [Document Scene Information Extraction Pipeline v3 (PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md). Simply replace the model path to update the layout area localization module. In pipeline integration, you can use high-performance deployment and service-oriented deployment to deploy your model.
 
 1. **Module Integration**
 The weights you produce can be directly integrated into the layout area localization module. You can refer to the Python example code in the [Quick Integration](#quick) section, simply replacing the model with the path to your trained model.