
Merge develop into 3.0-beta1 for docs and bugfixes (#2248)

* Doc review2 (#2235)

* doc review modification

* fix erroneous yaml modification

* set device when init model

* support dcu

* refine practical en docs (#2238)

* bugfix & support CI for chatocr

* fix broken links (#2244)

* fix broken links

* replace Solov2 with SOLOv2

* Refine ocr docs (#2242)

* Update paddlepaddle_install.md

* Update paddlepaddle_install_en.md

* refine docs

* refine docs

* refine

* fix params of resize op in ocr_det (#2246)

* add chatocrv3 practical tutorial (#2241)

* add chatocrv3 practical tutorial

* add seal

* rm unnecessary device setting (#2247)

* Update ppchatocrv3 serving app interface (#2245)

* update doc (#2240)

* update doc

update

update readme

* update docs

---------

Co-authored-by: liuhongen1234567 <65936492+liuhongen1234567@users.noreply.github.com>
Co-authored-by: gaotingquan <gaotingquan@baidu.com>
Co-authored-by: zhangyubo0722 <94225063+zhangyubo0722@users.noreply.github.com>
Co-authored-by: Liu Jiaxuan <85537209+liu-jiaxuan@users.noreply.github.com>
Co-authored-by: Tingquan Gao <35441050@qq.com>
Co-authored-by: Sunflower7788 <263037929@qq.com>
Co-authored-by: Lin Manhui <bob1998425@hotmail.com>
Co-authored-by: AmberC0209 <55582609+AmberC0209@users.noreply.github.com>
cuicheng01 committed 1 year ago
commit 524fab4a4e
97 changed files, with 2718 additions and 1464 deletions
  1. +21 -12  README.md
  2. +59 -49  README_en.md
  3. +2 -2  docs/data_annotations/cv_modules/instance_segmentation.md
  4. +40 -41  docs/module_usage/instructions/config_parameters_common.md
  5. +41 -42  docs/module_usage/instructions/config_parameters_common_en.md
  6. +52 -50  docs/module_usage/instructions/config_parameters_time_series.md
  7. +55 -75  docs/module_usage/instructions/config_parameters_time_series_en.md
  8. +1 -1  docs/module_usage/tutorials/cv_modules/anomaly_detection.md
  9. +2 -2  docs/module_usage/tutorials/cv_modules/face_detection.md
  10. +2 -2  docs/module_usage/tutorials/cv_modules/human_detection.md
  11. +3 -3  docs/module_usage/tutorials/cv_modules/image_classification.md
  12. +7 -7  docs/module_usage/tutorials/cv_modules/image_feature.md
  13. +18 -17  docs/module_usage/tutorials/cv_modules/instance_segmentation.md
  14. +13 -12  docs/module_usage/tutorials/cv_modules/instance_segmentation_en.md
  15. +2 -2  docs/module_usage/tutorials/cv_modules/mainbody_detection.md
  16. +4 -4  docs/module_usage/tutorials/cv_modules/mainbody_detection_en.md
  17. +8 -8  docs/module_usage/tutorials/cv_modules/ml_classification.md
  18. +33 -34  docs/module_usage/tutorials/cv_modules/object_detection.md
  19. +33 -35  docs/module_usage/tutorials/cv_modules/object_detection_en.md
  20. +23 -3  docs/module_usage/tutorials/cv_modules/pedestrian_attribute_recognition.md
  21. +20 -0  docs/module_usage/tutorials/cv_modules/pedestrian_attribute_recognition_en.md
  22. +1 -1  docs/module_usage/tutorials/cv_modules/semantic_segmentation.md
  23. +3 -3  docs/module_usage/tutorials/cv_modules/small_object_detection.md
  24. +6 -4  docs/module_usage/tutorials/cv_modules/vehicle_attribute_recognition.md
  25. +4 -0  docs/module_usage/tutorials/cv_modules/vehicle_attribute_recognition_en.md
  26. +2 -2  docs/module_usage/tutorials/cv_modules/vehicle_detection.md
  27. +2 -2  docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification.md
  28. +7 -7  docs/module_usage/tutorials/ocr_modules/formula_recognition.md
  29. +3 -3  docs/module_usage/tutorials/ocr_modules/layout_detection.md
  30. +3 -3  docs/module_usage/tutorials/ocr_modules/layout_detection_en.md
  31. +2 -2  docs/module_usage/tutorials/ocr_modules/seal_text_detection.md
  32. +4 -4  docs/module_usage/tutorials/ocr_modules/table_structure_recognition.md
  33. +1 -1  docs/module_usage/tutorials/ocr_modules/table_structure_recognition_en.md
  34. +2 -2  docs/module_usage/tutorials/ocr_modules/text_detection.md
  35. +6 -6  docs/module_usage/tutorials/ocr_modules/text_recognition.md
  36. +6 -7  docs/module_usage/tutorials/ocr_modules/text_recognition_en.md
  37. +1 -1  docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection.md
  38. +1 -1  docs/module_usage/tutorials/time_series_modules/time_series_classification.md
  39. +1 -1  docs/module_usage/tutorials/time_series_modules/time_series_forecasting.md
  40. +102 -50  docs/pipeline_usage/pipeline_develop_guide.md
  41. +20 -23  docs/pipeline_usage/pipeline_develop_guide_en.md
  42. +4 -4  docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.md
  43. +4 -4  docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection_en.md
  44. +6 -6  docs/pipeline_usage/tutorials/cv_pipelines/image_classification.md
  45. +5 -5  docs/pipeline_usage/tutorials/cv_pipelines/image_classification_en.md
  46. +6 -6  docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md
  47. +4 -3  docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification_en.md
  48. +7 -7  docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.md
  49. +4 -4  docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation_en.md
  50. +6 -6  docs/pipeline_usage/tutorials/cv_pipelines/object_detection.md
  51. +2 -2  docs/pipeline_usage/tutorials/cv_pipelines/object_detection_en.md
  52. +8 -8  docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.md
  53. +2 -2  docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation_en.md
  54. +3 -3  docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection.md
  55. +1 -1  docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection_en.md
  56. +4 -13  docs/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md
  57. +2 -3  docs/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md
  58. +86 -56  docs/pipeline_usage/tutorials/ocr_pipelines/OCR.md
  59. +71 -47  docs/pipeline_usage/tutorials/ocr_pipelines/OCR_en.md
  60. +3 -3  docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.md
  61. +2 -2  docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition_en.md
  62. +344 -0  docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.md
  63. +354 -0  docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition_en.md
  64. +7 -7  docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.md
  65. +1 -1  docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md
  66. +6 -6  docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md
  67. +4 -4  docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection_en.md
  68. +6 -6  docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md
  69. +4 -4  docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification_en.md
  70. +6 -6  docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.md
  71. +4 -4  docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting_en.md
  72. +438 -0  docs/practical_tutorials/document_scene_information_extraction(layout_detection)_tutorial.md
  73. +1 -1  docs/practical_tutorials/image_classification_garbage_tutorial_en.md
  74. +1 -1  docs/practical_tutorials/ocr_det_license_tutorial.md
  75. +1 -1  docs/practical_tutorials/ocr_rec_chinese_tutorial.md
  76. +7 -7  docs/practical_tutorials/ts_anomaly_detection_en.md
  77. +3 -3  docs/practical_tutorials/ts_classification_en.md
  78. +5 -5  docs/practical_tutorials/ts_forecast_en.md
  79. +284 -284  docs/support_list/models_list.md
  80. +285 -285  docs/support_list/models_list_en.md
  81. +3 -0  paddlex/inference/components/llm/__init__.py
  82. +4 -3  paddlex/inference/components/paddle_predictor/predictor.py
  83. +4 -2  paddlex/inference/models/base/basic_predictor.py
  84. +6 -2  paddlex/inference/models/text_detection.py
  85. +14 -4  paddlex/inference/pipelines/base.py
  86. +1 -2  paddlex/inference/pipelines/formula_recognition.py
  87. +1 -2  paddlex/inference/pipelines/ocr.py
  88. +28 -64  paddlex/inference/pipelines/ppchatocrv3/ppchatocrv3.py
  89. +1 -1  paddlex/inference/pipelines/ppchatocrv3/utils.py
  90. +1 -2  paddlex/inference/pipelines/seal_recognition.py
  91. +27 -27  paddlex/inference/pipelines/serving/_pipeline_apps/ppchatocrv3.py
  92. +2 -2  paddlex/inference/pipelines/single_model_pipeline.py
  93. +4 -4  paddlex/inference/pipelines/table_recognition/table_recognition.py
  94. +8 -8  paddlex/inference/results/chat_ocr.py
  95. +2 -2  paddlex/inference/utils/pp_option.py
  96. +2 -2  paddlex/model.py
  97. +3 -3  paddlex/pipelines/PP-ChatOCRv3-doc.yaml

Changes suppressed because the diff is too large
+ 21 - 12
README.md


Changes suppressed because the diff is too large
+ 59 - 49
README_en.md


+ 2 - 2
docs/data_annotations/cv_modules/instance_segmentation.md

@@ -38,7 +38,7 @@ pip install labelme
 
 * 在 `fruit` 文件夹中创建待标注数据集的类别标签文件 `label.txt`,并在 `label.txt` 中按行写入待标注数据集的类别。以水果实例分割数据集的 `label.txt` 为例,如下图所示:
 
-![alt text](/tmp//images/data_prepare/instance_segmentation/06.png)
+![alt text](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/data_prepare/instance_segmentation/06.png)
 
 #### 2.3.2 启动 Labelme
 终端进入到带标注数据集根目录,并启动 `labelme` 标注工具。
@@ -102,4 +102,4 @@ dataset_dir                  # 数据集根目录,目录名称可以改变
 
 * 实例分割数据要求采用 `COCO` 数据格式标注出数据集中每张图像各个目标区域的像素边界和类别,采用 `[x1,y1,x2,y2,...,xn,yn]` 表示物体的多边形边界(segmentation)。其中,`(xn,yn)` 表示多边形各个角点坐标。标注信息存放到 `annotations` 目录下的 `json` 文件中,训练集 `instance_train.json` 和验证集 `instance_val.json` 分开存放。
 * 如果你有一批未标注数据,我们推荐使用 `LabelMe` 进行数据标注。对于使用 `LabelMe` 标注的数据集,PaddleX产线支持进行数据格式转换。
-* 为确保格式转换顺利完成,请严格遵循示例数据集的文件命名和组织方式: [LabelMe 示例数据集](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/instance_seg_labelme_examples.tar)。
+* 为确保格式转换顺利完成,请严格遵循示例数据集的文件命名和组织方式: [LabelMe 示例数据集](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/instance_seg_labelme_examples.tar)。
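For reference, here is a minimal sketch of one COCO-style instance annotation of the kind the diff above describes; the field names follow the standard COCO schema, and all values are invented for illustration:

```python
# One COCO-style instance annotation (illustrative values only).
# "segmentation" holds a polygon as [x1, y1, x2, y2, ..., xn, yn],
# where each (xn, yn) pair is a corner point of the object boundary.
# Entries like this live in annotations/instance_train.json and
# annotations/instance_val.json.
annotation = {
    "id": 1,
    "image_id": 42,
    "category_id": 3,  # index of the class in label.txt
    "segmentation": [[120.5, 80.0, 200.0, 85.5, 195.0, 160.0, 118.0, 150.0]],
    "bbox": [118.0, 80.0, 82.0, 80.0],  # [x, y, width, height]
    "area": 6560.0,
    "iscrowd": 0,
}
```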

+ 40 - 41
docs/module_usage/instructions/config_parameters_common.md

@@ -3,51 +3,50 @@
 # PaddleX通用模型配置文件参数说明
 
 # Global
-|参数名|数据类型|描述|默认值|必需/可选|
-|-|-|-|-|-|
-|model|str|指定模型名称|-|必需|
-|mode|str|指定模式(check_dataset/train/evaluate/export/predict)|-|必需|
-|dataset_dir|str|数据集路径|-|必需|
-|device|str|指定使用的设备|-|必需|
-|output|str|输出路径|"output"|可选|
+|参数名|数据类型|描述|默认值|
+|-|-|-|-|
+|model|str|指定模型名称|yaml文件中指定的模型名称|
+|mode|str|指定模式(check_dataset/train/evaluate/export/predict)|check_dataset|
+|dataset_dir|str|数据集路径|yaml文件中指定的数据集路径|
+|device|str|指定使用的设备|yaml文件中指定的设备id|
+|output|str|输出路径|"output"|
 # CheckDataset
-|参数名|数据类型|描述|默认值|必需/可选|
-|-|-|-|-|-|
-|convert.enable|bool|是否进行数据集格式转换|False|可选|
-|convert.src_dataset_type|str|需要转换的源数据集格式|null|可选|
-|split.enable|bool|是否重新划分数据集|False|可选|
-|split.train_percent|int|设置训练集的百分比,类型为0-100之间的任意整数,需要保证和val_percent值加和为100;|null|可选|
-|split.val_percent|int|设置验证集的百分比,类型为0-100之间的任意整数,需要保证和train_percent值加和为100;|null|可选|
-|split.gallery_percent|int|设置验证集中被查询样本的百分比,类型为0-100之间的任意整数,需要保证和train_percent、query_percent,值加和为100;该参数只有图像特征模块才会使用|null|可选|
-|split.query_percent|int|设置验证集中查询样本的百分比,类型为0-100之间的任意整数,需要保证和train_percent、gallery_percent,值加和为100;该参数只有图像特征模块才会使用|null|可选|
+|参数名|数据类型|描述|默认值|
+|-|-|-|-|
+|convert.enable|bool|是否进行数据集格式转换;图像分类、行人属性识别、车辆属性识别、文档方向分类、主体检测、行人检测、车辆检测、人脸检测、异常检测、文本检测、印章文本检测、文本识别、表格识别、图像矫正、版面区域检测暂不支持数据格式转换;图像多标签分类支持COCO格式的数据转换;图像特征、语义分割、实例分割支持LabelMe格式的数据转换;目标检测和小目标检测支持VOC、LabelMe格式的数据转换;公式识别支持PKL格式的数据转换;时序预测、时序异常检测、时序分类支持xlsx和xls格式的数据转换|False|
+|convert.src_dataset_type|str|需要转换的源数据集格式|null|
+|split.enable|bool|是否重新划分数据集|False|
+|split.train_percent|int|设置训练集的百分比,类型为0-100之间的任意整数,需要保证和val_percent值加和为100;|null|
+|split.val_percent|int|设置验证集的百分比,类型为0-100之间的任意整数,需要保证和train_percent值加和为100;|null|
+|split.gallery_percent|int|设置验证集中被查询样本的百分比,类型为0-100之间的任意整数,需要保证和train_percent、query_percent,值加和为100;该参数只有图像特征模块才会使用|null|
+|split.query_percent|int|设置验证集中查询样本的百分比,类型为0-100之间的任意整数,需要保证和train_percent、gallery_percent,值加和为100;该参数只有图像特征模块才会使用|null|
 
 # Train
-|参数名|数据类型|描述|默认值|必需/可选|
-|-|-|-|-|-|
-|num_classes|int|数据集中的类别数|-|必需|
-|epochs_iters|int|模型对训练数据的重复学习次数|-|必需|
-|batch_size|int|训练批大小|-|必需|
-|learning_rate|float|初始学习率|-|必需|
-|pretrain_weight_path|str|预训练权重路径|null|可选|
-|warmup_steps|int|预热步数|-|必需|
-|resume_path|str|模型中断后的恢复路径|null|可选|
-|log_interval|int|训练日志打印间隔|-|必需|
-|eval_interval|int|模型评估间隔|-|必需|
-|save_interval|int|模型保存间隔|-|必需|
+|参数名|数据类型|描述|默认值|
+|-|-|-|-|
+|num_classes|int|数据集中的类别数;如果您需要在私有数据集进行训练,需要对该参数进行设置;图像矫正、文本检测、印章文本检测、文本识别、公式识别、表格识别、时序预测、时序异常检测、时序分类不支持该参数|yaml文件中指定类别数|
+|epochs_iters|int|模型对训练数据的重复学习次数|yaml文件中指定的重复学习次数|
+|batch_size|int|训练批大小|yaml文件中指定的训练批大小|
+|learning_rate|float|初始学习率|yaml文件中指定的初始学习率|
+|pretrain_weight_path|str|预训练权重路径|null|
+|warmup_steps|int|预热步数|yaml文件中指定的预热步数|
+|resume_path|str|模型中断后的恢复路径|null|
+|log_interval|int|训练日志打印间隔|yaml文件中指定的训练日志打印间隔|
+|eval_interval|int|模型评估间隔|yaml文件中指定的模型评估间隔|
+|save_interval|int|模型保存间隔;异常检测、语义分割、图像矫正、时序预测、时序异常检测、时序分类暂不支持该参数|yaml文件中指定的模型保存间隔|
 
 # Evaluate
-|参数名|数据类型|描述|默认值|必需/可选|
-|-|-|-|-|-|
-|weight_path|str|评估模型路径|-|必需|
-|log_interval|int|评估日志打印间隔|-|必需|
+|参数名|数据类型|描述|默认值|
+|-|-|-|-|
+|weight_path|str|评估模型路径|默认训练产出的本地路径,当指定为None时,表示使用官方权重|
+|log_interval|int|评估日志打印间隔|yaml文件中指定的评估日志打印间隔|
 # Export
-|参数名|数据类型|描述|默认值|必需/可选|
-|-|-|-|-|-|
-|weight_path|str|导出模型的动态图权重路径|各模型官方动态图权重URL|必需|
+|参数名|数据类型|描述|默认值|
+|-|-|-|-|
+|weight_path|str|导出模型的动态图权重路径|默认训练产出的本地路径,当指定为None时,表示使用官方权重|
 # Predict
-|参数名|数据类型|描述|默认值|必需/可选|
-|-|-|-|-|-|
-|batch_size|int|预测批大小|-|必需|
-|model_dir|str|预测模型路径|PaddleX模型官方权重|可选|
-|input|str|预测输入路径|-|必需|
-
+|参数名|数据类型|描述|默认值|
+|-|-|-|-|
+|batch_size|int|预测批大小|yaml文件中指定的预测批大小|
+|model_dir|str|预测模型路径|默认训练产出的本地推理模型路径,当指定为None时,表示使用官方权重|
+|input|str|预测输入路径|yaml文件中指定的预测输入路径|

+ 41 - 42
docs/module_usage/instructions/config_parameters_common_en.md

@@ -1,56 +1,55 @@
 [简体中文](config_parameters_common.md) | English
 
-
-# PaddleX General Model Configuration File Parameter Explanation
+# PaddleX Common Model Configuration File Parameter Explanation
 
 # Global
-| Parameter Name | Data Type | Description | Default Value | Required/Optional |
-|-|-|-|-|-|
-| model | str | Specifies the model name | - | Required |
-| mode | str | Specifies the mode (check_dataset/train/evaluate/export/predict) | - | Required |
-| dataset_dir | str | Path to the dataset | - | Required |
-| device | str | Specifies the device to use | - | Required |
-| output | str | Output path | "output" | Optional |
+| Parameter Name | Data Type | Description | Default Value |
+|-|-|-|-|
+| model | str | Specifies the model name | Model name specified in the YAML file |
+| mode | str | Specifies the mode (check_dataset/train/evaluate/export/predict) | check_dataset |
+| dataset_dir | str | Path to the dataset | Dataset path specified in the YAML file |
+| device | str | Specifies the device to use | Device ID specified in the YAML file |
+| output | str | Output path | "output" |
 
 # CheckDataset
-| Parameter Name | Data Type | Description | Default Value | Required/Optional |
-|-|-|-|-|-|
-| convert.enable | bool | Whether to enable dataset format conversion | False | Optional |
-| convert.src_dataset_type | str | Source dataset format to convert | null | Optional |
-| split.enable | bool | Whether to re-split the dataset | False | Optional |
-| split.train_percent | int | Sets the percentage of the training set, an integer between 0-100, which needs to sum up to 100 with val_percent | null | Optional |
-| split.val_percent | int | Sets the percentage of the validation set, an integer between 0-100, which needs to sum up to 100 with train_percent | null | Optional |
-| split.gallery_percent | int | Sets the percentage of gallery samples in the validation set, an integer between 0-100, which needs to sum up to 100 with train_percent and query_percent; this parameter is only used in the image feature module | null | Optional |
-| split.query_percent | int | Sets the percentage of query samples in the validation set, an integer between 0-100, which needs to sum up to 100 with train_percent and gallery_percent; this parameter is only used in the image feature module | null | Optional |
+| Parameter Name | Data Type | Description | Default Value |
+|-|-|-|-|
+| convert.enable | bool | Whether to convert the dataset format; Image classification, pedestrian attribute recognition, vehicle attribute recognition, document orientation classification, object detection, pedestrian detection, vehicle detection, face detection, anomaly detection, text detection, seal text detection, text recognition, table recognition, image rectification, and layout area detection do not support data format conversion; Image multi-label classification supports COCO format conversion; Image feature, semantic segmentation, and instance segmentation support LabelMe format conversion; Object detection and small object detection support VOC and LabelMe format conversion; Formula recognition supports PKL format conversion; Time series prediction, time series anomaly detection, and time series classification support xlsx and xls format conversion | False |
+| convert.src_dataset_type | str | The source dataset format to be converted | null |
+| split.enable | bool | Whether to re-split the dataset | False |
+| split.train_percent | int | Sets the percentage of the training set, an integer between 0-100, ensuring the sum with val_percent is 100; | null |
+| split.val_percent | int | Sets the percentage of the validation set, an integer between 0-100, ensuring the sum with train_percent is 100; | null |
+| split.gallery_percent | int | Sets the percentage of gallery samples in the validation set, an integer between 0-100, ensuring the sum with train_percent and query_percent is 100; This parameter is only used in the image feature module | null |
+| split.query_percent | int | Sets the percentage of query samples in the validation set, an integer between 0-100, ensuring the sum with train_percent and gallery_percent is 100; This parameter is only used in the image feature module | null |
 
 # Train
-| Parameter Name | Data Type | Description | Default Value | Required/Optional |
-|-|-|-|-|-|
-| num_classes | int | Number of classes in the dataset | - | Required |
-| epochs_iters | int | Number of times the model learns from the training data | - | Required |
-| batch_size | int | Training batch size | - | Required |
-| learning_rate | float | Initial learning rate | - | Required |
-| pretrain_weight_path | str | Pre-trained weight path | null | Optional |
-| warmup_steps | int | Warmup steps | - | Required |
-| resume_path | str | Path to resume the model after interruption | null | Optional |
-| log_interval | int | Interval for printing training logs | - | Required |
-| eval_interval | int | Interval for model evaluation | - | Required |
-| save_interval | int | Interval for saving the model | - | Required |
+| Parameter Name | Data Type | Description | Default Value |
+|-|-|-|-|
+| num_classes | int | Number of classes in the dataset; If you need to train on a private dataset, you need to set this parameter; Image rectification, text detection, seal text detection, text recognition, formula recognition, table recognition, time series prediction, time series anomaly detection, and time series classification do not support this parameter | Number of classes specified in the YAML file |
+| epochs_iters | int | Number of times the model repeats learning the training data | Number of iterations specified in the YAML file |
+| batch_size | int | Training batch size | Training batch size specified in the YAML file |
+| learning_rate | float | Initial learning rate | Initial learning rate specified in the YAML file |
+| pretrain_weight_path | str | Pre-trained weight path | null |
+| warmup_steps | int | Warm-up steps | Warm-up steps specified in the YAML file |
+| resume_path | str | Model resume path after interruption | null |
+| log_interval | int | Training log printing interval | Training log printing interval specified in the YAML file |
+| eval_interval | int | Model evaluation interval | Model evaluation interval specified in the YAML file |
+| save_interval | int | Model saving interval; not supported for anomaly detection, semantic segmentation, image rectification, time series forecasting, time series anomaly detection, and time series classification  | Model saving interval specified in the YAML file |
 
 # Evaluate
-| Parameter Name | Data Type | Description | Default Value | Required/Optional |
-|-|-|-|-|-|
-| weight_path | str | Path to the model for evaluation | - | Required |
-| log_interval | int | Interval for printing evaluation logs | - | Required |
+| Parameter Name | Data Type | Description | Default Value |
+|-|-|-|-|
+| weight_path | str | Evaluation model path | Default local path from training output, when specified as None, indicates using official weights |
+| log_interval | int | Evaluation log printing interval | Evaluation log printing interval specified in the YAML file |
 
 # Export
-| Parameter Name | Data Type | Description | Default Value | Required/Optional |
-|-|-|-|-|-|
-| weight_path | str | Path to the dynamic graph weights of the model to export | Official dynamic graph weights URL for each model | Required |
+| Parameter Name | Data Type | Description | Default Value |
+|-|-|-|-|
+| weight_path | str | Dynamic graph weight path for exporting the model | Default local path from training output, when specified as None, indicates using official weights |
 
 # Predict
-| Parameter Name | Data Type | Description | Default Value | Required/Optional |
-|-|-|-|-|-|
-| batch_size | int | Prediction batch size | - | Required |
-| model_dir | str | Path to the prediction model | Official PaddleX model weights | Optional |
-| input | str | Path to the prediction input | - | Required |
+| Parameter Name | Data Type | Description | Default Value |
+|-|-|-|-|
+| batch_size | int | Prediction batch size | The prediction batch size specified in the YAML file |
+| model_dir | str | Path to the prediction model | The default local inference model path produced by training. When specified as None, it indicates the use of official weights |
+| input | str | Path to the prediction input | The prediction input path specified in the YAML file |
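To ground the Predict table above: its defaults mirror the wheel-package inference flow used in the module tutorials elsewhere in this commit. A minimal Python sketch, assuming the `create_model` API shown in those tutorials (the model name and image path are placeholders):

```python
from paddlex import create_model

# Sketch of the Predict parameters in Python form. Leaving model_dir unset
# falls back to the official weights, matching the table's default behavior.
model = create_model("PP-LCNet_x1_0")
output = model.predict("path/to/image.jpg", batch_size=1)
for res in output:
    res.print()  # result helper as used in the module tutorials
```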

+ 52 - 50
docs/module_usage/instructions/config_parameters_time_series.md

@@ -3,62 +3,64 @@
 # PaddleX时序任务模型配置文件参数说明
 
 # Global
-|参数名|数据类型|描述|默认值|必需/可选|
-|-|-|-|-|-|
-|model|str|指定模型名称|-|必需|
-|mode|str|指定模式(check_dataset/train/evaluate/export/predict)|-|必需|
-|dataset_dir|str|数据集路径|-|必需|
-|device|str|指定使用的设备|-|必需|
-|output|str|输出路径|"output"|可选|
+|参数名|数据类型|描述|默认值|
+|-|-|-|-|
+|model|str|指定模型名称|yaml文件中指定的模型名称|
+|mode|str|指定模式(check_dataset/train/evaluate/export/predict)|check_dataset|
+|dataset_dir|str|数据集路径|yaml文件中指定的数据集路径|
+|device|str|指定使用的设备|yaml文件中指定的设备id|
+|output|str|输出路径|"output"|
+
 # CheckDataset
-|参数名|数据类型|描述|默认值|必需/可选|
-|-|-|-|-|-|
-|convert.enable|bool|是否进行数据集格式转换|False|可选|
-|convert.src_dataset_type|str|需要转换的源数据集格式|null|不可选|
-|split.enable|bool|是否重新划分数据集|False|可选|
-|split.train_percent|int|设置训练集的百分比,类型为0-100之间的任意整数,需要保证和val_percent值加和为100;|-|可选|
-|split.val_percent|int|设置验证集的百分比,类型为0-100之间的任意整数,需要保证和train_percent值加和为100;|-|可选|
+|参数名|数据类型|描述|默认值|
+|-|-|-|-|
+|convert.enable|bool|是否进行数据集格式转换;时序预测、时序异常检测、时序分类支持xlsx和xls格式的数据转换|False|
+|convert.src_dataset_type|str|需要转换的源数据集格式|null|
+|split.enable|bool|是否重新划分数据集|False|
+|split.train_percent|int|设置训练集的百分比,类型为0-100之间的任意整数,需要保证和val_percent值加和为100;|null|
+|split.val_percent|int|设置验证集的百分比,类型为0-100之间的任意整数,需要保证和train_percent值加和为100;|null|
+
 # Train
 ### 时序任务公共参数
-|参数名|数据类型|描述|默认值|必需/可选|
-|-|-|-|-|-|
-|epochs_iters|int|模型对训练数据的重复学习次数|-|必需|
-|batch_size|int|批大小|-|必需|
-|learning_rate|float|初始学习率|-|必需|
-|time_col|str|时间列,须结合自己的数据设置时间序列数据集的时间列的列名称。|-|必需|
-|freq|str or int|频率,须结合自己的数据设置时间频率,如:1min、5min、1h。|-|必需|
+|参数名|数据类型|描述|默认值|
+|-|-|-|-|
+|epochs_iters|int|模型对训练数据的重复学习次数|yaml文件中指定的重复学习次数|
+|batch_size|int|批大小|yaml文件中指定的批大小|
+|learning_rate|float|初始学习率|yaml文件中指定的初始学习率|
+|time_col|str|时间列,须结合自己的数据设置时间序列数据集的时间列的列名称。|yaml文件中指定的时间列|
+|freq|str or int|频率,须结合自己的数据设置时间频率,如:1min、5min、1h。|yaml文件中指定的频率|
 ### 时序预测参数
-|参数名|数据类型|描述|默认值|必需/可选|
-|-|-|-|-|-|
-|target_cols|str|目标变量列,须结合自己的数据设置时间序列数据集的目标变量的列名称,可以为多个,多个之间用','分隔|-|必需|
-|input_len|int|对于时序预测任务,该参数表示输入给模型的历史时间序列长度;输入长度建议结合实际场景及预测长度综合考虑,一般来说设置的越大,能够参考的历史信息越多,模型精度通常越高。|-|必需|
-|predict_len|int|希望模型预测未来序列的长度;预测长度建议结合实际场景综合考虑,一般来说设置的越大,希望预测的未来序列越长,模型精度通常越低。|-|必需|
-|patience|int|early stop机制参数,指在停止训练之前,容忍模型在验证集上的性能多少次连续没有改进;耐心值越大,一般训练时间越长。|-|必需|
+|参数名|数据类型|描述|默认值|
+|-|-|-|-|
+|target_cols|str|目标变量列,须结合自己的数据设置时间序列数据集的目标变量的列名称,可以为多个,多个之间用','分隔|OT|
+|input_len|int|对于时序预测任务,该参数表示输入给模型的历史时间序列长度;输入长度建议结合实际场景及预测长度综合考虑,一般来说设置的越大,能够参考的历史信息越多,模型精度通常越高。|96|
+|predict_len|int|希望模型预测未来序列的长度;预测长度建议结合实际场景综合考虑,一般来说设置的越大,希望预测的未来序列越长,模型精度通常越低。|96|
+|patience|int|early stop机制参数,指在停止训练之前,容忍模型在验证集上的性能多少次连续没有改进;耐心值越大,一般训练时间越长。|10|
 ### 时序异常检测
-|参数名|数据类型|描述|默认值|必需/可选|
-|-|-|-|-|-|
-|input_len|int|对于时序异常检测任务,该参数表示输入给模型的时间序列长度,会按照该长度对时间序列切片,预测该长度下这一段时序序列是否有异常;输入长度建议结合实际场景考虑。如:输入长度为 96,则表示希望预测 96 个时间点是否有异常。|-|必需|
-|feature_cols|str|特征变量表示能够判断设备是否异常的相关变量,例如设备是否异常,可能与设备运转时的散热量有关。结合自己的数据,设置特征变量的列名称,可以为多个,多个之间用','分隔。|-|必需|
-|label_col|str|代表时序时间点是否异常的编号,异常点为 1,正常点为 0。|-|必需|
+|参数名|数据类型|描述|默认值|
+|-|-|-|-|
+|input_len|int|对于时序异常检测任务,该参数表示输入给模型的时间序列长度,会按照该长度对时间序列切片,预测该长度下这一段时序序列是否有异常;输入长度建议结合实际场景考虑。如:输入长度为 96,则表示希望预测 96 个时间点是否有异常。|96|
+|feature_cols|str|特征变量表示能够判断设备是否异常的相关变量,例如设备是否异常,可能与设备运转时的散热量有关。结合自己的数据,设置特征变量的列名称,可以为多个,多个之间用','分隔。|feature_0,feature_1|
+|label_col|str|代表时序时间点是否异常的编号,异常点为 1,正常点为 0。|label|
 ### 时序分类
-|参数名|数据类型|描述|默认值|必需/可选|
-|-|-|-|-|-|
-|num_classes|int|数据集中的类别数|-|必需|
-|target_cols|str|用于判别类别的特征变量列,须结合自己的数据设置时间序列数据集的目标变量的列名称,可以为多个,多个之间用','分隔|-|必需|
-|freq|str or int|频率,须结合自己的数据设置时间频率,如:1min、5min、1h。|-|必需|
-|group_id|str|一个群组编号表示的是一个时序样本,相同编号的时序序列组成一个样本。结合自己的数据设置指定群组编号的列名称, 如:group_id。|-|必需|
-|static_cov_cols|str|代表时序的类别编号列,同一个样本的标签相同。结合自己的数据设置类别的列名称,如:label。|-|必需|
+|参数名|数据类型|描述|默认值|
+|-|-|-|-|
+|target_cols|str|用于判别类别的特征变量列,须结合自己的数据设置时间序列数据集的目标变量的列名称,可以为多个,多个之间用','分隔|dim_0,dim_1,dim_2|
+|freq|str or int|频率,须结合自己的数据设置时间频率,如:1min、5min、1h。|1|
+|group_id|str|一个群组编号表示的是一个时序样本,相同编号的时序序列组成一个样本。结合自己的数据设置指定群组编号的列名称, 如:group_id。| group_id|
+|static_cov_cols|str|代表时序的类别编号列,同一个样本的标签相同。结合自己的数据设置类别的列名称,如:label。|label|
 # Evaluate
-|参数名|数据类型|描述|默认值|必需/可选|
-|-|-|-|-|-|
-|weight_path|str|评估模型路径|-|必需|
+|参数名|数据类型|描述|默认值|
+|-|-|-|-|
+|weight_path|str|评估模型路径|默认训练产出的本地路径,当指定为None时,表示使用官方权重|
+
 # Export
-|参数名|数据类型|描述|默认值|必需/可选|
-|-|-|-|-|-|
-|weight_path|str|导出模型的动态图权重路径|各模型官方动态图权重URL|必需|
+|参数名|数据类型|描述|默认值|
+|-|-|-|-|
+|weight_path|str|导出模型的动态图权重路径|默认训练产出的本地路径,当指定为None时,表示使用官方权重|
 # Predict
-|参数名|数据类型|描述|默认值|必需/可选|
-|-|-|-|-|-|
-|model_dir|str|预测模型路径|模型官方权重|可选|
-|input|str|预测输入路径|-|必需|
-|batch_size|int|预测批大小|-|必需|
+|参数名|数据类型|描述|默认值|
+|-|-|-|-|
+|batch_size|int|预测批大小|yaml文件中指定的预测批大小|
+|model_dir|str|预测模型路径|默认训练产出的本地推理模型路径,当指定为None时,表示使用官方权重|
+|input|str|预测输入路径|yaml文件中指定的预测输入路径|

+ 55 - 75
docs/module_usage/instructions/config_parameters_time_series_en.md

@@ -4,89 +4,69 @@
 
 # Global
 
-| Parameter Name | Data Type | Description | Default Value | Required/Optional |  
-| --- | --- | --- | --- | --- |  
-| model | str | Specifies the model name | - | Required |  
-| mode | str | Specifies the mode (check_dataset/train/evaluate/export/predict) | - | Required |  
-| dataset_dir | str | Path to the dataset | - | Required |  
-| device | str | Specifies the device to use | - | Required |  
-| output | str | Output directory path | "output" | Optional |
+| Parameter Name | Data Type | Description | Default Value |
+|-|-|-|-|
+| model | str | Specifies the model name | Model name specified in the YAML file |
+| mode | str | Specifies the mode (check_dataset/train/evaluate/export/predict) | check_dataset |
+| dataset_dir | str | Path to the dataset | Dataset path specified in the YAML file |
+| device | str | Specifies the device to use | Device ID specified in the YAML file |
+| output | str | Output path | "output" |
 
 # CheckDataset
 
-| Parameter Name | Data Type | Description | Default Value | Required/Optional |  
-| --- | --- | --- | --- | --- |  
-| convert.enable | bool | Whether to enable dataset format conversion | False | Optional |  
-| convert.src_dataset_type | str | The source dataset format to convert from | null | Required |  
-| split.enable | bool | Whether to re-split the dataset | False | Optional |  
-| split.train_percent | int | Sets the percentage of the training set, an integer between 0-100. It should sum up to 100 with `val_percent`. | - | Optional |  
-| split.val_percent | int | Sets the percentage of the validation set, an integer between 0-100. It should sum up to 100 with `train_percent`. | - | Optional |  
-  
-# Train
-
-### Common parameters for time series tasks
-
-| Parameter Name | Data Type | Description | Default Value | Required/Optional |  
-| --- | --- | --- | --- | --- |  
-| epochs_iters | int | Number of times the model learns from the training data | - | Required |  
-| batch_size | int | Batch size for training | - | Required |  
-| learning_rate | float | Initial learning rate | - | Required |  
-| time_col | str | Time column, must be set to the column name that represents the time series data's timestamp in your dataset. | - | Required |  
-| freq | str or int | Frequency, must be set to the time frequency of your data, such as '1min', '5min', '1h'. | - | Required |  
-  
-**Note**: The default values for these parameters are not specified ("-"), indicating that they must be explicitly provided by the user based on their specific dataset and requirements.
-
-### Time series forecasting parameters
-
-
-| Parameter Name | Data Type | Description | Default Value | Required/Optional |  
-| --- | --- | --- | --- | --- |  
-| target_cols | str | Target variable column(s), must be set to the column name(s) that represent the target variable(s) in your time series dataset. Multiple columns can be specified by separating them with commas. | - | Required |  
-| input_len | int | For time series prediction tasks, this parameter represents the length of historical time series data input to the model. The input length should be considered in conjunction with the prediction length and the specific scenario. Generally, a larger input length allows the model to reference more historical information, which may lead to higher accuracy. | - | Required |  
-| predict_len | int | The desired length of the future sequence that the model should predict. The prediction length should be considered in conjunction with the specific scenario. Generally, a larger prediction length means predicting a longer future sequence, which may lead to lower model accuracy. | - | Required |  
-| patience | int | A parameter for the early stopping mechanism, indicating how many times the model's performance on the validation set can be consecutively unchanged before stopping training. A larger patience value generally results in longer training time. | - | Required |  
-  
-**Note**: The default values for these parameters are not specified ("-"), indicating that they must be explicitly provided by the user based on their specific dataset and requirements.
+| Parameter Name | Data Type | Description | Default Value |
+|-|-|-|-|
+| convert.enable | bool | Whether to convert the dataset format; time series prediction, anomaly detection, and classification support data conversion from xlsx and xls formats | False |
+| convert.src_dataset_type | str | The source dataset format to be converted | null |
+| split.enable | bool | Whether to re-split the dataset | False |
+| split.train_percent | int | Sets the percentage of the training set, an integer between 0-100, ensuring the sum with val_percent is 100; | null |
+| split.val_percent | int | Sets the percentage of the validation set, an integer between 0-100, ensuring the sum with train_percent is 100; | null |
 
-### Time series anomaly detection parameters
 
-| Parameter Name | Data Type | Description | Default Value | Required/Optional |  
-| --- | --- | --- | --- | --- |  
-| input_len | int | For time series anomaly detection tasks, this parameter represents the length of the time series input to the model. The time series will be sliced according to this length, and the model will predict whether there are anomalies within this segment. The input length should be considered based on the specific scenario. For example, an input length of 96 indicates the desire to predict whether there are anomalies at 96 time points. | - | Required |  
-| feature_cols | str | Feature columns represent variables that can be used to determine whether a device is anomalous. For instance, whether a device is anomalous may be related to the amount of heat it generates during operation. Based on your data, set the column names of the feature variables. Multiple columns can be specified by separating them with commas. | - | Required |  
-| label_col | str | Represents the label indicating whether a time series point is anomalous. Anomalous points are labeled as 1, and normal points are labeled as 0. | - | Required |  
-  
-**Note**: The default values for these parameters are not specified ("-"), indicating that they must be explicitly provided by the user based on their specific dataset and requirements. 
-
-### Time series classification parameters
-
-| Parameter Name | Data Type | Description | Default Value | Required/Optional |  
-| --- | --- | --- | --- | --- |  
-| num_classes | int | The number of classes in the dataset. | - | Required |  
-| target_cols | str | The column(s) of the feature variable used to determine the class, which must be set according to your dataset in the time series dataset. Multiple columns can be specified by separating them with commas. | - | Required |  
-| freq | str or int | The frequency of the time series, which must be set according to your data. Examples include '1min', '5min', '1h'. | - | Required |  
-| group_id | str | A group ID represents a time series sample. Time series sequences with the same ID constitute a sample. Set the column name for the specified group ID according to your data, e.g., 'group_id'. | - | Required |  
-| static_cov_cols | str | Represents the class ID column for the time series. Samples within the same class share the same label. Set the column name for the class according to your data, e.g., 'label'. | - | Required |  
-  
-**Note**: The default values for these parameters are not specified ("-"), indicating that they must be explicitly provided by the user based on their specific dataset and requirements. 
+# Train
+### Common Parameters for Time Series Tasks
+| Parameter Name | Data Type | Description | Default Value |
+|-|-|-|-|
+| epochs_iters | int | The number of times the model repeats learning the training data | Number of iterations specified in the YAML file |
+| batch_size | int | Batch size | Batch size specified in the YAML file |
+| learning_rate | float | Initial learning rate | Initial learning rate specified in the YAML file |
+| time_col | str | Time column, set the column name of the time series dataset's time column based on your data. | Time column specified in the YAML file |
+| freq | str or int | Frequency, set the time frequency based on your data, e.g., 1min, 5min, 1h. | Frequency specified in the YAML file |
+### Time Series Forecasting Parameters
+| Parameter Name | Data Type | Description | Default Value |
+|-|-|-|-|
+| target_cols | str | Target variable column(s), set the column name(s) of the target variable(s) in the time series dataset, can be multiple, separated by commas | OT |
+| input_len | int | For time series forecasting tasks, this parameter represents the length of historical time series input to the model; the input length should be considered in conjunction with the prediction length, generally, the larger the setting, the more historical information can be referenced, and the higher the model accuracy. | 96 |
+| predict_len | int | The length of the future sequence that you want the model to predict; the prediction length should be considered in conjunction with the actual scenario, generally, the larger the setting, the longer the future sequence you want to predict, and the lower the model accuracy. | 96 |
+| patience | int | Early stopping mechanism parameter, indicating how many times the model's performance on the validation set can be continuously unimproved before stopping training; a larger patience value generally results in longer training time. | 10 |
+### Time Series Anomaly Detection
+| Parameter Name | Data Type | Description | Default Value |
+|-|-|-|-|
+| input_len | int | For time series anomaly detection tasks, this parameter represents the length of the time series input to the model, which will slice the time series according to this length to predict whether there is an anomaly in this segment of the time series; the input length should be considered in conjunction with the actual scenario. For example, an input length of 96 indicates that you want to predict whether there are anomalies in 96 time points. | 96 |
+| feature_cols | str | Feature variables indicating variables related to whether the device is abnormal, e.g., whether the device is abnormal may be related to the heat dissipation during its operation. Set the column name(s) of the feature variable(s) based on your data, can be multiple, separated by commas. | feature_0,feature_1 |
+| label_col | str | Represents the number indicating whether a time series point is abnormal, with 1 for abnormal points and 0 for normal points. | label |
+
+### Time Series Classification
+| Parameter Name | Data Type | Description | Default Value |
+|-|-|-|-|
+| target_cols | str | Feature variable columns used for category discrimination. You need to set the column names of the target variables in the time series dataset based on your own data. It can be multiple, separated by commas. | dim_0,dim_1,dim_2 |
+| freq | str or int | Frequency, which needs to be set based on your own data. Examples of time frequencies include: 1min, 5min, 1h. | 1 |
+| group_id | str | A group ID represents a time series sample. Time series sequences with the same ID constitute a sample. Set the column name of the specified group ID based on your own data, e.g., group_id. | group_id |
+| static_cov_cols | str | Represents the category number column of the time series. The labels of the same sample are the same. Set the column name of the category based on your own data, e.g., label. | label |
 
 # Evaluate
-
-| Parameter Name | Data Type | Description | Default Value | Required/Optional |  
-| --- | --- | --- | --- | --- |  
-| weight_path | str | The path to the model weights for evaluation. | - | Required |  
+| Parameter Name | Data Type | Description | Default Value |
+|-|-|-|-|
+| weight_path | str | Evaluation model path | Default local path from training output, when specified as None, indicates using official weights |
 
 # Export
+| Parameter Name | Data Type | Description | Default Value |
+|-|-|-|-|
+| weight_path | str | Dynamic graph weight path for exporting the model | Default local path from training output, when specified as None, indicates using official weights |
 
-| Parameter Name | Data Type | Description | Default Value | Required/Optional |    
-| -------------- | --------- | -------------------------------------------------- | ------------------- | ------------- |    
-| weight_path    | str       | The path to the dynamic graph weight file used for exporting the model |The official dynamic graph weight URLs for each model. | Required      |    
-  
 # Predict
-
-| Parameter Name | Data Type | Description | Default Value | Required/Optional |  
-| -------------- | --------- | ---------------------------------- | --------------- | ------------- |    
-| model_dir      | str       | Path to the directory containing the prediction model |The official weight | Optional      |  
-| input          | str       | Path to the input data for prediction | (No default, user must specify) | Required      |  
-| batch_size     | int       | The number of samples processed in each prediction batch | (No default, user must specify) | Required      |  
-
+| Parameter Name | Data Type | Description | Default Value |
+|-|-|-|-|
+| batch_size | int | Prediction batch size | The prediction batch size specified in the YAML file |
+| model_dir | str | Path to the prediction model | The default local inference model path produced by training. When specified as None, it indicates the use of official weights |
+| input | str | Path to the prediction input | The prediction input path specified in the YAML file |
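The same Predict mapping applies to the time-series modules. A hypothetical sketch under the same assumed `create_model` API; the model name and CSV path are placeholders, and the input file is assumed to contain the time_col and target_cols configured at training time:

```python
from paddlex import create_model

# Hypothetical time-series forecasting call: the CSV must carry the time
# column and target columns that the model was trained with.
model = create_model("DLinear")
output = model.predict("path/to/series.csv", batch_size=1)
for res in output:
    res.print()
```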

+ 1 - 1
docs/module_usage/tutorials/cv_modules/anomaly_detection.md

@@ -24,7 +24,7 @@
 完成wheel包的安装后,几行代码即可完成图像异常检测模块的推理,可以任意切换该模块下的模型,您也可以将图像异常检测的模块中的模型推理集成到您的项目中。
 运行以下代码前,请您下载[示例图片](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/uad_grid.png)到本地。
 ```python
-from paddlex import create_model 
+from paddlex import create_model
 
 model_name = "STFPM"
 

+ 2 - 2
docs/module_usage/tutorials/cv_modules/face_detection.md

@@ -23,7 +23,7 @@
 完成whl包的安装后,几行代码即可完成人脸检测模块的推理,可以任意切换该模块下的模型,您也可以将人脸检测的模块中的模型推理集成到您的项目中。运行以下代码前,请您下载[示例图片](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/face_detection.png)到本地。
 
 ```python
-from paddlex import create_model 
+from paddlex import create_model
 
 model_name = "PicoDet_LCNet_x2_5_face"
 
@@ -103,7 +103,7 @@ python main.py -c paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml \
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
 
 
-数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png): 
+数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
 
 

+ 2 - 2
docs/module_usage/tutorials/cv_modules/human_detection.md

@@ -49,7 +49,7 @@
 完成wheel包的安装后,几行代码即可完成行人检测模块的推理,可以任意切换该模块下的模型,您也可以将行人检测的模块中的模型推理集成到您的项目中。运行以下代码前,请您下载[示例图片](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/human_detection.jpg)到本地。
 
 ```python
-from paddlex import create_model 
+from paddlex import create_model
 
 model_name = "PP-YOLOE-S_human"
 
@@ -132,7 +132,7 @@ python main.py -c paddlex/configs/human_detection/PP-YOLOE-S_human.yaml \
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
 
 
-数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png): 
+数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/ped_det/01.png)
 </details>

+ 3 - 3
docs/module_usage/tutorials/cv_modules/image_classification.md

@@ -686,13 +686,13 @@ python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml \
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
 
 
-另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png): 
+另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/image_classification/01.png)
 </details>
 
 #### 4.1.3 数据集格式转换/数据集划分(可选)
-在您完成数据校验之后,可以通过**修改配置文件**或是**追加超参数**的方式对数据集的格式进行转换,也可以对数据集的训练/验证比例进行重新划分。
+在您完成数据校验之后,可以通过**修改配置文件**或是**追加超参数**的方式对数据集的格式进行转换,也可以对数据集的训练/验证比例进行重新划分。
 
 <details>
   <summary>👉 <b>格式转换/数据集划分详情(点击展开)</b></summary>
@@ -757,7 +757,7 @@ python main.py -c paddlex/configs/image_classification/PP-LCNet_x1_0.yaml  \
 * 指定模型的`.yaml` 配置文件路径(此处为`PP-LCNet_x1_0.yaml`)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
-其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
+其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
 
 <details>
   <summary>👉 <b>更多说明(点击展开)</b></summary>

+ 7 - 7
docs/module_usage/tutorials/cv_modules/image_feature.md

@@ -152,7 +152,7 @@ python main.py -c paddlex/configs/general_recognition/PP-ShiTuV2_rec.yaml \
 * `attributes.train_sample_paths`:该数据集训练样本可视化图片相对路径列表;
 * `attributes.gallery_sample_paths`:该数据集被查询样本可视化图片相对路径列表;
 * `attributes.query_sample_paths`:该数据集查询样本可视化图片相对路径列表;
-另外,数据集校验还对数据集中图像数量和图像类别情况进行了分析,并绘制了分布直方图(histogram.png): 
+另外,数据集校验还对数据集中图像数量和图像类别情况进行了分析,并绘制了分布直方图(histogram.png):
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/img_recognition/01.png)
 </details>
@@ -184,7 +184,7 @@ tar -xf ./dataset/image_classification_labelme_examples.tar -C ./dataset/
 ......
 CheckDataset:
   ......
-  convert: 
+  convert:
     enable: True
     src_dataset_type: LabelMe
   ......
@@ -205,7 +205,7 @@ python main.py -c paddlex/configs/general_recognition/PP-ShiTuV2_rec.yaml  \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/image_classification_labelme_examples \
     -o CheckDataset.convert.enable=True \
-    -o CheckDataset.convert.src_dataset_type=LabelMe 
+    -o CheckDataset.convert.src_dataset_type=LabelMe
 ```
 **(2)数据集划分**
 
@@ -248,7 +248,7 @@ python main.py -c paddlex/configs/general_recognition/PP-ShiTuV2_rec.yaml  \
     -o CheckDataset.split.enable=True \
     -o CheckDataset.split.train_percent=70 \
     -o CheckDataset.split.gallery_percent=20 \
-    -o CheckDataset.split.query_percent=10 
+    -o CheckDataset.split.query_percent=10
 ```
 > ❗注意 :由于图像特征模型评估的特殊性,当且仅当 train、query、gallery 集合属于同一类别体系下,数据切分才有意义,在图像特征模的评估过程中,必须满足 gallery 集合和 query 集合属于同一类别体系,其允许和 train 集合不在同一类别体系, 如果 gallery 集合和 query 集合与 train 集合不在同一类别体系,则数据划分后的评估没有意义,建议谨慎操作。
 
@@ -267,7 +267,7 @@ python main.py -c paddlex/configs/general_recognition/PP-ShiTuV2_rec.yaml \
 * 指定模型的`.yaml` 配置文件路径(此处为`PP-ShiTuV2_rec.yaml`)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
-其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
+其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
 
 <details>
   <summary>👉 <b>更多说明(点击展开)</b></summary>
@@ -296,7 +296,7 @@ python main.py -c paddlex/configs/general_recognition/PP-ShiTuV2_rec.yaml \
 
 * 指定模型的`.yaml` 配置文件路径(此处为`PP-ShiTuV2_rec.yaml`)
 * 指定模式为模型评估:`-o Global.mode=evaluate`
-* 指定验证数据集路径:`-o Global.dataset_dir`. 
+* 指定验证数据集路径:`-o Global.dataset_dir`.
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Evaluate`下的字段来进行设置,详细请参考[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
 
 <details>
@@ -326,7 +326,7 @@ python main.py -c paddlex/configs/general_recognition/PP-ShiTuV2_rec.yaml  \
 * 指定模型的`.yaml` 配置文件路径(此处为`PP-ShiTuV2_rec.yaml`)
 * 指定模式为模型推理预测:`-o Global.mode=predict`
 * 指定模型权重路径:`-o Predict.model_dir="./output/best_model/inference"`
-* 指定输入数据路径:`-o Predict.input="..."`. 
+* 指定输入数据路径:`-o Predict.input="..."`.
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Predict`下的字段来进行设置,详细请参考[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
 
 > ❗ 注意:图像特征模型的推理结果为一组向量,需要配合检索模块完成图像的识别。

+ 18 - 17
docs/module_usage/tutorials/cv_modules/instance_segmentation.md

@@ -77,7 +77,7 @@
         <td>-</td>
         <td>-</td>
         <td>157.5 M</td>
-        <td rowspan="7">Mask R-CNN是由华盛顿首例即现投影卡的一个全任务深度学习模型,能够在一个模型中完成图片实例的分类和定位,并结合图像级的遮罩(Mask)来完成分割任务。</td>
+        <td rowspan="6">Mask R-CNN是由华盛顿首例即现投影卡的一个全任务深度学习模型,能够在一个模型中完成图片实例的分类和定位,并结合图像级的遮罩(Mask)来完成分割任务。</td>
     </tr>
     <tr>
         <td>MaskRCNN-ResNet50-vd-FPN</td>
@@ -87,13 +87,6 @@
         <td>157.5 M</td>
     </tr>
     <tr>
-        <td>MaskRCNN-ResNet50-vd-SSLDv2-FPN</td>
-        <td>38.2</td>
-        <td>-</td>
-        <td>-</td>
-        <td>127.2 M</td>
-    </tr>
-    <tr>
         <td>MaskRCNN-ResNet50</td>
         <td>32.8</td>
         <td>-</td>
@@ -130,6 +123,14 @@
         <td>31.5 M</td>
         <td>PP-YOLOE_seg 是一种基于PP-YOLOE的实例分割模型。该模型沿用了PP-YOLOE的backbone和head,通过设计PP-YOLOE实例分割头,大幅提升了实例分割的性能和推理速度。</td>
     </tr>
+    <tr>
+        <td>SOLOv2</td>
+        <td>35.5</td>
+        <td>-</td>
+        <td>-</td>
+        <td>179.1 M</td>
+        <td> SOLOv2 是一种按位置分割物体的实时实例分割算法。该模型是SOLO的改进版本,通过引入掩码学习和掩码NMS,实现了精度和速度上取得良好平衡。</td>
+    </tr>
 </table>
 
 
@@ -155,7 +156,7 @@ for res in output:
 关于更多 PaddleX 的单模型推理的 API 的使用方法,可以参考[PaddleX单模型Python脚本使用说明](../../instructions/model_python_API.md)。
 
 ## 四、二次开发
-如果你追求更高精度的现有模型,可以使用 PaddleX 的二次开发能力,开发更好的实例分割模型。在使用 PaddleX 开发实例分割模型之前,请务必安装 PaddleX 的 分割 相关模型训练插件,安装过程可以参考[PaddleX本地安装教程](https://ku.baidu-int.com/knowledge/HFVrC7hq1Q/yKeL8Lljko/y0mmii50BW/dF1VvOPZmZXXzn?t=mention&mt=doc&dt=doc)中的二次开发部分。
+如果你追求更高精度的现有模型,可以使用 PaddleX 的二次开发能力,开发更好的实例分割模型。在使用 PaddleX 开发实例分割模型之前,请务必安装 PaddleX 的 分割 相关模型训练插件,安装过程可以参考[PaddleX本地安装教程](../../../installation/installation.md)中的二次开发部分。
 
 ### 4.1 数据准备
 在进行模型训练前,需要准备相应任务模块的数据集。PaddleX 针对每一个模块提供了数据校验功能,**只有通过数据校验的数据才可以进行模型训练**。此外,PaddleX 为每一个模块都提供了 Demo 数据集,您可以基于官方提供的 Demo 数据完成后续的开发。若您希望用私有数据集进行后续的模型训练,可以参考[PaddleX实例分割任务模块数据标注教程](../../../data_annotations/cv_modules/instance_segmentation.md)。
@@ -215,7 +216,7 @@ python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
 * `attributes.val_samples`:该数据集验证集样本数量为 19;
 * `attributes.train_sample_paths`:该数据集训练集样本可视化图片相对路径列表;
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
-另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png): 
+另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/instanceseg/01.png)
 </details>
@@ -246,7 +247,7 @@ tar -xf ./dataset/instance_seg_labelme_examples.tar -C ./dataset/
 ......
 CheckDataset:
   ......
-  convert: 
+  convert:
     enable: True
     src_dataset_type: LabelMe
   ......
@@ -256,7 +257,7 @@ CheckDataset:
 ```bash
 python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml\
     -o Global.mode=check_dataset \
-    -o Global.dataset_dir=./dataset/instance_seg_labelme_examples 
+    -o Global.dataset_dir=./dataset/instance_seg_labelme_examples
 ```
 数据转换执行之后,原有标注文件会被在原路径下重命名为 `xxx.bak`。
 
@@ -296,7 +297,7 @@ CheckDataset:
 ```bash
 python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
     -o Global.mode=check_dataset \
-    -o Global.dataset_dir=./dataset/instance_seg_labelme_examples 
+    -o Global.dataset_dir=./dataset/instance_seg_labelme_examples
 ```
 数据划分执行之后,原有标注文件会被在原路径下重命名为 `xxx.bak`。
 
@@ -322,10 +323,10 @@ python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
 ```
 需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`Mask-RT-DETR-L.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为 `Mask-RT-DETR-L.yaml`)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
-其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
+其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
 
 <details>
   <summary>👉 <b>更多说明(点击展开)</b></summary>
@@ -353,7 +354,7 @@ python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
 ```
 与模型训练类似,需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`Mask-RT-DETR-L`)
+* 指定模型的`.yaml` 配置文件路径(此处为 `Mask-RT-DETR-L.yaml`)
 * 指定模式为模型评估:`-o Global.mode=evaluate`
 * 指定验证数据集路径:`-o Global.dataset_dir`
 其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Evaluate`下的字段来进行设置,详细请参考[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
@@ -382,7 +383,7 @@ python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
 ```
 与模型训练和评估类似,需要如下几步:
 
-* 指定模型的`.yaml` 配置文件路径(此处为`Mask-RT-DETR-L.yaml`)
+* 指定模型的`.yaml` 配置文件路径(此处为 `Mask-RT-DETR-L.yaml`)
 * 指定模式为模型推理预测:`-o Global.mode=predict`
 * 指定模型权重路径:`-o Predict.model_dir="./output/best_model/inference"`
 * 指定输入数据路径:`-o Predict.input="..."`
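Putting the predict step above into Python form, a sketch assuming the `create_model` API used earlier in this tutorial; it runs with the official weights, and switching to locally trained weights follows the `-o Predict.model_dir="./output/best_model/inference"` pattern shown above:

```python
from paddlex import create_model

# Sketch of instance segmentation inference with Mask-RT-DETR-L; the image
# path and output directory are placeholders.
model = create_model("Mask-RT-DETR-L")
output = model.predict("path/to/image.jpg", batch_size=1)
for res in output:
    res.save_to_img("./output/")  # save the visualized segmentation result
```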

+ 13 - 12
docs/module_usage/tutorials/cv_modules/instance_segmentation_en.md

@@ -76,7 +76,7 @@ The instance segmentation module is a crucial component in computer vision syste
         <td>-</td>
         <td>-</td>
         <td>157.5 M</td>
-        <td rowspan="7">Mask R-CNN is a full-task deep learning model from Facebook AI Research (FAIR) that can perform object classification and localization in a single model, combined with image-level masks to complete segmentation tasks.</td>
+        <td rowspan="6">Mask R-CNN is a full-task deep learning model from Facebook AI Research (FAIR) that can perform object classification and localization in a single model, combined with image-level masks to complete segmentation tasks.</td>
     </tr>
     <tr>
         <td>MaskRCNN-ResNet50-vd-FPN</td>
@@ -86,13 +86,6 @@ The instance segmentation module is a crucial component in computer vision syste
         <td>157.5 M</td>
     </tr>
     <tr>
-        <td>MaskRCNN-ResNet50-vd-SSLDv2-FPN</td>
-        <td>38.2</td>
-        <td>-</td>
-        <td>-</td>
-        <td>127.2 M</td>
-    </tr>
-    <tr>
         <td>MaskRCNN-ResNet50</td>
         <td>32.8</td>
         <td>-</td>
@@ -129,6 +122,14 @@ The instance segmentation module is a crucial component in computer vision syste
         <td>31.5 M</td>
         <td>PP-YOLOE_seg is an instance segmentation model based on PP-YOLOE. This model inherits PP-YOLOE's backbone and head, significantly enhancing instance segmentation performance and inference speed through the design of a PP-YOLOE instance segmentation head.</td>
     </tr>
+        <tr>
+        <td>SOLOv2</td>
+        <td>35.5</td>
+        <td>-</td>
+        <td>-</td>
+        <td>179.1 M</td>
+        <td> SOLOv2 is a real-time instance segmentation algorithm that segments objects by location. This model is an improved version of SOLO, achieving a good balance between accuracy and speed through the introduction of mask learning and mask NMS.</td>
+    </tr>
 </table>
 
 
@@ -244,7 +245,7 @@ tar -xf ./dataset/instance_seg_labelme_examples.tar -C ./dataset/
 ......
 CheckDataset:
   ......
-  convert: 
+  convert:
     enable: True
     src_dataset_type: LabelMe
   ......
@@ -254,7 +255,7 @@ Then execute the command:
 ```bash
 python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml\
     -o Global.mode=check_dataset \
-    -o Global.dataset_dir=./dataset/instance_seg_labelme_examples 
+    -o Global.dataset_dir=./dataset/instance_seg_labelme_examples
 ```
 After the data conversion is executed, the original annotation files will be renamed to `xxx.bak` in the original path.
 
@@ -295,7 +296,7 @@ Then execute the command:
 ```bash
 python main.py -c paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml \
     -o Global.mode=check_dataset \
-    -o Global.dataset_dir=./dataset/instance_seg_labelme_examples 
+    -o Global.dataset_dir=./dataset/instance_seg_labelme_examples
 ```
 After data splitting, the original annotation files will be renamed as `xxx.bak` in the original path.
 
@@ -323,7 +324,7 @@ The following steps are required:
 
 * Specify the path to the `.yaml` configuration file of the model (here it is `Mask-RT-DETR-L.yaml`)
 * Specify the mode as model training: `-o Global.mode=train`
-* Specify the path to the training dataset: `-o Global.dataset_dir`. 
+* Specify the path to the training dataset: `-o Global.dataset_dir`.
 Other related parameters can be set by modifying the fields under `Global` and `Train` in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to specify the first 2 GPUs for training: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Configuration File Parameters Instructions](../../instructions/config_parameters_common_en.md).
 
 <details>

+ 2 - 2
docs/module_usage/tutorials/cv_modules/mainbody_detection.md

@@ -40,7 +40,7 @@
 完成whl包的安装后,几行代码即可完成主体检测模块的推理,可以任意切换该模块下的模型,您也可以将主体检测的模块中的模型推理集成到您的项目中。运行以下代码前,请您下载[示例图片](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_object_detection_002.png)到本地。
 
 ```python
-from paddlex import create_model 
+from paddlex import create_model
 
 model_name = "PP-ShiTuV2_det"
 
@@ -121,7 +121,7 @@ python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
 
 
-数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png): 
+数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/subj_det/01.png)
 </details>

+ 4 - 4
docs/module_usage/tutorials/cv_modules/mainbody_detection_en.md

@@ -34,13 +34,13 @@ Mainbody detection is a fundamental task in object detection, aiming to identify
 **Note: The evaluation set for the above accuracy metrics is  PaddleClas mainbody detection dataset mAP(0.5:0.95). GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 </details>
 
-## III. Quick Integration  <a id="quick"> </a> 
+## III. Quick Integration  <a id="quick"> </a>
 > ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to [PaddleX Local Installation Guide](../../../installation/installation_en.md)
 
 After installing the wheel package, you can perform mainbody detection inference with just a few lines of code. You can easily switch between models under this module, and integrate the mainbody detection model inference into your project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_object_detection_002.png) to your local machine.
 
 ```python
-from paddlex import create_model 
+from paddlex import create_model
 
 model_name = "PP-ShiTuV2_det"
 
@@ -122,7 +122,7 @@ In the above validation results, `check_pass` being `True` indicates that the da
 * `attributes.val_sample_paths`: A list of relative paths to the visualized images of samples in the validation set of this dataset.
 
 
-The dataset validation also analyzes the distribution of sample counts across all classes in the dataset and generates a histogram (histogram.png) to visualize this distribution. 
+The dataset validation also analyzes the distribution of sample counts across all classes in the dataset and generates a histogram (histogram.png) to visualize this distribution.
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/subj_det/01.png)
 </details>
@@ -180,7 +180,7 @@ python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml  \
 </details>
 
 ### 4.2 Model Training
-Model training can be completed with a single command, taking the training of `PP-ShiTuV2_det.yaml` as an example:
+Model training can be completed with a single command, taking the training of `PP-ShiTuV2_det` as an example:
 
 ```bash
 python main.py -c paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml \

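The snippet above is cut off after `model_name = "PP-ShiTuV2_det"`. A hedged completion of the inference loop, following the pattern these tutorials use elsewhere (the demo image is the one linked in the text, downloaded to the working directory):

```python
from paddlex import create_model

model_name = "PP-ShiTuV2_det"
model = create_model(model_name)

# Demo image linked in the tutorial, downloaded beforehand.
output = model.predict("general_object_detection_002.png", batch_size=1)
for res in output:
    res.print(json_format=False)
    res.save_to_img("./output/")
    res.save_to_json("./output/res.json")
```
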
+ 8 - 8
docs/module_usage/tutorials/cv_modules/ml_classification.md

@@ -152,13 +152,13 @@ python main.py -c paddlex/configs/multilabel_classification/PP-LCNet_x1_0_ML.yam
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
 
 
-另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png): 
+另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/ml_classification/01.png)
 </details>
 
 #### 4.1.3 数据集格式转换/数据集划分(可选)
-在您完成数据校验之后,可以通过**修改配置文件**或是**追加超参数**的方式对数据集的格式进行转换,也可以对数据集的训练/验证比例进行重新划分。
+在您完成数据校验之后,可以通过**修改配置文件**或是**追加超参数**的方式对数据集的格式进行转换,也可以对数据集的训练/验证比例进行重新划分。
 
 <details>
   <summary>👉 <b>格式转换/数据集划分详情(点击展开)</b></summary>
@@ -171,7 +171,7 @@ python main.py -c paddlex/configs/multilabel_classification/PP-LCNet_x1_0_ML.yam
   * `convert`:
     * `enable`: 是否进行数据集格式转换,图像多标签分类支持 `COCO`格式的数据集转换为 `MLClsDataset`格式,默认为 `False`;
     * `src_dataset_type`: 如果进行数据集格式转换,则需设置源数据集格式,默认为 `null`,可选值为 `COCO` ;
-  
+
 例如,您想将`COCO`格式的数据集转换为 `MLClsDataset`格式,则需将配置文件修改为:
 
 ```bash
@@ -183,7 +183,7 @@ tar -xf ./dataset/det_coco_examples.tar -C ./dataset/
 ......
 CheckDataset:
   ......
-  convert: 
+  convert:
     enable: True
     src_dataset_type: COCO
   ......
@@ -193,7 +193,7 @@ CheckDataset:
 ```bash
 python main.py -c paddlex/configs/multilabel_classification/PP-LCNet_x1_0_ML.yaml \
     -o Global.mode=check_dataset \
-    -o Global.dataset_dir=./dataset/det_coco_examples 
+    -o Global.dataset_dir=./dataset/det_coco_examples
 ```
 数据转换执行之后,原有标注文件会被在原路径下重命名为 `xxx.bak`。
 
@@ -233,7 +233,7 @@ CheckDataset:
 ```bash
 python main.py -c paddlex/configs/multilabel_classification/PP-LCNet_x1_0_ML.yaml \
     -o Global.mode=check_dataset \
-    -o Global.dataset_dir=./dataset/det_coco_examples 
+    -o Global.dataset_dir=./dataset/det_coco_examples
 ```
 数据划分执行之后,原有标注文件会被在原路径下重命名为 `xxx.bak`。
 
@@ -262,7 +262,7 @@ python main.py -c paddlex/configs/multilabel_classification/PP-LCNet_x1_0_ML.yam
 * 指定模型的`.yaml` 配置文件路径(此处为`PP-LCNet_x1_0_ML.yaml`)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
-其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
+其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
 
 <details>
   <summary>👉 <b>更多说明(点击展开)</b></summary>
@@ -327,7 +327,7 @@ python main.py -c paddlex/configs/multilabel_classification/PP-LCNet_x1_0_ML.yam
 
 1.**产线集成**
 
-图像多标签分类模块可以集成的PaddleX产线有[通用图像多标签分类产线](../../../pipeline_usage/tutorials/cv_pipelines/image_multi_label_lassification.md),只需要替换模型路径即可完成相关产线的图像多标签分类模块的模型更新。在产线集成中,你可以使用高性能部署和服务化部署来部署你得到的模型。
+图像多标签分类模块可以集成的PaddleX产线有[通用图像多标签分类产线](../../../pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md),只需要替换模型路径即可完成相关产线的图像多标签分类模块的模型更新。在产线集成中,你可以使用高性能部署和服务化部署来部署你得到的模型。
 
 2.**模块集成**
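
As the module-integration notes in these tutorials put it, produced weights are used by replacing the model passed to `create_model` with the path to your trained model. A sketch under that reading; the weights directory below is hypothetical and should point at your own training output:

```python
from paddlex import create_model

# Hypothetical path to weights exported by your own training run.
model = create_model("./output/best_model/inference")

output = model.predict("demo.jpg", batch_size=1)  # placeholder image
for res in output:
    res.print(json_format=False)
```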
 

+ 33 - 34
docs/module_usage/tutorials/cv_modules/object_detection.md

@@ -32,7 +32,6 @@
     <td>-</td>
     <td>-</td>
     <td>246.2 M</td>
-    <td></td>
   </tr>
   <tr>
     <td>CenterNet-DLA-34</td>
@@ -48,7 +47,7 @@
     <td>-</td>
     <td>-</td>
     <td>319.7 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>DETR-R50</td>
@@ -72,7 +71,7 @@
     <td>-</td>
     <td>-</td>
     <td>148.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNet50-vd-FPN</td>
@@ -80,7 +79,7 @@
     <td>-</td>
     <td>-</td>
     <td>148.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNet50-vd-SSLDv2-FPN</td>
@@ -88,7 +87,7 @@
     <td>-</td>
     <td>-</td>
     <td>148.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNet50</td>
@@ -96,7 +95,7 @@
     <td>-</td>
     <td>-</td>
     <td>120.2 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNet101-FPN</td>
@@ -104,7 +103,7 @@
     <td>-</td>
     <td>-</td>
     <td>216.3 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNet101</td>
@@ -112,7 +111,7 @@
     <td>-</td>
     <td>-</td>
     <td>188.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNeXt101-vd-FPN</td>
@@ -120,7 +119,7 @@
     <td>-</td>
     <td>-</td>
     <td>360.6 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-Swin-Tiny-FPN</td>
@@ -128,7 +127,7 @@
     <td>-</td>
     <td>-</td>
     <td>159.8 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FCOS-ResNet50</td>
@@ -152,7 +151,7 @@
     <td>16.2311</td>
     <td>71.7257</td>
     <td>16.8 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>PicoDet-S</td>
@@ -160,7 +159,7 @@
     <td>14.097</td>
     <td>37.6563</td>
     <td>4.4 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>PicoDet-XS</td>
@@ -168,7 +167,7 @@
     <td>13.8102</td>
     <td>48.3139</td>
     <td>5.7 M</td>
-    <td></td>
+
   </tr>
     <tr>
     <td>PP-YOLOE_plus-L</td>
@@ -184,7 +183,7 @@
     <td>19.843</td>
     <td>449.261</td>
     <td>82.3 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>PP-YOLOE_plus-S</td>
@@ -192,7 +191,7 @@
     <td>16.8884</td>
     <td>223.059</td>
     <td>28.3 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>PP-YOLOE_plus-X</td>
@@ -200,7 +199,7 @@
     <td>57.8995</td>
     <td>1439.93</td>
     <td>349.4 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>RT-DETR-H</td>
@@ -216,7 +215,7 @@
     <td>34.5252</td>
     <td>1454.27</td>
     <td>113.7 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>RT-DETR-R18</td>
@@ -224,7 +223,7 @@
     <td>19.89</td>
     <td>784.824</td>
     <td>70.7 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>RT-DETR-R50</td>
@@ -232,7 +231,7 @@
     <td>41.9327</td>
     <td>1625.95</td>
     <td>149.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>RT-DETR-X</td>
@@ -240,7 +239,7 @@
     <td>61.8042</td>
     <td>2246.64</td>
     <td>232.9 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOv3-DarkNet53</td>
@@ -256,7 +255,7 @@
     <td>18.6692</td>
     <td>267.214</td>
     <td>83.8 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOv3-ResNet50_vd_DCN</td>
@@ -264,7 +263,7 @@
     <td>31.6276</td>
     <td>856.047</td>
     <td>163.0 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOX-L</td>
@@ -280,7 +279,7 @@
     <td>123.324</td>
     <td>688.071</td>
     <td>90.0 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOX-N</td>
@@ -288,7 +287,7 @@
     <td>79.1665</td>
     <td>155.59</td>
     <td>3.4 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOX-S</td>
@@ -296,7 +295,7 @@
     <td>184.828</td>
     <td>474.446</td>
     <td>32.0 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOX-T</td>
@@ -304,7 +303,7 @@
     <td>102.748</td>
     <td>212.52</td>
     <td>18.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOX-X</td>
@@ -312,7 +311,7 @@
     <td>227.361</td>
     <td>2067.84</td>
     <td>351.5 M</td>
-    <td></td>
+
   </tr>
 </table>
 
@@ -415,13 +414,13 @@ python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
 * `attributes.train_sample_paths`:该数据集训练集样本可视化图片相对路径列表;
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
 
-另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png): 
+另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/obj_det/01.png)
 </details>
 
 #### 4.1.3 数据集格式转换/数据集划分(可选)
-在您完成数据校验之后,可以通过**修改配置文件**或是**追加超参数**的方式对数据集的格式进行转换,也可以对数据集的训练/验证比例进行重新划分。
+在您完成数据校验之后,可以通过**修改配置文件**或是**追加超参数**的方式对数据集的格式进行转换,也可以对数据集的训练/验证比例进行重新划分。
 
 <details>
   <summary>👉 <b>格式转换/数据集划分详情(点击展开)</b></summary>
@@ -455,14 +454,14 @@ CheckDataset:
 随后执行命令:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-L.yaml \
+python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_labelme_examples
 ```
 当然,以上参数同样支持通过追加命令行参数的方式进行设置,以 `LabelMe` 格式的数据集为例:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-L.yaml \
+python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_labelme_examples \
     -o CheckDataset.convert.enable=True \
@@ -492,16 +491,16 @@ CheckDataset:
 随后执行命令:
 
 ```bash
-python main.py -c paddlex/configs/multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
-    -o Global.dataset_dir=./dataset/det_coco_examples 
+    -o Global.dataset_dir=./dataset/det_coco_examples
 ```
 数据划分执行之后,原有标注文件会被在原路径下重命名为 `xxx.bak`。
 
 以上参数同样支持通过追加命令行参数的方式进行设置:
 
 ```bash
-python main.py -c paddlex/configs/multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_coco_examples \
     -o CheckDataset.split.enable=True \

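The split settings above can be edited in the config file or appended as `-o` flags; for scripted workflows, the same edit can also be applied with PyYAML. A sketch, assuming `pyyaml` is installed and the `CheckDataset.split` layout matches the excerpt shown here:

```python
import yaml

cfg_path = "paddlex/configs/object_detection/PicoDet-S.yaml"
with open(cfg_path, "r", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

# Mirror the documented CheckDataset.split fields.
cfg.setdefault("CheckDataset", {})["split"] = {
    "enable": True,
    "train_percent": 90,  # train + val must sum to 100
    "val_percent": 10,
}

with open(cfg_path, "w", encoding="utf-8") as f:
    yaml.safe_dump(cfg, f, allow_unicode=True, sort_keys=False)
```
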
+ 33 - 35
docs/module_usage/tutorials/cv_modules/object_detection_en.md

@@ -32,7 +32,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>-</td>
     <td>-</td>
     <td>246.2 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>CenterNet-DLA-34</td>
@@ -48,7 +48,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>-</td>
     <td>-</td>
     <td>319.7 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>DETR-R50</td>
@@ -72,7 +72,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>-</td>
     <td>-</td>
     <td>148.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNet50-vd-FPN</td>
@@ -80,7 +80,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>-</td>
     <td>-</td>
     <td>148.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNet50-vd-SSLDv2-FPN</td>
@@ -88,7 +88,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>-</td>
     <td>-</td>
     <td>148.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNet50</td>
@@ -96,7 +96,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>-</td>
     <td>-</td>
     <td>120.2 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNet101-FPN</td>
@@ -104,7 +104,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>-</td>
     <td>-</td>
     <td>216.3 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNet101</td>
@@ -112,7 +112,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>-</td>
     <td>-</td>
     <td>188.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNeXt101-vd-FPN</td>
@@ -120,7 +120,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>-</td>
     <td>-</td>
     <td>360.6 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-Swin-Tiny-FPN</td>
@@ -128,7 +128,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>-</td>
     <td>-</td>
     <td>159.8 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FCOS-ResNet50</td>
@@ -152,7 +152,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>16.2311</td>
     <td>71.7257</td>
     <td>16.8 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>PicoDet-S</td>
@@ -160,7 +160,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>14.097</td>
     <td>37.6563</td>
     <td>4.4 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>PicoDet-XS</td>
@@ -168,7 +168,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>13.8102</td>
     <td>48.3139</td>
     <td>5.7 M</td>
-    <td></td>
+
   </tr>
     <tr>
     <td>PP-YOLOE_plus-L</td>
@@ -184,7 +184,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>19.843</td>
     <td>449.261</td>
     <td>82.3 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>PP-YOLOE_plus-S</td>
@@ -192,7 +192,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>16.8884</td>
     <td>223.059</td>
     <td>28.3 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>PP-YOLOE_plus-X</td>
@@ -200,7 +200,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>57.8995</td>
     <td>1439.93</td>
     <td>349.4 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>RT-DETR-H</td>
@@ -216,7 +216,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>34.5252</td>
     <td>1454.27</td>
     <td>113.7 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>RT-DETR-R18</td>
@@ -224,7 +224,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>19.89</td>
     <td>784.824</td>
     <td>70.7 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>RT-DETR-R50</td>
@@ -232,7 +232,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>41.9327</td>
     <td>1625.95</td>
     <td>149.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>RT-DETR-X</td>
@@ -240,7 +240,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>61.8042</td>
     <td>2246.64</td>
     <td>232.9 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOv3-DarkNet53</td>
@@ -256,7 +256,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>18.6692</td>
     <td>267.214</td>
     <td>83.8 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOv3-ResNet50_vd_DCN</td>
@@ -264,7 +264,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>31.6276</td>
     <td>856.047</td>
     <td>163.0 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOX-L</td>
@@ -280,7 +280,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>123.324</td>
     <td>688.071</td>
     <td>90.0 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOX-N</td>
@@ -288,7 +288,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>79.1665</td>
     <td>155.59</td>
     <td>3.4 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOX-S</td>
@@ -296,7 +296,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>184.828</td>
     <td>474.446</td>
     <td>32.0 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOX-T</td>
@@ -304,7 +304,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>102.748</td>
     <td>212.52</td>
     <td>18.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOX-X</td>
@@ -312,7 +312,7 @@ The object detection module is a crucial component in computer vision systems, r
     <td>227.361</td>
     <td>2067.84</td>
     <td>351.5 M</td>
-    <td></td>
+
   </tr>
 </table>
 
@@ -460,14 +460,14 @@ CheckDataset:
 Then execute the command:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-L.yaml \
+python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_labelme_examples
 ```
 Of course, the above parameters also support being set by appending command line arguments. Taking a `LabelMe` format dataset as an example:
 
 ```bash
-python main.py -c paddlex/configs/object_detection/PicoDet-L.yaml \
+python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_labelme_examples \
     -o CheckDataset.convert.enable=True \
@@ -498,16 +498,16 @@ CheckDataset:
 Then execute the command:
 
 ```bash
-python main.py -c paddlex/configs/multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
-    -o Global.dataset_dir=./dataset/det_coco_examples 
+    -o Global.dataset_dir=./dataset/det_coco_examples
 ```
 After dataset splitting is executed, the original annotation files will be renamed to `xxx.bak` in the original path.
 
 The above parameters also support being set by appending command line arguments:
 
 ```bash
-python main.py -c paddlex/configs/multilabel_classification/PP-LCNet_x1_0_ML.yaml \
+python main.py -c paddlex/configs/object_detection/PicoDet-S.yaml \
     -o Global.mode=check_dataset \
     -o Global.dataset_dir=./dataset/det_coco_examples \
     -o CheckDataset.split.enable=True \
@@ -529,7 +529,7 @@ The following steps are required:
 
 * Specify the `.yaml` configuration file path for the model (here it is `PicoDet-S.yaml`)
 * Set the mode to model training: `-o Global.mode=train`
-* Specify the path to the training dataset: `-o Global.dataset_dir`. 
+* Specify the path to the training dataset: `-o Global.dataset_dir`.
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file, or adjusted by appending parameters in the command line. For example, to specify training on the first two GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the configuration file instructions for the corresponding task module of the model [PaddleX Common Configuration File Parameters](../../instructions/config_parameters_common_en.md).
 
 <details>
@@ -598,5 +598,3 @@ The object detection module can be integrated into the [General Object Detection
 2.**Module Integration**
 
 The weights you produce can be directly integrated into the object detection module. Refer to the Python example code in [Quick Integration](#iii-quick-integration), and simply replace the model with the path to your trained model.
-
-

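Since training is a plain CLI call, orchestrating it from Python needs only the standard library. A sketch built from the flags documented in this section; the dataset path and epoch count are illustrative:

```python
import subprocess

cmd = [
    "python", "main.py",
    "-c", "paddlex/configs/object_detection/PicoDet-S.yaml",
    "-o", "Global.mode=train",
    "-o", "Global.dataset_dir=./dataset/det_coco_examples",
    "-o", "Global.device=gpu:0,1",  # first two GPUs, as in the text
    "-o", "Train.epochs_iters=10",  # 10 epochs, as in the text
]
subprocess.run(cmd, check=True)
```
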
+ 23 - 3
docs/module_usage/tutorials/cv_modules/pedestrian_attribute_recognition.md

@@ -33,6 +33,26 @@ for res in output:
 ```
 关于更多 PaddleX 的单模型推理的 API 的使用方法,可以参考[PaddleX单模型Python脚本使用说明](../../instructions/model_python_API.md)。
 
+**备注**:其中 `output` 的值索引0表示是否佩戴帽子,索引1表示是否佩戴眼镜,索引2-7表示上衣风格,索引8-13表示下装风格,索引14表示是否穿靴子,索引15-17表示所背包的类型,索引18表示正面是否持物,索引19-21表示年龄,索引22表示性别,索引23-25表示朝向。具体地,属性包含以下类型:
+
+```
+- 性别:男、女
+- 年龄:小于18、18-60、大于60
+- 朝向:朝前、朝后、侧面
+- 配饰:眼镜、帽子、无
+- 正面持物:是、否
+- 包:双肩包、单肩包、手提包
+- 上衣风格:带条纹、带logo、带格子、拼接风格
+- 下装风格:带条纹、带图案
+- 短袖上衣:是、否
+- 长袖上衣:是、否
+- 长外套:是、否
+- 长裤:是、否
+- 短裤:是、否
+- 短裙&裙子:是、否
+- 穿靴:是、否
+```
+
 ## 四、二次开发
 如果你追求更高精度的现有模型,可以使用 PaddleX 的二次开发能力,开发更好的行人属性识别模型。在使用 PaddleX 开发行人属性识别之前,请务必安装 PaddleX 的分类相关模型训练插件,安装过程可以参考 [PaddleX本地安装教程](../../../installation/installation.md)中的二次开发部分。
 
@@ -113,7 +133,7 @@ python main.py -c paddlex/configs/pedestrian_attribute/PP-LCNet_x1_0_pedestrian_
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
 
 
-另外,数据集校验还对数据集中所有图片的长宽分布情况进行了分析分析,并绘制了分布直方图(histogram.png): 
+另外,数据集校验还对数据集中所有图片的长宽分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/ped_attri/image.png)
 
@@ -186,7 +206,7 @@ python main.py -c paddlex/configs/pedestrian_attribute/PP-LCNet_x1_0_pedestrian_
 * 指定模型的`.yaml` 配置文件路径(此处为`PP-LCNet_x1_0_pedestrian_attribute.yaml`)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
-其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
+其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
 
 <details>
   <summary>👉 <b>更多说明(点击展开)</b></summary>
@@ -256,7 +276,7 @@ python main.py -c paddlex/configs/pedestrian_attribute/PP-LCNet_x1_0_pedestrian_
 
 1.**产线集成**
 
-行人属性识别模块可以集成的PaddleX产线有[通用图像多标签分类产线](../../../pipeline_usage/tutorials/cv_pipelines/image_multi_label_lassification.md),只需要替换模型路径即可完成相关产线的行人属性识别模块的模型更新。在产线集成中,你可以使用高性能部署和服务化部署来部署你得到的模型。
+行人属性识别模块可以集成的PaddleX产线有[通用图像多标签分类产线](../../../pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md),只需要替换模型路径即可完成相关产线的行人属性识别模块的模型更新。在产线集成中,你可以使用高性能部署和服务化部署来部署你得到的模型。
 
 2.**模块集成**
 

+ 20 - 0
docs/module_usage/tutorials/cv_modules/pedestrian_attribute_recognition_en.md

@@ -35,6 +35,26 @@ for res in output:
 ```
 For more information on using PaddleX's single-model inference API, refer to the [PaddleX Single Model Python Script Usage Instructions](../../instructions/model_python_API_en.md).
 
+**Note**: The indices of the `output` values map to attributes as follows: index 0 indicates whether a hat is worn, index 1 whether glasses are worn, indices 2-7 the upper garment style, indices 8-13 the lower garment style, index 14 whether boots are worn, indices 15-17 the type of bag carried, index 18 whether an object is held in front, indices 19-21 age, index 22 gender, and indices 23-25 orientation. Specifically, the attributes include the following types:
+
+```
+- Gender: Male, Female
+- Age: Under 18, 18-60, Over 60
+- Orientation: Front, Back, Side
+- Accessories: Glasses, Hat, None
+- Holding Object in Front: Yes, No
+- Bag: Backpack, Shoulder Bag, Handbag
+- Upper Garment Style: Striped, Logo, Plaid, Patchwork
+- Lower Garment Style: Striped, Patterned
+- Short-sleeved Shirt: Yes, No
+- Long-sleeved Shirt: Yes, No
+- Long Coat: Yes, No
+- Pants: Yes, No
+- Shorts: Yes, No
+- Skirt: Yes, No
+- Boots: Yes, No
+```
+
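+To make the index layout concrete, a small decoding sketch follows. It is illustrative only: it assumes the raw 26-dimensional score vector is available as a plain list, and the 0.5 threshold for the yes/no attributes is an assumption rather than a documented default.
+
+```python
+# Index layout taken from the note above (26 outputs in total).
+GROUPS = {
+    "hat": [0], "glasses": [1],
+    "upper_style": list(range(2, 8)),
+    "lower_style": list(range(8, 14)),
+    "boots": [14],
+    "bag": list(range(15, 18)),
+    "holding": [18],
+    "age": list(range(19, 22)),
+    "gender": [22],
+    "orientation": list(range(23, 26)),
+}
+
+def decode(scores, threshold=0.5):
+    """Map a 26-dim score list to coarse attribute decisions (illustrative)."""
+    out = {}
+    for name, idx in GROUPS.items():
+        if len(idx) == 1:               # binary attribute: threshold it
+            out[name] = scores[idx[0]] > threshold
+        else:                           # grouped attribute: pick the strongest
+            out[name] = max(idx, key=lambda i: scores[i])
+    return out
+
+print(decode([0.1] * 26))
+```
+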
 ## IV. Custom Development
 If you seek higher accuracy from existing models, you can leverage PaddleX's custom development capabilities to develop better pedestrian attribute recognition models. Before developing pedestrian attribute recognition with PaddleX, ensure you have installed the classification-related model training plugins for PaddleX.  The installation process can be found in the custom development section of the [PaddleX Local Installation Guide](../../../installation/installation_en.md).
 

+ 1 - 1
docs/module_usage/tutorials/cv_modules/semantic_segmentation.md

@@ -120,7 +120,7 @@ python main.py -c paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml \
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
 
 
-数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png): 
+数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/semanticseg/01.png)
 </details>

+ 3 - 3
docs/module_usage/tutorials/cv_modules/small_object_detection.md

@@ -29,7 +29,7 @@
     <td>1007.0</td>
     <td>324.93</td>
     <td rowspan="3">基于VisDrone训练的PP-YOLOE_plus小目标检测模型。VisDrone是针对无人机视觉数据的基准数据集,由于目标较小同时具有一定的挑战性而被用于小目标检测任务的训练和评测</td>
-    
+
   </tr>
   <tr>
     <td>PP-YOLOE_plus_SOD-S</td>
@@ -58,7 +58,7 @@
 完成whl包的安装后,几行代码即可完成小目标检测模块的推理,可以任意切换该模块下的模型,您也可以将小目标检测的模块中的模型推理集成到您的项目中。运行以下代码前,请您下载[示例图片](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/small_object_detection.jpg)到本地。
 
 ```python
-from paddlex import create_model 
+from paddlex import create_model
 
 model_name = "PP-YOLOE_plus_SOD-S"
 
@@ -138,7 +138,7 @@ python main.py -c paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yam
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
 
 
-数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png): 
+数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/smallobj_det/01.png)
 </details>

+ 6 - 4
docs/module_usage/tutorials/cv_modules/vehicle_attribute_recognition.md

@@ -34,6 +34,8 @@ for res in output:
 ```
 关于更多 PaddleX 的单模型推理的 API 的使用方法,可以参考[PaddleX单模型Python脚本使用说明](../../instructions/model_python_API.md)。
 
+**备注**:其中 `output` 的值索引为0-9表示颜色属性,对应的颜色分别是:yellow(黄色), orange(橙色), green(绿色), gray(灰色), red(红色), blue(蓝色), white(白色), golden(金色), brown(棕色), black(黑色);索引为10-18表示车型属性,对应的车型分别是sedan(轿车), suv(越野车), van(面包车), hatchback(掀背车), mpv(多用途汽车), pickup(皮卡车), bus(公共汽车), truck(卡车), estate(旅行车)。
+
 ## 四、二次开发
 如果你追求更高精度的现有模型,可以使用 PaddleX 的二次开发能力,开发更好的车辆属性识别模型。在使用 PaddleX 开发车辆属性识别模型之前,请务必安装 PaddleX 的 分类 相关模型训练插件,安装过程可以参考[PaddleX本地安装教程](../../../installation/installation.md)。
 
@@ -115,13 +117,13 @@ python main.py -c paddlex/configs/vehicle_attribute/PP-LCNet_x1_0_vehicle_attrib
 
 
 
-另外,数据集校验还对数据集中所有图片的长宽分布情况进行了分析分析,并绘制了分布直方图(histogram.png): 
+另外,数据集校验还对数据集中所有图片的长宽分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/vehicle_attri/01.png)
 </details>
 
 #### 4.1.3 数据集格式转换/数据集划分(可选)
-在您完成数据校验之后,可以通过**修改配置文件**或是**追加超参数**的方式对数据集的格式进行转换,也可以对数据集的训练/验证比例进行重新划分。
+在您完成数据校验之后,可以通过**修改配置文件**或是**追加超参数**的方式对数据集的格式进行转换,也可以对数据集的训练/验证比例进行重新划分。
 
 <details>
   <summary>👉 <b>格式转换/数据集划分详情(点击展开)</b></summary>
@@ -186,7 +188,7 @@ python main.py -c paddlex/configs/vehicle_attribute/PP-LCNet_x1_0_vehicle_attrib
 * 指定模型的`.yaml` 配置文件路径(此处为`PP-LCNet_x1_0_vehicle_attribute.yaml`)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
-其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
+其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
 
 <details>
   <summary>👉 <b>更多说明(点击展开)</b></summary>
@@ -255,7 +257,7 @@ python main.py -c paddlex/configs/vehicle_attribute/PP-LCNet_x1_0_vehicle_attrib
 
 1.**产线集成**
 
-车辆属性识别模块可以集成的PaddleX产线有[通用图像多标签分类产线](../../../pipeline_usage/tutorials/cv_pipelines/image_multi_label_lassification.md),只需要替换模型路径即可完成相关产线的车辆属性识别模块的模型更新。在产线集成中,你可以使用高性能部署和服务化部署来部署你得到的模型。
+车辆属性识别模块可以集成的PaddleX产线有[通用图像多标签分类产线](../../../pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md),只需要替换模型路径即可完成相关产线的车辆属性识别模块的模型更新。在产线集成中,你可以使用高性能部署和服务化部署来部署你得到的模型。
 
 2.**模块集成**
 

+ 4 - 0
docs/module_usage/tutorials/cv_modules/vehicle_attribute_recognition_en.md

@@ -19,6 +19,7 @@ Vehicle attribute recognition is a crucial component in computer vision systems.
 </details>
 
 ## <span id="lable">III. Quick Integration</span>
+
 > ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to [PaddleX Local Installation Guide](../../../installation/installation_en.md)
 
 After installing the wheel package, a few lines of code can complete the inference of the vehicle attribute recognition module. You can easily switch models under this module, and you can also integrate the model inference of the vehicle attribute recognition module into your project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_attribute_007.jpg) to your local machine.
@@ -34,6 +35,9 @@ for res in output:
 ```
 For more information on using PaddleX's single-model inference API, refer to [PaddleX Single Model Python Script Usage Instructions](../../instructions/model_python_API_en.md).
 
+**Note**: In the `output`, values at indices 0-9 represent color attributes, corresponding respectively to: yellow, orange, green, gray, red, blue, white, golden, brown, black. Values at indices 10-18 represent vehicle type attributes, corresponding respectively to: sedan, suv, van, hatchback, mpv, pickup, bus, truck, estate.
+
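+As an illustration of this layout (not part of the official API), an argmax over each slice recovers the labels, assuming the raw 19-dimensional score list is available:
+
+```python
+COLORS = ["yellow", "orange", "green", "gray", "red",
+          "blue", "white", "golden", "brown", "black"]
+TYPES = ["sedan", "suv", "van", "hatchback", "mpv",
+         "pickup", "bus", "truck", "estate"]
+
+def decode(scores):
+    """Pick the most likely color (indices 0-9) and type (indices 10-18)."""
+    color = COLORS[max(range(10), key=lambda i: scores[i])]
+    vtype = TYPES[max(range(10, 19), key=lambda i: scores[i]) - 10]
+    return color, vtype
+
+print(decode([0.0] * 10 + [0.9] + [0.0] * 8))  # -> ('yellow', 'sedan')
+```
+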
 ## IV. Custom Development
 If you seek higher accuracy from existing models, you can leverage PaddleX's custom development capabilities to develop better vehicle attribute recognition models. Before using PaddleX to develop vehicle attribute recognition models, ensure you have installed the classification-related model training plugin for PaddleX. The installation process can be found in the [PaddleX Local Installation Guide](../../../installation/installation_en.md).
 

+ 2 - 2
docs/module_usage/tutorials/cv_modules/vehicle_detection.md

@@ -45,7 +45,7 @@
 完成wheel包的安装后,几行代码即可完成车辆检测模块的推理,可以任意切换该模块下的模型,您也可以将车辆检测的模块中的模型推理集成到您的项目中。运行以下代码前,请您下载[示例图片](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_detection.jpg)到本地。
 
 ```python
-from paddlex import create_model 
+from paddlex import create_model
 
 model_name = "PP-YOLOE-S_vehicle"
 
@@ -126,7 +126,7 @@ python main.py -c paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml \
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
 
 
-数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png): 
+数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/vehicle_det/01.png)
 </details>

+ 2 - 2
docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification.md

@@ -113,7 +113,7 @@ python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yam
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
 
 
-数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png): 
+数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/doc_img_ori_classification/01.png)
 </details>
@@ -184,7 +184,7 @@ python main.py -c paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yam
 * 指定模型的`.yaml` 配置文件路径(此处为`PP-LCNet_x1_0_doc_ori.yaml`)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
-其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
+其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
 
 <details>
   <summary>👉 <b>更多说明(点击展开)</b></summary>

+ 7 - 7
docs/module_usage/tutorials/ocr_modules/formula_recognition.md

@@ -27,7 +27,7 @@
     <td>89.7 M</td>
     <td>LaTeX-OCR是一种基于自回归大模型的公式识别算法,通过采用 Hybrid ViT 作为骨干网络,transformer作为解码器,显著提升了公式识别的准确性</td>
   </tr>
-  
+
 </table>
 
 **注:以上精度指标测量自 LaTeX-OCR公式识别测试集。**
@@ -123,12 +123,12 @@ python main.py -c paddlex/configs/formula_recognition/LaTeX_OCR_rec.yaml \
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
 
 
-另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png): 
+另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/data_prepare/formula_recognition/01.jpg)
 </details>
 
 #### 4.1.3 数据集格式转换/数据集划分(可选)
-在您完成数据校验之后,可以通过**修改配置文件**或是**追加超参数**的方式对数据集的格式进行转换,也可以对数据集的训练/验证比例进行重新划分。
+在您完成数据校验之后,可以通过**修改配置文件**或是**追加超参数**的方式对数据集的格式进行转换,也可以对数据集的训练/验证比例进行重新划分。
 
 <details>
   <summary>👉 <b>格式转换/数据集划分详情(点击展开)</b></summary>
@@ -141,14 +141,14 @@ python main.py -c paddlex/configs/formula_recognition/LaTeX_OCR_rec.yaml \
   * `convert`:
     * `enable`: 是否进行数据集格式转换,公式识别支持 `PKL`格式的数据集转换为 `LaTeXOCRDataset`格式,默认为 `True`;
     * `src_dataset_type`: 如果进行数据集格式转换,则需设置源数据集格式,默认为 `PKL`,可选值为 `PKL` ;
-  
+
 例如,您想将 `PKL`格式的数据集转换为 `LaTeXOCRDataset`格式,则需将配置文件修改为:
 
 ```bash
 ......
 CheckDataset:
   ......
-  convert: 
+  convert:
     enable: True
     src_dataset_type: PKL
   ......
@@ -216,7 +216,7 @@ python main.py -c  paddlex/configs/formula_recognition/LaTeX_OCR_rec.yaml \
 </details>
 
 ### 4.2 模型训练
-一条命令即可完成模型的训练,以此处公式识别模型 LaTeX_OCR_rec.yaml 的训练为例:
+一条命令即可完成模型的训练,以此处公式识别模型 LaTeX_OCR_rec 的训练为例:
 
 ```bash
 python main.py -c paddlex/configs/formula_recognition/LaTeX_OCR_rec.yaml  \
@@ -228,7 +228,7 @@ python main.py -c paddlex/configs/formula_recognition/LaTeX_OCR_rec.yaml  \
 * 指定模型的`.yaml` 配置文件路径(此处为`LaTeX_OCR_rec.yaml`)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
-其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
+其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
 
 <details>
   <summary>👉 <b>更多说明(点击展开)</b></summary>

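The formula recognition module follows the same Quick Integration pattern as the other modules; a hedged sketch with a placeholder image (LaTeX-OCR expects a cropped formula image):

```python
from paddlex import create_model

model = create_model("LaTeX_OCR_rec")

# "formula.png" is a placeholder; use a cropped image of a formula.
output = model.predict("formula.png", batch_size=1)
for res in output:
    res.print(json_format=False)           # recognized LaTeX string
    res.save_to_json("./output/res.json")
```
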
+ 3 - 3
docs/module_usage/tutorials/ocr_modules/layout_detection.md

@@ -26,7 +26,7 @@
 完成whl包的安装后,几行代码即可完成版面区域检测模块的推理,可以任意切换该模块下的模型,您也可以将版面区域检测模块中的模型推理集成到您的项目中。运行以下代码前,请您下载[示例图片](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/layout.jpg)到本地。
 
 ```python
-from paddlex import create_model 
+from paddlex import create_model
 
 model_name = "PicoDet-L_layout_3cls"
 
@@ -109,7 +109,7 @@ python main.py -c paddlex/configs/structure_analysis/PicoDet-L_layout_3cls.yaml
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
 
 
-数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png): 
+数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/layout_dec/01.png)
 </details>
@@ -123,7 +123,7 @@ python main.py -c paddlex/configs/structure_analysis/PicoDet-L_layout_3cls.yaml
 
 **(1)数据集格式转换**
 
-人脸检测不支持数据格式转换。
+版面区域检测暂不支持数据格式转换。
 
 **(2)数据集划分**
 

+ 3 - 3
docs/module_usage/tutorials/ocr_modules/layout_detection_en.md

@@ -20,13 +20,13 @@ The core task of structure analysis is to parse and segment the content of input
 **Note: The evaluation set for the above accuracy metrics is PaddleOCR's self-built layout region analysis dataset, containing 10,000 images of common document types, including English and Chinese papers, magazines, research reports, etc. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 </details>
 
-## III. Quick Integration  <a id="quick"> </a> 
+## III. Quick Integration  <a id="quick"> </a>
 > ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to [PaddleX Local Installation Tutorial](../../../installation/installation_en.md)
 
 After installing the wheel package, a few lines of code can complete the inference of the structure analysis module. You can switch models under this module freely, and you can also integrate the model inference of the structure analysis module into your project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/layout.jpg) to your local machine.
 
 ```python
-from paddlex import create_model 
+from paddlex import create_model
 
 model_name = "PicoDet-L_layout_3cls"
 
@@ -124,7 +124,7 @@ After completing dataset verification, you can convert the dataset format or re-
 
 **(1) Dataset Format Conversion**
 
-Structure analysis does not support data format conversion.
+Layout detection does not currently support data format conversion.
 
 **(2) Dataset Splitting**
 

+ 2 - 2
docs/module_usage/tutorials/ocr_modules/seal_text_detection.md

@@ -112,7 +112,7 @@ python main.py -c paddlex/configs/text_detection_seal/PP-OCRv4_server_seal_det.y
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
 
 
-数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png): 
+数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/curved_text_dec/01.png)
 </details>
@@ -183,7 +183,7 @@ python main.py -c paddlex/configs/text_detection_seal/PP-OCRv4_server_seal_det.y
 * 指定模型的`.yaml` 配置文件路径(此处为`PP-OCRv4_server_seal_det.yaml`)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
-其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
+其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
 
 <details>
   <summary>👉 <b>更多说明(点击展开)</b></summary>

+ 4 - 4
docs/module_usage/tutorials/ocr_modules/table_structure_recognition.md

@@ -136,7 +136,7 @@ python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
 </details>
 
 #### 4.1.3 数据集格式转换/数据集划分(可选)
-在您完成数据校验之后,可以通过**修改配置文件**或是**追加超参数**的方式对数据集的格式进行转换,也可以对数据集的训练/验证比例进行重新划分。
+在您完成数据校验之后,可以通过**修改配置文件**或是**追加超参数**的方式对数据集的格式进行转换,也可以对数据集的训练/验证比例进行重新划分。
 
 <details>
   <summary>👉 <b>格式转换/数据集划分详情(点击展开)</b></summary>
@@ -201,7 +201,7 @@ python main.py -c paddlex/configs/table_recognition/SLANet.yaml \
 * 指定模型的`.yaml` 配置文件路径(此处为`SLANet.yaml`)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
-其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
+其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
 
 <details>
   <summary>👉 <b>更多说明(点击展开)</b></summary>
@@ -267,9 +267,9 @@ python main.py -c paddlex/configs/table_recognition/SLANet.yaml  \
 
 1.**产线集成**
 
-表格结构识别模块可以集成的PaddleX产线有[通用表格识别产线](../../../pipeline_usage/tutorials/ocr_pipelies/table_recognition.md)、[文档场景信息抽取产线v3(PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md),只需要替换模型路径即可完成相关产线的表格结构识别模块的模型更新。在产线集成中,你可以使用高性能部署和服务化部署来部署你得到的模型。
+表格结构识别模块可以集成的PaddleX产线有[通用表格识别产线](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition.md)、[文档场景信息抽取产线v3(PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md),只需要替换模型路径即可完成相关产线的表格结构识别模块的模型更新。在产线集成中,你可以使用高性能部署和服务化部署来部署你得到的模型。
 
 
 2.**模块集成**
 
-您产出的权重可以直接集成到表格结构识别模块中,可以参考[快速集成]()的 Python 示例代码,只需要将模型替换为你训练的到的模型路径即可。
+您产出的权重可以直接集成到表格结构识别模块中,可以参考[快速集成](#三快速集成)的 Python 示例代码,只需要将模型替换为你训练得到的模型路径即可。

+ 1 - 1
docs/module_usage/tutorials/ocr_modules/table_structure_recognition_en.md

@@ -211,7 +211,7 @@ the following steps are required:
 
 * During model training, PaddleX automatically saves the model weight files, with the default being `output`. If you need to specify a save path, you can set it through the `-o Global.output` field in the configuration file.
 * PaddleX shields you from the concepts of dynamic graph weights and static graph weights. During model training, both dynamic and static graph weights are produced, and static graph weights are selected by default for model inference.
-* When training other models, you need to specify the corresponding configuration file. The correspondence between models and configuration files can be found in [PaddleX Model List (CPU/GPU)](https://ku.baidu-int.com/knowledge/HFVrC7hq1Q/pKzJfZczuc/GvMbk70MZz/0PKFjfhs0UN4Qs?t=mention&mt=doc&dt=doc). After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
+* When training other models, you need to specify the corresponding configuration file. The correspondence between models and configuration files can be found in [PaddleX Model List (CPU/GPU)](../../../support_list/models_list_en.md). After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:
 
 * `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
 * `train.log`: Training log file, recording changes in model metrics and loss during training;

+ 2 - 2
docs/module_usage/tutorials/ocr_modules/text_detection.md

@@ -19,7 +19,7 @@
 
 ## 三、快速集成
 > ❗ 在快速集成前,请先安装 PaddleX 的 wheel 包,详细请参考 [PaddleX本地安装教程](../../../installation/installation.md)。
-> 
+>
 几行代码即可完成文本检测模块的推理,可以任意切换该模块下的模型,您也可以将文本检测的模块中的模型推理集成到您的项目中。运行以下代码前,请您下载[示例图片](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_001.png)到本地。
 
 ```python
@@ -95,7 +95,7 @@ python main.py -c paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml \
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
 
 
-另外,数据集校验还对数据集中所有图片的长宽分布情况进行了分析分析,并绘制了分布直方图(histogram.png): 
+另外,数据集校验还对数据集中所有图片的长宽分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/text_det/01.png)
 </details>

+ 6 - 6
docs/module_usage/tutorials/ocr_modules/text_recognition.md

@@ -104,7 +104,7 @@ for res in output:
 如果你追求更高精度的现有模型,可以使用 PaddleX 的二次开发能力,开发更好的文本识别模型。在使用 PaddleX 开发文本识别模型之前,请务必安装 PaddleX 的 OCR 相关模型训练插件,安装过程可以参考[PaddleX本地安装教程](../../../installation/installation.md)中的二次开发部分。
 
 ### 4.1 数据准备
-在进行模型训练前,需要准备相应任务模块的数据集。PaddleX 针对每一个模块提供了数据校验功能,**只有通过数据校验的数据才可以进行模型训练**。此外,PaddleX 为每一个模块都提供了 Demo 数据集,您可以基于官方提供的 Demo 数据完成后续的开发。若您希望用私有数据集进行后续的模型训练,可以参考[PaddleX文本检测/文本识别任务模块数据标注教程](https://ku.baidu-int.com/knowledge/HFVrC7hq1Q/yKeL8Lljko/y0mmii50BW/VtwlUU5Na5lpFB?t=mention&mt=doc&dt=doc)。
+在进行模型训练前,需要准备相应任务模块的数据集。PaddleX 针对每一个模块提供了数据校验功能,**只有通过数据校验的数据才可以进行模型训练**。此外,PaddleX 为每一个模块都提供了 Demo 数据集,您可以基于官方提供的 Demo 数据完成后续的开发。若您希望用私有数据集进行后续的模型训练,可以参考[PaddleX文本检测/文本识别任务模块数据标注教程](../../../data_annotations/ocr_modules/text_detection_recognition.md)。
 
 #### 4.1.1 Demo 数据下载
 您可以参考下面的命令将 Demo 数据集下载到指定文件夹:
@@ -161,7 +161,7 @@ python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
 * `attributes.val_sample_paths`:该数据集验证集样本可视化图片相对路径列表;
 另外,数据集校验还对数据集中所有字符长度占比的分布情况进行了分析,并绘制了分布直方图(histogram.png):
 
-![](/tmp/images/modules/text_recog/01.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/text_recog/01.png)
 </details>
 
 #### 4.1.3 数据集格式转换/数据集划分(可选)
@@ -229,9 +229,9 @@ python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
 * 指定模型的`.yaml` 配置文件路径(此处为`PP-OCRv4_mobile_rec.yaml`)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
-其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
+其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX通用模型配置文件参数说明](../../instructions/config_parameters_common.md)。
+
 
-**更多说明(点击展开)**
 
 <details>
   <summary>👉 <b>更多说明(点击展开)</b></summary>
@@ -255,7 +255,7 @@ python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
 python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/ocr_rec_dataset_examples
-    
+
 ```
 与模型训练类似,需要如下几步:
 
@@ -299,7 +299,7 @@ python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
 
 1.**产线集成**
 
-文本识别模块可以集成的PaddleX产线有[通用 OCR 产线](../../../pipeline_usage/tutorials/ocr_pipelies/OCR.md)、[通用表格识别产线](../../../pipeline_usage/tutorials/ocr_pipelies/table_recognition.md)、[文档场景信息抽取产线v3(PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md),只需要替换模型路径即可完成相关产线的文本识别模块的模型更新。
+文本识别模块可以集成的PaddleX产线有[通用 OCR 产线](../../../pipeline_usage/tutorials/ocr_pipelines/OCR.md)、[通用表格识别产线](../../../pipeline_usage/tutorials/ocr_pipelines/table_recognition.md)、[文档场景信息抽取产线v3(PP-ChatOCRv3)](../../../pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md),只需要替换模型路径即可完成相关产线的文本识别模块的模型更新。
 
 2.**模块集成**
 

+ 6 - 7
docs/module_usage/tutorials/ocr_modules/text_recognition_en.md

@@ -3,7 +3,7 @@
 # Text Recognition Module Development Tutorial
 
 ## I. Overview
-The text recognition module is the core component of an OCR (Optical Character Recognition) system, responsible for extracting text information from text regions within images. The performance of this module directly impacts the accuracy and efficiency of the entire OCR system. The text recognition module typically receives bounding boxes (Bounding Boxes) of text regions output by the text detection module as input. Through complex image processing and deep learning algorithms, it converts the text in images into editable and searchable electronic text. The accuracy of text recognition results is crucial for subsequent applications such as information extraction and data mining.
+The text recognition module is the core component of an OCR (Optical Character Recognition) system, responsible for extracting text information from text regions within images. The performance of this module directly impacts the accuracy and efficiency of the entire OCR system. The text recognition module typically receives bounding boxes of text regions output by the text detection module as input. Through complex image processing and deep learning algorithms, it converts the text in images into editable and searchable electronic text. The accuracy of text recognition results is crucial for subsequent applications such as information extraction and data mining.
 
 ## II. Supported Model List
 
@@ -83,7 +83,7 @@ The text recognition module is the core component of an OCR (Optical Character R
 
 **Note: The evaluation set for the above accuracy metrics is the [OCR End-to-End Recognition Task of the PaddleOCR Algorithm Model Challenge - Track 1](https://aistudio.baidu.com/competition/detail/1131/0/introduction) B-rank. GPU inference time for all models is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 
-</details>      
+</details>
 
 ## III. Quick Integration
 Before quick integration, you need to install the PaddleX wheel package. For the installation method, please refer to the [PaddleX Local Installation Tutorial](../../../installation/installation_en.md). After installing the wheel package, a few lines of code can complete the inference of the text recognition module. You can switch models under this module freely, and you can also integrate the model inference of the text recognition module into your project.
@@ -162,7 +162,7 @@ In the above validation result, `check_pass` being `true` indicates that the dat
 * `attributes.val_sample_paths`: A list of relative paths to the visualized validation set samples in this dataset;
 Additionally, the dataset validation also analyzes the distribution of character length ratios in the dataset and generates a distribution histogram (histogram.png):
 
-![](/tmp/images/modules/text_recog/01.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/modules/text_recog/01.png)
 </details>
 
 #### 4.1.3 Dataset Format Conversion/Dataset Splitting (Optional)
@@ -230,10 +230,9 @@ The steps required are:
 
 * Specify the path to the model's `.yaml` configuration file (here it's `PP-OCRv4_mobile_rec.yaml`)
 * Specify the mode as model training: `-o Global.mode=train`
-* Specify the path to the training dataset: `-o Global.dataset_dir`. 
+* Specify the path to the training dataset: `-o Global.dataset_dir`.
 Other related parameters can be set by modifying the `Global` and `Train` fields in the `.yaml` configuration file or adjusted by appending parameters in the command line. For example, to specify training on the first 2 GPUs: `-o Global.device=gpu:0,1`; to set the number of training epochs to 10: `-o Train.epochs_iters=10`. For more modifiable parameters and their detailed explanations, refer to the [PaddleX Common Configuration File Parameters](../../instructions/config_parameters_common_en.md).
 
-**More Information (Click to Expand)**
 
 <details>
   <summary>👉 <b>More Information (Click to Expand)</b></summary>
@@ -258,7 +257,7 @@ After completing model training, you can evaluate the specified model weights fi
 python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
     -o Global.mode=evaluate \
     -o Global.dataset_dir=./dataset/ocr_rec_dataset_examples
-    
+
 ```
 Similar to model training, the following steps are required:
 
@@ -269,7 +268,7 @@ Other related parameters can be set by modifying the `Global` and `Evaluate` fie
 
 
 <details>
-  <summary>👉 <b>More Details (Click to Expand)</b></summary>
+  <summary>👉 <b>More Information (Click to Expand)</b></summary>
 
 When evaluating the model, you need to specify the model weights file path. Each configuration file has a default weight save path. If you need to change it, simply append the command line parameter to set it, such as `-o Evaluate.weight_path=./output/best_model/best_model.pdparams`.
 

+ 1 - 1
docs/module_usage/tutorials/time_series_modules/time_series_anomaly_detection.md

@@ -227,7 +227,7 @@ python main.py -c paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml \
 * 指定模型的`.yaml` 配置文件路径(此处为`AutoEncoder_ad.yaml`)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
-其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅查阅模型对应任务模块的配置文件说明[PaddleX时序任务模型配置文件参数说明](../../instructions/config_parameters_time_series.md)。
+其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX时序任务模型配置文件参数说明](../../instructions/config_parameters_time_series.md)。
 
 <details>
   <summary>👉 <b>更多说明(点击展开)</b></summary>

+ 1 - 1
docs/module_usage/tutorials/time_series_modules/time_series_classification.md

@@ -234,7 +234,7 @@ python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
 * 指定模型的`.yaml` 配置文件路径(此处为`TimesNet_cls.yaml`)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
-其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅查阅模型对应任务模块的配置文件说明[PaddleX时序任务模型配置文件参数说明](../../instructions/config_parameters_time_series.md)。
+其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX时序任务模型配置文件参数说明](../../instructions/config_parameters_time_series.md)。
 
 <details>
   <summary>👉 <b>更多说明(点击展开)</b></summary>

+ 1 - 1
docs/module_usage/tutorials/time_series_modules/time_series_forecasting.md

@@ -259,7 +259,7 @@ python main.py -c paddlex/configs/ts_forecast/DLinear.yaml \
 * 指定模型的`.yaml` 配置文件路径(此处为`DLinear.yaml`)
 * 指定模式为模型训练:`-o Global.mode=train`
 * 指定训练数据集路径:`-o Global.dataset_dir`
-其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅查阅模型对应任务模块的配置文件说明[PaddleX时序任务模型配置文件参数说明](../../instructions/config_parameters_time_series.md)。
+其他相关参数均可通过修改`.yaml`配置文件中的`Global`和`Train`下的字段来进行设置,也可以通过在命令行中追加参数来进行调整。如指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。更多可修改的参数及其详细解释,可以查阅模型对应任务模块的配置文件说明[PaddleX时序任务模型配置文件参数说明](../../instructions/config_parameters_time_series.md)。
 
 <details>
   <summary>👉 <b>更多说明(点击展开)</b></summary>

+ 102 - 50
docs/pipeline_usage/pipeline_develop_guide.md

@@ -23,11 +23,12 @@ graph LR
 
 PaddleX 所提供的预训练的模型产线均可以**快速体验效果**,如果产线效果可以达到您的要求,您可以直接将预训练的模型产线进行**开发集成/部署**,如果效果不及预期,可以使用私有数据对产线中的模型进行**微调**,直到达到满意的效果。
 
-下面,让我们以登机牌识别的任务为例,介绍PaddleX模型产线工具的本地使用过程,在使用前,请确保您已经按照[PaddleX本地安装教程](../installation/installation.md)完成了PaddleX的安装。
+下面,让我们以登机牌识别的任务为例,介绍PaddleX模型产线工具的本地使用过程。
+在使用前,请确保您已经按照[PaddleX本地安装教程](../installation/installation.md)完成了PaddleX的安装。
 
 ## 1、选择产线
 
-PaddleX中每条产线都可以解决特定任务场景的问题如目标检测、时序预测、语义分割等,您需要根据具体任务选择后续进行开发的产线。例如此处为登机牌识别任务,对应 PaddleX 的【通用 OCR 产线】。更多任务与产线的对应关系可以在 [PaddleX产线列表(CPU/GPU)](../support_list/pipelines_list.md)查询。
+PaddleX中每条产线都可以解决特定任务场景的问题,如目标检测、时序预测、语义分割等,您需要根据具体任务选择后续进行开发的产线。例如此处为登机牌识别任务,对应 PaddleX 的[通用OCR产线](./tutorials/ocr_pipelines/OCR.md)。更多任务与产线的对应关系可以在 [PaddleX产线列表(CPU/GPU)](../support_list/pipelines_list.md)查询。
 
 ## 2、快速体验
 
@@ -39,36 +40,34 @@ PaddleX提供了三种可以快速体验产线效果的方式,您可以根据
 * 命令行快速体验:[PaddleX产线命令行使用说明](../pipeline_usage/instructions/pipeline_CLI_usage.md)
 * Python脚本快速体验:[PaddleX产线Python脚本使用说明](../pipeline_usage/instructions/pipeline_python_API.md)
 
-以实现登机牌识别任务的通用OCR产线为例,一行命令即可快速体验产线效果:
+以实现登机牌识别任务的通用OCR产线为例,可以用三种方式体验产线效果:
 
-```bash
-paddlex --pipeline OCR --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png --device gpu:0  --save ./output/
-```
-Parameter description:
+**🌐 Online Experience**
+  
+You can [experience online](https://aistudio.baidu.com/community/app/91660/webUI?source=appMineRecent) the effect of the General OCR pipeline on AI Studio, recognizing the official demo images, for example:
 
-```bash
---pipeline: the pipeline name, here the OCR pipeline
---input: local path or URL of the input image to be processed
---device: the GPU index to use (e.g. gpu:0 is GPU 0, and gpu:1,2 are GPUs 1 and 2); the CPU can also be used (--device cpu)
-```
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/ocr/02.png)
 
-To modify the pipeline configuration, obtain the configuration file and edit it; the configuration file can be obtained as follows:
+**💻 Command Line Experience**

+A single command lets you quickly experience the pipeline: use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png), replacing `--input` with a local path, to run the prediction:
 ```bash
-paddlex --get_pipeline_config OCR
+paddlex --pipeline OCR --input general_ocr_002.png --device gpu:0
 ```
-
-After obtaining the pipeline configuration file, replace `--pipeline` with the configuration file's save path to make it take effect. For example, if the configuration file is saved at `./ocr.yaml`, simply run:
+Parameter description:
 
 ```bash
-paddlex --pipeline ./ocr.yaml --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png --save ./output/
+--pipeline: the pipeline name, here the OCR pipeline
+--input: local path or URL of the input image to be processed
+--device: the GPU index to use (e.g. gpu:0 is GPU 0, and gpu:1,2 are GPUs 1 and 2); the CPU can also be used (--device cpu)
 ```
-Parameters such as `--device` do not need to be specified here; the values in the configuration file will be used. If they are specified anyway, the specified values take precedence.
+<details>
+   <summary> 👉 Click to view the run result</summary>

 After running, the result is:
 
 ```bash
-{'input_path': '/root/.paddlex/predict_input/general_ocr_002.png', 'dt_polys': [array([[ 6, 13],
+{'input_path': 'general_ocr_002.png', 'dt_polys': [array([[ 6, 13],
        [64, 13],
        [64, 31],
        [ 6, 31]], dtype=int16), array([[210,  14],
@@ -81,20 +80,79 @@ paddlex --pipeline ./ocr.yaml --input https://paddle-model-ecology.bj.bcebos.com
 The visualization result is as follows:
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/boardingpass.png)
+
+</details>
+
+When the above command is executed, the default OCR pipeline configuration file is loaded. If you need a custom configuration file, follow the steps below:
+
+<details>
+   <summary> 👉 Click to expand</summary>
+
+Obtain the OCR pipeline configuration file:
+```bash
+paddlex --get_pipeline_config OCR
+```
+
+After execution, the OCR pipeline configuration file will be saved in the current directory. If you want a custom save location, run the following command (assuming the custom location is `./my_path`):
+
+```bash
+paddlex --get_pipeline_config OCR --save_path ./my_path
+```
+
+After obtaining the pipeline configuration file, replace `--pipeline` with the configuration file's save path to make it take effect. For example, if the configuration file is saved at `./ocr.yaml`, simply run:
+
+```bash
+paddlex --pipeline ./ocr.yaml --input general_ocr_002.png
+```
+Parameters such as `--model` and `--device` do not need to be specified here; the values in the configuration file will be used. If they are specified anyway, the specified values take precedence.
+</details>
+
+**💻 Python Script Experience**
+
+A few lines of code are enough to quickly experience the pipeline:
+
+```python
+from paddlex import create_pipeline
+
+pipeline = create_pipeline(pipeline="ocr")
+
+output = pipeline.predict("general_ocr_002.png")
+for batch in output:
+    for item in batch:
+        res = item['result']
+        res.print()
+        res.save_to_img("./output/")
+        res.save_to_json("./output/")
+```
+
+The script performs the following steps:
+
+* `create_pipeline()` instantiates a pipeline object
+* An image is passed in, and the pipeline object's `predict` method is called for inference
+* The prediction results are processed
+
+> ❗ The Python script produces the same results as the command line method.
+
+
+If the pretrained pipeline meets your expectations, you can proceed directly to [Development Integration/Deployment](#6开发集成部署); if not, optimize the pipeline's performance following the steps below.
 ## 3. Model Selection (Optional)
 
-Since a pipeline may contain one or more models, when fine-tuning you need to determine which of them to fine-tune based on your test results. For the boarding pass OCR pipeline here, the pipeline contains a text detection model (e.g. `PP-OCRv4_mobile_det`) and a text recognition model (e.g. `PP-OCRv4_mobile_rec`): if text localization is inaccurate, fine-tune the text detection model; if text recognition is inaccurate, fine-tune the text recognition model. If you are unsure which models a pipeline contains, consult the usage tutorial of each pipeline.
+Since a pipeline may contain one or more single-function modules, when fine-tuning you need to determine, based on your test results, which module's model to fine-tune.
+
+For the boarding pass OCR pipeline here, the pipeline contains a text detection model (e.g. `PP-OCRv4_mobile_det`) and a text recognition model (e.g. `PP-OCRv4_mobile_rec`): if text localization is inaccurate, fine-tune the text detection model; if text recognition is inaccurate, fine-tune the text recognition model. If you are unsure which models a pipeline contains, consult the [model list](../support_list/models_list.md).
+
+
 
 ## 4. Model Fine-Tuning (Optional)

-After determining the model to fine-tune, you need to train it with your private dataset. PaddleX provides a single-model development tool, and a single command completes the training:
+After determining the model to fine-tune, you need to train it with your private dataset. Taking the text recognition model (`PP-OCRv4_mobile_rec`) as an example, a single command completes the training:
 
 ```bash
 python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
     -o Global.mode=train \
     -o Global.dataset_dir=your/dataset_dir
 ```
-In addition, PaddleX provides detailed tutorials on preparing private datasets for fine-tuning, single-model inference, and more; see the [PaddleX Single-Model Development Tool Tutorial](../module_usage/module_develop_guide.md)
+In addition, PaddleX provides detailed tutorials on preparing private datasets for fine-tuning, single-model inference, and more; see the [PaddleX Module Usage Tutorials](../../README.md#-文档)
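As a sketch of that dataset-preparation workflow, validation typically goes through the same entry point before training; the `check_dataset` mode below is an assumption based on PaddleX's single-model tooling and is not shown in this diff:

```bash
# Assumed dataset validation step (`Global.mode=check_dataset` is an assumption).
python main.py -c paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml \
    -o Global.mode=check_dataset \
    -o Global.dataset_dir=your/dataset_dir
```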
 
 ## 5. Pipeline Testing (Optional)
 
@@ -112,28 +170,19 @@ Pipeline:
   rec_device: "gpu"
 ......
 ```
-Then, load the modified pipeline configuration file by following the command line or Python script method in [Quick Start](#2快速体验).
+Then, load the modified pipeline configuration file by following the command line method in [Quick Start](#2快速体验) or the [Python script method](#6开发集成部署), as shown in the sketch below.
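For example, assuming the modified configuration was saved as `./ocr.yaml` (an illustrative path), the test run reuses the earlier invocation unchanged:

```bash
# Load the edited pipeline config; the fine-tuned weights come from the yaml itself.
paddlex --pipeline ./ocr.yaml --input general_ocr_002.png --device gpu:0
```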
 
 If the results are satisfactory, proceed to [Development Integration/Deployment](#6开发集成部署) with the fine-tuned pipeline; if not, return to [Model Selection](#3模型选择可选) and continue fine-tuning models of other task modules until the results are satisfactory.

 ## 6. Development Integration/Deployment

-PaddleX provides a concise Python API that lets you integrate a model pipeline into your project with a few lines of code. Sample code for integrating the boarding pass OCR pipeline is as follows:
+If the pretrained pipeline meets your requirements for inference speed and accuracy, you can proceed directly to development integration/deployment.
 
-```bash
-from paddlex import create_pipeline
-pipeline = create_pipeline(pipeline="OCR")
-output = pipeline.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_001.png")
-for res in output:
-    res.print(json_format=False)
-    res.save_to_img("./output/")
-    res.save_to_json("./output/res.json")
-```
-For more detailed Python integration of model pipelines, see the [PaddleX Pipeline Python Script Usage Instructions](../pipeline_usage/instructions/pipeline_python_API.md)
+To use the pipeline directly in your Python project, refer to the [PaddleX Pipeline Python Script Usage Instructions](./instructions/pipeline_python_API.md) and the Python sample code in [Quick Start](#2快速体验).

-PaddleX also provides three deployment methods along with detailed deployment tutorials:
+In addition, PaddleX provides three other deployment methods, detailed as follows:

-🚀 **High-Performance Deployment**: In real production environments, many applications have stringent performance requirements for deployment (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups; for the detailed workflow, see the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy.md).
+🚀 **High-Performance Inference**: In real production environments, many applications have stringent performance requirements for deployment (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups; for the detailed workflow, see the [PaddleX High-Performance Deployment Guide](../pipeline_deploy/high_performance_deploy.md).

 ☁️ **Service-Based Deployment**: Service-based deployment is a common form of deployment in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. PaddleX enables low-cost service-based deployment of pipelines; for the detailed workflow, see the [PaddleX Service-Based Deployment Guide](../pipeline_deploy/service_deploy.md).
 
@@ -142,20 +191,23 @@ for res in output:
 
 
 
-> ❗ Friendly reminder: PaddleX provides detailed usage instructions for every pipeline; choose according to your needs. The instructions for all pipelines are listed below:
+> ❗ PaddleX provides detailed usage instructions for every pipeline; choose according to your needs. The instructions for all pipelines are listed below:

 | Pipeline Name      | Detailed Description                                                                                          |
 |--------------------|----------------------------------------------------------------------------------------------------------------|
-| Document Scene Information Extraction v3 | [Document Scene Information Extraction v3 Pipeline Python Script Usage Instructions](/docs_new/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md) |
-| General Image Classification | [General Image Classification Pipeline Python Script Usage Instructions](/docs_new/pipeline_usage/tutorials/cv_pipelines/image_classification.md) |
-| General Object Detection | [General Object Detection Pipeline Python Script Usage Instructions](/docs_new/pipeline_usage/tutorials/cv_pipelines/image_classification.md) |
-| General Instance Segmentation | [General Instance Segmentation Pipeline Python Script Usage Instructions](/docs_new/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.md) |
-| General Semantic Segmentation | [General Semantic Segmentation Pipeline Python Script Usage Instructions](/docs_new/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.md) |
-| General Image Multi-Label Classification | [General Image Multi-Label Classification Pipeline Python Script Usage Instructions](/docs_new/pipeline_usage/tutorials/cv_pipelines/image_multi_label_lassification.md) |
-| Small Object Detection |  [Small Object Detection Pipeline Python Script Usage Instructions](/docs_new/pipeline_usage/tutorials/cv_pipelines/small_object_detection.md) |
-| Image Anomaly Detection | [Image Anomaly Detection Pipeline Python Script Usage Instructions](/docs_new/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.md) |
-| General OCR | [General OCR Pipeline Python Script Usage Instructions](/docs_new/pipeline_usage/tutorials/ocr_pipelies/OCR.md) |
-| General Table Recognition | [General Table Recognition Pipeline Python Script Usage Instructions](/docs_new/pipeline_usage/tutorials/ocr_pipelies/table_recognition.md) |
-| General Time Series Forecasting | [General Time Series Forecasting Pipeline Python Script Usage Instructions](/docs_new/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.md) |
-| General Time Series Anomaly Detection | [General Time Series Anomaly Detection Pipeline Python Script Usage Instructions](/docs_new/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md) |
-| General Time Series Classification | [General Time Series Classification Pipeline Python Script Usage Instructions](/docs_new/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md) |
+| Document Scene Information Extraction v3 | [Document Scene Information Extraction v3 Pipeline Usage Tutorial](./tutorials/information_extration_pipelines/document_scene_information_extraction.md) |
+| General Image Classification | [General Image Classification Pipeline Usage Tutorial](./tutorials/cv_pipelines/image_classification.md) |
+| General Object Detection | [General Object Detection Pipeline Usage Tutorial](./tutorials/cv_pipelines/object_detection.md) |
+| General Instance Segmentation | [General Instance Segmentation Pipeline Usage Tutorial](./tutorials/cv_pipelines/instance_segmentation.md) |
+| General Semantic Segmentation | [General Semantic Segmentation Pipeline Usage Tutorial](./tutorials/cv_pipelines/semantic_segmentation.md) |
+| General Image Multi-Label Classification | [General Image Multi-Label Classification Pipeline Usage Tutorial](./tutorials/cv_pipelines/image_multi_label_classification.md) |
+| Small Object Detection |  [Small Object Detection Pipeline Usage Tutorial](./tutorials/cv_pipelines/small_object_detection.md) |
+| Image Anomaly Detection | [Image Anomaly Detection Pipeline Usage Tutorial](./tutorials/cv_pipelines/image_anomaly_detection.md) |
+| General OCR | [General OCR Pipeline Usage Tutorial](./tutorials/ocr_pipelines/OCR.md) |
+| General Table Recognition | [General Table Recognition Pipeline Usage Tutorial](./tutorials/ocr_pipelines/table_recognition.md) |
+| Formula Recognition | [Formula Recognition Pipeline Usage Tutorial](./tutorials/ocr_pipelines/formula_recognition.md) |
+| Seal Recognition | [Seal Recognition Pipeline Usage Tutorial](./tutorials/ocr_pipelines/seal_recognition.md) |
+| Time Series Forecasting | [General Time Series Forecasting Pipeline Usage Tutorial](./tutorials/time_series_pipelines/time_series_forecasting.md) |
+| Time Series Anomaly Detection | [General Time Series Anomaly Detection Pipeline Usage Tutorial](./tutorials/time_series_pipelines/time_series_anomaly_detection.md) |
+| Time Series Classification | [General Time Series Classification Pipeline Usage Tutorial](./tutorials/time_series_pipelines/time_series_classification.md) |
+

Changes not shown because the diff is too large
+ 20 - 23
docs/pipeline_usage/pipeline_develop_guide_en.md


+ 4 - 4
docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.md

@@ -24,7 +24,7 @@ All pretrained model pipelines provided by PaddleX can be experienced quickly; you can
 ### 2.1 Command Line Experience
 A single command lets you quickly experience the image anomaly detection pipeline: use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/uad_grid.png), replacing `--input` with a local path, to run the prediction
 
-```
+```bash
 paddlex --pipeline anomaly_detection --input uad_grid.png --device gpu:0
 ```
 Parameter description:
@@ -52,8 +52,8 @@ paddlex --get_pipeline_config anomaly_detection --save_path ./my_path
 
 After obtaining the pipeline configuration file, replace `--pipeline` with the configuration file's save path to make it take effect. For example, if the configuration file is saved at `./anomaly_detection.yaml`, simply run:
 
-```
-paddlex --pipeline ./anomaly_detection.yaml --input uad_grid.png
+```bash
+paddlex --pipeline ./anomaly_detection.yaml --input uad_grid.png --device gpu:0
 ```
 
 Parameters such as `--model` and `--device` do not need to be specified here; the values in the configuration file will be used. If they are specified anyway, the specified values take precedence.
@@ -63,7 +63,7 @@ paddlex --pipeline ./anomaly_detection.yaml --input uad_grid.png
 After running, the result is:
 
 ```
-{'img_path': '/root/.paddlex/predict_input/uad_grid.png'}
+{'input_path': 'uad_grid.png'}
 ```
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/image_anomaly_detection/02.png)
 

+ 4 - 4
docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection_en.md

@@ -5,7 +5,7 @@
 ## 1. Introduction to Image Anomaly Detection Pipeline
 Image anomaly detection is an image processing technique that identifies unusual or non-conforming patterns within images through analysis. It is widely applied in industrial quality inspection, medical image analysis, and security monitoring. By leveraging machine learning and deep learning algorithms, image anomaly detection can automatically recognize potential defects, anomalies, or abnormal behaviors in images, enabling us to promptly identify issues and take corresponding actions. The image anomaly detection system is designed to automatically detect and mark anomalies in images, enhancing work efficiency and accuracy.
 
-![](/tmp/images/pipelines/image_anomaly_detection/01.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/image_anomaly_detection/01.png)
 
 **The image anomaly detection pipeline includes an unsupervised anomaly detection module, with the following model benchmarks**:
 
@@ -52,7 +52,7 @@ paddlex --get_pipeline_config anomaly_detection --save_path ./my_path
 After obtaining the pipeline configuration file, replace `--pipeline` with the configuration file save path to make the configuration file take effect. For example, if the configuration file save path is `./anomaly_detection.yaml`, simply execute:
 
 ```bash
-paddlex --pipeline ./anomaly_detection.yaml --input uad_grid.png
+paddlex --pipeline ./anomaly_detection.yaml --input uad_grid.png --device gpu:0
 ```
 
 Here, parameters such as `--model` and `--device` do not need to be specified, as they will use the parameters in the configuration file. If parameters are still specified, the specified parameters will take precedence.
@@ -62,9 +62,9 @@ Here, parameters such as `--model` and `--device` do not need to be specified, a
 After running, the result is:
 
 ```
-{'img_path': '/root/.paddlex/predict_input/uad_grid.png'}
+{'input_path': 'uad_grid.png'}
 ```
-![](/tmp/images/pipelines/image_anomaly_detection/02.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/image_anomaly_detection/02.png)
 
 The visualized image is not saved by default. You can customize the save path through `--save_path`, after which all results will be saved in the specified path.
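For instance, a run that also persists the visualization could look like this (the output directory is illustrative):

```bash
# Same invocation as above, plus --save_path to keep the rendered result.
paddlex --pipeline anomaly_detection --input uad_grid.png --device gpu:0 --save_path ./output
```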
 

+ 6 - 6
docs/pipeline_usage/tutorials/cv_pipelines/image_classification.md

@@ -627,7 +627,7 @@ All pretrained model pipelines provided by PaddleX can be experienced quickly; you can
 #### 2.2.1 Command Line Experience
 A single command lets you quickly experience the image classification pipeline: use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg), replacing `--input` with a local path, to run the prediction
 
-```
+```bash
 paddlex --pipeline image_classification --input general_image_classification_001.jpg --device gpu:0
 ```
 Parameter description:
@@ -654,8 +654,8 @@ paddlex --get_pipeline_config image_classification --save_path ./my_path
 
 After obtaining the pipeline configuration file, replace `--pipeline` with the configuration file's save path to make it take effect. For example, if the configuration file is saved at `./image_classification.yaml`, simply run:
 
-```
-paddlex --pipeline ./image_classification.yaml --input general_image_classification_001.jpg
+```bash
+paddlex --pipeline ./image_classification.yaml --input general_image_classification_001.jpg --device gpu:0
 ```
 Parameters such as `--model` and `--device` do not need to be specified here; the values in the configuration file will be used. If they are specified anyway, the specified values take precedence.
 
@@ -664,7 +664,7 @@ paddlex --pipeline ./image_classification.yaml --input general_image_classificat
 After running, the result is:
 
 ```
-{'img_path': './my_path/general_image_classification_001.jpg', 'class_ids': [296, 170, 356, 258, 248], 'scores': [0.62736, 0.03752, 0.03256, 0.0323, 0.03194], 'label_names': ['ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus', 'Irish wolfhound', 'weasel', 'Samoyed, Samoyede', 'Eskimo dog, husky']}
+{'input_path': 'general_image_classification_001.jpg', 'class_ids': [296, 170, 356, 258, 248], 'scores': [0.62736, 0.03752, 0.03256, 0.0323, 0.03194], 'label_names': ['ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus', 'Irish wolfhound', 'weasel', 'Samoyed, Samoyede', 'Eskimo dog, husky']}
 ```
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/image_classification/03.png)
 
@@ -1234,12 +1234,12 @@ PaddleX supports NVIDIA GPU, Kunlunxin XPU, Ascend NPU, Cambricon MLU, and other

 For example, to run inference for the image classification pipeline on an NVIDIA GPU, the command is:
 
-```
+```bash
 paddlex --pipeline image_classification --input general_image_classification_001.jpg --device gpu:0
 ```
 To switch the hardware to Ascend NPU, simply change `--device` to npu:0:
 
-```
+```bash
 paddlex --pipeline image_classification --input general_image_classification_001.jpg --device npu:0
 ```
 To use the general image classification pipeline on more types of hardware, refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).
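Following the same pattern, other supported accelerators would only change the device prefix. The `xpu:0` string below is an assumption by analogy with `gpu:0` and `npu:0`; consult the multi-device guide for the authoritative identifiers:

```bash
# Assumed Kunlunxin XPU variant of the same command (device string by analogy).
paddlex --pipeline image_classification --input general_image_classification_001.jpg --device xpu:0
```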

+ 5 - 5
docs/pipeline_usage/tutorials/cv_pipelines/image_classification_en.md

@@ -5,7 +5,7 @@
 ## 1. Introduction to the General Image Classification Pipeline
 Image classification is a technique that assigns images to predefined categories. It is widely applied in object recognition, scene understanding, and automatic annotation. Image classification can identify various objects such as animals, plants, traffic signs, and categorize them based on their features. By leveraging deep learning models, image classification can automatically extract image features and perform accurate classification.
 
-![](/tmp/images/pipelines/image_classification/01.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/image_classification/01.png)
 
 **The General Image Classification Pipeline includes an image classification module. If you prioritize model accuracy, choose a model with higher accuracy. If you prioritize inference speed, select a model with faster inference. If you prioritize model storage size, choose a model with a smaller storage size.**
 
@@ -616,7 +616,7 @@ PaddleX provides pre-trained model pipelines that can be quickly experienced. Yo
 ### 2.1 Online Experience
 You can [experience online](https://aistudio.baidu.com/community/app/100061/webUI) the effects of the General Image Classification Pipeline using the demo images provided by the official. For example:
 
-![](/tmp/images/pipelines/image_classification/02.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/image_classification/02.png)
 
 If you are satisfied with the pipeline's performance, you can directly integrate and deploy it. If not, you can also use your private data to **fine-tune the model within the pipeline**.
 
@@ -654,7 +654,7 @@ paddlex --get_pipeline_config image_classification --save_path ./my_path
 After obtaining the pipeline configuration file, replace `--pipeline` with the configuration file's save path to make the configuration file take effect. For example, if the configuration file's save path is `./image_classification.yaml`, simply execute:
 
 ```bash
-paddlex --pipeline ./image_classification.yaml --input general_image_classification_001.jpg
+paddlex --pipeline ./image_classification.yaml --input general_image_classification_001.jpg --device gpu:0
 ```
 Here, parameters such as `--model` and `--device` do not need to be specified, as they will use the parameters in the configuration file. If you still specify parameters, the specified parameters will take precedence.
 
@@ -663,9 +663,9 @@ Here, parameters such as `--model` and `--device` do not need to be specified, a
 After running, the result will be:
 
 ```
-{'img_path': '/root/.paddlex/predict_input/general_image_classification_001.jpg', 'class_ids': [296, 170, 356, 258, 248], 'scores': [0.62736, 0.03752, 0.03256, 0.0323, 0.03194], 'label_names': ['ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus', 'Irish wolfhound', 'weasel', 'Samoyed, Samoyede', 'Eskimo dog, husky']}
+{'input_path': 'general_image_classification_001.jpg', 'class_ids': [296, 170, 356, 258, 248], 'scores': [0.62736, 0.03752, 0.03256, 0.0323, 0.03194], 'label_names': ['ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus', 'Irish wolfhound', 'weasel', 'Samoyed, Samoyede', 'Eskimo dog, husky']}
 ```
-![](/tmp/images/pipelines/image_classification/03.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/image_classification/03.png)
 
 
 The visualized image is not saved by default. You can customize the save path through `--save_path`, after which all results will be saved in the specified path.

+ 6 - 6
docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md

@@ -35,7 +35,7 @@ PaddleX supports experiencing the general image multi-label classification pipeline locally via the command line or
 ### 2.1 Command Line Experience
 A single command lets you quickly experience the image multi-label classification pipeline: use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg), replacing `--input` with a local path, to run the prediction
 
-```
+```bash
 paddlex --pipeline multi_label_image_classification --input general_image_classification_001.jpg --device gpu:0
 ```
 Parameter description:
@@ -62,8 +62,8 @@ paddlex --get_pipeline_config multi_label_image_classification --save_path ./my_
 
 After obtaining the pipeline configuration file, replace `--pipeline` with the configuration file's save path to make it take effect. For example, if the configuration file is saved at `./multi_label_image_classification.yaml`, simply run:
 
-```
-paddlex --pipeline ./multi_label_image_classification.yaml --input general_image_classification_001.jpg
+```bash
+paddlex --pipeline ./multi_label_image_classification.yaml --input general_image_classification_001.jpg --device gpu:0
 ```
 Parameters such as `--model` and `--device` do not need to be specified here; the values in the configuration file will be used. If they are specified anyway, the specified values take precedence.
 
@@ -73,7 +73,7 @@ paddlex --pipeline ./multi_label_image_classification.yaml --input general_image
 运行后,得到的结果为:
 
 ```
-{'img_path': '/root/.paddlex/predict_input/general_image_classification_001.jpg', 'class_ids': [21, 0, 30, 24], 'scores': [0.99257, 0.70596, 0.63001, 0.57852], 'label_names': ['bear', 'person', 'skis', 'backpack']}
+{'input_path': 'general_image_classification_001.jpg', 'class_ids': [21, 0, 30, 24], 'scores': [0.99257, 0.70596, 0.63001, 0.57852], 'label_names': ['bear', 'person', 'skis', 'backpack']}
 ```
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/image_multi_label_classification/02.png)
 
@@ -643,12 +643,12 @@ PaddleX 支持英伟达 GPU、昆仑芯 XPU、昇腾 NPU和寒武纪 MLU 等多
 
 例如,您使用英伟达 GPU 进行图像多标签分类产线的推理,使用的 Python 命令为:
 
-```
+```bash
 paddlex --pipeline multi_label_image_classification --input general_image_classification_001.jpg --device gpu:0
 ```
 此时,若您想将硬件切换为昇腾 NPU,仅需对 Python 命令中的 `--device` 修改为 npu:0 即可:
 
-```
+```bash
 paddlex --pipeline multi_label_image_classification --input general_image_classification_001.jpg --device npu:0
 ```
 To use the general image multi-label classification pipeline on more types of hardware, refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).

+ 4 - 3
docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification_en.md

@@ -60,7 +60,7 @@ paddlex --get_pipeline_config multi_label_image_classification --save_path ./my_
 After obtaining the pipeline configuration file, replace `--pipeline` with the saved path of the configuration file to make it effective. For example, if the configuration file is saved at `./multi_label_image_classification.yaml`, simply execute:
 
 ```bash
-paddlex --pipeline ./multi_label_image_classification.yaml --input https://paddle-model-ecology.bj
+paddlex --pipeline ./multi_label_image_classification.yaml --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg --device gpu:0
 ```
 
 Where `--model`, `--device`, and other parameters are not specified, the parameters in the configuration file will be used. If parameters are specified, the specified parameters will take precedence.
@@ -70,7 +70,7 @@ Where `--model`, `--device`, and other parameters are not specified, the paramet
 After running, the result obtained is:
 
 ```
-{'img_path': '/root/.paddlex/predict_input/general_image_classification_001.jpg', 'class_ids': [21, 0, 30, 24], 'scores': [0.99257, 0.70596, 0.63001, 0.57852], 'label_names': ['bear', 'person', 'skis', 'backpack']}
+{'input_path': 'general_image_classification_001.jpg', 'class_ids': [21, 0, 30, 24], 'scores': [0.99257, 0.70596, 0.63001, 0.57852], 'label_names': ['bear', 'person', 'skis', 'backpack']}
 ```
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/image_multi_label_classification/02.png)
 
@@ -626,7 +626,8 @@ paddlex --pipeline multi_label_image_classification --input https://paddle-model
 ```
 
 At this point, if you wish to switch the hardware to Ascend NPU, simply modify the `--device` in the Python command to `npu:0`:
-```
+
+```bash
 paddlex --pipeline multi_label_image_classification --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg --device npu:0
 ```
 If you want to use the General Image Multi-label Classification Pipeline on more diverse hardware, please refer to the [PaddleX Multi-device Usage Guide](../../../installation/multi_devices_use_guide_en.md).

+ 7 - 7
docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.md

@@ -23,12 +23,12 @@
 |Cascade-MaskRCNN-ResNet50-vd-SSLDv2-FPN|39.1|-|-|254.7 M|
 |MaskRCNN-ResNet50-FPN|35.6|-|-|157.5 M|
 |MaskRCNN-ResNet50-vd-FPN|36.4|-|-|157.5 M|
-|MaskRCNN-ResNet50-vd-SSLDv2-FPN|38.2|-|-|157.2 M|
 |MaskRCNN-ResNet50|32.8|-|-|127.8 M|
 |MaskRCNN-ResNet101-FPN|36.6|-|-|225.4 M|
 |MaskRCNN-ResNet101-vd-FPN|38.1|-|-|225.1 M|
 |MaskRCNN-ResNeXt101-vd-FPN|39.5|-|-|370.0 M|
 |PP-YOLOE_seg-S|32.5|-|-|31.5 M|
+|SOLOv2| 35.5|-|-|179.1 M|
 
 **Note: The above accuracy metrics are Mask AP(0.5:0.95) on the **[COCO2017](https://cocodataset.org/#home)** validation set. GPU inference times for all models are based on an NVIDIA Tesla T4 machine with FP32 precision; CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 
@@ -50,7 +50,7 @@ All pretrained model pipelines provided by PaddleX can be experienced quickly; you can
 #### 2.2.1 Command Line Experience
 A single command lets you quickly experience the instance segmentation pipeline: use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_instance_segmentation_004.png), replacing `--input` with a local path, to run the prediction
 
-```
+```bash
 paddlex --pipeline instance_segmentation --input general_instance_segmentation_004.png --device gpu:0
 ```
 Parameter description:
@@ -77,8 +77,8 @@ paddlex --get_pipeline_config instance_segmentation --save_path ./my_path
 
 After obtaining the pipeline configuration file, replace `--pipeline` with the configuration file's save path to make it take effect. For example, if the configuration file is saved at `./instance_segmentation.yaml`, simply run:
 
-```
-paddlex --pipeline ./instance_segmentation.yaml --input general_instance_segmentation_004.png
+```bash
+paddlex --pipeline ./instance_segmentation.yaml --input general_instance_segmentation_004.png --device gpu:0
 ```
 Parameters such as `--model` and `--device` do not need to be specified here; the values in the configuration file will be used. If they are specified anyway, the specified values take precedence.
 
@@ -87,7 +87,7 @@ paddlex --pipeline ./instance_segmentation.yaml --input general_instance_segment
 After running, the result is:
 
 ```
-{'img_path': '/my_path/general_instance_segmentation_004.png', 'boxes': [{'cls_id': 0, 'label': 'person', 'score': 0.8698326945304871, 'coordinate': [339, 0, 639, 575]}, {'cls_id': 0, 'label': 'person', 'score': 0.8571141362190247, 'coordinate': [0, 0, 195, 575]}, {'cls_id': 0, 'label': 'person', 'score': 0.8202633857727051, 'coordinate': [88, 113, 401, 574]}, {'cls_id': 0, 'label': 'person', 'score': 0.7108577489852905, 'coordinate': [522, 21, 767, 574]}, {'cls_id': 27, 'label': 'tie', 'score': 0.554280698299408, 'coordinate': [247, 311, 355, 574]}]}
+{'input_path': 'general_instance_segmentation_004.png', 'boxes': [{'cls_id': 0, 'label': 'person', 'score': 0.8698326945304871, 'coordinate': [339, 0, 639, 575]}, {'cls_id': 0, 'label': 'person', 'score': 0.8571141362190247, 'coordinate': [0, 0, 195, 575]}, {'cls_id': 0, 'label': 'person', 'score': 0.8202633857727051, 'coordinate': [88, 113, 401, 574]}, {'cls_id': 0, 'label': 'person', 'score': 0.7108577489852905, 'coordinate': [522, 21, 767, 574]}, {'cls_id': 27, 'label': 'tie', 'score': 0.554280698299408, 'coordinate': [247, 311, 355, 574]}]}
 ```
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/instance_segmentation/03.png)
 
@@ -673,12 +673,12 @@ PaddleX supports NVIDIA GPU, Kunlunxin XPU, Ascend NPU, Cambricon MLU, and other

 For example, to run inference for the instance segmentation pipeline on an NVIDIA GPU, the command is:
 
-```
+```bash
 paddlex --pipeline instance_segmentation --input general_instance_segmentation_004.png --device gpu:0
 ```
 To switch the hardware to Ascend NPU, simply change `--device` in the command to npu:0:
 
-```
+```bash
 paddlex --pipeline instance_segmentation --input general_instance_segmentation_004.png --device npu:0
 ```
 To use the general instance segmentation pipeline on more types of hardware, refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).

+ 4 - 4
docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation_en.md

@@ -24,12 +24,12 @@ Instance segmentation is a computer vision task that not only identifies the obj
 |Cascade-MaskRCNN-ResNet50-vd-SSLDv2-FPN|39.1|-|-|254.7 M|
 |MaskRCNN-ResNet50-FPN|35.6|-|-|157.5 M|
 |MaskRCNN-ResNet50-vd-FPN|36.4|-|-|157.5 M|
-|MaskRCNN-ResNet50-vd-SSLDv2-FPN|38.2|-|-|157.2 M|
 |MaskRCNN-ResNet50|32.8|-|-|127.8 M|
 |MaskRCNN-ResNet101-FPN|36.6|-|-|225.4 M|
 |MaskRCNN-ResNet101-vd-FPN|38.1|-|-|225.1 M|
 |MaskRCNN-ResNeXt101-vd-FPN|39.5|-|-|370.0 M|
 |PP-YOLOE_seg-S|32.5|-|-|31.5 M|
+|SOLOv2| 35.5|-|-|179.1 M|
 
 **Note: The above accuracy metrics are Mask AP(0.5:0.95) on the **[COCO2017](https://cocodataset.org/#home)** validation set. All GPU inference times are based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speeds are based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 
@@ -81,8 +81,8 @@ paddlex --get_pipeline_config instance_segmentation --save_path ./my_path
 
 After obtaining the pipeline configuration file, you can replace `--pipeline` with the configuration file save path to make the configuration file take effect. For example, if the configuration file save path is `./instance_segmentation.yaml`, simply execute:
 
-```
-paddlex --pipeline ./instance_segmentation.yaml --input general_instance_segmentation_004.png
+```bash
+paddlex --pipeline ./instance_segmentation.yaml --input general_instance_segmentation_004.png --device gpu:0
 ```
 
 Here, parameters such as `--model` and `--device` do not need to be specified; the parameters in the configuration file will be used. If parameters are still specified, the specified parameters will take precedence.
@@ -92,7 +92,7 @@ Where `--model`, `--device`, and other parameters do not need to be specified, a
 After running, the result is:
 
 ```
-{'img_path': '/root/.paddlex/predict_input/general_instance_segmentation_004.png', 'boxes': [{'cls_id': 0, 'label': 'person', 'score': 0.8698326945304871, 'coordinate': [339, 0, 639, 575]}, {'cls_id': 0, 'label': 'person', 'score': 0.8571141362190247, 'coordinate': [0, 0, 195, 575]}, {'cls_id': 0, 'label': 'person', 'score': 0.8202633857727051, 'coordinate': [88, 113, 401, 574]}, {'cls_id': 0, 'label': 'person', 'score': 0.7108577489852905, 'coordinate': [522, 21, 767, 574]}, {'cls_id': 27, 'label': 'tie', 'score': 0.554280698299408, 'coordinate': [247, 311, 355, 574]}]}
+{'input_path': 'general_instance_segmentation_004.png', 'boxes': [{'cls_id': 0, 'label': 'person', 'score': 0.8698326945304871, 'coordinate': [339, 0, 639, 575]}, {'cls_id': 0, 'label': 'person', 'score': 0.8571141362190247, 'coordinate': [0, 0, 195, 575]}, {'cls_id': 0, 'label': 'person', 'score': 0.8202633857727051, 'coordinate': [88, 113, 401, 574]}, {'cls_id': 0, 'label': 'person', 'score': 0.7108577489852905, 'coordinate': [522, 21, 767, 574]}, {'cls_id': 27, 'label': 'tie', 'score': 0.554280698299408, 'coordinate': [247, 311, 355, 574]}]}
 ```
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/instance_segmentation/03.png)

+ 6 - 6
docs/pipeline_usage/tutorials/cv_pipelines/object_detection.md

@@ -340,7 +340,7 @@ All pretrained model pipelines provided by PaddleX can be experienced quickly; you can
 #### 2.2.1 Command Line Experience
 A single command lets you quickly experience the object detection pipeline: use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_object_detection_002.png), replacing `--input` with a local path, to run the prediction
 
-```
+```bash
 paddlex --pipeline object_detection --input general_object_detection_002.png --device gpu:0
 ```
 Parameter description:
@@ -368,8 +368,8 @@ paddlex --get_pipeline_config object_detection --save_path ./my_path
 
 After obtaining the pipeline configuration file, replace `--pipeline` with the configuration file's save path to make it take effect. For example, if the configuration file is saved at `./object_detection.yaml`, simply run:
 
-```
-paddlex --pipeline ./object_detection.yaml --input general_object_detection_002.png
+```bash
+paddlex --pipeline ./object_detection.yaml --input general_object_detection_002.png --device gpu:0
 ```
 
 Parameters such as `--model` and `--device` do not need to be specified here; the values in the configuration file will be used. If they are specified anyway, the specified values take precedence.
@@ -379,7 +379,7 @@ paddlex --pipeline ./object_detection.yaml --input general_object_detection_002.
 After running, the result is:
 
 ```
-{'img_path': '/root/.paddlex/predict_input/general_object_detection_002.png', 'boxes': [{'cls_id': 49, 'label': 'orange', 'score': 0.8188097476959229, 'coordinate': [661, 93, 870, 305]}, {'cls_id': 47, 'label': 'apple', 'score': 0.7743489146232605, 'coordinate': [76, 274, 330, 520]}, {'cls_id': 47, 'label': 'apple', 'score': 0.7270504236221313, 'coordinate': [285, 94, 469, 297]}, {'cls_id': 46, 'label': 'banana', 'score': 0.5570532083511353, 'coordinate': [310, 361, 685, 712]}, {'cls_id': 47, 'label': 'apple', 'score': 0.5484835505485535, 'coordinate': [764, 285, 924, 440]}, {'cls_id': 47, 'label': 'apple', 'score': 0.5160726308822632, 'coordinate': [853, 169, 987, 303]}, {'cls_id': 60, 'label': 'dining table', 'score': 0.5142655968666077, 'coordinate': [0, 0, 1072, 720]}, {'cls_id': 47, 'label': 'apple', 'score': 0.5101479291915894, 'coordinate': [57, 23, 213, 176]}]}
+{'input_path': 'general_object_detection_002.png', 'boxes': [{'cls_id': 49, 'label': 'orange', 'score': 0.8188097476959229, 'coordinate': [661, 93, 870, 305]}, {'cls_id': 47, 'label': 'apple', 'score': 0.7743489146232605, 'coordinate': [76, 274, 330, 520]}, {'cls_id': 47, 'label': 'apple', 'score': 0.7270504236221313, 'coordinate': [285, 94, 469, 297]}, {'cls_id': 46, 'label': 'banana', 'score': 0.5570532083511353, 'coordinate': [310, 361, 685, 712]}, {'cls_id': 47, 'label': 'apple', 'score': 0.5484835505485535, 'coordinate': [764, 285, 924, 440]}, {'cls_id': 47, 'label': 'apple', 'score': 0.5160726308822632, 'coordinate': [853, 169, 987, 303]}, {'cls_id': 60, 'label': 'dining table', 'score': 0.5142655968666077, 'coordinate': [0, 0, 1072, 720]}, {'cls_id': 47, 'label': 'apple', 'score': 0.5101479291915894, 'coordinate': [57, 23, 213, 176]}]}
 ```
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/object_detection/03.png)
@@ -962,12 +962,12 @@ PaddleX supports NVIDIA GPU, Kunlunxin XPU, Ascend NPU, Cambricon MLU, and other

 For example, to run inference for the object detection pipeline on an NVIDIA GPU, the command is:
 
-```
+```bash
 paddlex --pipeline object_detection --input general_object_detection_002.png --device gpu:0
 ```
 To switch the hardware to Ascend NPU, simply change `--device` in the command to npu:0:
 
-```
+```bash
 paddlex --pipeline object_detection --input general_object_detection_002.png --device npu:0
 ```
 To use the general object detection pipeline on more types of hardware, refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).

+ 2 - 2
docs/pipeline_usage/tutorials/cv_pipelines/object_detection_en.md

@@ -369,7 +369,7 @@ paddlex --get_pipeline_config object_detection --save_path ./my_path
 After obtaining the pipeline configuration file, replace `--pipeline` with the configuration file save path to make the configuration file effective. For example, if the configuration file save path is `./object_detection.yaml`, simply execute:
 
 ```bash
-paddlex --pipeline ./object_detection.yaml --input general_object_detection_002.png
+paddlex --pipeline ./object_detection.yaml --input general_object_detection_002.png --device gpu:0
 ```
 
 Here, parameters such as `--model` and `--device` do not need to be specified, as they will use the parameters in the configuration file. If these parameters are still specified, the specified parameters will take precedence.
@@ -379,7 +379,7 @@ Here, parameters such as `--model` and `--device` do not need to be specified, a
 After running, the result will be:
 
 ```
-{'img_path': '/root/.paddlex/predict_input/general_object_detection_002.png', 'boxes': [{'cls_id': 49, 'label': 'orange', 'score': 0.8188097476959229, 'coordinate': [661, 93, 870, 305]}, {'cls_id': 47, 'label': 'apple', 'score': 0.7743489146232605, 'coordinate': [76, 274, 330, 520]}, {'cls_id': 47, 'label': 'apple', 'score': 0.7270504236221313, 'coordinate': [285, 94, 469, 297]}, {'cls_id': 46, 'label': 'banana', 'score': 0.5570532083511353, 'coordinate': [310, 361, 685, 712]}, {'cls_id': 47, 'label': 'apple', 'score': 0.5484835505485535, 'coordinate': [764, 285, 924, 440]}, {'cls_id': 47, 'label': 'apple', 'score': 0.5160726308822632, 'coordinate': [853, 169, 987, 303]}, {'cls_id': 60, 'label': 'dining table', 'score': 0.5142655968666077, 'coordinate': [0, 0, 1072, 720]}, {'cls_id': 47, 'label': 'apple', 'score': 0.5101479291915894, 'coordinate': [57, 23, 213, 176]}]}
+{'input_path': 'general_object_detection_002.png', 'boxes': [{'cls_id': 49, 'label': 'orange', 'score': 0.8188097476959229, 'coordinate': [661, 93, 870, 305]}, {'cls_id': 47, 'label': 'apple', 'score': 0.7743489146232605, 'coordinate': [76, 274, 330, 520]}, {'cls_id': 47, 'label': 'apple', 'score': 0.7270504236221313, 'coordinate': [285, 94, 469, 297]}, {'cls_id': 46, 'label': 'banana', 'score': 0.5570532083511353, 'coordinate': [310, 361, 685, 712]}, {'cls_id': 47, 'label': 'apple', 'score': 0.5484835505485535, 'coordinate': [764, 285, 924, 440]}, {'cls_id': 47, 'label': 'apple', 'score': 0.5160726308822632, 'coordinate': [853, 169, 987, 303]}, {'cls_id': 60, 'label': 'dining table', 'score': 0.5142655968666077, 'coordinate': [0, 0, 1072, 720]}, {'cls_id': 47, 'label': 'apple', 'score': 0.5101479291915894, 'coordinate': [57, 23, 213, 176]}]}
 ```
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/object_detection/03.png)

+ 8 - 8
docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.md

@@ -58,7 +58,7 @@ All pretrained model pipelines provided by PaddleX can be experienced quickly; you can
 #### 2.2.1 Command Line Experience
 A single command lets you quickly experience the semantic segmentation pipeline: use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/application/semantic_segmentation/makassaridn-road_demo.png), replacing `--input` with a local path, to run the prediction
 
-```
+```bash
 paddlex --pipeline semantic_segmentation --input makassaridn-road_demo.png --device gpu:0
 ```
 Parameter description:
@@ -85,8 +85,8 @@ paddlex --get_pipeline_config semantic_segmentation --save_path ./my_path
 
 After obtaining the pipeline configuration file, replace `--pipeline` with the configuration file's save path to make it take effect. For example, if the configuration file is saved at `./semantic_segmentation.yaml`, simply run:
 
-```
-paddlex --pipeline ./semantic_segmentation.yaml --input semantic_segmentation/makassaridn-road_demo.png
+```bash
+paddlex --pipeline ./semantic_segmentation.yaml --input makassaridn-road_demo.png --device gpu:0
 ```
 Parameters such as `--model` and `--device` do not need to be specified here; the values in the configuration file will be used. If they are specified anyway, the specified values take precedence.
 
@@ -95,7 +95,7 @@ paddlex --pipeline ./semantic_segmentation.yaml --input semantic_segmentation/ma
 After running, the result is:
 
 ```
-{'img_path': '/root/.paddlex/predict_input/general_object_detection_002.png'}
+{'input_path': 'general_object_detection_002.png'}
 ```
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/semantic_segmentation/03.png)
 The visualized image is not saved by default; you can customize the save path via `--save_path`, after which all results will be saved under the specified path.
@@ -108,7 +108,7 @@ from paddlex import create_pipeline
 
 pipeline = create_pipeline(pipeline="semantic_segmentation")
 
-output = pipeline.predict("semantic_segmentation/makassaridn-road_demo.png")
+output = pipeline.predict("makassaridn-road_demo.png")
 for res in output:
     res.print() ## Print the structured prediction output
     res.save_to_img("./output/") ## Save the visualized result image
@@ -154,7 +154,7 @@ for res in output:
 ```python
 from paddlex import create_pipeline
 pipeline = create_pipeline(pipeline="./my_path/semantic_segmentation.yaml")
-output = pipeline.predict("semantic_segmentation/makassaridn-road_demo.png")
+output = pipeline.predict("makassaridn-road_demo.png")
 for res in output:
     res.print() ## Print the structured prediction output
     res.save_to_img("./output/") ## Save the visualized result image
@@ -643,12 +643,12 @@ PaddleX supports NVIDIA GPU, Kunlunxin XPU, Ascend NPU, Cambricon MLU, and other

 For example, to run inference for the semantic segmentation pipeline on an NVIDIA GPU, the command is:
 
-```
+```bash
 paddlex --pipeline semantic_segmentation --input semantic_segmentation/makassaridn-road_demo.png --device gpu:0
 ```
 To switch the hardware to Ascend NPU, simply change `--device` in the command to npu:0:
 
-```
+```bash
 paddlex --pipeline semantic_segmentation --input semantic_segmentation/makassaridn-road_demo.png --device npu:0
 ```
 To use the general semantic segmentation pipeline on more types of hardware, refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).

+ 2 - 2
docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation_en.md

@@ -87,7 +87,7 @@ paddlex --get_pipeline_config semantic_segmentation --save_path ./my_path
 After obtaining the pipeline configuration file, replace `--pipeline` with the configuration file save path to make the configuration file take effect. For example, if the configuration file save path is `./semantic_segmentation.yaml`, simply execute:
 
 ```bash
-paddlex --pipeline ./semantic_segmentation.yaml --input makassaridn-road_demo.png
+paddlex --pipeline ./semantic_segmentation.yaml --input makassaridn-road_demo.png --device gpu:0
 ```
 
 Here, parameters such as `--model` and `--device` do not need to be specified; the parameters in the configuration file will be used. If parameters are still specified, the specified parameters will take precedence.
@@ -97,7 +97,7 @@ Here, parameters such as `--model` and `--device` do not need to be specified, a
 After running, the result is:
 
 ```bash
-{'img_path': '/root/.paddlex/predict_input/general_object_detection_002.png'}
+{'input_path': 'general_object_detection_002.png'}
 ```
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/semantic_segmentation/03.png)

Changes not shown because the diff is too large
+ 3 - 3
docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection.md


Changes not shown because the diff is too large
+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection_en.md


+ 4 - 13
docs/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md

@@ -453,7 +453,6 @@ chat_result.print()
         |-|-|-|-|
         |`keys`|`array`|List of keys.|Yes|
         |`vectorStore`|`object`|Serialized vector database result. Provided by the `buildVectorStore` operation.|Yes|
-        |`visionInfo`|`object`|Key information from the image. Provided by the `analyzeImage` operation.|Yes|
         |`llmName`|`string`|Name of the large language model.|No|
         |`llmParams`|`object`|API parameters of the large language model.|No|
 
@@ -478,7 +477,7 @@ chat_result.print()
 
         |Name|Type|Description|
         |-|-|-|
-        |`retrievalResult`|`string`|Knowledge retrieval result, which can be used as input to other operations.|
+        |`retrievalResult`|`object`|Knowledge retrieval result, which can be used as input to other operations.|
 
 - **`chat`**
 
@@ -495,9 +494,8 @@ chat_result.print()
         |`taskDescription`|`string`|Prompt task description.|No|
         |`rules`|`string`|Prompt rules, used to customize information extraction rules, e.g., to standardize the output format.|No|
         |`fewShot`|`string`|Prompt examples.|No|
-        |`useVectorStore`|`boolean`|Whether to enable the vector database. Enabled by default.|No|
         |`vectorStore`|`object`|Serialized vector database result. Provided by the `buildVectorStore` operation.|No|
-        |`retrievalResult`|`string`|Knowledge retrieval result. Provided by the `retrieveKnowledge` operation.|No|
+        |`retrievalResult`|`object`|Knowledge retrieval result. Provided by the `retrieveKnowledge` operation.|No|
         |`returnPrompts`|`boolean`|Whether to return the prompts used. Enabled by default.|No|
         |`llmName`|`string`|Name of the large language model.|No|
         |`llmParams`|`object`|API parameters of the large language model.|No|
@@ -597,7 +595,6 @@ if __name__ == "__main__":
             f.write(base64.b64decode(res["layoutImage"]))
         print(f"Output images saved at {ocr_img_path} and {layout_img_path}")
         print("")
-    print("="*50 + "\n\n")
 
     payload = {
         "visionInfo": result_vision["visionInfo"],
@@ -614,12 +611,10 @@ if __name__ == "__main__":
         pprint.pp(resp_vector.json())
         sys.exit(1)
     result_vector = resp_vector.json()["result"]
-    print("="*50 + "\n\n")
 
     payload = {
         "keys": keys,
         "vectorStore": result_vector["vectorStore"],
-        "visionInfo": result_vision["visionInfo"],
         "llmName": LLM_NAME,
         "llmParams": LLM_PARAMS,
     }
@@ -631,9 +626,6 @@ if __name__ == "__main__":
         pprint.pp(resp_retrieval.json())
         sys.exit(1)
     result_retrieval = resp_retrieval.json()["result"]
-    print("Knowledge retrieval result:")
-    print(result_retrieval["retrievalResult"])
-    print("="*50 + "\n\n")
 
     payload = {
         "keys": keys,
@@ -641,7 +633,6 @@ if __name__ == "__main__":
         "taskDescription": "",
         "rules": "",
         "fewShot": "",
-        "useVectorStore": True,
         "vectorStore": result_vector["vectorStore"],
         "retrievalResult": result_retrieval["retrievalResult"],
         "returnPrompts": True,
@@ -656,12 +647,12 @@ if __name__ == "__main__":
         pprint.pp(resp_chat.json())
         sys.exit(1)
     result_chat = resp_chat.json()["result"]
-    print("Prompts:")
+    print("\nPrompts:")
     pprint.pp(result_chat["prompts"])
     print("Final result:")
     print(len(result_chat["chatResult"]))
 ```
-**Note**: Fill in your ak and sk at `API_KEY` and `SECRET_KEY`.
+**Note**: Fill in your API key and secret key at `API_KEY` and `SECRET_KEY`.
 </details>
 </details>
 <br/>

+ 2 - 3
docs/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md

@@ -445,7 +445,7 @@ Interact with large language models to extract key information.
 </details>
 
 <details>
-<summary>Multilingual Service Invocation Examples</summary>
+<summary>Multi-Language Service Invocation Examples</summary>
 
 <details>
 <summary>Python</summary>
@@ -527,7 +527,6 @@ if __name__ == "__main__":
     payload = {
         "keys": keys,
         "vectorStore": result_vector["vectorStore"],
-        "visionInfo": result_vision["visionInfo"],
         "llmName": LLM_NAME,
         "llmParams": LLM_PARAMS,
     }
@@ -569,7 +568,7 @@ if __name__ == "__main__":
     print("Final result:")
     print(len(result_chat["chatResult"]))
 ```
-**Note**: Please fill in your ak and sk at `API_KEY` and `SECRET_KEY`.
+**Note**: Please fill in your API key and secret key at `API_KEY` and `SECRET_KEY`.
 </details>
 </details>
 <br/>

Changes not shown because the diff is too large
+ 86 - 56
docs/pipeline_usage/tutorials/ocr_pipelines/OCR.md


Changes not shown because the diff is too large
+ 71 - 47
docs/pipeline_usage/tutorials/ocr_pipelines/OCR_en.md


+ 3 - 3
docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.md

@@ -41,7 +41,7 @@ PaddleX supports experiencing the formula recognition pipeline locally via the command line or
 ### 2.1 Command Line Experience
 A single command lets you quickly experience the formula recognition pipeline: use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/demo_image/general_formula_recognition.png), replacing `--input` with a local path, to run the prediction
 
-```
+```bash
 paddlex --pipeline formula_recognition --input general_formula_recognition.png --device gpu:0
 ```
 Parameter description:
@@ -68,8 +68,8 @@ paddlex --get_pipeline_config formula_recognition --save_path ./my_path
 
 After obtaining the pipeline configuration file, replace `--pipeline` with the configuration file's save path to make it take effect. For example, if the configuration file is saved at `./formula_recognition.yaml`, simply run:
 
-```
-paddlex --pipeline ./formula_recognition.yaml --input general_formula_recognition.png
+```bash
+paddlex --pipeline ./formula_recognition.yaml --input general_formula_recognition.png --device gpu:0
 ```
 Parameters such as `--model` and `--device` do not need to be specified here; the values in the configuration file will be used. If they are specified anyway, the specified values take precedence.
 

+ 2 - 2
docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition_en.md

@@ -69,8 +69,8 @@ paddlex --get_pipeline_config formula_recognition --save_path ./my_path
 ```
 
 After obtaining the Pipeline configuration file, replace `--pipeline` with the configuration file's save path to make the configuration file effective. For example, if the configuration file is saved as  `./formula_recognition.yaml`, simply execute:
-```
-paddlex --pipeline ./formula_recognition.yaml --input general_formula_recognition.png
+```bash
+paddlex --pipeline ./formula_recognition.yaml --input general_formula_recognition.png --device gpu:0
 ```
 Here, parameters such as `--model` and `--device` do not need to be specified, as they will use the parameters in the configuration file. If parameters are still specified, the specified parameters will take precedence.
 

+ 344 - 0
docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.md

@@ -0,0 +1,344 @@
+简体中文 | [English](seal_recognition_en.md)
+
+# Seal Text Recognition Pipeline Usage Tutorial
+
+## 1. Introduction to the Seal Text Recognition Pipeline
+Seal text recognition is a technology that automatically extracts and recognizes seal content from documents or images. Recognizing seal text is part of document processing and is useful in many scenarios, such as contract comparison, warehouse entry/exit review, and invoice reimbursement review.
+
+
+![](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/practical_tutorial/PP-ChatOCRv3_doc_seal/01.png)
+
+
+The **Seal Text Recognition** pipeline includes a layout area analysis module, a seal text detection module, and a text recognition module.
+
+**If you prioritize model accuracy, choose a model with higher accuracy; if you prioritize inference speed, choose a model with faster inference; if you prioritize storage size, choose a model with a smaller storage footprint.**
+
+<details>
+   <summary> 👉 Detailed Model List</summary>
+
+
+
+**Layout Area Analysis Module Models:**
+
+|Model Name|mAP (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|
+|-|-|-|-|-|
+|PicoDet-L_layout_3cls|89.3|15.7425|159.771|22.6 M|
+|RT-DETR-H_layout_3cls|95.9|114.644|3832.62|470.1M|
+|RT-DETR-H_layout_17cls|92.6|115.126|3827.25|470.2M|
+
+**Note: The evaluation set for the above accuracy metrics is PaddleX's self-built layout area analysis dataset, containing 10,000 images. GPU inference times for all models are based on an NVIDIA Tesla T4 machine with FP32 precision; CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
+
+**Seal Text Detection Module Models:**
+
+|Model|Detection Hmean (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|Description|
+|-|-|-|-|-|-|
+|PP-OCRv4_server_seal_det|98.21|84.341|2425.06|109|The server-side seal text detection model of PP-OCRv4, with higher accuracy, suitable for deployment on more capable servers|
+|PP-OCRv4_mobile_seal_det|96.47|10.5878|131.813|4.6|The mobile-side seal text detection model of PP-OCRv4, more efficient and suitable for edge-side deployment|
+
+**Note: The evaluation set for the above accuracy metrics is a self-built dataset containing 500 circular seal images. GPU inference times are based on an NVIDIA Tesla T4 machine with FP32 precision; CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
+
+**Text Recognition Module Models:**
+
+|Model Name|Recognition Avg Accuracy (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|
+|-|-|-|-|-|
+|PP-OCRv4_mobile_rec |78.20|7.95018|46.7868|10.6 M|
+|PP-OCRv4_server_rec |79.20|7.19439|140.179|71.2 M|
+
+**Note: The evaluation set for the above accuracy metrics is PaddleOCR's self-built Chinese dataset, covering street view, web images, documents, and handwriting, with 11,000 images for text recognition. GPU inference times for all models are based on an NVIDIA Tesla T4 machine with FP32 precision; CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
+
+</details>
+
+## 2. Quick Start
+All pretrained model pipelines provided by PaddleX can be experienced quickly. You can experience the seal text recognition pipeline online, or locally via the command line or Python.
+
+### 2.1 Online Experience
+You can [experience online](https://aistudio.baidu.com/community/app/182491/webUI) the seal text recognition capability of the Document Scene Information Extraction v3 pipeline, recognizing the official demo images, for example:
+
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/seal_recognition/02.png)
+
+If you are satisfied with the pipeline's performance, you can integrate and deploy it directly; if not, you can also use private data to **fine-tune the models in the pipeline online**.
+
+### 2.2 Local Experience
+Before using the seal text recognition pipeline locally, make sure you have installed the PaddleX wheel package following the [PaddleX Local Installation Tutorial](../../../installation/installation.md).
+
+### 2.2.1 Command Line Experience
+A single command lets you quickly experience the seal text recognition pipeline: use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/seal_text_det.png), replacing `--input` with a local path, to run the prediction
+
+```bash
+paddlex --pipeline seal_recognition --input seal_text_det.png --device gpu:0 --save_path ./output
+```
+Parameter description:
+
+```bash
+--pipeline: the pipeline name, here the seal text recognition pipeline
+--input: local path or URL of the input image to be processed
+--device: the GPU index to use (e.g. gpu:0 is GPU 0, and gpu:1,2 are GPUs 1 and 2); the CPU can also be used (--device cpu)
+--save_path: path where the output results are saved
+```
+
+The above command loads the default seal text recognition pipeline configuration file. If you need a custom configuration file, run the following command to obtain one:
+
+<details>
+   <summary> 👉 Click to expand</summary>
+
+```bash
+paddlex --get_pipeline_config seal_recognition
+```
+After execution, the seal text recognition pipeline configuration file will be saved in the current directory. If you want a custom save location, run the following command (assuming the custom location is `./my_path`):
+
+```bash
+paddlex --get_pipeline_config seal_recognition --save_path ./my_path
+```
+
+After obtaining the pipeline configuration file, replace `--pipeline` with the configuration file's save path to make it take effect. For example, if the configuration file is saved at `./seal_recognition.yaml`, simply run:
+
+```bash
+paddlex --pipeline ./seal_recognition.yaml --input seal_text_det.png --save_path ./output
+```
+Parameters such as `--model` and `--device` do not need to be specified here; the values in the configuration file will be used. If they are specified anyway, the specified values take precedence.
+
+</details>
+
+After running, the result is:
+
+<details>
+   <summary> 👉 Click to expand</summary>
+
+```
+{'input_path': 'seal_text_det.png', 'layout_result': {'input_path': 'seal_text_det.png', 'boxes': [{'cls_id': 2, 'label': 'seal', 'score': 0.9813116192817688, 'coordinate': [0, 5.2238655, 639.59766, 637.6985]}]}, 'ocr_result': [{'input_path': PosixPath('/root/.paddlex/temp/tmp19fn93y5.png'), 'dt_polys': [array([[468, 469],
+       [472, 469],
+       [477, 471],
+       [507, 501],
+       [509, 505],
+       [509, 509],
+       [508, 513],
+       [506, 514],
+       [456, 553],
+       [454, 555],
+       [391, 581],
+       [388, 581],
+       [309, 590],
+       [306, 590],
+       [234, 577],
+       [232, 577],
+       [172, 548],
+       [170, 546],
+       [121, 504],
+       [118, 501],
+       [118, 496],
+       [119, 492],
+       [121, 490],
+       [152, 463],
+       [156, 461],
+       [160, 461],
+       [164, 463],
+       [202, 495],
+       [252, 518],
+       [311, 530],
+       [371, 522],
+       [425, 501],
+       [464, 471]]), array([[442, 439],
+       [445, 442],
+       [447, 447],
+       [449, 490],
+       [448, 494],
+       [446, 497],
+       [440, 499],
+       [197, 500],
+       [193, 499],
+       [190, 496],
+       [188, 491],
+       [188, 448],
+       [189, 444],
+       [192, 441],
+       [197, 439],
+       [438, 438]]), array([[465, 341],
+       [470, 344],
+       [472, 346],
+       [476, 356],
+       [476, 419],
+       [475, 424],
+       [472, 428],
+       [467, 431],
+       [462, 433],
+       [175, 434],
+       [170, 433],
+       [166, 430],
+       [163, 426],
+       [161, 420],
+       [161, 354],
+       [162, 349],
+       [165, 345],
+       [170, 342],
+       [175, 340],
+       [460, 340]]), array([[326,  34],
+       [481,  85],
+       [485,  88],
+       [488,  90],
+       [584, 220],
+       [586, 225],
+       [587, 229],
+       [589, 378],
+       [588, 383],
+       [585, 388],
+       [581, 391],
+       [576, 393],
+       [570, 392],
+       [507, 373],
+       [502, 371],
+       [498, 367],
+       [496, 359],
+       [494, 255],
+       [423, 162],
+       [322, 129],
+       [246, 151],
+       [205, 169],
+       [144, 252],
+       [139, 360],
+       [137, 365],
+       [134, 369],
+       [128, 373],
+       [ 66, 391],
+       [ 61, 392],
+       [ 56, 390],
+       [ 51, 387],
+       [ 48, 382],
+       [ 47, 377],
+       [ 49, 230],
+       [ 50, 225],
+       [ 52, 221],
+       [149,  89],
+       [153,  86],
+       [157,  84],
+       [318,  34],
+       [322,  33]])], 'dt_scores': [0.9943362380813267, 0.9994290391836306, 0.9945320407374245, 0.9908104427126033], 'rec_text': ['5263647368706', '吗繁物', '发票专用章', '天津君和缘商贸有限公司'], 'rec_score': [0.9921098351478577, 0.997374951839447, 0.9999369382858276, 0.9901710152626038]}]}
+```
+</details>
+
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/seal_recognition/03.png)
+
+The visualized image is saved in the `output` directory by default; you can also customize the path via `--save_path`.
+
+
+### 2.2.2 Python Script Integration
+A few lines of code complete fast pipeline inference. Taking the seal text recognition pipeline as an example:
+
+```python
+from paddlex import create_pipeline
+
+pipeline = create_pipeline(pipeline="seal_recognition")
+
+output = pipeline.predict("seal_text_det.png")
+for res in output:
+    res.print() ## Print the structured prediction output
+    res.save_to_img("./output/") ## Save the visualized result
+```
+The results obtained are the same as with the command line method.
+
+The above Python script performs the following steps:
+
+(1) `create_pipeline()` instantiates the pipeline object. The parameters are described as follows:
+
+|Parameter|Description|Type|Default|
+|-|-|-|-|
+|`pipeline`|The pipeline name or the path to a pipeline configuration file. If a name, it must be a pipeline supported by PaddleX.|`str`|None|
+|`device`|The inference device for the pipeline models. Supports "gpu" and "cpu".|`str`|`gpu`|
+|`enable_hpi`|Whether to enable high-performance inference; available only if the pipeline supports it.|`bool`|`False`|
+
+(2) The pipeline object's `predict` method is called for inference. Its parameter `x` is the data to be predicted and supports multiple input types, as shown below:
+
+| Parameter Type | Description                                                                                                 |
+|---------------|-----------------------------------------------------------------------------------------------------------|
+| Python Var    | Python variables such as image data represented as a numpy.ndarray.                                        |
+| str           | Path to a local data file, e.g. a local image path: `/root/data/img.jpg`.                                   |
+| str           | URL of a data file, e.g. an image URL: [example](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/seal_text_det.png).|
+| str           | Local directory containing the data files to predict, e.g. `/root/data/`.                                  |
+| dict          | Dictionary whose keys correspond to the task, e.g. "img" for image tasks; values may be any of the types above, e.g. `{"img": "/root/data1"}`.|
+| list          | List whose elements are any of the types above, e.g. `[numpy.ndarray, numpy.ndarray]`, `["/root/data/img1.jpg", "/root/data/img2.jpg"]`, `["/root/data1", "/root/data2"]`, `[{"img": "/root/data1"}, {"img": "/root/data2/img.jpg"}]`.|
+
+(3) The `predict` method returns a generator, so prediction results are obtained by iterating over it. It predicts data batch by batch, so each prediction result is a list representing one batch of results.
+
+(4) The prediction result for each sample is a `dict`, and it supports printing or saving to a file; the supported formats depend on the specific pipeline, for example:
+
+
+| Method       | Description                 | Parameters                                                                                               |
+|--------------|-----------------------------|--------------------------------------------------------------------------------------------------------|
+| print        | Print results to the terminal | `- format_json`: bool, whether to format the output with JSON indentation, default True;<br>`- indent`: int, JSON formatting setting, effective only when format_json is True, default 4;<br>`- ensure_ascii`: bool, JSON formatting setting, effective only when format_json is True, default False |
+| save_to_json | Save results as a JSON file | `- save_path`: str, file save path; if a directory, the saved file is named after the input file;<br>`- indent`: int, JSON formatting setting, default 4;<br>`- ensure_ascii`: bool, JSON formatting setting, default False |
+| save_to_img  | Save results as an image file | `- save_path`: str, file save path; if a directory, the saved file is named after the input file |
+
+If you have obtained the configuration file, you can customize all settings of the seal text recognition pipeline by simply setting the `pipeline` parameter of `create_pipeline` to the configuration file path.
+
+For example, if your configuration file is saved at `./my_path/seal_recognition.yaml`, simply run:
+
+```python
+from paddlex import create_pipeline
+pipeline = create_pipeline(pipeline="./my_path/seal_recognition.yaml")
+output = pipeline.predict("seal_text_det.png")
+for res in output:
+    res.print() ## Print the structured prediction output
+    res.save_to_img("./output/") ## Save the visualized result
+```
+## 3. Development Integration/Deployment
+If the pipeline meets your requirements for inference speed and accuracy, you can proceed directly to development integration/deployment.
+
+To use the pipeline directly in your Python project, refer to the sample code in [2.2.2 Python Script Integration](#222-python-script-integration).
+
+In addition, PaddleX provides three other deployment methods, detailed as follows:
+
+🚀 **High-Performance Deployment**: In real production environments, many applications have stringent performance requirements for deployment (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups; for the detailed workflow, see the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy.md).
+
+☁️ **Service-Based Deployment**: Service-based deployment is a common form of deployment in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. PaddleX enables low-cost service-based deployment of pipelines; for the detailed workflow, see the [PaddleX Service-Based Deployment Guide](../../../pipeline_deploy/service_deploy.md).
+
+Below are the API reference and multi-language service invocation examples:
+
+
+
+📱 **Edge Deployment**: Edge deployment places computation and data processing on the user's device itself, so the device can process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android; for the detailed workflow, see the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+Choose the deployment method that suits your needs, and then proceed with subsequent AI application integration.
+
+## 4. Secondary Development
+If the default model weights provided by the seal text recognition pipeline do not meet your accuracy or speed requirements in your scenario, you can try to further **fine-tune** the existing models using **data from your own domain or application scenario** to improve recognition in your setting.
+
+### 4.1 模型微调
+由于印章文本识别产线包含三个模块,模型产线的效果不及预期可能来自于其中任何一个模块。
+
+您可以对识别效果差的图片进行分析,参考如下规则进行分析和模型微调:
+
+* 印章区域在整体版面中定位错误,那么可能是版面区域定位模块存在不足,您需要参考[版面区域检测模块开发教程](../../../module_usage/tutorials/ocr_modules/layout_detection.md)中的[二次开发](../../../module_usage/tutorials/ocr_modules/layout_detection.md#四二次开发)章节,使用您的私有数据集对版面区域定位模型进行微调。
+* 有较多的文本未被检测出来(即文本漏检现象),那么可能是文本检测模型存在不足,您需要参考[印章文本检测模块开发教程](../../../module_usage/tutorials/ocr_modules/seal_text_detection.md)中的[二次开发](../../../module_usage/tutorials/ocr_modules/seal_text_detection.md#四二次开发)章节,使用您的私有数据集对文本检测模型进行微调。
+* 已检测到的文本中出现较多的识别错误(即识别出的文本内容与实际文本内容不符),这表明文本识别模型需要进一步改进,您需要参考[文本识别模块开发教程](../../../module_usage/tutorials/ocr_modules/text_recognition.md)中的[二次开发](../../../module_usage/tutorials/ocr_modules/text_recognition.md#四二次开发)章节对文本识别模型进行微调。
+
+### 4.2 模型应用
+当您使用私有数据集完成微调训练后,可获得本地模型权重文件。
+
+若您需要使用微调后的模型权重,只需对产线配置文件做修改,将微调后模型权重的本地路径替换至产线配置文件中的对应位置即可:
+
+```yaml
+......
+ Pipeline:
+  layout_model: RT-DETR-H_layout_3cls #可修改为微调后模型的本地路径
+  text_det_model: PP-OCRv4_server_seal_det  #可修改为微调后模型的本地路径
+  text_rec_model: PP-OCRv4_server_rec #可修改为微调后模型的本地路径
+  layout_batch_size: 1
+  text_rec_batch_size: 1
+  device: "gpu:0"
+......
+```
+随后,参考本地体验中的命令行方式或 Python 脚本方式,加载修改后的产线配置文件即可。
+
+##  5. 多硬件支持
+
+PaddleX 支持英伟达 GPU、昆仑芯 XPU、昇腾 NPU和寒武纪 MLU 等多种主流硬件设备,**仅需修改 `--device` 参数**即可完成不同硬件之间的无缝切换。
+
+例如,您使用英伟达 GPU 进行印章文本识别产线的推理,使用的 Python 命令为:
+
+```bash
+paddlex --pipeline seal_recognition --input seal_text_det.png --device gpu:0 --save_path output
+```
+此时,若您想将硬件切换为昇腾 NPU,仅需对 Python 命令中的 `--device` 修改为 npu:0 即可:
+
+```bash
+paddlex --pipeline seal_recognition --input seal_text_det.png --device npu:0 --save_path output
+```
+若您想在更多种类的硬件上使用印章文本识别产线,请参考[PaddleX多硬件使用指南](../../../other_devices_support/installation_other_devices.md)。

+ 354 - 0
docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition_en.md

@@ -0,0 +1,354 @@
+[简体中文](seal_recognition.md) | English
+  
+# Tutorial for Using Seal Text Recognition Pipeline  
+  
+## 1. Introduction to the Seal Text Recognition Pipeline  
+Seal text recognition is a technology that automatically extracts and recognizes seal content from documents or images. The recognition of seal text is part of document processing and has various applications in many scenarios, such as contract comparison, inventory access approval, and invoice reimbursement approval.  
+  
+![](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/practical_tutorial/PP-ChatOCRv3_doc_seal/01.png)  
+  
+The **Seal Text Recognition** pipeline includes a layout area analysis module, a seal text detection module, and a text recognition module.  
+  
+**If you prioritize model accuracy, please choose a model with higher accuracy. If you prioritize inference speed, please choose a model with faster inference. If you prioritize model storage size, please choose a model with a smaller storage footprint.**  
+  
+<details>  
+   <summary> 👉 Detailed Model List </summary>  
+  
+
+**Layout Analysis Module Models:**
+  
+|Model Name|mAP (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|
+|-|-|-|-|-|
+|PicoDet-L_layout_3cls|89.3|15.7425|159.771|22.6 M|
+|RT-DETR-H_layout_3cls|95.9|114.644|3832.62|470.1M|
+|RT-DETR-H_layout_17cls|92.6|115.126|3827.25|470.2M|
+
+**Note: The evaluation set for the above accuracy indicators is a self-built layout area analysis dataset from PaddleX, containing 10,000 images. The GPU inference time for all models above is based on an NVIDIA Tesla T4 machine with a precision type of FP32. The CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads, and the precision type is also FP32.**
+
+
+**Seal Text Detection Module Models**:
+
+| Model | Detection Hmean (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size (M) | Description |
+|-------|---------------------|-------------------------|-------------------------|--------------|-------------|
+| PP-OCRv4_server_seal_det | 98.21 | 84.341 | 2425.06 | 109 | PP-OCRv4's server-side seal text detection model, featuring higher accuracy, suitable for deployment on better-equipped servers |
+| PP-OCRv4_mobile_seal_det | 96.47 | 10.5878 | 131.813 | 4.6 | PP-OCRv4's mobile seal text detection model, offering higher efficiency, suitable for deployment on edge devices |
+
+**Note: The above accuracy metrics are evaluated on a self-built dataset containing 500 circular seal images. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
+
+**Text Recognition Module Models**:
+
+
+| Model Name | Average Recognition Accuracy (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size (M) |
+|-|-|-|-|-|
+|PP-OCRv4_mobile_rec |78.20|7.95018|46.7868|10.6 M|
+|PP-OCRv4_server_rec |79.20|7.19439|140.179|71.2 M|
+
+**Note: The evaluation set for the above accuracy indicators is a self-built Chinese dataset from PaddleOCR, covering various scenarios such as street scenes, web images, documents, and handwriting. The text recognition subset includes 11,000 images. The GPU inference time for all models above is based on an NVIDIA Tesla T4 machine with a precision type of FP32. The CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads, and the precision type is also FP32.**
+
+</details>  
+
+## 2.  Quick Start
+All pre-trained model pipelines provided by PaddleX can be quickly experienced. You can experience the effect of the seal text recognition pipeline online, or use the command line or Python locally to experience it.
+
+### 2.1 Online Experience
+You can [experience online](https://aistudio.baidu.com/community/app/182491/webUI) the seal text recognition capability within the Document Scene Information Extraction v3 pipeline, using the official demo images for recognition, for example:
+
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/seal_recognition/02.png)
+
+If you are satisfied with the pipeline's performance, you can directly integrate and deploy it. If not, you can also use your private data to **fine-tune the models in the pipeline online**.
+
+### 2.2 Local Experience
+Before using the seal text recognition pipeline locally, please ensure that you have completed the installation of the PaddleX wheel package according to the [PaddleX Local Installation Guide](../../../installation/installation_en.md).
+
+#### 2.2.1 Command Line Experience
+A single command lets you quickly experience the seal text recognition pipeline. Use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/seal_text_det.png), and replace `--input` with the local path to perform prediction:
+
+```bash
+paddlex --pipeline seal_recognition --input seal_text_det.png --device gpu:0 --save_path output
+```
+
+Parameter descriptions:
+
+```
+--pipeline: The pipeline name, here it is the seal text recognition pipeline.
+--input: The local path or URL of the input image to be processed.
+--device: The index of the GPU to use (e.g., gpu:0 means using the first GPU, gpu:1,2 means using the second and third GPUs). You can also choose to use the CPU (--device cpu).
+```
+
+When executing the above command, the default seal text recognition pipeline configuration file is loaded. If you need to customize the configuration file, you can run the following command to obtain it:
+
+<details>
+<summary>  👉 Click to expand</summary>
+
+```bash
+paddlex --get_pipeline_config seal_recognition
+```
+
+After execution, the seal text recognition pipeline configuration file will be saved in the current path. If you want to customize the save location, you can run the following command (assuming the custom save location is `./my_path`):
+
+```bash
+paddlex --get_pipeline_config seal_recognition --save_path ./my_path
+```
+
+After obtaining the pipeline configuration file, you can replace `--pipeline` with the configuration file save path to make the configuration file take effect. For example, if the configuration file save path is `./seal_recognition.yaml`, simply execute:
+
+```bash
+paddlex --pipeline ./seal_recognition.yaml --input seal_text_det.png --save_path output 
+```
+Here, parameters such as `--model` and `--device` do not need to be specified and will use the values in the configuration file. If they are still specified, the specified parameters will take precedence.
+
+</details>
+
+After running, the result obtained is:
+
+<details>
+<summary>  👉 Click to expand</summary>
+
+```
+{'input_path': 'seal_text_det.png', 'layout_result': {'input_path': 'seal_text_det.png', 'boxes': [{'cls_id': 2, 'label': 'seal', 'score': 0.9813116192817688, 'coordinate': [0, 5.2238655, 639.59766, 637.6985]}]}, 'ocr_result': [{'input_path': PosixPath('/root/.paddlex/temp/tmp19fn93y5.png'), 'dt_polys': [array([[468, 469],
+       [472, 469],
+       [477, 471],
+       [507, 501],
+       [509, 505],
+       [509, 509],
+       [508, 513],
+       [506, 514],
+       [456, 553],
+       [454, 555],
+       [391, 581],
+       [388, 581],
+       [309, 590],
+       [306, 590],
+       [234, 577],
+       [232, 577],
+       [172, 548],
+       [170, 546],
+       [121, 504],
+       [118, 501],
+       [118, 496],
+       [119, 492],
+       [121, 490],
+       [152, 463],
+       [156, 461],
+       [160, 461],
+       [164, 463],
+       [202, 495],
+       [252, 518],
+       [311, 530],
+       [371, 522],
+       [425, 501],
+       [464, 471]]), array([[442, 439],
+       [445, 442],
+       [447, 447],
+       [449, 490],
+       [448, 494],
+       [446, 497],
+       [440, 499],
+       [197, 500],
+       [193, 499],
+       [190, 496],
+       [188, 491],
+       [188, 448],
+       [189, 444],
+       [192, 441],
+       [197, 439],
+       [438, 438]]), array([[465, 341],
+       [470, 344],
+       [472, 346],
+       [476, 356],
+       [476, 419],
+       [475, 424],
+       [472, 428],
+       [467, 431],
+       [462, 433],
+       [175, 434],
+       [170, 433],
+       [166, 430],
+       [163, 426],
+       [161, 420],
+       [161, 354],
+       [162, 349],
+       [165, 345],
+       [170, 342],
+       [175, 340],
+       [460, 340]]), array([[326,  34],
+       [481,  85],
+       [485,  88],
+       [488,  90],
+       [584, 220],
+       [586, 225],
+       [587, 229],
+       [589, 378],
+       [588, 383],
+       [585, 388],
+       [581, 391],
+       [576, 393],
+       [570, 392],
+       [507, 373],
+       [502, 371],
+       [498, 367],
+       [496, 359],
+       [494, 255],
+       [423, 162],
+       [322, 129],
+       [246, 151],
+       [205, 169],
+       [144, 252],
+       [139, 360],
+       [137, 365],
+       [134, 369],
+       [128, 373],
+       [ 66, 391],
+       [ 61, 392],
+       [ 56, 390],
+       [ 51, 387],
+       [ 48, 382],
+       [ 47, 377],
+       [ 49, 230],
+       [ 50, 225],
+       [ 52, 221],
+       [149,  89],
+       [153,  86],
+       [157,  84],
+       [318,  34],
+       [322,  33]])], 'dt_scores': [0.9943362380813267, 0.9994290391836306, 0.9945320407374245, 0.9908104427126033], 'rec_text': ['5263647368706', '吗繁物', '发票专用章', '天津君和缘商贸有限公司'], 'rec_score': [0.9921098351478577, 0.997374951839447, 0.9999369382858276, 0.9901710152626038]}]}
+```
+</details>
+
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/seal_recognition/03.png)
+
+The visualized image is not saved by default. You can customize the save path through `--save_path`, and then all results will be saved in the specified path.
+
+
+#### 2.2.2 Python Script Integration
+A few lines of code are enough to complete fast inference with the pipeline. Taking the seal text recognition pipeline as an example:
+
+```python
+from paddlex import create_pipeline
+
+pipeline = create_pipeline(pipeline="seal_recognition")
+
+output = pipeline.predict("seal_text_det.png")
+for res in output:
+    res.print() 
+    res.save_to_img("./output/") # Save the results in img
+```
+
+The result obtained is the same as that of the command line method.
+
+In the above Python script, the following steps are executed:
+
+(1) Instantiate the pipeline object using `create_pipeline`. The specific parameter descriptions are as follows:
+
+| Parameter | Description | Type | Default |
+|-|-|-|-|
+|`pipeline`| The name of the pipeline or the path to the pipeline configuration file. If it is a pipeline name, it must be a pipeline supported by PaddleX. |`str`|None|
+|`device`| The device for pipeline model inference. Supports: "gpu", "cpu". |`str`|`gpu`|
+|`use_hpip`| Whether to enable high-performance inference, only available if the pipeline supports it. |`bool`|`False`|
+
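+For example, a minimal sketch that explicitly sets the inference device when creating the pipeline object (parameter meanings as in the table above; CPU is used here purely for illustration):
+
+```python
+from paddlex import create_pipeline
+
+# Create the seal text recognition pipeline object, explicitly running inference on CPU
+pipeline = create_pipeline(pipeline="seal_recognition", device="cpu")
+```
+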
+(2) Invoke the `predict` method of the pipeline object for inference: the `predict` method takes a parameter `x`, used to input the data to be predicted, and supports multiple input types, as shown in the following examples:
+
+| Parameter Type | Parameter Description |
+|---------------|-----------------------------------------------------------------------------------------------------------|
+| Python Var    | Supports directly passing in Python variables, such as numpy.ndarray representing image data. |
+| str         | Supports passing in the path of the file to be predicted, such as the local path of an image file: `/root/data/img.jpg`. |
+| str           | Supports passing in the URL of the file to be predicted, such as the network URL of an image file: [Example](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/seal_text_det.png). |
+| str           | Supports passing in a local directory, which should contain files to be predicted, such as the local path: `/root/data/`. |
+| dict          | Supports passing in a dictionary type, where the key needs to correspond to a specific task, such as "img" for image classification tasks. The value of the dictionary supports the above types of data, for example: `{"img": "/root/data1"}`. |
+| list          | Supports passing in a list, where the list elements need to be of the above types of data, such as `[numpy.ndarray, numpy.ndarray], ["/root/data/img1.jpg", "/root/data/img2.jpg"], ["/root/data1", "/root/data2"], [{"img": "/root/data1"}, {"img": "/root/data2/img.jpg"}]`. |
+
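+For example, the following sketch demonstrates several of these input types (paths and file names are illustrative only):
+
+```python
+import numpy as np
+from paddlex import create_pipeline
+
+pipeline = create_pipeline(pipeline="seal_recognition")
+
+# Pass a single file path
+output = pipeline.predict("/root/data/img.jpg")
+
+# Pass image data as a numpy.ndarray (an all-zero array is used here for illustration)
+img = np.zeros((640, 640, 3), dtype=np.uint8)
+output = pipeline.predict(img)
+
+# Pass a list to predict multiple images in one call
+output = pipeline.predict(["/root/data/img1.jpg", "/root/data/img2.jpg"])
+```
+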
+(3) Obtain the prediction results by calling the `predict` method: since `predict` is a `generator`, prediction results are obtained through iteration. The `predict` method processes data in batches, so the prediction results are returned as a list.
+
+(4) Process the prediction results: the prediction result for each sample is of `dict` type and supports printing or saving to files, with the supported file types depending on the specific pipeline. For example:
+
+| Method | Description | Method Parameters |
+|--------|-------------|-------------------|
+| print | Print the results to the terminal | `- format_json`: bool, whether to format the output with JSON indentation, default is True;<br>`- indent`: int, JSON formatting setting, effective only when format_json is True, default is 4;<br>`- ensure_ascii`: bool, JSON formatting setting, effective only when format_json is True, default is False; |
+| save_to_json | Save the results as a JSON-format file | `- save_path`: str, the path to save the file; when it is a directory, the saved file is named consistently with the input file;<br>`- indent`: int, JSON formatting setting, default is 4;<br>`- ensure_ascii`: bool, JSON formatting setting, default is False; |
+| save_to_img | Save the results as an image-format file | `- save_path`: str, the path to save the file; when it is a directory, the saved file is named consistently with the input file; |
+
+Where `save_to_img` can save visualization results (including OCR result images, layout analysis result images).
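+
+For example, a minimal sketch that combines the methods above to process prediction results (the save path is illustrative only):
+
+```python
+from paddlex import create_pipeline
+
+pipeline = create_pipeline(pipeline="seal_recognition")
+output = pipeline.predict("seal_text_det.png")
+for res in output:
+    res.print(format_json=True, indent=4)  # Print to the terminal in indented JSON format
+    res.save_to_json("./output/")          # Save the structured result as a JSON file
+    res.save_to_img("./output/")           # Save the visualized image
+```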
+
+If you have a configuration file, you can customize the configuration of the seal text recognition pipeline by simply setting the `pipeline` parameter in the `create_pipeline` method to the path of the pipeline configuration file.
+
+For example, if your configuration file is saved at `./my_path/seal_recognition.yaml`, you only need to execute:
+
+
+```python
+from paddlex import create_pipeline
+pipeline = create_pipeline(pipeline="./my_path/seal_recognition.yaml")
+output = pipeline.predict("seal_text_det.png")
+for res in output:
+    res.print() ## Print the structured prediction output
+    res.save_to_img("./output/") ## Save the visualized result image
+```
+
+## 3. Development Integration/Deployment
+If the pipeline meets your requirements for inference speed and accuracy, you can proceed directly with development integration/deployment.
+
+If you need to apply the pipeline directly in your Python project, you can refer to the example code in [2.2.2 Python Script Integration](#222-python-script-integration).
+
+In addition, PaddleX also provides three other deployment methods, detailed as follows:
+
+🚀 **High-Performance Deployment**: In actual production environments, many applications have strict performance requirements for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed high-performance deployment procedure, please refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
+
+☁️ **Service Deployment**: Service deployment is a common form of deployment in actual production environments. By encapsulating inference as a service, clients can access these services through network requests to obtain inference results. PaddleX supports service deployment of pipelines at low cost. For the detailed service deployment procedure, please refer to the [PaddleX Service Deployment Guide](../../../pipeline_deploy/service_deploy_en.md).
+
+Below are the API reference and multi-language service invocation examples:
+
+
+
+📱 **Edge Deployment**: Edge deployment is a method of placing computing and data processing capabilities on the user's device itself, allowing the device to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For the detailed edge deployment procedure, please refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
+
+You can choose an appropriate method to deploy the model pipeline according to your needs, and then proceed with subsequent AI application integration.
+
+## 4. Secondary Development
+If the default model weights provided by the seal text recognition pipeline are unsatisfactory in terms of accuracy or speed for your scenario, you can try to further **fine-tune** the existing models using **your own data from specific domains or application scenarios** to improve the recognition performance of the seal text recognition pipeline in your scenario.
+
+### 4.1 Model Fine-Tuning
+Since the seal text recognition pipeline consists of three modules, unsatisfactory pipeline performance may stem from any one of them.
+
+You can analyze images with poor recognition performance and refer to the following rules for analysis and model fine-tuning:
+
+* If the seal area is incorrectly located within the overall layout, the layout detection module may be insufficient. You need to refer to the [Customization](../../../module_usage/tutorials/ocr_modules/layout_detection_en.md#customization) section in the [Layout Detection Module Development Tutorial](../../../module_usage/tutorials/ocr_modules/layout_detection_en.md) and use your private dataset to fine-tune the layout detection model.
+* If a significant amount of text is missed (i.e., text miss-detection), the seal text detection model may be insufficient. You need to refer to the [Customization](../../../module_usage/tutorials/ocr_modules/seal_text_detection_en.md#customization) section in the [Seal Text Detection Module Development Tutorial](../../../module_usage/tutorials/ocr_modules/seal_text_detection_en.md) and use your private dataset to fine-tune the seal text detection model.
+
+* If many detected texts contain recognition errors (i.e., the recognized text content does not match the actual text content), the text recognition model requires further improvement. You need to refer to the [Customization](../../../module_usage/tutorials/ocr_modules/text_recognition_en.md#customization) section in the [Text Recognition Module Development Tutorial](../../../module_usage/tutorials/ocr_modules/text_recognition_en.md) to fine-tune the text recognition model.
+
+### 4.2 Model Application
+After completing fine-tuning training with your private dataset, you will obtain a local model weight file.
+
+To use the fine-tuned model weights, simply modify the pipeline configuration file by replacing the corresponding model paths with the local paths of your fine-tuned model weights:
+
+```yaml
+......
+ Pipeline:
+  layout_model: RT-DETR-H_layout_3cls # Can be replaced with the local path of the fine-tuned model
+  text_det_model: PP-OCRv4_server_seal_det  # Can be replaced with the local path of the fine-tuned model
+  text_rec_model: PP-OCRv4_server_rec # Can be replaced with the local path of the fine-tuned model
+  layout_batch_size: 1
+  text_rec_batch_size: 1
+  device: "gpu:0"
+......
+```
+Subsequently, load the modified pipeline configuration file by referring to the command line or Python script method described in the local experience section.
+
+## 5. Multi-Hardware Support
+PaddleX supports a variety of mainstream hardware devices, including NVIDIA GPU, Kunlun XPU, Ascend NPU, and Cambricon MLU. Simply modifying the **`--device`** parameter enables seamless switching between different hardware devices.
+
+For example, if you use an NVIDIA GPU for seal text recognition pipeline inference, the command is:
+
+```bash
+paddlex --pipeline seal_recognition --input seal_text_det.png --device gpu:0 --save_path output
+```
+
+At this point, if you want to switch the hardware to Ascend NPU, simply change `--device` in the command to npu:0:
+
+```bash
+paddlex --pipeline seal_recognition --input seal_text_det.png --device npu:0 --save_path output
+```
+
+If you want to use the seal text recognition pipeline on a wider range of hardware devices, please refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/installation_other_devices_en.md).
+
+
+
+
+
+
+
+

+ 7 - 7
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.md

@@ -92,7 +92,7 @@ PaddleX 所提供的预训练的模型产线均可以快速体验效果,你可
 ### 2.1 命令行方式体验
 一行命令即可快速体验表格识别产线效果,使用 [测试文件](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/table_recognition.jpg),并将 `--input` 替换为本地路径,进行预测
 
-```
+```bash
 paddlex --pipeline table_recognition --input table_recognition.jpg --device gpu:0
 ```
 参数说明:
@@ -119,8 +119,8 @@ paddlex --get_pipeline_config table_recognition --save_path ./my_path
 
 获取产线配置文件后,可将 `--pipeline` 替换为配置文件保存路径,即可使配置文件生效。例如,若配置文件保存路径为 `./table_recognition.yaml`,只需执行:
 
-```
-paddlex --pipeline ./table_recognition.yaml --input table_recognition.jpg
+```bash
+paddlex --pipeline ./table_recognition.yaml --input table_recognition.jpg --device gpu:0
 ```
 其中,`--model`、`--device` 等参数无需指定,将使用配置文件中的参数。若依然指定了参数,将以指定的参数为准。
 
@@ -132,7 +132,7 @@ paddlex --pipeline ./table_recognition.yaml --input table_recognition.jpg
    <summary> 👉点击展开</summary>
 
 ```
-{'input_path': '/root/.paddlex/predict_input/table_recognition.jpg', 'layout_result': {'input_path': '/root/.paddlex/predict_input/table_recognition.jpg', 'boxes': [{'cls_id': 3, 'label': 'Table', 'score': 0.6014542579650879, 'coordinate': [0, 21, 551, 118]}]}, 'ocr_result': {'dt_polys': [array([[37., 40.],
+{'input_path': 'table_recognition.jpg', 'layout_result': {'input_path': 'table_recognition.jpg', 'boxes': [{'cls_id': 3, 'label': 'Table', 'score': 0.6014542579650879, 'coordinate': [0, 21, 551, 118]}]}, 'ocr_result': {'dt_polys': [array([[37., 40.],
        [75., 40.],
        [75., 60.],
        [37., 60.]], dtype=float32), array([[123.,  37.],
@@ -165,7 +165,7 @@ paddlex --pipeline ./table_recognition.yaml --input table_recognition.jpg
        [278., 118.]], dtype=float32), array([[446., 102.],
        [504., 104.],
        [503., 118.],
-       [445., 118.]], dtype=float32)], 'rec_text': ['Dres', '连续工作3', '取出来放在网上,没想', '江、整江等八大', 'Abstr', 'rSrivi', '$709.', 'cludingGiv', '2.72', 'Ingcubic', '$744.78'], 'rec_score': [0.9934158325195312, 0.9990204572677612, 0.9967061877250671, 0.9375461935997009, 0.9947397112846375, 0.9972746968269348, 0.9904290437698364, 0.973427414894104, 0.9983080625534058, 0.993423342704773, 0.9964120984077454], 'input_path': '/root/.paddlex/predict_input/table_recognition.jpg'}, 'table_result': [{'input_path': '/root/.paddlex/predict_input/table_recognition.jpg', 'layout_bbox': [0, 21, 551, 118], 'bbox': array([[  4.395736 ,  25.238262 , 113.31014  ,  25.316246 , 115.454315 ,
+       [445., 118.]], dtype=float32)], 'rec_text': ['Dres', '连续工作3', '取出来放在网上,没想', '江、整江等八大', 'Abstr', 'rSrivi', '$709.', 'cludingGiv', '2.72', 'Ingcubic', '$744.78'], 'rec_score': [0.9934158325195312, 0.9990204572677612, 0.9967061877250671, 0.9375461935997009, 0.9947397112846375, 0.9972746968269348, 0.9904290437698364, 0.973427414894104, 0.9983080625534058, 0.993423342704773, 0.9964120984077454], 'input_path': 'table_recognition.jpg'}, 'table_result': [{'input_path': 'table_recognition.jpg', 'layout_bbox': [0, 21, 551, 118], 'bbox': array([[  4.395736 ,  25.238262 , 113.31014  ,  25.316246 , 115.454315 ,
          71.8867   ,   3.7177477,  71.7937   ],
        [110.727455 ,  25.94007  , 210.07187  ,  26.028755 , 209.66394  ,
          65.96484  , 109.59861  ,  66.09809  ],
@@ -826,12 +826,12 @@ PaddleX 支持英伟达 GPU、昆仑芯 XPU、昇腾 NPU和寒武纪 MLU 等多
 
 例如,您使用英伟达 GPU 进行表格识别产线的推理,使用的 Python 命令为:
 
-```
+```bash
 paddlex --pipeline table_recognition --input table_recognition.jpg --device gpu:0
 ```
 此时,若您想将硬件切换为昇腾 NPU,仅需对 Python 命令中的 `--device` 修改为npu 即可:
 
-```
+```bash
 paddlex --pipeline table_recognition --input table_recognition.jpg --device npu:0
 ```
 若您想在更多种类的硬件上使用通用表格识别产线,请参考[PaddleX多硬件使用指南](../../../other_devices_support/multi_devices_use_guide.md)。

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md

@@ -116,7 +116,7 @@ paddlex --get_pipeline_config table_recognition --save_path ./my_path
 After obtaining the pipeline configuration file, replace `--pipeline` with the configuration file save path to make the configuration file take effect. For example, if the configuration file save path is `./table_recognition.yaml`, simply execute:
 
 ```bash
-paddlex --pipeline ./table_recognition.yaml --input table_recognition.jpg
+paddlex --pipeline ./table_recognition.yaml --input table_recognition.jpg --device gpu:0
 ```
 
 Here, parameters like `--model` and `--device` do not need to be specified, as they will use the parameters in the configuration file. If they are still specified, the specified parameters will take precedence.

+ 6 - 6
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md

@@ -46,7 +46,7 @@ PaddleX 所提供的预训练的模型产线均可以快速体验效果,你可
 #### 2.2.1 命令行方式体验
 一行命令即可快速体验时序异常检测产线效果,使用 [测试文件](https://paddle-model-ecology.bj.bcebos.com/paddlex/ts/demo_ts/ts_ad.csv),并将 `--input` 替换为本地路径,进行预测
 
-```
+```bash
 paddlex --pipeline ts_ad --input ts_ad.csv --device gpu:0
 ```
 参数说明:
@@ -73,8 +73,8 @@ paddlex --get_pipeline_config ts_ad --save_path ./my_path
 
 获取产线配置文件后,可将` --pipeline` 替换为配置文件保存路径,即可使配置文件生效。例如,若配置文件保存路径为 `./ts_ad.yaml`,只需执行:
 
-```
-paddlex --pipeline ./ts_ad.yaml --input ts_ad.cs
+```bash
+paddlex --pipeline ./ts_ad.yaml --input ts_ad.csv --device gpu:0
 ```
 其中,`--model`、`--device` 等参数无需指定,将使用配置文件中的参数。若依然指定了参数,将以指定的参数为准。
 
@@ -83,7 +83,7 @@ paddlex --pipeline ./ts_ad.yaml --input ts_ad.cs
 运行后,得到的结果为:
 
 ```
-{'ts_path': '/root/.paddlex/predict_input/ts_ad.csv', 'anomaly':            label
+{'input_path': 'ts_ad.csv', 'anomaly':            label
 timestamp
 220226         0
 220227         0
@@ -627,12 +627,12 @@ PaddleX 支持英伟达 GPU、昆仑芯 XPU、昇腾 NPU和寒武纪 MLU 等多
 
 例如,您使用英伟达 GPU 进行时序异常检测产线的推理,使用的 Python 命令为:
 
-```
+```bash
 paddlex --pipeline ts_ad --input ts_ad.cs --device gpu:0
 ```
 此时,若您想将硬件切换为昇腾 NPU,仅需对 Python 命令中的` --device` 修改为 npu:0 即可:
 
-```
+```bash
 paddlex --pipeline ts_ad --input ts_ad.cs --device npu:0
 ```
 若您想在更多种类的硬件上使用通用时序异常检测产线,请参考[PaddleX多硬件使用指南](../../../other_devices_support/multi_devices_use_guide.md)。

+ 4 - 4
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection_en.md

@@ -5,7 +5,7 @@
 ## 1. Introduction to the General Time Series Anomaly Detection Pipeline
 Time series anomaly detection is a technique for identifying abnormal patterns or behaviors in time series data. It is widely applied in fields such as network security, equipment monitoring, and financial fraud detection. By analyzing normal trends and patterns in historical data, it discovers events that significantly deviate from expected behaviors, such as sudden spikes in network traffic or unusual transaction activities. Time series anomaly detection enable automatic identification of anomalies in data. This technology provides real-time alerts for enterprises and organizations, helping them promptly address potential risks and issues. It plays a crucial role in ensuring system stability and security.
 
-![](/tmp/images/pipelines/time_series/05.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/time_series/05.png)
 
 **The General Time Series Anomaly Detection Pipeline includes a time series anomaly detection module. If you prioritize model accuracy, choose a model with higher precision. If you prioritize inference speed, select a model with faster inference. If you prioritize model storage size, choose a model with a smaller storage footprint.**
 
@@ -30,7 +30,7 @@ The pre-trained model pipelines provided by PaddleX allow for quick experience o
 ### 2.1 Online Experience
 You can [experience online](https://aistudio.baidu.com/community/app/105706/webUI?source=appCenter) the effects of the General Time Series Anomaly Detection Pipeline using the official demo for recognition, for example:
 
-![](/tmp/images/pipelines/time_series/06.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/time_series/06.png)
 
 If you are satisfied with the pipeline's performance, you can directly integrate and deploy it. If not, you can also use your private data to **fine-tune the model within the pipeline online**.
 
@@ -67,7 +67,7 @@ paddlex --get_pipeline_config ts_ad --save_path ./my_path
 After obtaining the pipeline configuration file, you can replace `--pipeline` with the configuration file save path to make the configuration file take effect. For example, if the configuration file save path is `./ts_ad.yaml`, simply execute:
 
 ```bash
-paddlex --pipeline ./ts_ad.yaml --input ts_ad.csv
+paddlex --pipeline ./ts_ad.yaml --input ts_ad.csv --device gpu:0
 ```
 
 Here, parameters such as `--model` and `--device` do not need to be specified, as they will use the parameters in the configuration file. If parameters are still specified, the specified parameters will take precedence.
@@ -77,7 +77,7 @@ Here, parameters such as `--model` and `--device` do not need to be specified, a
 After running, the result obtained is:
 
 ```json
-{'ts_path': '/root/.paddlex/predict_input/ts_ad.csv', 'anomaly':            label
+{'input_path': 'ts_ad.csv', 'anomaly':            label
 timestamp
 220226         0
 220227         0

+ 6 - 6
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md

@@ -39,7 +39,7 @@ PaddleX 所提供的预训练的模型产线均可以快速体验效果,你可
 #### 2.2.1 命令行方式体验
 一行命令即可快速体验时序分类产线效果,使用 [测试文件](https://paddle-model-ecology.bj.bcebos.com/paddlex/ts/demo_ts/ts_cls.csv),并将 `--input` 替换为本地路径,进行预测
 
-```
+```bash
 paddlex --pipeline ts_cls --input ts_cls.csv --device gpu:0
 ```
 参数说明:
@@ -66,8 +66,8 @@ paddlex --get_pipeline_config ts_cls --save_path ./my_path
 
 获取产线配置文件后,可将 `--pipeline` 替换为配置文件保存路径,即可使配置文件生效。例如,若配置文件保存路径为 `./ts_cls.yaml`,只需执行:
 
-```
-paddlex --pipeline ./ts_cls.yaml --input ts_cls.csv
+```bash
+paddlex --pipeline ./ts_cls.yaml --input ts_cls.csv --device gpu:0
 ```
 其中,`--model`、`--device` 等参数无需指定,将使用配置文件中的参数。若依然指定了参数,将以指定的参数为准。
 
@@ -76,7 +76,7 @@ paddlex --pipeline ./ts_cls.yaml --input ts_cls.csv
 运行后,得到的结果为:
 
 ```
-{'ts_path': '/root/.paddlex/predict_input/ts_cls.csv', 'classification':         classid     score
+{'input_path': 'ts_cls.csv', 'classification':         classid     score
 sample
 0             0  0.617688}
 ```
@@ -563,12 +563,12 @@ PaddleX 支持英伟达 GPU、昆仑芯 XPU、昇腾 NPU和寒武纪 MLU 等多
 
 例如,您使用英伟达 GPU 进行时序分类产线的推理,使用的 Python 命令为:
 
-```
+```bash
 paddlex --pipeline ts_cls --input ts_cls.csv --device gpu:0
 ```
 此时,若您想将硬件切换为昇腾 NPU,仅需对 Python 命令中的 `--device` 进行修改即可:
 
-```
+```bash
 paddlex --pipeline ts_cls --input ts_cls.csv --device npu:0
 ```
 若您想在更多种类的硬件上使用通用时序分类产线,请参考[PaddleX多硬件使用指南](../../../other_devices_support/multi_devices_use_guide.md)。

+ 4 - 4
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification_en.md

@@ -5,7 +5,7 @@
 ## 1. Introduction to General Time Series Classification Pipeline
 Time series classification is a technique that categorizes time-series data into predefined classes, widely applied in fields such as behavior recognition and financial trend analysis. By analyzing features that vary over time, it identifies different patterns or events, for example, classifying a speech signal as "greeting" or "request," or categorizing stock price movements as "rising" or "falling." Time series classification typically employs machine learning and deep learning models, effectively capturing temporal dependencies and variation patterns to provide accurate classification labels for data. This technology plays a pivotal role in applications such as intelligent monitoring and market forecasting.
 
-![](/tmp/images/pipelines/time_series/01.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/time_series/01.png)
 
 **The General Time Series Classification Pipeline includes a Time Series Classification module.**
 
@@ -26,7 +26,7 @@ PaddleX provides pre-trained model pipelines that can be quickly experienced. Yo
 ### 2.1 Online Experience
 You can [experience online](https://aistudio.baidu.com/community/app/105707/webUI?source=appCenter) the effects of the General Time Series Classification Pipeline using the official demo for recognition, for example:
 
-![](/tmp/images/pipelines/time_series/02.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/time_series/02.png)
 
 If you are satisfied with the pipeline's performance, you can directly integrate and deploy it. If not, you can also use your private data to **fine-tune the model in the pipeline online**.
 
@@ -68,7 +68,7 @@ paddlex --get_pipeline_config ts_cls --save_path ./my_path
 After obtaining the pipeline configuration file, you can replace `--pipeline` with the configuration file save path to make the configuration file take effect. For example, if the configuration file save path is `./ts_ad.yaml`, simply execute:
 
 ```bash
-paddlex --pipeline ./ts_cls.yaml --input ts_cls.csv
+paddlex --pipeline ./ts_cls.yaml --input ts_cls.csv --device gpu:0
 ```
 
 In this command, parameters such as `--model` and `--device` are not required to be specified, as they will use the parameters defined in the configuration file. If these parameters are specified, the specified values will take precedence.
@@ -78,7 +78,7 @@ In this command, parameters such as `--model` and `--device` are not required to
 After execution, the result is:
 
 ```bash
-{'ts_path': '/root/.paddlex/predict_input/ts_cls.csv', 'classification':         classid     score
+{'input_path': 'ts_cls.csv', 'classification':         classid     score
 sample
 0             0  0.617688}
 ```

+ 6 - 6
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.md

@@ -44,7 +44,7 @@ PaddleX 所提供的预训练的模型产线均可以快速体验效果,你可
 #### 2.2.1 命令行方式体验
 一行命令即可快速体验时序预测产线效果,使用 [测试文件](https://paddle-model-ecology.bj.bcebos.com/paddlex/ts/demo_ts/ts_fc.csv),并将 `--input` 替换为本地路径,进行预测
 
-```
+```bash
 paddlex --pipeline ts_fc --input ts_fc.csv --device gpu:0
 ```
 参数说明:
@@ -71,8 +71,8 @@ paddlex --get_pipeline_config ts_fc --save_path ./my_path
 
 获取产线配置文件后,可将 `--pipeline` 替换为配置文件保存路径,即可使配置文件生效。例如,若配置文件保存路径为 `./ts_fc.yaml`,只需执行:
 
-```
-paddlex --pipeline ./ts_fc.yaml --input ts_fc.csv
+```bash
+paddlex --pipeline ./ts_fc.yaml --input ts_fc.csv --device gpu:0
 ```
 其中,`--model`、`--device` 等参数无需指定,将使用配置文件中的参数。若依然指定了参数,将以指定的参数为准。
 
@@ -81,7 +81,7 @@ paddlex --pipeline ./ts_fc.yaml --input ts_fc.csv
 运行后,得到的结果为:
 
 ```
-{'ts_path': '/root/.paddlex/predict_input/ts_fc.csv', 'forecast':                            OT
+{'input_path': 'ts_fc.csv', 'forecast':                            OT
 date
 2018-06-26 20:00:00  9.586131
 2018-06-26 21:00:00  9.379762
@@ -625,12 +625,12 @@ PaddleX 支持英伟达 GPU、昆仑芯 XPU、昇腾 NPU和寒武纪 MLU 等多
 
 例如,您使用英伟达 GPU 进行时序预测产线的推理,使用的 Python 命令为:
 
-```
+```bash
 paddlex --pipeline ts_fc --input ts_fc.csv --device gpu:0
 ```
 此时,若您想将硬件切换为昇腾 NPU,仅需对 Python 命令中的 `--device` 修改为 npu:0 即可:
 
-```
+```bash
 paddlex --pipeline ts_fc --input ts_fc.csv --device npu:0
 ```
 若您想在更多种类的硬件上使用通用时序预测产线,请参考[PaddleX多硬件使用指南](../../../other_devices_support/multi_devices_use_guide.md)。

+ 4 - 4
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting_en.md

@@ -5,7 +5,7 @@
 ## 1. Introduction to the General Time Series Forecasting Pipeline
 Time series forecasting is a technique that utilizes historical data to predict future trends by analyzing the patterns of change in time series data. It is widely applied in fields such as financial markets, weather forecasting, and sales prediction. Time series forecasting often employs statistical methods or deep learning models (e.g., LSTM, ARIMA), capable of handling temporal dependencies in data to provide accurate predictions, assisting decision-makers in better planning and response. This technology plays a crucial role in various industries, including energy management, supply chain optimization, and market analysis.
 
-![](/tmp/images/pipelines/time_series/03.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/time_series/03.png)
 
 **The General Time Series Forecasting Pipeline includes a time series forecasting module. If you prioritize model accuracy, choose a model with higher accuracy. If you prioritize inference speed, select a model with faster inference. If you prioritize model storage size, choose a model with a smaller storage size.**
 
@@ -32,7 +32,7 @@ The pre-trained model pipelines provided by PaddleX allow for quick experience o
 ### 2.1 Online Experience
 You can [experience the General Time Series Forecasting Pipeline online](https://aistudio.baidu.com/community/app/105706/webUI?source=appCenter) using the demo provided by the official team, for example:
 
-![](/tmp/images/pipelines/time_series/04.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/time_series/04.png)
 
 If you are satisfied with the pipeline's performance, you can directly integrate and deploy it. If not, you can also use your private data to **fine-tune the model within the pipeline online**.
 
@@ -69,7 +69,7 @@ paddlex --get_pipeline_config ts_fc --save_path ./my_path
 After obtaining the pipeline configuration file, you can replace `--pipeline` with the configuration file save path to make the configuration file take effect. For example, if the configuration file save path is `./ts_fc.yaml`, simply execute:
 
 ```bash
-paddlex --pipeline ./ts_fc.yaml --input ts_fc.csv
+paddlex --pipeline ./ts_fc.yaml --input ts_fc.csv --device gpu:0
 ```
 
 Here, parameters such as `--model` and `--device` do not need to be specified, as they will use the parameters in the configuration file. If parameters are still specified, the specified parameters will take precedence.
@@ -79,7 +79,7 @@ Here, parameters such as `--model` and `--device` do not need to be specified, a
 After running, the result is:
 
 ```bash
-{'ts_path': '/root/.paddlex/predict_input/ts_fc.csv', 'forecast':                            OT
+{'input_path': 'ts_fc.csv', 'forecast':                            OT
 date
 2018-06-26 20:00:00  9.586131
 2018-06-26 21:00:00  9.379762

+ 438 - 0
docs/practical_tutorials/document_scene_information_extraction(layout_detection)_tutorial.md

@@ -0,0 +1,438 @@
+简体中文 | [English](document_scene_information_extraction(layout_detection)_tutorial_en.md)
+
+# PaddleX 3.0 文档场景信息抽取v3(PP-ChatOCRv3_doc) -- 论文文献信息抽取教程
+
+
+PaddleX 提供了丰富的模型产线,模型产线由一个或多个模型组合实现,每个模型产线都能够解决特定的场景任务问题。PaddleX 所提供的模型产线均支持快速体验,如果效果不及预期,也同样支持使用私有数据微调模型,并且 PaddleX 提供了 Python API,方便将产线集成到个人项目中。在使用之前,您首先需要安装 PaddleX,安装方式请参考 [PaddleX本地安装教程](../installation/installation.md)。此处以一个论文文献的文档场景信息抽取任务为例子,介绍该产线在实际场景中的使用流程。
+
+
+## 1. 选择产线
+
+文档信息抽取是文档处理的一部分,在众多场景中都有着广泛的应用,例如学术研究、图书馆管理、科技情报分析、文献综述撰写等场景。通过文档信息抽取技术,我们可以从论文文献中自动提取出标题、作者、摘要、关键词、发表年份、期刊名称、引用信息等关键信息,并以结构化的形式存储,便于后续的检索、分析与应用。这不仅提升了科研人员的工作效率,也为学术研究的深入发展提供了强有力的支持。
+
+
+首先,需要根据任务场景,选择对应的 PaddleX 产线。本节以论文文献的信息抽取为例,介绍如何进行文档场景信息抽取v3产线相关任务的二次开发。如果无法确定任务和产线的对应关系,您可以在 PaddleX 支持的[模型产线列表](../support_list/pipelines_list.md)中了解相关产线的能力介绍。
+
+
+## 2. 快速体验
+
+PaddleX 提供了两种体验的方式,你可以在线体验文档场景信息抽取v3产线的效果,也可以在本地使用  Python 体验文档场景信息抽取v3产线的效果。
+
+### 2.1 在线体验
+
+您可以在AI Studio 星河社区体验文档场景信息抽取v3产线的效果,点击链接下载 [论文文献测试文件](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/practical_tutorial/PP-ChatOCRv3_doc_layout/test.jpg),上传至[官方文档场景信息抽取v3 应用](https://aistudio.baidu.com/community/app/182491/webUI?source=appCenter) 体验抽取效果。如下:
+
+![](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/practical_tutorial/PP-ChatOCRv3_doc_layout/06.png)
+
+
+### 2.2 本地体验
+
+在本地使用文档场景信息抽取v3产线前,请确保您已经按照[PaddleX本地安装教程](../installation/installation.md)完成了PaddleX的wheel包安装。几行代码即可完成产线的快速推理:
+
+
+```python
+from paddlex import create_pipeline
+
+pipeline = create_pipeline(
+    pipeline="PP-ChatOCRv3-doc",
+    llm_name="ernie-3.5",
+    llm_params={"api_type": "qianfan", "ak": "", "sk": ""} # 请填入您的ak与sk,否则无法调用大模型
+    )
+
+visual_result, visual_info = pipeline.visual_predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/practical_tutorial/PP-ChatOCRv3_doc_layout/test.jpg")
+
+for res in visual_result:
+    res.save_to_img("./output")
+    res.save_to_html('./output')
+    res.save_to_xlsx('./output')
+
+vector = pipeline.build_vector(visual_info=visual_info)
+chat_result = pipeline.chat(
+    key_list=["页眉", "图表标题"],
+    visual_info=visual_info,
+    vector=vector,
+    )
+chat_result.print()
+```
+
+**注**:请先在[百度云千帆平台](https://console.bce.baidu.com/qianfan/ais/console/onlineService)获取自己的ak与sk(详细流程请参考[AK和SK鉴权调用API流程](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Hlwerugt8)),将ak与sk填入至指定位置后才能正常调用大模型。
+
+
+输出打印的结果如下:
+
+```
+The result has been saved in output/tmpfnss9sq9_layout.jpg.
+The result has been saved in output/tmpfnss9sq9_ocr.jpg.
+The result has been saved in output/tmpfnss9sq9_table.jpg.
+The result has been saved in output/tmpfnss9sq9_table.jpg.
+The result has been saved in output/tmpfnss9sq9/tmpfnss9sq9.html.
+The result has been saved in output/tmpfnss9sq9/tmpfnss9sq9.html.
+The result has been saved in output/tmpfnss9sq9/tmpfnss9sq9.xlsx.
+The result has been saved in output/tmpfnss9sq9/tmpfnss9sq9.xlsx.
+
+{'chat_res': {'页眉': '未知', '图表标题': '未知'}, 'prompt': ''}
+
+```
+
+在`output` 目录中,保存了版面区域检测、OCR、表格识别可视化结果以及表格html和xlsx结果。
+
+其中版面区域定位结果可视化如下:
+
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/practical_tutorials/PP-ChatOCRv3_doc/layout_detection_01.png)
+
+
+通过上面的文档场景信息抽取的在线体验可以进行 Badcase 分析,发现文档场景信息抽取产线的官方模型存在下面的问题:由于官方模型目前只区分了图、表格和印章三个类别,因此目前无法准确地定位并抽取出页眉和表格标题等其他信息,在`{'chat_res': {'页眉': '未知', '图表标题': '未知'}, 'prompt': ''}`中的结果是未知。因此,本节工作聚焦于论文文献的场景,利用论文文档数据集,以页眉和图表标题信息的抽取为例,对文档场景信息抽取产线中的版面分析模型进行微调,从而达到能够精确提取文档中页眉和表格标题信息的能力。
+
+
+
+## 3. 选择模型
+
+PaddleX 提供了 4 个端到端的版面区域定位模型,具体可参考 [模型列表](../support_list/models_list.md),其中版面区域检测模型的 benchmark 如下:
+
+|模型|mAP(0.5)(%)|GPU推理耗时(ms)|CPU推理耗时 (ms)|模型存储大小(M)|介绍|
+|-|-|-|-|-|-|
+|PicoDet_layout_1x|86.8|13.0|91.3|7.4|基于PicoDet-1x在PubLayNet数据集训练的高效率版面区域定位模型,可定位包含文字、标题、表格、图片以及列表这5类区域|
+|PicoDet-L_layout_3cls|89.3|15.7|159.8|22.6|基于PicoDet-L在中英文论文、杂志和研报等场景上自建数据集训练的高效率版面区域定位模型,包含3个类别:表格,图像和印章|
+|RT-DETR-H_layout_3cls|95.9|114.6|3832.6|470.1|基于RT-DETR-H在中英文论文、杂志和研报等场景上自建数据集训练的高精度版面区域定位模型,包含3个类别:表格,图像和印章|
+|RT-DETR-H_layout_17cls|92.6|115.1|3827.2|470.2|基于RT-DETR-H在中英文论文、杂志和研报等场景上自建数据集训练的高精度版面区域定位模型,包含17个版面常见类别,分别是:段落标题、图片、文本、数字、摘要、内容、图表标题、公式、表格、表格标题、参考文献、文档标题、脚注、页眉、算法、页脚、印章|
+
+**注:以上精度指标的评估集是 PaddleOCR 自建的版面区域分析数据集,包含中英文论文、杂志和研报等常见的 1w 张文档类型图片。GPU 推理耗时基于 NVIDIA Tesla T4 机器,精度类型为 FP32, CPU 推理速度基于 Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz,线程数为 8,精度类型为 FP32。**
+
+
+## 4. 数据准备和校验
+### 4.1 数据准备
+
+本教程采用 `论文文献数据集` 作为示例数据集,可通过以下命令获取示例数据集。如果您使用自备的已标注数据集,需要按照 PaddleX 的格式要求对自备数据集进行调整,以满足 PaddleX 的数据格式要求。关于数据格式介绍,您可以参考 [PaddleX 目标检测模块数据标注教程](../data_annotations/cv_modules/object_detection.md)。
+
+数据集获取命令:
+```bash
+cd /path/to/paddlex
+wget https://paddle-model-ecology.bj.bcebos.com/paddlex/data/paperlayout.tar -P ./dataset
+tar -xf ./dataset/paperlayout.tar -C ./dataset/
+```
+
+### 4.2 数据集校验
+
+在对数据集校验时,只需一行命令:
+
+```bash
+python main.py -c paddlex/configs/structure_analysis/RT-DETR-H_layout_3cls.yaml \
+    -o Global.mode=check_dataset \
+    -o Global.dataset_dir=./dataset/paperlayout/
+```
+
+执行上述命令后,PaddleX 会对数据集进行校验,并统计数据集的基本信息。命令运行成功后会在 log 中打印出 `Check dataset passed !` 信息,同时相关产出会保存在当前目录的 `./output/check_dataset` 目录下,产出目录中包括可视化的示例样本图片和样本分布直方图。校验结果文件保存在 `./output/check_dataset_result.json`,校验结果文件具体内容为
+```
+{
+  "done_flag": true,
+  "check_pass": true,
+  "attributes": {
+    "num_classes": 4,
+    "train_samples": 4734,
+    "train_sample_paths": [
+      "check_dataset\/demo_img\/train_4612.jpg",
+      "check_dataset\/demo_img\/train_4844.jpg",
+      "check_dataset\/demo_img\/train_0084.jpg",
+      "check_dataset\/demo_img\/train_0448.jpg",
+      "check_dataset\/demo_img\/train_4703.jpg",
+      "check_dataset\/demo_img\/train_3572.jpg",
+      "check_dataset\/demo_img\/train_4516.jpg",
+      "check_dataset\/demo_img\/train_2836.jpg",
+      "check_dataset\/demo_img\/train_1353.jpg",
+      "check_dataset\/demo_img\/train_0225.jpg"
+    ],
+    "val_samples": 928,
+    "val_sample_paths": [
+      "check_dataset\/demo_img\/val_0982.jpg",
+      "check_dataset\/demo_img\/val_0607.jpg",
+      "check_dataset\/demo_img\/val_0623.jpg",
+      "check_dataset\/demo_img\/val_0890.jpg",
+      "check_dataset\/demo_img\/val_0036.jpg",
+      "check_dataset\/demo_img\/val_0654.jpg",
+      "check_dataset\/demo_img\/val_0895.jpg",
+      "check_dataset\/demo_img\/val_0059.jpg",
+      "check_dataset\/demo_img\/val_0142.jpg",
+      "check_dataset\/demo_img\/val_0088.jpg"
+    ]
+  },
+  "analysis": {
+    "histogram": "check_dataset\/histogram.png"
+  },
+  "dataset_path": ".\/dataset\/paperlayout\/",
+  "show_type": "image",
+  "dataset_type": "COCODetDataset"
+}
+```
+上述校验结果中,check_pass 为 True 表示数据集格式符合要求,其他部分指标的说明如下:
+
+- attributes.num_classes:该数据集类别数为 4,此处类别数量为后续训练需要传入的类别数量;
+- attributes.train_samples:该数据集训练集样本数量为 4734;
+- attributes.val_samples:该数据集验证集样本数量为 928;
+- attributes.train_sample_paths:该数据集训练集样本可视化图片相对路径列表;
+- attributes.val_sample_paths:该数据集验证集样本可视化图片相对路径列表;
+
+另外,数据集校验还对数据集中所有类别的样本数量分布情况进行了分析,并绘制了分布直方图(histogram.png):
+
+<center>
+
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/practical_tutorials/PP-ChatOCRv3_doc/layout_detection_02.png" width=600>
+
+</center>
+
+**注**:只有通过数据校验的数据才可以训练和评估。
+
+
+### 4.3 数据集划分(非必选)
+
+如需对数据集格式进行转换或是重新划分数据集,可通过修改配置文件或是追加超参数的方式进行设置。
+
+数据集校验相关的参数可以通过修改配置文件中 `CheckDataset` 下的字段进行设置,配置文件中部分参数的示例说明如下:
+
+* `CheckDataset`:
+    * `split`:
+        * `enable`: 是否进行重新划分数据集,为 `True` 时进行数据集重新划分,默认为 `False`;
+        * `train_percent`: 如果重新划分数据集,则需要设置训练集的百分比,类型为 0-100 之间的任意整数,需要保证和 `val_percent` 值加和为 100;
+        * `val_percent`: 如果重新划分数据集,则需要设置验证集的百分比,类型为 0-100 之间的任意整数,需要保证和 `train_percent` 值加和为 100;
+
+数据划分时,原有标注文件会被在原路径下重命名为 `xxx.bak`,以上参数同样支持通过追加命令行参数的方式进行设置,例如重新划分数据集并设置训练集与验证集比例:`-o CheckDataset.split.enable=True -o CheckDataset.split.train_percent=80 -o CheckDataset.split.val_percent=20`。
+
+
+## 5. 模型训练和评估
+### 5.1 模型训练
+
+在训练之前,请确保您已经对数据集进行了校验。完成 PaddleX 模型的训练,只需如下一条命令:
+
+```bash
+python main.py -c paddlex/configs/structure_analysis/RT-DETR-H_layout_3cls.yaml \
+    -o Global.mode=train \
+    -o Global.dataset_dir=./dataset/paperlayout \
+    -o Train.num_classes=4
+```
+
+在 PaddleX 中模型训练支持:修改训练超参数、单机单卡/多卡训练等功能,只需修改配置文件或追加命令行参数。
+
+PaddleX 中每个模型都提供了模型开发的配置文件,用于设置相关参数。模型训练相关的参数可以通过修改配置文件中 `Train` 下的字段进行设置,配置文件中部分参数的示例说明如下:
+
+* `Global`:
+    * `mode`:模式,支持数据校验(`check_dataset`)、模型训练(`train`)、模型评估(`evaluate`);
+    * `device`:训练设备,可选`cpu`、`gpu`、`xpu`、`npu`、`mlu`,除 cpu 外,多卡训练可指定卡号,如:`gpu:0,1,2,3`;
+* `Train`:训练超参数设置;
+    * `epochs_iters`:训练轮次数设置;
+    * `learning_rate`:训练学习率设置;
+
+更多超参数介绍,请参考 [PaddleX 通用模型配置文件参数说明](../module_usage/instructions/config_parameters_common.md)。
+
+**注:**
+- 以上参数可以通过追加命令行参数的形式进行设置,如指定模式为模型训练:`-o Global.mode=train`;指定前 2 卡 gpu 训练:`-o Global.device=gpu:0,1`;设置训练轮次数为 10:`-o Train.epochs_iters=10`。
+- 模型训练过程中,PaddleX 会自动保存模型权重文件,默认为`output`,如需指定保存路径,可通过追加命令行参数 `-o Global.output` 进行设置。
+- PaddleX 对您屏蔽了动态图权重和静态图权重的概念。在模型训练的过程中,会同时产出动态图和静态图的权重,在模型推理时,默认选择静态图权重推理。
+
+**训练产出解释:**
+
+在完成模型训练后,所有产出保存在指定的输出目录(默认为`./output/`)下,通常有以下产出:
+
+* train_result.json:训练结果记录文件,记录了训练任务是否正常完成,以及产出的权重指标、相关文件路径等;
+* train.log:训练日志文件,记录了训练过程中的模型指标变化、loss 变化等;
+* config.yaml:训练配置文件,记录了本次训练的超参数的配置;
+* .pdparams、.pdopt、.pdstates、.pdiparams、.pdmodel:模型权重相关文件,包括网络参数、优化器、静态图网络参数、静态图网络结构等;
+
+
+### 5.2 模型评估
+
+在完成模型训练后,可以对指定的模型权重文件在验证集上进行评估,验证模型精度。使用 PaddleX 进行模型评估,只需一行命令:
+
+```bash
+python main.py -c paddlex/configs/structure_analysis/RT-DETR-H_layout_3cls.yaml \
+    -o Global.mode=evaluate \
+    -o Global.dataset_dir=./dataset/paperlayout
+```
+
+与模型训练类似,模型评估支持修改配置文件或追加命令行参数的方式设置。
+
+**注:** 在模型评估时,需要指定模型权重文件路径,每个配置文件中都内置了默认的权重保存路径,如需要改变,只需要通过追加命令行参数的形式进行设置即可,如`-o Evaluate.weight_path=./output/best_model/best_model.pdparams`。
+
+### 5.3 模型调优
+
+在学习了模型训练和评估后,我们可以通过调整超参数来提升模型的精度。通过合理调整训练轮数,您可以控制模型的训练深度,避免过拟合或欠拟合;而学习率的设置则关乎模型收敛的速度和稳定性。因此,在优化模型性能时,务必审慎考虑这两个参数的取值,并根据实际情况进行灵活调整,以获得最佳的训练效果。
+
+推荐在调试参数时遵循控制变量法:
+1. 首先固定训练轮次为 30,批大小为 4。
+2. 基于 RT-DETR-H_layout_3cls 模型启动四个实验,学习率分别为:0.001,0.0005,0.0001,0.00001。
+3. 可以发现实验二(学习率为 0.0001)的精度最高;同时观察验证集分数,发现精度在最后几轮仍在上涨。因此可以将训练轮次提升为 50、100,模型精度会有进一步的提升。
+
+学习率探寻实验结果:
+
+<center>
+
+| 实验ID           | 学习率 | mAP@0\.5|
+| --------------- | ------------- | -------------------- |
+| 1 | 0.00001     | 88.90        |
+| **2** | **0.0001**   | **92.41**      |
+| 3 | 0.0005       | 92.27    |
+| 4 | 0.001     | 90.66      | 
+
+</center>
+
+接下来,我们可以在学习率设置为 0.0001 的基础上,增加训练轮次,对比下面实验 [2,4,5] 可知,训练轮次增大,模型精度有了进一步的提升。
+
+<center>
+
+
+| 实验ID           | 训练轮次 |  mAP@0\.5| 
+| --------------- | ------------- | -------------------- |
+| 2 | 30    |92.41   |
+| 4 | 50    |92.63   |
+| **5**  | **100**   | **92.88**    |
+
+</center>
+
+**注:本教程为 4 卡教程,如果您只有 1 张 GPU,可通过调整训练卡数完成本次实验,但最终指标未必和上述指标完全对齐,属正常情况。**
+
+在选择训练环境时,要考虑训练卡数和总 batch_size,以及学习率的关系。首先训练卡数乘以单卡 batch_size 等于总 batch_size。其次,总 batch_size 和学习率是相关的,学习率应与总 batch_size 保持同步调整。 目前默认学习率对应基于 4 卡训练的总 batch_size,若您打算在单卡环境下进行训练,则设置学习率时需相应除以 4。若您打算在 8 卡环境下进行训练,则设置学习率时需相应乘以 2。
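+
+例如,按上述线性缩放规则推算(仅为示例):假设默认配置为 4 卡、单卡 batch_size 为 4,即总 batch_size 为 16,对应学习率 0.0001;改为单卡训练时总 batch_size 变为 4,学习率可相应调整为 0.0001 / 4 = 0.000025;改为 8 卡训练时总 batch_size 变为 32,学习率可相应调整为 0.0001 × 2 = 0.0002。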
+
+调整不同参数执行训练的命令可以参考:
+
+```bash
+python main.py -c paddlex/configs/structure_analysis/RT-DETR-H_layout_3cls.yaml \
+    -o Global.mode=train \
+    -o Global.dataset_dir=./dataset/paperlayout \
+    -o Train.num_classes=4 \
+    -o Train.learning_rate=0.0001 \
+    -o Train.epochs_iters=30 \
+    -o Train.batch_size=4
+```
+
+### 5.4 模型测试
+
+可以将微调后的单模型进行测试,使用 [测试文件](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/practical_tutorial/PP-ChatOCRv3_doc_layout/test.jpg),进行预测:
+
+```bash
+python main.py -c paddlex/configs/structure_analysis/RT-DETR-H_layout_3cls.yaml \
+    -o Global.mode=predict \
+    -o Predict.model_dir="output/best_model/inference" \
+    -o Predict.input="https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/practical_tutorial/PP-ChatOCRv3_doc_layout/test.jpg"
+```
+
+通过上述命令,可在`./output`下生成预测结果,其中`test.jpg`的预测结果如下:
+
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/practical_tutorials/PP-ChatOCRv3_doc/layout_detection_03.png)
+
+
+## 6. 产线测试
+
+将产线中的模型替换为微调后的模型进行测试,使用 [论文文献测试文件](https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/practical_tutorial/PP-ChatOCRv3_doc_layout/test.jpg),进行预测:
+
+首先获取并更新文档场景信息抽取v3的配置文件,执行下面的命令获取配置文件(假设自定义保存位置为 `./my_path`):
+
+```bash
+paddlex --get_pipeline_config PP-ChatOCRv3-doc --save_path ./my_path
+```
+
+将`PP-ChatOCRv3-doc.yaml`中的`Pipeline.layout_model`字段修改为上面微调后的模型路径,修改后配置如下:
+
+```yaml
+Pipeline:
+  layout_model: ./output/best_model/inference
+  table_model: SLANet_plus
+  text_det_model: PP-OCRv4_server_det
+  text_rec_model: PP-OCRv4_server_rec
+  seal_text_det_model: PP-OCRv4_server_seal_det
+  doc_image_ori_cls_model: null
+  doc_image_unwarp_model: null
+  llm_name: "ernie-3.5"
+  llm_params:
+    api_type: qianfan
+    ak: 
+    sk:
+```
+
+修改后,只需要修改 `create_pipeline` 方法中的 `pipeline` 参数值为产线配置文件路径即可应用配置。
+
+```python
+from paddlex import create_pipeline
+
+pipeline = create_pipeline(
+    pipeline="./my_path/PP-ChatOCRv3-doc.yaml",
+    llm_name="ernie-3.5",
+    llm_params={"api_type": "qianfan", "ak": "", "sk": ""} # 请填入您的ak与sk,否则无法调用大模型
+    )
+
+visual_result, visual_info = pipeline.visual_predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/practical_tutorial/PP-ChatOCRv3_doc_layout/test.jpg")
+
+for res in visual_result:
+    res.save_to_img("./output_ft")
+    res.save_to_html('./output_ft')
+    res.save_to_xlsx('./output_ft')
+
+vector = pipeline.build_vector(visual_info=visual_info)
+chat_result = pipeline.chat(
+    key_list=["页眉", "表格标题"],
+    visual_info=visual_info,
+    vector=vector,
+    )
+chat_result.print()
+```
+
+通过上述命令,可在`./output_ft`下生成预测结果,打印的关键信息抽取结果如下:
+
+
+```
+{'chat_res': {'页眉': '第43卷\n 航空发动机\n 44', '表格标题': '表1模拟来流Ma=5飞行的空气加热器工作参数'}, 'prompt': ''}
+```
+可以发现,在模型微调之后,关键信息已经被正确地提取出来。
+
+版面的可视化结果如下,已经正确增加了页眉和表格标题的区域定位能力:
+
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/practical_tutorials/PP-ChatOCRv3_doc/layout_detection_04.png)
+
+
+## 7. 开发集成/部署
+
+如果文档场景信息抽取v3产线可以达到您对产线推理速度和精度的要求,您可以直接进行开发集成/部署。
+
+1. 直接将训练好的模型产线应用在您的 Python 项目中,如下面代码所示:
+
+```python
+from paddlex import create_pipeline
+
+pipeline = create_pipeline(
+    pipeline="./my_path/PP-ChatOCRv3-doc.yaml",
+    llm_name="ernie-3.5",
+    llm_params={"api_type": "qianfan", "ak": "", "sk": ""} # 请填入您的ak与sk,否则无法调用大模型
+    )
+
+visual_result, visual_info = pipeline.visual_predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/PaddleX3.0/doc_images/practical_tutorial/PP-ChatOCRv3_doc_layout/test.jpg")
+
+for res in visual_result:
+    res.save_to_img("./output")
+    res.save_to_html('./output')
+    res.save_to_xlsx('./output')
+
+vector = pipeline.build_vector(visual_info=visual_info)
+chat_result = pipeline.chat(
+    key_list=["页眉", "图表标题"],
+    visual_info=visual_info,
+    vector=vector,
+    )
+chat_result.print()
+```
+
+更多参数请参考 [文档场景信息抽取v3产线使用教程](../pipeline_usage/tutorials/cv_pipelines/image_classification.md)。
+
+2. 此外,PaddleX 也提供了其他三种部署方式,详细说明如下:
+
+* 高性能部署:在实际生产环境中,许多应用对部署策略的性能指标(尤其是响应速度)有着较严苛的标准,以确保系统的高效运行与用户体验的流畅性。为此,PaddleX 提供高性能推理插件,旨在对模型推理及前后处理进行深度性能优化,实现端到端流程的显著提速,详细的高性能部署流程请参考 [PaddleX 高性能部署指南](../pipeline_deploy/high_performance_deploy.md)。
+* 服务化部署:服务化部署是实际生产环境中常见的一种部署形式。通过将推理功能封装为服务,客户端可以通过网络请求来访问这些服务,以获取推理结果。PaddleX 支持用户以低成本实现产线的服务化部署,详细的服务化部署流程请参考 [PaddleX 服务化部署指南](../pipeline_deploy/service_deploy.md)。
+* 端侧部署:端侧部署是一种将计算和数据处理功能放在用户设备本身上的方式,设备可以直接处理数据,而不需要依赖远程的服务器。PaddleX 支持将模型部署在 Android 等端侧设备上,详细的端侧部署流程请参考 [PaddleX端侧部署指南](../pipeline_deploy/lite_deploy.md)。
+
+您可以根据需要选择合适的方式部署模型产线,进而进行后续的 AI 应用集成。
+
+
+
+
+
+
+
+
+
+

+ 1 - 1
docs/practical_tutorials/image_classification_garbage_tutorial_en.md

@@ -31,7 +31,7 @@ After trying the pipeline, determine if it meets your expectations (including ac
 
 ## 3. Choosing a Model
 
-PaddleX provides 80 end-to-end image classification models, which can be referenced in the [Model List](../support_list/models_list.md). Some of the benchmarks for these models are as follows:
+PaddleX provides 80 end-to-end image classification models, which can be referenced in the [Model List](../support_list/models_list_en.md). Some of the benchmarks for these models are as follows:
 
 | Model List          | Top-1 Accuracy (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size (M) |
 | ------------------- | ------------------ | ----------------------- | ----------------------- | -------------- |

+ 1 - 1
docs/practical_tutorials/ocr_det_license_tutorial.md

@@ -250,7 +250,7 @@ for res in output:
     res.save_to_img("./output/") # 保存结果可视化图像
     res.save_to_json("./output/") # 保存预测的结构化输出
 ```
-更多参数请参考 [OCR 产线使用教程](../pipeline_usage/tutorials/ocr_pipelies/OCR.md)。
+更多参数请参考 [OCR 产线使用教程](../pipeline_usage/tutorials/ocr_pipelines/OCR.md)。
 
 2. 此外,PaddleX 也提供了其他三种部署方式,详细说明如下:
 

+ 1 - 1
docs/practical_tutorials/ocr_rec_chinese_tutorial.md

@@ -252,7 +252,7 @@ for res in output:
     res.save_to_img("./output/") # 保存结果可视化图像
     res.save_to_json("./output/") # 保存预测的结构化输出
 ```
-更多参数请参考 [OCR 产线使用教程](../pipeline_usage/tutorials/ocr_pipelies/OCR.md)。
+更多参数请参考 [OCR 产线使用教程](../pipeline_usage/tutorials/ocr_pipelines/OCR.md)。
 
 2. 此外,PaddleX 也提供了其他三种部署方式,详细说明如下:
 

+ 7 - 7
docs/practical_tutorials/ts_anomaly_detection_en.md

@@ -39,7 +39,7 @@ PaddleX provides five end-to-end time series anomaly detection models. For detai
 
 To demonstrate the entire process of time series anomaly detection, we will use the publicly available MSL (Mars Science Laboratory) dataset for model training and validation. The MSL dataset, sourced from NASA, comprises 55 dimensions and includes telemetry anomaly data reported by the spacecraft's monitoring system as Incident Surprise Anomaly (ISA) reports. With its practical application background, it better reflects real-world anomaly scenarios and is commonly used to test and validate the performance of time series anomaly detection models. This tutorial will perform anomaly detection based on this dataset.
 
-We have converted the dataset into a standard data format, and you can obtain a sample dataset using the following command. For an introduction to the data format, please refer to the [Time Series Anomaly Detection Module Development Tutorial](../module_usage/tutorials/time_series_modules/time_series_anomaly_detection_en.md).
+We have converted the dataset into a standard data format, and you can obtain a sample dataset using the following command. For an introduction to the data format, please refer to the [Time Series Anomaly Detection Module Development Tutorial](../module_usage/tutorials/ts_modules/time_series_anomaly_detection_en.md).
 
 
 You can use the following commands to download the demo dataset to a specified folder:
@@ -54,7 +54,7 @@ tar -xf ./dataset/msl.tar -C ./dataset/
  * Time series anomaly detection is an unsupervised learning task, thus labeled training data is not required. The collected training samples should ideally consist solely of normal data, i.e., devoid of anomalies, with the label column in the training set set to 0 or, alternatively, the label column can be omitted entirely. For the validation set, to assess accuracy, labeling is necessary. Points that are anomalous at a particular timestamp should have their labels set to 1, while normal points should have labels of 0.
  * Handling Missing Values: To ensure data quality and integrity, missing values can be imputed based on expert knowledge or statistical methods.
  * Non-Repetitiveness: Ensure that data is collected in chronological order by row, with no duplication of timestamps.
-  
+
 ### 4.2 Data Validation
 Data Validation can be completed with just one command:
 
@@ -102,7 +102,7 @@ The above verification results have omitted some data parts. `check_pass` being
 **Note**: Only data that passes the verification can be used for training and evaluation.
 
 ### 4.3 Dataset Format Conversion/Dataset Splitting (Optional)
-If you need to convert the dataset format or re-split the dataset, refer to Section 4.1.3 in the [Time Series Anomaly Detection Module Development Tutorial](../module_usage/tutorials/time_series_modules/time_series_anomaly_detection_en.md).
+If you need to convert the dataset format or re-split the dataset, refer to Section 4.1.3 in the [Time Series Anomaly Detection Module Development Tutorial](../module_usage/tutorials/ts_modules/time_series_anomaly_detection_en.md).
 
 ## 5. Model Training and Evaluation
 ### 5.1 Model Training
@@ -119,7 +119,7 @@ python main.py -c paddlex/configs/ts_anomaly_detection/PatchTST_ad.yaml \
 -o Train.feature_cols=0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54 \
 -o Train.freq=1 \
 -o Train.label_col=label \
--o Train.seq_len=96 
+-o Train.seq_len=96
 ```
 PaddleX supports modifying training hyperparameters and single-machine single-GPU training (time series models only support single-GPU training). Simply modify the configuration file or append command-line parameters.
 
@@ -133,7 +133,7 @@ Each model in PaddleX provides a configuration file for model development to set
   * `learning_rate`: Training learning rate.
   * `batch_size`: Training batch size for a single GPU.
   * `time_col`: Time column, set the column name of the time series dataset's time column based on your data.
-  * `feature_cols`: Feature variables indicating variables related to whether the device is abnormal. 
+  * `feature_cols`: Feature variables indicating variables related to whether the device is abnormal.
   * `freq`: Frequency of the time series dataset.
   * `input_len`: The length of the time series input to the model. The time series will be sliced according to this length, and the model will predict whether there is an anomaly in this segment of the time series for that length. The recommended input length should be considered in the context of the actual scenario. In this tutorial, the input length is 96, which means we hope to predict whether there are anomalies at 96 time points.
   * `label`: Represents the number indicating whether a time point in the time series is abnormal. Anomalous points are labeled as 1, and normal points are labeled as 0. In this tutorial, the anomaly monitoring dataset uses label for this purpose.
@@ -228,8 +228,8 @@ from paddlex import create_pipeline
 pipeline = create_pipeline(pipeline="ts_anomaly_detection")
 output = pipeline.predict("pre_ts.csv")
 for res in output:
-    res.print() 
-    res.save_to_csv("./output/") 
+    res.print()
+    res.save_to_csv("./output/")
 ```
 For more parameters, please refer to the [Time Series Anomaly Detection Pipeline Usage Tutorial](../pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection_en.md).
 

+ 3 - 3
docs/practical_tutorials/ts_classification_en.md

@@ -36,7 +36,7 @@ PaddleX provides a time series classification model. Refer to the [Model List](.
 ### 4.1 Data Preparation
 To demonstrate the entire time series classification process, we will use the public [Heartbeat Dataset](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/ts_classify_examples.tar) for model training and validation. The Heartbeat Dataset is part of the UEA Time Series Classification Archive, addressing the practical task of heartbeat monitoring for medical diagnosis. The dataset comprises multiple time series groups, with each data point consisting of a label variable, group ID, and 61 feature variables. This dataset is commonly used to test and validate the performance of time series classification prediction models.
 
-We have converted the dataset into a standard format, which can be obtained using the following commands. For data format details, refer to the [Time Series Classification Module Development Tutorial](../module_usage/tutorials/time_series_modules/time_series_classification_en.md).
+We have converted the dataset into a standard format, which can be obtained using the following commands. For data format details, refer to the [Time Series Classification Module Development Tutorial](../module_usage/tutorials/ts_modules/time_series_classification_en.md).
 
 Dataset Acquisition Command:
 
@@ -97,7 +97,7 @@ The above verification results have omitted some data parts. `check_pass` being
 **Note**: Only data that passes the verification can be used for training and evaluation.
 
 ### 4.3 Dataset Format Conversion / Dataset Splitting (Optional)
-If you need to convert the dataset format or re-split the dataset, please refer to Section 4.1.3 in the [Time Series Classification Module Development Tutorial](../module_usage/tutorials/time_series_modules/time_series_classification_en.md).
+If you need to convert the dataset format or re-split the dataset, please refer to Section 4.1.3 in the [Time Series Classification Module Development Tutorial](../module_usage/tutorials/ts_modules/time_series_classification_en.md).
 
 ## 5. Model Training and Evaluation
 
@@ -115,7 +115,7 @@ python main.py -c paddlex/configs/ts_classification/TimesNet_cls.yaml \
 -o Train.target_cols=dim_0,dim_1,dim_2 \
 -o Train.freq=1 \
 -o Train.group_id=group_id \
--o Train.static_cov_cols=label 
+-o Train.static_cov_cols=label
 ```
 PaddleX supports modifying training hyperparameters and single-machine single-GPU training (time-series models only support single-GPU training). Simply modify the configuration file or append command-line parameters.
 

+ 5 - 5
docs/practical_tutorials/ts_forecast_en.md

@@ -42,7 +42,7 @@ Based on your actual usage scenario, select an appropriate model for training. A
 ### 4.1 Data Preparation
 To demonstrate the entire time series forecasting process, we will use the [Electricity](https://archive.ics.uci.edu/dataset/321/electricityloaddiagrams20112014) dataset for model training and validation. This dataset collects electricity consumption at a certain node from 2012 to 2014, with data collected every hour. Each data point consists of the current timestamp and corresponding electricity consumption. This dataset is commonly used to test and validate the performance of time series forecasting models.
 
-In this tutorial, we will use this dataset to predict the electricity consumption for the next 96 hours. We have already converted this dataset into a standard data format, and you can obtain a sample dataset by running the following command. For an introduction to the data format, you can refer to the [Time Series Prediction Module Development Tutorial](../module_usage/tutorials/time_series_modules/time_series_forecast_en.md).
+In this tutorial, we will use this dataset to predict the electricity consumption for the next 96 hours. We have already converted this dataset into a standard data format, and you can obtain a sample dataset by running the following command. For an introduction to the data format, you can refer to the [Time Series Prediction Module Development Tutorial](../module_usage/tutorials/ts_modules/time_series_forecast_en.md).
 
 
 You can use the following commands to download the demo dataset to a specified folder:
@@ -176,7 +176,7 @@ After executing the above command, PaddleX will validate the dataset, summarize
   "dataset_path": "./dataset/electricity",
   "show_type": "csv",
   "dataset_type": "TSDataset"
-} 
+}
 ```
 
 The above verification results have omitted some data parts. `check_pass` being True indicates that the dataset format meets the requirements. Explanations for other indicators are as follows:
@@ -190,7 +190,7 @@ The above verification results have omitted some data parts. `check_pass` being
 **Note**: Only data that passes the verification can be used for training and evaluation.
 
 ### 4.3 Dataset Format Conversion/Dataset Splitting (Optional)
-If you need to convert the dataset format or re-split the dataset, you can modify the configuration file or append hyperparameters for settings. Refer to Section 4.1.3 in the [Time Series Prediction Module Development Tutorial](../module_usage/tutorials/time_series_modules/time_series_forecast_en.md).
+If you need to convert the dataset format or re-split the dataset, you can modify the configuration file or append hyperparameters for settings. Refer to Section 4.1.3 in the [Time Series Prediction Module Development Tutorial](../module_usage/tutorials/ts_modules/time_series_forecast_en.md).
 
 ## 5. Model Training and Evaluation
 
@@ -343,8 +343,8 @@ from paddlex import create_pipeline
 pipeline = create_pipeline(pipeline="ts_forecast")
 output = pipeline.predict("pre_ts.csv")
 for res in output:
-    res.print() 
-    res.save_to_csv("./output/") 
+    res.print()
+    res.save_to_csv("./output/")
 ```
 For more parameters, please refer to the [Time Series Forecast Pipeline Usage Tutorial](../pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection_en.md).
 

+ 284 - 284
docs/support_list/models_list.md

@@ -4,370 +4,370 @@
 
 PaddleX 内置了多条产线,每条产线都包含了若干模块,每个模块包含若干模型,具体使用哪些模型,您可以根据下边的 benchmark 数据来选择。如您更考虑模型精度,请选择精度较高的模型,如您更考虑模型推理速度,请选择推理速度较快的模型,如您更考虑模型存储大小,请选择存储大小较小的模型。
 
-## 图像分类模块
-|模型名称|Top1 Acc(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|CLIP_vit_base_patch16_224|85.36|13.1957|285.493|306.5 M|
-|CLIP_vit_large_patch14_224|88.1|51.1284|1131.28|1.04 G|
-|ConvNeXt_base_224|83.84|12.8473|1513.87|313.9 M|
-|ConvNeXt_base_384|84.90|31.7607|3967.05|313.9 M|
-|ConvNeXt_large_224|84.26|26.8103|2463.56|700.7 M|
-|ConvNeXt_large_384|85.27|66.4058|6598.92|700.7 M|
-|ConvNeXt_small|83.13|9.74075|1127.6|178.0 M|
-|ConvNeXt_tiny|82.03|5.48923|672.559|101.4 M|
-|FasterNet-L|83.5|23.4415|-|357.1 M|
-|FasterNet-M|83.0|21.8936|-|204.6 M|
-|FasterNet-S|81.3|13.0409|-|119.3 M|
-|FasterNet-T0|71.9|12.2432|-|15.1 M|
-|FasterNet-T1|75.9|11.3562|-|29.2 M|
-|FasterNet-T2|79.1|10.703|-|57.4 M|
-|MobileNetV1_x0_5|63.5|1.86754|7.48297|4.8 M|
-|MobileNetV1_x0_25|51.4|1.83478|4.83674|1.8 M|
-|MobileNetV1_x0_75|68.8|2.57903|10.6343|9.3 M|
-|MobileNetV1_x1_0|71.0|2.78781|13.98|15.2 M|
-|MobileNetV2_x0_5|65.0|4.94234|11.1629|7.1 M|
-|MobileNetV2_x0_25|53.2|4.50856|9.40991|5.5 M|
-|MobileNetV2_x1_0|72.2|6.12159|16.0442|12.6 M|
-|MobileNetV2_x1_5|74.1|6.28385|22.5129|25.0 M|
-|MobileNetV2_x2_0|75.2|6.12888|30.8612|41.2 M|
-|MobileNetV3_large_x0_5|69.2|6.31302|14.5588|9.6 M|
-|MobileNetV3_large_x0_35|64.3|5.76207|13.9041|7.5 M|
-|MobileNetV3_large_x0_75|73.1|8.41737|16.9506|14.0 M|
-|MobileNetV3_large_x1_0|75.3|8.64112|19.1614|19.5 M|
-|MobileNetV3_large_x1_25|76.4|8.73358|22.1296|26.5 M|
-|MobileNetV3_small_x0_5|59.2|5.16721|11.2688|6.8 M|
-|MobileNetV3_small_x0_35|53.0|5.22053|11.0055|6.0 M|
-|MobileNetV3_small_x0_75|66.0|5.39831|12.8313|8.5 M|
-|MobileNetV3_small_x1_0|68.2|6.00993|12.9598|10.5 M|
-|MobileNetV3_small_x1_25|70.7|6.9589|14.3995|13.0 M|
-|MobileNetV4_conv_large|83.4|12.5485|51.6453|125.2 M|
-|MobileNetV4_conv_medium|79.9|9.65509|26.6157|37.6 M|
-|MobileNetV4_conv_small|74.6|5.24172|11.0893|14.7 M|
-|MobileNetV4_hybrid_large|83.8|20.0726|213.769|145.1 M|
-|MobileNetV4_hybrid_medium|80.5|19.7543|62.2624|42.9 M|
-|PP-HGNet_base|85.0|14.2969|327.114|249.4 M|
-|PP-HGNet_small|81.51|5.50661|119.041|86.5 M|
-|PP-HGNet_tiny|79.83|5.22006|69.396|52.4 M|
-|PP-HGNetV2-B0|77.77|6.53694|23.352|21.4 M|
-|PP-HGNetV2-B1|79.18|6.56034|27.3099|22.6 M|
-|PP-HGNetV2-B2|81.74|9.60494|43.1219|39.9 M|
-|PP-HGNetV2-B3|82.98|11.0042|55.1367|57.9 M|
-|PP-HGNetV2-B4|83.57|9.66407|54.2462|70.4 M|
-|PP-HGNetV2-B5|84.75|15.7091|115.926|140.8 M|
-|PP-HGNetV2-B6|86.30|21.226|255.279|268.4 M|
-|PP-LCNet_x0_5|63.14|3.67722|6.66857|6.7 M|
-|PP-LCNet_x0_25|51.86|2.65341|5.81357|5.5 M|
-|PP-LCNet_x0_35|58.09|2.7212|6.28944|5.9 M|
-|PP-LCNet_x0_75|68.18|3.91032|8.06953|8.4 M|
-|PP-LCNet_x1_0|71.32|3.84845|9.23735|10.5 M|
-|PP-LCNet_x1_5|73.71|3.97666|12.3457|16.0 M|
-|PP-LCNet_x2_0|75.18|4.07556|16.2752|23.2 M|
-|PP-LCNet_x2_5|76.60|4.06028|21.5063|32.1 M|
-|PP-LCNetV2_base|77.05|5.23428|19.6005|23.7 M|
-|PP-LCNetV2_large |78.51|6.78335|30.4378|37.3 M|
-|PP-LCNetV2_small|73.97|3.89762|13.0273|14.6 M|
-|ResNet18_vd|72.3|3.53048|31.3014|41.5 M|
-|ResNet18|71.0|2.4868|27.4601|41.5 M|
-|ResNet34_vd|76.0|5.60675|56.0653|77.3 M|
-|ResNet34|74.6|4.16902|51.925|77.3 M|
-|ResNet50_vd|79.1|10.1885|68.446|90.8 M|
-|ResNet50|76.5|9.62383|64.8135|90.8 M|
-|ResNet101_vd|80.2|20.0563|124.85|158.4 M|
-|ResNet101|77.6|19.2297|121.006|158.7 M|
-|ResNet152_vd|80.6|29.6439|181.678|214.3 M|
-|ResNet152|78.3|30.0461|177.707|214.2 M|
-|ResNet200_vd|80.9|39.1628|235.185|266.0 M|
-|StarNet-S1|73.6|9.895|23.0465|11.2 M|
-|StarNet-S2|74.8|7.91279|21.9571|14.3 M|
-|StarNet-S3|77.0|10.7531|30.7656|22.2 M|
-|StarNet-S4|79.0|15.2868|43.2497|28.9 M|
-|SwinTransformer_base_patch4_window7_224|83.37|16.9848|383.83|310.5 M|
-|SwinTransformer_base_patch4_window12_384|84.17|37.2855|1178.63|311.4 M|
-|SwinTransformer_large_patch4_window7_224|86.19|27.5498|689.729|694.8 M|
-|SwinTransformer_large_patch4_window12_384|87.06|74.1768|2105.22|696.1 M|
-|SwinTransformer_small_patch4_window7_224|83.21|16.3982|285.56|175.6 M|
-|SwinTransformer_tiny_patch4_window7_224|81.10|8.54846|156.306|100.1 M|
+## [图像分类模块](../module_usage/tutorials/cv_modules/image_classification.md)
+|模型名称|Top1 Acc(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|CLIP_vit_base_patch16_224|85.36|13.1957|285.493|306.5 M|[CLIP_vit_base_patch16_224.yaml](../../paddlex/configs/image_classification/CLIP_vit_base_patch16_224.yaml)|
+|CLIP_vit_large_patch14_224|88.1|51.1284|1131.28|1.04 G|[CLIP_vit_large_patch14_224.yaml](../../paddlex/configs/image_classification/CLIP_vit_large_patch14_224.yaml)|
+|ConvNeXt_base_224|83.84|12.8473|1513.87|313.9 M|[ConvNeXt_base_224.yaml](../../paddlex/configs/image_classification/ConvNeXt_base_224.yaml)|
+|ConvNeXt_base_384|84.90|31.7607|3967.05|313.9 M|[ConvNeXt_base_384.yaml](../../paddlex/configs/image_classification/ConvNeXt_base_384.yaml)|
+|ConvNeXt_large_224|84.26|26.8103|2463.56|700.7 M|[ConvNeXt_large_224.yaml](../../paddlex/configs/image_classification/ConvNeXt_large_224.yaml)|
+|ConvNeXt_large_384|85.27|66.4058|6598.92|700.7 M|[ConvNeXt_large_384.yaml](../../paddlex/configs/image_classification/ConvNeXt_large_384.yaml)|
+|ConvNeXt_small|83.13|9.74075|1127.6|178.0 M|[ConvNeXt_small.yaml](../../paddlex/configs/image_classification/ConvNeXt_small.yaml)|
+|ConvNeXt_tiny|82.03|5.48923|672.559|101.4 M|[ConvNeXt_tiny.yaml](../../paddlex/configs/image_classification/ConvNeXt_tiny.yaml)|
+|FasterNet-L|83.5|23.4415|-|357.1 M|[FasterNet-L.yaml](../../paddlex/configs/image_classification/FasterNet-L.yaml)|
+|FasterNet-M|83.0|21.8936|-|204.6 M|[FasterNet-M.yaml](../../paddlex/configs/image_classification/FasterNet-M.yaml)|
+|FasterNet-S|81.3|13.0409|-|119.3 M|[FasterNet-S.yaml](../../paddlex/configs/image_classification/FasterNet-S.yaml)|
+|FasterNet-T0|71.9|12.2432|-|15.1 M|[FasterNet-T0.yaml](../../paddlex/configs/image_classification/FasterNet-T0.yaml)|
+|FasterNet-T1|75.9|11.3562|-|29.2 M|[FasterNet-T1.yaml](../../paddlex/configs/image_classification/FasterNet-T1.yaml)|
+|FasterNet-T2|79.1|10.703|-|57.4 M|[FasterNet-T2.yaml](../../paddlex/configs/image_classification/FasterNet-T2.yaml)|
+|MobileNetV1_x0_5|63.5|1.86754|7.48297|4.8 M|[MobileNetV1_x0_5.yaml](../../paddlex/configs/image_classification/MobileNetV1_x0_5.yaml)|
+|MobileNetV1_x0_25|51.4|1.83478|4.83674|1.8 M|[MobileNetV1_x0_25.yaml](../../paddlex/configs/image_classification/MobileNetV1_x0_25.yaml)|
+|MobileNetV1_x0_75|68.8|2.57903|10.6343|9.3 M|[MobileNetV1_x0_75.yaml](../../paddlex/configs/image_classification/MobileNetV1_x0_75.yaml)|
+|MobileNetV1_x1_0|71.0|2.78781|13.98|15.2 M|[MobileNetV1_x1_0.yaml](../../paddlex/configs/image_classification/MobileNetV1_x1_0.yaml)|
+|MobileNetV2_x0_5|65.0|4.94234|11.1629|7.1 M|[MobileNetV2_x0_5.yaml](../../paddlex/configs/image_classification/MobileNetV2_x0_5.yaml)|
+|MobileNetV2_x0_25|53.2|4.50856|9.40991|5.5 M|[MobileNetV2_x0_25.yaml](../../paddlex/configs/image_classification/MobileNetV2_x0_25.yaml)|
+|MobileNetV2_x1_0|72.2|6.12159|16.0442|12.6 M|[MobileNetV2_x1_0.yaml](../../paddlex/configs/image_classification/MobileNetV2_x1_0.yaml)|
+|MobileNetV2_x1_5|74.1|6.28385|22.5129|25.0 M|[MobileNetV2_x1_5.yaml](../../paddlex/configs/image_classification/MobileNetV2_x1_5.yaml)|
+|MobileNetV2_x2_0|75.2|6.12888|30.8612|41.2 M|[MobileNetV2_x2_0.yaml](../../paddlex/configs/image_classification/MobileNetV2_x2_0.yaml)|
+|MobileNetV3_large_x0_5|69.2|6.31302|14.5588|9.6 M|[MobileNetV3_large_x0_5.yaml](../../paddlex/configs/image_classification/MobileNetV3_large_x0_5.yaml)|
+|MobileNetV3_large_x0_35|64.3|5.76207|13.9041|7.5 M|[MobileNetV3_large_x0_35.yaml](../../paddlex/configs/image_classification/MobileNetV3_large_x0_35.yaml)|
+|MobileNetV3_large_x0_75|73.1|8.41737|16.9506|14.0 M|[MobileNetV3_large_x0_75.yaml](../../paddlex/configs/image_classification/MobileNetV3_large_x0_75.yaml)|
+|MobileNetV3_large_x1_0|75.3|8.64112|19.1614|19.5 M|[MobileNetV3_large_x1_0.yaml](../../paddlex/configs/image_classification/MobileNetV3_large_x1_0.yaml)|
+|MobileNetV3_large_x1_25|76.4|8.73358|22.1296|26.5 M|[MobileNetV3_large_x1_25.yaml](../../paddlex/configs/image_classification/MobileNetV3_large_x1_25.yaml)|
+|MobileNetV3_small_x0_5|59.2|5.16721|11.2688|6.8 M|[MobileNetV3_small_x0_5.yaml](../../paddlex/configs/image_classification/MobileNetV3_small_x0_5.yaml)|
+|MobileNetV3_small_x0_35|53.0|5.22053|11.0055|6.0 M|[MobileNetV3_small_x0_35.yaml](../../paddlex/configs/image_classification/MobileNetV3_small_x0_35.yaml)|
+|MobileNetV3_small_x0_75|66.0|5.39831|12.8313|8.5 M|[MobileNetV3_small_x0_75.yaml](../../paddlex/configs/image_classification/MobileNetV3_small_x0_75.yaml)|
+|MobileNetV3_small_x1_0|68.2|6.00993|12.9598|10.5 M|[MobileNetV3_small_x1_0.yaml](../../paddlex/configs/image_classification/MobileNetV3_small_x1_0.yaml)|
+|MobileNetV3_small_x1_25|70.7|6.9589|14.3995|13.0 M|[MobileNetV3_small_x1_25.yaml](../../paddlex/configs/image_classification/MobileNetV3_small_x1_25.yaml)|
+|MobileNetV4_conv_large|83.4|12.5485|51.6453|125.2 M|[MobileNetV4_conv_large.yaml](../../paddlex/configs/image_classification/MobileNetV4_conv_large.yaml)|
+|MobileNetV4_conv_medium|79.9|9.65509|26.6157|37.6 M|[MobileNetV4_conv_medium.yaml](../../paddlex/configs/image_classification/MobileNetV4_conv_medium.yaml)|
+|MobileNetV4_conv_small|74.6|5.24172|11.0893|14.7 M|[MobileNetV4_conv_small.yaml](../../paddlex/configs/image_classification/MobileNetV4_conv_small.yaml)|
+|MobileNetV4_hybrid_large|83.8|20.0726|213.769|145.1 M|[MobileNetV4_hybrid_large.yaml](../../paddlex/configs/image_classification/MobileNetV4_hybrid_large.yaml)|
+|MobileNetV4_hybrid_medium|80.5|19.7543|62.2624|42.9 M|[MobileNetV4_hybrid_medium.yaml](../../paddlex/configs/image_classification/MobileNetV4_hybrid_medium.yaml)|
+|PP-HGNet_base|85.0|14.2969|327.114|249.4 M|[PP-HGNet_base.yaml](../../paddlex/configs/image_classification/PP-HGNet_base.yaml)|
+|PP-HGNet_small|81.51|5.50661|119.041|86.5 M|[PP-HGNet_small.yaml](../../paddlex/configs/image_classification/PP-HGNet_small.yaml)|
+|PP-HGNet_tiny|79.83|5.22006|69.396|52.4 M|[PP-HGNet_tiny.yaml](../../paddlex/configs/image_classification/PP-HGNet_tiny.yaml)|
+|PP-HGNetV2-B0|77.77|6.53694|23.352|21.4 M|[PP-HGNetV2-B0.yaml](../../paddlex/configs/image_classification/PP-HGNetV2-B0.yaml)|
+|PP-HGNetV2-B1|79.18|6.56034|27.3099|22.6 M|[PP-HGNetV2-B1.yaml](../../paddlex/configs/image_classification/PP-HGNetV2-B1.yaml)|
+|PP-HGNetV2-B2|81.74|9.60494|43.1219|39.9 M|[PP-HGNetV2-B2.yaml](../../paddlex/configs/image_classification/PP-HGNetV2-B2.yaml)|
+|PP-HGNetV2-B3|82.98|11.0042|55.1367|57.9 M|[PP-HGNetV2-B3.yaml](../../paddlex/configs/image_classification/PP-HGNetV2-B3.yaml)|
+|PP-HGNetV2-B4|83.57|9.66407|54.2462|70.4 M|[PP-HGNetV2-B4.yaml](../../paddlex/configs/image_classification/PP-HGNetV2-B4.yaml)|
+|PP-HGNetV2-B5|84.75|15.7091|115.926|140.8 M|[PP-HGNetV2-B5.yaml](../../paddlex/configs/image_classification/PP-HGNetV2-B5.yaml)|
+|PP-HGNetV2-B6|86.30|21.226|255.279|268.4 M|[PP-HGNetV2-B6.yaml](../../paddlex/configs/image_classification/PP-HGNetV2-B6.yaml)|
+|PP-LCNet_x0_5|63.14|3.67722|6.66857|6.7 M|[PP-LCNet_x0_5.yaml](../../paddlex/configs/image_classification/PP-LCNet_x0_5.yaml)|
+|PP-LCNet_x0_25|51.86|2.65341|5.81357|5.5 M|[PP-LCNet_x0_25.yaml](../../paddlex/configs/image_classification/PP-LCNet_x0_25.yaml)|
+|PP-LCNet_x0_35|58.09|2.7212|6.28944|5.9 M|[PP-LCNet_x0_35.yaml](../../paddlex/configs/image_classification/PP-LCNet_x0_35.yaml)|
+|PP-LCNet_x0_75|68.18|3.91032|8.06953|8.4 M|[PP-LCNet_x0_75.yaml](../../paddlex/configs/image_classification/PP-LCNet_x0_75.yaml)|
+|PP-LCNet_x1_0|71.32|3.84845|9.23735|10.5 M|[PP-LCNet_x1_0.yaml](../../paddlex/configs/image_classification/PP-LCNet_x1_0.yaml)|
+|PP-LCNet_x1_5|73.71|3.97666|12.3457|16.0 M|[PP-LCNet_x1_5.yaml](../../paddlex/configs/image_classification/PP-LCNet_x1_5.yaml)|
+|PP-LCNet_x2_0|75.18|4.07556|16.2752|23.2 M|[PP-LCNet_x2_0.yaml](../../paddlex/configs/image_classification/PP-LCNet_x2_0.yaml)|
+|PP-LCNet_x2_5|76.60|4.06028|21.5063|32.1 M|[PP-LCNet_x2_5.yaml](../../paddlex/configs/image_classification/PP-LCNet_x2_5.yaml)|
+|PP-LCNetV2_base|77.05|5.23428|19.6005|23.7 M|[PP-LCNetV2_base.yaml](../../paddlex/configs/image_classification/PP-LCNetV2_base.yaml)|
+|PP-LCNetV2_large |78.51|6.78335|30.4378|37.3 M|[PP-LCNetV2_large.yaml](../../paddlex/configs/image_classification/PP-LCNetV2_large.yaml)|
+|PP-LCNetV2_small|73.97|3.89762|13.0273|14.6 M|[PP-LCNetV2_small.yaml](../../paddlex/configs/image_classification/PP-LCNetV2_small.yaml)|
+|ResNet18_vd|72.3|3.53048|31.3014|41.5 M|[ResNet18_vd.yaml](../../paddlex/configs/image_classification/ResNet18_vd.yaml)|
+|ResNet18|71.0|2.4868|27.4601|41.5 M|[ResNet18.yaml](../../paddlex/configs/image_classification/ResNet18.yaml)|
+|ResNet34_vd|76.0|5.60675|56.0653|77.3 M|[ResNet34_vd.yaml](../../paddlex/configs/image_classification/ResNet34_vd.yaml)|
+|ResNet34|74.6|4.16902|51.925|77.3 M|[ResNet34.yaml](../../paddlex/configs/image_classification/ResNet34.yaml)|
+|ResNet50_vd|79.1|10.1885|68.446|90.8 M|[ResNet50_vd.yaml](../../paddlex/configs/image_classification/ResNet50_vd.yaml)|
+|ResNet50|76.5|9.62383|64.8135|90.8 M|[ResNet50.yaml](../../paddlex/configs/image_classification/ResNet50.yaml)|
+|ResNet101_vd|80.2|20.0563|124.85|158.4 M|[ResNet101_vd.yaml](../../paddlex/configs/image_classification/ResNet101_vd.yaml)|
+|ResNet101|77.6|19.2297|121.006|158.7 M|[ResNet101.yaml](../../paddlex/configs/image_classification/ResNet101.yaml)|
+|ResNet152_vd|80.6|29.6439|181.678|214.3 M|[ResNet152_vd.yaml](../../paddlex/configs/image_classification/ResNet152_vd.yaml)|
+|ResNet152|78.3|30.0461|177.707|214.2 M|[ResNet152.yaml](../../paddlex/configs/image_classification/ResNet152.yaml)|
+|ResNet200_vd|80.9|39.1628|235.185|266.0 M|[ResNet200_vd.yaml](../../paddlex/configs/image_classification/ResNet200_vd.yaml)|
+|StarNet-S1|73.6|9.895|23.0465|11.2 M|[StarNet-S1.yaml](../../paddlex/configs/image_classification/StarNet-S1.yaml)|
+|StarNet-S2|74.8|7.91279|21.9571|14.3 M|[StarNet-S2.yaml](../../paddlex/configs/image_classification/StarNet-S2.yaml)|
+|StarNet-S3|77.0|10.7531|30.7656|22.2 M|[StarNet-S3.yaml](../../paddlex/configs/image_classification/StarNet-S3.yaml)|
+|StarNet-S4|79.0|15.2868|43.2497|28.9 M|[StarNet-S4.yaml](../../paddlex/configs/image_classification/StarNet-S4.yaml)|
+|SwinTransformer_base_patch4_window7_224|83.37|16.9848|383.83|310.5 M|[SwinTransformer_base_patch4_window7_224.yaml](../../paddlex/configs/image_classification/SwinTransformer_base_patch4_window7_224.yaml)|
+|SwinTransformer_base_patch4_window12_384|84.17|37.2855|1178.63|311.4 M|[SwinTransformer_base_patch4_window12_384.yaml](../../paddlex/configs/image_classification/SwinTransformer_base_patch4_window12_384.yaml)|
+|SwinTransformer_large_patch4_window7_224|86.19|27.5498|689.729|694.8 M|[SwinTransformer_large_patch4_window7_224.yaml](../../paddlex/configs/image_classification/SwinTransformer_large_patch4_window7_224.yaml)|
+|SwinTransformer_large_patch4_window12_384|87.06|74.1768|2105.22|696.1 M|[SwinTransformer_large_patch4_window12_384.yaml](../../paddlex/configs/image_classification/SwinTransformer_large_patch4_window12_384.yaml)|
+|SwinTransformer_small_patch4_window7_224|83.21|16.3982|285.56|175.6 M|[SwinTransformer_small_patch4_window7_224.yaml](../../paddlex/configs/image_classification/SwinTransformer_small_patch4_window7_224.yaml)|
+|SwinTransformer_tiny_patch4_window7_224|81.10|8.54846|156.306|100.1 M|[SwinTransformer_tiny_patch4_window7_224.yaml](../../paddlex/configs/image_classification/SwinTransformer_tiny_patch4_window7_224.yaml)|
 
 **注:以上精度指标为 [ImageNet-1k](https://www.image-net.org/index.php) 验证集 Top1 Acc。**
 
-## 图像多标签分类模块
-|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|CLIP_vit_base_patch16_448_ML|89.15|-|-|325.6 M|
-|PP-HGNetV2-B0_ML|80.98|-|-|39.6 M|
-|PP-HGNetV2-B4_ML|87.96|-|-|88.5 M|
-|PP-HGNetV2-B6_ML|91.25|-|-|286.5 M|
-|PP-LCNet_x1_0_ML|77.96|-|-|29.4 M|
-|ResNet50_ML|83.50|-|-|108.9 M|
+## [图像多标签分类模块](../module_usage/tutorials/cv_modules/ml_classification.md)
+|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|CLIP_vit_base_patch16_448_ML|89.15|-|-|325.6 M|[CLIP_vit_base_patch16_448_ML.yaml](../../paddlex/configs/multilabel_classification/CLIP_vit_base_patch16_448_ML.yaml)|
+|PP-HGNetV2-B0_ML|80.98|-|-|39.6 M|[PP-HGNetV2-B0_ML.yaml](../../paddlex/configs/multilabel_classification/PP-HGNetV2-B0_ML.yaml)|
+|PP-HGNetV2-B4_ML|87.96|-|-|88.5 M|[PP-HGNetV2-B4_ML.yaml](../../paddlex/configs/multilabel_classification/PP-HGNetV2-B4_ML.yaml)|
+|PP-HGNetV2-B6_ML|91.25|-|-|286.5 M|[PP-HGNetV2-B6_ML.yaml](../../paddlex/configs/multilabel_classification/PP-HGNetV2-B6_ML.yaml)|
+|PP-LCNet_x1_0_ML|77.96|-|-|29.4 M|[PP-LCNet_x1_0_ML.yaml](../../paddlex/configs/multilabel_classification/PP-LCNet_x1_0_ML.yaml)|
+|ResNet50_ML|83.50|-|-|108.9 M|[ResNet50_ML.yaml](../../paddlex/configs/multilabel_classification/ResNet50_ML.yaml)|
 
 **注:以上精度指标为 [COCO2017](https://cocodataset.org/#home) 的多标签分类任务mAP。**
 
-## 行人属性模块
-|模型名称|mA(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|PP-LCNet_x1_0_pedestrian_attribute|92.2|3.84845|9.23735|6.7 M  |
+## [行人属性模块](../module_usage/tutorials/cv_modules/pedestrian_attribute_recognition.md)
+|模型名称|mA(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|PP-LCNet_x1_0_pedestrian_attribute|92.2|3.84845|9.23735|6.7 M  |[PP-LCNet_x1_0_pedestrian_attribute.yaml](../../paddlex/configs/pedestrian_attribute/PP-LCNet_x1_0_pedestrian_attribute.yaml)|
 
 **注:以上精度指标为 PaddleX 内部自建数据集mA。**
 
-## 车辆属性模块
-|模型名称|mA(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|PP-LCNet_x1_0_vehicle_attribute|91.7|3.84845|9.23735|6.7 M|
+## [车辆属性模块](../module_usage/tutorials/cv_modules/vehicle_attribute_recognition.md)
+|模型名称|mA(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|PP-LCNet_x1_0_vehicle_attribute|91.7|3.84845|9.23735|6.7 M|[PP-LCNet_x1_0_vehicle_attribute.yaml](../../paddlex/configs/vehicle_attribute/PP-LCNet_x1_0_vehicle_attribute.yaml)|
 
 **注:以上精度指标为 VeRi 数据集 mA。**
 
-## 图像特征模块
-|模型名称|recall@1(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|PP-ShiTuV2_rec|84.2|5.23428|19.6005|16.3 M|
-|PP-ShiTuV2_rec_CLIP_vit_base|88.69|13.1957|285.493|306.6 M|
-|PP-ShiTuV2_rec_CLIP_vit_large|91.03|51.1284|1131.28|1.05 G|
+## [图像特征模块](../module_usage/tutorials/cv_modules/image_feature.md)
+|模型名称|recall@1(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|PP-ShiTuV2_rec|84.2|5.23428|19.6005|16.3 M|[PP-ShiTuV2_rec.yaml](../../paddlex/configs/general_recognition/PP-ShiTuV2_rec.yaml)|
+|PP-ShiTuV2_rec_CLIP_vit_base|88.69|13.1957|285.493|306.6 M|[PP-ShiTuV2_rec_CLIP_vit_base.yaml](../../paddlex/configs/general_recognition/PP-ShiTuV2_rec_CLIP_vit_base.yaml)|
+|PP-ShiTuV2_rec_CLIP_vit_large|91.03|51.1284|1131.28|1.05 G|[PP-ShiTuV2_rec_CLIP_vit_large.yaml](../../paddlex/configs/general_recognition/PP-ShiTuV2_rec_CLIP_vit_large.yaml)|
 
 **注:以上精度指标为 AliProducts recall@1。**
 
-## 文档方向分类模块
-|模型名称|Top-1 Acc(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|PP-LCNet_x1_0_doc_ori|99.26|3.84845|9.23735|7.1 M|
+## [文档方向分类模块](../module_usage/tutorials/ocr_modules/doc_img_orientation_classification.md)
+|模型名称|Top-1 Acc(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|PP-LCNet_x1_0_doc_ori|99.26|3.84845|9.23735|7.1 M|[PP-LCNet_x1_0_doc_ori.yaml](../../paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml)|
 
 **注:以上精度指标为 PaddleX 内部自建数据集 Top-1 Acc 。**
 
-## 主体检测模块
-|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|PP-ShiTuV2_det|41.5|33.7426|537.003|27.6 M|
+## [主体检测模块](../module_usage/tutorials/cv_modules/mainbody_detection.md)
+|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|PP-ShiTuV2_det|41.5|33.7426|537.003|27.6 M|[PP-ShiTuV2_det.yaml](../../paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml)|
 
 **注:以上精度指标为 [PaddleClas主体检测数据集](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/training/PP-ShiTu/mainbody_detection.md) mAP(0.5:0.95)。**
 
-## 目标检测模块
-|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|Cascade-FasterRCNN-ResNet50-FPN|41.1|-|-|245.4 M|
-|Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN|45.0|-|-|246.2 M|
-|CenterNet-DLA-34|37.6|-|-|75.4 M|
-|CenterNet-ResNet50|38.9|-|-|319.7 M|
-|DETR-R50|42.3|59.2132|5334.52|159.3 M|
-|FasterRCNN-ResNet34-FPN|37.8|-|-|137.5 M|
-|FasterRCNN-ResNet50-FPN|38.4|-|-|148.1 M|
-|FasterRCNN-ResNet50-vd-FPN|39.5|-|-|148.1 M|
-|FasterRCNN-ResNet50-vd-SSLDv2-FPN|41.4|-|-|148.1 M|
-|FasterRCNN-ResNet50|36.7|-|-|120.2 M|
-|FasterRCNN-ResNet101-FPN|41.4|-|-|216.3 M|
-|FasterRCNN-ResNet101|39.0|-|-|188.1 M|
-|FasterRCNN-ResNeXt101-vd-FPN|43.4|-|-|360.6 M|
-|FasterRCNN-Swin-Tiny-FPN|42.6|-|-|159.8 M|
-|FCOS-ResNet50|39.6|103.367|3424.91|124.2 M|
-|PicoDet-L|42.6|16.6715|169.904|20.9 M|
-|PicoDet-M|37.5|16.2311|71.7257|16.8 M|
-|PicoDet-S|29.1|14.097|37.6563|4.4 M |
-|PicoDet-XS|26.2|13.8102|48.3139|5.7M |
-|PP-YOLOE_plus-L|52.9|33.5644|814.825|185.3 M|
-|PP-YOLOE_plus-M|49.8|19.843|449.261|83.2 M|
-|PP-YOLOE_plus-S|43.7|16.8884|223.059|28.3 M|
-|PP-YOLOE_plus-X|54.7|57.8995|1439.93|349.4 M|
-|RT-DETR-H|56.3|114.814|3933.39|435.8 M|
-|RT-DETR-L|53.0|34.5252|1454.27|113.7 M|
-|RT-DETR-R18|46.5|19.89|784.824|70.7 M|
-|RT-DETR-R50|53.1|41.9327|1625.95|149.1 M|
-|RT-DETR-X|54.8|61.8042|2246.64|232.9 M|
-|YOLOv3-DarkNet53|39.1|40.1055|883.041|219.7 M|
-|YOLOv3-MobileNetV3|31.4|18.6692|267.214|83.8 M|
-|YOLOv3-ResNet50_vd_DCN|40.6|31.6276|856.047|163.0 M|
-|YOLOX-L|50.1|185.691|1250.58|192.5 M|
-|YOLOX-M|46.9|123.324|688.071|90.0 M|
-|YOLOX-N|26.1|79.1665|155.59|3.4M|
-|YOLOX-S|40.4|184.828|474.446|32.0 M|
-|YOLOX-T|32.9|102.748|212.52|18.1 M|
-|YOLOX-X|51.8|227.361|2067.84|351.5 M|
+## [目标检测模块](../module_usage/tutorials/cv_modules/object_detection.md)
+|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|Cascade-FasterRCNN-ResNet50-FPN|41.1|-|-|245.4 M|[Cascade-FasterRCNN-ResNet50-FPN.yaml](../../paddlex/configs/object_detection/Cascade-FasterRCNN-ResNet50-FPN.yaml)|
+|Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN|45.0|-|-|246.2 M|[Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN.yaml](../../paddlex/configs/object_detection/Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN.yaml)|
+|CenterNet-DLA-34|37.6|-|-|75.4 M|[CenterNet-DLA-34.yaml](../../paddlex/configs/object_detection/CenterNet-DLA-34.yaml)|
+|CenterNet-ResNet50|38.9|-|-|319.7 M|[CenterNet-ResNet50.yaml](../../paddlex/configs/object_detection/CenterNet-ResNet50.yaml)|
+|DETR-R50|42.3|59.2132|5334.52|159.3 M|[DETR-R50.yaml](../../paddlex/configs/object_detection/DETR-R50.yaml)|
+|FasterRCNN-ResNet34-FPN|37.8|-|-|137.5 M|[FasterRCNN-ResNet34-FPN.yaml](../../paddlex/configs/object_detection/FasterRCNN-ResNet34-FPN.yaml)|
+|FasterRCNN-ResNet50-FPN|38.4|-|-|148.1 M|[FasterRCNN-ResNet50-FPN.yaml](../../paddlex/configs/object_detection/FasterRCNN-ResNet50-FPN.yaml)|
+|FasterRCNN-ResNet50-vd-FPN|39.5|-|-|148.1 M|[FasterRCNN-ResNet50-vd-FPN.yaml](../../paddlex/configs/object_detection/FasterRCNN-ResNet50-vd-FPN.yaml)|
+|FasterRCNN-ResNet50-vd-SSLDv2-FPN|41.4|-|-|148.1 M|[FasterRCNN-ResNet50-vd-SSLDv2-FPN.yaml](../../paddlex/configs/object_detection/FasterRCNN-ResNet50-vd-SSLDv2-FPN.yaml)|
+|FasterRCNN-ResNet50|36.7|-|-|120.2 M|[FasterRCNN-ResNet50.yaml](../../paddlex/configs/object_detection/FasterRCNN-ResNet50.yaml)|
+|FasterRCNN-ResNet101-FPN|41.4|-|-|216.3 M|[FasterRCNN-ResNet101-FPN.yaml](../../paddlex/configs/object_detection/FasterRCNN-ResNet101-FPN.yaml)|
+|FasterRCNN-ResNet101|39.0|-|-|188.1 M|[FasterRCNN-ResNet101.yaml](../../paddlex/configs/object_detection/FasterRCNN-ResNet101.yaml)|
+|FasterRCNN-ResNeXt101-vd-FPN|43.4|-|-|360.6 M|[FasterRCNN-ResNeXt101-vd-FPN.yaml](../../paddlex/configs/object_detection/FasterRCNN-ResNeXt101-vd-FPN.yaml)|
+|FasterRCNN-Swin-Tiny-FPN|42.6|-|-|159.8 M|[FasterRCNN-Swin-Tiny-FPN.yaml](../../paddlex/configs/object_detection/FasterRCNN-Swin-Tiny-FPN.yaml)|
+|FCOS-ResNet50|39.6|103.367|3424.91|124.2 M|[FCOS-ResNet50.yaml](../../paddlex/configs/object_detection/FCOS-ResNet50.yaml)|
+|PicoDet-L|42.6|16.6715|169.904|20.9 M|[PicoDet-L.yaml](../../paddlex/configs/object_detection/PicoDet-L.yaml)|
+|PicoDet-M|37.5|16.2311|71.7257|16.8 M|[PicoDet-M.yaml](../../paddlex/configs/object_detection/PicoDet-M.yaml)|
+|PicoDet-S|29.1|14.097|37.6563|4.4 M |[PicoDet-S.yaml](../../paddlex/configs/object_detection/PicoDet-S.yaml)|
+|PicoDet-XS|26.2|13.8102|48.3139|5.7M |[PicoDet-XS.yaml](../../paddlex/configs/object_detection/PicoDet-XS.yaml)|
+|PP-YOLOE_plus-L|52.9|33.5644|814.825|185.3 M|[PP-YOLOE_plus-L.yaml](../../paddlex/configs/object_detection/PP-YOLOE_plus-L.yaml)|
+|PP-YOLOE_plus-M|49.8|19.843|449.261|83.2 M|[PP-YOLOE_plus-M.yaml](../../paddlex/configs/object_detection/PP-YOLOE_plus-M.yaml)|
+|PP-YOLOE_plus-S|43.7|16.8884|223.059|28.3 M|[PP-YOLOE_plus-S.yaml](../../paddlex/configs/object_detection/PP-YOLOE_plus-S.yaml)|
+|PP-YOLOE_plus-X|54.7|57.8995|1439.93|349.4 M|[PP-YOLOE_plus-X.yaml](../../paddlex/configs/object_detection/PP-YOLOE_plus-X.yaml)|
+|RT-DETR-H|56.3|114.814|3933.39|435.8 M|[RT-DETR-H.yaml](../../paddlex/configs/object_detection/RT-DETR-H.yaml)|
+|RT-DETR-L|53.0|34.5252|1454.27|113.7 M|[RT-DETR-L.yaml](../../paddlex/configs/object_detection/RT-DETR-L.yaml)|
+|RT-DETR-R18|46.5|19.89|784.824|70.7 M|[RT-DETR-R18.yaml](../../paddlex/configs/object_detection/RT-DETR-R18.yaml)|
+|RT-DETR-R50|53.1|41.9327|1625.95|149.1 M|[RT-DETR-R50.yaml](../../paddlex/configs/object_detection/RT-DETR-R50.yaml)|
+|RT-DETR-X|54.8|61.8042|2246.64|232.9 M|[RT-DETR-X.yaml](../../paddlex/configs/object_detection/RT-DETR-X.yaml)|
+|YOLOv3-DarkNet53|39.1|40.1055|883.041|219.7 M|[YOLOv3-DarkNet53.yaml](../../paddlex/configs/object_detection/YOLOv3-DarkNet53.yaml)|
+|YOLOv3-MobileNetV3|31.4|18.6692|267.214|83.8 M|[YOLOv3-MobileNetV3.yaml](../../paddlex/configs/object_detection/YOLOv3-MobileNetV3.yaml)|
+|YOLOv3-ResNet50_vd_DCN|40.6|31.6276|856.047|163.0 M|[YOLOv3-ResNet50_vd_DCN.yaml](../../paddlex/configs/object_detection/YOLOv3-ResNet50_vd_DCN.yaml)|
+|YOLOX-L|50.1|185.691|1250.58|192.5 M|[YOLOX-L.yaml](../../paddlex/configs/object_detection/YOLOX-L.yaml)|
+|YOLOX-M|46.9|123.324|688.071|90.0 M|[YOLOX-M.yaml](../../paddlex/configs/object_detection/YOLOX-M.yaml)|
+|YOLOX-N|26.1|79.1665|155.59|3.4M|[YOLOX-N.yaml](../../paddlex/configs/object_detection/YOLOX-N.yaml)|
+|YOLOX-S|40.4|184.828|474.446|32.0 M|[YOLOX-S.yaml](../../paddlex/configs/object_detection/YOLOX-S.yaml)|
+|YOLOX-T|32.9|102.748|212.52|18.1 M|[YOLOX-T.yaml](../../paddlex/configs/object_detection/YOLOX-T.yaml)|
+|YOLOX-X|51.8|227.361|2067.84|351.5 M|[YOLOX-X.yaml](../../paddlex/configs/object_detection/YOLOX-X.yaml)|
 
 **注:以上精度指标为 [COCO2017](https://cocodataset.org/#home) 验证集 mAP(0.5:0.95)。**
 
-## 小目标检测模块
-|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|PP-YOLOE_plus_SOD-S|25.1|65.4608|324.37|77.3 M|
-|PP-YOLOE_plus_SOD-L|31.9|57.1448|1006.98|325.0 M|
-|PP-YOLOE_plus_SOD-largesize-L|42.7|458.521|11172.7|340.5 M|
+## [小目标检测模块](../module_usage/tutorials/cv_modules/small_object_detection.md)
+|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|PP-YOLOE_plus_SOD-S|25.1|65.4608|324.37|77.3 M|[PP-YOLOE_plus_SOD-S.yaml](../../paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml)|
+|PP-YOLOE_plus_SOD-L|31.9|57.1448|1006.98|325.0 M|[PP-YOLOE_plus_SOD-L.yaml](../../paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-L.yaml)|
+|PP-YOLOE_plus_SOD-largesize-L|42.7|458.521|11172.7|340.5 M|[PP-YOLOE_plus_SOD-largesize-L.yaml](../../paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-largesize-L.yaml)|
 
 **注:以上精度指标为 [VisDrone-DET](https://github.com/VisDrone/VisDrone-Dataset) 验证集 mAP(0.5:0.95)。**
 
-## 行人检测模块
-|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|PP-YOLOE-L_human|48.0|32.7754|777.691|196.1 M|
-|PP-YOLOE-S_human|42.5|15.0118|179.317|28.8 M|
+## [行人检测模块](../module_usage/tutorials/cv_modules/human_detection.md)
+|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|PP-YOLOE-L_human|48.0|32.7754|777.691|196.1 M|[PP-YOLOE-L_human.yaml](../../paddlex/configs/human_detection/PP-YOLOE-L_human.yaml)|
+|PP-YOLOE-S_human|42.5|15.0118|179.317|28.8 M|[PP-YOLOE-S_human.yaml](../../paddlex/configs/human_detection/PP-YOLOE-S_human.yaml)|
 
 **注:以上精度指标为 [CrowdHuman](https://bj.bcebos.com/v1/paddledet/data/crowdhuman.zip) 验证集 mAP(0.5:0.95)。**
 
-## 车辆检测模块
-|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|PP-YOLOE-L_vehicle|63.9|32.5619|775.633|196.1 M|
-|PP-YOLOE-S_vehicle|61.3|15.3787|178.441|28.8 M|
+## [车辆检测模块](../module_usage/tutorials/cv_modules/vehicle_detection.md)
+|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|PP-YOLOE-L_vehicle|63.9|32.5619|775.633|196.1 M|[PP-YOLOE-L_vehicle.yaml](../../paddlex/configs/vehicle_detection/PP-YOLOE-L_vehicle.yaml)|
+|PP-YOLOE-S_vehicle|61.3|15.3787|178.441|28.8 M|[PP-YOLOE-S_vehicle.yaml](../../paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml)|
 
 **注:以上精度指标为 [PPVehicle](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppvehicle) 验证集 mAP(0.5:0.95)。**
 
-## 人脸检测模块
-|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|PicoDet_LCNet_x2_5_face|35.8|33.7426|537.003|27.7 M|
+## [人脸检测模块](../module_usage/tutorials/cv_modules/face_detection.md)
+|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|PicoDet_LCNet_x2_5_face|35.8|33.7426|537.003|27.7 M|[PicoDet_LCNet_x2_5_face.yaml](../../paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml)|
 
 **注:以上精度指标为 [wider_face](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppvehicle) 评估集 mAP(0.5:0.95)。**
 
-## 异常检测模块
-|模型名称|Avg(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|STFPM|96.2|-|-|21.5 M|
+## [异常检测模块](../module_usage/tutorials/cv_modules/anomaly_detection.md)
+|模型名称|Avg(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|STFPM|96.2|-|-|21.5 M|[STFPM.yaml](../../paddlex/configs/anomaly_detection/STFPM.yaml)|
 
 **注:以上精度指标为 [MVTec AD](https://www.mvtec.com/company/research/datasets/mvtec-ad) 验证集 平均异常分数。**
 
-## 语义分割模块
-|模型名称|mloU(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|Deeplabv3_Plus-R50 |80.36|61.0531|1513.58|94.9 M|
-|Deeplabv3_Plus-R101|81.10|100.026|2460.71|162.5 M|
-|Deeplabv3-R50|79.90|82.2631|1735.83|138.3 M|
-|Deeplabv3-R101|80.85|121.492|2685.51|205.9 M|
-|OCRNet_HRNet-W18|80.67|48.2335|906.385|43.1 M|
-|OCRNet_HRNet-W48|82.15|78.9976|2226.95|249.8 M|
-|PP-LiteSeg-T|73.10|7.6827|138.683|28.5 M|
-|PP-LiteSeg-B|75.25|-|-|47.0 M|
-|SegFormer-B0 (slice)|76.73|11.1946|268.929|13.2 M|
-|SegFormer-B1 (slice)|78.35|17.9998|403.393|48.5 M|
-|SegFormer-B2 (slice)|81.60|48.0371|1248.52|96.9 M|
-|SegFormer-B3 (slice)|82.47|64.341|1666.35|167.3 M|
-|SegFormer-B4 (slice)|82.38|82.4336|1995.42|226.7 M|
-|SegFormer-B5 (slice)|82.58|97.3717|2420.19|229.7 M|
+## [语义分割模块](../module_usage/tutorials/cv_modules/semantic_segmentation.md)
+|模型名称|mIoU(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|Deeplabv3_Plus-R50 |80.36|61.0531|1513.58|94.9 M|[Deeplabv3_Plus-R50.yaml](../../paddlex/configs/semantic_segmentation/Deeplabv3_Plus-R50.yaml)|
+|Deeplabv3_Plus-R101|81.10|100.026|2460.71|162.5 M|[Deeplabv3_Plus-R101.yaml](../../paddlex/configs/semantic_segmentation/Deeplabv3_Plus-R101.yaml)|
+|Deeplabv3-R50|79.90|82.2631|1735.83|138.3 M|[Deeplabv3-R50.yaml](../../paddlex/configs/semantic_segmentation/Deeplabv3-R50.yaml)|
+|Deeplabv3-R101|80.85|121.492|2685.51|205.9 M|[Deeplabv3-R101.yaml](../../paddlex/configs/semantic_segmentation/Deeplabv3-R101.yaml)|
+|OCRNet_HRNet-W18|80.67|48.2335|906.385|43.1 M|[OCRNet_HRNet-W18.yaml](../../paddlex/configs/semantic_segmentation/OCRNet_HRNet-W18.yaml)|
+|OCRNet_HRNet-W48|82.15|78.9976|2226.95|249.8 M|[OCRNet_HRNet-W48.yaml](../../paddlex/configs/semantic_segmentation/OCRNet_HRNet-W48.yaml)|
+|PP-LiteSeg-T|73.10|7.6827|138.683|28.5 M|[PP-LiteSeg-T.yaml](../../paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml)|
+|PP-LiteSeg-B|75.25|-|-|47.0 M|[PP-LiteSeg-B.yaml](../../paddlex/configs/semantic_segmentation/PP-LiteSeg-B.yaml)|
+|SegFormer-B0 (slice)|76.73|11.1946|268.929|13.2 M|[SegFormer-B0.yaml](../../paddlex/configs/semantic_segmentation/SegFormer-B0.yaml)|
+|SegFormer-B1 (slice)|78.35|17.9998|403.393|48.5 M|[SegFormer-B1.yaml](../../paddlex/configs/semantic_segmentation/SegFormer-B1.yaml)|
+|SegFormer-B2 (slice)|81.60|48.0371|1248.52|96.9 M|[SegFormer-B2.yaml](../../paddlex/configs/semantic_segmentation/SegFormer-B2.yaml)|
+|SegFormer-B3 (slice)|82.47|64.341|1666.35|167.3 M|[SegFormer-B3.yaml](../../paddlex/configs/semantic_segmentation/SegFormer-B3.yaml)|
+|SegFormer-B4 (slice)|82.38|82.4336|1995.42|226.7 M|[SegFormer-B4.yaml](../../paddlex/configs/semantic_segmentation/SegFormer-B4.yaml)|
+|SegFormer-B5 (slice)|82.58|97.3717|2420.19|229.7 M|[SegFormer-B5.yaml](../../paddlex/configs/semantic_segmentation/SegFormer-B5.yaml)|
 
 **注:以上精度指标为 [Cityscapes](https://www.cityscapes-dataset.com/) 数据集 mIoU。**
 
-|模型名称|mloU(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|SeaFormer_base(slice)|40.92|24.4073|397.574|30.8 M|
-|SeaFormer_large (slice)|43.66|27.8123|550.464|49.8 M|
-|SeaFormer_small (slice)|38.73|19.2295|358.343|14.3 M|
-|SeaFormer_tiny (slice)|34.58|13.9496|330.132|6.1M |
+|模型名称|mIoU(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|SeaFormer_base(slice)|40.92|24.4073|397.574|30.8 M|[SeaFormer_base.yaml](../../paddlex/configs/semantic_segmentation/SeaFormer_base.yaml)|
+|SeaFormer_large (slice)|43.66|27.8123|550.464|49.8 M|[SeaFormer_large.yaml](../../paddlex/configs/semantic_segmentation/SeaFormer_large.yaml)|
+|SeaFormer_small (slice)|38.73|19.2295|358.343|14.3 M|[SeaFormer_small.yaml](../../paddlex/configs/semantic_segmentation/SeaFormer_small.yaml)|
+|SeaFormer_tiny (slice)|34.58|13.9496|330.132|6.1M |[SeaFormer_tiny.yaml](../../paddlex/configs/semantic_segmentation/SeaFormer_tiny.yaml)|
 
 **注:以上精度指标为 [ADE20k](https://groups.csail.mit.edu/vision/datasets/ADE20K/) 数据集,slice 表示对输入图像进行了切图操作。**
 
-## 实例分割模块
-|模型名称|Mask AP|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|Mask-RT-DETR-H|50.6|132.693|4896.17|449.9 M|
-|Mask-RT-DETR-L|45.7|46.5059|2575.92|113.6 M|
-|Mask-RT-DETR-M|42.7|36.8329|-|66.6 M|
-|Mask-RT-DETR-S|41.0|33.5007|-|51.8 M|
-|Mask-RT-DETR-X|47.5|75.755|3358.04|237.5 M|
-|Cascade-MaskRCNN-ResNet50-FPN|36.3|-|-|254.8 M|
-|Cascade-MaskRCNN-ResNet50-vd-SSLDv2-FPN|39.1|-|-|254.7 M|
-|MaskRCNN-ResNet50-FPN|35.6|-|-|157.5 M|
-|MaskRCNN-ResNet50-vd-FPN|36.4|-|-|157.5 M|
-|MaskRCNN-ResNet50-vd-SSLDv2-FPN|38.2|-|-|157.2 M|
-|MaskRCNN-ResNet50|32.8|-|-|127.8 M|
-|MaskRCNN-ResNet101-FPN|36.6|-|-|225.4 M|
-|MaskRCNN-ResNet101-vd-FPN|38.1|-|-|225.1 M|
-|MaskRCNN-ResNeXt101-vd-FPN|39.5|-|-|370.0 M|
-|PP-YOLOE_seg-S|32.5|-|-|31.5 M|
+## [实例分割模块](../module_usage/tutorials/cv_modules/instance_segmentation.md)
+|模型名称|Mask AP|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|Mask-RT-DETR-H|50.6|132.693|4896.17|449.9 M|[Mask-RT-DETR-H.yaml](../../paddlex/configs/instance_segmentation/Mask-RT-DETR-H.yaml)|
+|Mask-RT-DETR-L|45.7|46.5059|2575.92|113.6 M|[Mask-RT-DETR-L.yaml](../../paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml)|
+|Mask-RT-DETR-M|42.7|36.8329|-|66.6 M|[Mask-RT-DETR-M.yaml](../../paddlex/configs/instance_segmentation/Mask-RT-DETR-M.yaml)|
+|Mask-RT-DETR-S|41.0|33.5007|-|51.8 M|[Mask-RT-DETR-S.yaml](../../paddlex/configs/instance_segmentation/Mask-RT-DETR-S.yaml)|
+|Mask-RT-DETR-X|47.5|75.755|3358.04|237.5 M|[Mask-RT-DETR-X.yaml](../../paddlex/configs/instance_segmentation/Mask-RT-DETR-X.yaml)|
+|Cascade-MaskRCNN-ResNet50-FPN|36.3|-|-|254.8 M|[Cascade-MaskRCNN-ResNet50-FPN.yaml](../../paddlex/configs/instance_segmentation/Cascade-MaskRCNN-ResNet50-FPN.yaml)|
+|Cascade-MaskRCNN-ResNet50-vd-SSLDv2-FPN|39.1|-|-|254.7 M|[Cascade-MaskRCNN-ResNet50-vd-SSLDv2-FPN.yaml](../../paddlex/configs/instance_segmentation/Cascade-MaskRCNN-ResNet50-vd-SSLDv2-FPN.yaml)|
+|MaskRCNN-ResNet50-FPN|35.6|-|-|157.5 M|[MaskRCNN-ResNet50-FPN.yaml](../../paddlex/configs/instance_segmentation/MaskRCNN-ResNet50-FPN.yaml)|
+|MaskRCNN-ResNet50-vd-FPN|36.4|-|-|157.5 M|[MaskRCNN-ResNet50-vd-FPN.yaml](../../paddlex/configs/instance_segmentation/MaskRCNN-ResNet50-vd-FPN.yaml)|
+|MaskRCNN-ResNet50|32.8|-|-|127.8 M|[MaskRCNN-ResNet50.yaml](../../paddlex/configs/instance_segmentation/MaskRCNN-ResNet50.yaml)|
+|MaskRCNN-ResNet101-FPN|36.6|-|-|225.4 M|[MaskRCNN-ResNet101-FPN.yaml](../../paddlex/configs/instance_segmentation/MaskRCNN-ResNet101-FPN.yaml)|
+|MaskRCNN-ResNet101-vd-FPN|38.1|-|-|225.1 M|[MaskRCNN-ResNet101-vd-FPN.yaml](../../paddlex/configs/instance_segmentation/MaskRCNN-ResNet101-vd-FPN.yaml)|
+|MaskRCNN-ResNeXt101-vd-FPN|39.5|-|-|370.0 M|[MaskRCNN-ResNeXt101-vd-FPN.yaml](../../paddlex/configs/instance_segmentation/MaskRCNN-ResNeXt101-vd-FPN.yaml)|
+|PP-YOLOE_seg-S|32.5|-|-|31.5 M|[PP-YOLOE_seg-S.yaml](../../paddlex/configs/instance_segmentation/PP-YOLOE_seg-S.yaml)|
+|SOLOv2|35.5|-|-|179.1 M|[SOLOv2.yaml](../../paddlex/configs/instance_segmentation/SOLOv2.yaml)|
 
 **注:以上精度指标为 [COCO2017](https://cocodataset.org/#home) 验证集 Mask AP(0.5:0.95)。**
 
-## 文本检测模块
-|模型名称|检测Hmean(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|PP-OCRv4_mobile_det |77.79|10.6923|120.177|4.2 M|
-|PP-OCRv4_server_det |82.69|83.3501|2434.01|100.1M|
+## [文本检测模块](../module_usage/tutorials/ocr_modules/text_detection.md)
+|模型名称|检测Hmean(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|PP-OCRv4_mobile_det |77.79|10.6923|120.177|4.2 M|[PP-OCRv4_mobile_det.yaml](../../paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml)|
+|PP-OCRv4_server_det |82.69|83.3501|2434.01|100.1M|[PP-OCRv4_server_det.yaml](../../paddlex/configs/text_detection/PP-OCRv4_server_det.yaml)|
 
 **注:以上精度指标的评估集是 PaddleOCR 自建的中文数据集,覆盖街景、网图、文档、手写多个场景,其中检测包含 500 张图片。**
 
-## 印章文本检测模块
-|模型名称|检测Hmean(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|PP-OCRv4_mobile_seal_det|96.47|10.5878|131.813|4.7M |
-|PP-OCRv4_server_seal_det|98.21|84.341|2425.06|108.3 M|
+## [印章文本检测模块](../module_usage/tutorials/ocr_modules/seal_text_detection.md)
+|模型名称|检测Hmean(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|PP-OCRv4_mobile_seal_det|96.47|10.5878|131.813|4.7M |[PP-OCRv4_mobile_seal_det.yaml](../../paddlex/configs/text_detection_seal/PP-OCRv4_mobile_seal_det.yaml)|
+|PP-OCRv4_server_seal_det|98.21|84.341|2425.06|108.3 M|[PP-OCRv4_server_seal_det.yaml](../../paddlex/configs/text_detection_seal/PP-OCRv4_server_seal_det.yaml)|
 
 **注:以上精度指标的评估集是 PaddleX 自建的印章数据集,包含500印章图像。**
 
-## 文本识别模块
-|模型名称|识别Avg Accuracy(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|PP-OCRv4_mobile_rec |78.20|7.95018|46.7868|10.6 M|
-|PP-OCRv4_server_rec |79.20|7.19439|140.179|71.2 M|
+## [文本识别模块](../module_usage/tutorials/ocr_modules/text_recognition.md)
+|模型名称|识别Avg Accuracy(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|PP-OCRv4_mobile_rec |78.20|7.95018|46.7868|10.6 M|[PP-OCRv4_mobile_rec.yaml](../../paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml)|
+|PP-OCRv4_server_rec |79.20|7.19439|140.179|71.2 M|[PP-OCRv4_server_rec.yaml](../../paddlex/configs/text_recognition/PP-OCRv4_server_rec.yaml)|
 
 **注:以上精度指标的评估集是 PaddleOCR 自建的中文数据集,覆盖街景、网图、文档、手写多个场景,其中文本识别包含 1.1w 张图片。**
 
-|模型名称|识别Avg Accuracy(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|ch_SVTRv2_rec|68.81|8.36801|165.706|73.9 M|
+|模型名称|识别Avg Accuracy(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|ch_SVTRv2_rec|68.81|8.36801|165.706|73.9 M|[ch_SVTRv2_rec.yaml](../../paddlex/configs/text_recognition/ch_SVTRv2_rec.yaml)|
 
 **注:以上精度指标的评估集是 [PaddleOCR算法模型挑战赛 - 赛题一:OCR端到端识别任务](https://aistudio.baidu.com/competition/detail/1131/0/introduction)A榜。**
 
-|模型名称|识别Avg Accuracy(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|ch_RepSVTR_rec|65.07|10.5047|51.5647|22.1 M|
+|模型名称|识别Avg Accuracy(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|ch_RepSVTR_rec|65.07|10.5047|51.5647|22.1 M|[ch_RepSVTR_rec.yaml](../../paddlex/configs/text_recognition/ch_RepSVTR_rec.yaml)|
 
 **注:以上精度指标的评估集是 [PaddleOCR算法模型挑战赛 - 赛题一:OCR端到端识别任务](https://aistudio.baidu.com/competition/detail/1131/0/introduction)B榜。**
 
-## 公式识别模块
-|模型名称|BLEU score|normed edit distance|ExpRate (%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|-|-|
-|LaTeX_OCR_rec|0.8821|0.0823|40.01|-|-|89.7 M|
+## [公式识别模块](../module_usage/tutorials/ocr_modules/formula_recognition.md)
+|模型名称|BLEU score|normed edit distance|ExpRate (%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|-|-|
+|LaTeX_OCR_rec|0.8821|0.0823|40.01|-|-|89.7 M|[LaTeX_OCR_rec.yaml](../../paddlex/configs/formula_recognition/LaTeX_OCR_rec.yaml)|
 
 **注:以上精度指标测量自 [LaTeX-OCR公式识别测试集](https://drive.google.com/drive/folders/13CA4vAmOmD_I_dSbvLp-Lf0s6KiaNfuO)。**
 
-## 表格结构识别模块
-|模型名称|精度(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|SLANet|59.52|522.536|1845.37|6.9 M |
-|SLANet_plus|63.69|522.536|1845.37|6.9 M |
+## [表格结构识别模块](../module_usage/tutorials/ocr_modules/table_structure_recognition.md)
+|模型名称|精度(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|SLANet|59.52|522.536|1845.37|6.9 M |[SLANet.yaml](../../paddlex/configs/table_recognition/SLANet.yaml)|
+|SLANet_plus|63.69|522.536|1845.37|6.9 M |[SLANet_plus.yaml](../../paddlex/configs/table_recognition/SLANet_plus.yaml)|
 
 **注:以上精度指标测量自 PaddleX 内部自建英文表格识别数据集。**
 
-## 图像矫正模块
-|模型名称|MS-SSIM (%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|UVDoc|54.40|-|-|30.3 M|
+## [图像矫正模块](../module_usage/tutorials/ocr_modules/text_image_unwarping.md)
+|模型名称|MS-SSIM (%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|UVDoc|54.40|-|-|30.3 M|[UVDoc.yaml](../../paddlex/configs/image_unwarping/UVDoc.yaml)|
 
 **注:以上精度指标测量自 PaddleX 自建的图像矫正数据集。**
 
-## 版面区域分析模块
-|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|
-|-|-|-|-|-|
-|PicoDet_layout_1x|86.8|13.036|91.2634|7.4 M |
-|PicoDet-L_layout_3cls|89.3|15.7425|159.771|22.6 M|
-|RT-DETR-H_layout_3cls|95.9|114.644|3832.62|470.1 M|
-|RT-DETR-H_layout_17cls|92.6|115.126|3827.25|470.2 M|
+## [版面区域检测模块](../module_usage/tutorials/ocr_modules/layout_detection.md)
+|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小|yaml 文件|
+|-|-|-|-|-|-|
+|PicoDet_layout_1x|86.8|13.036|91.2634|7.4 M |[PicoDet_layout_1x.yaml](../../paddlex/configs/structure_analysis/PicoDet_layout_1x.yaml)|
+|PicoDet-L_layout_3cls|89.3|15.7425|159.771|22.6 M|[PicoDet-L_layout_3cls.yaml](../../paddlex/configs/structure_analysis/PicoDet-L_layout_3cls.yaml)|
+|RT-DETR-H_layout_3cls|95.9|114.644|3832.62|470.1 M|[RT-DETR-H_layout_3cls.yaml](../../paddlex/configs/structure_analysis/RT-DETR-H_layout_3cls.yaml)|
+|RT-DETR-H_layout_17cls|92.6|115.126|3827.25|470.2 M|[RT-DETR-H_layout_17cls.yaml](../../paddlex/configs/structure_analysis/RT-DETR-H_layout_17cls.yaml)|
 
-**注:以上精度指标的评估集是 ****PaddleX 自建的版面区域分析数据集****,包含 1w 张图片。**
+**注:以上精度指标的评估集是 PaddleX 自建的版面区域检测数据集,包含 1w 张图片。**
 
-## 时序预测模块
-|模型名称|mse|mae|模型存储大小|
-|-|-|-|-|
-|DLinear|0.382|0.394|72 K|
-|NLinear|0.386|0.392|40 K |
-|Nonstationary|0.600|0.515|55.5 M|
-|PatchTST|0.385|0.397|2.0 M |
-|RLinear|0.384|0.392|40 K|
-|TiDE|0.405|0.412|31.7 M|
-|TimesNet|0.417|0.431|4.9 M|
+## [时序预测模块](../module_usage/tutorials/time_series_modules/time_series_forecasting.md)
+|模型名称|mse|mae|模型存储大小|yaml 文件|
+|-|-|-|-|-|
+|DLinear|0.382|0.394|72 K|[DLinear.yaml](../../paddlex/configs/ts_forecast/DLinear.yaml)|
+|NLinear|0.386|0.392|40 K |[NLinear.yaml](../../paddlex/configs/ts_forecast/NLinear.yaml)|
+|Nonstationary|0.600|0.515|55.5 M|[Nonstationary.yaml](../../paddlex/configs/ts_forecast/Nonstationary.yaml)|
+|PatchTST|0.385|0.397|2.0 M |[PatchTST.yaml](../../paddlex/configs/ts_forecast/PatchTST.yaml)|
+|RLinear|0.384|0.392|40 K|[RLinear.yaml](../../paddlex/configs/ts_forecast/RLinear.yaml)|
+|TiDE|0.405|0.412|31.7 M|[TiDE.yaml](../../paddlex/configs/ts_forecast/TiDE.yaml)|
+|TimesNet|0.417|0.431|4.9 M|[TimesNet.yaml](../../paddlex/configs/ts_forecast/TimesNet.yaml)|
 
 **注:以上精度指标测量自 [ETTH1](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/Etth1.tar) 数据集(在测试集 test.csv 上的评测结果)。**
 
-## Time Series Anomaly Detection Module
-|Model Name|precision|recall|f1_score|Model Size|
-|-|-|-|-|-|
-|AutoEncoder_ad|99.36|84.36|91.25|52 K |
-|DLinear_ad|98.98|93.96|96.41|112 K|
-|Nonstationary_ad|98.55|88.95|93.51|1.8 M |
-|PatchTST_ad|98.78|90.70|94.57|320 K |
-|TimesNet_ad|98.37|94.80|96.56|1.3 M |
+## [Time Series Anomaly Detection Module](../module_usage/tutorials/time_series_modules/time_series_anomaly_detection.md)
+|Model Name|precision|recall|f1_score|Model Size|YAML File|
+|-|-|-|-|-|-|
+|AutoEncoder_ad|99.36|84.36|91.25|52 K |[AutoEncoder_ad.yaml](../../paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml)|
+|DLinear_ad|98.98|93.96|96.41|112 K|[DLinear_ad.yaml](../../paddlex/configs/ts_anomaly_detection/DLinear_ad.yaml)|
+|Nonstationary_ad|98.55|88.95|93.51|1.8 M |[Nonstationary_ad.yaml](../../paddlex/configs/ts_anomaly_detection/Nonstationary_ad.yaml)|
+|PatchTST_ad|98.78|90.70|94.57|320 K |[PatchTST_ad.yaml](../../paddlex/configs/ts_anomaly_detection/PatchTST_ad.yaml)|
+|TimesNet_ad|98.37|94.80|96.56|1.3 M |[TimesNet_ad.yaml](../../paddlex/configs/ts_anomaly_detection/TimesNet_ad.yaml)|
 
 **Note: The above accuracy metrics are measured on the [PSM](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/ts_anomaly_examples.tar) dataset.**
 
-## Time Series Classification Module
-|Model Name|acc (%)|Model Size|
-|-|-|-|
-|TimesNet_cls|87.5|792 K|
+## [Time Series Classification Module](../module_usage/tutorials/time_series_modules/time_series_classification.md)
+|Model Name|acc (%)|Model Size|YAML File|
+|-|-|-|-|
+|TimesNet_cls|87.5|792 K|[TimesNet_cls.yaml](../../paddlex/configs/ts_classification/TimesNet_cls.yaml)|
 
 **Note: The above accuracy metrics are measured on the [UWaveGestureLibrary](https://paddlets.bj.bcebos.com/classification/UWaveGestureLibrary_TEST.csv) dataset.**
 
->**Note: All GPU inference times for the above models are based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speeds are based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
+>**Note: All GPU inference times for the above models are based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speeds are based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**

+ 285 - 285
docs/support_list/models_list_en.md

@@ -4,374 +4,374 @@
 
 PaddleX incorporates multiple pipelines, each containing several modules, and each module includes various models. Use the benchmark data below to choose a model according to your priority: higher accuracy, faster inference, or a smaller storage footprint.
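
Every model name below doubles as an identifier in the PaddleX Python API, and the linked YAML file is the config used to train or fine-tune that model. As a minimal sketch, assuming the `create_model` entry point of the PaddleX 3.0 beta API (the model name and image path are placeholders):

```python
from paddlex import create_model

# Any model name from the tables below can be used here.
model = create_model("PP-LCNet_x1_0")
for res in model.predict("example.jpg"):
    res.print()  # inspect the prediction result
```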
 
-## Image Classification Module
-| Model Name | Top-1 Acc (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size |
-|-|-|-|-|-|
-|CLIP_vit_base_patch16_224|85.36|13.1957|285.493|306.5 M|
-|CLIP_vit_large_patch14_224|88.1|51.1284|1131.28|1.04 G|
-|ConvNeXt_base_224|83.84|12.8473|1513.87|313.9 M|
-|ConvNeXt_base_384|84.90|31.7607|3967.05|313.9 M|
-|ConvNeXt_large_224|84.26|26.8103|2463.56|700.7 M|
-|ConvNeXt_large_384|85.27|66.4058|6598.92|700.7 M|
-|ConvNeXt_small|83.13|9.74075|1127.6|178.0 M|
-|ConvNeXt_tiny|82.03|5.48923|672.559|101.4 M|
-|FasterNet-L|83.5|23.4415|-|357.1 M|
-|FasterNet-M|83.0|21.8936|-|204.6 M|
-|FasterNet-S|81.3|13.0409|-|119.3 M|
-|FasterNet-T0|71.9|12.2432|-|15.1 M|
-|FasterNet-T1|75.9|11.3562|-|29.2 M|
-|FasterNet-T2|79.1|10.703|-|57.4 M|
-|MobileNetV1_x0_5|63.5|1.86754|7.48297|4.8 M|
-|MobileNetV1_x0_25|51.4|1.83478|4.83674|1.8 M|
-|MobileNetV1_x0_75|68.8|2.57903|10.6343|9.3 M|
-|MobileNetV1_x1_0|71.0|2.78781|13.98|15.2 M|
-|MobileNetV2_x0_5|65.0|4.94234|11.1629|7.1 M|
-|MobileNetV2_x0_25|53.2|4.50856|9.40991|5.5 M|
-|MobileNetV2_x1_0|72.2|6.12159|16.0442|12.6 M|
-|MobileNetV2_x1_5|74.1|6.28385|22.5129|25.0 M|
-|MobileNetV2_x2_0|75.2|6.12888|30.8612|41.2 M|
-|MobileNetV3_large_x0_5|69.2|6.31302|14.5588|9.6 M|
-|MobileNetV3_large_x0_35|64.3|5.76207|13.9041|7.5 M|
-|MobileNetV3_large_x0_75|73.1|8.41737|16.9506|14.0 M|
-|MobileNetV3_large_x1_0|75.3|8.64112|19.1614|19.5 M|
-|MobileNetV3_large_x1_25|76.4|8.73358|22.1296|26.5 M|
-|MobileNetV3_small_x0_5|59.2|5.16721|11.2688|6.8 M|
-|MobileNetV3_small_x0_35|53.0|5.22053|11.0055|6.0 M|
-|MobileNetV3_small_x0_75|66.0|5.39831|12.8313|8.5 M|
-|MobileNetV3_small_x1_0|68.2|6.00993|12.9598|10.5 M|
-|MobileNetV3_small_x1_25|70.7|6.9589|14.3995|13.0 M|
-|MobileNetV4_conv_large|83.4|12.5485|51.6453|125.2 M|
-|MobileNetV4_conv_medium|79.9|9.65509|26.6157|37.6 M|
-|MobileNetV4_conv_small|74.6|5.24172|11.0893|14.7 M|
-|MobileNetV4_hybrid_large|83.8|20.0726|213.769|145.1 M|
-|MobileNetV4_hybrid_medium|80.5|19.7543|62.2624|42.9 M|
-|PP-HGNet_base|85.0|14.2969|327.114|249.4 M|
-|PP-HGNet_small|81.51|5.50661|119.041|86.5 M|
-|PP-HGNet_tiny|79.83|5.22006|69.396|52.4 M|
-|PP-HGNetV2-B0|77.77|6.53694|23.352|21.4 M|
-|PP-HGNetV2-B1|79.18|6.56034|27.3099|22.6 M|
-|PP-HGNetV2-B2|81.74|9.60494|43.1219|39.9 M|
-|PP-HGNetV2-B3|82.98|11.0042|55.1367|57.9 M|
-|PP-HGNetV2-B4|83.57|9.66407|54.2462|70.4 M|
-|PP-HGNetV2-B5|84.75|15.7091|115.926|140.8 M|
-|PP-HGNetV2-B6|86.30|21.226|255.279|268.4 M|
-|PP-LCNet_x0_5|63.14|3.67722|6.66857|6.7 M|
-|PP-LCNet_x0_25|51.86|2.65341|5.81357|5.5 M|
-|PP-LCNet_x0_35|58.09|2.7212|6.28944|5.9 M|
-|PP-LCNet_x0_75|68.18|3.91032|8.06953|8.4 M|
-|PP-LCNet_x1_0|71.32|3.84845|9.23735|10.5 M|
-|PP-LCNet_x1_5|73.71|3.97666|12.3457|16.0 M|
-|PP-LCNet_x2_0|75.18|4.07556|16.2752|23.2 M|
-|PP-LCNet_x2_5|76.60|4.06028|21.5063|32.1 M|
-|PP-LCNetV2_base|77.05|5.23428|19.6005|23.7 M|
-|PP-LCNetV2_large |78.51|6.78335|30.4378|37.3 M|
-|PP-LCNetV2_small|73.97|3.89762|13.0273|14.6 M|
-|ResNet18_vd|72.3|3.53048|31.3014|41.5 M|
-|ResNet18|71.0|2.4868|27.4601|41.5 M|
-|ResNet34_vd|76.0|5.60675|56.0653|77.3 M|
-|ResNet34|74.6|4.16902|51.925|77.3 M|
-|ResNet50_vd|79.1|10.1885|68.446|90.8 M|
-|ResNet50|76.5|9.62383|64.8135|90.8 M|
-|ResNet101_vd|80.2|20.0563|124.85|158.4 M|
-|ResNet101|77.6|19.2297|121.006|158.7 M|
-|ResNet152_vd|80.6|29.6439|181.678|214.3 M|
-|ResNet152|78.3|30.0461|177.707|214.2 M|
-|ResNet200_vd|80.9|39.1628|235.185|266.0 M|
-|StarNet-S1|73.6|9.895|23.0465|11.2 M|
-|StarNet-S2|74.8|7.91279|21.9571|14.3 M|
-|StarNet-S3|77.0|10.7531|30.7656|22.2 M|
-|StarNet-S4|79.0|15.2868|43.2497|28.9 M|
-|SwinTransformer_base_patch4_window7_224|83.37|16.9848|383.83|310.5 M|
-|SwinTransformer_base_patch4_window12_384|84.17|37.2855|1178.63|311.4 M|
-|SwinTransformer_large_patch4_window7_224|86.19|27.5498|689.729|694.8 M|
-|SwinTransformer_large_patch4_window12_384|87.06|74.1768|2105.22|696.1 M|
-|SwinTransformer_small_patch4_window7_224|83.21|16.3982|285.56|175.6 M|
-|SwinTransformer_tiny_patch4_window7_224|81.10|8.54846|156.306|100.1 M|
+## [Image Classification Module](../module_usage/tutorials/cv_modules/image_classification_en.md)
+| Model Name | Top-1 Acc (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size |YAML File|
+|-|-|-|-|-|-|
+|CLIP_vit_base_patch16_224|85.36|13.1957|285.493|306.5 M|[CLIP_vit_base_patch16_224.yaml](../../paddlex/configs/image_classification/CLIP_vit_base_patch16_224.yaml)|
+|CLIP_vit_large_patch14_224|88.1|51.1284|1131.28|1.04 G|[CLIP_vit_large_patch14_224.yaml](../../paddlex/configs/image_classification/CLIP_vit_large_patch14_224.yaml)|
+|ConvNeXt_base_224|83.84|12.8473|1513.87|313.9 M|[ConvNeXt_base_224.yaml](../../paddlex/configs/image_classification/ConvNeXt_base_224.yaml)|
+|ConvNeXt_base_384|84.90|31.7607|3967.05|313.9 M|[ConvNeXt_base_384.yaml](../../paddlex/configs/image_classification/ConvNeXt_base_384.yaml)|
+|ConvNeXt_large_224|84.26|26.8103|2463.56|700.7 M|[ConvNeXt_large_224.yaml](../../paddlex/configs/image_classification/ConvNeXt_large_224.yaml)|
+|ConvNeXt_large_384|85.27|66.4058|6598.92|700.7 M|[ConvNeXt_large_384.yaml](../../paddlex/configs/image_classification/ConvNeXt_large_384.yaml)|
+|ConvNeXt_small|83.13|9.74075|1127.6|178.0 M|[ConvNeXt_small.yaml](../../paddlex/configs/image_classification/ConvNeXt_small.yaml)|
+|ConvNeXt_tiny|82.03|5.48923|672.559|101.4 M|[ConvNeXt_tiny.yaml](../../paddlex/configs/image_classification/ConvNeXt_tiny.yaml)|
+|FasterNet-L|83.5|23.4415|-|357.1 M|[FasterNet-L.yaml](../../paddlex/configs/image_classification/FasterNet-L.yaml)|
+|FasterNet-M|83.0|21.8936|-|204.6 M|[FasterNet-M.yaml](../../paddlex/configs/image_classification/FasterNet-M.yaml)|
+|FasterNet-S|81.3|13.0409|-|119.3 M|[FasterNet-S.yaml](../../paddlex/configs/image_classification/FasterNet-S.yaml)|
+|FasterNet-T0|71.9|12.2432|-|15.1 M|[FasterNet-T0.yaml](../../paddlex/configs/image_classification/FasterNet-T0.yaml)|
+|FasterNet-T1|75.9|11.3562|-|29.2 M|[FasterNet-T1.yaml](../../paddlex/configs/image_classification/FasterNet-T1.yaml)|
+|FasterNet-T2|79.1|10.703|-|57.4 M|[FasterNet-T2.yaml](../../paddlex/configs/image_classification/FasterNet-T2.yaml)|
+|MobileNetV1_x0_5|63.5|1.86754|7.48297|4.8 M|[MobileNetV1_x0_5.yaml](../../paddlex/configs/image_classification/MobileNetV1_x0_5.yaml)|
+|MobileNetV1_x0_25|51.4|1.83478|4.83674|1.8 M|[MobileNetV1_x0_25.yaml](../../paddlex/configs/image_classification/MobileNetV1_x0_25.yaml)|
+|MobileNetV1_x0_75|68.8|2.57903|10.6343|9.3 M|[MobileNetV1_x0_75.yaml](../../paddlex/configs/image_classification/MobileNetV1_x0_75.yaml)|
+|MobileNetV1_x1_0|71.0|2.78781|13.98|15.2 M|[MobileNetV1_x1_0.yaml](../../paddlex/configs/image_classification/MobileNetV1_x1_0.yaml)|
+|MobileNetV2_x0_5|65.0|4.94234|11.1629|7.1 M|[MobileNetV2_x0_5.yaml](../../paddlex/configs/image_classification/MobileNetV2_x0_5.yaml)|
+|MobileNetV2_x0_25|53.2|4.50856|9.40991|5.5 M|[MobileNetV2_x0_25.yaml](../../paddlex/configs/image_classification/MobileNetV2_x0_25.yaml)|
+|MobileNetV2_x1_0|72.2|6.12159|16.0442|12.6 M|[MobileNetV2_x1_0.yaml](../../paddlex/configs/image_classification/MobileNetV2_x1_0.yaml)|
+|MobileNetV2_x1_5|74.1|6.28385|22.5129|25.0 M|[MobileNetV2_x1_5.yaml](../../paddlex/configs/image_classification/MobileNetV2_x1_5.yaml)|
+|MobileNetV2_x2_0|75.2|6.12888|30.8612|41.2 M|[MobileNetV2_x2_0.yaml](../../paddlex/configs/image_classification/MobileNetV2_x2_0.yaml)|
+|MobileNetV3_large_x0_5|69.2|6.31302|14.5588|9.6 M|[MobileNetV3_large_x0_5.yaml](../../paddlex/configs/image_classification/MobileNetV3_large_x0_5.yaml)|
+|MobileNetV3_large_x0_35|64.3|5.76207|13.9041|7.5 M|[MobileNetV3_large_x0_35.yaml](../../paddlex/configs/image_classification/MobileNetV3_large_x0_35.yaml)|
+|MobileNetV3_large_x0_75|73.1|8.41737|16.9506|14.0 M|[MobileNetV3_large_x0_75.yaml](../../paddlex/configs/image_classification/MobileNetV3_large_x0_75.yaml)|
+|MobileNetV3_large_x1_0|75.3|8.64112|19.1614|19.5 M|[MobileNetV3_large_x1_0.yaml](../../paddlex/configs/image_classification/MobileNetV3_large_x1_0.yaml)|
+|MobileNetV3_large_x1_25|76.4|8.73358|22.1296|26.5 M|[MobileNetV3_large_x1_25.yaml](../../paddlex/configs/image_classification/MobileNetV3_large_x1_25.yaml)|
+|MobileNetV3_small_x0_5|59.2|5.16721|11.2688|6.8 M|[MobileNetV3_small_x0_5.yaml](../../paddlex/configs/image_classification/MobileNetV3_small_x0_5.yaml)|
+|MobileNetV3_small_x0_35|53.0|5.22053|11.0055|6.0 M|[MobileNetV3_small_x0_35.yaml](../../paddlex/configs/image_classification/MobileNetV3_small_x0_35.yaml)|
+|MobileNetV3_small_x0_75|66.0|5.39831|12.8313|8.5 M|[MobileNetV3_small_x0_75.yaml](../../paddlex/configs/image_classification/MobileNetV3_small_x0_75.yaml)|
+|MobileNetV3_small_x1_0|68.2|6.00993|12.9598|10.5 M|[MobileNetV3_small_x1_0.yaml](../../paddlex/configs/image_classification/MobileNetV3_small_x1_0.yaml)|
+|MobileNetV3_small_x1_25|70.7|6.9589|14.3995|13.0 M|[MobileNetV3_small_x1_25.yaml](../../paddlex/configs/image_classification/MobileNetV3_small_x1_25.yaml)|
+|MobileNetV4_conv_large|83.4|12.5485|51.6453|125.2 M|[MobileNetV4_conv_large.yaml](../../paddlex/configs/image_classification/MobileNetV4_conv_large.yaml)|
+|MobileNetV4_conv_medium|79.9|9.65509|26.6157|37.6 M|[MobileNetV4_conv_medium.yaml](../../paddlex/configs/image_classification/MobileNetV4_conv_medium.yaml)|
+|MobileNetV4_conv_small|74.6|5.24172|11.0893|14.7 M|[MobileNetV4_conv_small.yaml](../../paddlex/configs/image_classification/MobileNetV4_conv_small.yaml)|
+|MobileNetV4_hybrid_large|83.8|20.0726|213.769|145.1 M|[MobileNetV4_hybrid_large.yaml](../../paddlex/configs/image_classification/MobileNetV4_hybrid_large.yaml)|
+|MobileNetV4_hybrid_medium|80.5|19.7543|62.2624|42.9 M|[MobileNetV4_hybrid_medium.yaml](../../paddlex/configs/image_classification/MobileNetV4_hybrid_medium.yaml)|
+|PP-HGNet_base|85.0|14.2969|327.114|249.4 M|[PP-HGNet_base.yaml](../../paddlex/configs/image_classification/PP-HGNet_base.yaml)|
+|PP-HGNet_small|81.51|5.50661|119.041|86.5 M|[PP-HGNet_small.yaml](../../paddlex/configs/image_classification/PP-HGNet_small.yaml)|
+|PP-HGNet_tiny|79.83|5.22006|69.396|52.4 M|[PP-HGNet_tiny.yaml](../../paddlex/configs/image_classification/PP-HGNet_tiny.yaml)|
+|PP-HGNetV2-B0|77.77|6.53694|23.352|21.4 M|[PP-HGNetV2-B0.yaml](../../paddlex/configs/image_classification/PP-HGNetV2-B0.yaml)|
+|PP-HGNetV2-B1|79.18|6.56034|27.3099|22.6 M|[PP-HGNetV2-B1.yaml](../../paddlex/configs/image_classification/PP-HGNetV2-B1.yaml)|
+|PP-HGNetV2-B2|81.74|9.60494|43.1219|39.9 M|[PP-HGNetV2-B2.yaml](../../paddlex/configs/image_classification/PP-HGNetV2-B2.yaml)|
+|PP-HGNetV2-B3|82.98|11.0042|55.1367|57.9 M|[PP-HGNetV2-B3.yaml](../../paddlex/configs/image_classification/PP-HGNetV2-B3.yaml)|
+|PP-HGNetV2-B4|83.57|9.66407|54.2462|70.4 M|[PP-HGNetV2-B4.yaml](../../paddlex/configs/image_classification/PP-HGNetV2-B4.yaml)|
+|PP-HGNetV2-B5|84.75|15.7091|115.926|140.8 M|[PP-HGNetV2-B5.yaml](../../paddlex/configs/image_classification/PP-HGNetV2-B5.yaml)|
+|PP-HGNetV2-B6|86.30|21.226|255.279|268.4 M|[PP-HGNetV2-B6.yaml](../../paddlex/configs/image_classification/PP-HGNetV2-B6.yaml)|
+|PP-LCNet_x0_5|63.14|3.67722|6.66857|6.7 M|[PP-LCNet_x0_5.yaml](../../paddlex/configs/image_classification/PP-LCNet_x0_5.yaml)|
+|PP-LCNet_x0_25|51.86|2.65341|5.81357|5.5 M|[PP-LCNet_x0_25.yaml](../../paddlex/configs/image_classification/PP-LCNet_x0_25.yaml)|
+|PP-LCNet_x0_35|58.09|2.7212|6.28944|5.9 M|[PP-LCNet_x0_35.yaml](../../paddlex/configs/image_classification/PP-LCNet_x0_35.yaml)|
+|PP-LCNet_x0_75|68.18|3.91032|8.06953|8.4 M|[PP-LCNet_x0_75.yaml](../../paddlex/configs/image_classification/PP-LCNet_x0_75.yaml)|
+|PP-LCNet_x1_0|71.32|3.84845|9.23735|10.5 M|[PP-LCNet_x1_0.yaml](../../paddlex/configs/image_classification/PP-LCNet_x1_0.yaml)|
+|PP-LCNet_x1_5|73.71|3.97666|12.3457|16.0 M|[PP-LCNet_x1_5.yaml](../../paddlex/configs/image_classification/PP-LCNet_x1_5.yaml)|
+|PP-LCNet_x2_0|75.18|4.07556|16.2752|23.2 M|[PP-LCNet_x2_0.yaml](../../paddlex/configs/image_classification/PP-LCNet_x2_0.yaml)|
+|PP-LCNet_x2_5|76.60|4.06028|21.5063|32.1 M|[PP-LCNet_x2_5.yaml](../../paddlex/configs/image_classification/PP-LCNet_x2_5.yaml)|
+|PP-LCNetV2_base|77.05|5.23428|19.6005|23.7 M|[PP-LCNetV2_base.yaml](../../paddlex/configs/image_classification/PP-LCNetV2_base.yaml)|
+|PP-LCNetV2_large |78.51|6.78335|30.4378|37.3 M|[PP-LCNetV2_large.yaml](../../paddlex/configs/image_classification/PP-LCNetV2_large.yaml)|
+|PP-LCNetV2_small|73.97|3.89762|13.0273|14.6 M|[PP-LCNetV2_small.yaml](../../paddlex/configs/image_classification/PP-LCNetV2_small.yaml)|
+|ResNet18_vd|72.3|3.53048|31.3014|41.5 M|[ResNet18_vd.yaml](../../paddlex/configs/image_classification/ResNet18_vd.yaml)|
+|ResNet18|71.0|2.4868|27.4601|41.5 M|[ResNet18.yaml](../../paddlex/configs/image_classification/ResNet18.yaml)|
+|ResNet34_vd|76.0|5.60675|56.0653|77.3 M|[ResNet34_vd.yaml](../../paddlex/configs/image_classification/ResNet34_vd.yaml)|
+|ResNet34|74.6|4.16902|51.925|77.3 M|[ResNet34.yaml](../../paddlex/configs/image_classification/ResNet34.yaml)|
+|ResNet50_vd|79.1|10.1885|68.446|90.8 M|[ResNet50_vd.yaml](../../paddlex/configs/image_classification/ResNet50_vd.yaml)|
+|ResNet50|76.5|9.62383|64.8135|90.8 M|[ResNet50.yaml](../../paddlex/configs/image_classification/ResNet50.yaml)|
+|ResNet101_vd|80.2|20.0563|124.85|158.4 M|[ResNet101_vd.yaml](../../paddlex/configs/image_classification/ResNet101_vd.yaml)|
+|ResNet101|77.6|19.2297|121.006|158.7 M|[ResNet101.yaml](../../paddlex/configs/image_classification/ResNet101.yaml)|
+|ResNet152_vd|80.6|29.6439|181.678|214.3 M|[ResNet152_vd.yaml](../../paddlex/configs/image_classification/ResNet152_vd.yaml)|
+|ResNet152|78.3|30.0461|177.707|214.2 M|[ResNet152.yaml](../../paddlex/configs/image_classification/ResNet152.yaml)|
+|ResNet200_vd|80.9|39.1628|235.185|266.0 M|[ResNet200_vd.yaml](../../paddlex/configs/image_classification/ResNet200_vd.yaml)|
+|StarNet-S1|73.6|9.895|23.0465|11.2 M|[StarNet-S1.yaml](../../paddlex/configs/image_classification/StarNet-S1.yaml)|
+|StarNet-S2|74.8|7.91279|21.9571|14.3 M|[StarNet-S2.yaml](../../paddlex/configs/image_classification/StarNet-S2.yaml)|
+|StarNet-S3|77.0|10.7531|30.7656|22.2 M|[StarNet-S3.yaml](../../paddlex/configs/image_classification/StarNet-S3.yaml)|
+|StarNet-S4|79.0|15.2868|43.2497|28.9 M|[StarNet-S4.yaml](../../paddlex/configs/image_classification/StarNet-S4.yaml)|
+|SwinTransformer_base_patch4_window7_224|83.37|16.9848|383.83|310.5 M|[SwinTransformer_base_patch4_window7_224.yaml](../../paddlex/configs/image_classification/SwinTransformer_base_patch4_window7_224.yaml)|
+|SwinTransformer_base_patch4_window12_384|84.17|37.2855|1178.63|311.4 M|[SwinTransformer_base_patch4_window12_384.yaml](../../paddlex/configs/image_classification/SwinTransformer_base_patch4_window12_384.yaml)|
+|SwinTransformer_large_patch4_window7_224|86.19|27.5498|689.729|694.8 M|[SwinTransformer_large_patch4_window7_224.yaml](../../paddlex/configs/image_classification/SwinTransformer_large_patch4_window7_224.yaml)|
+|SwinTransformer_large_patch4_window12_384|87.06|74.1768|2105.22|696.1 M|[SwinTransformer_large_patch4_window12_384.yaml](../../paddlex/configs/image_classification/SwinTransformer_large_patch4_window12_384.yaml)|
+|SwinTransformer_small_patch4_window7_224|83.21|16.3982|285.56|175.6 M|[SwinTransformer_small_patch4_window7_224.yaml](../../paddlex/configs/image_classification/SwinTransformer_small_patch4_window7_224.yaml)|
+|SwinTransformer_tiny_patch4_window7_224|81.10|8.54846|156.306|100.1 M|[SwinTransformer_tiny_patch4_window7_224.yaml](../../paddlex/configs/image_classification/SwinTransformer_tiny_patch4_window7_224.yaml)|
 
 **Note: The above accuracy metrics are Top-1 Acc on the [ImageNet-1k](https://www.image-net.org/index.php) validation set.**
 
-## Image Multi-Label Classification Module
-| Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size |
-|-|-|-|-|-|
-|CLIP_vit_base_patch16_448_ML|89.15|-|-|325.6 M|
-|PP-HGNetV2-B0_ML|80.98|-|-|39.6 M|
-|PP-HGNetV2-B4_ML|87.96|-|-|88.5 M|
-|PP-HGNetV2-B6_ML|91.25|-|-|286.5 M|
-|PP-LCNet_x1_0_ML|77.96|-|-|29.4 M|
-|ResNet50_ML|83.50|-|-|108.9 M|
+## [Image Multi-Label Classification Module](../module_usage/tutorials/cv_modules/ml_classification_en.md)
+| Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size |YAML File|
+|-|-|-|-|-|-|
+|CLIP_vit_base_patch16_448_ML|89.15|-|-|325.6 M|[CLIP_vit_base_patch16_448_ML.yaml](../../paddlex/configs/multilabel_classification/CLIP_vit_base_patch16_448_ML.yaml)|
+|PP-HGNetV2-B0_ML|80.98|-|-|39.6 M|[PP-HGNetV2-B0_ML.yaml](../../paddlex/configs/multilabel_classification/PP-HGNetV2-B0_ML.yaml)|
+|PP-HGNetV2-B4_ML|87.96|-|-|88.5 M|[PP-HGNetV2-B4_ML.yaml](../../paddlex/configs/multilabel_classification/PP-HGNetV2-B4_ML.yaml)|
+|PP-HGNetV2-B6_ML|91.25|-|-|286.5 M|[PP-HGNetV2-B6_ML.yaml](../../paddlex/configs/multilabel_classification/PP-HGNetV2-B6_ML.yaml)|
+|PP-LCNet_x1_0_ML|77.96|-|-|29.4 M|[PP-LCNet_x1_0_ML.yaml](../../paddlex/configs/multilabel_classification/PP-LCNet_x1_0_ML.yaml)|
+|ResNet50_ML|83.50|-|-|108.9 M|[ResNet50_ML.yaml](../../paddlex/configs/multilabel_classification/ResNet50_ML.yaml)|
 
 **Note: The above accuracy metrics are mAP for the multi-label classification task on [COCO2017](https://cocodataset.org/#home).**
 
-## Pedestrian Attribute Module
-| Model Name | mA (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size  |
-|-|-|-|-|-|
-|PP-LCNet_x1_0_pedestrian_attribute|92.2|3.84845|9.23735|6.7 M  |
+## [Pedestrian Attribute Module](../module_usage/tutorials/cv_modules/pedestrian_attribute_recognition_en.md)
+| Model Name | mA (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size  |YAML File|
+|-|-|-|-|-|-|
+|PP-LCNet_x1_0_pedestrian_attribute|92.2|3.84845|9.23735|6.7 M  |[PP-LCNet_x1_0_pedestrian_attribute.yaml](../../paddlex/configs/pedestrian_attribute/PP-LCNet_x1_0_pedestrian_attribute.yaml)|
 
 **Note: The above accuracy metrics are mA on PaddleX's internal self-built dataset.**
 
-## Vehicle Attribute Module
-| Model Name | mA (%) | GPU Inference Time (ms) | CPU Inference Time (ms)  | Model Size |
-|-|-|-|-|-|
-|PP-LCNet_x1_0_vehicle_attribute|91.7|3.84845|9.23735|6.7 M|
+## [Vehicle Attribute Module](../module_usage/tutorials/cv_modules/vehicle_attribute_recognition_en.md)
+| Model Name | mA (%) | GPU Inference Time (ms) | CPU Inference Time (ms)  | Model Size |YAML File|
+|-|-|-|-|-|-|
+|PP-LCNet_x1_0_vehicle_attribute|91.7|3.84845|9.23735|6.7 M|[PP-LCNet_x1_0_vehicle_attribute.yaml](../../paddlex/configs/vehicle_attribute/PP-LCNet_x1_0_vehicle_attribute.yaml)|
 
 **Note: The above accuracy metrics are mA on the VeRi dataset.**
 
-## Image Feature Module
-| Model Name | recall@1 (%) | GPU Inference Time (ms) | CPU Inference Time (ms)  | Model Size |
-|-|-|-|-|-|
-|PP-ShiTuV2_rec|84.2|5.23428|19.6005|16.3 M|
-|PP-ShiTuV2_rec_CLIP_vit_base|88.69|13.1957|285.493|306.6 M|
-|PP-ShiTuV2_rec_CLIP_vit_large|91.03|51.1284|1131.28|1.05 G|
+## [Image Feature Module](../module_usage/tutorials/cv_modules/image_feature_en.md)
+| Model Name | recall@1 (%) | GPU Inference Time (ms) | CPU Inference Time (ms)  | Model Size |YAML File|
+|-|-|-|-|-|-|
+|PP-ShiTuV2_rec|84.2|5.23428|19.6005|16.3 M|[PP-ShiTuV2_rec.yaml](../../paddlex/configs/general_recognition/PP-ShiTuV2_rec.yaml)|
+|PP-ShiTuV2_rec_CLIP_vit_base|88.69|13.1957|285.493|306.6 M|[PP-ShiTuV2_rec_CLIP_vit_base.yaml](../../paddlex/configs/general_recognition/PP-ShiTuV2_rec_CLIP_vit_base.yaml)|
+|PP-ShiTuV2_rec_CLIP_vit_large|91.03|51.1284|1131.28|1.05 G|[PP-ShiTuV2_rec_CLIP_vit_large.yaml](../../paddlex/configs/general_recognition/PP-ShiTuV2_rec_CLIP_vit_large.yaml)|
 
 **Note: The above accuracy metrics are recall@1 on AliProducts.**
 
 
-## Document Orientation Classification Module
-| Model Name | Top-1 Acc (%) | GPU Inference Time (ms) | CPU Inference Time (ms)  | Model Size |
-|-|-|-|-|-|
-|PP-LCNet_x1_0_doc_ori|99.26|3.84845|9.23735|7.1 M|
+## [Document Orientation Classification Module](../module_usage/tutorials/ocr_modules/doc_img_orientation_classification_en.md)
+| Model Name | Top-1 Acc (%) | GPU Inference Time (ms) | CPU Inference Time (ms)  | Model Size |YAML File|
+|-|-|-|-|-|-|
+|PP-LCNet_x1_0_doc_ori|99.26|3.84845|9.23735|7.1 M|[PP-LCNet_x1_0_doc_ori.yaml](../../paddlex/configs/doc_text_orientation/PP-LCNet_x1_0_doc_ori.yaml)|
 
 **Note: The above accuracy metrics are Top-1 Acc on PaddleX's internal self-built dataset.**
 
-## Main Body Detection Module
-| Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size|
-|-|-|-|-|-|
-|PP-ShiTuV2_det|41.5|33.7426|537.003|27.6 M|
+## [Main Body Detection Module](../module_usage/tutorials/cv_modules/mainbody_detection_en.md)
+| Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size|YAML File|
+|-|-|-|-|-|-|
+|PP-ShiTuV2_det|41.5|33.7426|537.003|27.6 M|[PP-ShiTuV2_det.yaml](../../paddlex/configs/mainbody_detection/PP-ShiTuV2_det.yaml)|
 
 **Note: The above accuracy metrics are mAP(0.5:0.95) on the [PaddleClas main body detection dataset](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/en/training/PP-ShiTu/mainbody_detection.md).**
 
-## Object Detection Module
-| Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms)  | Model Size |
-|-|-|-|-|-|
-|Cascade-FasterRCNN-ResNet50-FPN|41.1|-|-|245.4 M|
-|Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN|45.0|-|-|246.2 M|
-|CenterNet-DLA-34|37.6|-|-|75.4 M|
-|CenterNet-ResNet50|38.9|-|-|319.7 M|
-|DETR-R50|42.3|59.2132|5334.52|159.3 M|
-|FasterRCNN-ResNet34-FPN|37.8|-|-|137.5 M|
-|FasterRCNN-ResNet50-FPN|38.4|-|-|148.1 M|
-|FasterRCNN-ResNet50-vd-FPN|39.5|-|-|148.1 M|
-|FasterRCNN-ResNet50-vd-SSLDv2-FPN|41.4|-|-|148.1 M|
-|FasterRCNN-ResNet50|36.7|-|-|120.2 M|
-|FasterRCNN-ResNet101-FPN|41.4|-|-|216.3 M|
-|FasterRCNN-ResNet101|39.0|-|-|188.1 M|
-|FasterRCNN-ResNeXt101-vd-FPN|43.4|-|-|360.6 M|
-|FasterRCNN-Swin-Tiny-FPN|42.6|-|-|159.8 M|
-|FCOS-ResNet50|39.6|103.367|3424.91|124.2 M|
-|PicoDet-L|42.6|16.6715|169.904|20.9 M|
-|PicoDet-M|37.5|16.2311|71.7257|16.8 M|
-|PicoDet-S|29.1|14.097|37.6563|4.4 M |
-|PicoDet-XS|26.2|13.8102|48.3139|5.7M |
-|PP-YOLOE_plus-L|52.9|33.5644|814.825|185.3 M|
-|PP-YOLOE_plus-M|49.8|19.843|449.261|83.2 M|
-|PP-YOLOE_plus-S|43.7|16.8884|223.059|28.3 M|
-|PP-YOLOE_plus-X|54.7|57.8995|1439.93|349.4 M|
-|RT-DETR-H|56.3|114.814|3933.39|435.8 M|
-|RT-DETR-L|53.0|34.5252|1454.27|113.7 M|
-|RT-DETR-R18|46.5|19.89|784.824|70.7 M|
-|RT-DETR-R50|53.1|41.9327|1625.95|149.1 M|
-|RT-DETR-X|54.8|61.8042|2246.64|232.9 M|
-|YOLOv3-DarkNet53|39.1|40.1055|883.041|219.7 M|
-|YOLOv3-MobileNetV3|31.4|18.6692|267.214|83.8 M|
-|YOLOv3-ResNet50_vd_DCN|40.6|31.6276|856.047|163.0 M|
-|YOLOX-L|50.1|185.691|1250.58|192.5 M|
-|YOLOX-M|46.9|123.324|688.071|90.0 M|
-|YOLOX-N|26.1|79.1665|155.59|3.4 M|
-|YOLOX-S|40.4|184.828|474.446|32.0 M|
-|YOLOX-T|32.9|102.748|212.52|18.1 M|
-|YOLOX-X|51.8|227.361|2067.84|351.5 M|
+## [Object Detection Module](../module_usage/tutorials/cv_modules/object_detection_en.md)
+| Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms)  | Model Size |YAML File|
+|-|-|-|-|-|-|
+|Cascade-FasterRCNN-ResNet50-FPN|41.1|-|-|245.4 M|[Cascade-FasterRCNN-ResNet50-FPN.yaml](../../paddlex/configs/object_detection/Cascade-FasterRCNN-ResNet50-FPN.yaml)|
+|Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN|45.0|-|-|246.2 M|[Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN.yaml](../../paddlex/configs/object_detection/Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN.yaml)|
+|CenterNet-DLA-34|37.6|-|-|75.4 M|[CenterNet-DLA-34.yaml](../../paddlex/configs/object_detection/CenterNet-DLA-34.yaml)|
+|CenterNet-ResNet50|38.9|-|-|319.7 M|[CenterNet-ResNet50.yaml](../../paddlex/configs/object_detection/CenterNet-ResNet50.yaml)|
+|DETR-R50|42.3|59.2132|5334.52|159.3 M|[DETR-R50.yaml](../../paddlex/configs/object_detection/DETR-R50.yaml)|
+|FasterRCNN-ResNet34-FPN|37.8|-|-|137.5 M|[FasterRCNN-ResNet34-FPN.yaml](../../paddlex/configs/object_detection/FasterRCNN-ResNet34-FPN.yaml)|
+|FasterRCNN-ResNet50-FPN|38.4|-|-|148.1 M|[FasterRCNN-ResNet50-FPN.yaml](../../paddlex/configs/object_detection/FasterRCNN-ResNet50-FPN.yaml)|
+|FasterRCNN-ResNet50-vd-FPN|39.5|-|-|148.1 M|[FasterRCNN-ResNet50-vd-FPN.yaml](../../paddlex/configs/object_detection/FasterRCNN-ResNet50-vd-FPN.yaml)|
+|FasterRCNN-ResNet50-vd-SSLDv2-FPN|41.4|-|-|148.1 M|[FasterRCNN-ResNet50-vd-SSLDv2-FPN.yaml](../../paddlex/configs/object_detection/FasterRCNN-ResNet50-vd-SSLDv2-FPN.yaml)|
+|FasterRCNN-ResNet50|36.7|-|-|120.2 M|[FasterRCNN-ResNet50.yaml](../../paddlex/configs/object_detection/FasterRCNN-ResNet50.yaml)|
+|FasterRCNN-ResNet101-FPN|41.4|-|-|216.3 M|[FasterRCNN-ResNet101-FPN.yaml](../../paddlex/configs/object_detection/FasterRCNN-ResNet101-FPN.yaml)|
+|FasterRCNN-ResNet101|39.0|-|-|188.1 M|[FasterRCNN-ResNet101.yaml](../../paddlex/configs/object_detection/FasterRCNN-ResNet101.yaml)|
+|FasterRCNN-ResNeXt101-vd-FPN|43.4|-|-|360.6 M|[FasterRCNN-ResNeXt101-vd-FPN.yaml](../../paddlex/configs/object_detection/FasterRCNN-ResNeXt101-vd-FPN.yaml)|
+|FasterRCNN-Swin-Tiny-FPN|42.6|-|-|159.8 M|[FasterRCNN-Swin-Tiny-FPN.yaml](../../paddlex/configs/object_detection/FasterRCNN-Swin-Tiny-FPN.yaml)|
+|FCOS-ResNet50|39.6|103.367|3424.91|124.2 M|[FCOS-ResNet50.yaml](../../paddlex/configs/object_detection/FCOS-ResNet50.yaml)|
+|PicoDet-L|42.6|16.6715|169.904|20.9 M|[PicoDet-L.yaml](../../paddlex/configs/object_detection/PicoDet-L.yaml)|
+|PicoDet-M|37.5|16.2311|71.7257|16.8 M|[PicoDet-M.yaml](../../paddlex/configs/object_detection/PicoDet-M.yaml)|
+|PicoDet-S|29.1|14.097|37.6563|4.4 M |[PicoDet-S.yaml](../../paddlex/configs/object_detection/PicoDet-S.yaml)|
+|PicoDet-XS|26.2|13.8102|48.3139|5.7M |[PicoDet-XS.yaml](../../paddlex/configs/object_detection/PicoDet-XS.yaml)|
+|PP-YOLOE_plus-L|52.9|33.5644|814.825|185.3 M|[PP-YOLOE_plus-L.yaml](../../paddlex/configs/object_detection/PP-YOLOE_plus-L.yaml)|
+|PP-YOLOE_plus-M|49.8|19.843|449.261|83.2 M|[PP-YOLOE_plus-M.yaml](../../paddlex/configs/object_detection/PP-YOLOE_plus-M.yaml)|
+|PP-YOLOE_plus-S|43.7|16.8884|223.059|28.3 M|[PP-YOLOE_plus-S.yaml](../../paddlex/configs/object_detection/PP-YOLOE_plus-S.yaml)|
+|PP-YOLOE_plus-X|54.7|57.8995|1439.93|349.4 M|[PP-YOLOE_plus-X.yaml](../../paddlex/configs/object_detection/PP-YOLOE_plus-X.yaml)|
+|RT-DETR-H|56.3|114.814|3933.39|435.8 M|[RT-DETR-H.yaml](../../paddlex/configs/object_detection/RT-DETR-H.yaml)|
+|RT-DETR-L|53.0|34.5252|1454.27|113.7 M|[RT-DETR-L.yaml](../../paddlex/configs/object_detection/RT-DETR-L.yaml)|
+|RT-DETR-R18|46.5|19.89|784.824|70.7 M|[RT-DETR-R18.yaml](../../paddlex/configs/object_detection/RT-DETR-R18.yaml)|
+|RT-DETR-R50|53.1|41.9327|1625.95|149.1 M|[RT-DETR-R50.yaml](../../paddlex/configs/object_detection/RT-DETR-R50.yaml)|
+|RT-DETR-X|54.8|61.8042|2246.64|232.9 M|[RT-DETR-X.yaml](../../paddlex/configs/object_detection/RT-DETR-X.yaml)|
+|YOLOv3-DarkNet53|39.1|40.1055|883.041|219.7 M|[YOLOv3-DarkNet53.yaml](../../paddlex/configs/object_detection/YOLOv3-DarkNet53.yaml)|
+|YOLOv3-MobileNetV3|31.4|18.6692|267.214|83.8 M|[YOLOv3-MobileNetV3.yaml](../../paddlex/configs/object_detection/YOLOv3-MobileNetV3.yaml)|
+|YOLOv3-ResNet50_vd_DCN|40.6|31.6276|856.047|163.0 M|[YOLOv3-ResNet50_vd_DCN.yaml](../../paddlex/configs/object_detection/YOLOv3-ResNet50_vd_DCN.yaml)|
+|YOLOX-L|50.1|185.691|1250.58|192.5 M|[YOLOX-L.yaml](../../paddlex/configs/object_detection/YOLOX-L.yaml)|
+|YOLOX-M|46.9|123.324|688.071|90.0 M|[YOLOX-M.yaml](../../paddlex/configs/object_detection/YOLOX-M.yaml)|
+|YOLOX-N|26.1|79.1665|155.59|3.4 M|[YOLOX-N.yaml](../../paddlex/configs/object_detection/YOLOX-N.yaml)|
+|YOLOX-S|40.4|184.828|474.446|32.0 M|[YOLOX-S.yaml](../../paddlex/configs/object_detection/YOLOX-S.yaml)|
+|YOLOX-T|32.9|102.748|212.52|18.1 M|[YOLOX-T.yaml](../../paddlex/configs/object_detection/YOLOX-T.yaml)|
+|YOLOX-X|51.8|227.361|2067.84|351.5 M|[YOLOX-X.yaml](../../paddlex/configs/object_detection/YOLOX-X.yaml)|
 
 **Note: The above accuracy metrics are mAP(0.5:0.95) on the [COCO2017](https://cocodataset.org/#home) validation set.**
 
-## Small Object Detection Module
-| Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size  |
-|-|-|-|-|-|
-|PP-YOLOE_plus_SOD-S|25.1|65.4608|324.37|77.3 M|
-|PP-YOLOE_plus_SOD-L|31.9|57.1448|1006.98|325.0 M|
-|PP-YOLOE_plus_SOD-largesize-L|42.7|458.521|11172.7|340.5 M|
+## [Small Object Detection Module](../module_usage/tutorials/cv_modules/small_object_detection_en.md)
+| Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size  |YAML File|
+|-|-|-|-|-|-|
+|PP-YOLOE_plus_SOD-S|25.1|65.4608|324.37|77.3 M|[PP-YOLOE_plus_SOD-S.yaml](../../paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-S.yaml)|
+|PP-YOLOE_plus_SOD-L|31.9|57.1448|1006.98|325.0 M|[PP-YOLOE_plus_SOD-L.yaml](../../paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-L.yaml)|
+|PP-YOLOE_plus_SOD-largesize-L|42.7|458.521|11172.7|340.5 M|[PP-YOLOE_plus_SOD-largesize-L.yaml](../../paddlex/configs/small_object_detection/PP-YOLOE_plus_SOD-largesize-L.yaml)|
 
 **Note: The above accuracy metrics are mAP(0.5:0.95) on the [VisDrone-DET](https://github.com/VisDrone/VisDrone-Dataset) validation set.**
 
-## Pedestrian Detection Module
-| Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size |
-|-|-|-|-|-|
-|PP-YOLOE-L_human|48.0|32.7754|777.691|196.1 M|
-|PP-YOLOE-S_human|42.5|15.0118|179.317|28.8 M|
+## [Pedestrian Detection Module](../module_usage/tutorials/cv_modules/human_detection_en.md)
+| Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size |YAML File|
+|-|-|-|-|-|-|
+|PP-YOLOE-L_human|48.0|32.7754|777.691|196.1 M|[PP-YOLOE-L_human.yaml](../../paddlex/configs/human_detection/PP-YOLOE-L_human.yaml)|
+|PP-YOLOE-S_human|42.5|15.0118|179.317|28.8 M|[PP-YOLOE-S_human.yaml](../../paddlex/configs/human_detection/PP-YOLOE-S_human.yaml)|
 
 **Note: The above accuracy metrics are mAP(0.5:0.95) on the [CrowdHuman](https://bj.bcebos.com/v1/paddledet/data/crowdhuman.zip) validation set.**
 
 
-## Vehicle Detection Module
-| Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size |
-|-|-|-|-|-|
-|PP-YOLOE-L_vehicle|63.9|32.5619|775.633|196.1 M|
-|PP-YOLOE-S_vehicle|61.3|15.3787|178.441|28.8 M|
+## [Vehicle Detection Module](../module_usage/tutorials/cv_modules/vehicle_detection_en.md)
+| Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size |YAML File|
+|-|-|-|-|-|-|
+|PP-YOLOE-L_vehicle|63.9|32.5619|775.633|196.1 M|[PP-YOLOE-L_vehicle.yaml](../../paddlex/configs/vehicle_detection/PP-YOLOE-L_vehicle.yaml)|
+|PP-YOLOE-S_vehicle|61.3|15.3787|178.441|28.8 M|[PP-YOLOE-S_vehicle.yaml](../../paddlex/configs/vehicle_detection/PP-YOLOE-S_vehicle.yaml)|
 
 **Note: The above accuracy metrics are mAP(0.5:0.95) on the [PPVehicle](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/ppvehicle) validation set.**
 
-## Face Detection Module
-| Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms)  | Model Size  |
-|-|-|-|-|-|
-|PicoDet_LCNet_x2_5_face|35.8|33.7426|537.003|27.7 M|
+## [Face Detection Module](../module_usage/tutorials/cv_modules/face_detection_en.md)
+| Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms)  | Model Size  |YAML File|
+|-|-|-|-|-|-|
+|PicoDet_LCNet_x2_5_face|35.8|33.7426|537.003|27.7 M|[PicoDet_LCNet_x2_5_face.yaml](../../paddlex/configs/face_detection/PicoDet_LCNet_x2_5_face.yaml)|
 
 **Note: The above accuracy metrics are evaluated on the [wider_face](http://shuoyang1213.me/WIDERFACE/) dataset using mAP(0.5:0.95).**
 
 
-## Abnormality Detection Module
-|Model Name|Avg (%)|GPU Inference Time (ms)|CPU Inference Time (ms) |Model Size |
-|-|-|-|-|-|
-|STFPM|96.2|-|-|21.5 M|
+## [Anomaly Detection Module](../module_usage/tutorials/cv_modules/anomaly_detection_en.md)
+|Model Name|Avg (%)|GPU Inference Time (ms)|CPU Inference Time (ms) |Model Size |YAML File|
+|-|-|-|-|-|-|
+|STFPM|96.2|-|-|21.5 M|[STFPM.yaml](../../paddlex/configs/anomaly_detection/STFPM.yaml)|
 
 **Note: The above accuracy metrics are evaluated on the [MVTec AD](https://www.mvtec.com/company/research/datasets/mvtec-ad) dataset using the average anomaly score.**
 
-## Semantic Segmentation Module
-|Model Name|mIoU (%)|GPU Inference Time (ms)|CPU Inference Time (ms) |Model Size|
-|-|-|-|-|-|
-|Deeplabv3_Plus-R50 |80.36|61.0531|1513.58|94.9 M|
-|Deeplabv3_Plus-R101|81.10|100.026|2460.71|162.5 M|
-|Deeplabv3-R50|79.90|82.2631|1735.83|138.3 M|
-|Deeplabv3-R101|80.85|121.492|2685.51|205.9 M|
-|OCRNet_HRNet-W18|80.67|48.2335|906.385|43.1 M|
-|OCRNet_HRNet-W48|82.15|78.9976|2226.95|249.8 M|
-|PP-LiteSeg-T|73.10|7.6827|138.683|28.5 M|
-|PP-LiteSeg-B|75.25|-|-|47.0 M|
-|SegFormer-B0 (slice)|76.73|11.1946|268.929|13.2 M|
-|SegFormer-B1 (slice)|78.35|17.9998|403.393|48.5 M|
-|SegFormer-B2 (slice)|81.60|48.0371|1248.52|96.9 M|
-|SegFormer-B3 (slice)|82.47|64.341|1666.35|167.3 M|
-|SegFormer-B4 (slice)|82.38|82.4336|1995.42|226.7 M|
-|SegFormer-B5 (slice)|82.58|97.3717|2420.19|229.7 M|
+## [Semantic Segmentation Module](../module_usage/tutorials/cv_modules/semantic_segmentation_en.md)
+|Model Name|mIoU (%)|GPU Inference Time (ms)|CPU Inference Time (ms) |Model Size|YAML File|
+|-|-|-|-|-|-|
+|Deeplabv3_Plus-R50 |80.36|61.0531|1513.58|94.9 M|[Deeplabv3_Plus-R50.yaml](../../paddlex/configs/semantic_segmentation/Deeplabv3_Plus-R50.yaml)|
+|Deeplabv3_Plus-R101|81.10|100.026|2460.71|162.5 M|[Deeplabv3_Plus-R101.yaml](../../paddlex/configs/semantic_segmentation/Deeplabv3_Plus-R101.yaml)|
+|Deeplabv3-R50|79.90|82.2631|1735.83|138.3 M|[Deeplabv3-R50.yaml](../../paddlex/configs/semantic_segmentation/Deeplabv3-R50.yaml)|
+|Deeplabv3-R101|80.85|121.492|2685.51|205.9 M|[Deeplabv3-R101.yaml](../../paddlex/configs/semantic_segmentation/Deeplabv3-R101.yaml)|
+|OCRNet_HRNet-W18|80.67|48.2335|906.385|43.1 M|[OCRNet_HRNet-W18.yaml](../../paddlex/configs/semantic_segmentation/OCRNet_HRNet-W18.yaml)|
+|OCRNet_HRNet-W48|82.15|78.9976|2226.95|249.8 M|[OCRNet_HRNet-W48.yaml](../../paddlex/configs/semantic_segmentation/OCRNet_HRNet-W48.yaml)|
+|PP-LiteSeg-T|73.10|7.6827|138.683|28.5 M|[PP-LiteSeg-T.yaml](../../paddlex/configs/semantic_segmentation/PP-LiteSeg-T.yaml)|
+|PP-LiteSeg-B|75.25|-|-|47.0 M|[PP-LiteSeg-B.yaml](../../paddlex/configs/semantic_segmentation/PP-LiteSeg-B.yaml)|
+|SegFormer-B0 (slice)|76.73|11.1946|268.929|13.2 M|[SegFormer-B0.yaml](../../paddlex/configs/semantic_segmentation/SegFormer-B0.yaml)|
+|SegFormer-B1 (slice)|78.35|17.9998|403.393|48.5 M|[SegFormer-B1.yaml](../../paddlex/configs/semantic_segmentation/SegFormer-B1.yaml)|
+|SegFormer-B2 (slice)|81.60|48.0371|1248.52|96.9 M|[SegFormer-B2.yaml](../../paddlex/configs/semantic_segmentation/SegFormer-B2.yaml)|
+|SegFormer-B3 (slice)|82.47|64.341|1666.35|167.3 M|[SegFormer-B3.yaml](../../paddlex/configs/semantic_segmentation/SegFormer-B3.yaml)|
+|SegFormer-B4 (slice)|82.38|82.4336|1995.42|226.7 M|[SegFormer-B4.yaml](../../paddlex/configs/semantic_segmentation/SegFormer-B4.yaml)|
+|SegFormer-B5 (slice)|82.58|97.3717|2420.19|229.7 M|[SegFormer-B5.yaml](../../paddlex/configs/semantic_segmentation/SegFormer-B5.yaml)|
 
 **Note: The above accuracy metrics are evaluated on the [Cityscapes](https://www.cityscapes-dataset.com/) dataset using mIoU.**
 
-|Model Name|mIoU (%)|GPU Inference Time (ms)|CPU Inference Time (ms) |Model Size|
-|-|-|-|-|-|
-|SeaFormer_base(slice)|40.92|24.4073|397.574|30.8 M|
-|SeaFormer_large (slice)|43.66|27.8123|550.464|49.8 M|
-|SeaFormer_small (slice)|38.73|19.2295|358.343|14.3 M|
-|SeaFormer_tiny (slice)|34.58|13.9496|330.132|6.1 M |
+|Model Name|mIoU (%)|GPU Inference Time (ms)|CPU Inference Time (ms) |Model Size|YAML File|
+|-|-|-|-|-|-|
+|SeaFormer_base (slice)|40.92|24.4073|397.574|30.8 M|[SeaFormer_base.yaml](../../paddlex/configs/semantic_segmentation/SeaFormer_base.yaml)|
+|SeaFormer_large (slice)|43.66|27.8123|550.464|49.8 M|[SeaFormer_large.yaml](../../paddlex/configs/semantic_segmentation/SeaFormer_large.yaml)|
+|SeaFormer_small (slice)|38.73|19.2295|358.343|14.3 M|[SeaFormer_small.yaml](../../paddlex/configs/semantic_segmentation/SeaFormer_small.yaml)|
+|SeaFormer_tiny (slice)|34.58|13.9496|330.132|6.1 M |[SeaFormer_tiny.yaml](../../paddlex/configs/semantic_segmentation/SeaFormer_tiny.yaml)|
 
 **Note: The above accuracy metrics are evaluated on the [ADE20k](https://groups.csail.mit.edu/vision/datasets/ADE20K/) dataset. "slice" indicates that the input image has been cropped.**
 
-## Instance Segmentation Module
-|Model Name|Mask AP|GPU Inference Time (ms)|CPU Inference Time (ms) |Model Size |
-|-|-|-|-|-|
-|Mask-RT-DETR-H|50.6|132.693|4896.17|449.9 M|
-|Mask-RT-DETR-L|45.7|46.5059|2575.92|113.6 M|
-|Mask-RT-DETR-M|42.7|36.8329|-|66.6 M|
-|Mask-RT-DETR-S|41.0|33.5007|-|51.8 M|
-|Mask-RT-DETR-X|47.5|75.755|3358.04|237.5 M|
-|Cascade-MaskRCNN-ResNet50-FPN|36.3|-|-|254.8 M|
-|Cascade-MaskRCNN-ResNet50-vd-SSLDv2-FPN|39.1|-|-|254.7 M|
-|MaskRCNN-ResNet50-FPN|35.6|-|-|157.5 M|
-|MaskRCNN-ResNet50-vd-FPN|36.4|-|-|157.5 M|
-|MaskRCNN-ResNet50-vd-SSLDv2-FPN|38.2|-|-|157.2 M|
-|MaskRCNN-ResNet50|32.8|-|-|127.8 M|
-|MaskRCNN-ResNet101-FPN|36.6|-|-|225.4 M|
-|MaskRCNN-ResNet101-vd-FPN|38.1|-|-|225.1 M|
-|MaskRCNN-ResNeXt101-vd-FPN|39.5|-|-|370.0 M|
-|PP-YOLOE_seg-S|32.5|-|-|31.5 M|
+## [Instance Segmentation Module](../module_usage/tutorials/cv_modules/instance_segmentation_en.md)
+|Model Name|Mask AP|GPU Inference Time (ms)|CPU Inference Time (ms) |Model Size |YAML File|
+|-|-|-|-|-|-|
+|Mask-RT-DETR-H|50.6|132.693|4896.17|449.9 M|[Mask-RT-DETR-H.yaml](../../paddlex/configs/instance_segmentation/Mask-RT-DETR-H.yaml)|
+|Mask-RT-DETR-L|45.7|46.5059|2575.92|113.6 M|[Mask-RT-DETR-L.yaml](../../paddlex/configs/instance_segmentation/Mask-RT-DETR-L.yaml)|
+|Mask-RT-DETR-M|42.7|36.8329|-|66.6 M|[Mask-RT-DETR-M.yaml](../../paddlex/configs/instance_segmentation/Mask-RT-DETR-M.yaml)|
+|Mask-RT-DETR-S|41.0|33.5007|-|51.8 M|[Mask-RT-DETR-S.yaml](../../paddlex/configs/instance_segmentation/Mask-RT-DETR-S.yaml)|
+|Mask-RT-DETR-X|47.5|75.755|3358.04|237.5 M|[Mask-RT-DETR-X.yaml](../../paddlex/configs/instance_segmentation/Mask-RT-DETR-X.yaml)|
+|Cascade-MaskRCNN-ResNet50-FPN|36.3|-|-|254.8 M|[Cascade-MaskRCNN-ResNet50-FPN.yaml](../../paddlex/configs/instance_segmentation/Cascade-MaskRCNN-ResNet50-FPN.yaml)|
+|Cascade-MaskRCNN-ResNet50-vd-SSLDv2-FPN|39.1|-|-|254.7 M|[Cascade-MaskRCNN-ResNet50-vd-SSLDv2-FPN.yaml](../../paddlex/configs/instance_segmentation/Cascade-MaskRCNN-ResNet50-vd-SSLDv2-FPN.yaml)|
+|MaskRCNN-ResNet50-FPN|35.6|-|-|157.5 M|[MaskRCNN-ResNet50-FPN.yaml](../../paddlex/configs/instance_segmentation/MaskRCNN-ResNet50-FPN.yaml)|
+|MaskRCNN-ResNet50-vd-FPN|36.4|-|-|157.5 M|[MaskRCNN-ResNet50-vd-FPN.yaml](../../paddlex/configs/instance_segmentation/MaskRCNN-ResNet50-vd-FPN.yaml)|
+|MaskRCNN-ResNet50-vd-SSLDv2-FPN|38.2|-|-|157.2 M|[MaskRCNN-ResNet50-vd-SSLDv2-FPN.yaml](../../paddlex/configs/instance_segmentation/MaskRCNN-ResNet50-vd-SSLDv2-FPN.yaml)|
+|MaskRCNN-ResNet50|32.8|-|-|127.8 M|[MaskRCNN-ResNet50.yaml](../../paddlex/configs/instance_segmentation/MaskRCNN-ResNet50.yaml)|
+|MaskRCNN-ResNet101-FPN|36.6|-|-|225.4 M|[MaskRCNN-ResNet101-FPN.yaml](../../paddlex/configs/instance_segmentation/MaskRCNN-ResNet101-FPN.yaml)|
+|MaskRCNN-ResNet101-vd-FPN|38.1|-|-|225.1 M|[MaskRCNN-ResNet101-vd-FPN.yaml](../../paddlex/configs/instance_segmentation/MaskRCNN-ResNet101-vd-FPN.yaml)|
+|MaskRCNN-ResNeXt101-vd-FPN|39.5|-|-|370.0 M|[MaskRCNN-ResNeXt101-vd-FPN.yaml](../../paddlex/configs/instance_segmentation/MaskRCNN-ResNeXt101-vd-FPN.yaml)|
+|PP-YOLOE_seg-S|32.5|-|-|31.5 M|[PP-YOLOE_seg-S.yaml](../../paddlex/configs/instance_segmentation/PP-YOLOE_seg-S.yaml)|
+|SOLOv2|35.5|-|-|179.1 M|[SOLOv2.yaml](../../paddlex/configs/instance_segmentation/SOLOv2.yaml)|
 
 **Note: The above accuracy metrics are evaluated on the **[COCO2017](https://cocodataset.org/#home)** validation set using Mask AP(0.5:0.95).**
 
-## Text Detection Module
-|Model Name|Detection Hmean (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size|
-|-|-|-|-|-|
-|PP-OCRv4_mobile_det |77.79|10.6923|120.177|4.2 M|
-|PP-OCRv4_server_det |82.69|83.3501|2434.01|100.1M|
+## [Text Detection Module](../module_usage/tutorials/ocr_modules/text_detection_en.md)
+|Model Name|Detection Hmean (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size|YAML File|
+|-|-|-|-|-|-|
+|PP-OCRv4_mobile_det |77.79|10.6923|120.177|4.2 M|[PP-OCRv4_mobile_det.yaml](../../paddlex/configs/text_detection/PP-OCRv4_mobile_det.yaml)|
+|PP-OCRv4_server_det |82.69|83.3501|2434.01|100.1M|[PP-OCRv4_server_det.yaml](../../paddlex/configs/text_detection/PP-OCRv4_server_det.yaml)|
 
 **Note: The above accuracy metrics are evaluated on a self-built Chinese dataset by PaddleOCR, covering street scenes, web images, documents, and handwritten texts, with 500 images for detection.**
 
-## Seal Text Detection Module
-|Model Name|Detection Hmean (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size |
-|-|-|-|-|-|
-|PP-OCRv4_mobile_seal_det|96.47|10.5878|131.813|4.7 M |
-|PP-OCRv4_server_seal_det|98.21|84.341|2425.06|108.3 M|
+## [Seal Text Detection Module](../module_usage/tutorials/ocr_modules/seal_text_detection_en.md)
+|Model Name|Detection Hmean (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size |YAML File|
+|-|-|-|-|-|-|
+|PP-OCRv4_mobile_seal_det|96.47|10.5878|131.813|4.7 M |[PP-OCRv4_mobile_seal_det.yaml](../../paddlex/configs/text_detection_seal/PP-OCRv4_mobile_seal_det.yaml)|
+|PP-OCRv4_server_seal_det|98.21|84.341|2425.06|108.3 M|[PP-OCRv4_server_seal_det.yaml](../../paddlex/configs/text_detection_seal/PP-OCRv4_server_seal_det.yaml)|
 
 **Note: The above accuracy metrics are evaluated on a self-built seal dataset by PaddleX, containing 500 seal images.**
 
-## Text Recognition Module
-|Model Name|Recognition Avg Accuracy (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size |
-|-|-|-|-|-|
-|PP-OCRv4_mobile_rec |78.20|7.95018|46.7868|10.6 M|
-|PP-OCRv4_server_rec |79.20|7.19439|140.179|71.2 M|
+## [Text Recognition Module](../module_usage/tutorials/ocr_modules/text_recognition_en.md)
+|Model Name|Recognition Avg Accuracy (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size |YAML File|
+|-|-|-|-|-|-|
+|PP-OCRv4_mobile_rec |78.20|7.95018|46.7868|10.6 M|[PP-OCRv4_mobile_rec.yaml](../../paddlex/configs/text_recognition/PP-OCRv4_mobile_rec.yaml)|
+|PP-OCRv4_server_rec |79.20|7.19439|140.179|71.2 M|[PP-OCRv4_server_rec.yaml](../../paddlex/configs/text_recognition/PP-OCRv4_server_rec.yaml)|
 
 **Note: The above accuracy metrics are evaluated on a self-built Chinese dataset by PaddleOCR, covering street scenes, web images, documents, and handwritten texts, with 11,000 images for text recognition.**
 
-|Model Name|Recognition Avg Accuracy (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size |
-|-|-|-|-|-|
-|ch_SVTRv2_rec|68.81|8.36801|165.706|73.9 M|
+|Model Name|Recognition Avg Accuracy (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size |YAML File|
+|-|-|-|-|-|-|
+|ch_SVTRv2_rec|68.81|8.36801|165.706|73.9 M|[ch_SVTRv2_rec.yaml](../../paddlex/configs/text_recognition/ch_SVTRv2_rec.yaml)|
 
 **Note: The above accuracy metrics are evaluated on [PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition](https://aistudio.baidu.com/competition/detail/1131/0/introduction) A-Rank.**
 
-|Model Name|Recognition Avg Accuracy (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size|
-|-|-|-|-|-|
-|ch_RepSVTR_rec|65.07|10.5047|51.5647|22.1 M|
+|Model Name|Recognition Avg Accuracy (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size|YAML File|
+|-|-|-|-|-|-|
+|ch_RepSVTR_rec|65.07|10.5047|51.5647|22.1 M|[ch_RepSVTR_rec.yaml](../../paddlex/configs/text_recognition/ch_RepSVTR_rec.yaml)|
 
 **Note: The above accuracy metrics are evaluated on [PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition](https://aistudio.baidu.com/competition/detail/1131/0/introduction) B-Rank.**
 
-## Formula Recognition Module
-|Model Name|BLEU Score|Normed Edit Distance|ExpRate (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size|
-|-|-|-|-|-|-|-|
-|LaTeX_OCR_rec|0.8821|0.0823|40.01|-|-|89.7 M|
+## [Formula Recognition Module](../module_usage/tutorials/ocr_modules/formula_recognition_en.md)
+|Model Name|BLEU Score|Normed Edit Distance|ExpRate (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size|YAML File|
+|-|-|-|-|-|-|-|-|
+|LaTeX_OCR_rec|0.8821|0.0823|40.01|-|-|89.7 M|[LaTeX_OCR_rec.yaml](../../paddlex/configs/formula_recognition/LaTeX_OCR_rec.yaml)|
 
 **Note: The above accuracy metrics are measured on the [LaTeX-OCR formula recognition test set](https://drive.google.com/drive/folders/13CA4vAmOmD_I_dSbvLp-Lf0s6KiaNfuO).**
 
-## Table Structure Recognition Module
-|Model Name|Accuracy (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size |
-|-|-|-|-|-|
-|SLANet|59.52|522.536|1845.37|6.9 M |
-|SLANet_plus|63.69|522.536|1845.37|6.9 M |
- 
+## [Table Structure Recognition Module](../module_usage/tutorials/ocr_modules/table_structure_recognition_en.md)
+|Model Name|Accuracy (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size |YAML File|
+|-|-|-|-|-|-|
+|SLANet|59.52|522.536|1845.37|6.9 M |[SLANet.yaml](../../paddlex/configs/table_recognition/SLANet.yaml)|
+|SLANet_plus|63.69|522.536|1845.37|6.9 M |[SLANet_plus.yaml](../../paddlex/configs/table_recognition/SLANet_plus.yaml)|
+
 **Note: The above accuracy metrics are evaluated on a self-built English table recognition dataset by PaddleX.**
 
-## Image Rectification Module
-|Model Name|MS-SSIM (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size|
-|-|-|-|-|-|
-|UVDoc|54.40|-|-|30.3 M|
+## [Image Rectification Module](../module_usage/tutorials/ocr_modules/text_image_unwarping_en.md)
+|Model Name|MS-SSIM (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size|YAML File|
+|-|-|-|-|-|-|
+|UVDoc|54.40|-|-|30.3 M|[UVDoc.yaml](../../paddlex/configs/image_unwarping/UVDoc.yaml)|
 
 
 **Note: The above accuracy metrics are measured on a self-built image rectification dataset by PaddleX.**
 
-## Layout Analysis Module
-|Model Name|mAP (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size|
-|-|-|-|-|-|
-|PicoDet_layout_1x|86.8|13.036|91.2634|7.4 M |
-|PicoDet-L_layout_3cls|89.3|15.7425|159.771|22.6 M|
-|RT-DETR-H_layout_3cls|95.9|114.644|3832.62|470.1 M|
-|RT-DETR-H_layout_17cls|92.6|115.126|3827.25|470.2 M|
+## [Layout Detection Module](../module_usage/tutorials/ocr_modules/layout_detection_en.md)
+|Model Name|mAP (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size|YAML File|
+|-|-|-|-|-|-|
+|PicoDet_layout_1x|86.8|13.036|91.2634|7.4 M |[PicoDet_layout_1x.yaml](../../paddlex/configs/structure_analysis/PicoDet_layout_1x.yaml)|
+|PicoDet-L_layout_3cls|89.3|15.7425|159.771|22.6 M|[PicoDet-L_layout_3cls.yaml](../../paddlex/configs/structure_analysis/PicoDet-L_layout_3cls.yaml)|
+|RT-DETR-H_layout_3cls|95.9|114.644|3832.62|470.1 M|[RT-DETR-H_layout_3cls.yaml](../../paddlex/configs/structure_analysis/RT-DETR-H_layout_3cls.yaml)|
+|RT-DETR-H_layout_17cls|92.6|115.126|3827.25|470.2 M|[RT-DETR-H_layout_17cls.yaml](../../paddlex/configs/structure_analysis/RT-DETR-H_layout_17cls.yaml)|
 
-**Note: The evaluation set for the above accuracy metrics is the PaddleX self-built Layout Analysis Dataset, containing 10,000 images.**
+**Note: The evaluation set for the above accuracy metrics is the PaddleX self-built Layout Detection Dataset, containing 10,000 images.**
 
-## Time Series Forecasting Module
-|Model Name|mse|mae|Model Size|
-|-|-|-|-|
-|DLinear|0.382|0.394|72 K|
-|NLinear|0.386|0.392|40 K |
-|Nonstationary|0.600|0.515|55.5 M|
-|PatchTST|0.385|0.397|2.0 M |
-|RLinear|0.384|0.392|40 K|
-|TiDE|0.405|0.412|31.7 M|
-|TimesNet|0.417|0.431|4.9 M|
+## [Time Series Forecasting Module](../module_usage/tutorials/ts_modules/time_series_forecast_en.md)
+|Model Name|mse|mae|Model Size|YAML File|
+|-|-|-|-|-|
+|DLinear|0.382|0.394|72 K|[DLinear.yaml](../../paddlex/configs/ts_forecast/DLinear.yaml)|
+|NLinear|0.386|0.392|40 K |[NLinear.yaml](../../paddlex/configs/ts_forecast/NLinear.yaml)|
+|Nonstationary|0.600|0.515|55.5 M|[Nonstationary.yaml](../../paddlex/configs/ts_forecast/Nonstationary.yaml)|
+|PatchTST|0.385|0.397|2.0 M |[PatchTST.yaml](../../paddlex/configs/ts_forecast/PatchTST.yaml)|
+|RLinear|0.384|0.392|40 K|[RLinear.yaml](../../paddlex/configs/ts_forecast/RLinear.yaml)|
+|TiDE|0.405|0.412|31.7 M|[TiDE.yaml](../../paddlex/configs/ts_forecast/TiDE.yaml)|
+|TimesNet|0.417|0.431|4.9 M|[TimesNet.yaml](../../paddlex/configs/ts_forecast/TimesNet.yaml)|
 
 **Note: The above accuracy metrics are measured on the [ETTH1](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/Etth1.tar) dataset (evaluation results on the test set test.csv).**
 
-## Time Series Anomaly Detection Module
-|Model Name|Precision|Recall|f1_score|Model Size|
-|-|-|-|-|-|
-|AutoEncoder_ad|99.36|84.36|91.25|52 K |
-|DLinear_ad|98.98|93.96|96.41|112 K|
-|Nonstationary_ad|98.55|88.95|93.51|1.8 M |
-|PatchTST_ad|98.78|90.70|94.57|320 K |
-|TimesNet_ad|98.37|94.80|96.56|1.3 M |
+## [Time Series Anomaly Detection Module](../module_usage/tutorials/ts_modules/time_series_anomaly_detection_en.md)
+|Model Name|Precision|Recall|f1_score|Model Size|YAML File|
+|-|-|-|-|-|-|
+|AutoEncoder_ad|99.36|84.36|91.25|52 K |[AutoEncoder_ad.yaml](../../paddlex/configs/ts_anomaly_detection/AutoEncoder_ad.yaml)|
+|DLinear_ad|98.98|93.96|96.41|112 K|[DLinear_ad.yaml](../../paddlex/configs/ts_anomaly_detection/DLinear_ad.yaml)|
+|Nonstationary_ad|98.55|88.95|93.51|1.8 M |[Nonstationary_ad.yaml](../../paddlex/configs/ts_anomaly_detection/Nonstationary_ad.yaml)|
+|PatchTST_ad|98.78|90.70|94.57|320 K |[PatchTST_ad.yaml](../../paddlex/configs/ts_anomaly_detection/PatchTST_ad.yaml)|
+|TimesNet_ad|98.37|94.80|96.56|1.3 M |[TimesNet_ad.yaml](../../paddlex/configs/ts_anomaly_detection/TimesNet_ad.yaml)|
 
 **Note: The above accuracy metrics are measured on the **[PSM](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/ts_anomaly_examples.tar)** dataset.**
 
-## Time Series Classification Module
-|Model Name|acc (%)|Model Size|
-|-|-|-|
-|TimesNet_cls|87.5|792 K|
+## [Time Series Classification Module](../module_usage/tutorials/ts_modules/time_series_classification_en.md)
+|Model Name|acc (%)|Model Size|YAML File|
+|-|-|-|-|
+|TimesNet_cls|87.5|792 K|[TimesNet_cls.yaml](../../paddlex/configs/ts_classification/TimesNet_cls.yaml)|
 
 **Note: The above accuracy metrics are measured on the [UWaveGestureLibrary](https://paddlets.bj.bcebos.com/classification/UWaveGestureLibrary_TEST.csv) dataset.**
 
->**Note: All GPU inference times for the above models are based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speeds are based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
+>**Note: All GPU inference times for the above models are based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speeds are based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**

+ 3 - 0
paddlex/inference/components/llm/__init__.py

@@ -16,6 +16,9 @@ from .erniebot import ErnieBot
 
 
 def create_llm_api(model_name: str, params={}) -> BaseLLM:
+    # for CI
+    if model_name == "paddlex_ci":
+        return
     return BaseLLM.get(model_name)(
         model_name=model_name,
         params=params,
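
The `paddlex_ci` short-circuit lets CI exercise pipeline construction without a real LLM backend; every other name is resolved through the `BaseLLM` registry. A minimal sketch of both paths (the model name and params below are assumptions for illustration, not values taken from this diff):

```python
# Normal path: resolved via BaseLLM.get(model_name) and instantiated.
llm = create_llm_api("ernie-3.5", params={"api_type": "qianfan"})  # assumed name/params

# CI path: the placeholder model name yields None instead of a client.
assert create_llm_api("paddlex_ci") is None
```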

+ 4 - 3
paddlex/inference/components/paddle_predictor/predictor.py

@@ -37,7 +37,7 @@ class BasePaddlePredictor(BaseComponent, PPEngineMixin):
         self.model_prefix = model_prefix
         self._is_initialized = False
 
-    def _reset(self):
+    def reset(self):
         if not self.option:
             self.option = PaddlePredictorOption()
         (
@@ -62,7 +62,7 @@ class BasePaddlePredictor(BaseComponent, PPEngineMixin):
         params_file = (self.model_dir / f"{self.model_prefix}.pdiparams").as_posix()
         config = Config(model_file, params_file)
 
-        if self.option.device == "gpu":
+        if self.option.device in ("gpu", "dcu"):
             config.enable_use_gpu(200, self.option.device_id)
             if paddle.is_compiled_with_rocm():
                 os.environ["FLAGS_conv_workspace_size_limit"] = "2000"
@@ -164,7 +164,7 @@ No need to generate again."
 
     def apply(self, **kwargs):
         if not self._is_initialized:
-            self._reset()
+            self.reset()
 
         x = self.to_batch(**kwargs)
         for idx in range(len(x)):
@@ -196,6 +196,7 @@ class ImagePredictor(BasePaddlePredictor):
 
 
 class ImageDetPredictor(BasePaddlePredictor):
+
     INPUT_KEYS = [["img", "scale_factors"], ["img", "scale_factors", "img_size"]]
     OUTPUT_KEYS = [["boxes"], ["boxes", "masks"]]
     DEAULT_INPUTS = {"img": "img", "scale_factors": "scale_factors"}
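
Two changes land in this file: `_reset` becomes the public `reset`, so callers such as `set_predictor` (next file) can force an engine rebuild, and `dcu` devices now take the same `enable_use_gpu` path as `gpu`, with the ROCm workspace flag applied when Paddle is compiled for ROCm. Distilled into a runnable sketch:

```python
# Distilled from the hunk above: which devices reach the GPU/ROCm branch.
def uses_gpu_engine(device: str) -> bool:
    return device in ("gpu", "dcu")  # "dcu" is newly accepted here

assert uses_gpu_engine("dcu") and not uses_gpu_engine("cpu")
```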

+ 4 - 2
paddlex/inference/models/base/basic_predictor.py

@@ -72,10 +72,12 @@ class BasicPredictor(
     def set_predictor(self, batch_size=None, device=None, pp_option=None):
         if batch_size:
             self.components["ReadCmp"].batch_size = batch_size
-        if device:
+        if device and device != self.pp_option.device:
             self.pp_option.device = device
-        if pp_option:
+            self.components["PPEngineCmp"].reset()
+        if pp_option and pp_option != self.pp_option:
             self.pp_option = pp_option
+            self.components["PPEngineCmp"].reset()
 
     def _has_setter(self, attr):
         prop = getattr(self.__class__, attr, None)
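
With these guards, `set_predictor` only touches the engine when the device or options actually change, and it now resets the `PPEngineCmp` component explicitly so the next call rebuilds the Paddle engine on the new device. A usage sketch, assuming the `create_predictor` helper referenced in `pipelines/base.py` below (import path, model name, and input are placeholders):

```python
from paddlex.inference import create_predictor  # assumed import path

predictor = create_predictor(model="PP-OCRv4_mobile_det")
list(predictor("sample.png"))            # engine is built lazily on first use
predictor.set_predictor(device="gpu:1")  # differs from the current device -> reset()
list(predictor("sample.png"))            # rebuilt on the new device
```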

+ 6 - 2
paddlex/inference/models/text_detection.py

@@ -53,8 +53,12 @@ class TextDetPredictor(BasicPredictor):
         return ReadImage(format=img_mode)
 
     @register("DetResizeForTest")
-    def build_resize(self, resize_long=960):
-        return DetResizeForTest(limit_side_len=resize_long, limit_type="max")
+    def build_resize(self, **kwargs):
+        # TODO: align to PaddleOCR
+        if self.model_name in ("PP-OCRv4_server_det", "PP-OCRv4_mobile_det"):
+            resize_long = kwargs.get("resize_long", 960)
+            return DetResizeForTest(limit_side_len=resize_long, limit_type="max")
+        return DetResizeForTest(**kwargs)
 
     @register("NormalizeImage")
     def build_normalize(
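
The net effect: the PP-OCRv4 text-detection models keep the legacy behavior, with `resize_long` collapsed into `limit_side_len` and `limit_type="max"`, while every other detection model (e.g. the seal-text detectors) forwards its config parameters to `DetResizeForTest` unchanged. A sketch of the two paths, with illustrative values only:

```python
# PP-OCRv4_server_det / PP-OCRv4_mobile_det: kwargs reduced to the legacy form.
op = DetResizeForTest(limit_side_len=960, limit_type="max")

# Any other det model: the yaml's kwargs pass through untouched, e.g.
op = DetResizeForTest(limit_side_len=736, limit_type="min")  # hypothetical values
```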

+ 14 - 4
paddlex/inference/pipelines/base.py

@@ -24,9 +24,10 @@ class BasePipeline(ABC, metaclass=AutoRegisterABCMetaClass):
 
     __is_base = True
 
-    def __init__(self, predictor_kwargs) -> None:
+    def __init__(self, device, predictor_kwargs={}) -> None:
         super().__init__()
-        self._predictor_kwargs = {} if predictor_kwargs is None else predictor_kwargs
+        self._predictor_kwargs = predictor_kwargs
+        self._device = device
 
     @abstractmethod
     def set_predictor():
@@ -41,9 +42,18 @@ class BasePipeline(ABC, metaclass=AutoRegisterABCMetaClass):
     def _create(self, model=None, pipeline=None, *args, **kwargs):
         if model:
             return create_predictor(
-                model=model, *args, **kwargs, **self._predictor_kwargs
+                *args,
+                model=model,
+                device=self._device,
+                **kwargs,
+                **self._predictor_kwargs
             )
         elif pipeline:
-            return pipeline(*args, **kwargs, predictor_kwargs=self._predictor_kwargs)
+            return pipeline(
+                *args,
+                device=self._device,
+                predictor_kwargs=self._predictor_kwargs,
+                **kwargs
+            )
         else:
             raise Exception()
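
Reviewer note: `BasePipeline` now carries the target device itself and injects it into every model or sub-pipeline built through `_create`, so concrete pipelines stop threading `device` through `set_predictor` (the diffs that follow are that mechanical cleanup). A minimal subclass sketch, all names illustrative; passing `predictor_kwargs or {}` also avoids sharing the mutable `{}` default from the base signature:

class MyPipeline(BasePipeline):                      # hypothetical subclass
    def __init__(self, model, device=None, predictor_kwargs=None):
        super().__init__(device, predictor_kwargs or {})
        self.model = self._create(model)             # device is injected by _create

    def set_predictor(self, batch_size=None):
        if batch_size:
            self.model.set_predictor(batch_size=batch_size)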

+ 1 - 2
paddlex/inference/pipelines/formula_recognition.py

@@ -33,12 +33,11 @@ class FormulaRecognitionPipeline(BasePipeline):
         device=None,
         predictor_kwargs=None,
     ):
-        super().__init__(predictor_kwargs=predictor_kwargs)
+        super().__init__(device, predictor_kwargs)
         self._build_predictor(layout_model, formula_rec_model)
         self.set_predictor(
             layout_batch_size=layout_batch_size,
             formula_rec_batch_size=formula_rec_batch_size,
-            device=device,
         )
 
     def _build_predictor(self, layout_model, formula_rec_model):

+ 1 - 2
paddlex/inference/pipelines/ocr.py

@@ -32,12 +32,11 @@ class OCRPipeline(BasePipeline):
         device=None,
         predictor_kwargs=None,
     ):
-        super().__init__(predictor_kwargs=predictor_kwargs)
+        super().__init__(device, predictor_kwargs)
         self._build_predictor(text_det_model, text_rec_model)
         self.set_predictor(
             text_det_batch_size=text_det_batch_size,
             text_rec_batch_size=text_rec_batch_size,
-            device=device,
         )
 
     def _build_predictor(self, text_det_model, text_rec_model):
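
Reviewer note: from the caller's side nothing changes; `device` is simply consumed one level higher. A construction sketch (model names are assumptions):

pipeline = OCRPipeline(
    text_det_model="PP-OCRv4_mobile_det",
    text_rec_model="PP-OCRv4_mobile_rec",
    device="gpu:0",   # handled by BasePipeline, applied when predictors are created
)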

+ 28 - 64
paddlex/inference/pipelines/ppchatocrv3/ppchatocrv3.py

@@ -62,9 +62,7 @@ class PPChatOCRPipeline(_TableRecPipeline):
         predictor_kwargs=None,
         _build_models=True,
     ):
-        super().__init__(
-            predictor_kwargs=predictor_kwargs,
-        )
+        super().__init__(device, predictor_kwargs)
         if _build_models:
             self._build_predictor(
                 layout_model=layout_model,
@@ -85,7 +83,6 @@ class PPChatOCRPipeline(_TableRecPipeline):
                 doc_image_ori_cls_batch_size=doc_image_ori_cls_batch_size,
                 doc_image_unwarp_batch_size=doc_image_unwarp_batch_size,
                 seal_text_det_batch_size=seal_text_det_batch_size,
-                device=device,
             )
 
         # get base prompt from yaml info
@@ -424,16 +421,9 @@ class PPChatOCRPipeline(_TableRecPipeline):
         if not any([visual_info, self.visual_info]):
             return VectorResult({"vector": None})
 
-        if visual_info:
-            # use for serving or local
-            _visual_info = visual_info
-        else:
-            # use for local
-            _visual_info = self.visual_info
-
-        ocr_text = _visual_info["ocr_text"]
-        html_list = _visual_info["table_html"]
-        table_text_list = _visual_info["table_text"]
+        ocr_text = visual_info["ocr_text"]
+        html_list = visual_info["table_html"]
+        table_text_list = visual_info["table_text"]
 
         # add table text to ocr text
         for html, table_text_rec in zip(html_list, table_text_list):
@@ -459,36 +449,16 @@ class PPChatOCRPipeline(_TableRecPipeline):
     def retrieval(
         self,
         key_list,
-        visual_info=None,
-        vector=None,
+        vector,
         llm_name=None,
         llm_params={},
         llm_request_interval=0.1,
     ):
-
-        if not any([visual_info, vector, self.visual_info, self.vector]):
-            return RetrievalResult({"retrieval": None})
-
+        assert "vector" in vector
         key_list = format_key(key_list)
 
-        is_seving = visual_info and llm_name
-
-        if self.visual_flag and not is_seving:
-            self.vector = self.build_vector()
-
-        if not any([vector, self.vector]):
-            logging.warning(
-                "The vector library is not created, and is being created automatically"
-            )
-            if is_seving:
-                # for serving
-                vector = self.build_vector(
-                    llm_name=llm_name, llm_params=llm_params, visual_info=visual_info
-                )
-            else:
-                self.vector = self.build_vector()
-
-        if vector and llm_name:
+        # for serving
+        if llm_name:
             _vector = vector["vector"]
             llm_api = create_llm_api(llm_name, llm_params)
             retrieval = llm_api.caculate_similar(
@@ -498,7 +468,7 @@ class PPChatOCRPipeline(_TableRecPipeline):
                 sleep_time=llm_request_interval,
             )
         else:
-            _vector = self.vector["vector"]
+            _vector = vector["vector"]
             retrieval = self.llm_api.caculate_similar(
                 vector=_vector, key_list=key_list, sleep_time=llm_request_interval
             )
@@ -514,33 +484,24 @@ class PPChatOCRPipeline(_TableRecPipeline):
         user_task_description="",
         rules="",
         few_shot="",
-        use_retrieval=True,
         save_prompt=False,
-        llm_name="ernie-3.5",
+        llm_name=None,
         llm_params={},
     ):
         """
         chat with key
 
         """
-        if not any(
-            [vector, visual_info, retrieval_result, self.visual_info, self.vector]
-        ):
+        if not any([vector, visual_info, retrieval_result]):
             return ChatResult(
                 {"chat_res": "请先完成图像解析再开始再对话", "prompt": ""}
             )
         key_list = format_key(key_list)
         # first get from table, then get from text in table, last get from all ocr
-        if visual_info:
-            # use for serving or local
-            _visual_info = visual_info
-        else:
-            # use for local
-            _visual_info = self.visual_info
 
-        ocr_text = _visual_info["ocr_text"]
-        html_list = _visual_info["table_html"]
-        table_text_list = _visual_info["table_text"]
+        ocr_text = visual_info["ocr_text"]
+        html_list = visual_info["table_html"]
+        table_text_list = visual_info["table_text"]
 
         prompt_res = {"ocr_prompt": "str", "table_prompt": [], "html_prompt": []}
 
@@ -573,18 +534,21 @@ class PPChatOCRPipeline(_TableRecPipeline):
             logging.debug("get result from ocr")
             if retrieval_result:
                 ocr_text = retrieval_result.get("retrieval")
-            elif use_retrieval and any([visual_info, vector]):
-                # for serving or local
-                ocr_text = self.retrieval(
-                    key_list=key_list,
-                    visual_info=visual_info,
-                    vector=vector,
-                    llm_name=llm_name,
-                    llm_params=llm_params,
-                )["retrieval"]
-            else:
+            elif vector:
+                # for serving
+                if llm_name:
+                    ocr_text = self.retrieval(
+                        key_list=key_list,
+                        vector=vector,
+                        llm_name=llm_name,
+                        llm_params=llm_params,
+                    )["retrieval"]
                 # for local
-                ocr_text = self.retrieval(key_list=key_list)["retrieval"]
+                else:
+                    ocr_text = self.retrieval(key_list=key_list, vector=vector)[
+                        "retrieval"
+                    ]
+
             prompt = self.get_prompt_for_ocr(
                 ocr_text,
                 key_list,
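
Reviewer note: taken together, these hunks remove the cached `self.visual_info`/`self.vector` fallbacks; `build_vector`, `retrieval`, and `chat` now receive their inputs explicitly, so local use mirrors the serving flow. A hedged sketch of the resulting calling convention (the key name and the visual stage are placeholders):

visual_info = ...                                    # produced by the pipeline's visual stage
vector = pipeline.build_vector(visual_info=visual_info)
retrieval = pipeline.retrieval(key_list=["InvoiceNo"], vector=vector)
result = pipeline.chat(
    key_list=["InvoiceNo"],
    visual_info=visual_info,                         # required: no cached fallback anymore
    vector=vector,
    retrieval_result=retrieval,
)
print(result["chat_res"])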

+ 1 - 1
paddlex/inference/pipelines/ppchatocrv3/utils.py

@@ -46,7 +46,7 @@ def get_oriclas_results(inputs, predictor):
     return results
 
 
-def get_uvdoc_results(inputs, predictor):
+def get_unwarp_results(inputs, predictor):
     results = []
     img_list = [img_info["img"] for img_info in inputs]
     for input, pred in zip(inputs, predictor(img_list)):

+ 1 - 2
paddlex/inference/pipelines/seal_recognition.py

@@ -50,7 +50,7 @@ class SealOCRPipeline(BasePipeline):
         device=None,
         predictor_kwargs=None,
     ):
-        super().__init__(predictor_kwargs=predictor_kwargs)
+        super().__init__(device, predictor_kwargs)
         self._build_predictor(
             layout_model=layout_model,
             text_det_model=text_det_model,
@@ -63,7 +63,6 @@ class SealOCRPipeline(BasePipeline):
             layout_batch_size=layout_batch_size,
             text_det_batch_size=text_det_batch_size,
             text_rec_batch_size=text_rec_batch_size,
-            device=device,
         )
 
     def _build_predictor(

+ 27 - 27
paddlex/inference/pipelines/serving/_pipeline_apps/ppchatocrv3.py

@@ -27,6 +27,7 @@ from pydantic import BaseModel, Field
 from typing_extensions import Annotated, TypeAlias, assert_never
 
 from .....utils import logging
+from .... import results
 from ...ppchatocrv3 import PPChatOCRPipeline
 from .. import file_storage
 from .. import utils as serving_utils
@@ -122,13 +123,12 @@ class BuildVectorStoreResult(BaseModel):
 class RetrieveKnowledgeRequest(BaseModel):
     keys: List[str]
     vectorStore: dict
-    visionInfo: dict
     llmName: Optional[LLMName] = None
     llmParams: Optional[Annotated[LLMParams, Field(discriminator="apiType")]] = None
 
 
 class RetrieveKnowledgeResult(BaseModel):
-    retrievalResult: str
+    retrievalResult: dict
 
 
 class ChatRequest(BaseModel):
@@ -137,9 +137,8 @@ class ChatRequest(BaseModel):
     taskDescription: Optional[str] = None
     rules: Optional[str] = None
     fewShot: Optional[str] = None
-    useVectorStore: bool = True
     vectorStore: Optional[dict] = None
-    retrievalResult: Optional[str] = None
+    retrievalResult: Optional[dict] = None
     returnPrompts: bool = True
     llmName: Optional[LLMName] = None
     llmParams: Optional[Annotated[LLMParams, Field(discriminator="apiType")]] = None
@@ -147,8 +146,8 @@ class ChatRequest(BaseModel):
 
 class Prompts(BaseModel):
     ocr: str
-    table: str
-    html: str
+    table: Optional[str] = None
+    html: Optional[str] = None
 
 
 class ChatResult(BaseModel):
@@ -314,9 +313,9 @@ def create_pipeline_app(pipeline: PPChatOCRPipeline, app_config: AppConfig) -> F
 
             result = await pipeline.infer(
                 images,
-                use_oricls=request.useOricls,
-                use_curve=request.useCurve,
-                use_uvdoc=request.useUvdoc,
+                use_doc_image_ori_cls_model=request.useOricls,
+                use_doc_image_unwarp_model=request.useCurve,
+                use_seal_text_det_model=request.useUvdoc,
             )
 
             vision_results: List[VisionResult] = []
@@ -392,7 +391,7 @@ def create_pipeline_app(pipeline: PPChatOCRPipeline, app_config: AppConfig) -> F
         pipeline = ctx.pipeline
 
         try:
-            kwargs = {"visual_info": request.visionInfo}
+            kwargs = {"visual_info": results.VisualInfoResult(request.visionInfo)}
             if request.minChars is not None:
                 kwargs["min_characters"] = request.minChars
             else:
@@ -432,8 +431,7 @@ def create_pipeline_app(pipeline: PPChatOCRPipeline, app_config: AppConfig) -> F
         try:
             kwargs = {
                 "key_list": request.keys,
-                "vector": request.vectorStore,
-                "visual_info": request.visionInfo,
+                "vector": results.VectorResult(request.vectorStore),
             }
             if request.llmName is not None:
                 kwargs["llm_name"] = request.llmName
@@ -448,7 +446,7 @@ def create_pipeline_app(pipeline: PPChatOCRPipeline, app_config: AppConfig) -> F
                 logId=serving_utils.generate_log_id(),
                 errorCode=0,
                 errorMsg="Success",
-                result=RetrieveKnowledgeResult(retrievalResult=result["retrieval"]),
+                result=RetrieveKnowledgeResult(retrievalResult=result),
             )
 
         except Exception as e:
@@ -456,7 +454,10 @@ def create_pipeline_app(pipeline: PPChatOCRPipeline, app_config: AppConfig) -> F
             raise HTTPException(status_code=500, detail="Internal server error")
 
     @app.post(
-        "/chatocr-chat", operation_id="chat", responses={422: {"model": Response}}
+        "/chatocr-chat",
+        operation_id="chat",
+        responses={422: {"model": Response}},
+        response_model_exclude_none=True,
     )
     async def _chat(
         request: ChatRequest,
@@ -466,7 +467,7 @@ def create_pipeline_app(pipeline: PPChatOCRPipeline, app_config: AppConfig) -> F
         try:
             kwargs = {
                 "key_list": request.keys,
-                "visual_info": request.visionInfo,
+                "visual_info": results.VisualInfoResult(request.visionInfo),
             }
             if request.taskDescription is not None:
                 kwargs["user_task_description"] = request.taskDescription
@@ -474,11 +475,12 @@ def create_pipeline_app(pipeline: PPChatOCRPipeline, app_config: AppConfig) -> F
                 kwargs["rules"] = request.rules
             if request.fewShot is not None:
                 kwargs["few_shot"] = request.fewShot
-            kwargs["use_vector"] = request.useVectorStore
             if request.vectorStore is not None:
-                kwargs["vector"] = request.vectorStore
+                kwargs["vector"] = results.VectorResult(request.vectorStore)
             if request.retrievalResult is not None:
-                kwargs["retrieval_result"] = request.retrievalResult
+                kwargs["retrieval_result"] = results.RetrievalResult(
+                    request.retrievalResult
+                )
             kwargs["save_prompt"] = request.returnPrompts
             if request.llmName is not None:
                 kwargs["llm_name"] = request.llmName
@@ -490,17 +492,15 @@ def create_pipeline_app(pipeline: PPChatOCRPipeline, app_config: AppConfig) -> F
             if result["prompt"]:
                 prompts = Prompts(
                     ocr=result["prompt"]["ocr_prompt"],
-                    table=result["prompt"]["table_prompt"],
-                    html=result["prompt"]["html_prompt"],
-                )
-                chat_result = ChatResult(
-                    chatResult=result["chat_res"],
-                    prompts=prompts,
+                    table=result["prompt"]["table_prompt"] or None,
+                    html=result["prompt"]["html_prompt"] or None,
                 )
             else:
-                chat_result = ChatResult(
-                    chatResult=result["chat_res"],
-                )
+                prompts = None
+            chat_result = ChatResult(
+                chatResult=result["chat_res"],
+                prompts=prompts,
+            )
             return ResultResponse(
                 logId=serving_utils.generate_log_id(),
                 errorCode=0,
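
Reviewer note: on the wire, the retrieval endpoint no longer takes `visionInfo`, `retrievalResult` is a dict (round-tripped through `results.RetrievalResult`) rather than a bare string, the `useVectorStore` flag is gone, and empty table/html prompts are dropped from responses via `response_model_exclude_none`. Illustrative request payloads:

retrieve_payload = {
    "keys": ["InvoiceNo"],
    "vectorStore": vector_store,          # dict returned by the build-vector endpoint
    "llmName": "ernie-3.5",               # optional
}
chat_payload = {
    "keys": ["InvoiceNo"],
    "visionInfo": vision_info,            # from the vision endpoint
    "vectorStore": vector_store,          # optional
    "retrievalResult": retrieval_result,  # optional, now a dict
    "returnPrompts": True,                # table/html prompts omitted when empty
}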

+ 2 - 2
paddlex/inference/pipelines/single_model_pipeline.py

@@ -18,9 +18,9 @@ from .base import BasePipeline
 class _SingleModelPipeline(BasePipeline):
 
     def __init__(self, model, batch_size=1, device=None, predictor_kwargs=None):
-        super().__init__(predictor_kwargs=predictor_kwargs)
+        super().__init__(device, predictor_kwargs)
         self._build_predictor(model)
-        self.set_predictor(batch_size=batch_size, device=device)
+        self.set_predictor(batch_size=batch_size)
 
     def _build_predictor(self, model):
         self.model = self._create(model)

+ 4 - 4
paddlex/inference/pipelines/table_recognition/table_recognition.py

@@ -26,9 +26,10 @@ class _TableRecPipeline(BasePipeline):
 
     def __init__(
         self,
-        predictor_kwargs=None,
+        device,
+        predictor_kwargs,
     ):
-        super().__init__(predictor_kwargs=predictor_kwargs)
+        super().__init__(device, predictor_kwargs)
 
     def _build_predictor(
         self,
@@ -179,12 +180,11 @@ class TableRecPipeline(_TableRecPipeline):
         device=None,
         predictor_kwargs=None,
     ):
-        super().__init__(predictor_kwargs=predictor_kwargs)
+        super().__init__(device, predictor_kwargs)
         self._build_predictor(layout_model, text_det_model, text_rec_model, table_model)
         self.set_predictor(
             layout_batch_size=layout_batch_size,
             text_det_batch_size=text_det_batch_size,
             text_rec_batch_size=text_rec_batch_size,
             table_batch_size=table_batch_size,
-            device=device,
         )

+ 8 - 8
paddlex/inference/results/chat_ocr.py

@@ -63,31 +63,31 @@ class VisualResult(BaseResult):
             oricls_result._HARD_FLAG = True
             oricls_result.save_to_img(oricls_save_path)
         uvdoc_save_path = f"{save_path}_uvdoc.jpg"
-        uvdoc_result = self["uvdoc_result"]
-        if uvdoc_result:
-            # uvdoc_result._HARD_FLAG = True
-            uvdoc_result.save_to_img(uvdoc_save_path)
+        unwarp_result = self["unwarp_result"]
+        if unwarp_result:
+            # unwarp_result._HARD_FLAG = True
+            unwarp_result.save_to_img(uvdoc_save_path)
         curve_save_path = f"{save_path}_curve.jpg"
         curve_results = self["curve_result"]
         # TODO(): support list of result
         if isinstance(curve_results, dict):
             curve_results = [curve_results]
         for curve_result in curve_results:
-            curve_result._HARD_FLAG = True if not uvdoc_result else False
+            curve_result._HARD_FLAG = True if not unwarp_result else False
             curve_result.save_to_img(curve_save_path)
         layout_save_path = f"{save_path}_layout.jpg"
         layout_result = self["layout_result"]
         if layout_result:
-            layout_result._HARD_FLAG = True if not uvdoc_result else False
+            layout_result._HARD_FLAG = True if not unwarp_result else False
             layout_result.save_to_img(layout_save_path)
         ocr_save_path = f"{save_path}_ocr.jpg"
         table_save_path = f"{save_path}_table.jpg"
         ocr_result = self["ocr_result"]
         if ocr_result:
-            ocr_result._HARD_FLAG = True if not uvdoc_result else False
+            ocr_result._HARD_FLAG = True if not unwarp_result else False
             ocr_result.save_to_img(ocr_save_path)
         for table_result in self["table_result"]:
-            table_result._HARD_FLAG = True if not uvdoc_result else False
+            table_result._HARD_FLAG = True if not unwarp_result else False
             table_result.save_to_img(table_save_path)
 
 

+ 2 - 2
paddlex/inference/utils/pp_option.py

@@ -28,7 +28,7 @@ class PaddlePredictorOption(object):
         "mkldnn",
         "mkldnn_bf16",
     )
-    SUPPORT_DEVICE = ("gpu", "cpu", "npu", "xpu", "mlu")
+    SUPPORT_DEVICE = ("gpu", "cpu", "npu", "xpu", "mlu", "dcu")
 
     def __init__(self, model_name=None, **kwargs):
         super().__init__()
@@ -95,12 +95,12 @@ class PaddlePredictorOption(object):
         if not device:
             return
         device_type, device_ids = parse_device(device)
-        self._cfg["device"] = device_type
         if device_type not in self.SUPPORT_DEVICE:
             support_run_mode_str = ", ".join(self.SUPPORT_DEVICE)
             raise ValueError(
                 f"The device type must be one of {support_run_mode_str}, but received {repr(device_type)}."
             )
+        self._cfg["device"] = device_type
         device_id = device_ids[0] if device_ids is not None else 0
         self._cfg["device_id"] = device_id
         set_env_for_device(device)
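
Reviewer note: moving the assignment below the check means an unsupported device type raises before `self._cfg` is touched, leaving the option consistent, and `dcu` joins the accepted device types. A minimal sketch, assuming `device` is a property setter backed by this code and the getter returns the stored device type:

from paddlex.inference.utils.pp_option import PaddlePredictorOption

opt = PaddlePredictorOption()
opt.device = "dcu:0"       # now valid: "dcu" is in SUPPORT_DEVICE
try:
    opt.device = "tpu:0"   # unsupported type, illustrative
except ValueError:
    assert opt.device == "dcu"   # config was not mutated by the failed set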

+ 2 - 2
paddlex/model.py

@@ -59,8 +59,8 @@ class _ModelBasedInference(_BaseModel):
     def predict(self, *args, **kwargs):
         yield from self._predictor(*args, **kwargs)
 
-    def set_predict(self, **kwargs):
-        self._predictor.set_predict(**kwargs)
+    def set_predictor(self, **kwargs):
+        self._predictor.set_predictor(**kwargs)
 
 
 class _ModelBasedConfig(_BaseModel):
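
Reviewer note: this fixes a broken delegation; the wrapper called `set_predict` on a predictor that only defines `set_predictor` (see `BasicPredictor` above). Usage sketch, assuming the public `create_model` factory returns this wrapper and the model name is illustrative:

import paddlex

model = paddlex.create_model("PP-OCRv4_mobile_det")   # assumed factory
model.set_predictor(batch_size=2, device="gpu:0")     # previously raised AttributeError via set_predict
for res in model.predict("demo.jpg"):
    print(res)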

+ 3 - 3
paddlex/pipelines/PP-ChatOCRv3-doc.yaml

@@ -21,7 +21,7 @@ Pipeline:
   text_det_batch_size: 1
   text_rec_batch_size: 1
   table_batch_size: 1
-  uvdoc_batch_size: 1
-  curve_batch_size: 1
-  oricls_batch_size: 1
+  doc_image_ori_cls_batch_size: 1
+  doc_image_unwarp_batch_size: 1
+  seal_text_det_batch_size: 1
   recovery: True
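
Reviewer note: the config keys now match the batch-size names used by `set_predictor` in the ppchatocrv3 hunks above; programmatic overrides use the same names. A sketch:

pipeline.set_predictor(
    doc_image_ori_cls_batch_size=1,
    doc_image_unwarp_batch_size=1,
    seal_text_det_batch_size=1,
)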

Some files were not shown because too many files changed in this diff