
Refine docs (#2214)

* Update paddlepaddle_install.md

* Update paddlepaddle_install_en.md

* refine docs

* refine docs

* refine docs
Liu Jiaxuan 1 year ago
parent
commit
b15cde9c03
32 files changed, with 695 additions and 292 deletions
  1. docs/installation/installation.md (+33 −5)
  2. docs/installation/installation_en.md (+36 −8)
  3. docs/installation/paddlepaddle_install.md (+26 −2)
  4. docs/installation/paddlepaddle_install_en.md (+27 −2)
  5. docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.md (+5 −5)
  6. docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection_en.md (+6 −6)
  7. docs/pipeline_usage/tutorials/cv_pipelines/image_classification.md (+7 −7)
  8. docs/pipeline_usage/tutorials/cv_pipelines/image_classification_en.md (+8 −8)
  9. docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md (+5 −5)
  10. docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification_en.md (+8 −8)
  11. docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.md (+6 −6)
  12. docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation_en.md (+11 −10)
  13. docs/pipeline_usage/tutorials/cv_pipelines/object_detection.md (+311 −43)
  14. docs/pipeline_usage/tutorials/cv_pipelines/object_detection_en.md (+37 −35)
  15. docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.md (+7 −7)
  16. docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation_en.md (+11 −11)
  17. docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection.md (+6 −6)
  18. docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection_en.md (+2 −2)
  19. docs/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md (+17 −7)
  20. docs/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md (+20 −13)
  21. docs/pipeline_usage/tutorials/ocr_pipelines/OCR.md (+6 −6)
  22. docs/pipeline_usage/tutorials/ocr_pipelines/OCR_en.md (+7 −4)
  23. docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.md (+5 −5)
  24. docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition_en.md (+6 −6)
  25. docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.md (+16 −12)
  26. docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md (+33 −28)
  27. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md (+5 −6)
  28. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection_en.md (+8 −9)
  29. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md (+5 −5)
  30. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification_en.md (+6 −6)
  31. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.md (+4 −4)
  32. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting_en.md (+5 −5)

+ 33 - 5
docs/installation/installation.md

@@ -20,7 +20,7 @@ PaddleX provides two installation modes: **wheel package installation** and **plugin installation**
 pip install https://paddle-model-ecology.bj.bcebos.com/paddlex/whl/paddlex-3.0.0b1-py3-none-any.whl
 ```
 ### 1.2 Plugin Installation Mode
-If your use case for PaddleX involves **custom development**, we recommend the more **powerful** plugin installation mode.
+If your use case for PaddleX involves **custom development** (e.g. retraining models, fine-tuning models, customizing model structures, customizing inference code, etc.), we recommend the more **powerful** plugin installation mode.
 
 After installing the PaddleX plugins you need, you can not only run inference and integration with the models supported by the plugins, but also perform more advanced custom development operations such as model training.
 
@@ -37,7 +37,7 @@ The plugins supported by PaddleX are listed below; please determine the plugin(s) you need based on your development requirements
 |General Instance Segmentation|Instance Segmentation|`PaddleDetection`|
 |General OCR|Text Detection<br>Text Recognition|`PaddleOCR`|
 |General Table Recognition|Layout Region Detection<br>Table Structure Recognition<br>Text Detection<br>Text Recognition|`PaddleOCR`<br>`PaddleDetection`|
-|Document Scene Information Extraction v3|Table Structure Recognition<br>Layout Region Detection<br>Text Detection<br>Text Recognition<br>Seal Text Detection<br>Text Image Correction<br>Document Image Orientation Classification|`PaddleOCR`<br>`PaddleDetection`<br>`PaddleClas` |
+|Document Scene Information Extraction v3|Table Structure Recognition<br>Layout Region Detection<br>Text Detection<br>Text Recognition<br>Seal Text Detection<br>Text Image Correction<br>Document Image Orientation Classification|`PaddleOCR`<br>`PaddleDetection`<br>`PaddleClas` |
 |Time Series Forecasting|Time Series Forecasting Module|`PaddleTS`|
 |Time Series Anomaly Detection|Time Series Anomaly Detection Module|`PaddleTS`|
 |Time Series Classification|Time Series Classification Module|`PaddleTS`|
@@ -55,7 +55,7 @@ The plugins supported by PaddleX are listed below; please determine the plugin(s) you need based on your development requirements
 git clone https://github.com/PaddlePaddle/PaddleX.git
 cd PaddleX
 pip install -e .
-paddlex --install PaddleXXX
+paddlex --install PaddleXXX  # e.g. PaddleOCR
 ```
 
 > ❗ Note: this installation method is an editable-mode install, so any code changes in the current project take effect directly in the installed PaddleX wheel package.
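
A quick way to confirm that the editable install is active is to inspect the package metadata; a minimal check, assuming pip >= 21.3 (which reports the editable location):

```bash
# For an editable ("pip install -e .") install, pip points at the cloned
# source tree, so local code changes apply without reinstalling
pip show paddlex
```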
@@ -76,15 +76,43 @@ paddlex --install PaddleXXX
 ### 2.1 Get PaddleX via Docker
 Referring to the commands below, use the official PaddleX Docker image to create a container named `paddlex` and map the current working directory to the `/paddle` directory inside the container.
 
+If your Docker version is >= 19.03, run:
+
 ```bash
+# For CPU users
+docker run --name paddlex -v $PWD:/paddle --shm-size=8g --network=host -it registry.baidubce.com/paddlex/paddlex:paddlex3.0.0b1-paddlepaddle3.0.0b1-cpu /bin/bash
+
+# For GPU users
 # For CUDA 11.8 users
-docker run --gpus all --name paddlex -v $PWD:/paddle --shm-size=8g --network=host -it registry.baidubce.com/paddlex/paddlex:paddlex3.0.0b1-paddlepaddle3.0.0b1-gpu-cuda11.8-cudnn8.9-trt8.5 /bin/bash
+docker run --gpus all --name paddlex -v $PWD:/paddle --shm-size=8g --network=host -it registry.baidubce.com/paddlex/paddlex:paddlex3.0.0b1-paddlepaddle3.0.0b1-gpu-cuda11.8-cudnn8.6-trt8.5 /bin/bash
 
 # For CUDA 12.3 users
 docker run --gpus all --name paddlex -v $PWD:/paddle --shm-size=8g --network=host -it registry.baidubce.com/paddlex/paddlex:paddlex3.0.0b1-paddlepaddle3.0.0b1-gpu-cuda12.3-cudnn9.0-trt8.6 /bin/bash
 ```
+
+* If your Docker version is <= 19.03 but >= 17.06, run:
+
+<details>
+   <summary> Click to expand</summary>
+
+```bash
+# For CPU users
+docker run --name paddlex -v $PWD:/paddle --shm-size=8g --network=host -it registry.baidubce.com/paddlex/paddlex:paddlex3.0.0b1-paddlepaddle3.0.0b1-cpu /bin/bash
+
+# For GPU users
+# For CUDA 11.8 users
+nvidia-docker run --name paddlex -v $PWD:/paddle --shm-size=8g --network=host -it registry.baidubce.com/paddlex/paddlex:paddlex3.0.0b1-paddlepaddle3.0.0b1-gpu-cuda11.8-cudnn8.6-trt8.5 /bin/bash
+
+# For CUDA 12.3 users
+nvidia-docker run --name paddlex -v $PWD:/paddle --shm-size=8g --network=host -it registry.baidubce.com/paddlex/paddlex:paddlex3.0.0b1-paddlepaddle3.0.0b1-gpu-cuda12.3-cudnn9.0-trt8.6 /bin/bash
+```
+
+</details>
+
+* If your Docker version is <= 17.06, please upgrade Docker.
+
 * To learn more about how Docker works or how to use it, see the [Docker official website](https://www.docker.com/) or the [Docker official tutorial](https://docs.docker.com/get-started/).
-* If you are a CUDA 11.8 user, make sure your Docker version is >= 19.03; if you are a CUDA 12.3 user, make sure your Docker version is >= 20.10.
+
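
To determine which of the branches above applies, you can check the installed Docker version first; a minimal check with the standard Docker CLI:

```bash
# Prints e.g. "Docker version 24.0.7, build afdd53b";
# versions >= 19.03 support the `docker run --gpus all ...` form shown above
docker --version

# To re-enter the `paddlex` container created above in a later session:
docker start paddlex
docker exec -it paddlex /bin/bash
```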
 ### 2.2 Custom Installation of PaddleX
 Before installing, make sure you have completed the local installation of PaddlePaddle by following the [PaddlePaddle Local Installation Tutorial](paddlepaddle_install.md).
 

+ 36 - 8
docs/installation/installation_en.md

@@ -19,9 +19,9 @@ pip install https://paddle-model-ecology.bj.bcebos.com/paddlex/whl/paddlex-3.0.0
 ```
 
 ### 1.2 Plugin Installation Mode
-If your use case for PaddleX involves **secondary development**, we recommend the more **powerful** plugin installation mode.
+If your use case for PaddleX involves **custom development** (e.g. retraining models, fine-tuning models, customizing model structures, customizing inference code, etc.), we recommend the more **powerful** plugin installation mode.
 
-After installing the PaddleX plugins you need, you can not only perform inference and integration with the supported models but also conduct advanced operations such as model training for secondary development.
+After installing the PaddleX plugins you need, you can not only perform inference and integration with the supported models but also conduct advanced operations such as model training for custom development.
 
 The plugins supported by PaddleX are listed below. Please determine the name(s) of the plugin(s) you need based on your development requirements:
 
@@ -35,12 +35,12 @@ The plugins supported by PaddleX are listed below. Please determine the name(s)
 | General Semantic Segmentation | Semantic Segmentation | `PaddleSeg` |
 | General Instance Segmentation | Instance Segmentation | `PaddleDetection` |
 | General OCR | Text Detection<br>Text Recognition | `PaddleOCR` |
-| General Table Recognition | Layout Region Detection<br>Table Structure Recognition<br>Text Detection<br>Text Recognition | `PaddleOCR`<br>`PaddleDetection` |
-| Document Scene Information Extraction v3 | Table Structure Recognition<br>Layout Region Detection<br>Text Detection<br>Text Recognition<br>Seal Text Detection<br>Document Image Correction<br>Document Image Orientation Classification | `PaddleOCR`<br>`PaddleDetection`<br>`PaddleClas` |
-| Time Series Prediction | Time Series Prediction Module | `PaddleTS` |
+| Table Recognition | Layout Region Detection<br>Table Structure Recognition<br>Text Detection<br>Text Recognition | `PaddleOCR`<br>`PaddleDetection` |
+| PP-ChatOCRv3-doc | Table Structure Recognition<br>Layout Region Detection<br>Text Detection<br>Text Recognition<br>Seal Text Detection<br>Text Image Correction<br>Document Image Orientation Classification | `PaddleOCR`<br>`PaddleDetection`<br>`PaddleClas` |
+| Time Series Forecasting | Time Series Forecasting Module | `PaddleTS` |
 | Time Series Anomaly Detection | Time Series Anomaly Detection Module | `PaddleTS` |
 | Time Series Classification | Time Series Classification Module | `PaddleTS` |
-| General Multi-label Classification | Image Multi-label Classification | `PaddleClas` |
+| Image Multi-Label Classification | Image Multi-label Classification | `PaddleClas` |
 | Small Object Detection | Small Object Detection | `PaddleDetection` |
 | Image Anomaly Detection | Unsupervised Anomaly Detection | `PaddleSeg` |
 
@@ -67,15 +67,43 @@ When using the official Docker image, **PaddlePaddle, PaddleX (including the whe
 When using custom installation methods, you need to first install the PaddlePaddle framework, then obtain the PaddleX source code, and finally choose the PaddleX installation mode.
 ### 2.1 Get PaddleX based on Docker
 Referring to the command below, use the official PaddleX Docker image to create a container named `paddlex` and map the current working directory to the `/paddle` directory inside the container.
+
+If your Docker version is >= 19.03, please use:
+
 ```bash
+# For CPU
+docker run --name paddlex -v $PWD:/paddle --shm-size=8g --network=host -it registry.baidubce.com/paddlex/paddlex:paddlex3.0.0b1-paddlepaddle3.0.0b1-cpu /bin/bash
+
+# For GPU
 # For CUDA11.8
-docker run --gpus all --name paddlex -v $PWD:/paddle --shm-size=8g --network=host -it registry.baidubce.com/paddlex/paddlex:paddlex3.0.0b1-paddlepaddle3.0.0b1-gpu-cuda11.8-cudnn8.9-trt8.5 /bin/bash
+docker run --gpus all --name paddlex -v $PWD:/paddle --shm-size=8g --network=host -it registry.baidubce.com/paddlex/paddlex:paddlex3.0.0b1-paddlepaddle3.0.0b1-gpu-cuda11.8-cudnn8.6-trt8.5 /bin/bash
 
 # For CUDA12.3
 docker run --gpus all --name paddlex -v $PWD:/paddle --shm-size=8g --network=host -it registry.baidubce.com/paddlex/paddlex:paddlex3.0.0b1-paddlepaddle3.0.0b1-gpu-cuda12.3-cudnn9.0-trt8.6 /bin/bash
 ```
+
+* If your Docker version is <= 19.03 but >= 17.06, please use:
+
+<details>
+   <summary> Click Here</summary>
+
+```bash
+# For CPU
+docker run --name paddlex -v $PWD:/paddle --shm-size=8g --network=host -it registry.baidubce.com/paddlex/paddlex:paddlex3.0.0b1-paddlepaddle3.0.0b1-cpu /bin/bash
+
+# For GPU
+# For CUDA11.8
+nvidia-docker run --name paddlex -v $PWD:/paddle --shm-size=8g --network=host -it registry.baidubce.com/paddlex/paddlex:paddlex3.0.0b1-paddlepaddle3.0.0b1-gpu-cuda11.8-cudnn8.6-trt8.5 /bin/bash
+
+# For CUDA12.3
+nvidia-docker run --name paddlex -v $PWD:/paddle --shm-size=8g --network=host -it registry.baidubce.com/paddlex/paddlex:paddlex3.0.0b1-paddlepaddle3.0.0b1-gpu-cuda12.3-cudnn9.0-trt8.6 /bin/bash
+```
+
+</details>
+
+* If your Docker version is <= 17.06, please upgrade your Docker.
+
 * If you want to delve deeper into the principles or usage of Docker, please refer to the [Docker Official Website](https://www.docker.com/) or the [Docker Official Tutorial](https://docs.docker.com/get-started/).
-* If you are a CUDA 11.8 user, please ensure your Docker version is >= 19.03; if you are a CUDA 12.3 user, please ensure your Docker version is >= 20.10.
 
 ### 2.2 Custom Installation of PaddleX
 Before installation, please ensure you have completed the local installation of PaddlePaddle by referring to the [PaddlePaddle Local Installation Tutorial](paddlepaddle_install_en.md).

+ 26 - 2
docs/installation/paddlepaddle_install.md

@@ -9,9 +9,28 @@
 ## Installing PaddlePaddle via Docker
 **If you install via Docker**, refer to the following commands to use the official PaddlePaddle Docker image to create a container named `paddlex` and map the current working directory to the `/paddle` directory inside the container:
 
+If your Docker version is >= 19.03, run:
+
 ```bash
 # For CPU users:
-nvidia-docker run --name paddlex -v $PWD:/paddle --shm-size=8G --network=host -it registry.baidubce.com/paddlepaddle/paddle:3.0.0b1 /bin/bash
+docker run --name paddlex -v $PWD:/paddle --shm-size=8G --network=host -it registry.baidubce.com/paddlepaddle/paddle:3.0.0b1 /bin/bash
+
+# For GPU users:
+# For CUDA 11.8 users
+docker run --gpus all --name paddlex -v $PWD:/paddle --shm-size=8G --network=host -it registry.baidubce.com/paddlepaddle/paddle:3.0.0b1-gpu-cuda11.8-cudnn8.6-trt8.5 /bin/bash
+
+# For CUDA 12.3 users
+docker run --gpus all --name paddlex -v $PWD:/paddle --shm-size=8G --network=host -it registry.baidubce.com/paddlepaddle/paddle:3.0.0b1-gpu-cuda12.3-cudnn9.0-trt8.6 /bin/bash
+```
+
+* If your Docker version is <= 19.03 but >= 17.06, run:
+
+<details>
+   <summary> Click to expand</summary>
+
+```bash
+# For CPU users:
+docker run --name paddlex -v $PWD:/paddle --shm-size=8G --network=host -it registry.baidubce.com/paddlepaddle/paddle:3.0.0b1 /bin/bash
 
 # For GPU users:
 # For CUDA 11.8 users
@@ -20,7 +39,12 @@ nvidia-docker run --name paddlex -v $PWD:/paddle --shm-size=8G --network=host -i
 # For CUDA 12.3 users
 nvidia-docker run --name paddlex -v $PWD:/paddle --shm-size=8G --network=host -it registry.baidubce.com/paddlepaddle/paddle:3.0.0b1-gpu-cuda12.3-cudnn9.0-trt8.6 /bin/bash
 ```
-Note: For more official PaddlePaddle Docker images, see the [PaddlePaddle official website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/docker/linux-docker.html). If you are a CUDA 11.8 user, make sure your Docker version is >= 19.03; if you are a CUDA 12.3 user, make sure your Docker version is >= 20.10.
+
+</details>
+
+* If your Docker version is <= 17.06, please upgrade Docker.
+
+* Note: For more official PaddlePaddle Docker images, see the [PaddlePaddle official website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/docker/linux-docker.html).
 
 ## Installing PaddlePaddle via pip
 **If you install via pip**, refer to the following commands to install PaddlePaddle in the current environment using pip:
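
The concrete pip commands fall outside this hunk; as a minimal sketch, assuming the 3.0.0b1 package index paths published on the PaddlePaddle install page:

```bash
# CPU-only build (the index URL is an assumption based on the PaddlePaddle install page)
python -m pip install paddlepaddle==3.0.0b1 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/

# GPU build for CUDA 11.8 (likewise an assumption; pick the index matching your CUDA version)
python -m pip install paddlepaddle-gpu==3.0.0b1 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/
```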

+ 27 - 2
docs/installation/paddlepaddle_install_en.md

@@ -7,9 +7,28 @@ When installing PaddlePaddle, you can choose to install it via Docker or pip.
 ## Installing PaddlePaddle via Docker
 **If you choose to install via Docker**, please refer to the following commands to use the official PaddlePaddle Docker image to create a container named `paddlex` and map the current working directory to the `/paddle` directory inside the container:
 
+If your Docker version is >= 19.03, please use:
+
+```bash
+# For CPU users:
+docker run --name paddlex -v $PWD:/paddle --shm-size=8G --network=host -it registry.baidubce.com/paddlepaddle/paddle:3.0.0b1 /bin/bash
+
+# For GPU users:
+# CUDA 11.8 users
+docker run --gpus all --name paddlex -v $PWD:/paddle --shm-size=8G --network=host -it registry.baidubce.com/paddlepaddle/paddle:3.0.0b1-gpu-cuda11.8-cudnn8.6-trt8.5 /bin/bash
+
+# CUDA 12.3 users
+docker run --gpus all --name paddlex -v $PWD:/paddle  --shm-size=8G --network=host -it registry.baidubce.com/paddlepaddle/paddle:3.0.0b1-gpu-cuda12.3-cudnn9.0-trt8.6 /bin/bash
+```
+
+* If your Docker version is <= 19.03 but >= 17.06, please use:
+
+<details>
+   <summary> Click Here</summary>
+
 ```bash
 # For CPU users:
-nvidia-docker run --name paddlex -v $PWD:/paddle --shm-size=8G --network=host -it registry.baidubce.com/paddlepaddle/paddle:3.0.0b1 /bin/bash
+docker run --name paddlex -v $PWD:/paddle --shm-size=8G --network=host -it registry.baidubce.com/paddlepaddle/paddle:3.0.0b1 /bin/bash
 
 # For GPU users:
 # CUDA 11.8 users
@@ -18,7 +37,13 @@ nvidia-docker run --name paddlex -v $PWD:/paddle --shm-size=8G --network=host -i
 # CUDA 12.3 users
 nvidia-docker run --name paddlex -v $PWD:/paddle  --shm-size=8G --network=host -it registry.baidubce.com/paddlepaddle/paddle:3.0.0b1-gpu-cuda12.3-cudnn9.0-trt8.6 /bin/bash
 ```
-Note: For more official PaddlePaddle Docker images, please refer to the [PaddlePaddle official website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/en/install/docker/linux-docker.html). If you are a CUDA 11.8 user, please ensure your Docker version is >= 19.03; if you are a CUDA 12.3 user, please ensure your Docker version is >= 20.10.
+
+</details>
+
+* If your Docker version is <= 17.06, please upgrade your Docker.
+
+
+* Note: For more official PaddlePaddle Docker images, please refer to the [PaddlePaddle official website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/en/install/docker/linux-docker.html).
 
 ## Installing PaddlePaddle via pip
 **If you choose to install via pip**, please refer to the following commands to install PaddlePaddle in your current environment using pip:

+ 5 - 5
docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.md

@@ -67,7 +67,7 @@ paddlex --pipeline ./anomaly_detection.yaml --input uad_grid.png
 ```
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/image_anomaly_detection/02.png)
 
-The visualized image is saved in the `output` directory by default, and can be customized via `--save_path`.
+The visualized image is not saved by default. You can customize the save path via `--save_path`; all results will then be saved in the specified path.
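
For example, to keep the visualized output, the command-line invocation from section 2.1 can be extended with this flag (the output directory here is illustrative):

```bash
# Save visualized results under ./output/ instead of discarding them
paddlex --pipeline anomaly_detection --input uad_grid.png --device gpu:0 --save_path ./output/
```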
 
 ### 2.2 Python Script Integration
 A few lines of code suffice for quick pipeline inference; take the image anomaly detection pipeline as an example:
@@ -93,7 +93,7 @@ for res in output:
 |-|-|-|-|
 |`pipeline`|The name of the pipeline or the path to the pipeline configuration file. If it is a pipeline name, it must be a pipeline supported by PaddleX.|`str`|None|
 |`device`|The device for pipeline model inference. Supports: "gpu", "cpu".|`str`|`gpu`|
-|`enable_hpi`|Whether to enable high-performance inference, only available if the pipeline supports it.|`bool`|`False`|
+|`use_hpip`|Whether to enable high-performance inference, only available if the pipeline supports it.|`bool`|`False`|
 
 (2) Call the `predict` method of the pipeline object for inference prediction: the `predict` method takes `x` as its parameter, used to input the data to be predicted, and supports multiple input methods, as shown in the following examples:
 
@@ -136,7 +136,7 @@ for res in output:
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed high-performance deployment procedure, refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy.md).
+🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed high-performance inference procedure, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services via network requests to obtain inference results. PaddleX supports low-cost service-oriented deployment of pipelines; for the detailed procedure, refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy.md).
 
@@ -615,9 +615,9 @@ PaddleX supports a wide range of hardware devices such as NVIDIA GPUs, Kunlunxin XPU, Ascend NPU, and Cambricon MLU
 ```
 paddlex --pipeline anomaly_detection --input uad_grid.png --device gpu:0
 ```
-At this point, if you wish to switch the hardware to Ascend NPU, simply change `--device` in the command to npu:
+At this point, if you wish to switch the hardware to Ascend NPU, simply change `--device` in the command to npu:0:
 
 ```
 paddlex --pipeline anomaly_detection --input uad_grid.png --device npu:0
 ```
-If you want to use the image anomaly detection pipeline on more types of hardware, refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/installation_other_devices.md).
+If you want to use the image anomaly detection pipeline on more types of hardware, refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).

+ 6 - 6
docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection_en.md

@@ -66,7 +66,7 @@ After running, the result is:
 ```
 ![](/tmp/images/pipelines/image_anomaly_detection/02.png)
 
-The visualized image is saved in the `output` directory by default, which can be customized using `--save_path`.
+The visualized image is not saved by default. You can customize the save path via `--save_path`; all results will then be saved in the specified path.
 
 ### 2.2 Python Script Integration
 A few lines of code are sufficient for quick inference using the pipeline. Taking the image anomaly detection pipeline as an example:
@@ -93,7 +93,7 @@ In the above Python script, the following steps are executed:
 |-|-|-|-|
 |`pipeline`| The name of the pipeline or the path to the pipeline configuration file. If it's a pipeline name, it must be a pipeline supported by PaddleX. |`str`| None |
 |`device`| The device for pipeline model inference. Supports: "gpu", "cpu". |`str`|`gpu`|
-|`enable_hpi`| Whether to enable high-performance inference, only available if the pipeline supports it. |`bool`|`False`|
+|`use_hpip`| Whether to enable high-performance inference, only available if the pipeline supports it. |`bool`|`False`|
 
 (2) Invoke the `predict` method of the pipeline object for inference prediction: The `predict` method takes `x` as its parameter, which is used to input data to be predicted, supporting multiple input methods, as shown in the following examples:
 
@@ -137,7 +137,7 @@ If you need to apply the pipeline directly in your Python project, refer to the
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing to significantly speed up the end-to-end process. For detailed high-performance deployment procedures, refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
+🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing to significantly speed up the end-to-end process. For detailed high-performance inference procedures, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving low-cost service-oriented deployment of pipelines. For detailed service-oriented deployment procedures, refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy_en.md).
 
@@ -559,7 +559,7 @@ echo "Output image saved at " . $output_image_path . "\n";
 📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, enabling devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.
 
-## 4. Customization and Fine-tuning
+## 4. Custom Development
 If the default model weights provided by the image anomaly detection pipeline do not meet your requirements for accuracy or speed in your specific scenario, you can try to further fine-tune the existing model using **your own domain-specific or application-specific data** to improve the recognition performance of the image anomaly detection pipeline in your scenario.
 
 ### 4.1 Model Fine-tuning
@@ -588,9 +588,9 @@ For example, if you use an NVIDIA GPU for inference with the image anomaly detec
 ```bash
 paddlex --pipeline anomaly_detection --input uad_grid.png --device gpu:0
 ```
-At this point, if you wish to switch the hardware to Ascend NPU, simply modify the `--device` in the Python command to `npu`:
+At this point, if you wish to switch the hardware to Ascend NPU, simply modify the `--device` in the Python command to `npu:0`:
 
 ```bash
 paddlex --pipeline anomaly_detection --input uad_grid.png --device npu:0
 ```
-If you want to use the image anomaly detection pipeline on more types of hardware, please refer to the [PaddleX Multi-device Usage Guide](../../../other_devices_support/installation_other_devices_en.md).
+If you want to use the image anomaly detection pipeline on more types of hardware, please refer to the [PaddleX Multi-device Usage Guide](../../../other_devices_support/multi_devices_use_guide_en.md).

+ 7 - 7
docs/pipeline_usage/tutorials/cv_pipelines/image_classification.md

@@ -17,7 +17,7 @@
     <th>Model</th>
     <th>Top1 Acc(%)</th>
     <th>GPU Inference Time (ms)</th>
-    <th>CPU Inference Time</th>
+    <th>CPU Inference Time (ms)</th>
     <th>Model Size (M)</th>
     <th>Description</th>
   </tr>
@@ -668,7 +668,7 @@ paddlex --pipeline ./image_classification.yaml --input general_image_classificat
 ```
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/image_classification/03.png)
 
-The visualized image is saved in the `output` directory by default, and can be customized via `--save_path`.
+The visualized image is not saved by default. You can customize the save path via `--save_path`; all results will then be saved in the specified path.
 
 #### 2.2.2 Python Script Integration
 A few lines of code suffice for quick pipeline inference; take the general image classification pipeline as an example:
@@ -694,7 +694,7 @@ for res in output:
 |-|-|-|-|
 |`pipeline`|The name of the pipeline or the path to the pipeline configuration file. If it is a pipeline name, it must be a pipeline supported by PaddleX.|`str`|None|
 |`device`|The device for pipeline model inference. Supports: "gpu", "cpu".|`str`|`gpu`|
-|`enable_hpi`|Whether to enable high-performance inference, only available if the pipeline supports it.|`bool`|`False`|
+|`use_hpip`|Whether to enable high-performance inference, only available if the pipeline supports it.|`bool`|`False`|
 
 (2) Call the `predict` method of the image classification pipeline object for inference prediction: the `predict` method takes `x` as its parameter, used to input the data to be predicted, and supports multiple input methods, as shown in the following examples:
 
@@ -702,7 +702,7 @@ for res in output:
 |---------------|-----------------------------------------------------------------------------------------------------------|
 | Python Var    | Supports directly passing Python variables, such as image data represented as numpy.ndarray.                                               |
 | str         | Supports passing the path of the file to be predicted, e.g. the local path of an image file: `/root/data/img.jpg`.                                   |
-| str           | Supports passing the URL of the file to be predicted, e.g. the URL of an image file: [example](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001).|
+| str           | Supports passing the URL of the file to be predicted, e.g. the URL of an image file: [example](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg).|
 | str           | Supports passing a local directory containing the files to be predicted, e.g. the local path: `/root/data/`.                               |
 | dict          | Supports passing a dictionary whose keys must correspond to the task, e.g. \"img\" for image classification; the values support the data types above, e.g. `{\"img\": \"/root/data1\"}`.|
 | list          | Supports passing a list whose elements are of the above types, e.g. `[numpy.ndarray, numpy.ndarray],[\"/root/data/img1.jpg\", \"/root/data/img2.jpg\"]`,`[\"/root/data1\", \"/root/data2\"]`,`[{\"img\": \"/root/data1\"}, {\"img\": \"/root/data2/img.jpg\"}]`.|
@@ -737,7 +737,7 @@ for res in output:
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed high-performance deployment procedure, refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy.md).
+🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed high-performance inference procedure, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services via network requests to obtain inference results. PaddleX supports low-cost service-oriented deployment of pipelines; for the detailed procedure, refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy.md).
 
@@ -1237,9 +1237,9 @@ PaddleX supports a wide range of hardware devices such as NVIDIA GPUs, Kunlunxin XPU, Ascend NPU, and Cambricon MLU
 ```
 paddlex --pipeline image_classification --input general_image_classification_001.jpg --device gpu:0
 ```
-At this point, if you wish to switch the hardware to Ascend NPU, simply change `--device` to npu:
+At this point, if you wish to switch the hardware to Ascend NPU, simply change `--device` to npu:0:
 
 ```
 paddlex --pipeline image_classification --input general_image_classification_001.jpg --device npu:0
 ```
-If you want to use the general image classification pipeline on more types of hardware, refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/installation_other_devices.md).
+If you want to use the general image classification pipeline on more types of hardware, refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).

+ 8 - 8
docs/pipeline_usage/tutorials/cv_pipelines/image_classification_en.md

@@ -17,7 +17,7 @@ Image classification is a technique that assigns images to predefined categories
     <th>Model</th>
     <th>Top-1 Accuracy (%)</th>
     <th>GPU Inference Time (ms)</th>
-    <th>CPU Inference Time</th>
+    <th>CPU Inference Time (ms)</th>
     <th>Model Size (M)</th>
     <th>Description</th>
   </tr>
@@ -668,7 +668,7 @@ After running, the result will be:
 ![](/tmp/images/pipelines/image_classification/03.png)
 
 
-The visualization images are saved in the `output` directory by default, and you can also customize it through `--save_path`.
+The visualized image is not saved by default. You can customize the save path via `--save_path`; all results will then be saved in the specified path.
 
 #### 2.2.2 Integration via Python Script
 A few lines of code can complete the quick inference of the pipeline. Taking the general image classification pipeline as an example:
@@ -694,7 +694,7 @@ In the above Python script, the following steps are executed:
 |-----------|-------------|------|---------|
 |`pipeline` | The name of the pipeline or the path to the pipeline configuration file. If it is the name of the pipeline, it must be a pipeline supported by PaddleX. | `str` | None |
 |`device` | The device for pipeline model inference. Supports: "gpu", "cpu". | `str` | "gpu" |
-|`enable_hpi` | Whether to enable high-performance inference, which is only available when the pipeline supports it. | `bool` | `False` |
+|`use_hpip` | Whether to enable high-performance inference, which is only available when the pipeline supports it. | `bool` | `False` |
 
 (2) Call the `predict` method of the image classification pipeline object for inference prediction: The `predict` method parameter is `x`, which is used to input data to be predicted, supporting multiple input methods, as shown in the following examples:
 
@@ -702,7 +702,7 @@ In the above Python script, the following steps are executed:
 |----------------|-------------|
 | Python Var | Supports directly passing Python variables, such as numpy.ndarray representing image data. |
 | `str` | Supports passing the path of the file to be predicted, such as the local path of an image file: `/root/data/img.jpg`. |
-| `str` | Supports passing the URL of the file to be predicted, such as the network URL of an image file: [Example](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001). |
+| `str` | Supports passing the URL of the file to be predicted, such as the network URL of an image file: [Example](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg). |
 | `str` | Supports passing a local directory, which should contain files to be predicted, such as the local path: `/root/data/`. |
 | `dict` | Supports passing a dictionary type, where the key needs to correspond to the specific task, such as "img" for the image classification task, and the value of the dictionary supports the above data types, e.g., `{"img": "/root/data1"}`. |
 | `list` | Supports passing a list, where the list elements need to be the above data types, such as `[numpy.ndarray, numpy.ndarray]`, `["/root/data/img1.jpg", "/root/data/img2.jpg"]`, `["/root/data1", "/root/data2"]`, `[{"img": "/root/data1"}, {"img": "/root/data2/img.jpg"}]`. |
@@ -738,7 +738,7 @@ If you need to apply the pipeline directly in your Python project, refer to the
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end speedups. For detailed high-performance deployment procedures, refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
+🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end speedups. For detailed high-performance inference procedures, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving low-cost service-oriented deployment of pipelines. For detailed service-oriented deployment procedures, refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy_en.md).
 
@@ -1189,7 +1189,7 @@ print_r($result["categories"]);
 📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.
 
-## 4. Customization and Fine-tuning
+## 4. Custom Development
 If the default model weights provided by the general image classification pipeline do not meet your requirements for accuracy or speed in your specific scenario, you can try to further fine-tune the existing model using **data from your specific domain or application scenario** to improve the recognition performance of the general image classification pipeline in your scenario.
 
 ### 4.1 Model Fine-tuning
@@ -1218,9 +1218,9 @@ For example, if you use an NVIDIA GPU for inference in the image classification
 ```bash
 paddlex --pipeline image_classification --input general_image_classification_001.jpg --device gpu:0
 ```
-At this point, if you wish to switch the hardware to Ascend NPU, simply modify the `--device` in the Python command to `npu`:
+At this point, if you wish to switch the hardware to Ascend NPU, simply modify the `--device` in the Python command to `npu:0`:
 
 ```bash
 paddlex --pipeline image_classification --input general_image_classification_001.jpg --device npu:0
 ```
-If you want to use the General Image Classification Pipeline on more types of hardware, please refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/installation_other_devices_en.md).
+If you want to use the General Image Classification Pipeline on more types of hardware, please refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide_en.md).

+ 5 - 5
docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md

@@ -77,7 +77,7 @@ paddlex --pipeline ./multi_label_image_classification.yaml --input general_image
 ```
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/image_multi_label_classification/02.png)
 
-The visualized image is saved in the `output` directory by default, and can be customized via `--save_path`.
+The visualized image is not saved by default. You can customize the save path via `--save_path`; all results will then be saved in the specified path.
 
 ### 2.2 Python Script Integration
 A few lines of code suffice for quick pipeline inference; take the general image multi-label classification pipeline as an example:
@@ -103,7 +103,7 @@ for res in output:
 |-|-|-|-|
 |`pipeline`|The name of the pipeline or the path to the pipeline configuration file. If it is a pipeline name, it must be a pipeline supported by PaddleX.|`str`|None|
 |`device`|The device for pipeline model inference. Supports: "gpu", "cpu".|`str`|`gpu`|
-|`enable_hpi`|Whether to enable high-performance inference, only available if the pipeline supports it.|`bool`|`False`|
+|`use_hpip`|Whether to enable high-performance inference, only available if the pipeline supports it.|`bool`|`False`|
 
 (2) Call the `predict` method of the multi-label classification pipeline object for inference prediction: the `predict` method takes `x` as its parameter, used to input the data to be predicted, and supports multiple input methods, as shown in the following examples:
 
@@ -146,7 +146,7 @@ for res in output:
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed high-performance deployment procedure, refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy.md).
+🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed high-performance inference procedure, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services via network requests to obtain inference results. PaddleX supports low-cost service-oriented deployment of pipelines; for the detailed procedure, refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy.md).
 
@@ -646,9 +646,9 @@ PaddleX supports a wide range of hardware devices such as NVIDIA GPUs, Kunlunxin XPU, Ascend NPU, and Cambricon MLU
 ```
 paddlex --pipeline multi_label_image_classification --input general_image_classification_001.jpg --device gpu:0
 ```
-At this point, if you wish to switch the hardware to Ascend NPU, simply change `--device` in the command to npu:
+At this point, if you wish to switch the hardware to Ascend NPU, simply change `--device` in the command to npu:0:
 
 ```
 paddlex --pipeline multi_label_image_classification --input general_image_classification_001.jpg --device npu:0
 ```
-If you want to use the general image multi-label classification pipeline on more types of hardware, refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/installation_other_devices.md).
+If you want to use the general image multi-label classification pipeline on more types of hardware, refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).

+ 8 - 8
docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification_en.md

@@ -5,7 +5,7 @@
 ## 1. Introduction to the General Image Multi-Label Classification Pipeline
 Image multi-label classification is a technique that assigns multiple relevant categories to a single image simultaneously, widely used in image annotation, content recommendation, and social media analysis. It can identify multiple objects or features present in an image, for example, an image containing both "dog" and "outdoor" labels. By leveraging deep learning models, image multi-label classification automatically extracts image features and performs accurate classification, providing users with more comprehensive information. This technology is of great significance in applications such as intelligent search engines and automatic content generation.
 
-![](/tmp/images/pipelines/image_multi_label_classification/01.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/image_multi_label_classification/01.png)
 
 **The General Image Multi-Label Classification Pipeline includes a module for image multi-label classification. If you prioritize model accuracy, choose a model with higher accuracy. If you prioritize inference speed, choose a model with faster inference. If you prioritize model storage size, choose a model with a smaller storage size.**
 
@@ -72,9 +72,9 @@ After running, the result obtained is:
 ```
 {'img_path': '/root/.paddlex/predict_input/general_image_classification_001.jpg', 'class_ids': [21, 0, 30, 24], 'scores': [0.99257, 0.70596, 0.63001, 0.57852], 'label_names': ['bear', 'person', 'skis', 'backpack']}
 ```
-![](/tmp/images/pipelines/image_multi_label_classification/02.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/image_multi_label_classification/02.png)
 
-The visualization image is saved in the `output` directory by default, and you can also customize it through `--save_path`.
+The visualized image is not saved by default. You can customize the save path via `--save_path`; all results will then be saved in the specified path.
 
 ### 2.2 Integration via Python Script
 A few lines of code can complete the rapid inference of the pipeline. Taking the general image multi-label classification pipeline as an example:
@@ -100,7 +100,7 @@ In the above Python script, the following steps are executed:
 |-----------|-------------|------|---------------|
 |`pipeline` | The name of the pipeline or the path of the pipeline configuration file. If it is the name of the pipeline, it must be a pipeline supported by PaddleX. | `str` | None |
 |`device` | The device for pipeline model inference. Supports: "gpu", "cpu". | `str` | "gpu" |
-|`enable_hpi` | Whether to enable high-performance inference, which is only available when the pipeline supports it. | `bool` | `False` |
+|`use_hpip` | Whether to enable high-performance inference, which is only available when the pipeline supports it. | `bool` | `False` |
 
 (2) Call the `predict` method of the multi-label classification pipeline object for inference prediction: The `predict` method parameter is `x`, which is used to input data to be predicted, supporting multiple input methods, as shown in the following examples:
 
@@ -144,7 +144,7 @@ If you need to directly apply the pipeline in your Python project, refer to the
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In actual production environments, many applications have strict standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins that aim to deeply optimize model inference and pre/post-processing to significantly speed up the end-to-end process. For detailed high-performance deployment procedures, refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
+🚀 **High-Performance Inference**: In actual production environments, many applications have strict standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins that aim to deeply optimize model inference and pre/post-processing to significantly speed up the end-to-end process. For detailed high-performance inference procedures, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving low-cost service-oriented deployment of pipelines. For detailed service-oriented deployment procedures, refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy_en.md).
 
@@ -595,7 +595,7 @@ print_r($result["categories"]);
 📱 **Edge Deployment**: Edge deployment is a way to place computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.
 
-## 4. Customization and Fine-tuning
+## 4. Custom Development
 If the default model weights provided by the general image multi-label classification pipeline do not meet your requirements in terms of accuracy or speed in your specific scenario, you can try to further fine-tune the existing model using **your own domain-specific or application-specific data** to improve the recognition performance of the general image multi-label classification pipeline in your scenario.
 
 ### 4.1 Model Fine-tuning
@@ -625,8 +625,8 @@ For example, if you use an NVIDIA GPU for inference of the image multi-label cla
 paddlex --pipeline multi_label_image_classification --input https://paddle-model-ecology.bj.bcebos.com/padd
 ```
 
-At this point, if you wish to switch the hardware to Ascend NPU, simply modify the `--device` in the Python command to `npu`:
+At this point, if you wish to switch the hardware to Ascend NPU, simply modify the `--device` in the Python command to `npu:0`:
 ```
 paddlex --pipeline multi_label_image_classification --input https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg --device npu:0
 ```
-If you want to use the General Image Multi-label Classification Pipeline on more diverse hardware, please refer to the [PaddleX Multi-device Usage Guide](../../../installation/installation_other_devices_en.md).
+If you want to use the General Image Multi-label Classification Pipeline on more diverse hardware, please refer to the [PaddleX Multi-device Usage Guide](../../../installation/multi_devices_use_guide_en.md).

+ 6 - 6
docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.md

@@ -12,7 +12,7 @@
 <details>
    <summary> 👉Model List Details</summary>
 
-|Model Name|Mask AP|GPU Inference Time (ms)|CPU Inference Time|Model Size (M)|
+|Model Name|Mask AP|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|
 |-|-|-|-|-|
 |Mask-RT-DETR-H|50.6|132.693|4896.17|449.9|
 |Mask-RT-DETR-L|45.7|46.5059|2575.92|113.6|
@@ -91,7 +91,7 @@ paddlex --pipeline ./instance_segmentation.yaml --input general_instance_segment
 ```
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/instance_segmentation/03.png)
 
-The visualized image is saved in the `output` directory by default, and can be customized via `--save_path`.
+The visualized image is not saved by default. You can customize the save path via `--save_path`; all results will then be saved in the specified path.
 
 #### 2.2.2 Python Script Integration
 A few lines of code suffice for quick pipeline inference; take the general instance segmentation pipeline as an example:
@@ -117,7 +117,7 @@ for res in output:
 |-|-|-|-|
 |`pipeline`|The name of the pipeline or the path to the pipeline configuration file. If it is a pipeline name, it must be a pipeline supported by PaddleX.|`str`|None|
 |`device`|The device for pipeline model inference. Supports: "gpu", "cpu".|`str`|`gpu`|
-|`enable_hpi`|Whether to enable high-performance inference, only available if the pipeline supports it.|`bool`|`False`|
+|`use_hpip`|Whether to enable high-performance inference, only available if the pipeline supports it.|`bool`|`False`|
 
 (2) Call the `predict` method of the instance segmentation pipeline object for inference prediction: the `predict` method takes `x` as its parameter, used to input the data to be predicted, and supports multiple input methods, as shown in the following examples:
 
@@ -160,7 +160,7 @@ for res in output:
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed high-performance deployment procedure, refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy.md).
+🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed high-performance inference procedure, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services via network requests to obtain inference results. PaddleX supports low-cost service-oriented deployment of pipelines; for the detailed procedure, refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy.md).
 
@@ -676,9 +676,9 @@ PaddleX supports a wide range of hardware devices such as NVIDIA GPUs, Kunlunxin XPU, Ascend NPU, and Cambricon MLU
 ```
 paddlex --pipeline instance_segmentation --input general_instance_segmentation_004.png --device gpu:0
 ```
-At this point, if you wish to switch the hardware to Ascend NPU, simply change `--device` in the command to npu:
+At this point, if you wish to switch the hardware to Ascend NPU, simply change `--device` in the command to npu:0:
 
 ```
 paddlex --pipeline instance_segmentation --input general_instance_segmentation_004.png --device npu:0
 ```
-If you want to use the general instance segmentation pipeline on more types of hardware, refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/installation_other_devices.md).
+If you want to use the general instance segmentation pipeline on more types of hardware, refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).

+ 11 - 10
docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation_en.md

@@ -5,14 +5,15 @@
 ## 1. Introduction to the General Instance Segmentation Pipeline
 Instance segmentation is a computer vision task that not only identifies the object categories in an image but also distinguishes the pixels of different instances within the same category, enabling precise segmentation of each object. Instance segmentation can separately label each car, person, or animal in an image, ensuring they are independently processed at the pixel level. For example, in a street scene image containing multiple cars and pedestrians, instance segmentation can clearly separate the contours of each car and person, forming multiple independent region labels. This technology is widely used in autonomous driving, video surveillance, and robotic vision, often relying on deep learning models (such as Mask R-CNN) to achieve efficient pixel classification and instance differentiation through Convolutional Neural Networks (CNNs), providing powerful support for understanding complex scenes.
 
-![](/tmp/images/pipelines/instance_segmentation/01.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/instance_segmentation/01.png)
+
 
 **The General Instance Segmentation Pipeline includes an Object Detection module. If you prioritize model precision, choose a model with higher precision. If you prioritize inference speed, choose a model with faster inference. If you prioritize model storage size, choose a model with a smaller storage size.**
 
 <details>
    <summary> 👉Model List Details</summary>
 
-|Model Name|Mask AP|GPU Inference Time (ms)|CPU Inference Time|Model Size (M)|
+|Model Name|Mask AP|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|
 |-|-|-|-|-|
 |Mask-RT-DETR-H|50.6|132.693|4896.17|449.9|
 |Mask-RT-DETR-L|45.7|46.5059|2575.92|113.6|
@@ -40,7 +41,7 @@ The pre-trained model pipelines provided by PaddleX allow for quick experience o
 ### 2.1 Online Experience
 You can [experience online](https://aistudio.baidu.com/community/app/100063/webUI) the effects of the General Instance Segmentation Pipeline using the demo images provided by the official. For example:
 
-![](/tmp/images/pipelines/instance_segmentation/02.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/instance_segmentation/02.png)
 
 If you are satisfied with the pipeline's performance, you can directly integrate and deploy it. If not, you can also use your private data to **fine-tune the model within the pipeline**.
 
@@ -94,9 +95,9 @@ After running, the result is:
 {'img_path': '/root/.paddlex/predict_input/general_instance_segmentation_004.png', 'boxes': [{'cls_id': 0, 'label': 'person', 'score': 0.8698326945304871, 'coordinate': [339, 0, 639, 575]}, {'cls_id': 0, 'label': 'person', 'score': 0.8571141362190247, 'coordinate': [0, 0, 195, 575]}, {'cls_id': 0, 'label': 'person', 'score': 0.8202633857727051, 'coordinate': [88, 113, 401, 574]}, {'cls_id': 0, 'label': 'person', 'score': 0.7108577489852905, 'coordinate': [522, 21, 767, 574]}, {'cls_id': 27, 'label': 'tie', 'score': 0.554280698299408, 'coordinate': [247, 311, 355, 574]}]}
 ```
 
-![](/tmp/images/pipelines/instance_segmentation/03.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/instance_segmentation/03.png)
 
-The visualization image is saved in the `output` directory by default, and you can customize it through `--save_path`.
+The visualized image is not saved by default. You can customize the save path via `--save_path`; all results will then be saved in the specified path.
 
 #### 2.2.2 Python Script Integration
 A few lines of code can complete the quick inference of the pipeline. Taking the general instance segmentation pipeline as an example:
@@ -122,7 +123,7 @@ In the above Python script, the following steps are executed:
 |-----------|-------------|------|---------|
 |`pipeline` | The name of the pipeline or the path to the pipeline configuration file. If it is the name of the pipeline, it must be a pipeline supported by PaddleX. | `str` | None |
 |`device` | The device for pipeline model inference. Supports: "gpu", "cpu". | `str` | "gpu" |
-|`enable_hpi` | Whether to enable high-performance inference, which is only available when the pipeline supports it. | `bool` | `False` |
+|`use_hpip` | Whether to enable high-performance inference, which is only available when the pipeline supports it. | `bool` | `False` |
 
 (2) Call the `predict` method of the instance segmentation pipeline object for inference prediction: The `predict` method parameter is `x`, which is used to input data to be predicted, supporting multiple input methods, as shown in the following examples:
 
@@ -166,7 +167,7 @@ If you need to directly apply the pipeline in your Python project, you can refer
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins that aim to deeply optimize model inference and pre/post-processing for significant speedups in the end-to-end process. For detailed high-performance deployment procedures, please refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
+🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins that aim to deeply optimize model inference and pre/post-processing for significant speedups in the end-to-end process. For detailed high-performance inference procedures, please refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving low-cost service-oriented deployment of pipelines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy_en.md).
 
@@ -632,7 +633,7 @@ print_r($result["instances"]);
 📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on the user's device itself, allowing the device to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
 You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.
 
-## 4. Custom Development
+## 4. Custom Development
 If the default model weights provided by the general instance segmentation pipeline do not meet your requirements for accuracy or speed in your scenario, you can try to further **fine-tune** the existing model using **data specific to your domain or application scenario** to improve the recognition effect of the general instance segmentation pipeline in your scenario.
 
 ### 4.1 Model Fine-tuning
@@ -662,10 +663,10 @@ For example, if you use an NVIDIA GPU for instance segmentation pipeline inferen
 ```bash
 paddlex --pipeline instance_segmentation --input general_instance_segmentation_004.png --device gpu:0
 ```
-At this point, if you wish to switch the hardware to Ascend NPU, simply modify the `--device` in the Python command to `npu`:
+At this point, if you wish to switch the hardware to Ascend NPU, simply modify the `--device` in the Python command to `npu:0`:
 
 ```bash
 paddlex --pipeline instance_segmentation --input general_instance_segmentation_004.png --device npu:0
 ```
 
-If you want to use the General Instance Segmentation Pipeline on more types of hardware, please refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/installation_other_devices_en.md).
+If you want to use the General Instance Segmentation Pipeline on more types of hardware, please refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide_en.md).

+ 311 - 43
docs/pipeline_usage/tutorials/cv_pipelines/object_detection.md

@@ -12,47 +12,315 @@
 <details>
    <summary> 👉Model List Details</summary>
 
-|Model Name|mAP(%)|GPU Inference Time (ms)|CPU Inference Time|Model Size (M)|
-|-|-|-|-|-|
-|Cascade-FasterRCNN-ResNet50-FPN|41.1|-|-|245.4 M|
-|Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN|45.0|-|-|246.2 M|
-|CenterNet-DLA-34|37.6|-|-|75.4 M|
-|CenterNet-ResNet50|38.9|-|-|319.7 M|
-|DETR-R50|42.3|59.2132|5334.52|159.3 M|
-|FasterRCNN-ResNet34-FPN|37.8|-|-|137.5 M|
-|FasterRCNN-ResNet50-FPN|38.4|-|-|148.1 M|
-|FasterRCNN-ResNet50-vd-FPN|39.5|-|-|148.1 M|
-|FasterRCNN-ResNet50-vd-SSLDv2-FPN|41.4|-|-|148.1 M|
-|FasterRCNN-ResNet50|36.7|-|-|120.2 M|
-|FasterRCNN-ResNet101-FPN|41.4|-|-|216.3 M|
-|FasterRCNN-ResNet101|39.0|-|-|188.1 M|
-|FasterRCNN-ResNeXt101-vd-FPN|43.4|-|-|360.6 M|
-|FasterRCNN-Swin-Tiny-FPN|42.6|-|-|159.8 M|
-|FCOS-ResNet50|39.6|103.367|3424.91|124.2 M|
-|PicoDet-L|42.6|16.6715|169.904|20.9 M|
-|PicoDet-M|37.5|16.2311|71.7257|16.8 M|
-|PicoDet-S|29.1|14.097|37.6563|4.4 M |
-|PicoDet-XS|26.2|13.8102|48.3139|5.7M |
-|PP-YOLOE_plus-L|52.9|33.5644|814.825|185.3 M|
-|PP-YOLOE_plus-M|49.8|19.843|449.261|83.2 M|
-|PP-YOLOE_plus-S|43.7|16.8884|223.059|28.3 M|
-|PP-YOLOE_plus-X|54.7|57.8995|1439.93|349.4 M|
-|RT-DETR-H|56.3|114.814|3933.39|435.8 M|
-|RT-DETR-L|53.0|34.5252|1454.27|113.7 M|
-|RT-DETR-R18|46.5|19.89|784.824|70.7 M|
-|RT-DETR-R50|53.1|41.9327|1625.95|149.1 M|
-|RT-DETR-X|54.8|61.8042|2246.64|232.9 M|
-|YOLOv3-DarkNet53|39.1|40.1055|883.041|219.7 M|
-|YOLOv3-MobileNetV3|31.4|18.6692|267.214|83.8 M|
-|YOLOv3-ResNet50_vd_DCN|40.6|31.6276|856.047|163.0 M|
-|YOLOX-L|50.1|185.691|1250.58|192.5 M|
-|YOLOX-M|46.9|123.324|688.071|90.0 M|
-|YOLOX-N|26.1|79.1665|155.59|3.4M|
-|YOLOX-S|40.4|184.828|474.446|32.0 M|
-|YOLOX-T|32.9|102.748|212.52|18.1 M|
-|YOLOX-X|51.8|227.361|2067.84|351.5 M|
+<table>
+  <tr>
+    <th>Model</th>
+    <th>mAP(%)</th>
+    <th>GPU Inference Time (ms)</th>
+    <th>CPU Inference Time (ms)</th>
+    <th>Model Size (M)</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>Cascade-FasterRCNN-ResNet50-FPN</td>
+    <td>41.1</td>
+    <td>-</td>
+    <td>-</td>
+    <td>245.4 M</td>
+    <td rowspan="2">Cascade-FasterRCNN 是一种改进的Faster R-CNN目标检测模型,通过耦联多个检测器,利用不同IoU阈值优化检测结果,解决训练和预测阶段的mismatch问题,提高目标检测的准确性。</td>
+  </tr>
+  <tr>
+    <td>Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN</td>
+    <td>45.0</td>
+    <td>-</td>
+    <td>-</td>
+    <td>246.2 M</td>
+  </tr>
+  <tr>
+    <td>CenterNet-DLA-34</td>
+    <td>37.6</td>
+    <td>-</td>
+    <td>-</td>
+    <td>75.4 M</td>
+    <td rowspan="2">CenterNet是一种anchor-free目标检测模型,把待检测物体的关键点视为单一点-即其边界框的中心点,并通过关键点进行回归。</td>
+  </tr>
+  <tr>
+    <td>CenterNet-ResNet50</td>
+    <td>38.9</td>
+    <td>-</td>
+    <td>-</td>
+    <td>319.7 M</td>
+
+  </tr>
+  <tr>
+    <td>DETR-R50</td>
+    <td>42.3</td>
+    <td>59.2132</td>
+    <td>5334.52</td>
+    <td>159.3 M</td>
+    <td>DETR is a transformer-based object detection model proposed by Facebook. It achieves end-to-end object detection without requiring predefined anchor boxes or NMS post-processing.</td>
+  </tr>
+  <tr>
+    <td>FasterRCNN-ResNet34-FPN</td>
+    <td>37.8</td>
+    <td>-</td>
+    <td>-</td>
+    <td>137.5 M</td>
+    <td rowspan="9">Faster R-CNN是典型的two-stage目标检测模型,即先生成区域建议(Region Proposal),然后在生成的Region Proposal上做分类和回归。相较于前代R-CNN和Fast R-CNN,Faster R-CNN的改进主要在于区域建议方面,使用区域建议网络(Region Proposal Network, RPN)提供区域建议,以取代传统选择性搜索。RPN是卷积神经网络,并与检测网络共享图像的卷积特征,减少了区域建议的计算开销。</td>
+  </tr>
+  <tr>
+    <td>FasterRCNN-ResNet50-FPN</td>
+    <td>38.4</td>
+    <td>-</td>
+    <td>-</td>
+    <td>148.1 M</td>
+
+  </tr>
+  <tr>
+    <td>FasterRCNN-ResNet50-vd-FPN</td>
+    <td>39.5</td>
+    <td>-</td>
+    <td>-</td>
+    <td>148.1 M</td>
+
+  </tr>
+  <tr>
+    <td>FasterRCNN-ResNet50-vd-SSLDv2-FPN</td>
+    <td>41.4</td>
+    <td>-</td>
+    <td>-</td>
+    <td>148.1 M</td>
+
+  </tr>
+  <tr>
+    <td>FasterRCNN-ResNet50</td>
+    <td>36.7</td>
+    <td>-</td>
+    <td>-</td>
+    <td>120.2 M</td>
+
+  </tr>
+  <tr>
+    <td>FasterRCNN-ResNet101-FPN</td>
+    <td>41.4</td>
+    <td>-</td>
+    <td>-</td>
+    <td>216.3 M</td>
+
+  </tr>
+  <tr>
+    <td>FasterRCNN-ResNet101</td>
+    <td>39.0</td>
+    <td>-</td>
+    <td>-</td>
+    <td>188.1 M</td>
+
+  </tr>
+  <tr>
+    <td>FasterRCNN-ResNeXt101-vd-FPN</td>
+    <td>43.4</td>
+    <td>-</td>
+    <td>-</td>
+    <td>360.6 M</td>
+
+  </tr>
+  <tr>
+    <td>FasterRCNN-Swin-Tiny-FPN</td>
+    <td>42.6</td>
+    <td>-</td>
+    <td>-</td>
+    <td>159.8 M</td>
+
+  </tr>
+  <tr>
+    <td>FCOS-ResNet50</td>
+    <td>39.6</td>
+    <td>103.367</td>
+    <td>3424.91</td>
+    <td>124.2 M</td>
+    <td>FCOS is an anchor-free object detection model that performs dense prediction. It uses the RetinaNet backbone, directly regresses the width and height of target objects on the feature map, and predicts the object category along with centerness (the degree to which a feature-map pixel deviates from the object center); centerness is ultimately used as a weight to adjust the object score.</td>
+  </tr>
+  <tr>
+    <td>PicoDet-L</td>
+    <td>42.6</td>
+    <td>16.6715</td>
+    <td>169.904</td>
+    <td>20.9 M</td>
+    <td rowspan="4">PP-PicoDet是一种全尺寸、棱视宽目标的轻量级目标检测算法,它考虑移动端设备运算量。与传统目标检测算法相比,PP-PicoDet具有更小的模型尺寸和更低的计算复杂度,并在保证检测精度的同时更高的速度和更低的延迟。</td>
+  </tr>
+  <tr>
+    <td>PicoDet-M</td>
+    <td>37.5</td>
+    <td>16.2311</td>
+    <td>71.7257</td>
+    <td>16.8 M</td>
+
+  </tr>
+  <tr>
+    <td>PicoDet-S</td>
+    <td>29.1</td>
+    <td>14.097</td>
+    <td>37.6563</td>
+    <td>4.4 M</td>
+
+  </tr>
+  <tr>
+    <td>PicoDet-XS</td>
+    <td>26.2</td>
+    <td>13.8102</td>
+    <td>48.3139</td>
+    <td>5.7 M</td>
+
+  </tr>
+    <tr>
+    <td>PP-YOLOE_plus-L</td>
+    <td>52.9</td>
+    <td>33.5644</td>
+    <td>814.825</td>
+    <td>185.3 M</td>
+    <td rowspan="4">PP-YOLOE_plus 是一种是百度飞桨视觉团队自研的云边一体高精度模型PP-YOLOE迭代优化升级的版本,通过使用Objects365大规模数据集、优化预处理,大幅提升了模型端到端推理速度。</td>
+  </tr>
+  <tr>
+    <td>PP-YOLOE_plus-M</td>
+    <td>49.8</td>
+    <td>19.843</td>
+    <td>449.261</td>
+    <td>82.3 M</td>
+
+  </tr>
+  <tr>
+    <td>PP-YOLOE_plus-S</td>
+    <td>43.7</td>
+    <td>16.8884</td>
+    <td>223.059</td>
+    <td>28.3 M</td>
+
+  </tr>
+  <tr>
+    <td>PP-YOLOE_plus-X</td>
+    <td>54.7</td>
+    <td>57.8995</td>
+    <td>1439.93</td>
+    <td>349.4 M</td>
+
+  </tr>
+  <tr>
+    <td>RT-DETR-H</td>
+    <td>56.3</td>
+    <td>114.814</td>
+    <td>3933.39</td>
+    <td>435.8 M</td>
+    <td rowspan="5">RT-DETR是第一个实时端到端目标检测器。该模型设计了一个高效的混合编码器,满足模型效果与吞吐率的双需求,高效处理多尺度特征,并提出了加速和优化的查询选择机制,以优化解码器查询的动态化。RT-DETR支持通过使用不同的解码器来实现灵活端到端推理速度。</td>
+  </tr>
+  <tr>
+    <td>RT-DETR-L</td>
+    <td>53.0</td>
+    <td>34.5252</td>
+    <td>1454.27</td>
+    <td>113.7 M</td>
+
+  </tr>
+  <tr>
+    <td>RT-DETR-R18</td>
+    <td>46.5</td>
+    <td>19.89</td>
+    <td>784.824</td>
+    <td>70.7 M</td>
+
+  </tr>
+  <tr>
+    <td>RT-DETR-R50</td>
+    <td>53.1</td>
+    <td>41.9327</td>
+    <td>1625.95</td>
+    <td>149.1 M</td>
+
+  </tr>
+  <tr>
+    <td>RT-DETR-X</td>
+    <td>54.8</td>
+    <td>61.8042</td>
+    <td>2246.64</td>
+    <td>232.9 M</td>
+
+  </tr>
+  <tr>
+    <td>YOLOv3-DarkNet53</td>
+    <td>39.1</td>
+    <td>40.1055</td>
+    <td>883.041</td>
+    <td>219.7 M</td>
+    <td rowspan="3">YOLOv3是一种实时的端到端目标检测器。它使用一个独特的单个卷积神经网络,将目标检测问题分解为一个回归问题,从而实现实时的检测。该模型采用了多个尺度的检测,提高了不同尺度目标物体的检测性能。</td>
+  </tr>
+  <tr>
+    <td>YOLOv3-MobileNetV3</td>
+    <td>31.4</td>
+    <td>18.6692</td>
+    <td>267.214</td>
+    <td>83.8 M</td>
+
+  </tr>
+  <tr>
+    <td>YOLOv3-ResNet50_vd_DCN</td>
+    <td>40.6</td>
+    <td>31.6276</td>
+    <td>856.047</td>
+    <td>163.0 M</td>
+
+  </tr>
+  <tr>
+    <td>YOLOX-L</td>
+    <td>50.1</td>
+    <td>185.691</td>
+    <td>1250.58</td>
+    <td>192.5 M</td>
+    <td rowspan="6">YOLOX模型以YOLOv3作为目标检测网络的框架,通过设计Decoupled Head、Data Aug、Anchor Free以及SimOTA组件,显著提升了模型在各种复杂场景下的检测性能。</td>
+  </tr>
+  <tr>
+    <td>YOLOX-M</td>
+    <td>46.9</td>
+    <td>123.324</td>
+    <td>688.071</td>
+    <td>90.0 M</td>
+
+  </tr>
+  <tr>
+    <td>YOLOX-N</td>
+    <td>26.1</td>
+    <td>79.1665</td>
+    <td>155.59</td>
+    <td>3.4 M</td>
+
+  </tr>
+  <tr>
+    <td>YOLOX-S</td>
+    <td>40.4</td>
+    <td>184.828</td>
+    <td>474.446</td>
+    <td>32.0 M</td>
+
+  </tr>
+  <tr>
+    <td>YOLOX-T</td>
+    <td>32.9</td>
+    <td>102.748</td>
+    <td>212.52</td>
+    <td>18.1 M</td>
+
+  </tr>
+  <tr>
+    <td>YOLOX-X</td>
+    <td>51.8</td>
+    <td>227.361</td>
+    <td>2067.84</td>
+    <td>351.5 M</td>
+
+  </tr>
+</table>
+
 
 **Note: The accuracy metrics above are mAP(0.5:0.95) on the [COCO2017](https://cocodataset.org/#home) validation set. GPU inference times for all models are measured on an NVIDIA Tesla T4 machine with FP32 precision; CPU inference speeds are measured on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 
 </details>
 
@@ -116,7 +384,7 @@ paddlex --pipeline ./object_detection.yaml --input general_object_detection_002.
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/object_detection/03.png)
 
-The visualized image is saved in the `output` directory by default; you can customize it via `--save_path`.
+The visualized image is not saved by default; you can specify a save path via `--save_path`, and all results will then be saved to that path.
 
 #### 2.2.2 Python Script Integration
 A few lines of code are enough to run fast pipeline inference. Taking the general object detection pipeline as an example:
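For reference, here is a minimal sketch of what this integration typically looks like; the demo file name and output directory are illustrative assumptions, and the result-handling calls follow the pattern used by the other pipeline snippets in this commit:

```python
from paddlex import create_pipeline

# Create the general object detection pipeline with its default weights.
pipeline = create_pipeline(pipeline="object_detection")

# Run inference on an assumed local demo image.
output = pipeline.predict("general_object_detection_002.png")

for res in output:
    res.print()                   # print the structured prediction result
    res.save_to_img("./output/")  # save the visualized image to a chosen path
```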
@@ -186,7 +454,7 @@ for res in output:
 
 In addition, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In real production environments, many applications have strict requirements for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. PaddleX therefore provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed procedure, see the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy.md).
+🚀 **High-Performance Inference**: In real production environments, many applications have strict requirements for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. PaddleX therefore provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed procedure, see the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in real production environments. By encapsulating inference functionality as services, clients can access them via network requests to obtain inference results. PaddleX supports low-cost service-oriented deployment of pipelines; for the detailed procedure, see the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy.md).
 
@@ -697,9 +965,9 @@ PaddleX supports multiple hardware devices such as NVIDIA GPU, Kunlun XPU, Ascend NPU, and Cambricon MLU
 ```
 paddlex --pipeline object_detection --input general_object_detection_002.png --device gpu:0
 ```
-At this point, if you want to switch the hardware to Ascend NPU, simply change `--device` in the command to `npu`:
+At this point, if you want to switch the hardware to Ascend NPU, simply change `--device` in the command to `npu:0`:
 
 ```
 paddlex --pipeline object_detection --input general_object_detection_002.png --device npu:0
 ```
-If you want to use the general object detection pipeline on more kinds of hardware, see the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/installation_other_devices.md).
+If you want to use the general object detection pipeline on more kinds of hardware, see the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).

+ 37 - 35
docs/pipeline_usage/tutorials/cv_pipelines/object_detection_en.md

@@ -5,7 +5,8 @@
 ## 1. Introduction to General Object Detection Pipeline
 Object detection aims to identify the categories and locations of multiple objects in images or videos by generating bounding boxes to mark these objects. Unlike simple image classification, object detection not only requires recognizing what objects are present in an image, such as people, cars, and animals, but also accurately determining the specific position of each object within the image, typically represented by rectangular boxes. This technology is widely used in autonomous driving, surveillance systems, smart photo albums, and other fields, relying on deep learning models (e.g., YOLO, Faster R-CNN) that can efficiently extract features and perform real-time detection, significantly enhancing the computer's ability to understand image content.
 
-![](/tmp/images/pipelines/object_detection/01.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/object_detection/01.png)
+
 
 
 <details>
@@ -16,7 +17,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <th>Model</th>
     <th>mAP(%)</th>
     <th>GPU Inference Time (ms)</th>
-    <th>CPU Inference Time</th>
+    <th>CPU Inference Time (ms)</th>
     <th>Model Size (M)</th>
     <th>Description</th>
   </tr>
@@ -34,7 +35,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>-</td>
     <td>-</td>
     <td>246.2 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>CenterNet-DLA-34</td>
@@ -50,7 +51,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>-</td>
     <td>-</td>
     <td>319.7 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>DETR-R50</td>
@@ -74,7 +75,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>-</td>
     <td>-</td>
     <td>148.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNet50-vd-FPN</td>
@@ -82,7 +83,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>-</td>
     <td>-</td>
     <td>148.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNet50-vd-SSLDv2-FPN</td>
@@ -90,7 +91,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>-</td>
     <td>-</td>
     <td>148.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNet50</td>
@@ -98,7 +99,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>-</td>
     <td>-</td>
     <td>120.2 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNet101-FPN</td>
@@ -106,7 +107,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>-</td>
     <td>-</td>
     <td>216.3 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNet101</td>
@@ -114,7 +115,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>-</td>
     <td>-</td>
     <td>188.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-ResNeXt101-vd-FPN</td>
@@ -122,7 +123,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>-</td>
     <td>-</td>
     <td>360.6 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FasterRCNN-Swin-Tiny-FPN</td>
@@ -130,7 +131,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>-</td>
     <td>-</td>
     <td>159.8 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>FCOS-ResNet50</td>
@@ -154,7 +155,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>16.2311</td>
     <td>71.7257</td>
     <td>16.8 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>PicoDet-S</td>
@@ -162,7 +163,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>14.097</td>
     <td>37.6563</td>
     <td>4.4 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>PicoDet-XS</td>
@@ -170,7 +171,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>13.8102</td>
     <td>48.3139</td>
     <td>5.7 M</td>
-    <td></td>
+
   </tr>
     <tr>
     <td>PP-YOLOE_plus-L</td>
@@ -186,7 +187,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>19.843</td>
     <td>449.261</td>
     <td>82.3 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>PP-YOLOE_plus-S</td>
@@ -194,7 +195,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>16.8884</td>
     <td>223.059</td>
     <td>28.3 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>PP-YOLOE_plus-X</td>
@@ -202,7 +203,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>57.8995</td>
     <td>1439.93</td>
     <td>349.4 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>RT-DETR-H</td>
@@ -218,7 +219,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>34.5252</td>
     <td>1454.27</td>
     <td>113.7 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>RT-DETR-R18</td>
@@ -226,7 +227,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>19.89</td>
     <td>784.824</td>
     <td>70.7 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>RT-DETR-R50</td>
@@ -234,7 +235,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>41.9327</td>
     <td>1625.95</td>
     <td>149.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>RT-DETR-X</td>
@@ -242,7 +243,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>61.8042</td>
     <td>2246.64</td>
     <td>232.9 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOv3-DarkNet53</td>
@@ -258,7 +259,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>18.6692</td>
     <td>267.214</td>
     <td>83.8 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOv3-ResNet50_vd_DCN</td>
@@ -266,7 +267,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>31.6276</td>
     <td>856.047</td>
     <td>163.0 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOX-L</td>
@@ -282,7 +283,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>123.324</td>
     <td>688.071</td>
     <td>90.0 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOX-N</td>
@@ -290,7 +291,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>79.1665</td>
     <td>155.59</td>
     <td>3.4 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOX-S</td>
@@ -298,7 +299,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>184.828</td>
     <td>474.446</td>
     <td>32.0 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOX-T</td>
@@ -306,7 +307,7 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>102.748</td>
     <td>212.52</td>
     <td>18.1 M</td>
-    <td></td>
+
   </tr>
   <tr>
     <td>YOLOX-X</td>
@@ -314,12 +315,13 @@ Object detection aims to identify the categories and locations of multiple objec
     <td>227.361</td>
     <td>2067.84</td>
     <td>351.5 M</td>
-    <td></td>
+
   </tr>
 </table>
 
 **Note: The precision metrics mentioned are based on the [COCO2017](https://cocodataset.org/#home) validation set mAP(0.5:0.95). All model GPU inference times are measured on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speeds are based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 
+
 </details>
 
 ## 2. Quick Start
@@ -328,7 +330,7 @@ PaddleX's pre-trained model pipelines allow for quick experience of their effect
 ### 2.1 Online Experience
 You can [experience the General Object Detection Pipeline online](https://aistudio.baidu.com/community/app/70230/webUI) using the demo images provided by the official source, for example:
 
-![](/tmp/images/pipelines/object_detection/02.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/object_detection/02.png)
 
 If you are satisfied with the pipeline's performance, you can directly integrate and deploy it. If not, you can also use your private data to **fine-tune the model within the pipeline**.
 
@@ -380,9 +382,9 @@ After running, the result will be:
 {'img_path': '/root/.paddlex/predict_input/general_object_detection_002.png', 'boxes': [{'cls_id': 49, 'label': 'orange', 'score': 0.8188097476959229, 'coordinate': [661, 93, 870, 305]}, {'cls_id': 47, 'label': 'apple', 'score': 0.7743489146232605, 'coordinate': [76, 274, 330, 520]}, {'cls_id': 47, 'label': 'apple', 'score': 0.7270504236221313, 'coordinate': [285, 94, 469, 297]}, {'cls_id': 46, 'label': 'banana', 'score': 0.5570532083511353, 'coordinate': [310, 361, 685, 712]}, {'cls_id': 47, 'label': 'apple', 'score': 0.5484835505485535, 'coordinate': [764, 285, 924, 440]}, {'cls_id': 47, 'label': 'apple', 'score': 0.5160726308822632, 'coordinate': [853, 169, 987, 303]}, {'cls_id': 60, 'label': 'dining table', 'score': 0.5142655968666077, 'coordinate': [0, 0, 1072, 720]}, {'cls_id': 47, 'label': 'apple', 'score': 0.5101479291915894, 'coordinate': [57, 23, 213, 176]}]}
 ```
 
-![](/tmp/images/pipelines/object_detection/03.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/object_detection/03.png)
 
-The visualized images are saved in the `output` directory by default, but you can customize this with `--save_path`.
+The visualized image is not saved by default. You can customize the save path through `--save_path`, and all results will then be saved in the specified path.
 
 #### 2.2.2 Integration via Python Scripts
 A few lines of code are all you need to quickly perform inference with the pipeline. Taking General Object Detection as an example:
@@ -453,7 +455,7 @@ If you need to directly apply the pipeline in your Python project, refer to the
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies, especially response speed, to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing to significantly speed up the end-to-end process. Refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy.md) for detailed high-performance deployment procedures.
+🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies, especially response speed, to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing to significantly speed up the end-to-end process. Refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy.md) for detailed High-Performance Inference procedures.
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving low-cost service-oriented deployment of pipelines. Refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy.md) for detailed service-oriented deployment procedures.
 
@@ -944,9 +946,9 @@ For example, if you use an NVIDIA GPU for inference of the General Object Detect
 ```bash
 paddlex --pipeline object_detection --input general_object_detection_002.png --device gpu:0
```
-At this point, if you wish to switch the hardware to Ascend NPU, simply modify the `--device` in the Python command to `npu`:
+At this point, if you wish to switch the hardware to Ascend NPU, simply modify the `--device` in the Python command to `npu:0`:
 
 ```bash
 paddlex --pipeline object_detection --input general_object_detection_002.png --device npu:0
 ```
-If you want to use the General Object Detection Pipeline on more types of hardware, please refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/installation_other_devices_en.md).
+If you want to use the General Object Detection Pipeline on more types of hardware, please refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide_en.md).

+ 7 - 7
docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.md

@@ -3,7 +3,7 @@
 # General Semantic Segmentation Pipeline Tutorial
 
 ## 1. Introduction to the General Semantic Segmentation Pipeline
-Semantic segmentation is a computer vision technique that assigns each pixel in an image to a specific category, enabling a fine-grained understanding of image content. It not only identifies the types of objects in an image but also classifies every pixel, so that regions of the same category are fully labeled. For example, in a street scene image, semantic segmentation can distinguish pedestrians, cars, sky, and road pixel by pixel, forming a detailed label map. The technology is widely used in autonomous driving, medical image analysis, and human-computer interaction, typically relying on deep learning models (such as FCN and U-Net) that use convolutional neural networks (CNNs) to extract features and achieve high-precision pixel-level classification, providing a foundation for further intelligent analysis.
+Semantic segmentation is a computer vision technique that assigns each pixel in an image to a specific category, enabling a fine-grained understanding of image content. It not only identifies the types of objects in an image but also classifies every pixel, so that regions of the same category are fully labeled. For example, in a street scene image, semantic segmentation can distinguish pedestrians, cars, sky, and road pixel by pixel, forming a detailed label map. The technology is widely used in autonomous driving, medical image analysis, and human-computer interaction, typically relying on deep learning models (such as SegFormer) that use convolutional neural networks (CNNs) or vision transformers to extract features and achieve high-precision pixel-level classification, providing a foundation for further intelligent analysis.
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/semantic_segmentation/01.png)
 
@@ -12,7 +12,7 @@
 <details>
   <summary> 👉Model List Details</summary>
 
-|Model Name|mIoU(%)|GPU Inference Time (ms)|CPU Inference Time|Model Size (M)|
+|Model Name|mIoU(%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|
 |-|-|-|-|-|
 |Deeplabv3_Plus-R50 |80.36|61.0531|1513.58|94.9 M|
 |Deeplabv3_Plus-R101|81.10|100.026|2460.71|162.5 M|
@@ -31,7 +31,7 @@
 
 **Note: The accuracy metrics above are mIoU on the [Cityscapes](https://www.cityscapes-dataset.com/) dataset. GPU inference times for all models are measured on an NVIDIA Tesla T4 machine with FP32 precision; CPU inference speeds are measured on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 
-|Model Name|mIoU(%)|GPU Inference Time (ms)|CPU Inference Time|Model Size (M)|
+|Model Name|mIoU(%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|
 |-|-|-|-|-|
 |SeaFormer_base(slice)|40.92|24.4073|397.574|30.8 M|
 |SeaFormer_large (slice)|43.66|27.8123|550.464|49.8 M|
@@ -98,7 +98,7 @@ paddlex --pipeline ./semantic_segmentation.yaml --input semantic_segmentation/ma
 {'img_path': '/root/.paddlex/predict_input/general_object_detection_002.png'}
 ```
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/semantic_segmentation/03.png)
-The visualized image is saved in the `output` directory by default; you can customize it via `--save_path`.
+The visualized image is not saved by default; you can specify a save path via `--save_path`, and all results will then be saved to that path.
 
 #### 2.2.2 Python Script Integration
 A few lines of code are enough to run fast pipeline inference. Taking the general semantic segmentation pipeline as an example:
@@ -167,7 +167,7 @@ for res in output:
 
 In addition, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In real production environments, many applications have strict requirements for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. PaddleX therefore provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed procedure, see the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy.md).
+🚀 **High-Performance Inference**: In real production environments, many applications have strict requirements for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. PaddleX therefore provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed procedure, see the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in real production environments. By encapsulating inference functionality as services, clients can access them via network requests to obtain inference results. PaddleX supports low-cost service-oriented deployment of pipelines; for the detailed procedure, see the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy.md).
 
@@ -646,9 +646,9 @@ PaddleX supports multiple hardware devices such as NVIDIA GPU, Kunlun XPU, Ascend NPU, and Cambricon MLU
 ```
 paddlex --pipeline semantic_segmentation --input semantic_segmentation/makassaridn-road_demo.png --device gpu:0
 ```
-At this point, if you want to switch the hardware to Ascend NPU, simply change `--device` in the command to `npu`:
+At this point, if you want to switch the hardware to Ascend NPU, simply change `--device` in the command to `npu:0`:
 
 ```
 paddlex --pipeline semantic_segmentation --input semantic_segmentation/makassaridn-road_demo.png --device npu:0
 ```
-If you want to use the general semantic segmentation pipeline on more kinds of hardware, see the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/installation_other_devices.md).
+If you want to use the general semantic segmentation pipeline on more kinds of hardware, see the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).

+ 11 - 11
docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation_en.md

@@ -3,14 +3,14 @@
 # General Semantic Segmentation Pipeline Tutorial
 
 ## 1. Introduction to the General Semantic Segmentation Pipeline
-Semantic segmentation is a computer vision technique that aims to assign each pixel in an image to a specific category, enabling a detailed understanding of the image content. Semantic segmentation not only identifies the types of objects in an image but also classifies each pixel, allowing regions of the same category to be fully labeled. For example, in a street scene image, semantic segmentation can distinguish pedestrians, cars, the sky, and roads pixel by pixel, forming a detailed label map. This technology is widely used in autonomous driving, medical image analysis, and human-computer interaction, often relying on deep learning models (such as FCN, U-Net, etc.) to extract features and achieve high-precision pixel-level classification, providing a foundation for further intelligent analysis.
+Semantic segmentation is a computer vision technique that aims to assign each pixel in an image to a specific category, enabling a detailed understanding of the image content. Semantic segmentation not only identifies the types of objects in an image but also classifies each pixel, allowing regions of the same category to be fully labeled. For example, in a street scene image, semantic segmentation can distinguish pedestrians, cars, the sky, and roads pixel by pixel, forming a detailed label map. This technology is widely used in autonomous driving, medical image analysis, and human-computer interaction, often relying on deep learning models (such as SegFormer) that use CNNs or Transformers to extract features and achieve high-precision pixel-level classification, providing a foundation for further intelligent analysis.
 
-![](/tmp/images/pipelines/semantic_segmentation/01.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/semantic_segmentation/01.png)
 
 <details>
    <summary> 👉 Model List Details</summary>
 
-|Model Name|mIoU (%)|GPU Inference Time (ms)|CPU Inference Time|Model Size (M)|
+|Model Name|mIoU (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|
 |-|-|-|-|-|
 |Deeplabv3_Plus-R50 |80.36|61.0531|1513.58|94.9 M|
 |Deeplabv3_Plus-R101|81.10|100.026|2460.71|162.5 M|
@@ -30,7 +30,7 @@ Semantic segmentation is a computer vision technique that aims to assign each pi
 **The accuracy metrics of the above models are measured on the [Cityscapes](https://www.cityscapes-dataset.com/) dataset. GPU inference time is based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speed is based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.**
 
 
-|Model Name|mIoU (%)|GPU Inference Time (ms)|CPU Inference Time|Model Size (M)|
+|Model Name|mIoU (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|
 |-|-|-|-|-|
 |SeaFormer_base(slice)|40.92|24.4073|397.574|30.8 M|
 |SeaFormer_large (slice)|43.66|27.8123|550.464|49.8 M|
@@ -47,7 +47,7 @@ PaddleX's pre-trained model pipelines can be quickly experienced. You can experi
 ### 2.1 Online Experience
 You can [experience online](https://aistudio.baidu.com/community/app/100062/webUI?source=appCenter) the effects of the General Semantic Segmentation Pipeline, using the official demo images for recognition, for example:
 
-![](/tmp/images/pipelines/semantic_segmentation/02.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/semantic_segmentation/02.png)
 
 If you are satisfied with the pipeline's performance, you can directly integrate and deploy it. If not, you can also use your private data to **fine-tune the model in the pipeline online**.
 
@@ -100,9 +100,9 @@ After running, the result is:
 {'img_path': '/root/.paddlex/predict_input/general_object_detection_002.png'}
 ```
 
-![](/tmp/images/pipelines/semantic_segmentation/03.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/semantic_segmentation/03.png)
 
-The visualization images are saved in the `output` directory by default, and you can also customize it through `--save_path`.
+The visualized image is not saved by default. You can customize the save path through `--save_path`, and all results will then be saved in the specified path.
 
 #### 2.2.2 Python Script Integration
 A few lines of code are enough to run fast pipeline inference. Taking the general semantic segmentation pipeline as an example:
@@ -172,7 +172,7 @@ If you need to directly apply the pipeline in your Python project, refer to the
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins that aim to deeply optimize model inference and pre/post-processing for significant end-to-end speedups. For detailed high-performance deployment procedures, refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
+🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins that aim to deeply optimize model inference and pre/post-processing for significant end-to-end speedups. For detailed High-Performance Inference procedures, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving low-cost service-oriented deployment of pipelines. For detailed service-oriented deployment procedures, refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy_en.md).
 
@@ -595,7 +595,7 @@ echo "Output image saved at " . $output_image_path . "\n";
 📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
 Choose the appropriate deployment method for your model pipeline based on your needs, and proceed with subsequent AI application integration.
 
-## 4. Customization and Fine-tuning
+## 4. Custom Development
 If the default model weights provided by the general semantic segmentation pipeline do not meet your requirements for accuracy or speed in your specific scenario, you can try to further fine-tune the existing model using **your own domain-specific or application-specific data** to improve the recognition performance of the general semantic segmentation pipeline in your scenario.
 
 ### 4.1 Model Fine-tuning
@@ -624,9 +624,9 @@ For example, if you use an NVIDIA GPU for semantic segmentation pipeline inferen
 ```bash
 paddlex --pipeline semantic_segmentation --input makassaridn-road_demo.png --device gpu:0
```
-At this point, if you wish to switch the hardware to Ascend NPU, simply modify the `--device` flag in the Python command to `npu`:
+At this point, if you wish to switch the hardware to Ascend NPU, simply modify the `--device` flag in the Python command to `npu:0`:
 
 ```bash
 paddlex --pipeline semantic_segmentation --input makassaridn-road_demo.png --device npu:0
 ```
-If you want to use the General Semantic Segmentation Pipeline on a wider range of hardware, please refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/installation_other_devices_en.md).
+If you want to use the General Semantic Segmentation Pipeline on a wider range of hardware, please refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide_en.md).

+ 6 - 6
docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection.md

@@ -12,7 +12,7 @@
 <details>
   <summary> 👉Model List Details</summary>
 
-|Model Name|mAP(%)|GPU Inference Time (ms)|CPU Inference Time|Model Size (M)|
+|Model Name|mAP(%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|
 |-|-|-|-|-|
 |PP-YOLOE_plus_SOD-S|25.1|65.4608|324.37|77.3 M|
 |PP-YOLOE_plus_SOD-L|31.9|57.1448|1006.98|325.0 M|
@@ -72,7 +72,7 @@ paddlex --pipeline ./small_object_detection.yaml --input small_object_detection.
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/small_object_detection/02.png)
 
-The visualized image is saved in the `output` directory by default; you can customize it via `--save_path`.
+The visualized image is not saved by default; you can specify a save path via `--save_path`, and all results will then be saved to that path.
 
 ### 2.2 Python Script Integration
 A few lines of code are enough to run fast pipeline inference. Taking the general small object detection pipeline as an example:
@@ -98,7 +98,7 @@ for res in output:
 |-|-|-|-|
 |`pipeline`|Pipeline name or path to a pipeline configuration file. If a pipeline name, it must be a pipeline supported by PaddleX.|`str`|None|
 |`device`|Device for pipeline model inference. Supports "gpu" and "cpu".|`str`|`gpu`|
-|`enable_hpi`|Whether to enable high-performance inference; available only when the pipeline supports it.|`bool`|`False`|
+|`use_hpip`|Whether to enable high-performance inference; available only when the pipeline supports it.|`bool`|`False`|
 
 (2) Call the `predict` method of the pipeline object to run inference. The `predict` method takes a parameter `x` for the data to be predicted; multiple input types are supported, as shown in the sketch below:
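As an editorial illustration, here is a minimal sketch of instantiating the pipeline with the renamed `use_hpip` switch and calling `predict`; the input file name reuses the demo image from the CLI example in this document:

```python
from paddlex import create_pipeline

# `use_hpip` replaces the old `enable_hpi` flag; it only takes effect when
# the pipeline supports the high-performance inference plugin.
pipeline = create_pipeline(pipeline="small_object_detection", use_hpip=False)

# `x` accepts several input forms; a single local image path is assumed here.
output = pipeline.predict("small_object_detection.jpg")

for res in output:
    res.print()
```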
 
@@ -143,7 +143,7 @@ for res in output:
 
 In addition, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In real production environments, many applications have strict requirements for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. PaddleX therefore provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed procedure, see the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy.md).
+🚀 **High-Performance Inference**: In real production environments, many applications have strict requirements for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. PaddleX therefore provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed procedure, see the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in real production environments. By encapsulating inference functionality as services, clients can access them via network requests to obtain inference results. PaddleX supports low-cost service-oriented deployment of pipelines; for the detailed procedure, see the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy.md).
 
@@ -655,9 +655,9 @@ PaddleX supports multiple hardware devices such as NVIDIA GPU, Kunlun XPU, Ascend NPU, and Cambricon MLU
 ```
paddlex --pipeline small_object_detection --input small_object_detection.jpg --device gpu:0
 ```
-At this point, if you want to switch the hardware to Ascend NPU, simply change `--device` in the command to `npu`:
+At this point, if you want to switch the hardware to Ascend NPU, simply change `--device` in the command to `npu:0`:
 
 ```
paddlex --pipeline small_object_detection --input small_object_detection.jpg --device npu:0
 ```
-If you want to use the general small object detection pipeline on more kinds of hardware, see the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/installation_other_devices.md).
+If you want to use the general small object detection pipeline on more kinds of hardware, see the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).

File diff too large to display
+ 2 - 2
docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection_en.md


+ 17 - 7
docs/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction.md

@@ -21,7 +21,7 @@
     <th>Model</th>
     <th>Accuracy (%)</th>
     <th>GPU Inference Time (ms)</th>
-    <th>CPU Inference Time</th>
+    <th>CPU Inference Time (ms)</th>
     <th>Model Size (M)</th>
     <th>Description</th>
   </tr>
@@ -103,7 +103,7 @@
         <th>Model</th>
         <th>Recognition Avg Accuracy (%)</th>
         <th>GPU Inference Time (ms)</th>
-        <th>CPU Inference Time</th>
+        <th>CPU Inference Time (ms)</th>
         <th>Model Size (M)</th>
         <th>Description</th>
     </tr>
@@ -127,7 +127,7 @@
         <th>Model</th>
         <th>Recognition Avg Accuracy (%)</th>
         <th>GPU Inference Time (ms)</th>
-        <th>CPU Inference Time</th>
+        <th>CPU Inference Time (ms)</th>
         <th>Model Size (M)</th>
         <th>Description</th>
     </tr>
@@ -313,11 +313,11 @@ chat_result.print()
 ## 3. Development Integration/Deployment
 If the pipeline meets your requirements for inference speed and accuracy, you can proceed directly with development integration/deployment.
 
-If you need to apply the pipeline directly in your Python project, refer to the example code in [2.2 Local Experience](#22-python脚本方式集成).
+If you need to apply the pipeline directly in your Python project, refer to the example code in [2.2 Local Experience](#22-本地体验).
 
 In addition, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In real production environments, many applications have strict requirements for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. PaddleX therefore provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed procedure, see the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy.md).
+🚀 **High-Performance Inference**: In real production environments, many applications have strict requirements for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. PaddleX therefore provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed procedure, see the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in real production environments. By encapsulating inference functionality as services, clients can access them via network requests to obtain inference results. PaddleX supports low-cost service-oriented deployment of pipelines; for the detailed procedure, see the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy.md).
 
@@ -673,7 +673,7 @@ if __name__ == "__main__":
 If the default model weights provided by the document scene information extraction v3 pipeline do not deliver satisfactory accuracy or speed in your scenario, you can try to further **fine-tune** the existing models using **your own domain-specific or application-scenario data** to improve the pipeline's recognition performance in your scenario.
 
 ### 4.1 Model Fine-tuning
-Since the document scene information extraction v3 pipeline contains four modules, underwhelming pipeline performance may stem from any one of them (the text image rectification module does not yet support fine-tuning).
+Since the document scene information extraction v3 pipeline contains six modules, underwhelming pipeline performance may stem from any one of them (the text image rectification module does not yet support fine-tuning).
 
 You can analyze images with poor recognition results and fine-tune the models according to the following guidelines:
 
@@ -719,4 +719,14 @@ predict = create_pipeline(
     )
 ```
 
-If you want to use the document scene information extraction pipeline on more kinds of hardware, see the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/installation_other_devices.md).
+At this point, if you want to switch the hardware to Ascend NPU, simply change `device` in the script to `npu:0`:
+
+```python
+from paddlex import create_pipeline
+predict = create_pipeline( pipeline="PP-ChatOCRv3-doc",
+                            llm_name="ernie-3.5",
+                            llm_params = {"api_type":"qianfan","ak":"","sk":""},  ## Fill in your ak and sk here; otherwise the large language model cannot be invoked
+                            device = "npu:0" )
+```
+If you want to use the document scene information extraction pipeline on more kinds of hardware, see the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).
+

+ 20 - 13
docs/pipeline_usage/tutorials/information_extration_pipelines/document_scene_information_extraction_en.md

@@ -1,6 +1,6 @@
 [简体中文](document_scene_information_extraction.md) | English
 
-# PP-ChatOCRv3-doc Pipeline Usage Tutorial
+# PP-ChatOCRv3-doc Pipeline Tutorial
 
 ## 1. Introduction to PP-ChatOCRv3-doc Pipeline
 PP-ChatOCRv3-doc is a unique intelligent analysis solution for documents and images developed by PaddlePaddle. It combines Large Language Models (LLM) and OCR technology to provide a one-stop solution for complex document information extraction challenges such as layout analysis, rare characters, multi-page PDFs, tables, and seal recognition. By integrating with ERNIE Bot, it fuses massive data and knowledge to achieve high accuracy and wide applicability.
@@ -31,7 +31,7 @@ The **PP-ChatOCRv3-doc** pipeline includes modules for **Table Structure Recogni
     <td>522.536</td>
     <td>1845.37</td>
     <td>6.9 M</td>
-    <td>SLANet is a table structure recognition model developed by Baidu PaddlePaddle Vision Team. The model significantly improves the accuracy and inference speed of table structure recognition by adopting a CPU-friendly lightweight backbone network PP-LCNet, a high-low-level feature fusion module CSP-PAN, and a feature decoding module SLA Head that aligns structural and positional information.</td>
+    <td>SLANet is a table structure recognition model developed by Baidu PaddleX Team. The model significantly improves the accuracy and inference speed of table structure recognition by adopting a CPU-friendly lightweight backbone network PP-LCNet, a high-low-level feature fusion module CSP-PAN, and a feature decoding module SLA Head that aligns structural and positional information.</td>
   </tr>
   <tr>
     <td>SLANet_plus</td>
@@ -39,7 +39,7 @@ The **PP-ChatOCRv3-doc** pipeline includes modules for **Table Structure Recogni
     <td>522.536</td>
     <td>1845.37</td>
     <td>6.9 M</td>
-    <td>SLANet_plus is an enhanced version of SLANet, the table structure recognition model developed by Baidu PaddlePaddle Vision Team. Compared to SLANet, SLANet_plus significantly improves the recognition ability for wireless and complex tables and reduces the model's sensitivity to the accuracy of table positioning, enabling more accurate recognition even with offset table positioning.</td>
+    <td>SLANet_plus is an enhanced version of SLANet, the table structure recognition model developed by Baidu PaddleX Team. Compared to SLANet, SLANet_plus significantly improves the recognition ability for wireless and complex tables and reduces the model's sensitivity to the accuracy of table positioning, enabling more accurate recognition even with offset table positioning.</td>
   </tr>
 </table>
 
@@ -100,7 +100,7 @@ The **PP-ChatOCRv3-doc** pipeline includes modules for **Table Structure Recogni
         <th>Model</th>
         <th>Recognition Avg Accuracy (%)</th>
         <th>GPU Inference Time (ms)</th>
-        <th>CPU Inference Time</th>
+        <th>CPU Inference Time (ms)</th>
         <th>Model Size (M)</th>
         <th>Description</th>
     </tr>
@@ -123,7 +123,7 @@ The **PP-ChatOCRv3-doc** pipeline includes modules for **Table Structure Recogni
         <th>Model</th>
         <th>Recognition Avg Accuracy (%)</th>
         <th>GPU Inference Time (ms)</th>
-        <th>CPU Inference Time</th>
+        <th>CPU Inference Time (ms)</th>
         <th>Model Size (M)</th>
         <th>Description</th>
     </tr>
@@ -308,7 +308,7 @@ If you need to directly apply the pipeline in your Python project, you can refer
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In actual production environments, many applications have stringent standards for the performance metrics (especially response speed) of deployment strategies to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing to significantly speed up the end-to-end process. For detailed high-performance deployment procedures, please refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
+🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for the performance metrics (especially response speed) of deployment strategies to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing to significantly speed up the end-to-end process. For detailed High-Performance Inference procedures, please refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving low-cost service-oriented deployment of pipelines. For detailed service-oriented deployment procedures, please refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy_en.md).
 
@@ -563,11 +563,11 @@ if __name__ == "__main__":
 <br/>
 
 📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
-## 4. Customization and Fine-tuning
+## 4. Custom Development
 If the default model weights provided by the PP-ChatOCRv3-doc Pipeline do not meet your requirements in terms of accuracy or speed for your specific scenario, you can attempt to further **fine-tune** the existing models using **your own domain-specific or application-specific data** to enhance the pipeline's recognition performance in your scenario.
 
 ### 4.1 Model Fine-tuning
-Since the PP-ChatOCRv3-doc Pipeline comprises four modules, unsatisfactory performance may stem from any of these modules (note that the text image rectification module does not support customization at this time).
+Since the PP-ChatOCRv3-doc Pipeline comprises six modules, unsatisfactory performance may stem from any of these modules (note that the text image rectification module does not support customization at this time).
 
 You can analyze images with poor recognition results and follow the guidelines below for analysis and model fine-tuning:
 
@@ -575,7 +575,7 @@ You can analyze images with poor recognition results and follow the guidelines b
 * Misplaced layout elements (e.g., incorrect positioning of tables or seals) may suggest issues with the layout detection module. Consult the **Customization** section in the [Layout Detection Module Development Tutorial](../../../module_usage/tutorials/ocr_modules/layout_detection_en.md) and fine-tune the layout detection model with your private dataset.
 * Frequent undetected text (i.e., text leakage) may indicate limitations in the text detection model. Refer to the **Customization** section in the [Text Detection Module Development Tutorial](../../../module_usage/tutorials/ocr_modules/text_detection_en.md) and fine-tune the text detection model using your private dataset.
 * High text recognition errors (i.e., recognized text content does not match the actual text) suggest that the text recognition model requires improvement. Follow the **Customization** section in the [Text Recognition Module Development Tutorial](../../../module_usage/tutorials/ocr_modules/text_recognition_en.md) to fine-tune the text recognition model.
-* Frequent recognition errors in detected seal text indicate that the seal text detection model needs further refinement. Consult the **Customization** section in the [Seal Text Detection Module Development Tutorials](../../../module_usage/tutorials/ocr_modules/text_detection_en.md) to fine-tune the seal text detection model.
+* Frequent recognition errors in detected seal text indicate that the seal text detection model needs further refinement. Consult the **Customization** section in the [Seal Text Detection Module Development Tutorials](../../../module_usage/tutorials/ocr_modules/seal_text_detection_en.md) to fine-tune the seal text detection model.
 * Frequent misidentifications of document or certificate orientations with text regions suggest that the document image orientation classification model requires improvement. Refer to the **Customization** section in the [Document Image Orientation Classification Module Development Tutorial](../../../module_usage/tutorials/ocr_modules/doc_img_orientation_classification_en.md) to fine-tune the document image orientation classification model.
 
 ### 4.2 Model Deployment
@@ -600,11 +600,17 @@ Subsequently, load the modified pipeline configuration file using the command-li
 
 ## 5. Multi-hardware Support
 
-PaddleX supports various devices such as NVIDIA GPUs, Kunlun XPU, Ascend NPU, and Cambricon MLU. Only need to set the **`device` parameter** simply.
+For example, to perform inference using the PP-ChatOCRv3-doc Pipeline on an NVIDIA GPU, you run:
+```python
+from paddlex import create_pipeline
+predict = create_pipeline( pipeline="PP-ChatOCRv3-doc",
+                            llm_name="ernie-3.5",
+                            llm_params = {"api_type":"qianfan","ak":"","sk":""},  ## Please fill in your ak and sk, or you will not be able to call the large model
+                            device = "gpu:0" )
+```
 
-For example, when using the document scene information extraction v3 pipeline, changing the running device from an NVIDIA GPU to an Ascend NPU only requires modifying `device` in the script to `npu`:
+At this point, if you wish to switch the hardware to Ascend NPU, simply modify `device` in the script to `npu:0`:
 
-For example, when using the PP-ChatOCRv3-doc Pipeline, changing the running device from Nvidia GPU to Ascend NPU only requires modifying the `device`:
 
 ```python
 from paddlex import create_pipeline
@@ -616,4 +622,5 @@ predict = create_pipeline(
     )
 ```
 
-If you want to use the PP-ChatOCRv3-doc Pipeline on more types of hardware, please refer to the [PaddleX Multi-Device Usage Guide](../../../installation/installation_other_devices_en.md).
+If you want to use the PP-ChatOCRv3-doc Pipeline on more types of hardware, please refer to the [PaddleX Multi-Device Usage Guide](../../../installation/multi_devices_use_guide_en.md).
+

+ 6 - 6
docs/pipeline_usage/tutorials/ocr_pipelines/OCR.md

@@ -18,7 +18,7 @@ OCR (Optical Character Recognition) is a technology that converts text in images
     <th>Specific Model</th>
     <th>Accuracy</th>
     <th>GPU Inference Time (ms)</th>
-    <th>CPU Inference Time</th>
+    <th>CPU Inference Time (ms)</th>
     <th>Model Size (M)</th>
   </tr>
   <tr>
@@ -77,7 +77,7 @@ All of the pre-trained model pipelines provided by PaddleX can be experienced quickly; you can
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/ocr/02.png)
 
-If you are satisfied with the pipeline's performance, you can integrate and deploy it directly; you can download the deployment package from the cloud, or use the approach in [Section 2.2 Local Experience](#3-开发集成部署). If not, you can also use your private data to **fine-tune the models in the pipeline online**.
+If you are satisfied with the pipeline's performance, you can integrate and deploy it directly; you can download the deployment package from the cloud, or use the approach in [Section 2.2 Local Experience](#22-本地体验). If not, you can also use your private data to **fine-tune the models in the pipeline online**.
 
 ### 2.2 Local Experience
 > ❗ Before using the general OCR pipeline locally, make sure you have completed the PaddleX wheel installation following the [PaddleX Installation Tutorial](../../../installation/installation.md).
@@ -129,7 +129,7 @@ paddlex --pipeline ./ocr.yaml --input general_ocr_002.png
 
 The visualized result is as follows:
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/ocr/03.png)
-The visualized image is saved in the `output` directory by default; you can customize it via `--save_path`.
+The visualized image is not saved by default; you can specify a save path via `--save_path`, and all results will then be saved to that path.
 #### 2.2.2 Python Script Integration
 * A few lines of code are enough to run fast pipeline inference. Taking the general OCR pipeline as an example:
 
@@ -153,7 +153,7 @@ for res in output:
 |-|-|-|-|
 |`pipeline`|Pipeline name or path to a pipeline configuration file. If a pipeline name, it must be a pipeline supported by PaddleX.|`str`|None|
 |`device`|Device for pipeline model inference. Supports "gpu" and "cpu".|`str`|`gpu`|
-|`enable_hpi`|Whether to enable high-performance inference; available only when the pipeline supports it.|`bool`|`False`|
+|`use_hpip`|Whether to enable high-performance inference; available only when the pipeline supports it.|`bool`|`False`|
 
 (2) Call the `predict` method of the OCR pipeline object to run inference. The `predict` method takes a parameter `x` for the data to be predicted; multiple input types are supported, as illustrated in the sketch below:
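As an aside, here is a minimal sketch of two of the input forms `x` can take, reusing the demo file name from the CLI example above; the file names in the commented list form are placeholders:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="OCR")

# A single local image path.
output = pipeline.predict("general_ocr_002.png")

# A list of inputs is also accepted and processed item by item:
# output = pipeline.predict(["img_1.png", "img_2.png"])

for res in output:
    res.print()  # prints detected text polygons and recognized strings
```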
 
@@ -195,7 +195,7 @@ for res in output:
 
 In addition, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In real production environments, many applications have strict requirements for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. PaddleX therefore provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed procedure, see the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy.md).
+🚀 **High-Performance Inference**: In real production environments, many applications have strict requirements for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. PaddleX therefore provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed procedure, see the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in real production environments. By encapsulating inference functionality as services, clients can access them via network requests to obtain inference results. PaddleX supports low-cost service-oriented deployment of pipelines; for the detailed procedure, see the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy.md).
 
@@ -747,4 +747,4 @@ paddlex --pipeline OCR --input general_ocr_002.png --device gpu:0
 ```bash
 paddlex --pipeline OCR --input general_ocr_002.png --device npu:0
 ```
-If you want to use the general OCR pipeline on more kinds of hardware, see the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/installation_other_devices.md).
+If you want to use the general OCR pipeline on more kinds of hardware, see the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).

File diff too large to display
+ 7 - 4
docs/pipeline_usage/tutorials/ocr_pipelines/OCR_en.md


+ 5 - 5
docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.md

@@ -17,7 +17,7 @@
 
 **Layout Detection Module Models:**
 
-|Model Name|mAP(%)|GPU Inference Time (ms)|CPU Inference Time|Model Size (M)|
+|Model Name|mAP(%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|
 |-|-|-|-|-|
 |RT-DETR-H_layout_17cls|92.6|115.126|3827.25|470.2M|
 
@@ -106,7 +106,7 @@ paddlex --pipeline ./formula_recognition.yaml --input general_formula_recognitio
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/formula_recognition/02.jpg)
 
-The visualized image is saved in the `output` directory by default; you can customize it via `--save_path`. In addition, you can visualize the recognized LaTeX code via the website [https://www.lddgo.net/math/latex-to-image](https://www.lddgo.net/math/latex-to-image).
+The visualized image is not saved by default; you can specify a save path via `--save_path`, and all results will then be saved to that path. In addition, you can visualize the recognized LaTeX code via the website [https://www.lddgo.net/math/latex-to-image](https://www.lddgo.net/math/latex-to-image).
 
 ### 2.2 Python Script Integration
 A few lines of code are enough to run fast pipeline inference. Taking the general formula recognition pipeline as an example:
@@ -132,7 +132,7 @@ for res in output:
 |-|-|-|-|
 |`pipeline`|Pipeline name or path to a pipeline configuration file. If a pipeline name, it must be a pipeline supported by PaddleX.|`str`|None|
 |`device`|Device for pipeline model inference. Supports "gpu" and "cpu".|`str`|`gpu`|
-|`enable_hpi`|Whether to enable high-performance inference; available only when the pipeline supports it.|`bool`|`False`|
+|`use_hpip`|Whether to enable high-performance inference; available only when the pipeline supports it.|`bool`|`False`|
 
 (2) Call the `predict` method of the general formula recognition pipeline object to run inference. The `predict` method takes a parameter `x` for the data to be predicted; multiple input types are supported, as illustrated in the sketch below:
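For orientation, here is a minimal sketch of this call, reusing the demo image from the CLI example above; the fields named in the comment follow the output format described earlier in this document:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="formula_recognition")

# Reuse the demo image from the CLI example above.
output = pipeline.predict("general_formula_recognition.png")

for res in output:
    # Each result carries the detected formula regions (`dt_polys`)
    # and the recognized LaTeX code for each region.
    res.print()
```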
 
@@ -174,7 +174,7 @@ for res in output:
 
 In addition, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In real production environments, many applications have strict requirements for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. PaddleX therefore provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed procedure, see the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy.md).
+🚀 **High-Performance Inference**: In real production environments, many applications have strict requirements for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. PaddleX therefore provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For the detailed procedure, see the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in real production environments. By encapsulating inference functionality as services, clients can access them via network requests to obtain inference results. PaddleX supports low-cost service-oriented deployment of pipelines; for the detailed procedure, see the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy.md).
 
@@ -724,4 +724,4 @@ paddlex --pipeline formula_recognition --input general_formula_recognition.png -
 ```bash
 paddlex --pipeline formula_recognition --input general_formula_recognition.png --device npu:0
 ```
-If you want to use the general formula recognition pipeline on more kinds of hardware, see the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/installation_other_devices.md).
+If you want to use the general formula recognition pipeline on more kinds of hardware, see the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).

+ 6 - 6
docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition_en.md

@@ -17,7 +17,7 @@ Formula recognition is a technology that automatically identifies and extracts L
 
 **Layout Detection Module Models**:
 
-| Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time | Model Size (M) |
+| Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size (M) |
 |-|-|-|-|-|
 | RT-DETR-H_layout_17cls | 92.6 | 115.126 | 3827.25 | 470.2M |
 
@@ -105,7 +105,7 @@ Where `dt_polys` represents the coordinates of the detected formula area, and `r
 The visualization result is as follows:
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/formula_recognition/02.jpg)
 
-The visualization image is saved in the `output` directory by default, and you can also customize it through `--save_path`. Additionally, you can visualize the recognized LaTeX code through the website [https://www.lddgo.net/math/latex-to-image](https://www.lddgo.net/math/latex-to-image).
+The visualized image is not saved by default. You can customize the save path through `--save_path`, and all results will then be saved in the specified path. Additionally, you can visualize the recognized LaTeX code through the website [https://www.lddgo.net/math/latex-to-image](https://www.lddgo.net/math/latex-to-image).
 
 
 #### 2.2 Python Script Integration
@@ -131,7 +131,7 @@ The Python script above executes the following steps:
 |-|-|-|-|
 |`pipeline`| The name of the pipeline or the path to the pipeline configuration file. If it is the name of the pipeline, it must be supported by PaddleX. |`str`|None|
 |`device`| The device for pipeline model inference. Supports: "gpu", "cpu". |`str`|`gpu`|
-|`enable_hpi`| Whether to enable high-performance inference, only available if the pipeline supports it. |`bool`|`False`|
+|`use_hpip`| Whether to enable high-performance inference, only available if the pipeline supports it. |`bool`|`False`|
 
 (2)Invoke the `predict` method of the general formula recognition pipeline object for inference prediction: The `predict` method parameter is `x`, which is used to input data to be predicted, supporting multiple input methods, as shown in the following examples:
 
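For reference, a minimal sketch that combines steps (1) and (2) with the parameters documented above (setting `use_hpip=True` assumes the high-performance inference plugin is installed):

```python
from paddlex import create_pipeline

# Step (1): instantiate the pipeline object.
pipeline = create_pipeline(pipeline="formula_recognition", device="gpu", use_hpip=False)

# Step (2): feed the input data x to predict(); a local image path is the simplest case.
output = pipeline.predict("general_formula_recognition.png")
for res in output:
    res.print()  # structured result with dt_polys and rec_formula
```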
@@ -174,7 +174,7 @@ If you need to apply the general formula recognition pipeline directly in your P
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end speedups. For detailed high-performance deployment procedures, refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
+🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end speedups. For detailed high-performance inference procedures, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving low-cost service-oriented deployment of pipelines. For detailed service-oriented deployment procedures, refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy_en.md).
 
@@ -666,7 +666,7 @@ print_r($result["texts"]);
 You can choose the appropriate deployment method based on your needs to proceed with subsequent AI application integration.
 
 
-## 4. Customization and Fine-tuning
+## 4. Custom Development
 If the default model weights provided by the general formula recognition pipeline do not meet your requirements for accuracy or speed in your specific scenario, you can try to further fine-tune the existing models using **your own domain-specific or application-specific data** to improve the recognition performance of the general formula recognition pipeline in your scenario.
 
 ### 4.1 Model Fine-tuning
@@ -706,4 +706,4 @@ Now, if you want to switch the hardware to Ascend NPU, you only need to modify t
 paddlex --pipeline formula_recognition --input general_formula_recognition.png --device npu:0
 ```
 
-If you want to use the general formula recognition pipeline on more types of hardware, please refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/installation_other_devices_en.md).
+If you want to use the general formula recognition pipeline on more types of hardware, please refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/multi_devices_use_guide_en.md).

+ 16 - 12
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.md

@@ -21,7 +21,7 @@
     <th>模型</th>
     <th>精度(%)</th>
     <th>GPU推理耗时 (ms)</th>
-    <th>CPU推理耗时</th>
+    <th>CPU推理耗时(ms)</th>
     <th>模型存储大小 (M)</th>
     <th>介绍</th>
   </tr>
@@ -47,7 +47,7 @@
 
 **版面区域分析模块模型:**
 
-|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时|模型存储大小(M)|
+|模型名称|mAP(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小(M)|
 |-|-|-|-|-|
 |PicoDet_layout_1x|86.8|13.036|91.2634|7.4M |
 |PicoDet-L_layout_3cls|89.3|15.7425|159.771|22.6 M|
@@ -58,7 +58,7 @@
 
 **文本检测模块模型:**
 
-|模型名称|检测Hmean(%)|GPU推理耗时(ms)|CPU推理耗时|模型存储大小(M)|
+|模型名称|检测Hmean(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小(M)|
 |-|-|-|-|-|
 |PP-OCRv4_mobile_det |77.79|10.6923|120.177|4.2 M|
 |PP-OCRv4_server_det |82.69|83.3501|2434.01|100.1M|
@@ -67,7 +67,7 @@
 
 **文本识别模块模型:**
 
-|模型名称|识别Avg Accuracy(%)|GPU推理耗时(ms)|CPU推理耗时|模型存储大小(M)|
+|模型名称|识别Avg Accuracy(%)|GPU推理耗时(ms)|CPU推理耗时(ms)|模型存储大小(M)|
 |-|-|-|-|-|
 |PP-OCRv4_mobile_rec |78.20|7.95018|46.7868|10.6 M|
 |PP-OCRv4_server_rec |79.20|7.19439|140.179|71.2 M|
@@ -194,7 +194,7 @@ paddlex --pipeline ./table_recognition.yaml --input table_recognition.jpg
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/table_recognition/03.png)
 
-可视化图片默认保存在 `output` 目录下,您可以通过 `--save_path` 进行自定义。
+可视化图片默认不进行保存,您可以通过 `--save_path` 自定义保存路径,随后所有结果将被保存在指定路径下。
 
 ### 2.2 Python脚本方式集成
 几行代码即可完成产线的快速推理,以通用表格识别产线为例:
@@ -207,8 +207,9 @@ pipeline = create_pipeline(pipeline="table_recognition")
 output = pipeline.predict("table_recognition.jpg")
 for res in output:
     res.print() ## 打印预测的结构化输出
-    res.save_to_csv("./output/") ## 保存csv格式结果
+    res.save_to_img("./output/") ## 保存img格式结果
     res.save_to_xlsx("./output/") ## 保存表格格式结果
+    res.save_to_html("./output/") ## 保存html结果
 ```
 得到的结果与命令行方式相同。
 
@@ -220,7 +221,7 @@ for res in output:
 |-|-|-|-|
 |`pipeline`|产线名称或是产线配置文件路径。如为产线名称,则必须为 PaddleX 所支持的产线。|`str`|无|
 |`device`|产线模型推理设备。支持:“gpu”,“cpu”。|`str`|`gpu`|
-|`enable_hpi`|是否启用高性能推理,仅当该产线支持高性能推理时可用。|`bool`|`False`|
+|`use_hpip`|是否启用高性能推理,仅当该产线支持高性能推理时可用。|`bool`|`False`|
 
 (2)调用产线对象的 `predict` 方法进行推理预测:`predict` 方法参数为`x`,用于输入待预测数据,支持多种输入方式,具体示例如下:
 
@@ -240,10 +241,12 @@ for res in output:
 
 |方法|说明|方法参数|
 |-|-|-|
-|save_to_csv|将结果保存为csv格式的文件|`- save_path`:str类型,保存的文件路径,当为目录时,保存文件命名与输入文件类型命名一致;|
+|save_to_img|将结果保存为img格式的文件|`- save_path`:str类型,保存的文件路径,当为目录时,保存文件命名与输入文件类型命名一致;|
 |save_to_html|将结果保存为html格式的文件|`- save_path`:str类型,保存的文件路径,当为目录时,保存文件命名与输入文件类型命名一致;|
 |save_to_xlsx|将结果保存为表格格式的文件|`- save_path`:str类型,保存的文件路径,当为目录时,保存文件命名与输入文件类型命名一致;|
 
+其中,`save_to_img` 能够保存可视化结果(包括OCR结果图片、版面分析结果图片、表格结构识别结果图片),`save_to_html` 能够将表格直接保存为html文件(包括文本和表格格式),`save_to_xlsx` 能够将表格保存为Excel格式文件(包括文本和格式)。
+ 
 若您获取了配置文件,即可对表格识别产线各项配置进行自定义,只需要修改 `create_pipeline` 方法中的 `pipeline` 参数值为产线配置文件路径即可。
 
 例如,若您的配置文件保存在 `./my_path/table_recognition.yaml` ,则只需执行:
@@ -254,17 +257,18 @@ pipeline = create_pipeline(pipeline="./my_path/table_recognition.yaml")
 output = pipeline.predict("table_recognition.jpg")
 for res in output:
     res.print() ## 打印预测的结构化输出
-    res.save_to_csv("./output/") ## 保存csv格式结果
+    res.save_to_img("./output/") ## 保存img格式结果
     res.save_to_xlsx("./output/") ## 保存表格格式结果
+    res.save_to_html("./output/") ## 保存html结果
 ```
 ## 3. 开发集成/部署
 如果产线可以达到您对产线推理速度和精度的要求,您可以直接进行开发集成/部署。
 
-若您需要将产线直接应用在您的Python项目中,可以参考 [2.2.2 Python脚本方式](#222-python脚本方式集成)中的示例代码。
+若您需要将产线直接应用在您的Python项目中,可以参考 [2.2 Python脚本方式](#22-python脚本方式集成)中的示例代码。
 
 此外,PaddleX 也提供了其他三种部署方式,详细说明如下:
 
-🚀 **高性能部署**:在实际生产环境中,许多应用对部署策略的性能指标(尤其是响应速度)有着较严苛的标准,以确保系统的高效运行与用户体验的流畅性。为此,PaddleX 提供高性能推理插件,旨在对模型推理及前后处理进行深度性能优化,实现端到端流程的显著提速,详细的高性能部署流程请参考[PaddleX高性能部署指南](../../../pipeline_deploy/high_performance_deploy.md)。
+🚀 **高性能推理**:在实际生产环境中,许多应用对部署策略的性能指标(尤其是响应速度)有着较严苛的标准,以确保系统的高效运行与用户体验的流畅性。为此,PaddleX 提供高性能推理插件,旨在对模型推理及前后处理进行深度性能优化,实现端到端流程的显著提速,详细的高性能推理流程请参考[PaddleX高性能推理指南](../../../pipeline_deploy/high_performance_deploy.md)。
 
 ☁️ **服务化部署**:服务化部署是实际生产环境中常见的一种部署形式。通过将推理功能封装为服务,客户端可以通过网络请求来访问这些服务,以获取推理结果。PaddleX 支持用户以低成本实现产线的服务化部署,详细的服务化部署流程请参考[PaddleX服务化部署指南](../../../pipeline_deploy/service_deploy.md)。
 
@@ -830,4 +834,4 @@ paddlex --pipeline table_recognition --input table_recognition.jpg --device gpu:
 ```
 paddlex --pipeline table_recognition --input table_recognition.jpg --device npu:0
 ```
-若您想在更多种类的硬件上使用通用表格识别产线,请参考[PaddleX多硬件使用指南](../../../other_devices_support/installation_other_devices.md)。
+若您想在更多种类的硬件上使用通用表格识别产线,请参考[PaddleX多硬件使用指南](../../../other_devices_support/multi_devices_use_guide.md)。

+ 33 - 28
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_en.md

@@ -1,11 +1,12 @@
-[简体中文](table_recognition.md) | English
+[简体中文](table_recognition.md) | English
 
 # General Table Recognition Pipeline Usage Tutorial
 
 ## 1. Introduction to the General Table Recognition Pipeline
 Table recognition is a technology that automatically identifies and extracts table content and its structure from documents or images. It is widely used in data entry, information retrieval, and document analysis. By leveraging computer vision and machine learning algorithms, table recognition can convert complex table information into editable formats, facilitating further data processing and analysis for users.
 
-![](/tmp/images/pipelines/table_recognition/01.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/table_recognition/01.png)
+
 
 **The General Table Recognition Pipeline comprises modules for table structure recognition, layout analysis, text detection, and text recognition.**
 
@@ -31,7 +32,7 @@ Table recognition is a technology that automatically identifies and extracts tab
     <td>522.536</td>
     <td>1845.37</td>
     <td>6.9 M</td>
-    <td rowspan="1">SLANet is a table structure recognition model developed by Baidu PaddlePaddle Vision Team. The model significantly improves the accuracy and inference speed of table structure recognition by adopting a CPU-friendly lightweight backbone network PP-LCNet, a high-low-level feature fusion module CSP-PAN, and a feature decoding module SLA Head that aligns structural and positional information.</td>
+    <td rowspan="1">SLANet is a table structure recognition model developed by Baidu PaddleX Team. The model significantly improves the accuracy and inference speed of table structure recognition by adopting a CPU-friendly lightweight backbone network PP-LCNet, a high-low-level feature fusion module CSP-PAN, and a feature decoding module SLA Head that aligns structural and positional information.</td>
   </tr>
    </tr>
    <tr>
@@ -41,7 +42,7 @@ Table recognition is a technology that automatically identifies and extracts tab
     <td>1845.37</td>
     <td>6.9 M</td>
         <td rowspan="1">
-SLANet_plus is an enhanced version of SLANet, a table structure recognition model developed by Baidu PaddlePaddle's Vision Team. Compared to SLANet, SLANet_plus significantly improves its recognition capabilities for wireless and complex tables, while reducing the model's sensitivity to the accuracy of table localization. Even when there are offsets in table localization, it can still perform relatively accurate recognition.
+SLANet_plus is an enhanced version of SLANet, a table structure recognition model developed by the Baidu PaddleX Team. Compared to SLANet, SLANet_plus significantly improves its recognition capabilities for wireless and complex tables, while reducing the model's sensitivity to the accuracy of table localization. Even when there are offsets in table localization, it can still perform relatively accurate recognition.
 </td>
   </tr>
 </table>
@@ -50,7 +51,7 @@ SLANet_plus is an enhanced version of SLANet, a table structure recognition mode
 
 **Layout Analysis Module Models**:
 
-|Model Name|mAP (%)|GPU Inference Time (ms)|CPU Inference Time|Model Size (M)|
+|Model Name|mAP (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|
 |-|-|-|-|-|
 |PicoDet_layout_1x|86.8|13.036|91.2634|7.4M|
 |PicoDet-L_layout_3cls|89.3|15.7425|159.771|22.6 M|
@@ -61,7 +62,7 @@ SLANet_plus is an enhanced version of SLANet, a table structure recognition mode
 
 **Text Detection Module Models**:
 
-|Model Name|Detection Hmean (%)|GPU Inference Time (ms)|CPU Inference Time|Model Size (M)|
+|Model Name|Detection Hmean (%)|GPU Inference Time (ms)|CPU Inference Time (ms)|Model Size (M)|
 |-|-|-|-|-|
 |PP-OCRv4_mobile_det|77.79|10.6923|120.177|4.2 M|
 |PP-OCRv4_server_det|82.69|83.3501|2434.01|100.1M|
@@ -74,12 +75,12 @@ PaddleX's pre-trained model pipelines allow for quick experience of their effect
 ### 2.1 Online Experience
 You can [experience online](https://aistudio.baidu.com/community/app/91661/webUI) the effects of the General Table Recognition pipeline by using the demo images provided by the official. For example:
 
-![](/tmp/images/pipelines/table_recognition/02.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/table_recognition/02.png)
 
 If you are satisfied with the pipeline's performance, you can directly integrate and deploy it. If not, you can also use your private data to **fine-tune the models in the pipeline online**.
 
 ### 2.2 Local Experience
-Before using the General Table Recognition pipeline locally, ensure you have installed the PaddleX wheel package following the [PaddleX Local Installation Guide](../../../installation/installation.md).
+Before using the General Table Recognition pipeline locally, ensure you have installed the PaddleX wheel package following the [PaddleX Local Installation Guide](../../../installation/installation_en.md).
 
 ### 2.1 Command Line Experience
 Experience the effects of the table recognition pipeline with a single command:
@@ -124,9 +125,9 @@ Here, parameters like `--model` and `--device` do not need to be specified, as t
 
 After running, the result is:
 
-![](/tmp/images/pipelines/table_recognition/03.png)
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/table_recognition/03.png)
 
-The visualized image is saved in the `output` directory by default, and you can customize it with `--save_path`.
+The visualized image is not saved by default. You can customize the save path through `--save_path`, and then all results will be saved in the specified path.
 
 ### 2.2 Python Script Integration
 A few lines of code are all you need to quickly perform inference with the pipeline. Taking the General Table Recognition pipeline as an example:
@@ -139,8 +140,9 @@ pipeline = create_pipeline(pipeline="table_recognition")
 output = pipeline.predict("table_recognition.jpg")
 for res in output:
     res.print()  # Print the structured output of the prediction
-    res.save_to_csv("./output/")  # Save the results in CSV format
+    res.save_to_img("./output/")  # Save the results in img format
     res.save_to_xlsx("./output/")  # Save the results in Excel format
+    res.save_to_html("./output/") # Save results in HTML format
 ```
 The results are the same as those obtained through the command line.
 
@@ -152,7 +154,7 @@ In the above Python script, the following steps are executed:
 |-|-|-|-|
 |`pipeline`| The name of the pipeline or the path to the pipeline configuration file. If it is the name of the pipeline, it must be supported by PaddleX. |`str`|None|
 |`device`| The device for pipeline model inference. Supports: "gpu", "cpu". |`str`|`gpu`|
-|`enable_hpi`| Whether to enable high-performance inference, only available if the production line supports it. |`bool`|`False`|
+|`use_hpip`| Whether to enable high-performance inference, only available if the production line supports it. |`bool`|`False`|
 
 (2)Invoke the `predict` method of the pipeline object for inference prediction: The `predict` method parameter is `x`, which is used to input data to be predicted, supporting multiple input methods, as shown in the following examples:
 
@@ -169,11 +171,13 @@ In the above Python script, the following steps are executed:
 
 (4)Process the prediction results: The prediction result for each sample is of `dict` type and supports printing or saving to files, with the supported file types depending on the specific pipeline. For example:
 
-| Method         | Description                     | Method Parameters |
-|--------------|-----------------------------|--------------------------------------------------------------------------------------------------------|
-| print        | Prints results to the terminal  | `- format_json`: bool, whether to format the output content with json indentation, default is True;<br>`- indent`: int, json formatting setting, only valid when format_json is True, default is 4;<br>`- ensure_ascii`: bool, json formatting setting, only valid when format_json is True, default is False; |
-| save_to_json | Saves results as a json file   | `- save_path`: str, the path to save the file, when it's a directory, the saved file name is consistent with the input file type;<br>`- indent`: int, json formatting setting, default is 4;<br>`- ensure_ascii`: bool, json formatting setting, default is False; |
-| save_to_img  | Saves results as an image file | `- save_path`: str, the path to save the file, when it's a directory, the saved file name is consistent with the input file type; |
+| Method | Description | Method Parameters |
+|--------|-------------|-------------------|
+| save_to_img | Save the results as an img format file | `- save_path`: str, the path to save the file. When it's a directory, the saved file name will be consistent with the input file type; |
+| save_to_html | Save the results as an html format file | `- save_path`: str, the path to save the file. When it's a directory, the saved file name will be consistent with the input file type; |
+| save_to_xlsx | Save the results as a spreadsheet format file | `- save_path`: str, the path to save the file. When it's a directory, the saved file name will be consistent with the input file type; |
+
+Where `save_to_img` can save visualization results (including OCR result images, layout analysis result images, table structure recognition result images), `save_to_html` can directly save the table as an html file (including text and table formatting), and `save_to_xlsx` can save the table as an Excel format file (including text and formatting).
 
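As a follow-up, the Excel file written by `save_to_xlsx` can be loaded for downstream processing. A small sketch, assuming pandas with the openpyxl engine is installed and that the output file name mirrors the input name (`table_recognition`):

```python
import pandas as pd

# Load the recognized table back into a DataFrame for further analysis.
df = pd.read_excel("./output/table_recognition.xlsx")
print(df.shape)
print(df.head())
```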
 If you have a configuration file, you can customize the configurations of the general table recognition pipeline by simply modifying the `pipeline` parameter in the `create_pipeline` method to the path of the pipeline configuration file.
 
@@ -185,20 +189,21 @@ pipeline = create_pipeline(pipeline="./my_path/table_recognition.yaml")
 output = pipeline.predict("table_recognition.jpg")
 for res in output:
     res.print()  # Print the structured output of prediction
-    res.save_to_csv("./output/")  # Save results in CSV format
+    res.save_to_img("./output/")  # Save results in img format
     res.save_to_xlsx("./output/")  # Save results in Excel format
+    res.save_to_html("./output/") # Save results in HTML format
 ```
 
 ## 3. Development Integration/Deployment
 If the pipeline meets your requirements for inference speed and accuracy in production, you can proceed with development integration/deployment.
 
-If you need to directly apply the pipeline in your Python project, refer to the example code in [2.2.2 Python Script Integration](#222-python-script-integration).
+If you need to directly apply the pipeline in your Python project, refer to the example code in [2.2 Python Script Integration](#22-python-script-integration).
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins that aim to deeply optimize model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance deployment procedures, refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy.md).
+🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins that aim to deeply optimize model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance inference procedures, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
 
-☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving low-cost service-oriented deployment of pipelines. For detailed service-oriented deployment procedures, refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy.md).
+☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving low-cost service-oriented deployment of pipelines. For detailed service-oriented deployment procedures, refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy_en.md).
 
 Below are the API references and multi-language service invocation examples:
 
@@ -693,10 +698,10 @@ print_r($result["tables"]);
 </details>
 <br/>
 
-📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy.md).
+📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities directly on user devices, allowing devices to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
 Choose the appropriate deployment method for your model pipeline based on your needs, and proceed with subsequent AI application integration.
 
-## 4. Customization and Fine-tuning
+## 4. Custom Development
 If the default model weights provided by the general table recognition pipeline do not meet your requirements for accuracy or speed in your specific scenario, you can try to further fine-tune the existing model using **your own domain-specific or application-specific data** to improve the recognition performance of the general table recognition pipeline in your scenario.
 
 ### 4.1 Model Fine-tuning
@@ -704,10 +709,10 @@ Since the general table recognition pipeline consists of four modules, unsatisfa
 
 Analyze images with poor recognition results and follow the rules below for analysis and model fine-tuning:
 
-* If the detected table structure is incorrect (e.g., row and column recognition errors, incorrect cell positions), the table structure recognition module may be insufficient. You need to refer to the [Customization](../../../module_usage/tutorials/ocr_modules/table_structure_recognition.md#customization) section in the [Table Structure Recognition Module Development Tutorial](../../../module_usage/tutorials/ocr_modules/table_structure_recognition.md) and use your private dataset to fine-tune the table structure recognition model.
-* If the table area is incorrectly located within the overall layout, the layout detection module may be insufficient. You need to refer to the [Customization](../../../module_usage/tutorials/ocr_modules/layout_detection.md#customization) section in the [Layout Detection Module Development Tutorial](../../../module_usage/tutorials/ocr_modules/layout_detection.md) and use your private dataset to fine-tune the layout detection model.
-* If many texts are undetected (i.e., text miss detection), the text detection model may be insufficient. You need to refer to the [Customization](../../../module_usage/tutorials/ocr_modules/text_recognition.md#customization) section in the [Text Detection Module Development Tutorial](../../../module_usage/tutorials/ocr_modules/text_recognition.md) and use your private dataset to fine-tune the text detection model.
-* If many detected texts contain recognition errors (i.e., the recognized text content does not match the actual text content), the text recognition model requires further improvement. You need to refer to the [Customization](../../../module_usage/tutorials/ocr_modules/table_structure_recognition.md#customization) section.
+* If the detected table structure is incorrect (e.g., row and column recognition errors, incorrect cell positions), the table structure recognition module may be insufficient. You need to refer to the [Customization](../../../module_usage/tutorials/ocr_modules/table_structure_recognition_en.md#customization) section in the [Table Structure Recognition Module Development Tutorial](../../../module_usage/tutorials/ocr_modules/table_structure_recognition_en.md) and use your private dataset to fine-tune the table structure recognition model.
+* If the table area is incorrectly located within the overall layout, the layout detection module may be insufficient. You need to refer to the [Customization](../../../module_usage/tutorials/ocr_modules/layout_detection_en.md#customization) section in the [Layout Detection Module Development Tutorial](../../../module_usage/tutorials/ocr_modules/layout_detection_en.md) and use your private dataset to fine-tune the layout detection model.
+* If many texts are undetected (i.e., missed text detections), the text detection model may be insufficient. You need to refer to the [Customization](../../../module_usage/tutorials/ocr_modules/text_detection_en.md#customization) section in the [Text Detection Module Development Tutorial](../../../module_usage/tutorials/ocr_modules/text_detection_en.md) and use your private dataset to fine-tune the text detection model.
+* If many detected texts contain recognition errors (i.e., the recognized text content does not match the actual text content), the text recognition model requires further improvement. You need to refer to the [Customization](../../../module_usage/tutorials/ocr_modules/text_recognition_en.md#customization) section in the [Text Recognition Module Development Tutorial](../../../module_usage/tutorials/ocr_modules/text_recognition_en.md) and use your private dataset to fine-tune the text recognition model.
 ### 4.2 Model Application
 After fine-tuning your model with a private dataset, you will obtain local model weights files.
 
@@ -741,4 +746,4 @@ At this time, if you want to switch the hardware to Ascend NPU, simply modify `-
 ```bash
 paddlex --pipeline table_recognition --input table_recognition.jpg --device npu:0
 ```
-If you want to use the general table recognition pipeline on more types of hardware, please refer to the [PaddleX Multi-Hardware Usage Guide](../../../installation/installation_other_devices.md).
+If you want to use the general table recognition pipeline on more types of hardware, please refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/multi_devices_use_guide_en.md).

+ 5 - 6
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md

@@ -3,7 +3,7 @@
 # 时序异常检测产线使用教程
 
 ## 1. 通用时序异常检测产线介绍
-时序异常检测是一种识别时间序列数据中异常模式或行为的技术,广泛应用于网络安全、设备监控和金融欺诈检测等领域。它通过分析历史数据中的正常趋势和规律,来发现与预期行为显著不同的事件,例如突然增加的网络流量或异常的交易活动。时序异常检测通常使用统计方法或机器学习算法(如孤立森林、LSTM等),能够自动识别数据中的异常点,为企业和组织提供实时警报,帮助及时应对潜在风险和问题。这项技术在保障系统稳定性和安全性方面发挥着重要作用。
+时序异常检测是一种识别时间序列数据中异常模式或行为的技术,广泛应用于网络安全、设备监控和金融欺诈检测等领域。它通过分析历史数据中的正常趋势和规律,来发现与预期行为显著不同的事件,例如突然增加的网络流量或异常的交易活动。时序异常检测能够自动识别数据中的异常点,为企业和组织提供实时警报,帮助及时应对潜在风险和问题。这项技术在保障系统稳定性和安全性方面发挥着重要作用。
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/time_series/05.png)
 
@@ -123,7 +123,7 @@ for res in output:
 |-|-|-|-|
 |`pipeline`|产线名称或是产线配置文件路径。如为产线名称,则必须为 PaddleX 所支持的产线。|`str`|无|
 |`device`|产线模型推理设备。支持:“gpu”,“cpu”。|`str`|`gpu`|
-|`enable_hpi`|是否启用高性能推理,仅当该产线支持高性能推理时可用。|`bool`|`False`|
+|`use_hpip`|是否启用高性能推理,仅当该产线支持高性能推理时可用。|`bool`|`False`|
 
 (2)调用产线对象的 `predict` 方法进行推理预测:`predict` 方法参数为`x`,用于输入待预测数据,支持多种输入方式,具体示例如下:
 
@@ -143,7 +143,6 @@ for res in output:
 |方法|说明|方法参数|
 |-|-|-|
 |save_to_csv|将结果保存为csv格式的文件|`- save_path`:str类型,保存的文件路径,当为目录时,保存文件命名与输入文件类型命名一致;|
-|save_to_html|将结果保存为html格式的文件|`- save_path`:str类型,保存的文件路径,当为目录时,保存文件命名与输入文件类型命名一致;|
 |save_to_xlsx|将结果保存为表格格式的文件|`- save_path`:str类型,保存的文件路径,当为目录时,保存文件命名与输入文件类型命名一致;|
 
 若您获取了配置文件,即可对时序异常检测产线各项配置进行自定义,只需要修改 `create_pipeline` 方法中的 `pipeline` 参数值为产线配置文件路径即可。
@@ -166,7 +165,7 @@ for res in output:
 
 此外,PaddleX 也提供了其他三种部署方式,详细说明如下:
 
-🚀 **高性能部署**:在实际生产环境中,许多应用对部署策略的性能指标(尤其是响应速度)有着较严苛的标准,以确保系统的高效运行与用户体验的流畅性。为此,PaddleX 提供高性能推理插件,旨在对模型推理及前后处理进行深度性能优化,实现端到端流程的显著提速,详细的高性能部署流程请参考[PaddleX高性能部署指南](../../../pipeline_deploy/high_performance_deploy.md)。
+🚀 **高性能推理**:在实际生产环境中,许多应用对部署策略的性能指标(尤其是响应速度)有着较严苛的标准,以确保系统的高效运行与用户体验的流畅性。为此,PaddleX 提供高性能推理插件,旨在对模型推理及前后处理进行深度性能优化,实现端到端流程的显著提速,详细的高性能推理流程请参考[PaddleX高性能推理指南](../../../pipeline_deploy/high_performance_deploy.md)。
 
 ☁️ **服务化部署**:服务化部署是实际生产环境中常见的一种部署形式。通过将推理功能封装为服务,客户端可以通过网络请求来访问这些服务,以获取推理结果。PaddleX 支持用户以低成本实现产线的服务化部署,详细的服务化部署流程请参考[PaddleX服务化部署指南](../../../pipeline_deploy/service_deploy.md)。
 
@@ -631,9 +630,9 @@ PaddleX 支持英伟达 GPU、昆仑芯 XPU、昇腾 NPU和寒武纪 MLU 等多
 ```
 paddlex --pipeline ts_ad --input ts_ad.csv --device gpu:0
 ```
-此时,若您想将硬件切换为昇腾 NPU,仅需对 Python 命令中的` --device` 修改为 npu 即可:
+此时,若您想将硬件切换为昇腾 NPU,仅需将命令中的 `--device` 修改为 `npu:0` 即可:
 
 ```
 paddlex --pipeline ts_ad --input ts_ad.csv --device npu:0
 ```
-若您想在更多种类的硬件上使用通用时序异常检测产线,请参考[PaddleX多硬件使用指南](../../../other_devices_support/installation_other_devices.md)。
+若您想在更多种类的硬件上使用通用时序异常检测产线,请参考[PaddleX多硬件使用指南](../../../other_devices_support/multi_devices_use_guide.md)。

+ 8 - 9
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection_en.md

@@ -3,7 +3,7 @@
 # Time Series Anomaly Detection Pipeline Tutorial
 
 ## 1. Introduction to the General Time Series Anomaly Detection Pipeline
-Time series anomaly detection is a technique for identifying abnormal patterns or behaviors in time series data. It is widely applied in fields such as network security, equipment monitoring, and financial fraud detection. By analyzing normal trends and patterns in historical data, it discovers events that significantly deviate from expected behaviors, such as sudden spikes in network traffic or unusual transaction activities. Time series anomaly detection typically employs statistical methods or machine learning algorithms (e.g., Isolation Forest, LSTM), enabling automatic identification of anomalies in data. This technology provides real-time alerts for enterprises and organizations, helping them promptly address potential risks and issues. It plays a crucial role in ensuring system stability and security.
+Time series anomaly detection is a technique for identifying abnormal patterns or behaviors in time series data. It is widely applied in fields such as network security, equipment monitoring, and financial fraud detection. By analyzing normal trends and patterns in historical data, it discovers events that significantly deviate from expected behaviors, such as sudden spikes in network traffic or unusual transaction activities. Time series anomaly detection enables automatic identification of anomalies in data. This technology provides real-time alerts for enterprises and organizations, helping them promptly address potential risks and issues. It plays a crucial role in ensuring system stability and security.
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/time_series/05.png)
 
@@ -118,7 +118,7 @@ In the above Python script, the following steps are executed:
 |-|-|-|-|
 |`pipeline`| The name of the pipeline or the path to the pipeline configuration file. If it is the name of the pipeline, it must be supported by PaddleX. |`str`|None|
 |`device`| The device for pipeline model inference. Supports: "gpu", "cpu". |`str`|`gpu`|
-|`enable_hpi`| Whether to enable high-performance inference, only available if the production line supports it. |`bool`|`False`|
+|`use_hpip`| Whether to enable high-performance inference, only available if the production line supports it. |`bool`|`False`|
 
 (2)Invoke the `predict` method of the pipeline object for inference prediction: The `predict` method parameter is `x`, which is used to input data to be predicted, supporting multiple input methods, as shown in the following examples:
 
@@ -137,9 +137,8 @@ In the above Python script, the following steps are executed:
 
 | Method         | Description                     | Method Parameters |
 |--------------|-----------------------------|--------------------------------------------------------------------------------------------------------|
-| print        | Prints results to the terminal  | `- format_json`: bool, whether to format the output content with json indentation, default is True;<br>`- indent`: int, json formatting setting, only valid when format_json is True, default is 4;<br>`- ensure_ascii`: bool, json formatting setting, only valid when format_json is True, default is False; |
-| save_to_json | Saves results as a json file   | `- save_path`: str, the path to save the file, when it's a directory, the saved file name is consistent with the input file type;<br>`- indent`: int, json formatting setting, default is 4;<br>`- ensure_ascii`: bool, json formatting setting, default is False; |
-| save_to_img  | Saves results as an image file | `- save_path`: str, the path to save the file, when it's a directory, the saved file name is consistent with the input file type; |
+| save_to_csv | Saves results as a csv file   | `- save_path`: str, the path to save the file, when it's a directory, the saved file name is consistent with the input file type; |
+| save_to_xlsx  | Saves results as a table file | `- save_path`: str, the path to save the file, when it's a directory, the saved file name is consistent with the input file type; |
 
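For reference, a minimal sketch that exercises both save methods above (assuming the `ts_ad` pipeline name and the local CSV input used elsewhere in this tutorial):

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="ts_ad")
output = pipeline.predict("ts_ad.csv")
for res in output:
    res.print()                    # structured anomaly detection result
    res.save_to_csv("./output/")   # result as a CSV file
    res.save_to_xlsx("./output/")  # the same result as a spreadsheet
```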
 If you have a configuration file, you can customize the configurations of the time series anomaly detection pipeline by simply modifying the `pipeline` parameter in the `create_pipeline` method to the path of the pipeline configuration file.
 
@@ -161,7 +160,7 @@ If you need to directly apply the pipeline in your Python project, refer to the
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end speedups. For detailed high-performance deployment procedures, refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
+🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end speedups. For detailed high-performance inference procedures, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX enables users to achieve low-cost service-oriented deployment of pipelines. For detailed service-oriented deployment procedures, refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy_en.md).
 
@@ -572,7 +571,7 @@ echo "Output time-series data saved at " . $output_csv_path . "\n";
 📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
 Choose the appropriate deployment method for your model pipeline based on your needs, and proceed with subsequent AI application integration.
 
-## 4. Customization and Fine-tuning
+## 4. Custom Development
 If the default model weights provided by the General Time Series Anomaly Detection Pipeline do not meet your requirements for accuracy or speed in your specific scenario, you can try to further fine-tune the existing model using **your own domain-specific or application-specific data** to improve the recognition performance of the pipeline in your scenario.
 
 ### 4.1 Model Fine-tuning
@@ -602,9 +601,9 @@ For example, if you use an NVIDIA GPU for inference of the time series anomaly d
 ```bash
 paddlex --pipeline ts_ad --input ts_ad.csv --device gpu:0
 ```
-At this point, if you wish to switch the hardware to Ascend NPU, simply modify the `--device` in the Python command to `npu`:
+At this point, if you wish to switch the hardware to Ascend NPU, simply modify `--device` in the command to `npu:0`:
 
 ```bash
 paddlex --pipeline ts_ad --input ts_ad.csv --device npu:0
 ```
-If you want to use the General Time-Series Anomaly Detection Pipeline on more diverse hardware, please refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/installation_other_devices_en.md).
+If you want to use the General Time-Series Anomaly Detection Pipeline on more diverse hardware, please refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide_en.md).

+ 5 - 5
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md

@@ -3,12 +3,12 @@
 # 时序分类产线使用教程
 
 ## 1. 通用时序分类产线介绍
-时序分类是一种将时间序列数据归类到预定义类别的技术,广泛应用于行为识别、语音识别和金融趋势分析等领域。它通过分析随时间变化的特征,识别出不同的模式或事件,例如将一段语音信号分类为“问候”或“请求”,或将股票价格走势划分为“上涨”或“下跌”。时序分类通常使用机器学习和深度学习模型,能够有效捕捉时间依赖性和变化规律,以便为数据提供准确的分类标签。这项技术在智能监控、语音助手和市场预测等应用中起着关键作用。
+时序分类是一种将时间序列数据归类到预定义类别的技术,广泛应用于行为识别、金融趋势分析等领域。它通过分析随时间变化的特征,识别出不同的模式或事件,例如将一段语音信号分类为“问候”或“请求”,或将股票价格走势划分为“上涨”或“下跌”。时序分类通常使用机器学习和深度学习模型,能够有效捕捉时间依赖性和变化规律,以便为数据提供准确的分类标签。这项技术在智能监控、市场预测等应用中起着关键作用。
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/time_series/01.png)
 
 
-**通用****时序分类****产线中包含了****时序分类****模块,如您更考虑模型精度,请选择精度较高的模型,如您更考虑模型推理速度,请选择推理速度较快的模型,如您更考虑模型存储大小,请选择存储大小较小的模型**。
+**通用时序分类产线中包含了时序分类模块**。
 
 <details>
    <summary> 👉模型列表详情</summary>
@@ -104,7 +104,7 @@ for res in output:
 |-|-|-|-|
 |`pipeline`|产线名称或是产线配置文件路径。如为产线名称,则必须为 PaddleX 所支持的产线。|`str`|无|
 |`device`|产线模型推理设备。支持:“gpu”,“cpu”。|`str`|`gpu`|
-|`enable_hpi`|是否启用高性能推理,仅当该产线支持高性能推理时可用。|`bool`|`False`|
+|`use_hpip`|是否启用高性能推理,仅当该产线支持高性能推理时可用。|`bool`|`False`|
 
 (2)调用产线对象的 `predict` 方法进行推理预测:`predict` 方法参数为`x`,用于输入待预测数据,支持多种输入方式,具体示例如下:
 
@@ -147,7 +147,7 @@ for res in output:
 
 此外,PaddleX 也提供了其他三种部署方式,详细说明如下:
 
-🚀 **高性能部署**:在实际生产环境中,许多应用对部署策略的性能指标(尤其是响应速度)有着较严苛的标准,以确保系统的高效运行与用户体验的流畅性。为此,PaddleX 提供高性能推理插件,旨在对模型推理及前后处理进行深度性能优化,实现端到端流程的显著提速,详细的高性能部署流程请参考[PaddleX高性能部署指南](../../../pipeline_deploy/high_performance_deploy.md)。
+🚀 **高性能推理**:在实际生产环境中,许多应用对部署策略的性能指标(尤其是响应速度)有着较严苛的标准,以确保系统的高效运行与用户体验的流畅性。为此,PaddleX 提供高性能推理插件,旨在对模型推理及前后处理进行深度性能优化,实现端到端流程的显著提速,详细的高性能推理流程请参考[PaddleX高性能推理指南](../../../pipeline_deploy/high_performance_deploy.md)。
 
 ☁️ **服务化部署**:服务化部署是实际生产环境中常见的一种部署形式。通过将推理功能封装为服务,客户端可以通过网络请求来访问这些服务,以获取推理结果。PaddleX 支持用户以低成本实现产线的服务化部署,详细的服务化部署流程请参考[PaddleX服务化部署指南](../../../pipeline_deploy/service_deploy.md)。
 
@@ -571,4 +571,4 @@ paddlex --pipeline ts_cls --input ts_cls.csv --device gpu:0
 ```
 paddlex --pipeline ts_cls --input ts_cls.csv --device npu:0
 ```
-若您想在更多种类的硬件上使用通用时序分类产线,请参考[PaddleX多硬件使用指南](../../../other_devices_support/installation_other_devices.md)。
+若您想在更多种类的硬件上使用通用时序分类产线,请参考[PaddleX多硬件使用指南](../../../other_devices_support/multi_devices_use_guide.md)。

+ 6 - 6
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification_en.md

@@ -3,11 +3,11 @@
 # Time Series Classification Pipeline Tutorial
 
 ## 1. Introduction to General Time Series Classification Pipeline
-Time series classification is a technique that categorizes time-series data into predefined classes, widely applied in fields such as behavior recognition, speech recognition, and financial trend analysis. By analyzing features that vary over time, it identifies different patterns or events, for example, classifying a speech signal as "greeting" or "request," or categorizing stock price movements as "rising" or "falling." Time series classification typically employs machine learning and deep learning models, effectively capturing temporal dependencies and variation patterns to provide accurate classification labels for data. This technology plays a pivotal role in applications such as intelligent monitoring, voice assistants, and market forecasting.
+Time series classification is a technique that categorizes time-series data into predefined classes, widely applied in fields such as behavior recognition and financial trend analysis. By analyzing features that vary over time, it identifies different patterns or events, for example, classifying a speech signal as "greeting" or "request," or categorizing stock price movements as "rising" or "falling." Time series classification typically employs machine learning and deep learning models, effectively capturing temporal dependencies and variation patterns to provide accurate classification labels for data. This technology plays a pivotal role in applications such as intelligent monitoring and market forecasting.
 
 ![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/time_series/01.png)
 
-**The General Time Series Classification Pipeline includes a Time Series Classification module. If you prioritize model accuracy, choose a model with higher accuracy. If you prioritize inference speed, select a model with faster inference. If you prioritize model size, choose a model with a smaller storage footprint.**
+**The General Time Series Classification Pipeline includes a Time Series Classification module.**
 
 <details>
    <summary> 👉Model List Details</summary>
@@ -108,7 +108,7 @@ In the above Python script, the following steps are executed:
 |-----------|-------------|------|---------|
 | `pipeline` | The name of the pipeline or the path to the pipeline configuration file. If it's a pipeline name, it must be supported by PaddleX. | `str` | None |
 | `device` | The device for pipeline model inference. Supports: "gpu", "cpu". | `str` | "gpu" |
-| `enable_hpi` | Whether to enable high-performance inference. Available only if the pipeline supports it. | `bool` | `False` |
+| `use_hpip` | Whether to enable high-performance inference. Available only if the pipeline supports it. | `bool` | `False` |
 
 (2) Call the `predict` method of the pipeline object for inference: The `predict` method takes `x` as a parameter, which is used to input data to be predicted, supporting multiple input methods, as shown in the following examples:
 
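For reference, a minimal sketch of steps (1) and (2) above (assuming the `ts_cls` pipeline name and CSV input used later in this tutorial; the result carries a predicted label and score, as the service invocation examples below suggest):

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="ts_cls", device="gpu")
output = pipeline.predict("ts_cls.csv")
for res in output:
    res.print()  # structured output including the predicted label and score
```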
@@ -151,7 +151,7 @@ If you need to directly apply the pipeline in your Python project, refer to the
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In actual production environments, many applications have stringent standards for deployment performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins that deeply optimize model inference and pre/post-processing to significantly speed up the end-to-end process. Refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy_en.md) for detailed high-performance deployment procedures.
+🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for deployment performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins that deeply optimize model inference and pre/post-processing to significantly speed up the end-to-end process. Refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy_en.md) for detailed high-performance inference procedures.
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX enables users to achieve low-cost service-oriented deployment of pipelines. Refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy_en.md) for detailed service-oriented deployment procedures.
 
@@ -521,7 +521,7 @@ echo "label: " . $result["label"] . ", score: " . $result["score"];
 📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing capabilities on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. Refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md) for detailed edge deployment procedures.
 Choose the appropriate deployment method based on your needs to proceed with subsequent AI application integration.
 
-## 4. Customization and Fine-tuning
+## 4. Custom Development
 If the default model weights provided by the General Time Series Classification Pipeline do not meet your requirements for accuracy or speed in your specific scenario, you can try to further fine-tune the existing model using **your own domain-specific or application-specific data** to improve the recognition performance of the pipeline in your scenario.
 
 ### 4.1 Model Fine-tuning
@@ -556,4 +556,4 @@ At this point, if you wish to switch the hardware to Ascend NPU, simply modify t
 paddlex --pipeline ts_cls --input ts_cls.csv --device npu:0
 ```
 
-If you intend to use the General Time Series Classification Pipeline on a wider range of hardware, please refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/installation_other_devices_en.md).
+If you intend to use the General Time Series Classification Pipeline on a wider range of hardware, please refer to the [PaddleX Multi-Hardware Usage Guide](../../../other_devices_support/multi_devices_use_guide_en.md).

+ 4 - 4
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.md

@@ -121,7 +121,7 @@ for res in output:
 |-|-|-|-|
 |`pipeline`|产线名称或是产线配置文件路径。如为产线名称,则必须为 PaddleX 所支持的产线。|`str`|无|
 |`device`|产线模型推理设备。支持:“gpu”,“cpu”。|`str`|`gpu`|
-|`enable_hpi`|是否启用高性能推理,仅当该产线支持高性能推理时可用。|`bool`|`False`|
+|`use_hpip`|是否启用高性能推理,仅当该产线支持高性能推理时可用。|`bool`|`False`|
 
 (2)调用产线对象的 `predict` 方法进行推理预测:`predict` 方法参数为`x`,用于输入待预测数据,支持多种输入方式,具体示例如下:
 
@@ -163,7 +163,7 @@ for res in output:
 
 此外,PaddleX 也提供了其他三种部署方式,详细说明如下:
 
-🚀 **高性能部署**:在实际生产环境中,许多应用对部署策略的性能指标(尤其是响应速度)有着较严苛的标准,以确保系统的高效运行与用户体验的流畅性。为此,PaddleX 提供高性能推理插件,旨在对模型推理及前后处理进行深度性能优化,实现端到端流程的显著提速,详细的高性能部署流程请参考[PaddleX高性能部署指南](../../../pipeline_deploy/high_performance_deploy.md)。
+🚀 **高性能推理**:在实际生产环境中,许多应用对部署策略的性能指标(尤其是响应速度)有着较严苛的标准,以确保系统的高效运行与用户体验的流畅性。为此,PaddleX 提供高性能推理插件,旨在对模型推理及前后处理进行深度性能优化,实现端到端流程的显著提速,详细的高性能推理流程请参考[PaddleX高性能推理指南](../../../pipeline_deploy/high_performance_deploy.md)。
 
 ☁️ **服务化部署**:服务化部署是实际生产环境中常见的一种部署形式。通过将推理功能封装为服务,客户端可以通过网络请求来访问这些服务,以获取推理结果。PaddleX 支持用户以低成本实现产线的服务化部署,详细的服务化部署流程请参考[PaddleX服务化部署指南](../../../pipeline_deploy/service_deploy.md)。
 
@@ -628,9 +628,9 @@ PaddleX 支持英伟达 GPU、昆仑芯 XPU、昇腾 NPU和寒武纪 MLU 等多
 ```
 paddlex --pipeline ts_fc --input ts_fc.csv --device gpu:0
 ```
-此时,若您想将硬件切换为昇腾 NPU,仅需对 Python 命令中的 `--device` 修改为 npu 即可:
+此时,若您想将硬件切换为昇腾 NPU,仅需将命令中的 `--device` 修改为 `npu:0` 即可:
 
 ```
 paddlex --pipeline ts_fc --input ts_fc.csv --device npu:0
 ```
-若您想在更多种类的硬件上使用通用时序预测产线,请参考[PaddleX多硬件使用指南](../../../other_devices_support/installation_other_devices.md)。
+若您想在更多种类的硬件上使用通用时序预测产线,请参考[PaddleX多硬件使用指南](../../../other_devices_support/multi_devices_use_guide.md)。

+ 5 - 5
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting_en.md

@@ -119,7 +119,7 @@ In the above Python script, the following steps are executed:
 |-----------|-------------|------|---------------|
 | `pipeline` | The name of the pipeline or the path to the pipeline configuration file. If it is the name of the pipeline, it must be supported by PaddleX. | `str` | None |
 | `device` | The device for pipeline model inference. Supports: "gpu", "cpu". | `str` | "gpu" |
-| `enable_hpi` | Whether to enable high-performance inference, only available when the production line supports high-performance inference. | `bool` | `False` |
+| `use_hpip` | Whether to enable high-performance inference, only available when the production line supports high-performance inference. | `bool` | `False` |
 
 (2)Invoke the `predict` method of the pipeline object for inference prediction: The `predict` method parameter is `x`, which is used to input data to be predicted, supporting multiple input methods, as shown in the following examples:
 
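For reference, a hedged sketch of the instantiation and prediction steps, including the hardware switch discussed at the end of this tutorial (this assumes the Python `device` parameter accepts the same device strings, e.g. `gpu:0` and `npu:0`, as the CLI):

```python
from paddlex import create_pipeline

# Only the device string changes when moving between hardware backends.
pipeline = create_pipeline(pipeline="ts_fc", device="gpu:0")
# pipeline = create_pipeline(pipeline="ts_fc", device="npu:0")  # Ascend NPU

output = pipeline.predict("ts_fc.csv")
for res in output:
    res.print()
```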
@@ -162,7 +162,7 @@ If you need to directly apply the pipeline in your Python project, refer to the
 
 Additionally, PaddleX provides three other deployment methods, detailed as follows:
 
-🚀 **High-Performance Deployment**: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance deployment procedures, refer to the [PaddleX High-Performance Deployment Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
+🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for deployment strategy performance metrics (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end process acceleration. For detailed high-performance inference procedures, refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_deploy_en.md).
 
 ☁️ **Service-Oriented Deployment**: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving low-cost service-oriented deployment of pipelines. For detailed service-oriented deployment procedures, refer to the [PaddleX Service-Oriented Deployment Guide](../../../pipeline_deploy/service_deploy_en.md).
 
@@ -573,7 +573,7 @@ echo "Output time-series data saved at " . $output_csv_path . "\n";
 📱 **Edge Deployment**: Edge deployment is a method that places computing and data processing functions on user devices themselves, enabling devices to directly process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/lite_deploy_en.md).
 Choose the appropriate deployment method for your model pipeline based on your needs, and proceed with subsequent AI application integration.
 
-## 4. Customization and Fine-tuning
+## 4. Custom Development
 If the default model weights provided by the General Time Series Forecasting Pipeline do not meet your requirements in terms of accuracy or speed in your specific scenario, you can try to further fine-tune the existing model using **your own domain-specific or application-specific data** to improve the recognition performance of the pipeline in your scenario.
 
 #### 4.1 Model Fine-tuning
@@ -602,9 +602,9 @@ For example, if you use an NVIDIA GPU for inference with the time series forecas
 ```bash
 paddlex --pipeline ts_fc --input ts_fc.csv --device gpu:0
 ```
-At this point, if you wish to switch the hardware to Ascend NPU, simply modify the `--device` in the Python command to `npu`:
+At this point, if you wish to switch the hardware to Ascend NPU, simply modify `--device` in the command to `npu:0`:
 
 ```bash
 paddlex --pipeline ts_fc --input ts_fc.csv --device npu:0
 ```
-If you want to use the General Time Series Forecasting Pipeline on a wider range of hardware, please refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/installation_other_devices_en.md).
+If you want to use the General Time Series Forecasting Pipeline on a wider range of hardware, please refer to the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide_en.md).

Some files were not shown because too many files changed in this diff.