
Fix bugs and bump paddle2onnx version (#3922)

* Fix HPI doc

* Remove warnings when specifying both hpi_config and pp_option

* Fix rgb-bgr bug

* Update paddle2onnx version

* Fix type hints and doc strings for use_hpip

* Update docs

* Update and fix

* Remove _parallel.py
Lin Manhui committed 7 months ago
Commit cfab9d0065
100 files changed, 138 insertions(+), 148 deletions(-)
  1. docs/pipeline_deploy/high_performance_inference.en.md (+2 -2)
  2. docs/pipeline_deploy/high_performance_inference.md (+2 -2)
  3. docs/pipeline_usage/tutorials/cv_pipelines/3d_bev_detection.en.md (+1 -1)
  4. docs/pipeline_usage/tutorials/cv_pipelines/3d_bev_detection.md (+1 -1)
  5. docs/pipeline_usage/tutorials/cv_pipelines/face_recognition.en.md (+1 -1)
  6. docs/pipeline_usage/tutorials/cv_pipelines/face_recognition.md (+1 -1)
  7. docs/pipeline_usage/tutorials/cv_pipelines/general_image_recognition.en.md (+1 -1)
  8. docs/pipeline_usage/tutorials/cv_pipelines/general_image_recognition.md (+1 -1)
  9. docs/pipeline_usage/tutorials/cv_pipelines/human_keypoint_detection.en.md (+1 -1)
  10. docs/pipeline_usage/tutorials/cv_pipelines/human_keypoint_detection.md (+1 -1)
  11. docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.en.md (+1 -1)
  12. docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.md (+1 -1)
  13. docs/pipeline_usage/tutorials/cv_pipelines/image_classification.en.md (+2 -2)
  14. docs/pipeline_usage/tutorials/cv_pipelines/image_classification.md (+1 -1)
  15. docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.en.md (+1 -1)
  16. docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md (+1 -1)
  17. docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.en.md (+1 -1)
  18. docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.md (+1 -1)
  19. docs/pipeline_usage/tutorials/cv_pipelines/object_detection.en.md (+1 -1)
  20. docs/pipeline_usage/tutorials/cv_pipelines/object_detection.md (+1 -1)
  21. docs/pipeline_usage/tutorials/cv_pipelines/open_vocabulary_detection.en.md (+1 -1)
  22. docs/pipeline_usage/tutorials/cv_pipelines/open_vocabulary_detection.md (+1 -1)
  23. docs/pipeline_usage/tutorials/cv_pipelines/open_vocabulary_segmentation.en.md (+1 -1)
  24. docs/pipeline_usage/tutorials/cv_pipelines/open_vocabulary_segmentation.md (+1 -1)
  25. docs/pipeline_usage/tutorials/cv_pipelines/pedestrian_attribute_recognition.en.md (+1 -1)
  26. docs/pipeline_usage/tutorials/cv_pipelines/pedestrian_attribute_recognition.md (+1 -1)
  27. docs/pipeline_usage/tutorials/cv_pipelines/rotated_object_detection.en.md (+1 -1)
  28. docs/pipeline_usage/tutorials/cv_pipelines/rotated_object_detection.md (+1 -1)
  29. docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.en.md (+1 -1)
  30. docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.md (+1 -1)
  31. docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection.en.md (+1 -1)
  32. docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection.md (+1 -1)
  33. docs/pipeline_usage/tutorials/cv_pipelines/vehicle_attribute_recognition.en.md (+1 -1)
  34. docs/pipeline_usage/tutorials/cv_pipelines/vehicle_attribute_recognition.md (+1 -1)
  35. docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_v3.en.md (+1 -1)
  36. docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_v3.md (+1 -1)
  37. docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_v4.en.md (+1 -1)
  38. docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_v4.md (+1 -1)
  39. docs/pipeline_usage/tutorials/ocr_pipelines/OCR.en.md (+1 -1)
  40. docs/pipeline_usage/tutorials/ocr_pipelines/OCR.md (+1 -1)
  41. docs/pipeline_usage/tutorials/ocr_pipelines/PP-StructureV3.en.md (+1 -1)
  42. docs/pipeline_usage/tutorials/ocr_pipelines/PP-StructureV3.md (+1 -1)
  43. docs/pipeline_usage/tutorials/ocr_pipelines/doc_preprocessor.en.md (+1 -1)
  44. docs/pipeline_usage/tutorials/ocr_pipelines/doc_preprocessor.md (+1 -1)
  45. docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.en.md (+1 -1)
  46. docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.md (+1 -1)
  47. docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.en.md (+1 -1)
  48. docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.md (+1 -1)
  49. docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.en.md (+1 -1)
  50. docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.md (+1 -1)
  51. docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.en.md (+1 -1)
  52. docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.md (+1 -1)
  53. docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_v2.en.md (+1 -1)
  54. docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_v2.md (+1 -1)
  55. docs/pipeline_usage/tutorials/speech_pipelines/multilingual_speech_recognition.en.md (+1 -1)
  56. docs/pipeline_usage/tutorials/speech_pipelines/multilingual_speech_recognition.md (+1 -1)
  57. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.en.md (+1 -1)
  58. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md (+1 -1)
  59. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.en.md (+1 -1)
  60. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md (+1 -1)
  61. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.en.md (+1 -1)
  62. docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.md (+1 -1)
  63. docs/pipeline_usage/tutorials/video_pipelines/video_classification.en.md (+1 -1)
  64. docs/pipeline_usage/tutorials/video_pipelines/video_classification.md (+1 -1)
  65. docs/pipeline_usage/tutorials/video_pipelines/video_detection.en.md (+1 -1)
  66. docs/pipeline_usage/tutorials/video_pipelines/video_detection.md (+1 -1)
  67. docs/pipeline_usage/tutorials/vlm_pipelines/doc_understanding.en.md (+1 -1)
  68. docs/pipeline_usage/tutorials/vlm_pipelines/doc_understanding.md (+1 -1)
  69. paddlex/inference/models/base/predictor/base_predictor.py (+0 -8)
  70. paddlex/inference/pipelines/__init__.py (+7 -9)
  71. paddlex/inference/pipelines/anomaly_detection/pipeline.py (+2 -2)
  72. paddlex/inference/pipelines/base.py (+2 -2)
  73. paddlex/inference/pipelines/doc_preprocessor/pipeline.py (+2 -2)
  74. paddlex/inference/pipelines/formula_recognition/pipeline.py (+2 -2)
  75. paddlex/inference/pipelines/image_classification/pipeline.py (+2 -2)
  76. paddlex/inference/pipelines/image_multilabel_classification/pipeline.py (+2 -2)
  77. paddlex/inference/pipelines/instance_segmentation/pipeline.py (+2 -2)
  78. paddlex/inference/pipelines/keypoint_detection/pipeline.py (+2 -2)
  79. paddlex/inference/pipelines/layout_parsing/pipeline.py (+2 -2)
  80. paddlex/inference/pipelines/layout_parsing/pipeline_v2.py (+2 -2)
  81. paddlex/inference/pipelines/m_3d_bev_detection/pipeline.py (+2 -2)
  82. paddlex/inference/pipelines/multilingual_speech_recognition/pipeline.py (+2 -2)
  83. paddlex/inference/pipelines/object_detection/pipeline.py (+2 -2)
  84. paddlex/inference/pipelines/ocr/pipeline.py (+2 -2)
  85. paddlex/inference/pipelines/open_vocabulary_detection/pipeline.py (+2 -2)
  86. paddlex/inference/pipelines/open_vocabulary_segmentation/pipeline.py (+2 -2)
  87. paddlex/inference/pipelines/pp_chatocr/pipeline_base.py (+2 -2)
  88. paddlex/inference/pipelines/pp_chatocr/pipeline_v3.py (+2 -2)
  89. paddlex/inference/pipelines/pp_chatocr/pipeline_v4.py (+2 -2)
  90. paddlex/inference/pipelines/rotated_object_detection/pipeline.py (+2 -2)
  91. paddlex/inference/pipelines/seal_recognition/pipeline.py (+2 -2)
  92. paddlex/inference/pipelines/semantic_segmentation/pipeline.py (+2 -2)
  93. paddlex/inference/pipelines/small_object_detection/pipeline.py (+2 -2)
  94. paddlex/inference/pipelines/table_recognition/pipeline.py (+2 -2)
  95. paddlex/inference/pipelines/table_recognition/pipeline_v2.py (+2 -2)
  96. paddlex/inference/pipelines/ts_anomaly_detection/pipeline.py (+2 -2)
  97. paddlex/inference/pipelines/ts_classification/pipeline.py (+2 -2)
  98. paddlex/inference/pipelines/ts_forecasting/pipeline.py (+2 -2)
  99. paddlex/inference/pipelines/video_classification/pipeline.py (+2 -2)
  100. paddlex/inference/pipelines/video_detection/pipeline.py (+2 -2)

+ 2 - 2
docs/pipeline_deploy/high_performance_inference.en.md

@@ -263,7 +263,7 @@ The optional values for `backend` are as follows:
   <tr>
     <td><code>paddle</code></td>
     <td>Paddle Inference engine; supports enhancing GPU inference performance using the Paddle Inference TensorRT subgraph engine.</td>
-    <td>CPU, GPU</td>
+    <td>CPU, GPU, NPU</td>
   </tr>
   <tr>
     <td><code>openvino</code></td>
@@ -322,7 +322,7 @@ The available configuration items for `backend_config` vary for different backen
 
 ### 2.3 Modifying the High-Performance Inference Configuration
 
-Due to the diversity of actual deployment environments and requirements, the default configuration might not meet all needs. In such cases, manual adjustment of the high-performance inference configuration may be necessary. Users can modify the configuration by editing the **pipeline/module configuration file** or by passing the `hpi_config` field in the parameters via **CLI** or **Python API**. **Parameters passed via CLI or Python API will override the settings in the pipeline/module configuration file.** The following examples illustrate how to modify the configuration.
+Due to the diversity of actual deployment environments and requirements, the default configuration might not meet all needs. In such cases, manual adjustment of the high-performance inference configuration may be necessary. Users can modify the configuration by editing the **pipeline/module configuration file** or by passing the `hpi_config` field in the parameters via **CLI** or **Python API**. **Parameters passed via CLI or Python API will override the settings in the pipeline/module configuration file.** Different levels of configurations in the config file are automatically merged, and the deepest-level settings take the highest priority. The following examples illustrate how to modify the configuration.
 
 **For the general OCR pipeline, use the `onnxruntime` backend for all models:**
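A minimal sketch of what such an override looks like from the Python API, assuming the standard `create_pipeline` entry point; the device and input path are placeholders:

```python
from paddlex import create_pipeline

# `hpi_config` passed here overrides the pipeline configuration file;
# deeper-level keys are merged in and take priority, as described above.
pipeline = create_pipeline(
    pipeline="OCR",
    device="gpu",
    use_hpip=True,
    hpi_config={"backend": "onnxruntime"},
)

output = pipeline.predict("input_image.png")  # placeholder input
for res in output:
    res.print()
```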
 

+ 2 - 2
docs/pipeline_deploy/high_performance_inference.md

@@ -264,7 +264,7 @@ output = model.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/
   <tr>
     <td><code>paddle</code></td>
     <td>Paddle Inference 推理引擎,支持通过 Paddle Inference TensorRT 子图引擎的方式提升模型的 GPU 推理性能。</td>
-    <td>CPU,GPU</td>
+    <td>CPU,GPU,NPU</td>
   </tr>
   <tr>
     <td><code>openvino</code></td>
@@ -323,7 +323,7 @@ output = model.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/
 
 ### 2.3 修改高性能推理配置
 
-由于实际部署环境和需求的多样性,默认配置可能无法满足所有要求。这时,可能需要手动调整高性能推理配置。用户可以通过修改**产线/模块配置文件**、**CLI**或**Python API**所传递参数中的 `hpi_config` 字段内容来修改配置。**通过 CLI 或 Python API 传递的参数将覆盖产线/模块配置文件中的设置**。以下将结合一些例子介绍如何修改配置。
+由于实际部署环境和需求的多样性,默认配置可能无法满足所有要求。这时,可能需要手动调整高性能推理配置。用户可以通过修改**产线/模块配置文件**、**CLI**或**Python API**所传递参数中的 `hpi_config` 字段内容来修改配置。**通过 CLI 或 Python API 传递的参数将覆盖产线/模块配置文件中的设置**。配置文件中不同层级的配置将自动合并,最深层的配置具有最高的优先级。以下将结合一些例子介绍如何修改配置。
 
 **通用OCR产线的所有模型使用 `onnxruntime` 后端:**
 

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/3d_bev_detection.en.md

@@ -225,7 +225,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>
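The updated `use_hpip` cell above describes a tri-state switch. A short sketch of the three cases, assuming `create_pipeline` accepts the pipeline name taken from this tutorial's path (treat the name as hypothetical if your install registers it differently):

```python
from paddlex import create_pipeline

# None (the default): defer to the configuration file or the `config` argument.
pipeline = create_pipeline(pipeline="3d_bev_detection")

# Explicit True/False: override whatever the configuration says.
pipeline_hpi = create_pipeline(pipeline="3d_bev_detection", use_hpip=True)
pipeline_no_hpi = create_pipeline(pipeline="3d_bev_detection", use_hpip=False)
```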

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/3d_bev_detection.md

@@ -216,7 +216,7 @@ python paddlex/inference/models/3d_bev_detection/visualizer_3d.py --save_path=".
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/face_recognition.en.md

@@ -221,7 +221,7 @@ In the above Python script, the following steps are performed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/face_recognition.md

@@ -221,7 +221,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/general_image_recognition.en.md

@@ -189,7 +189,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/general_image_recognition.md

@@ -188,7 +188,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/human_keypoint_detection.en.md

@@ -208,7 +208,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/human_keypoint_detection.md

@@ -203,7 +203,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.en.md

@@ -147,7 +147,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/image_anomaly_detection.md

@@ -152,7 +152,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 2 - 2
docs/pipeline_usage/tutorials/cv_pipelines/image_classification.en.md

@@ -814,7 +814,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>
@@ -997,7 +997,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/image_classification.md

@@ -811,7 +811,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.en.md

@@ -182,7 +182,7 @@ In the above Python script, the following steps are performed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/image_multi_label_classification.md

@@ -183,7 +183,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.en.md

@@ -291,7 +291,7 @@ In the above Python script, the following steps are performed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/instance_segmentation.md

@@ -293,7 +293,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/object_detection.en.md

@@ -471,7 +471,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/object_detection.md

@@ -489,7 +489,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/open_vocabulary_detection.en.md

@@ -158,7 +158,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/open_vocabulary_detection.md

@@ -157,7 +157,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/open_vocabulary_segmentation.en.md

@@ -158,7 +158,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/open_vocabulary_segmentation.md

@@ -155,7 +155,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/pedestrian_attribute_recognition.en.md

@@ -200,7 +200,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/pedestrian_attribute_recognition.md

@@ -198,7 +198,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/rotated_object_detection.en.md

@@ -156,7 +156,7 @@ In the above Python script, the following steps were executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/rotated_object_detection.md

@@ -154,7 +154,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.en.md

@@ -323,7 +323,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/semantic_segmentation.md

@@ -327,7 +327,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection.en.md

@@ -174,7 +174,7 @@ In the above Python script, the following steps are performed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/small_object_detection.md

@@ -175,7 +175,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/vehicle_attribute_recognition.en.md

@@ -182,7 +182,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/cv_pipelines/vehicle_attribute_recognition.md

@@ -196,7 +196,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_v3.en.md

@@ -467,7 +467,7 @@ The relevant parameter descriptions are as follows:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_v3.md

@@ -466,7 +466,7 @@ PP-ChatOCRv3-doc 预测的流程、API说明、产出说明如下:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_v4.en.md

@@ -543,7 +543,7 @@ The following are the parameter descriptions:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_v4.md

@@ -714,7 +714,7 @@ PP-ChatOCRv4 预测的流程、API说明、产出说明如下:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/OCR.en.md

@@ -545,7 +545,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/OCR.md

@@ -555,7 +555,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/PP-StructureV3.en.md

@@ -790,7 +790,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/PP-StructureV3.md

@@ -739,7 +739,7 @@ for item in markdown_images:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/doc_preprocessor.en.md

@@ -188,7 +188,7 @@ In the above Python script, the following steps were executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/doc_preprocessor.md

@@ -190,7 +190,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.en.md

@@ -393,7 +393,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/formula_recognition.md

@@ -390,7 +390,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.en.md

@@ -665,7 +665,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/layout_parsing.md

@@ -702,7 +702,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.en.md

@@ -693,7 +693,7 @@ In the above Python script, the following steps were executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/seal_recognition.md

@@ -665,7 +665,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.en.md

@@ -747,7 +747,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition.md

@@ -700,7 +700,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_v2.en.md

@@ -767,7 +767,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/ocr_pipelines/table_recognition_v2.md

@@ -781,7 +781,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/speech_pipelines/multilingual_speech_recognition.en.md

@@ -125,7 +125,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used. Not supported for now.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used. Not supported for now.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/speech_pipelines/multilingual_speech_recognition.md

@@ -125,7 +125,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。目前暂不支持。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。目前暂不支持。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.en.md

@@ -197,7 +197,7 @@ In the above Python script, the following steps are performed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_anomaly_detection.md

@@ -199,7 +199,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.en.md

@@ -161,7 +161,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_classification.md

@@ -159,7 +159,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.en.md

@@ -213,7 +213,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/time_series_pipelines/time_series_forecasting.md

@@ -224,7 +224,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/video_pipelines/video_classification.en.md

@@ -170,7 +170,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/video_pipelines/video_classification.md

@@ -171,7 +171,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/video_pipelines/video_detection.en.md

@@ -119,7 +119,7 @@ In the above Python script, the following steps are executed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/video_pipelines/video_detection.md

@@ -121,7 +121,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/vlm_pipelines/doc_understanding.en.md

@@ -87,7 +87,7 @@ In the above Python script, the following steps are performed:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file will be used. Not supported for now.</td>
+<td>Whether to enable the high-performance inference plugin. If set to <code>None</code>, the setting from the configuration file or <code>config</code> will be used. Not supported for now.</td>
 <td><code>bool</code></td>
 <td>None</td>
 <td><code>None</code></td>

+ 1 - 1
docs/pipeline_usage/tutorials/vlm_pipelines/doc_understanding.md

@@ -88,7 +88,7 @@ for res in output:
 </tr>
 <tr>
 <td><code>use_hpip</code></td>
-<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件中的配置。目前暂不支持。</td>
+<td>是否启用高性能推理插件。如果为 <code>None</code>,则使用配置文件或 <code>config</code> 中的配置。目前暂不支持。</td>
 <td><code>bool</code> | <code>None</code></td>
 <td>无</td>
 <td><code>None</code></td>

+ 0 - 8
paddlex/inference/models/base/predictor/base_predictor.py

@@ -118,17 +118,9 @@ class BasePredictor(
         self.batch_sampler.batch_size = batch_size
         self._use_hpip = use_hpip
         if not use_hpip:
-            if hpi_config is not None:
-                logging.warning(
-                    "`hpi_config` will be ignored when not using the high-performance inference plugin."
-                )
             self._pp_option = self._prepare_pp_option(pp_option, device)
         else:
             require_hpip()
-            if pp_option is not None:
-                logging.warning(
-                    "`pp_option` will be ignored when using the high-performance inference plugin."
-                )
             self._hpi_config = self._prepare_hpi_config(hpi_config, device)
 
         logging.debug(f"{self.__class__.__name__}: {self.model_dir}")
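With the warnings removed above, both options may be supplied together and only the one matching `use_hpip` takes effect. A hedged sketch of the resulting behavior, assuming the `create_model` entry point and an illustrative model name:

```python
from paddlex import create_model

# use_hpip is False, so the Paddle Inference path is taken; the supplied
# `hpi_config` is now ignored silently instead of triggering a warning,
# which makes it cheap to keep both settings around and just flip use_hpip.
model = create_model(
    model_name="ResNet18",                  # illustrative model name
    use_hpip=False,
    hpi_config={"backend": "onnxruntime"},  # inert while use_hpip is False
)
output = model.predict("input_image.png", batch_size=1)  # placeholder input
```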

+ 7 - 9
paddlex/inference/pipelines/__init__.py

@@ -126,7 +126,8 @@ def create_pipeline(
         pp_option (Optional[PaddlePredictorOption], optional): The options for
             the PaddlePredictor. Defaults to None.
         use_hpip (Optional[bool], optional): Whether to use the high-performance
-            inference plugin (HPIP). Defaults to None.
+            inference plugin (HPIP). If set to None, the setting from the
+            configuration file or `config` will be used. Defaults to None.
         hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional): The
             high-performance inference configuration dictionary.
             Defaults to None.
@@ -150,20 +151,17 @@ def create_pipeline(
                 pipeline,
                 config["pipeline_name"],
             )
+        config = config.copy()
     pipeline_name = config["pipeline_name"]
-    if device is None:
-        device = config.get("device", None)
-    if use_hpip is None:
-        use_hpip = config.get("use_hpip", False)
-    if hpi_config is None:
-        hpi_config = config.get("hpi_config", None)
+    if use_hpip is not None:
+        config["use_hpip"] = use_hpip
+    if hpi_config is not None:
+        config["hpi_config"] = hpi_config
 
     pipeline = BasePipeline.get(pipeline_name)(
         config=config,
         device=device,
         pp_option=pp_option,
-        use_hpip=use_hpip,
-        hpi_config=hpi_config,
         *args,
         **kwargs,
     )
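A pure-Python sketch of the override semantics implemented above: explicit arguments are written into a copy of the loaded config, so the pipeline resolves `use_hpip` and `hpi_config` from a single source of truth. The helper below is illustrative, not part of the codebase:

```python
from typing import Any, Dict, Optional

def merge_overrides(
    config: Dict[str, Any],
    use_hpip: Optional[bool] = None,
    hpi_config: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
    config = config.copy()  # avoid mutating the caller's dict
    if use_hpip is not None:
        config["use_hpip"] = use_hpip        # explicit arg wins over the file
    if hpi_config is not None:
        config["hpi_config"] = hpi_config
    return config

cfg = {"pipeline_name": "OCR", "use_hpip": False}
print(merge_overrides(cfg, use_hpip=True))
# -> {'pipeline_name': 'OCR', 'use_hpip': True}
```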

+ 2 - 2
paddlex/inference/pipelines/anomaly_detection/pipeline.py

@@ -44,9 +44,9 @@ class AnomalyDetectionPipeline(BasePipeline):
             device (str, optional): Device to run the predictions on. Defaults to None.
             pp_option (PaddlePredictorOption, optional): PaddlePredictor options. Defaults to None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
 

+ 2 - 2
paddlex/inference/pipelines/base.py

@@ -48,9 +48,9 @@ class BasePipeline(ABC, metaclass=AutoRegisterABCMetaClass):
             device (str, optional): The device to use for prediction. Defaults to None.
             pp_option (PaddlePredictorOption, optional): The options for PaddlePredictor. Defaults to None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
         super().__init__()

+ 2 - 2
paddlex/inference/pipelines/doc_preprocessor/pipeline.py

@@ -48,9 +48,9 @@ class DocPreprocessorPipeline(BasePipeline):
             device (str, optional): Device to run the predictions on. Defaults to None.
             pp_option (PaddlePredictorOption, optional): PaddlePredictor options. Defaults to None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
 

+ 2 - 2
paddlex/inference/pipelines/formula_recognition/pipeline.py

@@ -52,9 +52,9 @@ class FormulaRecognitionPipeline(BasePipeline):
             device (str, optional): Device to run the predictions on. Defaults to None.
             pp_option (PaddlePredictorOption, optional): PaddlePredictor options. Defaults to None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
 

+ 2 - 2
paddlex/inference/pipelines/image_classification/pipeline.py

@@ -45,9 +45,9 @@ class ImageClassificationPipeline(BasePipeline):
             device (str): The device to run the prediction on. Default is None.
             pp_option (PaddlePredictorOption): Options for PaddlePaddle predictor. Default is None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
         super().__init__(

+ 2 - 2
paddlex/inference/pipelines/image_multilabel_classification/pipeline.py

@@ -44,8 +44,8 @@ class ImageMultiLabelClassificationPipeline(BasePipeline):
             config (Dict): Configuration dictionary containing model and other parameters.
             device (str): The device to run the prediction on. Default is None.
             pp_option (PaddlePredictorOption): Options for PaddlePaddle predictor. Default is None.
-            use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+            use_hpip (Optional[bool], optional): Whether to use the
+                high-performance inference plugin (HPIP) by default. Defaults to None.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
                 The high-performance inference configuration dictionary.
                 Defaults to None.

+ 2 - 2
paddlex/inference/pipelines/instance_segmentation/pipeline.py

@@ -45,9 +45,9 @@ class InstanceSegmentationPipeline(BasePipeline):
             device (str): The device to run the prediction on. Default is None.
             pp_option (PaddlePredictorOption): Options for PaddlePaddle predictor. Default is None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
         super().__init__(

+ 2 - 2
paddlex/inference/pipelines/keypoint_detection/pipeline.py

@@ -47,9 +47,9 @@ class KeypointDetectionPipeline(BasePipeline):
             device (str): The device to run the prediction on. Default is None.
             pp_option (PaddlePredictorOption): Options for PaddlePaddle predictor. Default is None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
         super().__init__(

+ 2 - 2
paddlex/inference/pipelines/layout_parsing/pipeline.py

@@ -51,9 +51,9 @@ class LayoutParsingPipeline(BasePipeline):
             device (str, optional): Device to run the predictions on. Defaults to None.
             pp_option (PaddlePredictorOption, optional): PaddlePredictor options. Defaults to None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
 

+ 2 - 2
paddlex/inference/pipelines/layout_parsing/pipeline_v2.py

@@ -68,9 +68,9 @@ class LayoutParsingPipelineV2(BasePipeline):
             device (str, optional): Device to run the predictions on. Defaults to None.
             pp_option (PaddlePredictorOption, optional): PaddlePredictor options. Defaults to None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
 

+ 2 - 2
paddlex/inference/pipelines/m_3d_bev_detection/pipeline.py

@@ -45,9 +45,9 @@ class BEVDet3DPipeline(BasePipeline):
             device (str): The device to run the prediction on. Default is None.
             pp_option (PaddlePredictorOption): Options for PaddlePaddle predictor. Default is None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
         super().__init__(

+ 2 - 2
paddlex/inference/pipelines/multilingual_speech_recognition/pipeline.py

@@ -45,9 +45,9 @@ class MultilingualSpeechRecognitionPipeline(BasePipeline):
             device (str): The device to run the prediction on. Default is None.
             pp_option (PaddlePredictorOption): Options for PaddlePaddle predictor. Default is None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
         super().__init__(

+ 2 - 2
paddlex/inference/pipelines/object_detection/pipeline.py

@@ -45,9 +45,9 @@ class ObjectDetectionPipeline(BasePipeline):
             device (str): The device to run the prediction on. Default is None.
             pp_option (PaddlePredictorOption): Options for PaddlePaddle predictor. Default is None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
         super().__init__(

+ 2 - 2
paddlex/inference/pipelines/ocr/pipeline.py

@@ -55,9 +55,9 @@ class OCRPipeline(BasePipeline):
             device (str, optional): Device to run the predictions on. Defaults to None.
             pp_option (PaddlePredictorOption, optional): PaddlePredictor options. Defaults to None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
         super().__init__(

+ 2 - 2
paddlex/inference/pipelines/open_vocabulary_detection/pipeline.py

@@ -45,9 +45,9 @@ class OpenVocabularyDetectionPipeline(BasePipeline):
             device (str): The device to run the prediction on. Default is None.
             pp_option (PaddlePredictorOption): Options for PaddlePaddle predictor. Default is None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
         super().__init__(

+ 2 - 2
paddlex/inference/pipelines/open_vocabulary_segmentation/pipeline.py

@@ -47,9 +47,9 @@ class OpenVocabularySegmentationPipeline(BasePipeline):
             device (str): The device to run the prediction on. Default is None.
             pp_option (PaddlePredictorOption): Options for PaddlePaddle predictor. Default is None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
         super().__init__(

+ 2 - 2
paddlex/inference/pipelines/pp_chatocr/pipeline_base.py

@@ -37,9 +37,9 @@ class PP_ChatOCR_Pipeline(BasePipeline):
             device (str, optional): Device to run the predictions on. Defaults to None.
             pp_option (PaddlePredictorOption, optional): PaddlePredictor options. Defaults to None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
 

+ 2 - 2
paddlex/inference/pipelines/pp_chatocr/pipeline_v3.py

@@ -54,9 +54,9 @@ class PP_ChatOCRv3_Pipeline(PP_ChatOCR_Pipeline):
             device (str, optional): Device to run the predictions on. Defaults to None.
             pp_option (PaddlePredictorOption, optional): PaddlePredictor options. Defaults to None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
             initial_predictor (bool, optional): Whether to initialize the predictor. Defaults to True.
         """

+ 2 - 2
paddlex/inference/pipelines/pp_chatocr/pipeline_v4.py

@@ -62,9 +62,9 @@ class PP_ChatOCRv4_Pipeline(PP_ChatOCR_Pipeline):
             device (str, optional): Device to run the predictions on. Defaults to None.
             pp_option (PaddlePredictorOption, optional): PaddlePredictor options. Defaults to None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
             initial_predictor (bool, optional): Whether to initialize the predictor. Defaults to True.
         """

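The two PP-ChatOCR pipelines additionally document `initial_predictor`, which defers model loading when set to False. A hypothetical sketch of using it, assuming `create_pipeline` forwards the keyword to the pipeline class (that forwarding is not shown in this diff):

```python
# Hypothetical: defer predictor construction until first use. The
# "PP-ChatOCRv4-doc" pipeline name and the keyword forwarding are
# assumptions, not taken from this diff.
from paddlex import create_pipeline

pipeline = create_pipeline(
    pipeline="PP-ChatOCRv4-doc",
    initial_predictor=False,  # skip eager initialization; build predictors on demand
)
```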
+ 2 - 2
paddlex/inference/pipelines/rotated_object_detection/pipeline.py

@@ -45,9 +45,9 @@ class RotatedObjectDetectionPipeline(BasePipeline):
             device (str): The device to run the prediction on. Default is None.
             pp_option (PaddlePredictorOption): Options for PaddlePaddle predictor. Default is None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
         super().__init__(

+ 2 - 2
paddlex/inference/pipelines/seal_recognition/pipeline.py

@@ -49,9 +49,9 @@ class SealRecognitionPipeline(BasePipeline):
             device (str, optional): Device to run the predictions on. Defaults to None.
             pp_option (PaddlePredictorOption, optional): PaddlePredictor options. Defaults to None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
 

+ 2 - 2
paddlex/inference/pipelines/semantic_segmentation/pipeline.py

@@ -45,9 +45,9 @@ class SemanticSegmentationPipeline(BasePipeline):
             device (str): The device to run the prediction on. Default is None.
             pp_option (PaddlePredictorOption): Options for PaddlePaddle predictor. Default is None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
         super().__init__(

+ 2 - 2
paddlex/inference/pipelines/small_object_detection/pipeline.py

@@ -45,9 +45,9 @@ class SmallObjectDetectionPipeline(BasePipeline):
             device (str): The device to run the prediction on. Default is None.
             pp_option (PaddlePredictorOption): Options for PaddlePaddle predictor. Default is None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
         super().__init__(

+ 2 - 2
paddlex/inference/pipelines/table_recognition/pipeline.py

@@ -54,9 +54,9 @@ class TableRecognitionPipeline(BasePipeline):
             device (str, optional): Device to run the predictions on. Defaults to None.
             pp_option (PaddlePredictorOption, optional): PaddlePredictor options. Defaults to None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
 

+ 2 - 2
paddlex/inference/pipelines/table_recognition/pipeline_v2.py

@@ -64,9 +64,9 @@ class TableRecognitionPipelineV2(BasePipeline):
             device (str, optional): Device to run the predictions on. Defaults to None.
             pp_option (PaddlePredictorOption, optional): PaddlePredictor options. Defaults to None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
 

+ 2 - 2
paddlex/inference/pipelines/ts_anomaly_detection/pipeline.py

@@ -44,9 +44,9 @@ class TSAnomalyDetPipeline(BasePipeline):
             device (str, optional): Device to run the predictions on. Defaults to None.
             pp_option (PaddlePredictorOption, optional): PaddlePredictor options. Defaults to None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
 

+ 2 - 2
paddlex/inference/pipelines/ts_classification/pipeline.py

@@ -44,9 +44,9 @@ class TSClsPipeline(BasePipeline):
             device (str, optional): Device to run the predictions on. Defaults to None.
             pp_option (PaddlePredictorOption, optional): PaddlePredictor options. Defaults to None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
 

+ 2 - 2
paddlex/inference/pipelines/ts_forecasting/pipeline.py

@@ -44,9 +44,9 @@ class TSFcPipeline(BasePipeline):
             device (str, optional): Device to run the predictions on. Defaults to None.
             pp_option (PaddlePredictorOption, optional): PaddlePredictor options. Defaults to None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
 

+ 2 - 2
paddlex/inference/pipelines/video_classification/pipeline.py

@@ -45,9 +45,9 @@ class VideoClassificationPipeline(BasePipeline):
             device (str): The device to run the prediction on. Default is None.
             pp_option (PaddlePredictorOption): Options for PaddlePaddle predictor. Default is None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
         super().__init__(

+ 2 - 2
paddlex/inference/pipelines/video_detection/pipeline.py

@@ -45,9 +45,9 @@ class VideoDetectionPipeline(BasePipeline):
             device (str): The device to run the prediction on. Default is None.
             pp_option (PaddlePredictorOption): Options for PaddlePaddle predictor. Default is None.
             use_hpip (bool, optional): Whether to use the high-performance
-                inference plugin (HPIP). Defaults to False.
+                inference plugin (HPIP) by default. Defaults to False.
             hpi_config (Optional[Union[Dict[str, Any], HPIConfig]], optional):
-                The high-performance inference configuration dictionary.
+                The default high-performance inference configuration dictionary.
                 Defaults to None.
         """
         super().__init__(

Some files were not shown because too many files changed in this diff
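Taken together, the reworded docstrings describe pipeline-wide defaults: `use_hpip` turns the plugin on for the whole pipeline unless overridden, and `hpi_config` supplies the default plugin settings. A minimal usage sketch, assuming the public `create_pipeline` entry point; the backend value is illustrative and should be checked against the high-performance inference documentation:

```python
# Minimal sketch: enable HPIP for a pipeline and pass the default
# configuration as a plain dict (the Union type in the docstrings also
# allows an HPIConfig instance). The "backend" key is an assumption
# drawn from the HPI docs, not from this diff.
from paddlex import create_pipeline

pipeline = create_pipeline(
    pipeline="OCR",
    device="gpu:0",
    use_hpip=True,
    hpi_config={"backend": "onnxruntime"},
)

for res in pipeline.predict("sample.png"):
    res.print()
```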