
[Fix] Use strict version check for paddle2onnx (#3867)

* Use strict version check for paddle2onnx

* Fix cache doc
Lin Manhui, 7 months ago
parent
commit
d2a88e8a7b
4 changed files with 61 additions and 36 deletions
  1. docs/pipeline_deploy/high_performance_inference.en.md (+4, -4)
  2. docs/pipeline_deploy/high_performance_inference.md (+14, -14)
  3. paddlex/utils/deps.py (+33, -9)
  4. paddlex/utils/install.py (+10, -9)

+ 4 - 4
docs/pipeline_deploy/high_performance_inference.en.md

@@ -24,7 +24,7 @@ In real production environments, many applications impose strict performance met
 
 Before using the high-performance inference plugin, please ensure that you have completed the PaddleX installation according to the [PaddleX Local Installation Tutorial](../installation/installation.en.md) and have run the quick inference using the PaddleX pipeline command line or the PaddleX pipeline Python script as described in the usage instructions.
 
-The high-performance inference plugin supports handling multiple model formats, including **PaddlePaddle static graph (`.pdmodel`, `.json`)**, **ONNX (`.onnx`)** and **Huawei OM (`.om`)**, among others. For ONNX models, it is recommended to convert them using the [Paddle2ONNX Plugin](./paddle2onnx.en.md). If multiple model formats are present in the model directory, PaddleX will automatically choose the appropriate one as needed, and aotimatic model conversion may be performed.
+The high-performance inference plugin supports handling multiple model formats, including **PaddlePaddle static graph (`.pdmodel`, `.json`)**, **ONNX (`.onnx`)** and **Huawei OM (`.om`)**, among others. For ONNX models, you can convert them using the [Paddle2ONNX Plugin](./paddle2onnx.en.md). If multiple model formats are present in the model directory, PaddleX will automatically choose the appropriate one as needed, and automatic model conversion may be performed. **It is recommended to install the Paddle2ONNX plugin first before installing the high-performance inference plugin, so that PaddleX can convert model formats when needed.**
 
 ### 1.1 Installing the High-Performance Inference Plugin
 
@@ -248,7 +248,7 @@ Common configuration items for high-performance inference include:
 </tr>
 <tr>
   <td><code>auto_paddle2onnx</code></td>
-  <td>Whether to enable the <a href="./paddle2onnx.en.md">Paddle2ONNX plugin</a> to automatically convert a Paddle model to an ONNX model.</td>
+  <td>Whether to automatically convert the PaddlePaddle static graph model to an ONNX model. When the Paddle2ONNX plugin is unavailable, no conversion will be performed.</td>
   <td><code>bool</code></td>
   <td><code>True</code></td>
 </tr>
@@ -494,9 +494,9 @@ SubModules:
 
 ### 2.5 Model Cache Description
 
-The model cache is stored in the `.cache` directory under the model directory, including files such as `shape_range_info.pbtxt` and those starting with `trt_serialized` generated when using the `tensorrt` or `paddle` backends.
+The model caches are stored in the `.cache` directory under the model directory.
 
-**After modifying TensorRT-related configurations, it is recommended to clear the cache to avoid the new configuration being overridden by the cache.**
+**After modifying configurations related to Paddle Inference TensorRT subgraph engine or TensorRT, it is recommended to clear the caches to avoid the new configuration being overridden by the cache.**
 
 When the `auto_paddle2onnx` option is enabled, an `inference.onnx` file may be automatically generated in the model directory.
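The cache note above is the practical takeaway for users: after changing TensorRT-related settings, the per-model cache should be cleared. A minimal sketch of doing that in Python, assuming only that the cache lives in `<model_dir>/.cache` as the updated docs state (the model directory path below is a hypothetical placeholder):

```python
import shutil
from pathlib import Path

# Hypothetical model directory; substitute your own.
model_dir = Path("./PP-OCRv5_mobile_det_infer")
cache_dir = model_dir / ".cache"

if cache_dir.is_dir():
    # Remove cached TensorRT serializations / shape-range info so that
    # changed TensorRT-related settings actually take effect on the next run.
    shutil.rmtree(cache_dir)
    print(f"Cleared model cache: {cache_dir}")
else:
    print("No model cache to clear.")
```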
 

+ 14 - 14
docs/pipeline_deploy/high_performance_inference.md

@@ -24,7 +24,7 @@ comments: true
 
 使用高性能推理插件前,请确保您已经按照 [PaddleX本地安装教程](../installation/installation.md) 完成了PaddleX的安装,且按照PaddleX产线命令行使用说明或PaddleX产线Python脚本使用说明跑通了产线的快速推理。
 
-高性能推理插件支持处理 **PaddlePaddle 静态图(`.pdmodel`、 `.json`)**、**ONNX(`.onnx`)**、**华为 OM(`.om`)** 等多种模型格式。对于 ONNX 模型,建议使用 [Paddle2ONNX 插件](./paddle2onnx.md) 转换得到。如果模型目录中存在多种格式的模型,PaddleX 会根据需要自动选择,并可能进行自动模型转换。
+高性能推理插件支持处理 **PaddlePaddle 静态图(`.pdmodel`、 `.json`)**、**ONNX(`.onnx`)**、**华为 OM(`.om`)** 等多种模型格式。对于 ONNX 模型,可以使用 [Paddle2ONNX 插件](./paddle2onnx.md) 转换得到。如果模型目录中存在多种格式的模型,PaddleX 会根据需要自动选择,并可能进行自动模型转换。**建议在安装高性能推理插件前,首先安装 Paddle2ONNX 插件,以便 PaddleX 可以在需要时转换模型格式。**
 
 ### 1.1 安装高性能推理插件
 
@@ -232,25 +232,25 @@ output = model.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/
 <tbody>
 <tr>
 <td><code>auto_config</code></td>
-<td>是否启用安全自动配置模式。<br /><code>True</code>为启用安全自动配置模式,<code>False</code>为启用无限制手动配置模式。</td>
+<td>是否启用安全自动配置模式。<br /><code>True</code> 为启用安全自动配置模式,<code>False</code> 为启用无限制手动配置模式。</td>
 <td><code>bool</code></td>
 <td><code>True</code></td>
 </tr>
 <tr>
   <td><code>backend</code></td>
-  <td>用于指定要使用的推理后端。在无限制手动配置模式下不能为<code>None</code>。</td>
+  <td>用于指定要使用的推理后端。在无限制手动配置模式下不能为 <code>None</code>。</td>
   <td><code>str | None</code></td>
   <td><code>None</code></td>
 </tr>
 <tr>
   <td><code>backend_config</code></td>
-  <td>推理后端的配置,若不为<code>None</code>则可以覆盖推理后端的默认配置项。</td>
+  <td>推理后端的配置,若不为 <code>None</code> 则可以覆盖推理后端的默认配置项。</td>
   <td><code>dict | None</code></td>
   <td><code>None</code></td>
 </tr>
 <tr>
   <td><code>auto_paddle2onnx</code></td>
-  <td>是否启用 <a href="./paddle2onnx.md">Paddle2ONNX插件</a> 将Paddle模型自动转换为ONNX模型。</td>
+  <td>是否将 PaddlePaddle 静态图模型自动转换为 ONNX 模型。当 Paddle2ONNX 插件不可用时,不执行转换。</td>
   <td><code>bool</code></td>
   <td><code>True</code></td>
 </tr>
@@ -305,16 +305,16 @@ output = model.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/
   </tr>
   <tr>
     <td><code>openvino</code></td>
-    <td><code>cpu_num_threads</code>(<code>int</code>):CPU 推理使用的逻辑处理器数量。默认为<code>8</code>。</td>
+    <td><code>cpu_num_threads</code>(<code>int</code>):CPU 推理使用的逻辑处理器数量。默认为 <code>8</code>。</td>
   </tr>
   <tr>
     <td><code>onnxruntime</code></td>
-    <td><code>cpu_num_threads</code>(<code>int</code>):CPU 推理时算子内部的并行计算线程数。默认为<code>8</code>。</td>
+    <td><code>cpu_num_threads</code>(<code>int</code>):CPU 推理时算子内部的并行计算线程数。默认为 <code>8</code>。</td>
   </tr>
   <tr>
     <td><code>tensorrt</code></td>
     <td>
-      <code>precision</code>(<code>str</code>):使用的精度,<code>"fp16"</code>或<code>"fp32"</code>。默认为<code>"fp32"</code>。
+      <code>precision</code>(<code>str</code>):使用的精度,<code>"fp16"</code> 或 <code>"fp32"</code>。默认为 <code>"fp32"</code>。
       <br />
       <code>dynamic_shapes</code>(<code>dict</code>):动态形状配置,指定每个输入对应的最小形状、优化形状以及最大形状。格式为:<code>{输入张量名称}: [{最小形状}, {优化形状}, {最大形状}]</code>。动态形状是 TensorRT 延迟指定部分或全部张量维度直到运行时的能力,更多介绍请参考 <a href="https://docs.nvidia.com/deeplearning/tensorrt/latest/inference-library/work-dynamic-shapes.html">TensorRT 官方文档</a>。
     </td>
@@ -340,7 +340,7 @@ hpi_config:
 ```
 
 </details>
-<details><summary>👉 CLI传参方式(点击展开)</summary>
+<details><summary>👉 CLI 传参方式(点击展开)</summary>
 
 ```bash
 paddlex \
@@ -352,7 +352,7 @@ paddlex \
 ```
 
 </details>
-<details><summary>👉 Python API传参方式(点击展开)</summary>
+<details><summary>👉 Python API 传参方式(点击展开)</summary>
 
 ```python
 from paddlex import create_pipeline
@@ -379,7 +379,7 @@ Predict:
 ```
 
 </details>
-<details><summary>👉 CLI传参方式(点击展开)</summary>
+<details><summary>👉 CLI 传参方式(点击展开)</summary>
 
 ```bash
 python main.py \
@@ -393,7 +393,7 @@ python main.py \
 ```
 
 </details>
-<details><summary>👉 Python API传参方式(点击展开)</summary>
+<details><summary>👉 Python API 传参方式(点击展开)</summary>
 
 ```python
 from paddlex import create_model
@@ -495,9 +495,9 @@ SubModules:
 
 ### 2.5 模型缓存说明
 
-模型缓存会存放在模型目录下的 `.cache` 目录下,包括使用 `tensorrt` 或 `paddle` 后端时产生的 `shape_range_info.pbtxt`与`trt_serialized`开头的文件。
+模型缓存会存放在模型目录下的 `.cache` 目录。
 
-**修改 TensorRT 相关配置后,建议清理缓存,以避免出现缓存导致新配置不生效的情况。**
+**修改 Paddle Inference TensorRT 子图引擎或 TensorRT 相关配置后,建议清理缓存,以避免出现缓存导致新配置不生效的情况。**
 
 当启用`auto_paddle2onnx`选项时,可能会在模型目录下自动生成`inference.onnx`文件。
 

+ 33 - 9
paddlex/utils/deps.py

@@ -20,6 +20,7 @@ from collections import defaultdict
 from functools import lru_cache, wraps
 
 from packaging.requirements import Requirement
+from packaging.version import Version
 
 from . import logging
 
@@ -38,7 +39,7 @@ def _get_extra_name_and_remove_extra_marker(dep_spec):
         return None, dep_spec
 
 
-def get_extras():
+def _get_extras():
     metadata = importlib.metadata.metadata("paddlex")
     extras = {}
     # XXX: The `metadata.get_all` used here is not well documented.
@@ -55,23 +56,24 @@ def get_extras():
     return extras
 
 
-EXTRAS = get_extras()
+EXTRAS = _get_extras()
 
 
-def get_dep_specs():
-    dep_specs = []
+def _get_dep_specs():
+    dep_specs = defaultdict(list)
     for dep_spec in importlib.metadata.requires("paddlex"):
         extra_name, dep_spec = _get_extra_name_and_remove_extra_marker(dep_spec)
         if extra_name is None or extra_name == "all":
             dep_spec = dep_spec.rstrip()
-            dep_specs.append(dep_spec)
+            req = Requirement(dep_spec)
+            dep_specs[req.name].append(dep_spec)
     return dep_specs
 
 
-DEP_SPECS = get_dep_specs()
+DEP_SPECS = _get_dep_specs()
 
 
-def get_dep_version(dep):
+def _get_dep_version(dep):
     try:
         return importlib.metadata.version(dep)
     except importlib.metadata.PackageNotFoundError:
@@ -79,15 +81,37 @@ def get_dep_version(dep):
 
 
 @lru_cache()
-def is_dep_available(dep, /):
+def is_dep_available(dep, /, check_version=None):
     # Currently for several special deps we check if the import packages exist.
+    if dep in ("paddlepaddle", "paddle-custom-device", "ultra-infer") and check_version:
+        raise ValueError(
+            "Currently, `check_version` is not allowed to be `True` for `paddlepaddle`, `paddle-custom-device`, and `ultra-infer`."
+        )
     if dep == "paddlepaddle":
         return importlib.util.find_spec("paddle") is not None
     elif dep == "paddle-custom-device":
         return importlib.util.find_spec("paddle_custom_device") is not None
     elif dep == "ultra-infer":
         return importlib.util.find_spec("ultra_infer") is not None
-    return get_dep_version(dep) is not None
+    else:
+        if dep != "paddle2onnx" and dep not in DEP_SPECS:
+            raise ValueError("Unknown dependency")
+    if check_version is None:
+        if dep == "paddle2onnx":
+            check_version = True
+        else:
+            check_version = False
+    version = _get_dep_version(dep)
+    if version is None:
+        return False
+    if check_version:
+        if dep == "paddle2onnx":
+            return Version(version) in Requirement(get_paddle2onnx_spec()).specifier
+        for dep_spec in DEP_SPECS[dep]:
+            if Version(version) in Requirement(dep_spec).specifier:
+                return True
+    else:
+        return True
 
 
 def require_deps(*deps, obj_name=None):
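The heart of the stricter check in `is_dep_available` is the `packaging` idiom `Version(installed) in Requirement(spec).specifier`. A self-contained sketch of that idiom, using a made-up requirement string rather than the actual value returned by `get_paddle2onnx_spec()`:

```python
from packaging.requirements import Requirement
from packaging.version import Version

def satisfies(installed_version: str, requirement: str) -> bool:
    """Return True if the installed version matches the requirement's specifier."""
    req = Requirement(requirement)
    return Version(installed_version) in req.specifier

# Hypothetical pinned spec for illustration only:
print(satisfies("1.3.1", "paddle2onnx == 1.3.1"))  # True
print(satisfies("1.2.0", "paddle2onnx == 1.3.1"))  # False
```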

+ 10 - 9
paddlex/utils/install.py

@@ -29,15 +29,16 @@ def install_packages_from_requirements_file(
 
     # TODO: Precompute or cache the constraints
     with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
-        for req in DEP_SPECS:
-            req = Requirement(req)
-            if req.marker and not req.marker.evaluate():
-                continue
-            if req.url:
-                req = f"{req.name}@{req.url}"
-            else:
-                req = f"{req.name}{req.specifier}"
-            f.write(req + "\n")
+        for reqs in DEP_SPECS.values():
+            for req in reqs:
+                req = Requirement(req)
+                if req.marker and not req.marker.evaluate():
+                    continue
+                if req.url:
+                    req = f"{req.name}@{req.url}"
+                else:
+                    req = f"{req.name}{req.specifier}"
+                f.write(req + "\n")
         constraints_file_path = f.name
 
     args = [
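For context on the loop above: `DEP_SPECS` now maps each package name to a list of requirement strings, since a package may be listed more than once (for example with different environment markers). A rough sketch of how such a mapping turns into constraint lines, using hypothetical package names and markers rather than PaddleX's real dependency set:

```python
from collections import defaultdict
from packaging.requirements import Requirement

# Hypothetical stand-in for DEP_SPECS: package name -> list of requirement strings.
dep_specs = defaultdict(list)
for spec in [
    'numpy >= 1.24',
    'onnxruntime >= 1.15; platform_machine == "x86_64"',
    'onnxruntime-gpu >= 1.15; platform_machine != "x86_64"',
]:
    dep_specs[Requirement(spec).name].append(spec)

# Mirror the constraints-file logic: keep only specs whose markers match this environment.
lines = []
for reqs in dep_specs.values():
    for raw in reqs:
        req = Requirement(raw)
        if req.marker and not req.marker.evaluate():
            continue
        lines.append(f"{req.name}@{req.url}" if req.url else f"{req.name}{req.specifier}")

print("\n".join(lines))
```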