
add pipeline doc for docbee (#3841)

Zhang Zelun committed 7 months ago
parent commit 528a51b88d

+ 469 - 0
docs/pipeline_usage/tutorials/vlm_pipelines/doc_understanding.en.md

@@ -0,0 +1,469 @@
+# Document Understanding Pipeline User Guide
+
+## 1. Introduction to Document Understanding Pipeline
+
+The Document Understanding Pipeline is an advanced document processing technology based on Vision-Language Models (VLMs), designed to overcome the limitations of traditional document processing. Traditional methods rely on fixed templates or predefined rules to parse documents; this pipeline instead uses the multimodal capabilities of a VLM to answer user questions accurately by integrating visual and linguistic information, requiring only a document image and a user query as input. The technology needs no pre-training for specific document formats, so it can flexibly handle diverse document content, significantly improving the generalization and practicality of document processing. It has broad application prospects in scenarios such as intelligent Q&A and information extraction. Currently, this pipeline does not support secondary development of the VLM models, but future support is planned.
+
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/pipelines/doc_understanding/doc_understanding.png">
+
+<b>The Document Understanding Pipeline includes a document-based vision-language model module. You can choose a model based on the information in the table below.</b>
+
+<b>If you prioritize model accuracy, choose a model with higher accuracy; if you care more about inference speed, choose a faster model; if you are concerned about storage size, choose a model with a smaller storage footprint.</b>
+
+<p><b>Document-based Vision-Language Model Modules (Optional):</b></p>
+
+<table>
+<tr>
+<th>Model</th><th>Model Download Link</th>
+<th>Model Storage Size (GB)</th>
+<th>Description</th>
+</tr>
+<tr>
+<td>PP-DocBee-2B</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PP-DocBee-2B_infer.tar">Inference Model</a></td>
+<td>4.2</td>
+<td rowspan="2">PP-DocBee is a multimodal large model developed by the PaddlePaddle team that focuses on document understanding and performs excellently on Chinese document understanding tasks. It is fine-tuned on nearly 5 million multimodal document-understanding samples, covering general VQA, OCR, charts, text-rich documents, mathematics and complex reasoning, synthetic data, and plain text, with tuned ratios across these data types. On several authoritative English document understanding benchmarks in academia, PP-DocBee achieves SOTA among models of the same parameter scale. In internal Chinese business scenarios, PP-DocBee also outperforms currently popular open- and closed-source models.</td>
+</tr>
+<tr>
+<td>PP-DocBee-7B</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0rc0/PP-DocBee-7B_infer.tar">Inference Model</a></td>
+<td>15.8</td>
+</tr>
+</table>
+
+## 2. Quick Start
+
+### 2.1 Local Experience
+
+> ❗ Before using the Document Understanding Pipeline locally, ensure you have installed the PaddleX wheel package according to the [PaddleX Local Installation Guide](../../../installation/installation.md). If you wish to selectively install dependencies, refer to the relevant instructions in the installation guide. The dependency group for this pipeline is `multimodal`.
+
+#### 2.1.1 Integration via Python Script
+
+* The Document Understanding Pipeline can be run for quick inference with just a few lines of code, as shown below:
+
+```python
+from paddlex import create_pipeline
+pipeline = create_pipeline(pipeline="doc_understanding")
+output = pipeline.predict(
+    {
+        "image": "https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/medal_table.png",
+        "query": "Identify the contents of this table"
+    }
+)
+for res in output:
+    res.print()
+    res.save_to_json("./output/")
+```
+
+In the above Python script, the following steps are performed:
+
+1. Instantiate the Document Understanding Pipeline object through `create_pipeline()`. The parameter details are as follows:
+
+<table>
+<thead>
+<tr>
+<th>Parameter</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><code>pipeline</code></td>
+<td>Pipeline name or configuration file path. If it's a pipeline name, it must be a pipeline supported by PaddleX.</td>
+<td><code>str</code></td>
+<td><code>None</code></td>
+</tr>
+<tr>
+<td><code>config</code></td>
+<td>Specific configuration information for the pipeline (if set simultaneously with <code>pipeline</code>, it has a higher priority and requires the pipeline name to be consistent with <code>pipeline</code>).</td>
+<td><code>dict[str, Any]</code></td>
+<td><code>None</code></td>
+</tr>
+<tr>
+<td><code>device</code></td>
+<td>Inference device for the pipeline. Supports specifying a particular device, such as "gpu:0" for the first GPU, "npu:0" for other hardware such as an NPU, or "cpu" for the CPU.</td>
+<td><code>str</code></td>
+<td><code>None</code></td>
+</tr>
+<tr>
+<td><code>use_hpip</code></td>
+<td>Whether to enable high-performance inference, only available if the pipeline supports high-performance inference.</td>
+<td><code>bool</code></td>
+<td><code>False</code></td>
+</tr>
+</tbody>
+</table>
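+
+For example, a minimal sketch (the device string is illustrative) of pinning the pipeline to a specific device at creation time:
+
+```python
+from paddlex import create_pipeline
+
+# Pin the pipeline to a specific device at creation time; replace "cpu"
+# with e.g. "gpu:0" if a suitable GPU is available.
+pipeline = create_pipeline(pipeline="doc_understanding", device="cpu")
+```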
+
+2. Call the `predict()` method of the Document Understanding Pipeline object for inference prediction. This method returns a `generator`. Below are the parameters of the `predict()` method and their descriptions:
+
+<table>
+<thead>
+<tr>
+<th>Parameter</th>
+<th>Description</th>
+<th>Type</th>
+<th>Options</th>
+<th>Default</th>
+</tr>
+</thead>
+<tr>
+<td><code>input</code></td>
+<td>Data to be predicted, currently only supports dictionary-type input</td>
+<td><code>Python Dict</code></td>
+<td>
+<ul>
+  <li><b>Python Dict</b>: For PP-DocBee, the input format is: <code>{"image": "/path/to/image", "query": "user question"}</code>, representing the input image and the corresponding user question.</li>
+</ul>
+</td>
+<td><code>None</code></td>
+</tr>
+<tr>
+<td><code>device</code></td>
+<td>Inference device for the pipeline</td>
+<td><code>str|None</code></td>
+<td>
+<ul>
+  <li><b>CPU</b>: e.g., <code>cpu</code> for CPU inference;</li>
+  <li><b>GPU</b>: e.g., <code>gpu:0</code> for inference on the first GPU;</li>
+  <li><b>NPU</b>: e.g., <code>npu:0</code> for inference on the first NPU;</li>
+  <li><b>XPU</b>: e.g., <code>xpu:0</code> for inference on the first XPU;</li>
+  <li><b>MLU</b>: e.g., <code>mlu:0</code> for inference on the first MLU;</li>
+  <li><b>DCU</b>: e.g., <code>dcu:0</code> for inference on the first DCU;</li>
+  <li><b>None</b>: If set to <code>None</code>, the value this parameter was initialized with in the pipeline is used. During initialization, the local GPU 0 device is used preferentially if available; otherwise, the CPU is used;</li>
+</ul>
+</td>
+<td><code>None</code></td>
+</tr>
+</table>
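+
+For example, a short sketch of a single prediction call; the local image path is a placeholder, and the per-call `device` argument overrides the device the pipeline was created with:
+
+```python
+output = pipeline.predict(
+    {
+        "image": "./path/to/your_document.png",  # placeholder local path
+        "query": "What is the title of this document?"
+    },
+    device="gpu:0",  # per-call device override
+)
+```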
+
+3. Process the prediction results. The prediction result for each sample is a corresponding Result object, and supports operations such as printing and saving as a `json` file:
+
+<table>
+<thead>
+<tr>
+<th>Method</th>
+<th>Description</th>
+<th>Parameter</th>
+<th>Type</th>
+<th>Description</th>
+<th>Default</th>
+</tr>
+</thead>
+<tr>
+<td rowspan = "3"><code>print()</code></td>
+<td rowspan = "3">Print results to the terminal</td>
+<td><code>format_json</code></td>
+<td><code>bool</code></td>
+<td>Whether to format the output content with <code>JSON</code> indentation</td>
+<td><code>True</code></td>
+</tr>
+<tr>
+<td><code>indent</code></td>
+<td><code>int</code></td>
+<td>Specify indentation level to beautify the <code>JSON</code> output, making it more readable, effective only when <code>format_json</code> is <code>True</code></td>
+<td>4</td>
+</tr>
+<tr>
+<td><code>ensure_ascii</code></td>
+<td><code>bool</code></td>
+<td>Control whether to escape non-<code>ASCII</code> characters to <code>Unicode</code>. When set to <code>True</code>, all non-<code>ASCII</code> characters will be escaped; <code>False</code> retains the original characters, effective only when <code>format_json</code> is <code>True</code></td>
+<td><code>False</code></td>
+</tr>
+<tr>
+<td rowspan = "3"><code>save_to_json()</code></td>
+<td rowspan = "3">Save results as a json format file</td>
+<td><code>save_path</code></td>
+<td><code>str</code></td>
+<td>Path to save the file. If it is a directory, the saved file is named after the input file</td>
+<td><code>None</code></td>
+</tr>
+<tr>
+<td><code>indent</code></td>
+<td><code>int</code></td>
+<td>Specify indentation level to beautify the <code>JSON</code> output, making it more readable, effective only when <code>format_json</code> is <code>True</code></td>
+<td>4</td>
+</tr>
+<tr>
+<td><code>ensure_ascii</code></td>
+<td><code>bool</code></td>
+<td>Control whether to escape non-<code>ASCII</code> characters to <code>Unicode</code>. When set to <code>True</code>, all non-<code>ASCII</code> characters will be escaped; <code>False</code> retains the original characters, effective only when <code>format_json</code> is <code>True</code></td>
+<td><code>False</code></td>
+</tr>
+</table>
+
+- Calling the `print()` method will print the results to the terminal. The printed content includes:
+
+  - `image`: `(str)` The input path of the image
+
+  - `query`: `(str)` The question related to the input image
+
+  - `result`: `(str)` The output result from the model
+
+- Calling the `save_to_json()` method will save the above content to the specified `save_path`. If specified as a directory, the saved path will be `save_path/{your_img_basename}_res.json`. If specified as a file, it will be directly saved to that file.
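+
+For example, a small sketch of customizing the printing and saving options described above:
+
+```python
+for res in output:
+    # Print compactly and escape non-ASCII characters.
+    res.print(format_json=True, indent=2, ensure_ascii=True)
+    # Saving to a directory yields ./output/{your_img_basename}_res.json.
+    res.save_to_json("./output/")
+```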
+
+* In addition, you can access the prediction result and the visualized image through attributes, as follows:
+
+<table>
+<thead>
+<tr>
+<th>Attribute</th>
+<th>Description</th>
+</tr>
+</thead>
+<tr>
+<td><code>json</code></td>
+<td>Get the prediction result in <code>json</code> format</td>
+</tr>
+<tr>
+<td><code>img</code></td>
+<td>Get the visualized image in <code>dict</code> format</td>
+</tr>
+</table>
+
+- The prediction result obtained from the `json` attribute is data of type dict, and the related content is consistent with the content saved by calling the `save_to_json()` method.
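+
+For instance, a brief sketch of reading fields from the `json` attribute; the key names follow the printed fields listed above and are worth verifying against the actual dict in your environment:
+
+```python
+for res in output:
+    data = res.json  # dict with the same content as save_to_json()
+    print(data["query"])   # the question that was asked
+    print(data["result"])  # the model's answer
+```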
+
+Additionally, you can obtain the configuration file for the Document Understanding Pipeline and load it for prediction. Execute the following command to save the configuration file to `my_path`:
+
+```bash
+paddlex --get_pipeline_config doc_understanding --save_path ./my_path
+```
+
+Once you have the configuration file, you can customize the various configurations of the Document Understanding Pipeline: simply set the `pipeline` parameter of `create_pipeline` to the path of the configuration file. For example:
+
+```python
+from paddlex import create_pipeline
+pipeline = create_pipeline(pipeline="./my_path/doc_understanding.yaml")
+output = pipeline.predict(
+    {
+        "image": "https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/medal_table.png",
+        "query": "Identify the contents of this table"
+    }
+)
+for res in output:
+    res.print()
+    res.save_to_json("./output/")
+```
+
+<b>Note:</b> The parameters in the configuration file are the pipeline initialization parameters. If you want to change the initialization parameters of the Document Understanding Pipeline, modify the parameters in the configuration file directly and load it for prediction. CLI prediction also accepts a configuration file: pass its path via `--pipeline`.
+
+## 3. Development Integration/Deployment
+
+If the pipeline meets your requirements for inference speed and accuracy, you can directly proceed to development integration/deployment.
+
+If you need to directly apply the pipeline in your Python project, you can refer to the example code in [2.1.1 Integration via Python Script](#211-integration-via-python-script).
+
+In addition, PaddleX also provides three other deployment methods, as detailed below:
+
+🚀 <b>High-Performance Inference</b> (currently not supported by this pipeline): In real-world production environments, many applications have strict performance requirements for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For details, please refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_inference.md).
+
+☁️ <b>Service Deployment</b>: Service deployment is a common form of deployment in real-world production environments. By encapsulating inference functions into services, clients can access these services via network requests to obtain inference results. PaddleX supports various pipeline service deployment solutions. For detailed pipeline service deployment processes, please refer to the [PaddleX Service Deployment Guide](../../../pipeline_deploy/serving.md).
+
+Below is a basic service deployment API reference and multilingual service call example:
+
+<details><summary>API Reference</summary>
+
+<p>For the main operations provided by the service:</p>
+<ul>
+<li>The HTTP request method is POST.</li>
+<li>Both the request body and response body are JSON data (JSON objects).</li>
+<li>When the request is processed successfully, the response status code is <code>200</code>, and the response body attributes are as follows:</li>
+</ul>
+<table>
+<thead>
+<tr>
+<th>Name</th>
+<th>Type</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><code>logId</code></td>
+<td><code>string</code></td>
+<td>The UUID of the request.</td>
+</tr>
+<tr>
+<td><code>errorCode</code></td>
+<td><code>integer</code></td>
+<td>Error code. Fixed at <code>0</code>.</td>
+</tr>
+<tr>
+<td><code>errorMsg</code></td>
+<td><code>string</code></td>
+<td>Error description. Fixed at <code>"Success"</code>.</td>
+</tr>
+<tr>
+<td><code>result</code></td>
+<td><code>object</code></td>
+<td>Operation result.</td>
+</tr>
+</tbody>
+</table>
+<ul>
+<li>When the request is not processed successfully, the response body attributes are as follows:</li>
+</ul>
+<table>
+<thead>
+<tr>
+<th>Name</th>
+<th>Type</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><code>logId</code></td>
+<td><code>string</code></td>
+<td>The UUID of the request.</td>
+</tr>
+<tr>
+<td><code>errorCode</code></td>
+<td><code>integer</code></td>
+<td>Error code. The same as the response status code.</td>
+</tr>
+<tr>
+<td><code>errorMsg</code></td>
+<td><code>string</code></td>
+<td>Error description.</td>
+</tr>
+</tbody>
+</table>
+<p>The main operations provided by the service are as follows:</p>
+<ul>
+<li><b><code>infer</code></b></li>
+</ul>
+<p>Answer a question about a document image.</p>
+<p><code>POST /document-understanding</code></p>
+<ul>
+<li>The request body attributes are as follows (the endpoint path and field names mirror the pipeline's Python input and are assumptions to verify against your deployed service):</li>
+</ul>
+<table>
+<thead>
+<tr>
+<th>Name</th>
+<th>Type</th>
+<th>Description</th>
+<th>Required</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><code>image</code></td>
+<td><code>string</code></td>
+<td>A URL of the image file accessible by the server or the Base64 encoded result of the image file content.</td>
+<td>Yes</td>
+</tr>
+<tr>
+<td><code>query</code></td>
+<td><code>string</code></td>
+<td>The question about the document image.</td>
+<td>Yes</td>
+</tr>
+</tbody>
+</table>
+<ul>
+<li>When the request is processed successfully, the response body's <code>result</code> has the following attributes:</li>
+</ul>
+<table>
+<thead>
+<tr>
+<th>Name</th>
+<th>Type</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><code>result</code></td>
+<td><code>string</code></td>
+<td>The model's answer to the query.</td>
+</tr>
+</tbody>
+</table>
+<p>An example of <code>result</code> is as follows:</p>
+<pre><code class="language-json">{
+"result": "xxxxxx"
+}
+</code></pre></details>
+
+<details><summary>Examples of Multilingual Service Calls</summary>
+
+<details>
+<summary>Python</summary>
+
+<pre><code class="language-python">import base64
+
+import requests
+
+# A minimal sketch of calling the service. The URL, endpoint path, and
+# response field names are assumptions based on the API reference above;
+# adjust them to match your deployment.
+API_URL = "http://localhost:8080/document-understanding"
+image_path = "./demo.png"  # placeholder: path to a local document image
+
+# Read the image and encode it as Base64, as required by the `image` field.
+with open(image_path, "rb") as f:
+    image_data = base64.b64encode(f.read()).decode("ascii")
+
+payload = {"image": image_data, "query": "Identify the contents of this table"}
+
+response = requests.post(API_URL, json=payload)
+assert response.status_code == 200
+result = response.json()["result"]
+print(result["result"])  # the model's answer
+</code></pre></details>
+</details>
+<br/>
+
+📱 <b>Edge Deployment</b>: Edge deployment places computing and data processing on the user's device itself, so the device processes data directly without relying on a remote server. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
+
+You can choose the appropriate deployment method for your needs and proceed with subsequent AI application integration.
+
+
+## 4. Secondary Development
+
+Currently, this pipeline does not support fine-tuning training and only supports inference integration. Future support for fine-tuning training is planned.
+
+## 5. Multi-Hardware Support
+
+Currently, this pipeline only supports GPU and CPU inference. Future support for more hardware adaptations is planned.
