
add pp-shitu (#2243)

Tingquan Gao 1 year ago
parent
commit
9db3de702a

+ 785 - 0
docs/pipeline_usage/tutorials/cv_pipelines/general_image_recognition.md

@@ -0,0 +1,785 @@
+简体中文 | [English](general_image_recognition_en.md)
+
+# General Image Recognition Pipeline Usage Tutorial
+
+## 1. Introduction to the General Image Recognition Pipeline
+
+The general image recognition pipeline targets open-domain object localization and recognition. Currently, PaddleX's general image recognition pipeline supports PP-ShiTuV2.
+
+PP-ShiTuV2 is a practical general image recognition system composed of three modules: mainbody detection, feature learning, and vector retrieval. The system integrates and improves a range of strategies across backbone selection and tuning, loss function choice, data augmentation, learning-rate schedules, regularization, use of pretrained models, and model pruning and quantization, optimizing each module and achieving strong retrieval performance in many real-world application scenarios.
+
+![](https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/pipelines/general_image_recognition/pp_shitu_v2.jpg)
+
+**The general image recognition pipeline includes a mainbody detection module and an image feature module**, each offering several models to choose from; you can select a model based on the benchmark data below. **If you prioritize accuracy, choose a model with higher accuracy; if you prioritize inference speed, choose a faster model; if you prioritize model size, choose a smaller model.**
+
+<details>
+   <summary> 👉Model List Details</summary>
+
+**Mainbody Detection Module:**
+
+<table>
+  <tr>
+    <th>Model</th>
+    <th>mAP(0.5:0.95)</th>
+    <th>mAP(0.5)</th>
+    <th>GPU Inference Time (ms)</th>
+    <th>CPU Inference Time (ms)</th>
+    <th>Model Size (M)</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>PP-ShiTuV2_det</td>
+    <td>41.5</td>
+    <td>62.0</td>
+    <td>33.7</td>
+    <td>537.0</td>
+    <td>27.54</td>
+    <td>A mainbody detection model based on PicoDet_LCNet_x2_5 that may detect multiple common objects in a single image.</td>
+  </tr>
+</table>
+
+Note: The accuracy metrics above are measured on the PaddleClas mainbody detection dataset.
+
+**Image Feature Module:**
+
+
+<table>
+  <tr>
+    <th>Model</th>
+    <th>recall@1 (%)</th>
+    <th>GPU Inference Time (ms)</th>
+    <th>CPU Inference Time (ms)</th>
+    <th>Model Size (M)</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>PP-ShiTuV2_rec</td>
+    <td>84.2</td>
+    <td>5.23428</td>
+    <td>19.6005</td>
+    <td>16.3 M</td>
+    <td rowspan="3">PP-ShiTuV2 is a general image feature system consisting of mainbody detection, feature extraction, and vector retrieval modules. These models are among the options for its feature extraction module; choose one according to your system's needs.</td>
+  </tr>
+  <tr>
+    <td>PP-ShiTuV2_rec_CLIP_vit_base</td>
+    <td>88.69</td>
+    <td>13.1957</td>
+    <td>285.493</td>
+    <td>306.6 M</td>
+  </tr>
+  <tr>
+    <td>PP-ShiTuV2_rec_CLIP_vit_large</td>
+    <td>91.03</td>
+    <td>51.1284</td>
+    <td>1131.28</td>
+    <td>1.05 G</td>
+  </tr>
+</table>
+
+Note: The accuracy metrics above are recall@1 on AliProducts. All GPU inference times are measured on an NVIDIA Tesla T4 with FP32 precision; CPU inference times are measured on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.
+
+</details>
+
+## 2. Quick Start
+
+All pretrained model pipelines provided by PaddleX can be tried out quickly; you can experience the general image recognition pipeline locally using Python.
+
+### 2.1 Online Experience
+
+Online experience is not yet supported.
+
+### 2.2 Local Experience
+
+> ❗ Before using the general image recognition pipeline locally, make sure you have installed the PaddleX wheel package following the [PaddleX Installation Tutorial](../../../installation/installation.md).
+
+#### 2.2.1 Command-Line Experience
+
+This pipeline does not yet support command-line experience.
+
+The built-in general image recognition pipeline configuration file is used by default. If you need a custom configuration file, you can obtain one with the following command:
+
+<details>
+   <summary> 👉Click to expand</summary>
+
+```bash
+paddlex --get_pipeline_config PP-ShiTuV2
+```
+
+After execution, the general image recognition pipeline configuration file will be saved to the current directory. To customize the save location, run the following command (assuming the custom save location is `./my_path`):
+
+```bash
+paddlex --get_pipeline_config PP-ShiTuV2 --save_path ./my_path
+```
+
+</details>
+
+#### 2.2.2 Python Script Integration
+
+* The examples for this pipeline require a pre-built feature index. You can download the official drink recognition test dataset [drink_dataset_v2.0](https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v2.0.tar) and use it to build the feature index. To use a private dataset instead, see [Section 2.3 Data Organization for Building the Feature Gallery](#23-data-organization-for-building-the-feature-gallery). After that, a few lines of code are enough to build the feature index and run fast inference with the general image recognition pipeline.
+
+```python
+from paddlex import create_pipeline
+
+pipeline = create_pipeline(pipeline="PP-ShiTuV2")
+
+pipeline.build_index(data_root="drink_dataset_v2.0/gallery/", index_dir="index_dir")
+
+output = pipeline.predict("./drink_dataset_v2.0/test_images/", index_dir="index_dir")
+for res in output:
+    res.print()
+    res.save_to_img("./output/")
+```
+
+In the Python script above, the following steps are performed:
+
+(1) Call `create_pipeline` to instantiate the general image recognition pipeline object. The parameters are described below:
+
+| Parameter | Description | Type | Default |
+| - | - | - | - |
+| `pipeline` | Pipeline name or path to a pipeline configuration file. If a name, it must be a pipeline supported by PaddleX. | `str` | None |
+| `index_dir` | Directory containing the retrieval index used for pipeline inference. If this parameter is omitted, `index_dir` must be specified in `predict()`. | `str` | `None` |
+| `device` | Inference device for the pipeline models. Supports: "gpu", "cpu". | `str` | `gpu` |
+| `use_hpip` | Whether to enable high-performance inference; only available if the pipeline supports it. | `bool` | `False` |
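+
+For instance, the following minimal sketch (using only the parameters in the table above) preloads the retrieval index at construction time and runs inference on CPU; it assumes an index has already been built in `index_dir`:
+
+```python
+from paddlex import create_pipeline
+
+# Load the retrieval index at construction time and run inference on CPU
+pipeline = create_pipeline(
+    pipeline="PP-ShiTuV2",
+    index_dir="index_dir",
+    device="cpu",
+)
+```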
+
+(2) Call the `build_index` method of the pipeline object to build the feature index. The parameters are described below:
+
+| Parameter | Description | Type | Default |
+| - | - | - | - |
+| `data_root` | Root directory of the dataset; for the expected layout, see [Section 2.3 Data Organization for Building the Feature Gallery](#23-data-organization-for-building-the-feature-gallery). | `str` | None |
+| `index_dir` | Save path of the feature index. After `build_index` succeeds, two files are created in this directory: `id_map.pkl`, which stores the mapping between image IDs and image labels, and `vector.index`, which stores the feature vector of each image. | `str` | None |
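+
+After a successful call, the index directory should contain exactly these two files; a quick sanity check (a sketch, assuming the `index_dir` used above):
+
+```python
+from pathlib import Path
+
+index_dir = Path("index_dir")
+# Both files are written by build_index()
+assert (index_dir / "vector.index").exists()
+assert (index_dir / "id_map.pkl").exists()
+```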
+
+(3) Call the `predict` method of the pipeline object for inference: the `input` parameter of `predict` accepts the data to be predicted and supports several input types, as shown below:
+
+| Parameter Type | Description |
+| - | - |
+| Python Var | A Python variable, such as image data represented as a `numpy.ndarray`. |
+| str | A local path to a data file, e.g. the local path of an image file: `/root/data/img.jpg`. |
+| str | A URL of a data file, e.g. the web URL of an image file: [example](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/yuanqisenlin.jpeg). |
+| str | A local directory containing the data files to predict, e.g. `/root/data/`. |
+| dict | A dictionary whose keys must match the task, e.g. `"img"` for image classification; the values support all the types above, e.g. `{"img": "/root/data1"}`. |
+| list | A list whose elements are any of the types above, e.g. `[numpy.ndarray, numpy.ndarray]`, `["/root/data/img1.jpg", "/root/data/img2.jpg"]`, `["/root/data1", "/root/data2"]`, `[{"img": "/root/data1"}, {"img": "/root/data2/img.jpg"}]`. |
+
+In addition, `predict` supports an `index_dir` parameter for selecting the retrieval index:
+
+| Parameter | Description | Type | Default |
+| - | - | - | - |
+| `index_dir` | Directory containing the retrieval index used for inference. If this parameter is omitted, the index specified via `index_dir` in `create_pipeline()` is used by default. | `str` | `None` |
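+
+For instance, a list input can mix any of the supported forms; the sketch below (assuming the drink dataset and the index built earlier) passes two image paths and overrides the index at prediction time:
+
+```python
+from paddlex import create_pipeline
+
+pipeline = create_pipeline(pipeline="PP-ShiTuV2")
+
+# A list input: two local image paths (hypothetical file names)
+inputs = [
+    "./drink_dataset_v2.0/test_images/100.jpeg",
+    "./drink_dataset_v2.0/test_images/001.jpeg",
+]
+
+# index_dir given here takes precedence over the one passed to create_pipeline()
+for res in pipeline.predict(inputs, index_dir="index_dir"):
+    res.print()
+```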
+
+(4) Obtain prediction results by calling `predict`: the `predict` method is a `generator`, so results are obtained by iterating over it; `predict` processes the data in batches.
+
+(5) Process the prediction results: the result of each sample is of type `dict`, and supports printing or saving to file; the supported save formats depend on the pipeline, for example:
+
+| Method | Description | Parameters |
+| - | - | - |
+| print | Print results to the terminal | `- format_json`: bool, whether to pretty-print the output as indented JSON, default True;<br>`- indent`: int, JSON indentation, effective only when format_json is True, default 4;<br>`- ensure_ascii`: bool, JSON escaping setting, effective only when format_json is True, default False; |
+| save_to_json | Save results as a JSON file | `- save_path`: str, save path; when it is a directory, saved files are named after the input files;<br>`- indent`: int, JSON indentation, default 4;<br>`- ensure_ascii`: bool, JSON escaping setting, default False; |
+| save_to_img | Save results as an image file | `- save_path`: str, save path; when it is a directory, saved files are named after the input files; |
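+
+For example, each result from the pipeline above can be printed and persisted in both supported formats (a sketch; when `save_path` is a directory, saved files are named after the inputs):
+
+```python
+for res in output:
+    # Pretty-print to the terminal with 2-space JSON indentation
+    res.print(format_json=True, indent=2)
+    # Save the result as JSON and as an annotated image
+    res.save_to_json("./output/")
+    res.save_to_img("./output/")
+```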
+
+If you have obtained the configuration file, you can customize all settings of the general image recognition pipeline: simply set the `pipeline` parameter of `create_pipeline` to the path of the configuration file.
+
+For example, if your configuration file is saved at `./my_path/PP-ShiTuV2.yaml`, you only need to run:
+
+```python
+from paddlex import create_pipeline
+pipeline = create_pipeline(pipeline="./my_path/PP-ShiTuV2.yaml", index_dir="index_dir")
+
+output = pipeline.predict("./drink_dataset_v2.0/test_images/")
+for res in output:
+    res.print()
+    res.save_to_img("./output/")
+```
+
+
+#### 2.2.3 Adding and Removing Features in the Index
+
+To add more images to the feature index, call the `append_index` method; to remove image features, call the `remove_index` method.
+
+```python
+from paddlex import create_pipeline
+
+pipeline = create_pipeline("PP-ShiTuV2")
+pipeline.build_index(data_root="drink_dataset_v2.0/gallery/", index_dir="index_dir", index_type="IVF")
+pipeline.append_index(data_root="drink_dataset_v2.0/gallery/", index_dir="index_dir", index_type="IVF")
+pipeline.remove_index(data_root="drink_dataset_v2.0/gallery/", index_dir="index_dir", index_type="IVF")
+```
+
+The parameters of the methods above are described below:
+
+| Parameter | Description | Type | Default |
+| - | - | - | - |
+| `data_root` | Root directory of the dataset to add. The layout is the same as when building the index; see [Section 2.3 Data Organization for Building the Feature Gallery](#23-data-organization-for-building-the-feature-gallery). | `str` | None |
+| `index_dir` | Storage directory of the feature index; in `append_index` and `remove_index`, it is also the path of the index being modified (or pruned). | `str` | None |
+| `index_type` | Supports `HNSW32`, `IVF`, and `Flat`. `HNSW32` offers fast retrieval and high accuracy but does not support the `remove_index()` operation; `IVF` offers fast retrieval with relatively lower accuracy and supports `append_index()` and `remove_index()`; `Flat` offers slower retrieval with high accuracy and supports `append_index()` and `remove_index()`. | `str` | `HNSW32` |
+| `metric_type` | Supports `IP` (Inner Product) and `L2` (Euclidean Distance). | `str` | `IP` |
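+
+Note that the example above passes `index_type="IVF"` precisely because the default `HNSW32` cannot remove entries. As another sketch using only the parameters in the table above, a brute-force index with Euclidean distance also supports later additions and removals:
+
+```python
+from paddlex import create_pipeline
+
+pipeline = create_pipeline("PP-ShiTuV2")
+# Flat + L2: exact (brute-force) search with Euclidean distance,
+# compatible with both append_index() and remove_index()
+pipeline.build_index(
+    data_root="drink_dataset_v2.0/gallery/",
+    index_dir="index_dir_flat",
+    index_type="Flat",
+    metric_type="L2",
+)
+```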
+
+### 2.3 Data Organization for Building the Feature Gallery
+
+The general image recognition pipeline example in PaddleX requires a pre-built feature index for retrieval. To build a feature index from private data, organize the data as follows:
+
+```bash
+data_root             # Root directory of the dataset; the directory name can be changed
+├── images            # Directory holding the images; the directory name can be changed
+│   │   ...
+└── gallery.txt       # Annotation file of the gallery dataset; the file name cannot be changed. Each line gives the path of a gallery image and its label, separated by a tab (`\t`), e.g.: "0/0.jpg 脉动"
+```
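+
+If your private images are organized as one subdirectory per class, a small helper like the following (a sketch under that assumption, not part of PaddleX) can generate `gallery.txt`, using each subdirectory name as the label:
+
+```python
+from pathlib import Path
+
+data_root = Path("data_root")
+image_dir = data_root / "images"
+
+# Assumes images/<label>/<file>.jpg; writes "relative/path<TAB>label" per line
+with open(data_root / "gallery.txt", "w", encoding="utf-8") as f:
+    for img_path in sorted(image_dir.rglob("*.jpg")):
+        label = img_path.parent.name
+        rel_path = img_path.relative_to(data_root).as_posix()
+        f.write(f"{rel_path}\t{label}\n")
+```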
+
+## 3. Development Integration/Deployment
+
+If the general image recognition pipeline meets your requirements for inference speed and accuracy, you can proceed directly to development integration/deployment.
+
+To apply the general image recognition pipeline directly in your Python project, refer to the sample code in [2.2.2 Python Script Integration](#222-python-script-integration).
+
+In addition, PaddleX provides three other deployment options, described in detail below:
+
+🚀 **High-Performance Inference**: In real production environments, many applications have strict performance requirements (especially response latency) to ensure efficient operation and a smooth user experience. To this end, PaddleX provides a high-performance inference plugin that deeply optimizes model inference and pre/post-processing for significant end-to-end speedups. For details, see the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_inference.md).
+
+☁️ **Service Deployment**: Service deployment is a common form of deployment in production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. PaddleX enables low-cost service deployment of pipelines; for details, see the [PaddleX Service Deployment Guide](../../../pipeline_deploy/service_deploy.md).
+
+Below are the API reference and multi-language service invocation examples:
+
+<details>
+<summary>API Reference</summary>
+
+For all operations provided by the service:
+
+- The response body, as well as the request body of POST requests, is JSON data (a JSON object).
+- When a request is processed successfully, the response status code is `200`, and the response body has the following properties:
+
+    |Name|Type|Meaning|
+    |-|-|-|
+    |`errorCode`|`integer`|Error code. Fixed at `0`.|
+    |`errorMsg`|`string`|Error message. Fixed at `"Success"`.|
+
+    The response body may also have a `result` property of type `object`, which stores the operation result.
+
+- When a request is not processed successfully, the response body has the following properties:
+
+    |Name|Type|Meaning|
+    |-|-|-|
+    |`errorCode`|`integer`|Error code. Same as the response status code.|
+    |`errorMsg`|`string`|Error message.|
+
+The operations provided by the service are as follows:
+
+- **`infer`**
+
+    Obtain OCR results for an image.
+
+    `POST /ocr`
+
+    - The request body has the following properties:
+
+        |Name|Type|Meaning|Required|
+        |-|-|-|-|
+        |`image`|`string`|URL of an image file accessible to the service, or the Base64-encoded content of an image file.|Yes|
+        |`inferenceParams`|`object`|Inference parameters.|No|
+
+        The properties of `inferenceParams` are as follows:
+
+        |Name|Type|Meaning|Required|
+        |-|-|-|-|
+        |`maxLongSide`|`integer`|During inference, if the longer side of the input image to the text detection model is greater than `maxLongSide`, the image is scaled so that its longer side equals `maxLongSide`.|No|
+
+    - When the request is processed successfully, the `result` of the response body has the following properties:
+
+        |Name|Type|Meaning|
+        |-|-|-|
+        |`texts`|`array`|Text positions, contents, and scores.|
+        |`image`|`string`|OCR result image with the detected text positions annotated. The image is in JPEG format, Base64-encoded.|
+
+        Each element of `texts` is an `object` with the following properties:
+
+        |Name|Type|Meaning|
+        |-|-|-|
+        |`poly`|`array`|Text position. The elements of the array are, in order, the vertex coordinates of the polygon enclosing the text.|
+        |`text`|`string`|Text content.|
+        |`score`|`number`|Text recognition score.|
+
+        An example of `result`:
+
+        ```json
+        {
+          "texts": [
+            {
+              "poly": [
+                [
+                  444,
+                  244
+                ],
+                [
+                  705,
+                  244
+                ],
+                [
+                  705,
+                  311
+                ],
+                [
+                  444,
+                  311
+                ]
+              ],
+              "text": "北京南站",
+              "score": 0.9
+            },
+            {
+              "poly": [
+                [
+                  992,
+                  248
+                ],
+                [
+                  1263,
+                  251
+                ],
+                [
+                  1263,
+                  318
+                ],
+                [
+                  992,
+                  315
+                ]
+              ],
+              "text": "天津站",
+              "score": 0.5
+            }
+          ],
+          "image": "xxxxxx"
+        }
+        ```
+
+</details>
+
+<details>
+<summary>Multi-Language Service Invocation Examples</summary>
+
+<details>
+<summary>Python</summary>
+
+```python
+import base64
+import requests
+
+API_URL = "http://localhost:8080/ocr" # Service URL
+image_path = "./demo.jpg"
+output_image_path = "./out.jpg"
+
+# Base64-encode the local image
+with open(image_path, "rb") as file:
+    image_bytes = file.read()
+    image_data = base64.b64encode(image_bytes).decode("ascii")
+
+payload = {"image": image_data}  # Base64-encoded file content or image URL
+
+# Call the API
+response = requests.post(API_URL, json=payload)
+
+# Process the returned data
+assert response.status_code == 200
+result = response.json()["result"]
+with open(output_image_path, "wb") as file:
+    file.write(base64.b64decode(result["image"]))
+print(f"Output image saved at {output_image_path}")
+print("\nDetected texts:")
+print(result["texts"])
+```
+
+</details>
+
+<details>
+<summary>C++</summary>
+
+```cpp
+#include <iostream>
+#include <fstream>
+#include <string>
+#include <vector>
+#include "cpp-httplib/httplib.h" // https://github.com/Huiyicc/cpp-httplib
+#include "nlohmann/json.hpp" // https://github.com/nlohmann/json
+#include "base64.hpp" // https://github.com/tobiaslocker/base64
+
+int main() {
+    httplib::Client client("localhost:8080");
+    const std::string imagePath = "./demo.jpg";
+    const std::string outputImagePath = "./out.jpg";
+
+    httplib::Headers headers = {
+        {"Content-Type", "application/json"}
+    };
+
+    // Base64-encode the local image
+    std::ifstream file(imagePath, std::ios::binary | std::ios::ate);
+    std::streamsize size = file.tellg();
+    file.seekg(0, std::ios::beg);
+
+    std::vector<char> buffer(size);
+    if (!file.read(buffer.data(), size)) {
+        std::cerr << "Error reading file." << std::endl;
+        return 1;
+    }
+    std::string bufferStr(reinterpret_cast<const char*>(buffer.data()), buffer.size());
+    std::string encodedImage = base64::to_base64(bufferStr);
+
+    nlohmann::json jsonObj;
+    jsonObj["image"] = encodedImage;
+    std::string body = jsonObj.dump();
+
+    // Call the API
+    auto response = client.Post("/ocr", headers, body, "application/json");
+    // Process the returned data
+    if (response && response->status == 200) {
+        nlohmann::json jsonResponse = nlohmann::json::parse(response->body);
+        auto result = jsonResponse["result"];
+
+        encodedImage = result["image"];
+        std::string decodedString = base64::from_base64(encodedImage);
+        std::vector<unsigned char> decodedImage(decodedString.begin(), decodedString.end());
+        std::ofstream outputImage(outputImagePath, std::ios::binary | std::ios::out);
+        if (outputImage.is_open()) {
+            outputImage.write(reinterpret_cast<char*>(decodedImage.data()), decodedImage.size());
+            outputImage.close();
+            std::cout << "Output image saved at " << outputImagePath << std::endl;
+        } else {
+            std::cerr << "Unable to open file for writing: " << outputImagePath << std::endl;
+        }
+
+        auto texts = result["texts"];
+        std::cout << "\nDetected texts:" << std::endl;
+        for (const auto& text : texts) {
+            std::cout << text << std::endl;
+        }
+    } else {
+        std::cout << "Failed to send HTTP request." << std::endl;
+        return 1;
+    }
+
+    return 0;
+}
+```
+
+</details>
+
+<details>
+<summary>Java</summary>
+
+```java
+import okhttp3.*;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.node.ObjectNode;
+
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.util.Base64;
+
+public class Main {
+    public static void main(String[] args) throws IOException {
+        String API_URL = "http://localhost:8080/ocr"; // Service URL
+        String imagePath = "./demo.jpg"; // Local image
+        String outputImagePath = "./out.jpg"; // Output image
+
+        // Base64-encode the local image
+        File file = new File(imagePath);
+        byte[] fileContent = java.nio.file.Files.readAllBytes(file.toPath());
+        String imageData = Base64.getEncoder().encodeToString(fileContent);
+
+        ObjectMapper objectMapper = new ObjectMapper();
+        ObjectNode params = objectMapper.createObjectNode();
+        params.put("image", imageData); // Base64-encoded file content or image URL
+
+        // Create an OkHttpClient instance
+        OkHttpClient client = new OkHttpClient();
+        MediaType JSON = MediaType.Companion.get("application/json; charset=utf-8");
+        RequestBody body = RequestBody.Companion.create(params.toString(), JSON);
+        Request request = new Request.Builder()
+                .url(API_URL)
+                .post(body)
+                .build();
+
+        // Call the API and process the returned data
+        try (Response response = client.newCall(request).execute()) {
+            if (response.isSuccessful()) {
+                String responseBody = response.body().string();
+                JsonNode resultNode = objectMapper.readTree(responseBody);
+                JsonNode result = resultNode.get("result");
+                String base64Image = result.get("image").asText();
+                JsonNode texts = result.get("texts");
+
+                byte[] imageBytes = Base64.getDecoder().decode(base64Image);
+                try (FileOutputStream fos = new FileOutputStream(outputImagePath)) {
+                    fos.write(imageBytes);
+                }
+                System.out.println("Output image saved at " + outputImagePath);
+                System.out.println("\nDetected texts: " + texts.toString());
+            } else {
+                System.err.println("Request failed with code: " + response.code());
+            }
+        }
+    }
+}
+```
+
+</details>
+
+<details>
+<summary>Go</summary>
+
+```go
+package main
+
+import (
+    "bytes"
+    "encoding/base64"
+    "encoding/json"
+    "fmt"
+    "io/ioutil"
+    "net/http"
+)
+
+func main() {
+    API_URL := "http://localhost:8080/ocr"
+    imagePath := "./demo.jpg"
+    outputImagePath := "./out.jpg"
+
+    // Base64-encode the local image
+    imageBytes, err := ioutil.ReadFile(imagePath)
+    if err != nil {
+        fmt.Println("Error reading image file:", err)
+        return
+    }
+    imageData := base64.StdEncoding.EncodeToString(imageBytes)
+
+    payload := map[string]string{"image": imageData} // Base64-encoded file content or image URL
+    payloadBytes, err := json.Marshal(payload)
+    if err != nil {
+        fmt.Println("Error marshaling payload:", err)
+        return
+    }
+
+    // Call the API
+    client := &http.Client{}
+    req, err := http.NewRequest("POST", API_URL, bytes.NewBuffer(payloadBytes))
+    if err != nil {
+        fmt.Println("Error creating request:", err)
+        return
+    }
+    req.Header.Set("Content-Type", "application/json")
+
+    res, err := client.Do(req)
+    if err != nil {
+        fmt.Println("Error sending request:", err)
+        return
+    }
+    defer res.Body.Close()
+
+    // Process the returned data
+    body, err := ioutil.ReadAll(res.Body)
+    if err != nil {
+        fmt.Println("Error reading response body:", err)
+        return
+    }
+    type Response struct {
+        Result struct {
+            Image string                   `json:"image"`
+            Texts []map[string]interface{} `json:"texts"`
+        } `json:"result"`
+    }
+    var respData Response
+    err = json.Unmarshal(body, &respData)
+    if err != nil {
+        fmt.Println("Error unmarshaling response body:", err)
+        return
+    }
+
+    outputImageData, err := base64.StdEncoding.DecodeString(respData.Result.Image)
+    if err != nil {
+        fmt.Println("Error decoding base64 image data:", err)
+        return
+    }
+    err = ioutil.WriteFile(outputImagePath, outputImageData, 0644)
+    if err != nil {
+        fmt.Println("Error writing image to file:", err)
+        return
+    }
+    fmt.Printf("Image saved at %s\n", outputImagePath)
+    fmt.Println("\nDetected texts:")
+    for _, text := range respData.Result.Texts {
+        fmt.Println(text)
+    }
+}
+```
+
+</details>
+
+<details>
+<summary>C#</summary>
+
+```csharp
+using System;
+using System.IO;
+using System.Net.Http;
+using System.Net.Http.Headers;
+using System.Text;
+using System.Threading.Tasks;
+using Newtonsoft.Json.Linq;
+
+class Program
+{
+    static readonly string API_URL = "http://localhost:8080/ocr";
+    static readonly string imagePath = "./demo.jpg";
+    static readonly string outputImagePath = "./out.jpg";
+
+    static async Task Main(string[] args)
+    {
+        var httpClient = new HttpClient();
+
+        // Base64-encode the local image
+        byte[] imageBytes = File.ReadAllBytes(imagePath);
+        string image_data = Convert.ToBase64String(imageBytes);
+
+        var payload = new JObject{ { "image", image_data } }; // Base64-encoded file content or image URL
+        var content = new StringContent(payload.ToString(), Encoding.UTF8, "application/json");
+
+        // Call the API
+        HttpResponseMessage response = await httpClient.PostAsync(API_URL, content);
+        response.EnsureSuccessStatusCode();
+
+        // Process the returned data
+        string responseBody = await response.Content.ReadAsStringAsync();
+        JObject jsonResponse = JObject.Parse(responseBody);
+
+        string base64Image = jsonResponse["result"]["image"].ToString();
+        byte[] outputImageBytes = Convert.FromBase64String(base64Image);
+
+        File.WriteAllBytes(outputImagePath, outputImageBytes);
+        Console.WriteLine($"Output image saved at {outputImagePath}");
+        Console.WriteLine("\nDetected texts:");
+        Console.WriteLine(jsonResponse["result"]["texts"].ToString());
+    }
+}
+```
+
+</details>
+
+<details>
+<summary>Node.js</summary>
+
+```js
+const axios = require('axios');
+const fs = require('fs');
+
+const API_URL = 'http://localhost:8080/ocr';
+const imagePath = './demo.jpg';
+const outputImagePath = './out.jpg';
+
+let config = {
+   method: 'POST',
+   maxBodyLength: Infinity,
+   url: API_URL,
+   data: JSON.stringify({
+    'image': encodeImageToBase64(imagePath)  // Base64-encoded file content or image URL
+  })
+};
+
+// Base64-encode the local image
+function encodeImageToBase64(filePath) {
+  const bitmap = fs.readFileSync(filePath);
+  return Buffer.from(bitmap).toString('base64');
+}
+
+// Call the API
+axios.request(config)
+.then((response) => {
+    // Process the returned data
+    const result = response.data["result"];
+    const imageBuffer = Buffer.from(result["image"], 'base64');
+    fs.writeFile(outputImagePath, imageBuffer, (err) => {
+      if (err) throw err;
+      console.log(`Output image saved at ${outputImagePath}`);
+    });
+    console.log("\nDetected texts:");
+    console.log(result["texts"]);
+})
+.catch((error) => {
+  console.log(error);
+});
+```
+
+</details>
+
+<details>
+<summary>PHP</summary>
+
+```php
+<?php
+
+$API_URL = "http://localhost:8080/ocr"; // Service URL
+$image_path = "./demo.jpg";
+$output_image_path = "./out.jpg";
+
+// Base64-encode the local image
+$image_data = base64_encode(file_get_contents($image_path));
+$payload = array("image" => $image_data); // Base64-encoded file content or image URL
+
+// Call the API
+$ch = curl_init($API_URL);
+curl_setopt($ch, CURLOPT_POST, true);
+curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($payload));
+curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
+$response = curl_exec($ch);
+curl_close($ch);
+
+// Process the returned data
+$result = json_decode($response, true)["result"];
+file_put_contents($output_image_path, base64_decode($result["image"]));
+echo "Output image saved at " . $output_image_path . "\n";
+echo "\nDetected texts:\n";
+print_r($result["texts"]);
+
+?>
+```
+
+</details>
+</details>
+<br/>
+
+📱 **Edge Deployment**: Edge deployment places compute and data processing on the user's device itself, so the device processes data directly without relying on a remote server. PaddleX supports deploying models on edge devices such as Android; for details, see the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
+
+You can choose an appropriate deployment method for your needs and proceed with subsequent AI application integration.
+
+
+## 4. Secondary Development
+
+If the default model weights of the general image recognition pipeline do not meet your accuracy or speed requirements in your scenario, you can try to further **fine-tune** the existing models using **your own data from the specific domain or application scenario** to improve the pipeline's recognition performance in your scenario.
+
+### 4.1 Model Fine-Tuning
+
+Since the general image recognition pipeline contains two modules (mainbody detection and image feature), subpar pipeline performance may stem from either of them.
+
+You can analyze the images with poor recognition results. If you find during the analysis that many target objects go undetected, the mainbody detection model may be inadequate: refer to the [Secondary Development](../../../module_usage/tutorials/cv_modules/object_detection.md#四二次开发) section of the [Object Detection Module Development Tutorial](../../../module_usage/tutorials/cv_modules/object_detection.md) and fine-tune the mainbody detection model with your private dataset. If matching errors occur on the detected objects, the image feature model needs further improvement: refer to the [Secondary Development](../../../module_usage/tutorials/cv_modules/image_feature.md#四二次开发) section of the [Image Feature Module Development Tutorial](../../../module_usage/tutorials/cv_modules/image_feature.md) to fine-tune the image feature model.
+
+### 4.2 Model Application
+
+After you finish fine-tuning with your private dataset, you will obtain local model weight files.
+
+To use the fine-tuned model weights, simply modify the pipeline configuration file, replacing the corresponding paths with the local paths of your fine-tuned model weights:
+
+```yaml
+Pipeline:
+  device: "gpu:0"
+  det_model: "./PP-ShiTuV2_det_infer/"        # Can be changed to the local path of the fine-tuned mainbody detection model
+  rec_model: "./PP-ShiTuV2_rec_infer/"        # Can be changed to the local path of the fine-tuned image feature model
+  det_batch_size: 1
+  rec_batch_size: 1
+```
+
+Then, load the modified pipeline configuration file following the command-line or Python script approach in [2.2 Local Experience](#22-local-experience).
+
+## 5. Multi-Hardware Support
+
+PaddleX supports a variety of mainstream hardware devices, including NVIDIA GPUs, Kunlunxin XPUs, Ascend NPUs, and Cambricon MLUs. **Simply modifying the `--device` parameter** enables seamless switching between different hardware.
+
+For example, when running the general image recognition pipeline with Python, to switch the device from an NVIDIA GPU to an Ascend NPU, simply change `device` in the script to npu:
+
+```python
+from paddlex import create_pipeline
+
+pipeline = create_pipeline(
+    pipeline="PP-ShiTuV2",
+    device="npu:0" # gpu:0 --> npu:0
+    )
+```
+
+To use the general image recognition pipeline on more kinds of hardware, see the [PaddleX Multi-Device Usage Guide](../../../other_devices_support/multi_devices_use_guide.md).

+ 3 - 0
docs/pipeline_usage/tutorials/cv_pipelines/general_image_recognition_en.md

@@ -0,0 +1,3 @@
+[简体中文](general_image_recognition.md) | English
+
+Coming Soon

+ 1 - 0
paddlex/inference/components/__init__.py

@@ -15,3 +15,4 @@
 from .transforms import *
 from .paddle_predictor import *
 from .task_related import *
+from .retrieval import *

+ 15 - 0
paddlex/inference/components/retrieval/__init__.py

@@ -0,0 +1,15 @@
+# copyright (c) 2024 PaddlePaddle Authors. All Rights Reserve.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from .faiss import FaissIndexer

+ 278 - 0
paddlex/inference/components/retrieval/faiss.py

@@ -0,0 +1,278 @@
+# copyright (c) 2024 PaddlePaddle Authors. All Rights Reserve.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import pickle
+from pathlib import Path
+import faiss
+import numpy as np
+
+from ....utils import logging
+from ..base import BaseComponent
+
+
+class FaissIndexer(BaseComponent):
+
+    INPUT_KEYS = "feature"
+    OUTPUT_KEYS = ["label", "score"]
+    DEAULT_INPUTS = {"feature": "feature"}
+    DEAULT_OUTPUTS = {"label": "label", "score": "score"}
+
+    ENABLE_BATCH = True
+
+    def __init__(
+        self,
+        index_dir,
+        metric_type="IP",
+        return_k=1,
+        score_thres=None,
+        hamming_radius=None,
+    ):
+        super().__init__()
+        index_dir = Path(index_dir)
+        vector_path = (index_dir / "vector.index").as_posix()
+        id_map_path = (index_dir / "id_map.pkl").as_posix()
+
+        if metric_type == "hamming":
+            self._indexer = faiss.read_index_binary(vector_path)
+            self.hamming_radius = hamming_radius
+        else:
+            self._indexer = faiss.read_index(vector_path)
+            self.score_thres = score_thres
+        with open(id_map_path, "rb") as fd:
+            self.id_map = pickle.load(fd)
+        self.metric_type = metric_type
+        self.return_k = return_k
+
+    def apply(self, feature):
+        """Search the index for each feature and map the returned ids to labels."""
+        scores_list, ids_list = self._indexer.search(np.array(feature), self.return_k)
+        preds = []
+        for scores, ids in zip(scores_list, ids_list):
+            labels = []
+            for id in ids:
+                # faiss returns -1 for missing neighbors; id 0 is a valid entry
+                if id >= 0:
+                    labels.append(self.id_map[id])
+            preds.append({"score": scores, "label": labels})
+
+        # drop results whose top-1 score fails the threshold
+        if self.metric_type == "hamming":
+            idxs = np.where(scores_list[:, 0] > self.hamming_radius)[0]
+        else:
+            idxs = np.where(scores_list[:, 0] < self.score_thres)[0]
+        for idx in idxs:
+            preds[idx] = {"score": None, "label": None}
+        return preds
+
+
+class FaissBuilder:
+
+    SUPPORT_MODE = ("new", "remove", "append")
+    SUPPORT_METRIC_TYPE = ("hamming", "IP", "L2")
+    SUPPORT_INDEX_TYPE = ("Flat", "IVF", "HNSW32")
+    BINARY_METRIC_TYPE = ("hamming",)
+    BINARY_SUPPORT_INDEX_TYPE = ("Flat", "IVF", "BinaryHash")
+
+    def __init__(self, predict, mode="new", index_type="HNSW32", metric_type="IP"):
+        super().__init__()
+        assert (
+            mode in self.SUPPORT_MODE
+        ), f"Supported modes only: {self.SUPPORT_MODE}. But received {mode}!"
+        assert (
+            metric_type in self.SUPPORT_METRIC_TYPE
+        ), f"Supported metric types only: {self.SUPPORT_METRIC_TYPE}!"
+        assert (
+            index_type in self.SUPPORT_INDEX_TYPE
+        ), f"Supported index types only: {self.SUPPORT_INDEX_TYPE}!"
+
+        self._predict = predict
+        self._mode = mode
+        self._metric_type = metric_type
+        self._index_type = index_type
+
+    def _get_index_type(self, num=None):
+        # for IVF, compute the number of centroids automatically
+        if self._index_type == "IVF":
+            index_type = self._index_type + str(min(int(num // 8), 65536))
+            if self._metric_type in self.BINARY_METRIC_TYPE:
+                index_type += ",BFlat"
+            else:
+                index_type += ",Flat"
+        elif self._index_type == "HNSW32":
+            logging.warning("The HNSW32 method does not support 'remove' operation")
+            index_type = "HNSW32"
+        elif self._index_type == "Flat":
+            index_type = "Flat"
+
+        # for binary metrics, prefix the index type with "B"
+        if self._metric_type in self.BINARY_METRIC_TYPE:
+            assert (
+                self._index_type in self.BINARY_SUPPORT_INDEX_TYPE
+            ), f"The metric type({self._metric_type}) only support {self.BINARY_SUPPORT_INDEX_TYPE} index types!"
+            index_type = "B" + index_type
+
+        return index_type
+
+    def _get_metric_type(self):
+        if self._metric_type == "hamming":
+            return faiss.METRIC_Hamming
+        elif self._metric_type == "jaccard":
+            return faiss.METRIC_Jaccard
+        elif self._metric_type == "IP":
+            return faiss.METRIC_INNER_PRODUCT
+        elif self._metric_type == "L2":
+            return faiss.METRIC_L2
+
+    def build(
+        self,
+        label_file,
+        image_root,
+        index_dir,
+    ):
+        file_list, gallery_docs = get_file_list(label_file, image_root)
+
+        features = [res["feature"] for res in self._predict(file_list)]
+        dtype = np.uint8 if self._metric_type in self.BINARY_METRIC_TYPE else np.float32
+        features = np.array(features).astype(dtype)
+        vector_num, vector_dim = features.shape
+
+        if self._metric_type in self.BINARY_METRIC_TYPE:
+            index = faiss.index_binary_factory(
+                vector_dim,
+                self._get_index_type(vector_num),
+                self._get_metric_type(),
+            )
+        else:
+            index = faiss.index_factory(
+                vector_dim,
+                self._get_index_type(vector_num),
+                self._get_metric_type(),
+            )
+            index = faiss.IndexIDMap2(index)
+        ids = {}
+
+        # calculate id for new data
+        index, ids = self._add_gallery(index, ids, features, gallery_docs)
+        self._save_gallery(index, ids, index_dir)
+
+    def remove(
+        self,
+        label_file,
+        image_root,
+        index_dir,
+    ):
+        file_list, gallery_docs = get_file_list(label_file, image_root)
+
+        # load vector.index and id_map.pkl
+        index, ids = self._load_index(index_dir)
+
+        if self._index_type == "HNSW32":
+            raise RuntimeError(
+                "The index_type: HNSW32 does not support 'remove' operation"
+            )
+
+        # remove ids in id_map, remove index data in faiss index
+        index, ids = self._rm_id_in_gallery(index, ids, gallery_docs)
+        self._save_gallery(index, ids, index_dir)
+
+    def append(
+        self,
+        label_file,
+        image_root,
+        index_dir,
+    ):
+        file_list, gallery_docs = get_file_list(label_file, image_root)
+        features = [res["feature"] for res in self._predict(file_list)]
+        dtype = np.uint8 if self._metric_type in self.BINARY_METRIC_TYPE else np.float32
+        features = np.array(features).astype(dtype)
+
+        # load vector.index and id_map.pkl
+        index, ids = self._load_index(index_dir)
+
+        # calculate id for new data
+        index, ids = self._add_gallery(index, ids, features, gallery_docs)
+        self._save_gallery(index, ids, index_dir)
+
+    def _load_index(self, index_dir):
+        assert os.path.exists(
+            os.path.join(index_dir, "vector.index")
+        ), "The vector.index does not exist in {} when 'index_operation' is not None".format(
+            index_dir
+        )
+        assert os.path.exists(
+            os.path.join(index_dir, "id_map.pkl")
+        ), "The id_map.pkl does not exist in {} when 'index_operation' is not None".format(
+            index_dir
+        )
+        index = faiss.read_index(os.path.join(index_dir, "vector.index"))
+        with open(os.path.join(index_dir, "id_map.pkl"), "rb") as fd:
+            ids = pickle.load(fd)
+        assert index.ntotal == len(
+            ids.keys()
+        ), "The number of vectors in the index does not match the number of entries in id_map"
+        return index, ids
+
+    def _add_gallery(self, index, ids, gallery_features, gallery_docs):
+        start_id = max(ids.keys()) + 1 if ids else 0
+        ids_now = (np.arange(0, len(gallery_docs)) + start_id).astype(np.int64)
+
+        # only train when new index file
+        if self._mode == "new":
+            if self._metric_type in self.BINARY_METRIC_TYPE:
+                index.add(gallery_features)
+            else:
+                index.train(gallery_features)
+
+        if self._metric_type not in self.BINARY_METRIC_TYPE:
+            index.add_with_ids(gallery_features, ids_now)
+
+        for i, d in zip(list(ids_now), gallery_docs):
+            ids[i] = d
+        return index, ids
+
+    def _rm_id_in_gallery(self, index, ids, gallery_docs):
+        remove_ids = list(filter(lambda k: ids.get(k) in gallery_docs, ids.keys()))
+        remove_ids = np.asarray(remove_ids)
+        index.remove_ids(remove_ids)
+        for k in remove_ids:
+            del ids[k]
+
+        return index, ids
+
+    def _save_gallery(self, index, ids, index_dir):
+        Path(index_dir).mkdir(parents=True, exist_ok=True)
+        if self._metric_type in self.BINARY_METRIC_TYPE:
+            faiss.write_index_binary(index, os.path.join(index_dir, "vector.index"))
+        else:
+            faiss.write_index(index, os.path.join(index_dir, "vector.index"))
+
+        with open(os.path.join(index_dir, "id_map.pkl"), "wb") as fd:
+            pickle.dump(ids, fd)
+
+
+def get_file_list(data_file, root_dir, delimiter="\t"):
+    """Parse a gallery annotation file into parallel lists of file paths and labels."""
+    root_dir = Path(root_dir)
+    files = []
+    labels = []
+    with open(data_file, "r", encoding="utf-8") as f:
+        lines = f.readlines()
+    for line in lines:
+        path, label = line.strip().split(delimiter)
+        file_path = root_dir / path
+        files.append(file_path.as_posix())
+        labels.append(label)
+
+    return files, labels

+ 4 - 4
paddlex/inference/components/task_related/clas.py

@@ -113,12 +113,12 @@ class NormalizeFeatures(BaseComponent):
     """Normalize Features Transform"""
 
     INPUT_KEYS = ["pred"]
-    OUTPUT_KEYS = ["rec_feature"]
+    OUTPUT_KEYS = ["feature"]
     DEAULT_INPUTS = {"pred": "pred"}
-    DEAULT_OUTPUTS = {"rec_feature": "rec_feature"}
+    DEAULT_OUTPUTS = {"feature": "feature"}
 
     def apply(self, pred):
         """apply"""
         feas_norm = np.sqrt(np.sum(np.square(pred[0]), axis=0, keepdims=True))
-        rec_feature = np.divide(pred[0], feas_norm)
-        return {"rec_feature": rec_feature}
+        feature = np.divide(pred[0], feas_norm)
+        return {"feature": feature}

+ 1 - 1
paddlex/inference/models/general_recognition.py

@@ -95,5 +95,5 @@ class ShiTuRecPredictor(BasicPredictor):
         return NormalizeFeatures()
 
     def _pack_res(self, data):
-        keys = ["input_path", "rec_feature"]
+        keys = ["input_path", "feature"]
         return BaseResult({key: data[key] for key in keys})

+ 5 - 1
paddlex/inference/pipelines/__init__.py

@@ -37,6 +37,7 @@ from .table_recognition import TableRecPipeline
 from .seal_recognition import SealOCRPipeline
 from .ppchatocrv3 import PPChatOCRPipeline
 from .layout_parsing import LayoutParsingPipeline
+from .pp_shitu_v2 import ShiTuV2Pipeline
 
 
 def load_pipeline_config(pipeline: str) -> Dict[str, Any]:
@@ -79,7 +80,10 @@ def create_pipeline_from_config(
     elif "pp_option" in pipeline_setting:
         predictor_kwargs["pp_option"] = pipeline_setting.pop("pp_option")
 
-    device = device if device else pipeline_setting.pop("device", None)
+    if device:
+        pipeline_setting.pop("device", None)
+    else:
+        device = pipeline_setting.pop("device", None)
 
     pipeline_setting.update(kwargs)
     pipeline = BasePipeline.get(pipeline_name)(

+ 178 - 0
paddlex/inference/pipelines/pp_shitu_v2.py

@@ -0,0 +1,178 @@
+# copyright (c) 2024 PaddlePaddle Authors. All Rights Reserve.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from pathlib import Path
+import numpy as np
+
+from ..utils.io import ImageReader
+from ..components import CropByBoxes, FaissIndexer
+from ..components.retrieval.faiss import FaissBuilder
+from ..results import ShiTuResult
+from .base import BasePipeline
+
+
+class ShiTuV2Pipeline(BasePipeline):
+    """ShiTuV2 Pipeline"""
+
+    entities = "PP-ShiTuV2"
+
+    def __init__(
+        self,
+        det_model,
+        rec_model,
+        det_batch_size=1,
+        rec_batch_size=1,
+        index_dir=None,
+        metric_type="IP",
+        score_thres=None,
+        hamming_radius=None,
+        return_k=5,
+        device=None,
+        predictor_kwargs=None,
+    ):
+        super().__init__(device, predictor_kwargs)
+        self._build_predictor(det_model, rec_model)
+        self.set_predictor(det_batch_size, rec_batch_size, device)
+        self._metric_type, self._return_k, self._score_thres, self._hamming_radius = (
+            metric_type,
+            return_k,
+            score_thres,
+            hamming_radius,
+        )
+        self._indexer = self._build_indexer(index_dir) if index_dir else None
+
+    def _build_indexer(self, index_dir):
+        return FaissIndexer(
+            index_dir,
+            self._metric_type,
+            self._return_k,
+            self._score_thres,
+            self._hamming_radius,
+        )
+
+    def _build_predictor(self, det_model, rec_model):
+        self.det_model = self._create(model=det_model)
+        self.rec_model = self._create(model=rec_model)
+        self._crop_by_boxes = CropByBoxes()
+        self._img_reader = ImageReader(backend="opencv")
+
+    def set_predictor(self, det_batch_size=None, rec_batch_size=None, device=None):
+        if det_batch_size:
+            self.det_model.set_predictor(batch_size=det_batch_size)
+        if rec_batch_size:
+            self.rec_model.set_predictor(batch_size=rec_batch_size)
+        if device:
+            self.det_model.set_predictor(device=device)
+            self.rec_model.set_predictor(device=device)
+
+    def predict(self, input, index_dir=None, **kwargs):
+        indexer = self._build_indexer(index_dir) if index_dir else self._indexer
+        assert indexer, "No retrieval index is available; pass index_dir to predict() or create_pipeline()."
+        self.set_predictor(**kwargs)
+        for det_res in self.det_model(input):
+            rec_res = self.get_rec_result(det_res, indexer)
+            yield self.get_final_result(det_res, rec_res)
+
+    def get_rec_result(self, det_res, indexer):
+        full_img = self._img_reader.read(det_res["input_path"])
+        h, w = full_img.shape[:2]
+        det_res["boxes"].append(
+            {"cls_id": 0, "label": "full_img", "score": 0, "coordinate": [0, 0, w, h]}
+        )
+        subs_of_img = list(self._crop_by_boxes(det_res))
+        img_list = [img["img"] for img in subs_of_img]
+        all_rec_res = list(self.rec_model(img_list))
+        all_rec_res = next(indexer(all_rec_res))
+        output = {"label": [], "score": []}
+        for res in all_rec_res:
+            output["label"].append(res["label"])
+            output["score"].append(res["score"])
+        return output
+
+    def get_final_result(self, det_res, rec_res):
+        single_img_res = {"input_path": det_res["input_path"], "boxes": []}
+        for i, obj in enumerate(det_res["boxes"]):
+            rec_scores = rec_res["score"][i]
+            labels = rec_res["label"][i]
+            single_img_res["boxes"].append(
+                {
+                    "labels": labels,
+                    "rec_scores": rec_scores,
+                    "det_score": obj["score"],
+                    "coordinate": obj["coordinate"],
+                }
+            )
+        return ShiTuResult(single_img_res)
+
+    def _build_index(
+        self,
+        data_root,
+        index_dir,
+        mode="new",
+        metric_type="IP",
+        index_type="HNSW32",
+        **kwargs,
+    ):
+        self.set_predictor(**kwargs)
+        self._metric_type = metric_type if metric_type else self._metric_type
+        builder = FaissBuilder(
+            self.rec_model.predict,
+            mode=mode,
+            metric_type=self._metric_type,
+            index_type=index_type,
+        )
+        if mode == "new":
+            builder.build(Path(data_root) / "gallery.txt", data_root, index_dir)
+        elif mode == "remove":
+            builder.remove(Path(data_root) / "gallery.txt", data_root, index_dir)
+        elif mode == "append":
+            builder.append(Path(data_root) / "gallery.txt", data_root, index_dir)
+        else:
+            raise ValueError("`mode` only supports `new`, `remove` and `append`.")
+
+    def build_index(
+        self, data_root, index_dir, metric_type="IP", index_type="HNSW32", **kwargs
+    ):
+        self._build_index(
+            data_root=data_root,
+            index_dir=index_dir,
+            mode="new",
+            metric_type=metric_type,
+            index_type=index_type,
+            **kwargs,
+        )
+
+    def remove_index(
+        self, data_root, index_dir, metric_type="IP", index_type="HNSW32", **kwargs
+    ):
+        self._build_index(
+            data_root=data_root,
+            index_dir=index_dir,
+            mode="remove",
+            metric_type=metric_type,
+            index_type=index_type,
+            **kwargs,
+        )
+
+    def append_index(
+        self, data_root, index_dir, metric_type="IP", index_type="HNSW32", **kwargs
+    ):
+        self._build_index(
+            data_root=data_root,
+            index_dir=index_dir,
+            mode="append",
+            metric_type=metric_type,
+            index_type=index_type,
+            **kwargs,
+        )

+ 1 - 0
paddlex/inference/results/__init__.py

@@ -26,3 +26,4 @@ from .instance_seg import InstanceSegResult
 from .ts import TSFcResult, TSAdResult, TSClsResult
 from .warp import DocTrResult
 from .chat_ocr import *
+from .shitu import ShiTuResult

+ 8 - 7
paddlex/inference/results/det.py

@@ -36,18 +36,19 @@ def draw_box(img, boxes):
 
     draw_thickness = int(max(img.size) * 0.005)
     draw = ImageDraw.Draw(img)
-    clsid2color = {}
+    label2color = {}
     catid2fontcolor = {}
     color_list = get_colormap(rgb=True)
 
     for i, dt in enumerate(boxes):
-        clsid, bbox, score = dt["cls_id"], dt["coordinate"], dt["score"]
-        if clsid not in clsid2color:
+        label, bbox, score = dt["label"], dt["coordinate"], dt["score"]
+        if label not in label2color:
             color_index = i % len(color_list)
-            clsid2color[clsid] = color_list[color_index]
-            catid2fontcolor[clsid] = font_colormap(color_index)
-        color = tuple(clsid2color[clsid])
-        font_color = tuple(catid2fontcolor[clsid])
+            label2color[label] = color_list[color_index]
+            catid2fontcolor[label] = font_colormap(color_index)
+        color = tuple(label2color[label])
+        font_color = tuple(catid2fontcolor[label])
 
         xmin, ymin, xmax, ymax = bbox
         # draw bbox

+ 35 - 0
paddlex/inference/results/shitu.py

@@ -0,0 +1,35 @@
+# copyright (c) 2024 PaddlePaddle Authors. All Rights Reserve.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import numpy as np
+from .base import CVResult
+from .det import draw_box
+
+
+class ShiTuResult(CVResult):
+
+    def _to_img(self):
+        """Draw the top-1 retrieval label and score for each box onto the input image."""
+        image = self._img_reader.read(self["input_path"])
+        boxes = [
+            {
+                "coordinate": box["coordinate"],
+                "label": box["labels"][0],
+                "score": box["rec_scores"][0],
+            }
+            for box in self["boxes"]
+            if box["rec_scores"] is not None
+        ]
+        image = draw_box(image, boxes)
+        return image

+ 13 - 0
paddlex/pipelines/PP-ShiTuV2.yaml

@@ -0,0 +1,13 @@
+Global:
+  pipeline_name: PP-ShiTuV2
+  input: ./drink_dataset_v2.0/test_images/100.jpeg
+  
+Pipeline:
+  det_model: PP-ShiTuV2_det
+  rec_model: PP-ShiTuV2_rec
+  det_batch_size: 1
+  rec_batch_size: 1
+  device: gpu
+  index_dir: "./drink_dataset_v2.0/index/"
+  score_thres: 0.5
+  return_k: 5