fix: update README files to include lmdeploy-engine and adjust accuracy details

myhloli · 2 weeks ago · commit 6c27bc7f53
2 changed files with 18 additions and 12 deletions
  1. README.md (+9 -6)
  2. README_zh-CN.md (+9 -6)

README.md (+9 -6)

@@ -632,12 +632,13 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing
         <tr>
             <th rowspan="2">Parsing Backend</th>
             <th rowspan="2">pipeline <br> (Accuracy<sup>1</sup> 82+)</th>
-            <th colspan="4">vlm (Accuracy<sup>1</sup> 90+)</th>
+            <th colspan="5">vlm (Accuracy<sup>1</sup> 90+)</th>
         </tr>
         <tr>
             <th>transformers</th>
             <th>mlx-engine</th>
             <th>vllm-engine / <br>vllm-async-engine</th>
+            <th>lmdeploy-engine</th>
             <th>http-client</th>
         </tr>
     </thead>
@@ -648,6 +649,7 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing
             <td>Good compatibility, <br>but slower</td>
             <td>Faster than transformers</td>
             <td>Fast, compatible with the vLLM ecosystem</td>
+            <td>Fast, compatible with the LMDeploy ecosystem</td>
             <td>Suitable for OpenAI-compatible servers<sup>5</sup></td>
         </tr>
         <tr>
@@ -655,33 +657,34 @@ A WebUI developed based on Gradio, with a simple interface and only core parsing
             <td colspan="2" style="text-align:center;">Linux<sup>2</sup> / Windows / macOS</td>
             <td style="text-align:center;">macOS<sup>3</sup></td>
             <td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>4</sup> </td>
+            <td style="text-align:center;">Linux<sup>2</sup> / Windows </td>
             <td>Any</td>
         </tr>
         <tr>
             <th>CPU inference support</th>
             <td colspan="2" style="text-align:center;">✅</td>
-            <td colspan="2" style="text-align:center;">❌</td>
+            <td colspan="3" style="text-align:center;">❌</td>
             <td>Not required</td>
         </tr>
         <tr>
             <th>GPU Requirements</th><td colspan="2" style="text-align:center;">Volta or later architectures, 6 GB VRAM or more, or Apple Silicon</td>
             <td>Apple Silicon</td>
-            <td>Volta or later architectures, 8 GB VRAM or more</td>
+            <td colspan="2" style="text-align:center;">Volta or later architectures, 8 GB VRAM or more</td>
             <td>Not required</td>
         </tr>
         <tr>
             <th>Memory Requirements</th>
-            <td colspan="4" style="text-align:center;">Minimum 16 GB, 32 GB recommended</td>
+            <td colspan="5" style="text-align:center;">Minimum 16 GB, 32 GB recommended</td>
             <td>8 GB</td>
         </tr>
         <tr>
             <th>Disk Space Requirements</th>
-            <td colspan="4" style="text-align:center;">20 GB or more, SSD recommended</td>
+            <td colspan="5" style="text-align:center;">20 GB or more, SSD recommended</td>
             <td>2 GB</td>
         </tr>
         <tr>
             <th>Python Version</th>
-            <td colspan="5" style="text-align:center;">3.10-3.13</td>
+            <td colspan="6" style="text-align:center;">3.10-3.13</td>
         </tr>
     </tbody>
 </table>

README_zh-CN.md (+9 -6)

@@ -619,12 +619,13 @@ https://github.com/user-attachments/assets/4bea02c9-6d54-4cd6-97ed-dff14340982c
         <tr>
             <th rowspan="2">解析后端</th>
             <th rowspan="2">pipeline <br> (精度<sup>1</sup> 82+)</th>
-            <th colspan="4">vlm (精度<sup>1</sup> 90+)</th>
+            <th colspan="5">vlm (精度<sup>1</sup> 90+)</th>
         </tr>
         <tr>
             <th>transformers</th>
             <th>mlx-engine</th>
             <th>vllm-engine / <br>vllm-async-engine</th>
+            <th>lmdeploy-engine</th>
             <th>http-client</th>
         </tr>
     </thead>
@@ -635,6 +636,7 @@ https://github.com/user-attachments/assets/4bea02c9-6d54-4cd6-97ed-dff14340982c
             <td>兼容性好, 速度较慢</td>
             <td>比transformers快</td>
             <td>速度快, 兼容vllm生态</td>
+            <td>速度快, 兼容lmdeploy生态</td>
             <td>适用于OpenAI兼容服务器<sup>5</sup></td>
         </tr>
         <tr>
@@ -642,33 +644,34 @@ https://github.com/user-attachments/assets/4bea02c9-6d54-4cd6-97ed-dff14340982c
             <td colspan="2" style="text-align:center;">Linux<sup>2</sup> / Windows / macOS</td>
             <td style="text-align:center;">macOS<sup>3</sup></td>
             <td style="text-align:center;">Linux<sup>2</sup> / Windows<sup>4</sup> </td>
+            <td style="text-align:center;">Linux<sup>2</sup> / Windows </td>
             <td>不限</td>
         </tr>
         <tr>
             <th>CPU推理支持</th>
             <td colspan="2" style="text-align:center;">✅</td>
-            <td colspan="2" style="text-align:center;">❌</td>
+            <td colspan="3" style="text-align:center;">❌</td>
             <td >不需要</td>
         </tr>
         <tr>
             <th>GPU要求</th><td colspan="2" style="text-align:center;">Volta及以后架构, 6G显存以上或Apple Silicon</td>
             <td>Apple Silicon</td>
-            <td>Volta及以后架构, 8G显存以上</td>
+            <td colspan="2" style="text-align:center;">Volta及以后架构, 8G显存以上</td>
             <td>不需要</td>
         </tr>
         <tr>
             <th>内存要求</th>
-            <td colspan="4" style="text-align:center;">最低16GB以上, 推荐32GB以上</td>
+            <td colspan="5" style="text-align:center;">最低16GB以上, 推荐32GB以上</td>
             <td>8GB</td>
         </tr>
         <tr>
             <th>磁盘空间要求</th>
-            <td colspan="4" style="text-align:center;">20GB以上, 推荐使用SSD</td>
+            <td colspan="5" style="text-align:center;">20GB以上, 推荐使用SSD</td>
             <td>2GB</td>
         </tr>
         <tr>
             <th>python版本</th>
-            <td colspan="5" style="text-align:center;">3.10-3.13</td>
+            <td colspan="6" style="text-align:center;">3.10-3.13</td>
         </tr>
     </tbody>
 </table>
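
For context on how the backends compared in the updated table are usually selected, here is a minimal sketch of the `mineru` CLI. The `-p`/`-o`/`-b` options and the backend identifiers shown (`pipeline`, `vlm-vllm-engine`, `vlm-http-client`) are assumptions based on the project's documented usage, and the `vlm-lmdeploy-engine` name is inferred from the new table column rather than confirmed by this commit; check `mineru --help` on your installed version for the exact identifiers.

```bash
# Hedged sketch: backend names are assumed to mirror the table columns above.

# pipeline backend: broad compatibility, CPU or GPU
mineru -p demo.pdf -o output/ -b pipeline

# vlm backend served by the vLLM engine (Linux/Windows, 8 GB+ VRAM)
mineru -p demo.pdf -o output/ -b vlm-vllm-engine

# assumed identifier for the LMDeploy-backed engine added in this commit
mineru -p demo.pdf -o output/ -b vlm-lmdeploy-engine

# http-client backend pointing at an OpenAI-compatible server (no local GPU needed)
mineru -p demo.pdf -o output/ -b vlm-http-client -u http://127.0.0.1:30000
```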