
Merge pull request #2063 from myhloli/dev

update docs
Xiaomeng Zhao 7 months ago
commit 7de9668d64

+ 14 - 18
README.md

@@ -215,7 +215,7 @@ There are three different ways to experience MinerU:
     </tr>
     <tr>
         <td colspan="3">Python Version</td>
-        <td colspan="3">3.10(Please make sure to create a Python 3.10 virtual environment using conda)</td>
+        <td colspan="3">3.10~3.12</td>
     </tr>
     <tr>
         <td colspan="3">Nvidia Driver Version</td>
@@ -225,8 +225,8 @@ There are three different ways to experience MinerU:
     </tr>
     <tr>
         <td colspan="3">CUDA Environment</td>
-        <td>Automatic installation [12.1 (pytorch) + 11.8 (paddle)]</td>
-        <td>11.8 (manual installation) + cuDNN v8.7.0 (manual installation)</td>
+        <td>11.8/12.4/12.6</td>
+        <td>11.8/12.4/12.6</td>
         <td>None</td>
     </tr>
     <tr>
@@ -236,11 +236,11 @@ There are three different ways to experience MinerU:
         <td>None</td>
     </tr>
     <tr>
-        <td rowspan="2">GPU Hardware Support List</td>
-        <td colspan="2">GPU VRAM 8GB or more</td>
-        <td colspan="2">2080~2080Ti / 3060Ti~3090Ti / 4060~4090<br>
-        8G VRAM can enable all acceleration features</td>
-        <td rowspan="2">None</td>
+        <td rowspan="2">GPU/MPS Hardware Support List</td>
+        <td colspan="2">GPU VRAM 6GB or more</td>
+        <td colspan="2">All GPUs with Tensor Cores produced from Volta (2017) onwards,<br>
+        with 6GB or more of VRAM</td>
+        <td rowspan="2">Apple silicon</td>
     </tr>
 </table>
 
@@ -257,9 +257,9 @@ Synced with dev branch updates:
 #### 1. Install magic-pdf

 ```bash
-conda create -n mineru python=3.10
+conda create -n mineru 'python<3.13' -y
 conda activate mineru
-pip install -U "magic-pdf[full]" --extra-index-url https://wheels.myhloli.com
+pip install -U "magic-pdf[full]"
 ```

 #### 2. Download model weight files
@@ -284,7 +284,7 @@ You can modify certain configurations in this file to enable or disable features
 {
     // other config
     "layout-config": {
-        "model": "doclayout_yolo" // Please change to "layoutlmv3" when using layoutlmv3.
+        "model": "doclayout_yolo" 
     },
     "formula-config": {
         "mfd_model": "yolo_v8_mfd",
@@ -292,7 +292,7 @@ You can modify certain configurations in this file to enable or disable features
         "enable": true  // The formula recognition feature is enabled by default. If you need to disable it, please change the value here to "false".
     },
     "table-config": {
-        "model": "rapid_table",  // Default to using "rapid_table", can be switched to "tablemaster" or "struct_eqtable".
+        "model": "rapid_table", 
         "sub_model": "slanet_plus",  // When the model is "rapid_table", you can choose a sub_model. The options are "slanet_plus" and "unitable"
         "enable": true, // The table recognition feature is enabled by default. If you need to disable it, please change the value here to "false".
         "max_time": 400
@@ -308,7 +308,7 @@ If your device supports CUDA and meets the GPU requirements of the mainline envi
 - [Windows 10/11 + GPU](docs/README_Windows_CUDA_Acceleration_en_US.md)
 - Quick Deployment with Docker
 > [!IMPORTANT]
-> Docker requires a GPU with at least 8GB of VRAM, and all acceleration features are enabled by default.
+> Docker requires a GPU with at least 6GB of VRAM, and all acceleration features are enabled by default.
 >
 > Before running this Docker, you can use the following command to check if your device supports CUDA acceleration on Docker.
 > 
@@ -330,7 +330,7 @@ If your device has NPU acceleration hardware, you can follow the tutorial below
 
 ### Using MPS

-If your device uses Apple silicon chips, you can enable MPS acceleration for certain supported tasks (such as layout detection and formula detection).
+If your device uses Apple silicon chips, you can enable MPS acceleration for your tasks.
 
 You can enable MPS acceleration by setting the `device-mode` parameter to `mps` in the `magic-pdf.json` configuration file.

@@ -341,10 +341,6 @@ You can enable MPS acceleration by setting the `device-mode` parameter to `mps`
 }
 ```

-> [!TIP]
-> Since the formula recognition task cannot utilize MPS acceleration, you can disable the formula recognition feature in tasks where it is not needed to achieve optimal performance.
->
-> You can disable the formula recognition feature by setting the `enable` parameter in the `formula-config` section to `false`.
 
 ## Usage


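The README.md hunks above broaden the MPS section so that Apple silicon users can run all tasks with `device-mode` set to `mps`. Before editing `magic-pdf.json`, a quick sanity check of the MPS backend can help; this is a minimal sketch, assuming PyTorch is already installed in the activated `mineru` environment:

```bash
# Prints True when torch was built with MPS support and an Apple silicon GPU is visible
python -c "import torch; print(torch.backends.mps.is_available())"
```

If it prints False, stay on `cpu` until the environment is fixed.
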
+ 4 - 8
README_zh-CN.md

@@ -288,7 +288,7 @@ pip install -U "magic-pdf[full]" -i https://mirrors.aliyun.com/pypi/simple
 {
     // other config
     "layout-config": {
-        "model": "doclayout_yolo" // 使用layoutlmv3请修改为“layoutlmv3"
+        "model": "doclayout_yolo" 
     },
     "formula-config": {
         "mfd_model": "yolo_v8_mfd",
@@ -296,7 +296,7 @@ pip install -U "magic-pdf[full]" -i https://mirrors.aliyun.com/pypi/simple
         "enable": true  // 公式识别功能默认是开启的,如果需要关闭请修改此处的值为"false"
     },
     "table-config": {
-        "model": "rapid_table",  // 默认使用"rapid_table",可以切换为"tablemaster"和"struct_eqtable"
+        "model": "rapid_table",  
         "sub_model": "slanet_plus",  // 当model为"rapid_table"时,可以自选sub_model,可选项为"slanet_plus"和"unitable"
         "enable": true, // 表格识别功能默认是开启的,如果需要关闭请修改此处的值为"false"
         "max_time": 400
@@ -312,7 +312,7 @@ pip install -U "magic-pdf[full]" -i https://mirrors.aliyun.com/pypi/simple
 - [Windows10/11 + GPU](docs/README_Windows_CUDA_Acceleration_zh_CN.md)
 - 使用Docker快速部署
 > [!IMPORTANT]
-> Docker 需设备gpu显存大于等于8GB,默认开启所有加速功能
+> Docker 需设备gpu显存大于等于6GB,默认开启所有加速功能
 > 
 > 运行本docker前可以通过以下命令检测自己的设备是否支持在docker上使用CUDA加速
 > 
@@ -332,7 +332,7 @@ pip install -U "magic-pdf[full]" -i https://mirrors.aliyun.com/pypi/simple
 [NPU加速教程](docs/README_Ascend_NPU_Acceleration_zh_CN.md)
 
 ### 使用MPS
-如果您的设备使用Apple silicon 芯片,您可以在部分支持的任务(layout检测/公式检测)中开启mps加速:
+如果您的设备使用Apple silicon 芯片,您可以开启mps加速:
 
 您可以通过在 `magic-pdf.json` 配置文件中将 `device-mode` 参数设置为 `mps` 来启用 MPS 加速。
 
@@ -343,10 +343,6 @@ pip install -U "magic-pdf[full]" -i https://mirrors.aliyun.com/pypi/simple
 }
 ```

-> [!TIP]
-> 由于公式识别任务无法开启mps加速,您可在不需要识别公式的任务关闭公式识别功能以获得最佳性能。
->
-> 您可以通过将 `formula-config` 部分中的 `enable` 参数设置为 `false` 来禁用公式识别功能。
 
 
 ## 使用

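Both READMEs lower the Docker VRAM requirement from 8GB to 6GB and refer to a command (not shown in this diff) for checking CUDA support inside Docker. As an illustrative sketch only, with the CUDA base image tag being an assumption rather than the command from the README, one common check is:

```bash
# If this prints the usual nvidia-smi table, the Docker runtime can reach the NVIDIA driver and GPU
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```
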
+ 10 - 23
docs/README_Ubuntu_CUDA_Acceleration_en_US.md

@@ -9,11 +9,11 @@ nvidia-smi
 If you see information similar to the following, it means that the NVIDIA drivers are already installed, and you can skip Step 2.
 
 > [!NOTE]
-> Notice:`CUDA Version` should be >= 12.1, If the displayed version number is less than 12.1, please upgrade the driver.
+> Notice: `CUDA Version` should be >= 12.4. If the displayed version number is less than 12.4, please upgrade the driver.
 
 ```plaintext
 +---------------------------------------------------------------------------------------+
-| NVIDIA-SMI 537.34                 Driver Version: 537.34       CUDA Version: 12.2     |
+| NVIDIA-SMI 570.133.07             Driver Version: 572.83         CUDA Version: 12.8   |
 |-----------------------------------------+----------------------+----------------------+
 | GPU  Name                     TCC/WDDM  | Bus-Id        Disp.A | Volatile Uncorr. ECC |
 | Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
@@ -31,7 +31,7 @@ If no driver is installed, use the following command:
 
 ```sh
 sudo apt-get update
-sudo apt-get install nvidia-driver-545
+sudo apt-get install nvidia-driver-570-server
 ```

 Install the proprietary driver and restart your computer after installation.
@@ -53,17 +53,15 @@ In the final step, enter `yes`, close the terminal, and reopen it.
 
 ### 4. Create an Environment Using Conda

-Specify Python version 3.10.
-
-```sh
-conda create -n MinerU python=3.10
-conda activate MinerU
+```bash
+conda create -n mineru 'python<3.13' -y
+conda activate mineru
 ```

 ### 5. Install Applications

 ```sh
-pip install -U magic-pdf[full] --extra-index-url https://wheels.myhloli.com
+pip install -U magic-pdf[full]
 ```
 > [!IMPORTANT]
 > After installation, make sure to check the version of `magic-pdf` using the following command:
@@ -72,7 +70,7 @@ pip install -U magic-pdf[full] --extra-index-url https://wheels.myhloli.com
 > magic-pdf --version
 > ```
 >
-> If the version number is less than 0.7.0, please report the issue.
+> If the version number is less than 1.3.0, please report the issue.
 
 ### 6. Download Models

@@ -100,7 +98,7 @@ magic-pdf -p small_ocr.pdf -o ./output
 
 ### 9. Test CUDA Acceleration

-If your graphics card has at least **8GB** of VRAM, follow these steps to test CUDA acceleration:
+If your graphics card has at least **6GB** of VRAM, follow these steps to test CUDA acceleration:
 
 1. Modify the value of `"device-mode"` in the `magic-pdf.json` configuration file located in your home directory.
    ```json
@@ -111,15 +109,4 @@ If your graphics card has at least **8GB** of VRAM, follow these steps to test C
 2. Test CUDA acceleration with the following command:
    ```sh
    magic-pdf -p small_ocr.pdf -o ./output
-   ```
-
-### 10. Enable CUDA Acceleration for OCR
-
-1. Download `paddlepaddle-gpu`. Installation will automatically enable OCR acceleration.
-   ```sh
-   python -m pip install paddlepaddle-gpu==3.0.0rc1 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/
-   ```
-2. Test OCR acceleration with the following command:
-   ```sh
-   magic-pdf -p small_ocr.pdf -o ./output
-   ```
+   ```

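The Ubuntu guide now expects a driver reporting CUDA >= 12.4 and a GPU with at least 6GB of VRAM. A small helper sketch, assuming `nvidia-smi` is available after step 2, reads both values at a glance:

```bash
# Reports each GPU's name, driver version, and total VRAM as CSV
nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv
```
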
+ 9 - 27
docs/README_Ubuntu_CUDA_Acceleration_zh_CN.md

@@ -9,11 +9,11 @@ nvidia-smi
 如果看到类似如下的信息,说明已经安装了nvidia驱动,可以跳过步骤2
 
 > [!NOTE]
-> `CUDA Version` 显示的版本号应 >= 12.1,如显示的版本号小于12.1,请升级驱动
+> `CUDA Version` 显示的版本号应 >= 12.4,如显示的版本号小于12.4,请升级驱动
 
 ```plaintext
 +---------------------------------------------------------------------------------------+
-| NVIDIA-SMI 537.34                 Driver Version: 537.34       CUDA Version: 12.2     |
+| NVIDIA-SMI 570.133.07             Driver Version: 572.83         CUDA Version: 12.8   |
 |-----------------------------------------+----------------------+----------------------+
 | GPU  Name                     TCC/WDDM  | Bus-Id        Disp.A | Volatile Uncorr. ECC |
 | Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
@@ -31,7 +31,7 @@ nvidia-smi
 
 ```bash
 sudo apt-get update
-sudo apt-get install nvidia-driver-545
+sudo apt-get install nvidia-driver-570-server
 ```
 
 安装专有驱动,安装完成后,重启电脑
@@ -53,17 +53,15 @@ bash Anaconda3-2024.06-1-Linux-x86_64.sh
 
 ## 4. 使用conda 创建环境

-需指定python版本为3.10
-
 ```bash
-conda create -n MinerU python=3.10
-conda activate MinerU
+conda create -n mineru 'python<3.13' -y
+conda activate mineru
 ```

 ## 5. 安装应用

 ```bash
-pip install -U magic-pdf[full] --extra-index-url https://wheels.myhloli.com -i https://mirrors.aliyun.com/pypi/simple
+pip install -U magic-pdf[full] -i https://mirrors.aliyun.com/pypi/simple
 ```

 > [!IMPORTANT]
@@ -73,7 +71,7 @@ pip install -U magic-pdf[full] --extra-index-url https://wheels.myhloli.com -i h
 > magic-pdf --version
 > ```
 >
-> 如果版本号小于0.7.0,请到issue中向我们反馈
+> 如果版本号小于1.3.0,请到issue中向我们反馈
 
 ## 6. 下载模型

@@ -99,7 +97,7 @@ magic-pdf -p small_ocr.pdf -o ./output
 
 ## 9. 测试CUDA加速

-如果您的显卡显存大于等于 **8GB** ,可以进行以下流程,测试CUDA解析加速效果
+如果您的显卡显存大于等于 **6GB** ,可以进行以下流程,测试CUDA解析加速效果
 
 **1.修改【用户目录】中配置文件magic-pdf.json中"device-mode"的值**
 
@@ -115,20 +113,4 @@ magic-pdf -p small_ocr.pdf -o ./output
 magic-pdf -p small_ocr.pdf -o ./output
 ```
 > [!TIP]
-> CUDA加速是否生效可以根据log中输出的各个阶段cost耗时来简单判断,通常情况下,`layout detection cost` 和 `mfr time` 应提速10倍以上。
-
-## 10. 为ocr开启cuda加速
-
-**1.下载paddlepaddle-gpu, 安装完成后会自动开启ocr加速**
-
-```bash
-python -m pip install paddlepaddle-gpu==3.0.0rc1 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/
-```
-
-**2.运行以下命令测试ocr加速效果**
-
-```bash
-magic-pdf -p small_ocr.pdf -o ./output
-```
-> [!TIP]
-> CUDA加速是否生效可以根据log中输出的各个阶段cost耗时来简单判断,通常情况下,`ocr cost`应提速10倍以上。
+> CUDA加速是否生效可以根据log中输出的各个阶段cost耗时来简单判断,通常情况下,使用cuda加速会比cpu更快。

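Both Ubuntu guides finish by editing `"device-mode"` in the `magic-pdf.json` file in the user directory and re-running the test command. A small non-interactive helper for flipping that value, offered only as a sketch and assuming the file lives at `~/magic-pdf.json` as described in the configuration step, could look like this:

```bash
# Switch magic-pdf to CUDA; use "cpu" or "mps" instead as needed
python - <<'EOF'
import json, os

path = os.path.expanduser("~/magic-pdf.json")
with open(path, encoding="utf-8") as f:
    cfg = json.load(f)
cfg["device-mode"] = "cuda"
with open(path, "w", encoding="utf-8") as f:
    json.dump(cfg, f, ensure_ascii=False, indent=4)
EOF
```
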
+ 13 - 25
docs/README_Windows_CUDA_Acceleration_en_US.md

@@ -2,10 +2,11 @@
 
 ### 1. Install CUDA and cuDNN

-Required versions: CUDA 11.8 + cuDNN 8.7.0
+You need to install a CUDA version that is compatible with torch's requirements. Currently, torch supports CUDA 11.8/12.4/12.6.
 
-- CUDA 11.8: https://developer.nvidia.com/cuda-11-8-0-download-archive
-- cuDNN v8.7.0 (November 28th, 2022), for CUDA 11.x: https://developer.nvidia.com/rdp/cudnn-archive
+- CUDA 11.8 https://developer.nvidia.com/cuda-11-8-0-download-archive
+- CUDA 12.4 https://developer.nvidia.com/cuda-12-4-0-download-archive
+- CUDA 12.6 https://developer.nvidia.com/cuda-12-6-0-download-archive
 
 ### 2. Install Anaconda

@@ -15,17 +16,15 @@ Download link: https://repo.anaconda.com/archive/Anaconda3-2024.06-1-Windows-x86
 
 ### 3. Create an Environment Using Conda

-Python version must be 3.10.
-
-```
-conda create -n MinerU python=3.10
-conda activate MinerU
+```bash
+conda create -n mineru 'python<3.13' -y
+conda activate mineru
 ```

 ### 4. Install Applications

 ```
-pip install -U magic-pdf[full] --extra-index-url https://wheels.myhloli.com
+pip install -U magic-pdf[full]
 ```

 > [!IMPORTANT]
@@ -35,7 +34,7 @@ pip install -U magic-pdf[full] --extra-index-url https://wheels.myhloli.com
 > magic-pdf --version
 > ```
 >
-> If the version number is less than 0.7.0, please report it in the issues section.
+> If the version number is less than 1.3.0, please report it in the issues section.
 
 ### 5. Download Models

@@ -60,12 +59,12 @@ Download a sample file from the repository and test it.
 
 ### 8. Test CUDA Acceleration

-If your graphics card has at least 8GB of VRAM, follow these steps to test CUDA-accelerated parsing performance.
+If your graphics card has at least 6GB of VRAM, follow these steps to test CUDA-accelerated parsing performance.
 
-1. **Overwrite the installation of torch and torchvision** supporting CUDA.
+1. **Overwrite the installation of torch and torchvision** supporting CUDA. (Please select the appropriate index-url based on your CUDA version. For more details, refer to the [PyTorch official website](https://pytorch.org/get-started/locally/).)
 
    ```
-   pip install --force-reinstall torch==2.3.1 torchvision==0.18.1 "numpy<2.0.0" --index-url https://download.pytorch.org/whl/cu118
+   pip install --force-reinstall torch==2.6.0 torchvision==0.21.1 "numpy<2.0.0" --index-url https://download.pytorch.org/whl/cu124
    ```
 
 2. **Modify the value of `"device-mode"`** in the `magic-pdf.json` configuration file located in your user directory.
@@ -81,15 +80,4 @@ If your graphics card has at least 8GB of VRAM, follow these steps to test CUDA-
 
    ```
    magic-pdf -p small_ocr.pdf -o ./output
-   ```
-
-### 9. Enable CUDA Acceleration for OCR
-
-1. **Download paddlepaddle-gpu**, which will automatically enable OCR acceleration upon installation.
-   ```
-   pip install paddlepaddle-gpu==2.6.1
-   ```
-2. **Run the following command to test OCR acceleration**:
-   ```
-   magic-pdf -p small_ocr.pdf -o ./output
-   ```
+   ```

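The Windows guide now pins torch 2.6.0 and asks readers to pick the index-url that matches their installed CUDA toolkit. An assumed example for a CUDA 12.6 toolkit (the guide lists 11.8/12.4/12.6 as the supported versions; swap in `cu118` or `cu124` accordingly):

```bash
# Reinstall CUDA-enabled wheels from the matching PyTorch index, then confirm the GPU is visible
pip install --force-reinstall torch==2.6.0 torchvision "numpy<2.0.0" --index-url https://download.pytorch.org/whl/cu126
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```
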
+ 11 - 28
docs/README_Windows_CUDA_Acceleration_zh_CN.md

@@ -2,10 +2,11 @@
 
 ## 1. 安装cuda和cuDNN

-需要安装的版本 CUDA 11.8 + cuDNN 8.7.0
+需要安装符合torch要求的cuda版本,torch目前支持11.8/12.4/12.6
 
 - CUDA 11.8 https://developer.nvidia.com/cuda-11-8-0-download-archive
-- cuDNN v8.7.0 (November 28th, 2022), for CUDA 11.x https://developer.nvidia.com/rdp/cudnn-archive
+- CUDA 12.4 https://developer.nvidia.com/cuda-12-4-0-download-archive
+- CUDA 12.6 https://developer.nvidia.com/cuda-12-6-0-download-archive
 
 ## 2. 安装anaconda

@@ -16,17 +17,15 @@ https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/Anaconda3-2024.06-1-Window
 
 ## 3. 使用conda 创建环境

-需指定python版本为3.10
-
 ```bash
-conda create -n MinerU python=3.10
-conda activate MinerU
+conda create -n mineru 'python<3.13' -y
+conda activate mineru
 ```

 ## 4. 安装应用

 ```bash
-pip install -U magic-pdf[full] --extra-index-url https://wheels.myhloli.com -i https://mirrors.aliyun.com/pypi/simple
+pip install -U magic-pdf[full] -i https://mirrors.aliyun.com/pypi/simple
 ```

 > [!IMPORTANT]
@@ -36,7 +35,7 @@ pip install -U magic-pdf[full] --extra-index-url https://wheels.myhloli.com -i h
 > magic-pdf --version
 > ```
 >
-> 如果版本号小于0.7.0,请到issue中向我们反馈
+> 如果版本号小于 1.3.0 ,请到issue中向我们反馈
 
 ## 5. 下载模型

@@ -61,12 +60,12 @@ pip install -U magic-pdf[full] --extra-index-url https://wheels.myhloli.com -i h
 
 ## 8. 测试CUDA加速

-如果您的显卡显存大于等于 **8GB** ,可以进行以下流程,测试CUDA解析加速效果
+如果您的显卡显存大于等于 **6GB** ,可以进行以下流程,测试CUDA解析加速效果
 
-**1.覆盖安装支持cuda的torch和torchvision**
+**1.覆盖安装支持cuda的torch和torchvision**(请根据cuda版本选择合适的index-url,具体可参考[torch官网](https://pytorch.org/get-started/locally/))
 
 ```bash
-pip install --force-reinstall torch==2.3.1 torchvision==0.18.1 "numpy<2.0.0" --index-url https://download.pytorch.org/whl/cu118
+pip install --force-reinstall torch==2.6.0 torchvision==0.21.1 "numpy<2.0.0" --index-url https://download.pytorch.org/whl/cu124
 ```
 
 **2.修改【用户目录】中配置文件magic-pdf.json中"device-mode"的值**
@@ -84,20 +83,4 @@ magic-pdf -p small_ocr.pdf -o ./output
 ```

 > [!TIP]
-> CUDA加速是否生效可以根据log中输出的各个阶段的耗时来简单判断,通常情况下,`layout detection time` 和 `mfr time` 应提速10倍以上。
-
-## 9. 为ocr开启cuda加速
-
-**1.下载paddlepaddle-gpu, 安装完成后会自动开启ocr加速**
-
-```bash
-pip install paddlepaddle-gpu==2.6.1
-```
-
-**2.运行以下命令测试ocr加速效果**
-
-```bash
-magic-pdf -p small_ocr.pdf -o ./output
-```
-> [!TIP]
-> CUDA加速是否生效可以根据log中输出的各个阶段cost耗时来简单判断,通常情况下,`ocr time`应提速10倍以上。
+> CUDA加速是否生效可以根据log中输出的各个阶段的耗时来简单判断,通常情况下,cuda加速后运行速度比cpu更快。
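
The new tips in both CUDA guides say to judge whether acceleration is working by comparing per-stage timings in the log against a CPU run. A rough way to make that comparison with the sample file the guides already use is to time the same command twice, toggling `"device-mode"` between `"cpu"` and `"cuda"` in `magic-pdf.json` between runs:

```bash
# Run once with "device-mode": "cpu" and once with "cuda", then compare the wall-clock times
time magic-pdf -p small_ocr.pdf -o ./output
```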