
Ascend Extension for PyTorch

Overview

This repository develops the Ascend Extension for PyTorch, named torch_npu, which adapts PyTorch to the Ascend NPU so that developers using PyTorch can tap the powerful compute capabilities of Ascend AI Processors.

Ascend is a full-stack AI computing infrastructure for industry applications and services, based on Huawei Ascend processors and software. For more information about Ascend, see the Ascend Community.

Installation

From Binary

We provide wheel packages so that users can install torch_npu quickly. Before installing torch_npu, complete the installation of CANN according to the Ascend Auxiliary Software table. To obtain the CANN installation package, refer to the CANN Installation guide.

  • Install PyTorch through pip.

    For Aarch64:

    pip3 install torch==2.1.0

    For x86:

    pip3 install torch==2.1.0+cpu  --index-url https://download.pytorch.org/whl/cpu
  • Install torch_npu dependencies by running the following commands.

    pip3 install pyyaml
    pip3 install setuptools

    If the installation fails, use the download link or visit the PyTorch official website to download the installation package of the corresponding version.
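
    A quick, illustrative way to confirm that the packages above are visible to your Python environment is to query pip's metadata with the standard library only (no imports of torch itself, so this works even before the NPU stack is set up):

```python
# Illustrative sanity check: report installed versions via pip metadata
# (stdlib only; does not import the packages themselves).
from importlib import metadata

def installed_version(pkg):
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return "not installed"

for pkg in ("torch", "pyyaml", "setuptools"):
    print(pkg, installed_version(pkg))
```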

    [Table: per-platform download links by OS, arch, and Python version]

    From Source

    In some special scenarios, users may need to compile torch_npu themselves. First select a branch from the Ascend Auxiliary Software table and a Python version from the PyTorch and Python Version Matching Table. The docker image is recommended for compiling torch_npu through the following steps; mount only the working path, not system paths, to reduce security risks. The generated .whl file is placed under ./dist/. If you compile without the docker image, note the gcc constraints: we recommend gcc 10.2 for ARM and gcc 9.3.1 for x86.
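
A simple way to check the gcc constraint before a non-docker build is to print the local compiler version and machine architecture (this snippet is a sketch; it only reports, it does not enforce the recommendation):

```shell
# Sketch: report gcc version and architecture before a non-docker build.
# Recommended per the note above: gcc 10.2 on ARM (aarch64), gcc 9.3.1 on x86.
uname -m
if command -v gcc >/dev/null 2>&1; then
    gcc -dumpfullversion
else
    echo "gcc not found"
fi
```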

    Clone torch-npu

    git clone https://github.com/ascend/pytorch.git -b 2.1.0-7.2.0 --depth 1
    

    Build Docker Image

    cd pytorch/ci/docker/{arch} # {arch} for X86 or ARM
    docker build -t manylinux-builder:v1 .
    

    Enter Docker Container

    docker run -it -v /{code_path}/pytorch:/home/pytorch manylinux-builder:v1 bash
    # {code_path} is the torch_npu source code path
    

    Compile torch-npu

    Take Python 3.8 as an example.

    cd /home/pytorch
    bash ci/build.sh --python=3.8
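
When the build finishes, the wheel appears under ./dist/ and is installed with the usual pip step. The sketch below simulates that layout so the commands are self-contained; the wheel file name is illustrative, since the real name encodes the version, Python tag, and platform:

```shell
# Illustrative only: simulate the build output layout under ./dist/,
# then show the install command you would run against the real wheel.
mkdir -p dist && touch dist/torch_npu-2.1.0-cp38-illustrative.whl
ls dist/
# pip3 install dist/torch_npu-*.whl   # actual install step
```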
    

    If you would like to compile with the new C++ ABI, first run the following command. In this case, the recommended compilation environment matches the community torch package: glibc 2.28 and gcc 11.2.1.

    export _GLIBCXX_USE_CXX11_ABI=1
    

    Meanwhile, we support configuring -fabi-version through the following variable, which must be consistent with the community torch package:

    export _ABI_VERSION=16
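
To see how these two variables relate to compiler flags, here is a hypothetical sketch of how a build script could translate them; the flag construction below is an assumption for illustration, not torch_npu's actual build logic (though -D_GLIBCXX_USE_CXX11_ABI and -fabi-version are the real g++ macro and flag behind the two settings):

```python
import os

# Hypothetical: map the two environment variables above to g++ flags.
# Defaults mirror the export commands in this README.
os.environ.setdefault("_GLIBCXX_USE_CXX11_ABI", "1")
os.environ.setdefault("_ABI_VERSION", "16")

flags = [
    f"-D_GLIBCXX_USE_CXX11_ABI={os.environ['_GLIBCXX_USE_CXX11_ABI']}",
    f"-fabi-version={os.environ['_ABI_VERSION']}",
]
print(" ".join(flags))
```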
    

    Getting Started

    Prerequisites

    Initialize the CANN environment variables by running the command below.

    # Default path, change it if needed.
    source /usr/local/Ascend/ascend-toolkit/set_env.sh
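
A defensive variant (a sketch, not part of the official setup) sources the script only if it exists, so shell profiles on machines without CANN installed do not error out; the path is the default one from above:

```shell
# Sketch: source the CANN env script only when present.
CANN_ENV=/usr/local/Ascend/ascend-toolkit/set_env.sh
if [ -f "$CANN_ENV" ]; then
    . "$CANN_ENV" && echo "CANN environment initialized"
else
    echo "CANN toolkit not found at $CANN_ENV"
fi
```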

    Quick Verification

    You can quickly try out the Ascend NPU with the following simple example.

    import torch
    import torch_npu # No longer needed in torch_npu 2.5.1 and later versions
    x = torch.randn(2, 2).npu()
    y = torch.randn(2, 2).npu()
    z = x.mm(y)
    print(z)
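
The NPU tensors above behave like ordinary torch tensors. As a reference point, here is the same 2x2 matrix multiply in plain Python (no torch or NPU required), useful for sanity-checking the semantics of `x.mm(y)`:

```python
# Reference implementation of a 2x2 matrix multiply in plain Python.
def mm(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [
        [sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
        for i in range(rows)
    ]

x = [[1.0, 2.0], [3.0, 4.0]]
y = [[5.0, 6.0], [7.0, 8.0]]
print(mm(x, y))  # [[19.0, 22.0], [43.0, 50.0]]
```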

    User Manual

    Refer to API of Ascend Extension for PyTorch for more detailed information.

    PyTorch and Python Version Matching Table

    [Table: PyTorch version and matching Python versions]

    Pipeline Status

    Because upstream and downstream are developed asynchronously, incompatible upstream modifications may make some torch_npu functions unavailable (this affects only the upstream and downstream development branches, not stable branches). We therefore built a set of daily tasks that detect relevant issues in time and fix them within 48 hours under normal circumstances, providing users with the latest features and stable quality.

    [Table: CANN version (docker image), upstream branch, downstream branch, period, status]

    Suggestions and Communication

    Everyone is welcome to contribute to the community. If you have any questions or suggestions, submit a GitHub issue; we will reply as soon as possible. Thank you very much.

    Branch Maintenance Policies

    The version branches of AscendPyTorch have the following maintenance phases:

    • Development (6-12 months): Develop new features and fix issues, with regular releases of new versions. Different strategies are adopted for different PyTorch versions: regular branches have a 6-month development cycle, and long-term support branches have a 12-month cycle.
    • Maintained (1 year / 3.5 years): Regular release branches are maintained for 1 year; long-term support branches are maintained for 3.5 years. Only major issues are fixed, no new features are incorporated, and patch versions are released based on the impact of the fixed bugs.
    • End Of Life (EOL): No modifications are accepted on the branch.

    PyTorch Maintenance Policies

    [Table: PyTorch version, maintenance policy, status, launch date, subsequent status, EOL date]