
Commit

remove build from src
Signed-off-by: Richard Liu <[email protected]>
richardsliu committed Nov 5, 2024
1 parent 4445657 commit b3a6e72
Showing 1 changed file with 6 additions and 30 deletions.
36 changes: 6 additions & 30 deletions docs/source/getting_started/tpu-installation.rst
@@ -122,9 +122,14 @@ Install build dependencies:
.. code-block:: bash

    pip install -r requirements-tpu.txt
    VLLM_TARGET_DEVICE="tpu" python setup.py develop
    sudo apt-get install libopenblas-base libopenmpi-dev libomp-dev
Run the setup script:

.. code-block:: bash

    VLLM_TARGET_DEVICE="tpu" python setup.py develop
Provision Cloud TPUs with GKE
-----------------------------

@@ -152,35 +157,6 @@ Run the Docker image with the following command:
$ # Make sure to add `--privileged --net host --shm-size=16G`.
$ docker run --privileged --net host --shm-size=16G -it vllm-tpu
.. _build_from_source_tpu:

Build from source
-----------------

You can also build and install the TPU backend from source.

First, install the dependencies:

.. code-block:: console

    $ # (Recommended) Create a new conda environment.
    $ conda create -n myenv python=3.10 -y
    $ conda activate myenv

    $ # Clean up the existing torch and torch-xla packages.
    $ pip uninstall torch torch-xla -y

    $ # Install other build dependencies.
    $ pip install -r requirements-tpu.txt
Next, build vLLM from source. This will only take a few seconds:

.. code-block:: console

    $ VLLM_TARGET_DEVICE="tpu" python setup.py develop
.. note::

    Since TPU relies on XLA, which requires static shapes, vLLM bucketizes the possible input shapes and compiles an XLA graph for each different shape.
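The bucketization described in the note above can be illustrated with a short sketch. The bucket sizes and the round-up rule below are assumptions chosen for illustration, not vLLM's actual bucketing policy:

.. code-block:: python

    # Illustrative sketch of shape bucketization for XLA-style static shapes.
    # The bucket sizes and rounding rule are hypothetical, not vLLM's policy.

    BUCKETS = [16, 32, 64, 128, 256, 512, 1024, 2048]

    def bucketize(seq_len: int) -> int:
        """Round a sequence length up to the smallest bucket that fits it.

        Each distinct bucket corresponds to one compiled XLA graph, so inputs
        are padded to a bucket size instead of compiling one graph per length.
        """
        for bucket in BUCKETS:
            if seq_len <= bucket:
                return bucket
        raise ValueError(f"sequence length {seq_len} exceeds largest bucket")

    # Many raw lengths collapse onto a few buckets, so few graphs are compiled.
    lengths = [7, 30, 33, 100, 500]
    print([bucketize(n) for n in lengths])  # [16, 32, 64, 128, 512]

Padding to a small, fixed set of shapes trades some wasted computation on the padded positions for a bounded number of XLA compilations.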
