diff --git a/README.md b/README.md
index fafc70ee6c70875f25567773f7ec0e47ab2dc588..5c6ad65d45619eb50f6490ca0ecee5ad48a058b0 100644
--- a/README.md
+++ b/README.md
@@ -24,12 +24,45 @@ The kubeconfig for the remote Kubernetes cluster must be available in the user s
 Launch a new pod:
 
 After launch a new Jupyter kernel is created in the container and utilities required for remote execution are transferred to the pod. The image for the pod must run Jupyter at launch. This could be a Jupyter Docker stack image (see https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html) or an extension of those.
 
-Example with Pytorch CUDA 12 in namespace eidfxxxns on the GPU Service and pod name prefix `amy-`.
+Example with Pytorch CUDA 12 in namespace eidfxxxns on the GPU Service and pod name prefix `amy-`. By default, it requests one GPU of type `NVIDIA-A100-SXM4-40GB-MIG-1g.5gb`.
 
 ```
 %kube_exec launch -k /path/to/kube/config -ns eidfxxxns -pp amy- -i quay.io/jupyter/pytorch-notebook:cuda12-latest
 ```
 
+## Commands and options
+
+### `launch`
+
+Launch a new pod in the Kubernetes cluster.
+
+* `-k/--kubeconfig`: Path to the kubeconfig (default is `.kube/config`)
+* `-ns/--namespace`: Kubernetes namespace
+* `-i/--image`: Container image
+* `-pp/--pod-prefix`: Pod name prefix (a UUID is appended, optional)
+* `-g/--gpus`: Number of GPUs
+* `-gp/--gpu-product`: GPU product, currently available:
+  * `NVIDIA-A100-SXM4-40GB-MIG-1g.5gb` (default)
+  * `NVIDIA-A100-SXM4-40GB-MIG-3g.20gb`
+  * `NVIDIA-A100-SXM4-40GB`
+  * `NVIDIA-A100-SXM4-80GB`
+  * `NVIDIA-H100-80GB-HBM3`
+
+### `connect`
+
+Connect to an existing pod.
+
+* `-p/--pod-name`: Name of a running pod to connect to
+* `-ns/--namespace`: Kubernetes namespace
+* `-k/--kubeconfig`: Path to the kubeconfig
+
+### `list`
+
+List pods in the namespace.
+
+* `-ns/--namespace`: Kubernetes namespace
+* `-k/--kubeconfig`: Path to the kubeconfig
+
 ## Re-use container
 Connect to an existing container. A Jupyter server must be running on this container.
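The patch documents `connect` and `list` but only shows a worked example for `launch`. A usage sketch for the other two commands, built only from the options listed in the patch — the pod name `amy-<uuid>` is a placeholder for the full name that `launch` generates (prefix plus appended UUID) and that `list` reports:

```
%kube_exec list -k /path/to/kube/config -ns eidfxxxns
%kube_exec connect -k /path/to/kube/config -ns eidfxxxns -p amy-<uuid>
```

As with `launch`, both commands fall back to `.kube/config` when `-k` is omitted, so `-k` is only needed for a non-default kubeconfig location.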