Go to the Dask tab in the dashboard.

Specify the name of the cluster, select a running instance to create it in, and choose the number of Dask worker processes (nprocs) for each worker. Then click Create.

Once your cluster has finished provisioning, you can execute this notebook to scale up the number of workers. By default zero workers are created; only the scheduler is provisioned.

To use the Saturn Dask cluster from within Jupyter, first instantiate a cluster using SaturnCluster from dask-saturn, which should already be in your base environment. If it is not, open a terminal and install it with pip install dask-saturn.

from dask_saturn import SaturnCluster
cluster = SaturnCluster()

You should see a cluster widget above that you can use to adjust the number of workers.


Instead of using the GUI, you can call .scale and .adapt programmatically to set the number of workers.
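Both methods are part of the standard Dask cluster interface; the worker counts below are just illustrative. Assuming the `cluster` object created above:

```python
# Fixed-size cluster: ask for exactly 3 workers.
cluster.scale(3)

# Or let the scheduler add and remove workers with load,
# staying between 1 and 5 workers.
cluster.adapt(minimum=1, maximum=5)
```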


The external dashboard link should show up in the GUI and is also accessible on the cluster object:
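For example, `dashboard_link` is the standard attribute on Dask cluster objects; assuming the `cluster` from above:

```python
# Print the URL of the Dask diagnostic dashboard
print(cluster.dashboard_link)
```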


Once the cluster is instantiated, we can connect a Dask Client to it and use it like any other Dask cluster.

from distributed import Client
import dask.array as da

client = Client(cluster)

# Build a lazy task graph: a 10000 x 10000 random array in 1000 x 1000 chunks
x = da.random.random((10000, 10000), chunks=(1000, 1000))

y = x + x.T
z = y[::2, 5000:].mean(axis=1)

# Nothing executes until you request a result, e.g. with z.compute()

When you are done with the cluster, you can close it from within the notebook. This will delete all running pods.
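A close call, assuming the `client` and `cluster` objects from above (`close()` is the standard Dask API on both):

```python
client.close()   # disconnect the client
cluster.close()  # shut down the scheduler and workers
```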

