Choosing a deployment platform
Nebari can be deployed on bare-metal HPC infrastructure, on any of the major public cloud providers, on a local machine, or on a pre-existing Kubernetes cluster. Review the options below to determine which best suits your needs.
- Cloud
- HPC
- Local deployment
- Pre-existing Kubernetes cluster
The cloud deployment of Nebari is the default option. It enables teams to build and maintain a cost-effective and scalable compute/data science platform in the cloud, using an Infrastructure as Code approach that streamlines the deployment and management of data science infrastructure.
If you are not sure which option to choose, a cloud installation is likely your best option. It is suitable for most use cases, especially if:
- You require scalable infrastructure
- You aim to have a production environment with GitOps enabled by default
- Your team does not have specific expertise in high-performance computing hardware, Kubernetes, Docker, and/or other cloud-native or scalable compute infrastructure technologies
Note: The cloud installation is based on Kubernetes, but neither knowledge of Kubernetes nor in-depth knowledge of the specific cloud provider is required.
Currently, Nebari supports Amazon Web Services (AWS), DigitalOcean, Google Cloud Platform (GCP), and Microsoft Azure.
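To give a feel for the Infrastructure as Code workflow, a cloud deployment typically comes down to exporting credentials for your provider and running a couple of Nebari CLI commands. The sketch below uses AWS as an example; the project name, domain, and exact flag spellings are illustrative, so follow the provider-specific how-to guide for the precise invocation.

```bash
# Cloud credentials are supplied through the provider's standard
# environment variables (AWS shown here; values are placeholders).
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"

# Generate a nebari-config.yaml for the chosen provider.
# Flag names here are illustrative -- see the AWS how-to guide.
nebari init aws --project-name my-nebari --domain-name nebari.example.com

# Review (and ideally commit) nebari-config.yaml, then deploy.
nebari deploy -c nebari-config.yaml
```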
Nebari HPC is an opinionated open source deployment of JupyterHub based on an HPC job scheduler (e.g., Slurm). Nebari HPC is a "distribution" of these packages, much like Debian and Ubuntu are distributions of Linux.
Note: Nebari HPC can be used on other distributed compute hardware, not just HPC hardware specifically. However, we anticipate that it will most often be used on HPC hardware.
The high-level goal of this distribution is to form a cohesive set of tools that enable:
- Environment management via conda and conda-store (see the sketch after this list)
- Monitoring of compute infrastructure and services
- Scalable and efficient compute via JupyterLab and Dask
- On-prem deployment of JupyterHub without requiring deep DevOps knowledge of the Slurm/HPC and Jupyter ecosystems
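As a concrete illustration of the environment-management bullet above, a team environment might start from a plain conda invocation like the one below (package choices are illustrative; in practice such environments are usually created and shared through conda-store).

```bash
# Create a shared analysis environment (package list is illustrative).
conda create --name team-analysis --channel conda-forge \
    python=3.11 jupyterlab dask distributed ipykernel

# Activate it and expose it as a Jupyter kernel so it appears in JupyterLab.
conda activate team-analysis
python -m ipykernel install --user --name team-analysis
```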
Nebari HPC should be your choice if:
- You have highly optimized code that requires highly performant infrastructure
- You have existing HPC infrastructure
- You expect that your projects will not exceed the resources/capabilities of your current infrastructure
For instructions on installing and deploying Nebari HPC, visit the How to install and setup Nebari HPC on bare metal machines section of the documentation.
Note: Although it is possible to deploy Nebari HPC in the cloud, it is not generally recommended due to potentially high costs. For more information, check out the base cost section of the docs.
A local deployment is recommended for testing and development of Nebari’s components due to its simplicity. Choose the local mode if:
- You want to test your Kubernetes cluster
- You have a local compute setup available
- You want to try out Nebari with a quick install for exploratory purposes, without setting up environment variables
You should choose another installation option, likely a cloud install, if you are starting from scratch (you have no compute clusters already in place) and want to stand up a production instance of Nebari.
For instructions on installing and deploying Nebari Local, please visit Deploying Nebari on a local machine.
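For a quick exploratory install, the local flow is a short sequence of CLI calls. The sketch below assumes the prerequisites described in that guide (such as Docker and the Nebari CLI) are already in place, and the flag spellings are illustrative.

```bash
# Install the Nebari CLI (pip shown here; conda works as well).
pip install nebari

# Generate a config for the local provider, which spins up a small
# Kubernetes-in-Docker cluster. Flag names are illustrative.
nebari init local --project-name nebari-local-test

# Deploy, explore, and tear everything down when you are done.
nebari deploy -c nebari-config.yaml
nebari destroy -c nebari-config.yaml
```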
Deploying on a pre-existing Kubernetes cluster is recommended if you are already using Kubernetes and want to run Nebari on your existing cluster.
For instructions on installing and deploying Nebari on an existing Kubernetes cluster, please visit How to install and setup Nebari on an existing Kubernetes infrastructure.
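Before pointing Nebari at an existing cluster, it is worth confirming that your kubeconfig targets the cluster you intend to use. The commands below are a generic sanity check; the Nebari-specific configuration for an existing cluster is covered in the guide linked above.

```bash
# Confirm which cluster and context your kubeconfig currently points at.
kubectl config current-context
kubectl get nodes

# Once nebari-config.yaml has been set up for the existing cluster
# (see the linked how-to guide), the deployment step is the usual one:
nebari deploy -c nebari-config.yaml
```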
Note: As of now, we have only tested this functionality for AWS, but we are continuously working on expanding to other cloud providers.
What's next?
For instructions on installing and deploying Nebari on a particular cloud provider, check out our handy how-to guides: