Ray libraries

Ray Core provides simple primitives for building distributed Python applications. It’s great for parallelizing single-machine Python applications with minimal code changes. Built on top of Ray Core is a rich ecosystem of high-level libraries and frameworks for scaling specific workloads like reinforcement learning and model serving.
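For example, turning an ordinary Python function into a distributed task takes a single decorator. The sketch below uses only the core ray.init, @ray.remote, and ray.get APIs; the function and its arguments are illustrative.

    import ray

    ray.init()  # start Ray on the local machine

    @ray.remote
    def square(x):
        # An ordinary Python function, turned into a parallel task by @ray.remote.
        return x * x

    # Launch ten tasks in parallel and gather the results.
    futures = [square.remote(i) for i in range(10)]
    print(ray.get(futures))  # [0, 1, 4, ..., 81]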

RLlib: Scale reinforcement learning
Ray Tune: Scale hyperparameter tuning (see the example after this list)
Ray Train: Scale deep learning
Ray Serve: Scale model serving
Ray Core: Scale general Python apps
Ray Datasets: Scale data loading and collection
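As one example of these libraries, Ray Tune runs many trial configurations of a training function in parallel. The sketch below is minimal and uses the older tune.run API (newer releases use tune.Tuner); the objective function and search space are illustrative.

    from ray import tune

    def objective(config):
        # Stand-in for a real training step; reports one metric per trial.
        score = (config["x"] - 3) ** 2
        tune.report(score=score)

    analysis = tune.run(
        objective,
        config={"x": tune.grid_search([0, 1, 2, 3, 4])},
        metric="score",
        mode="min",
    )
    print(analysis.best_config)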

O'Reilly Learning Ray Book

Get your free copy of early-release chapters of Learning Ray, the first and only comprehensive book on Ray and its ecosystem, authored by members of the Ray engineering team.

Deployment and installation

Ray can be installed on a laptop, a multi-GPU machine, or a cluster with multiple nodes. Installation is easy, and you can toggle where your Ray programs run by changing a single environment variable or one line of code.
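For example, the same script can run on a laptop or against a remote cluster without modification. In the sketch below the cluster address is a placeholder, and the behavior described assumes the standard RAY_ADDRESS handling in ray.init.

    import ray

    # With no address and no RAY_ADDRESS set, Ray starts locally on this machine.
    # If RAY_ADDRESS is set (for example to "ray://<head-node-ip>:10001"),
    # the same call connects to that cluster instead, with no code changes.
    ray.init()

    @ray.remote
    def ping():
        return "pong"

    print(ray.get(ping.remote()))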

Single node: Install Ray on a laptop or a single machine.
Ray Cluster: Launch a multi-node Ray cluster on AWS, GCP, Azure, Kubernetes, and many more.
Managed service (Anyscale): Run and manage your Ray programs on fully managed clusters.

Integrations

Beyond the native libraries, Ray integrates directly with a rich ecosystem of libraries and frameworks. Scale your workloads with minimal code changes by using Ray as the distributed compute substrate for your existing programs.

PyTorch, XGBoost, Spark, TensorFlow, scikit-learn, Dask
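As one illustration of this pattern, existing scikit-learn code that parallelizes with joblib can be pointed at Ray through Ray's joblib backend. The sketch below is a minimal example and assumes scikit-learn and joblib are installed; the model and search space are illustrative.

    import joblib
    from ray.util.joblib import register_ray
    from sklearn.datasets import load_digits
    from sklearn.model_selection import RandomizedSearchCV
    from sklearn.svm import SVC

    # Register Ray as a joblib backend, then run an unmodified scikit-learn search on it.
    register_ray()
    digits = load_digits()
    search = RandomizedSearchCV(
        SVC(),
        param_distributions={"C": [1, 10, 100], "gamma": [0.001, 0.01, 0.1]},
        n_iter=5,
        n_jobs=-1,
    )
    with joblib.parallel_backend("ray"):
        search.fit(digits.data, digits.target)
    print(search.best_params_)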

Want to integrate your library with Ray?

Are you a library developer considering building an integration with Ray? Check out this blog post on three common Ray library integration patterns.
