Welcome to the official documentation for the Ray ecosystem. Access reference guides, tutorials, quick start examples, code snippets, and more to get started on and advance your Ray journey.
Ray Core provides simple primitives for building distributed Python applications. It’s great for parallelizing single-machine Python applications with minimal code changes. Built on top of Ray Core is a rich ecosystem of high-level libraries and frameworks for scaling specific workloads like reinforcement learning and model serving.
Scale reinforcement learning
Scale hyperparameter tuning
Scale deep learning
Scale model serving
Scale general Python apps
Scale data loading and collection
O'Reilly Learning Ray Book
Get your free copy of early release chapters of Learning Ray, the first and only comprehensive book on Ray and its ecosystem, authored by members of the Ray engineering team.
Deployment and installation
Ray can be installed on a laptop, a multi-GPU machine, or a multi-node cluster. Installation is easy, and you can change where your Ray programs run by setting a single environment variable, without touching your code.
Install Ray on a laptop or single machine
Launch a multi-node Ray cluster on AWS, GCP, Azure, Kubernetes, and more.
Managed service (Anyscale)
Run and manage your Ray programs on fully managed clusters.
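As a sketch of the environment-variable toggle, assuming the `RAY_ADDRESS` variable (which `ray.init()` reads when no address is passed explicitly; the host and port below are placeholders for your own cluster):

```shell
# Run locally: with no address set, ray.init() starts a local Ray runtime.
python my_script.py

# Run on an existing cluster: point RAY_ADDRESS at its head node.
RAY_ADDRESS="ray://head-node-host:10001" python my_script.py
```

The application code stays the same in both cases; only the environment changes.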
Beyond the native libraries, Ray integrates directly with a rich ecosystem of libraries and frameworks. Scale your workloads with minimal code changes by using Ray as the distributed compute substrate for your existing programs.
Want to integrate your library with Ray?
Are you a library developer considering building an integration with Ray? Check out this blog on the three common Ray library integration patterns.