Ray Core
A faster, simpler approach to parallel Python
Ray Core is the foundation of the entire Ray ecosystem. With simple primitives, it lets every engineer easily build scalable, distributed systems in Python in a cloud-agnostic way.

Why Ray Core?
Learn why thousands of engineers and scientists use Ray to build scalable, distributed Python applications.
Simple primitives
Flexibly compose distributed applications with tasks, actors, and objects in native Python code (see the actor sketch after the example below).
Multi-cloud
Run the same Ray code on any cloud — AWS, GCP, Azure — or even on-prem.
Dynamic scaling
Using the Ray Autoscaler, Ray Core can automatically scale up or down to smoothly handle changing compute load.
Massive scalability envelope
Ray Core can easily scale to thousands of cores, and is getting more scalable with every release.
Open community / ecosystem
With a vibrant dedicated community and rich ecosystem of integrations, fixes and best practices are easy to find.
Laptop -> Cluster with ease
With Ray Client, going from laptop to cluster is as easy as changing one variable, as the sketch below shows.
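A minimal sketch of this (the address below is a placeholder; substitute your own cluster's Ray Client endpoint):

import ray

# ray.init() with no arguments starts Ray locally. Pointing it at a
# "ray://" address connects to a running cluster via Ray Client instead.
# Replace <head-node-host> with your head node's hostname or IP;
# 10001 is the default Ray Client server port.
ray.init("ray://<head-node-host>:10001")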
Try it yourself
Install Ray with pip install ray and give this example a try.
# Approximate pi using random sampling. Generate x and y randomly between 0 and 1.
# If x^2 + y^2 < 1, the point is inside the quarter circle. Multiply by 4 to get pi.
import ray
from random import random
# Let's start Ray
ray.init()
SAMPLES = 1000000
# By adding the `@ray.remote` decorator, a regular Python function
# becomes a Ray remote function.
@ray.remote
def pi4_sample():
    in_count = 0
    for _ in range(SAMPLES):
        x, y = random(), random()
        if x*x + y*y <= 1:
            in_count += 1
    return in_count
# To invoke this remote function, use the `remote` method.
# This immediately returns an object ref (a future) and creates
# a task that will be executed on a worker process. `ray.get` retrieves the result.
future = pi4_sample.remote()
pi = ray.get(future) * 4.0 / SAMPLES
print(f'{pi} is an approximation of pi')
# Now let's do this 100,000 times.
# With regular Python this would take 11 hours;
# with Ray on a modern laptop, roughly 2 hours;
# on a 10-node Ray cluster, roughly 10 minutes.
BATCHES = 100000
results = []
for _ in range(BATCHES):
    results.append(pi4_sample.remote())
output = ray.get(results)
pi = sum(output) * 4.0 / BATCHES / SAMPLES
print(f'{pi} is a way better approximation of pi')
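
The example above uses tasks, the first of the three core primitives. Here is a minimal sketch of the other two, actors and objects; the Counter class and total function are made-up illustrations, not part of Ray's API:

import ray
ray.init()

# Adding `@ray.remote` to a class turns it into an actor: a stateful
# worker whose methods execute as tasks on a dedicated process.
@ray.remote
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

counter = Counter.remote()
# Actor method calls return object refs, just like remote functions do.
refs = [counter.increment.remote() for _ in range(5)]
print(ray.get(refs))  # [1, 2, 3, 4, 5]

# Objects: `ray.put` places a value in the shared object store and returns
# a ref. Pass the ref to a task and Ray resolves it to the value.
@ray.remote
def total(items):
    return sum(items)

data_ref = ray.put(list(range(10)))
print(ray.get(total.remote(data_ref)))  # 45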

Do more with Ray libraries
Learn how to scale the other components of your machine learning pipeline, such as training, data processing, and serving, just as easily with Ray's libraries.
O'Reilly Learning Ray Book
Get your free copy of early-release chapters of Learning Ray, the first and only comprehensive book on Ray and its ecosystem, authored by members of the Ray engineering team.
