Effortlessly scale your toughest problems
Modern workloads like deep learning and hyperparameter tuning are compute-intensive and require distributed or parallel execution. Ray makes it effortless to parallelize single-machine code — go from a single CPU to multi-core, multi-GPU, or multi-node with minimal code changes.
Accelerate your PyTorch and TensorFlow workloads with Ray Train, a more resource-efficient and flexible distributed execution framework powered by Ray.
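A rough sketch of what that setup looks like with Ray Train's PyTorch integration; the training function body is a placeholder, the worker count is illustrative, and the module paths follow recent Ray releases:

```python
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_func():
    # Your ordinary PyTorch training loop goes here; Ray Train wraps the
    # model and data loaders so the loop runs across distributed workers.
    ...

# Illustrative scaling: 4 GPU workers — change the scale with one line.
trainer = TorchTrainer(
    train_func,
    scaling_config=ScalingConfig(num_workers=4, use_gpu=True),
)
result = trainer.fit()
```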
Accelerate your hyperparameter search workloads with Ray Tune. Find the best model and reduce training costs by using the latest optimization algorithms.
Deploy your machine learning models at scale with Ray Serve, a Python-first, framework-agnostic model serving library.
Scale reinforcement learning (RL) with RLlib, a framework-agnostic RL library that ships with 30+ cutting-edge RL algorithms including A3C, DQN, and PPO.
General Python apps
Easily build out scalable, distributed systems in Python with simple and composable primitives in Ray Core.
Scale data loading, writing, conversions, and transformations in Python with Ray Datasets.
Powered by Ray
Companies of all sizes and stripes are scaling their most challenging problems with Ray. Recommendation engines, ecosystem restoration, large-scale earth science, automated traffic monitoring, flying boats — these are just a few of the many use cases that engineers, scientists, and researchers like you are tackling with Ray.
Amazon performs petabyte-scale table management on their S3-based data lake with Ray, Arrow, and Parquet to deliver mission-critical insights to business units across the company.
What workload are you scaling with Ray?
Supercharge your Ray journey with Anyscale
Anyscale is a managed cloud offering — from the creators of the Ray project — to create, run and manage your Ray workload. If you or your organization prefers the speed and convenience of a managed service over self-managing clusters and the infra they live on, this might be for you. And if not, that’s fine too.