Scale Python across 1,000 computers in one second, using one line of code.
Burla is a package with only one function. Here's how it works:
```python
from burla import remote_parallel_map

my_inputs = list(range(1000))

def my_function(x):
    print(f"I'm running on my own separate computer in the cloud! #{x}")

remote_parallel_map(my_function, my_inputs)
```
This runs `my_function` on 1,000 VMs in the cloud, in one second:
Define any hardware at runtime; Burla scales up to 10,000 CPUs in a single function call.
Load data in parallel from the cloud storage bucket mounted to every worker: /workspace/shared
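Because every worker sees the same bucket at that path, each one can read its share of the data with ordinary file I/O. Below is a minimal sketch of concurrent loading; `load_files` is a hypothetical helper written for this example, not part of Burla's API:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def load_files(directory: str) -> dict[str, bytes]:
    """Read every file in `directory` concurrently; return {filename: contents}."""
    paths = [p for p in Path(directory).iterdir() if p.is_file()]
    with ThreadPoolExecutor(max_workers=32) as pool:
        contents = pool.map(Path.read_bytes, paths)
    return {p.name: data for p, data in zip(paths, contents)}

# Inside a Burla worker, this might be called as:
#   data = load_files("/workspace/shared")
```

Threads work well here because the work is I/O-bound: each read spends most of its time waiting on the network-backed filesystem.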
This creates a pipeline like:
Monitor progress in the dashboard:
Cancel bad runs, filter logs to watch individual inputs, or watch output files appear in the storage UI.
How it works:
With Burla, running code in the cloud feels the same as coding on your laptop:
When functions are run with remote_parallel_map:
- Anything they print appears locally (and inside Burla's dashboard).
- Any exceptions are thrown locally.
- Any packages or local modules they use are (very quickly) cloned on remote machines.
- Code starts running in under one second, even with millions of inputs or thousands of machines!
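These semantics can be illustrated with a local stand-in. `toy_parallel_map` below is an in-process toy written for this sketch, not Burla's implementation (which runs each call on a separate VM); it only mirrors the contract that results come back to the caller and exceptions are re-raised locally:

```python
def toy_parallel_map(func, inputs):
    # In-process stand-in: like remote_parallel_map, results are returned to
    # the caller, and any exception raised inside `func` surfaces locally.
    return [func(x) for x in inputs]

def double_positive(x):
    if x < 0:
        raise ValueError(f"bad input: {x}")
    return x * 2

print(toy_parallel_map(double_positive, [1, 2, 3]))  # [2, 4, 6]

try:
    toy_parallel_map(double_positive, [1, -1])
except ValueError as err:
    print(err)  # the worker's exception is re-raised locally: bad input: -1
```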
Features:
📦 Automatic Package Sync
Burla automatically (and very quickly) clones your Python packages in every remote machine where your code runs.
🐳 Custom Containers
Easily run code in any Docker container.
Public or private: just paste an image URI in the settings, then hit start!
📁 Network Filesystem
Need to get big data into/out of the cluster? Burla automatically mounts a cloud storage bucket to ./shared in every container.
⚙️ Variable Hardware Per-Function
The func_cpu and func_ram arguments let you assign more hardware to some functions and less to others.
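A sketch of how this might look at a call site. Only the `func_cpu` and `func_ram` argument names come from the text above; the pipeline, function names, and hardware values are illustrative, and `remote_parallel_map` is passed in as a parameter so the sketch stays readable without Burla installed:

```python
def run_pipeline(remote_parallel_map, preprocess, train_model, inputs):
    # Light step: small hardware per call (values are arbitrary examples).
    features = remote_parallel_map(preprocess, inputs, func_cpu=2, func_ram=4)
    # Heavy step: 32 CPUs / 128 GB RAM per call, in the same pipeline.
    return remote_parallel_map(train_model, features, func_cpu=32, func_ram=128)
```

The point is that hardware is chosen per function call, so one pipeline can mix cheap fan-out steps with a few expensive ones.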