Getting Started
Quickstart (managed-instance)
Log in using the email you submitted in the signup form.
Hit the ⏻ Start button to boot 1000 CPUs! (should take 1-2 min)
While booting, run the following in your local terminal:
pip install burla
burla login
(connects your computer to the cluster)
Once booted, run some code!
# Each call to `compute_square` runs in parallel in its own separate container.
# That's why it finishes quickly even though each function call takes ~1 second.
from time import sleep
from burla import remote_parallel_map

def compute_square(x):
    sleep(1)  # <- pretend this is some intense math!
    print(f"Squaring {x} on a separate computer in the cloud!")
    return x * x

squared_numbers = remote_parallel_map(compute_square, list(range(1000)))
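The return values come back from the cluster, so you can check them like any other Python object (a quick sketch; the ordering noted in the comments is an assumption, not a guarantee made by this guide):
# `squared_numbers` holds the return value of each call.
print(len(squared_numbers))   # 1000 results
print(squared_numbers[:5])    # e.g. [0, 1, 4, 9, 16] if results come back in input order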
Celebrate 🎉🎉🎉🎉 You just ran Python code on 1000 CPUs in 1000 separate containers. That's not something many people know how to do!
Quickstart (self-hosted)
1. Ensure gcloud is installed and set up:
If you haven't already, install the gcloud CLI and log in using application-default credentials.
Ensure gcloud is pointing at the project you wish to install Burla in:
To view your current gcloud project, run:
gcloud config get project
To change your current gcloud project, run:
gcloud config set project <NEW-PROJECT-ID>
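If you still need to authenticate, both login flows use standard gcloud commands (these are stock gcloud CLI commands, not Burla-specific):
gcloud auth login
gcloud auth application-default login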
2. Run the burla install command:
Run pip install burla, then run burla install.
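In a terminal, the full sequence is:
pip install burla
burla install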
See the install docs for more info regarding permissions.
3. Start a machine and run some code!
Use the Login button on this website to get to your new cluster dashboard.
Hit the ⏻ Start button in the dashboard to turn the cluster on. By default this starts one 4-CPU node. If inactive for >5 minutes this node will shut itself off.
While booting, run burla login to connect your local machine to your cluster.
Run the example below!
from burla import remote_parallel_map

def my_function(my_input):
    print("I'm running on a remote computer in the cloud!")

remote_parallel_map(my_function, [1, 2, 3])
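If your function returns values, you can collect them just like in the managed quickstart (double and doubled below are illustrative names, and the result ordering is an assumption rather than something this guide guarantees):
def double(my_input):
    return my_input * 2

doubled = remote_parallel_map(double, [1, 2, 3])
print(doubled)  # expected: [2, 4, 6], assuming results come back in input order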
Questions? Schedule a call with us, or email [email protected]. We're always happy to talk.