Getting Started

Quickstart (managed-instance)

Don't have a managed instance? Send us an email and we'll get you one ASAP! ([email protected])

You should have received an email with a link to your custom deployment. Something like: https://<yourname>.burla.dev

  1. Navigate to your custom link in the browser, and sign in using the email we've authorized.

  2. Hit the ⏻ Start button to boot 1000 CPUs! (should take 1-2 min)

  3. While booting, run the following in your local terminal:

    1. pip install Burla

    2. burla login (connects your computer to the cluster)

  4. Once booted, run some code!

# Each call to `compute_square` runs in parallel in its own separate container.
# That's why it finishes quickly even though each function call takes ~1 second.

from time import sleep
from burla import remote_parallel_map

def compute_square(x):

    sleep(1)  # <- pretend this is some intense math!

    print(f"Squaring {x} on a separate computer in the cloud!")
    return x * x

squared_numbers = remote_parallel_map(compute_square, list(range(1000)))
  5. Celebrate 🎉🎉🎉🎉 You just ran Python code on 1000 CPUs in 1000 separate containers. That's not something many people know how to do!
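If you'd like a feel for what `remote_parallel_map` is doing before your cluster boots, here's a rough local analogy using only the Python standard library. This is purely illustrative: the thread pool stands in for Burla's fan-out, whereas Burla actually runs each call in its own container on remote machines.

```python
# Local sketch: run `compute_square` concurrently with a thread pool.
# Burla does the fan-out across cloud containers instead of local threads.
from concurrent.futures import ThreadPoolExecutor
from time import sleep

def compute_square(x):
    sleep(0.01)  # stand-in for some intense math
    return x * x

with ThreadPoolExecutor(max_workers=32) as pool:
    squared_numbers = list(pool.map(compute_square, range(100)))

print(squared_numbers[:5])  # results come back in input order
```

Because the calls overlap, the whole batch finishes far faster than running the 100 calls sequentially, which is the same effect the quickstart demonstrates at cluster scale.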


Installation (self-hosted)

Self-Hosted Burla is currently exclusive to Google Cloud.

We fully intend to support AWS, Azure, and on-prem deployments, but don't yet. We offer fully-managed Burla deployments for those not on GCP.

1. Ensure gcloud is installed and set up:

If you haven't already, install the gcloud CLI, and log in using application-default credentials.

Ensure gcloud is pointing at the project you wish to install Burla inside:

  • To view your current gcloud project run: gcloud config get project

  • To change your current gcloud project run: gcloud config set project <NEW-PROJECT-ID>

2. Run the burla install command:

Run pip install burla then run burla install.

What permissions does my Google Cloud account need to run burla install?

If you don't have permissions, run the command anyway, and it will tell you which ones you need!

To run burla install you'll need permission to run these gcloud commands:

  • gcloud services enable ...

  • gcloud compute firewall-rules create ...

  • gcloud secrets create ...

  • gcloud firestore databases create ...

  • gcloud run deploy ...

I've listed the exact required permissions for the burla install command in its CLI doc.

3. Start a machine and run the quickstart!

  1. Open your new cluster dashboard at the link provided by the burla install command.

  2. Hit the ⏻ Start button in the dashboard to turn the cluster on. By default this starts one 4-CPU node. If inactive for more than 5 minutes, this node will shut itself off.

  3. While the cluster is booting, run burla login to connect your local machine to your cluster.

  4. Run the example!

from burla import remote_parallel_map

def my_function(my_input):
    print("I'm running on a remote computer in the cloud!")
    
remote_parallel_map(my_function, [1, 2, 3])


Installation (fully-managed)

How to use Burla without a Google Cloud account.

Fully-Managed deployments are manually created by us on an individual basis.

Instructions:

  1. Email [email protected] or fill out the form on the front page of this website.

  2. You'll get an email with your custom Burla instance (https://<yourname>.burla.dev) within a day.

FAQ:

Security:

Each managed Burla deployment is created in a completely separate Google Cloud project (VPC). Burla deployments allow access only to users on the authorized-user list in your settings tab.

How do I pay?

We piggyback off of existing Google Cloud billing infrastructure to track costs coming from your instance (your Google Cloud project) and simply forward you the bill using Stripe. We're happy to give you direct access to view Google Cloud billing within your project.

How much does it cost?

We charge a simple 2x multiple of whatever charges originate from your instance according to Google Cloud billing; this equates to about $0.08 per CPU-hour.
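As a back-of-envelope check, here's the arithmetic using the ~$0.08 per CPU-hour figure above (the cluster size and runtime below are just example values; actual bills come from Google Cloud's metering):

```python
# Rough cost estimate at ~$0.08 per CPU-hour (the figure quoted above).
RATE_PER_CPU_HOUR = 0.08

def estimated_cost(cpus, hours):
    """Estimated charge in dollars for `cpus` CPUs running for `hours`."""
    return cpus * hours * RATE_PER_CPU_HOUR

# e.g. a 1000-CPU cluster running for 15 minutes:
print(f"${estimated_cost(1000, 0.25):.2f}")  # → $20.00
```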


Questions? Schedule a call with us, or email [email protected]. We're always happy to talk.