Burla is a library for running Python functions on (lots of) computers in the cloud.

Quickstart:

  1. To install, run: pip install burla

  2. To create an account or log in, run: burla login

  3. Click the "start cluster" button at cluster.burla.dev

  4. Once booted, try the following basic example:

from burla import remote_parallel_map

my_inputs = list(range(100))

def my_function(my_input):
    print(f"processing input #{my_input}")
    return my_input * 2
    
# run my_function on every input, in parallel, in the cloud:
results = remote_parallel_map(my_function, my_inputs)

print(f"return values: {list(results)}")

What is Burla:

Burla is kind of like AWS Lambda, except it:

  • deploys code in seconds

  • is invoked like a normal local python function

  • lets you run code on any hardware, and change it on the fly / per request (see the sketch after this list)

  • lets you run code in any custom docker/OCI container

  • has no limit on runtime (Lambda has a 15-minute limit)

  • is open-source, and designed to be self-hosted
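
For example, hardware can be selected per request. Below is a minimal sketch of that idea; the func_cpu and func_ram keyword arguments are assumptions about the client API, not confirmed here, so check the current Burla docs before relying on them:

from burla import remote_parallel_map

def my_function(my_input):
    return my_input * 2

# hypothetical per-request resource selection; func_cpu / func_ram are
# assumed keyword names, not confirmed Burla API:
results = remote_parallel_map(
    my_function,
    list(range(100)),
    func_cpu=4,   # assumed: CPUs reserved for each function call
    func_ram=16,  # assumed: GB of RAM reserved for each function call
)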

To use Burla you must have a cluster running that the client knows about. Currently, the library is hardcoded to call only our free public cluster (cluster.burla.dev). Right now, this cluster is configured to run 16 nodes, each with 32 CPUs & 128 GB of RAM.

Burla clusters are multi-tenant: one cluster can run many jobs from separate users. Nodes in a Burla cluster are single-tenant: your job will never share a machine with another job.

Components / How it works:

Burla's major components are split across 4 separate GitHub repositories.

  1. Burla: the Python package (the client).

  2. main_service: the service representing a single cluster; manages nodes and routes requests to node_services.

  3. node_service: the service running on each node; manages containers and routes requests to container_services.

  4. container_service: the service running inside each container; executes user-submitted functions.
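
Putting these together, a call to remote_parallel_map flows from the Burla client to the main_service, which routes it to a node_service, which routes it to a container_service, where your function finally runs.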
