Run any Python function on 1000 computers in 1 second.

Iterate at the speed of thought. Not at the speed your Lambda function, ETL pipeline, or Kubernetes service takes to redeploy.

One Function, Endless Possibilities:

Orchestrate Data Pipelines

Develop in Remote Environments

Simplify AI Agent Development

How It Works:

Burla is an open-source platform for orchestrating parallel Python in the cloud. It only has one function:

from burla import remote_parallel_map

my_inputs = [1, 2, 3]

def my_function(my_input):
    print("I'm running on my own separate computer in the cloud!")
    return my_input
    
return_values = remote_parallel_map(my_function, my_inputs)
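
Here return_values collects the return value of each call, one entry per item in my_inputs. Without background=True (covered below), remote_parallel_map blocks until every call has finished.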

With Burla, running code in the cloud feels the same as coding locally:

  • Anything you print appears in your local terminal.

  • Exceptions raised in your code are re-raised on your local machine (see the sketch after this list).

  • Responses are quick: you can call a million simple functions in a couple of seconds.
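
For example, here's a minimal sketch of what that feels like in practice. might_fail and its ValueError are illustrative, assuming exceptions come back to your machine with their original type:

from burla import remote_parallel_map

def might_fail(x):
    if x == 2:
        raise ValueError(f"bad input: {x}")  # raised in the cloud...
    print(f"processing {x}")  # appears in your local terminal
    return x * 2

try:
    remote_parallel_map(might_fail, [1, 2, 3])
except ValueError as e:
    print(f"...caught on your laptop: {e}")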

Attach Big Hardware to Functions That Need It:

Zero config files: just simple arguments like func_cpu & func_ram.

from burla import remote_parallel_map
from xgboost import XGBClassifier

def train_model(hyper_parameters):
    # training_inputs / training_targets are assumed to be loaded elsewhere.
    model = XGBClassifier(n_jobs=64, **hyper_parameters)
    model.fit(training_inputs, training_targets)

remote_parallel_map(train_model, parameter_grid, func_cpu=64, func_ram=256)
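
parameter_grid above is just a list of hyperparameter dicts, one per training run. One hypothetical way to build it:

from itertools import product

param_space = {"max_depth": [4, 8], "learning_rate": [0.1, 0.3]}

# Expand the space into one dict per combination, e.g.
# {'max_depth': 4, 'learning_rate': 0.1}, {'max_depth': 4, 'learning_rate': 0.3}, ...
parameter_grid = [dict(zip(param_space, values)) for values in product(*param_space.values())]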

A Fast, Scalable Task Queue:

Queue up 10 million function calls and run them across thousands of containers. Our custom distributed task queue is incredibly fast, keeping hardware utilization high.
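
As a sketch (tiny_task is illustrative), a single call is all it takes to fan millions of function calls out across the cluster:

from burla import remote_parallel_map

def tiny_task(i):
    return i * i

# One blocking call queues and runs ten million tiny function calls.
results = remote_parallel_map(tiny_task, list(range(10_000_000)))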

Simple, Flexible Pipelines:

Nest remote_parallel_map calls to build simple, massively parallel pipelines. Use background=True to fire-and-forget a job, then monitor its progress from the dashboard.

from burla import remote_parallel_map

def process_record(record):
    # Pretend this does some math per-record!
    return record

def process_file(file):
    # Fan out again: each record in this file gets its own machine.
    # split_into_records & upload_results are placeholder helpers.
    results = remote_parallel_map(process_record, split_into_records(file))
    upload_results(results)

def process_files(files):
    remote_parallel_map(process_file, files, func_ram=16)

# [files] is a one-item input list, so process_files runs once, in the background.
remote_parallel_map(process_files, [files], background=True)
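
Because the outer call is fired with background=True, nothing comes back to your terminal; that's why, in this sketch, process_file uploads its own results instead of handing them back.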

Run Code in Any Docker Image, on Any Hardware:

Public or private, just paste a link to your image and hit start. Scale to 10,000 CPUs, terabytes of RAM, or 1,000 H100s; everything stays in your cloud.

Deploy Now with Just Two Commands:

(Burla is currently Google Cloud only!)

  1. pip install burla

  2. burla install
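
Note: burla install provisions the cluster inside your own Google Cloud project (hence the GCP-only caveat), so make sure your local gcloud credentials point at the project you want to use.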

See our Getting Started guide for more info.

Questions? Schedule a call, or email [email protected]. We're always happy to talk.