# Home

## Scale Python across 1000 CPUs or GPUs in 1 second.

Burla manages the compute infrastructure inside your cloud, saving time and boosting efficiency.\
It can scale any workload: vector embeddings, inference, genomic pipelines, and much more.

Burla only has one function:

```py
from burla import remote_parallel_map

my_inputs = list(range(1000))

def my_function(x):
    print(f"[#{x}] running on separate computer")

remote_parallel_map(my_function, my_inputs)
```

This example runs `my_function` on 1000 VMs in less than one second:

<figure><img src="/files/Zq7l5LD6L5Jtm3yTWe1v" alt=""><figcaption></figcaption></figure>

## Stop gluing cloud services together. It's 2026: self-hosted data infrastructure is trivial now.

Simply define the hardware or container you need inside your code, next to the function that needs it.\
Burla can scale a single function call up to 10,000 CPUs or thousands of GPUs, running in any container.

This code:

```python
remote_parallel_map(process, [...], image="rocker/geospatial:latest")
remote_parallel_map(aggregate, [...], func_cpu=64)
remote_parallel_map(predict, [...], func_gpu="A100")
```

Creates a data pipeline like:

<figure><img src="/files/lnQKTjYvg7BRy3fm93J6" alt=""><figcaption></figcaption></figure>

## Infra that manages itself is over twice as efficient.

Burla vertically scales the hardware available to each function call, live, while the program is running.\
This frequently more than doubles compute efficiency and eliminates out-of-memory errors.

<figure><img src="/files/IErsLqmG2xEPpidC86BW" alt=""><figcaption></figcaption></figure>

```python
remote_parallel_map(..., func_ram="dynamic", func_cpu="dynamic")
```

Few workloads use 100% of their CPU the entire time they run; Burla automatically puts the spare capacity to use.\
Read [our blog post](/blog/dynamic-hardware.md) to learn more about dynamic hardware.

## Remote development, local feel.

Running code in the cloud shouldn't feel any different from running code locally.

```python
return_values = remote_parallel_map(my_function, my_inputs)
```

When a Python function is run using `remote_parallel_map`, it runs in the cloud but:

* Anything it prints appears locally (and inside the dashboard).
* Any exceptions are thrown locally (see the sketch after this list).
* Any packages or local modules are (very quickly) cloned on all remote machines.
* Code starts running in under one second, even with millions of inputs or thousands of machines!
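
For example, an exception raised on a remote machine surfaces in your local session like any other Python error. A minimal sketch, assuming the original exception type is preserved when it is re-raised locally:

```python
from burla import remote_parallel_map

def risky(x):
    # Fails on one input to show a remote error surfacing locally.
    if x == 7:
        raise ValueError(f"bad input: {x}")
    return x * 2

try:
    results = remote_parallel_map(risky, list(range(10)))
except ValueError as e:
    # Raised on a remote machine, caught here as if the function had run locally.
    print(f"caught: {e}")
```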

### Monitor progress in the dashboard:

Cancel bad runs, filter logs to watch individual inputs, or monitor output files live in the Filesystem UI.

<figure><img src="/files/Cr77N8Bm7vPWDhQvDJWn" alt=""><figcaption></figcaption></figure>

## Pricing:

Prices are **not** based on compute consumption.\
Burla is $100/month per user for enterprises and free for everyone else.

<figure><img src="/files/114tCPe3cTBLQah7CKmQ" alt=""><figcaption></figcaption></figure>

## Run your first 1000-CPU job in 2 minutes:

Zero setup required. Sign in, open the Colab, and follow along. It only takes two steps!

1. [Sign in](https://login.burla.dev/) using your Google or Microsoft account.
2. Run the quickstart in this Google Colab notebook:

{% embed url="<https://colab.research.google.com/drive/1msf0EWJA2wdH4QG5wPX2BncSEr5uVufv?usp=sharing>" %}

***

Questions?\
[Schedule a call](http://cal.com/jakez/burla), or email **<jake@burla.dev>**. We're always happy to talk.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.burla.dev/readme.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.
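
For example, a minimal sketch in Python; the question text is a placeholder, and any HTTP client works:

```python
import requests
from urllib.parse import quote

# Placeholder question; write your own, specific and self-contained.
question = "How do I attach a GPU to a single function call?"

response = requests.get(f"https://docs.burla.dev/readme.md?ask={quote(question)}")
print(response.text)  # a direct answer plus relevant excerpts and sources
```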

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
