Process data in your database quickly.

A way to process database rows in parallel by splitting work into ID ranges.

If your table has millions of rows, processing them in one long sequential query loop is slow.

A simple, faster pattern is:

  1. split rows into many ID ranges

  2. process each range in parallel

  3. combine range results

This pattern works best with a numeric column you can split into ranges, such as an indexed id column.
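As a tiny illustration of the idea (the ID bounds and the "work" done per range are made up for this sketch), here is what splitting an ID space into ranges and combining per-range results looks like:

```python
# Four non-overlapping ID ranges covering ids 1..10_000.
ranges = [(1, 2_500), (2_501, 5_000), (5_001, 7_500), (7_501, 10_000)]

# Pretend "processing" a range just counts the IDs it covers.
def count_ids(id_range):
    start, end = id_range
    return end - start + 1

# In the real pattern each call runs in parallel on the cluster; because the
# ranges do not overlap, their combined results cover every ID exactly once.
total = sum(count_ids(r) for r in ranges)
print(total)  # 10000
```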

Before you start

Make sure you have already:

  1. installed Burla: pip install burla

  2. connected your machine: burla login

  3. started your cluster in the Burla dashboard

For this example, also install a PostgreSQL driver:

  1. pip install psycopg2-binary

Step 1: Decide your row ranges

Choose ranges that do not overlap, so every row is processed exactly once.
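One way to build the ranges is a small helper like the sketch below. The chunk size of 100,000 IDs is an arbitrary choice for illustration, not something Burla requires; pick a size that gives your cluster enough chunks to spread around.

```python
def make_id_ranges(min_id, max_id, chunk_size):
    """Split [min_id, max_id] into non-overlapping (start, end) pairs,
    inclusive on both ends, so no row falls into two ranges."""
    ranges = []
    start = min_id
    while start <= max_id:
        end = min(start + chunk_size - 1, max_id)
        ranges.append((start, end))
        start = end + 1
    return ranges

# e.g. IDs 1..1_000_000 in chunks of 100_000 -> 10 ranges
ranges = make_id_ranges(1, 1_000_000, 100_000)
```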

Step 2: Write one function that processes one range

Each function call opens its own database connection and handles one ID range.
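A sketch of such a function is below. The connection string, the table name (`events`), and the column being summed (`amount`) are placeholders; substitute your own details and query.

```python
def process_range(id_range):
    """Process one (start_id, end_id) range, opening a fresh connection
    for this call so parallel calls never share a connection."""
    # Imported inside the function so the import happens on the worker
    # that actually runs this call.
    import psycopg2

    start_id, end_id = id_range
    # Placeholder DSN -- replace with your real connection details.
    conn = psycopg2.connect("postgresql://user:password@host:5432/mydb")
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT COALESCE(SUM(amount), 0) FROM events"
                " WHERE id BETWEEN %s AND %s",
                (start_id, end_id),
            )
            (range_total,) = cur.fetchone()
        return range_total
    finally:
        conn.close()
```

Passing `start_id` and `end_id` as query parameters (the `%s` placeholders) keeps the query safe and lets PostgreSQL use the index on `id`.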

Step 3: Run all ranges in parallel

Pass the list of ranges to remote_parallel_map.
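A minimal sketch of the fan-out is below. The range bounds and the body of `process_range` are placeholders; in practice `process_range` would be your per-range database function from Step 2.

```python
# Non-overlapping (start, end) ID ranges -- 100_000 IDs per chunk is an
# arbitrary choice for this sketch.
ranges = [(start, min(start + 99_999, 1_000_000))
          for start in range(1, 1_000_001, 100_000)]

def process_range(id_range):
    # Stand-in for your real per-range work (e.g. the SUM query from Step 2).
    start_id, end_id = id_range
    return end_id - start_id + 1  # placeholder result: number of IDs covered

def run_all_ranges():
    # Imported here so the range-building logic above can be inspected
    # without a running cluster; installed via `pip install burla`.
    from burla import remote_parallel_map

    # Each element of `ranges` becomes one parallel call to process_range;
    # the returned list has one result per range, in input order.
    return remote_parallel_map(process_range, ranges)
```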

Step 4: Combine the range results

Now compute one final total from all range outputs.
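Because the results list has one entry per range, combining is a single reduction. The sketch below assumes each range returned a numeric total; the example values are made up for illustration.

```python
# One value per ID range, in the same order as the input ranges
# (example values only).
range_results = [12, 7, 30]

# Combine the per-range totals into one final answer.
grand_total = sum(range_results)
print(grand_total)  # 49
```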

Step 5: Run a small test before the full job

Always test first with a small ID window.

After small tests succeed, run your full range list.
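One way to stage this, sketched with a stand-in for the per-range function from Step 2 (the ID bounds are placeholders):

```python
def process_range(id_range):
    # Stand-in for the real per-range database function from Step 2.
    start_id, end_id = id_range
    return end_id - start_id + 1

def run(ranges):
    # Needs a running Burla cluster; installed via `pip install burla`.
    from burla import remote_parallel_map
    return remote_parallel_map(process_range, ranges)

# 1. Small test window first: a couple of tiny ranges.
small_ranges = [(1, 100), (101, 200)]
# results = run(small_ranges)

# 2. After the small test succeeds, submit the full range list.
full_ranges = [(s, s + 9_999) for s in range(1, 100_001, 10_000)]
# results = run(full_ranges)
```

A small window like this surfaces connection errors, bad SQL, and wrong range math in seconds instead of after a long full-table run.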