burla.remote_parallel_map
Run an arbitrary Python function on many remote computers at the same time.
See our documentation for a more detailed description of how to use remote_parallel_map.
Runs the provided function_ on each item in inputs at the same time, each on a separate CPU. If more inputs are provided than there are CPUs, they are queued and processed sequentially on each worker.
If the provided function_ raises an exception, the exception, including its stack trace, is re-raised on the client machine.
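As a minimal sketch of a call (the function and inputs here are made up for illustration; remote_parallel_map is imported from the burla package):

```python
from burla import remote_parallel_map

# Hypothetical example: square 100 numbers, each call running on its own CPU.
def my_function(x):
    return x * x

results = remote_parallel_map(my_function, list(range(100)))
print(len(results))  # 100 outputs, returned in no particular order
```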
Parameters

| Name | Type | Description |
| --- | --- | --- |
| function_ | Callable | Python function. Must have a single input argument, e.g. function_(inputs[0]) does not raise an exception. |
| inputs | List[Any] | Iterable of elements passable to function_. |
| func_cpu | int | (Optional) Number of CPUs made available to every instance of function_. The maximum possible value is determined by your cluster's machine type. |
| func_ram | int | (Optional) Amount of RAM (in GB) made available to every instance of function_. The maximum possible value is determined by your cluster's machine type. |
| background | bool | (Optional) If True, remote_parallel_map returns as soon as your inputs and function have been uploaded; the job continues to run independently in the background. |
| spinner | bool | (Optional) Set to False to prevent the status indicator/spinner from being displayed. |
| generator | bool | (Optional) Set to True to return a Generator instead of a List. The generator yields outputs as they are produced, instead of all at once. |
| max_parallelism | int | (Optional) Maximum number of function_ instances allowed to run at the same time. Defaults to the number of available CPUs divided by func_cpu. |
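The optional parameters above are passed as keyword arguments. The sketch below uses made-up values; the practical maximums for func_cpu and func_ram depend on your cluster's machine type:

```python
from burla import remote_parallel_map

def my_function(x):
    return x * x

# Example values only; adjust to what your cluster's machine type allows.
results = remote_parallel_map(
    my_function,
    list(range(1000)),
    func_cpu=2,          # CPUs available to each function_ instance
    func_ram=8,          # GB of RAM available to each function_ instance
    spinner=False,       # hide the status indicator
    max_parallelism=50,  # at most 50 concurrent function_ instances
)
```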
Returns

| Type | Description |
| --- | --- |
| List or Generator | List of objects returned by function_, in no particular order. If generator=True, returns a generator yielding objects returned by function_ in the order they are produced. |
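For example, with generator=True the outputs can be consumed as they are produced rather than all at once (sketch with a made-up function):

```python
from burla import remote_parallel_map

def my_function(x):
    return x * x

# Outputs are yielded as they finish, in the order they are produced.
for output in remote_parallel_map(my_function, list(range(100)), generator=True):
    print(output)
```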
Questions? Email jake@burla.dev. We're always happy to talk.