# Fleets

Fleets in Zerve provide an easy-to-use parallel processing feature. By leveraging the `spread()` function, you can launch parallel tasks with a single Python call. The `spread()` function takes a list as input and fans out one compute block per list element. After execution, the `gather()` function collects all results into a single list for downstream processing.
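Outside Zerve, the same fan-out/fan-in shape can be sketched with Python's standard library. The sketch below is an assumption-laden stand-in, not Zerve's API: `ThreadPoolExecutor` plays the role of `spread()`, collecting futures plays the role of `gather()`, and `process_item` is a placeholder for the work a single compute block would do.

```python
from concurrent.futures import ThreadPoolExecutor

def process_item(x):
    # Placeholder for the work one spread-out compute block would do.
    return x * x

items = [1, 2, 3, 4]

# Emulating spread(): fan out one task per list element.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(process_item, item) for item in items]
    # Emulating gather(): collect every result back into one list.
    results = [f.result() for f in futures]

print(results)  # [1, 4, 9, 16]
```

In Zerve itself, each list element would run in its own compute block rather than a thread, so the fan-out scales across infrastructure instead of a single process.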

### ML Pipeline in Fleets:

Parallel processing in Zerve's fleets can significantly enhance the efficiency of building machine learning (ML) pipelines. Once data processing is complete, multiple algorithms can be applied simultaneously to different subsets of data. This approach allows data scientists to explore various ML models in parallel, quickly identifying which algorithms yield the best results for a given dataset.
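As a toy illustration of spreading a list of algorithms, the sketch below scores two stand-in "models" (a mean and a median predictor) concurrently. The data, the predictor names, and the thread-based emulation are all illustrative assumptions; in a real pipeline each list element would select and fit an actual estimator inside its own compute block.

```python
from concurrent.futures import ThreadPoolExecutor
import statistics

# Toy target values; a real pipeline would use the processed dataset.
y_train = [1.0, 2.0, 2.0, 10.0]

def fit_and_score(algorithm):
    # Stand-ins for real model fits; each returns (name, mean absolute error).
    predictors = {
        "mean": statistics.mean(y_train),
        "median": statistics.median(y_train),
    }
    pred = predictors[algorithm]
    error = sum(abs(y - pred) for y in y_train) / len(y_train)
    return algorithm, error

# The list that spread() would fan out: one run per algorithm.
algorithms = ["mean", "median"]
with ThreadPoolExecutor() as pool:
    scores = list(pool.map(fit_and_score, algorithms))
```

Gathering the scores side by side makes it easy to see which approach handles this (outlier-heavy) data better.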

Hyperparameter tuning, an essential step in optimizing ML models, can also benefit greatly from the parallel processing capabilities of fleets. By running hyperparameter tuning in fleets, multiple combinations of parameters can be tested simultaneously. This not only speeds up the process but also helps in discovering the most effective configuration to enhance model performance across varied datasets.
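Building the list that `spread()` fans out is plain Python. The sketch below uses `itertools.product` to expand a hypothetical grid (the parameter names and values are illustrative) into one dictionary per combination; a 3×3×3 grid yields the 27 concurrent runs shown in the screenshots below.

```python
from itertools import product

# Hypothetical grid for a tree-based model; names and values are illustrative.
param_grid = {
    "n_estimators": [100, 200, 300],
    "max_depth": [3, 5, 7],
    "learning_rate": [0.01, 0.1, 0.3],
}

# Expand the grid into one dict per combination -- the list spread() fans out.
keys = list(param_grid)
combinations = [dict(zip(keys, values)) for values in product(*param_grid.values())]

print(len(combinations))  # 27 combinations, one per fleet run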

The screenshots below show how fleets can be used in an ML pipeline.

<figure><img src="https://1018070783-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FIQKNeqjEeOp9UwUcB9R9%2Fuploads%2FvTmpxtnzsbCnJ5aD7aNW%2FScreenshot%202025-11-27%20at%2011.10.36%E2%80%AFPM.png?alt=media&#x26;token=e2f49573-f7e8-43d2-8a6b-297aa7cbcdec" alt=""><figcaption><p>Setting up the hyperparameter grid</p></figcaption></figure>

Now we can use the `spread()` function to launch the fleet.

<figure><img src="https://1018070783-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FIQKNeqjEeOp9UwUcB9R9%2Fuploads%2FO1QsICoxZIkHHKxbtruM%2FScreenshot%202025-11-27%20at%2011.13.25%E2%80%AFPM.png?alt=media&#x26;token=db991abd-99c0-49ff-9038-9483ae8d94ee" alt=""><figcaption><p>Pass Parameters to Spread Function</p></figcaption></figure>

Next, we run the model for each parameter combination.

<figure><img src="https://1018070783-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FIQKNeqjEeOp9UwUcB9R9%2Fuploads%2FJgRafyHrWarMHDViEXbM%2FScreenshot%202025-11-27%20at%2011.16.31%E2%80%AFPM.png?alt=media&#x26;token=10e90b0d-cd47-426a-8f83-884b0f9f6472" alt=""><figcaption><p>27 Concurrent Runs Initialized - One for each Hyper Parameter Combination</p></figcaption></figure>

<figure><img src="https://1018070783-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FIQKNeqjEeOp9UwUcB9R9%2Fuploads%2FyVodYTGknQR23mk1F7sP%2FScreenshot%202025-11-27%20at%2011.18.32%E2%80%AFPM.png?alt=media&#x26;token=6d64d58f-bef2-4d01-863b-070eb5052165" alt=""><figcaption><p>Use Aggregator block to combine all results</p></figcaption></figure>

You can use the best parameters to re-train your model.
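Once the aggregator block has combined the runs, selecting the winning combination is a one-liner. The sketch below assumes each fleet run returned a `(params, score)` tuple; both the shape of the gathered list and the scores shown are illustrative.

```python
# Hypothetical gathered output: one (params, validation score) tuple per run.
gathered = [
    ({"max_depth": 3, "learning_rate": 0.1}, 0.89),
    ({"max_depth": 5, "learning_rate": 0.1}, 0.93),
    ({"max_depth": 7, "learning_rate": 0.01}, 0.91),
]

# Pick the combination with the highest validation score.
best_params, best_score = max(gathered, key=lambda run: run[1])

print(best_params)  # {'max_depth': 5, 'learning_rate': 0.1}
```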

### Data Processing with Fleets:

Fleets in Zerve also enable faster data processing by allowing category-level processing within a dataset column. For instance, when dealing with large datasets that contain categorical variables, fleets can process each category in parallel. This strategy reduces processing time substantially, enabling quicker insights and data-driven decisions.
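The per-category pattern can be sketched as follows. The toy rows, the `department` column, and the thread-based emulation of the fan-out are all assumptions; in Zerve, each department in the spread list would be handed to its own compute block.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy dataset; in practice each category's rows might be a DataFrame slice.
rows = [
    {"department": "Sales", "amount": 100},
    {"department": "Sales", "amount": 300},
    {"department": "HR", "amount": 50},
    {"department": "Engineering", "amount": 400},
]

# The list that spread() would fan out: one entry per category.
departments = sorted({row["department"] for row in rows})

def process_department(dept):
    # The per-category work a single spread-out block would perform.
    amounts = [r["amount"] for r in rows if r["department"] == dept]
    return dept, sum(amounts) / len(amounts)

# One concurrent run per department, mirroring spread()/gather().
with ThreadPoolExecutor() as pool:
    results = dict(pool.map(process_department, departments))
```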

<figure><img src="https://1018070783-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FIQKNeqjEeOp9UwUcB9R9%2Fuploads%2FtHEGn9WLhrKVm0Npkn5g%2FScreenshot%202025-11-28%20at%202.46.44%E2%80%AFPM.png?alt=media&#x26;token=9172fc7d-a318-496c-9f0d-7baa72b85b25" alt=""><figcaption><p>Setup department list for data processing</p></figcaption></figure>

<figure><img src="https://1018070783-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FIQKNeqjEeOp9UwUcB9R9%2Fuploads%2FAopCR5VflddrfIq3qGwq%2FScreenshot%202025-11-28%20at%202.48.55%E2%80%AFPM.png?alt=media&#x26;token=0b7e709f-1f50-4742-a042-8e9772e22cf2" alt=""><figcaption><p>Data processing with 5 concurrent runs</p></figcaption></figure>

### LLM Evaluation with Fleets:

Furthermore, testing large language models (LLMs) with multiple prompts across different models can become significantly more efficient using fleets. By distributing the workload across different compute blocks, teams can evaluate LLMs against a variety of inputs in a fraction of the time it would take to perform sequential testing. This parallel approach not only saves time but also provides a more comprehensive understanding of how different models perform under diverse conditions.
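The fan-out over prompts can be sketched like this. `call_llm` is a stub standing in for a real client call (e.g. to a hosted model API), and the thread pool emulates the concurrent fleet runs; in Zerve, each prompt in the spread list would be sent from its own compute block.

```python
from concurrent.futures import ThreadPoolExecutor

# The list that spread() would fan out: one prompt per concurrent request.
prompts = [
    "Describe product A in one sentence.",
    "Describe product B in one sentence.",
    "Describe product C in one sentence.",
]

def call_llm(prompt):
    # Stub for a real LLM API call; each fleet run would issue one request
    # and return its completion.
    return f"[model response to: {prompt}]"

# Fan the prompts out across concurrent requests, then gather the responses.
with ThreadPoolExecutor(max_workers=5) as pool:
    responses = list(pool.map(call_llm, prompts))
```

Swapping the stub for a real client, or spreading over (prompt, model) pairs, extends the same shape to comparing several models at once.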

<figure><img src="https://1018070783-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FIQKNeqjEeOp9UwUcB9R9%2Fuploads%2FLEwgz9tfZNhlhJs3Bl3a%2FScreenshot%202025-11-28%20at%202.58.00%E2%80%AFPM.png?alt=media&#x26;token=5780c1dc-999d-431e-bd46-11c641dd0084" alt=""><figcaption><p>Create a list of Product Descriptions</p></figcaption></figure>

<figure><img src="https://1018070783-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FIQKNeqjEeOp9UwUcB9R9%2Fuploads%2Fvr7T5GBSRwgOfpp9cION%2FScreenshot%202025-11-28%20at%202.58.13%E2%80%AFPM.png?alt=media&#x26;token=2dc47d26-3932-4896-af3a-853fe07b8d6d" alt=""><figcaption><p>Pass it to 5 concurrent LLM requests for processing</p></figcaption></figure>
