Randomized Split
Use the Randomized Split node to create branching logic in your Neuron workflows based on configurable probabilities.
The Randomized Split node routes each incoming request down one of two paths at random, according to percentages you set. This guide explains how to use randomized splits effectively.
Functionality
The Randomized Split node acts as a traffic director, splitting incoming requests based on a configurable percentage. This can be useful for:
- A/B Testing: Compare the performance of different AI models, prompts, or node configurations
- Canary Deployments: Gradually roll out changes to a subset of users to monitor stability and performance
- Adding Randomness: Introduce variety into workflows, for example by using a different LLM for different requests
Node Properties
- Percentage A: Sets the probability that a given run will take the A Path
- Percentage B: Sets the probability that a given run will take the B Path
Percentage B is calculated automatically as the remainder, so the two percentages always sum to 100%.
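Conceptually, the split comes down to a single uniform random draw compared against the configured threshold. A minimal Python sketch of that idea (the function name and structure are illustrative, not Neuron's actual implementation):

```python
import random

def randomized_split(percentage_a: float) -> str:
    """Return "A" with probability percentage_a (0-100), otherwise "B".

    Illustrative sketch only; Neuron's internals may differ.
    """
    return "A" if random.uniform(0, 100) < percentage_a else "B"

# Over many runs, the path counts converge on the configured percentages.
counts = {"A": 0, "B": 0}
for _ in range(10_000):
    counts[randomized_split(70)] += 1
```

With Percentage A set to 70%, roughly 7,000 of the 10,000 simulated runs take the A Path.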
Usage Examples
Scenario: A/B Testing Different Models
Let’s say you want to compare the output of a base LLM like GPT-3.5 with the more advanced GPT-4 model:
- Add a Randomized Split node to your workflow
- Set “Percentage A” to 50%; “Percentage B” will automatically be set to 50%
- Connect “A Path” to a Call AI Model node configured to use GPT-3.5
- Connect “B Path” to a Call AI Model node configured to use GPT-4
Now, half of your requests will be processed by GPT-3.5, and half by GPT-4, allowing you to compare the results.
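The routing behavior of this setup can be simulated outside Neuron. In the sketch below, the two functions stand in for the two Call AI Model nodes; they are hypothetical placeholders, not real API calls:

```python
import random

# Hypothetical stand-ins for the two Call AI Model nodes.
def call_gpt35(prompt: str) -> str:
    return f"gpt-3.5 answer to: {prompt}"

def call_gpt4(prompt: str) -> str:
    return f"gpt-4 answer to: {prompt}"

def handle_request(prompt: str) -> tuple[str, str]:
    """Route 50/50 between the two models, as the split node would."""
    if random.random() < 0.5:  # Percentage A = 50%
        return ("A", call_gpt35(prompt))
    return ("B", call_gpt4(prompt))

# Tag each response with its path so the two branches can be compared later.
results = [handle_request("hello") for _ in range(1_000)]
path_a = sum(1 for path, _ in results if path == "A")
```

Tagging each response with the path it took is what makes the comparison possible afterwards, which is why the execution logs matter in a real A/B test.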
Scenario: Implementing a Canary Deployment
You’ve developed a new version of your image generation workflow. To avoid breaking things for all users, you can gradually roll out the changes:
- Add a Randomized Split node
- Set “Percentage A” to 90%; “Percentage B” will automatically be set to 10%. This sends only 10% of requests down the new workflow, while 90% continue through the known stable setup
- Connect the “A Path” to your current, stable workflow
- Connect the “B Path” to the new workflow
- Over time, you can gradually increase Percentage B (and thus reduce Percentage A) to expose the new workflow to more users
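A canary rollout like the one above usually follows a staged schedule. The values below are an illustrative example, not a Neuron feature; at each stage you would update “Percentage A” by hand after a healthy monitoring window:

```python
# Example ramp-up schedule: (stable %, canary %) at each stage.
ramp_schedule = [
    (90, 10),   # initial canary exposure
    (75, 25),   # after the first healthy monitoring window
    (50, 50),   # even split once error rates look comparable
    (0, 100),   # full cutover to the new workflow
]

for stable, canary in ramp_schedule:
    # The two percentages must always sum to 100, since Percentage B
    # is just the remainder of Percentage A.
    assert stable + canary == 100
```

Rolling back at any stage is just a matter of setting Percentage A back to 100%.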
Tips and Best Practices
- Use execution logs to track which path is being taken and evaluate the performance of each branch
- Start with small percentages when testing new paths
- Monitor the results of each path to make data-driven decisions
- Consider using multiple splits for more complex testing scenarios
- Combine Randomized Split with other node types to create more complex and dynamic workflows, for example:
- Split traffic between two different LLMs
- Split traffic between two different system prompts
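Because Percentage B is always the remainder, a single node gives exactly two branches. Chaining splits, as the tip about multiple splits suggests, lets you approximate finer-grained routing. A sketch of an even three-way split built from two chained binary splits (the variant names and thresholds are illustrative):

```python
import random

def three_way_split() -> str:
    """Approximate an even three-way split with two chained binary splits."""
    # First split: Percentage A = 33.3% peels off one third of traffic.
    if random.uniform(0, 100) < 100 / 3:
        return "variant-1"
    # Second split sees the remaining two thirds; 50/50 halves it evenly.
    return "variant-2" if random.uniform(0, 100) < 50 else "variant-3"

counts = {"variant-1": 0, "variant-2": 0, "variant-3": 0}
for _ in range(9_000):
    counts[three_way_split()] += 1
```

Each variant ends up with roughly one third of the traffic, because 2/3 × 1/2 = 1/3 for each of the second split's branches.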