Ever found yourself in a situation where your self-hosted n8n instance is sprinting like a marathon runner on a sugar high, in the worst possible way? You know, when too many production executions start partying at the same time and your system turns into a chaotic mess? Well, I’ve got good news for you: you can take control of that chaos with self-hosted concurrency control in n8n. It’s like being the bouncer at a club, deciding who gets in and who waits in line. Let’s dive into how you can manage n8n’s self-hosted concurrency, set limits, and keep production executions from dragging down your workflow performance.
Understanding Self-Hosted Concurrency Control in n8n
First off, let’s get one thing straight: this is all about self-hosted n8n. If you’re not running your own instance, you can stop reading right here. But if you are, buckle up because we’re about to take a wild ride into the world of concurrency control.
In regular mode, n8n doesn’t put any limits on how many production executions can run at the same time. Sounds great, right? Not so fast. This can lead to a scenario where too many concurrent executions start thrashing the event loop, causing your system to slow down, become unresponsive, or even crash. It’s like inviting too many guests to a party and realizing your house can’t handle the crowd.
But here’s where you can step in and be the hero. You can set a concurrency limit for production executions in regular mode. This means you get to control how many production executions run concurrently, and any over the limit get queued up. They’ll patiently wait in line until there’s room for them to join the party, processed in a first-in, first-out (FIFO) order.
Enabling and Monitoring Concurrency Control
By default, concurrency control is like that friend who never shows up to the party unless you invite them explicitly. To enable it, set the N8N_CONCURRENCY_PRODUCTION_LIMIT environment variable to the maximum number of production executions you want running at once, for example: export N8N_CONCURRENCY_PRODUCTION_LIMIT=20. Now you’re the one calling the shots.
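A minimal shell session to set and sanity-check the variable before launching n8n (assuming a POSIX shell; the value 20 is illustrative, so tune it to your hardware):

```shell
# Set the production concurrency limit before starting n8n.
# 20 is an example value; pick what your instance can handle.
export N8N_CONCURRENCY_PRODUCTION_LIMIT=20

# Confirm the variable is set in the environment n8n will inherit.
echo "$N8N_CONCURRENCY_PRODUCTION_LIMIT"
# prints: 20
```

Start n8n from this same shell afterwards so the process inherits the variable; setting it to -1 leaves executions unlimited.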
Wondering how to keep an eye on things? You can monitor concurrency control by watching the logs. Look for messages about executions being added to the queue and released. It’s like having a backstage pass to see who’s waiting to perform next.
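If your instance runs in Docker, one way to watch for those queue messages is to filter the container logs. This is an environment-dependent fragment, not something you can run standalone: it assumes a container named n8n, so adjust the name to your setup.

```shell
# Follow the n8n container's logs and keep only lines mentioning
# concurrency (queued and released executions show up here).
docker logs -f n8n 2>&1 | grep -i concurrency
```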
And guess what? In a future version, n8n will even show concurrency control in the UI. That’s right, you’ll be able to see the number of active executions and the configured limit right at the top of your project’s or workflow’s executions tab. It’s like getting a VIP table at the club.
What Types of Executions Does Concurrency Control Apply To?
Now, let’s talk about who gets to join the party. Concurrency control applies only to production executions, meaning those started from a webhook or a trigger node. It’s like having a guest list at the door.
On the other hand, it doesn’t apply to manual executions, sub-workflow executions, error executions, or those started from the command line interface (CLI). These are like the friends who can always crash at your place, no matter how full the house is.
Handling Queued Executions
So, what happens to those executions that get queued up? Well, they’ll sit tight until there’s room for them to run. But here’s the catch: you can’t retry queued executions. Once they’re in the queue, they’re in it for the long haul.
If you decide to cancel or delete a queued execution, it’s like uninviting someone from the party. They’re out of the queue and won’t be coming back.
And when you restart your n8n instance, it’s like opening the doors again. n8n will resume queued executions up to the concurrency limit and re-enqueue the rest. It’s like letting in a new batch of guests while keeping the others waiting.
Concurrency Control in Queue Mode
Now, let’s switch gears and talk about queue mode. In this mode, you can control how many jobs a worker may run concurrently. It’s like having multiple bouncers at different doors, each managing their own line.
Concurrency control in queue mode is a separate beast from regular mode, but both dance to the same tune: the N8N_CONCURRENCY_PRODUCTION_LIMIT environment variable. If you set it to a value other than -1, n8n will use that limit. Otherwise, it falls back to the --concurrency flag or its default setting. It’s like having a master plan for all your bouncers to follow.
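The fallback order described above can be sketched as a tiny shell function. This is purely illustrative, not n8n’s actual source; the function name is made up, and the flag value of 10 passed in the demo is an assumption standing in for the worker’s --concurrency setting.

```shell
# Sketch of the queue-mode precedence rule (illustrative, not n8n's code).
# $1 stands in for the value of the worker's --concurrency flag.
effective_concurrency() {
  limit="${N8N_CONCURRENCY_PRODUCTION_LIMIT:--1}"  # unset behaves like -1
  if [ "$limit" != "-1" ]; then
    echo "$limit"  # the env var wins when set to anything other than -1
  else
    echo "$1"      # otherwise fall back to the --concurrency flag/default
  fi
}

# With the env var unset, the flag value is what the worker uses:
unset N8N_CONCURRENCY_PRODUCTION_LIMIT
effective_concurrency 10
# prints: 10
```

Set N8N_CONCURRENCY_PRODUCTION_LIMIT before starting a worker and it overrides the flag; leave it at -1 (or unset) and the flag keeps the wheel.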
Wrapping Up
So, there you have it. With self-hosted concurrency control in n8n, you can manage your workflow’s performance like a pro. Set limits, queue up executions, and keep your system running smoothly. It’s all about being the bouncer at your own party, making sure everyone gets in without turning your house into a chaotic mess.
Ready to take control of your n8n workflow? Start setting those limits and watch your system performance soar. And hey, if you need more tips on optimizing your n8n setup, check out our other resources. We’ve got plenty more where this came from!