MaxAttemptsExceededException in Laravel Horizon


One persistent issue our team faced was managing duplicated events in Laravel's queue system. After numerous attempts to solve it, we finally arrived at a solution that helped us handle duplicated messages without losing critical data.

The Problem: Duplicated Messages in the Queue

Our users often generate activities that trigger multiple events. These events, such as syncing user data with third-party services, get queued for processing. The challenge arises when multiple events with the same key (such as user ID) are added to the queue in quick succession. Without a proper mechanism in place, older events could overwrite newer data, or worse, important data could be lost in third-party systems.

To address this, we initially used Laravel's WithoutOverlapping job middleware, which ensures that two jobs with the same key do not execute simultaneously. However, we still encountered problems where older jobs were being processed after newer, more relevant ones.
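As a sketch of that first setup, attaching WithoutOverlapping to a job looks roughly like this (the SyncUserJob class and the key format are hypothetical examples, not our production code):

```php
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\Middleware\WithoutOverlapping;

class SyncUserJob implements ShouldQueue
{
    public function __construct(private int $userId) {}

    public function middleware(): array
    {
        // Jobs sharing this key never run concurrently; a blocked job
        // is released back onto the queue and attempted again later.
        return [
            (new WithoutOverlapping('sync-user-'.$this->userId))->releaseAfter(5),
        ];
    }
}
```

The catch is that every release counts as an attempt, which is exactly what leads to MaxAttemptsExceededException under load.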

Iteration 1: Increasing the Retry Count

Our first attempt to fix the issue involved increasing the number of retries on the jobs. We thought that by raising the retry count to 25, we could mitigate the problem, allowing the jobs more chances to succeed.

// Increasing retries in the job class
public $tries = 25;

During testing in our local and development environments, things looked fine. We even conducted load testing, and the system performed well under small loads. However, after deploying the changes to production, the problem returned within a few days. The increased retries only delayed the inevitable and did not solve the root issue.

Iteration 2: Leveraging $maxExceptions

Next, we tried the $maxExceptions property, which caps how many unhandled exceptions a job may throw before it is marked as failed. We set it to 1 and made the attempt count unlimited, so a job blocked by WithoutOverlapping could keep being released back onto the queue until no other job with the same key was in progress (releases don't count as exceptions), without ever hitting MaxAttemptsExceededException.

// Setting unlimited attempts and maximum exceptions in the job class
public $tries = 0;

public $maxExceptions = 1;

While unconventional at first glance, this approach worked remarkably well. The message was re-queued until there were no other conflicting jobs, ensuring we didn't overwrite new data with older data. However, this came with a downside.

Pros and Cons of $maxExceptions

By using $maxExceptions, we lost Laravel's native auto-retry functionality. If a message failed, it needed to be manually re-run. Fortunately, this wasn’t a big issue for us because our users generate many such events. Each subsequent message in the queue essentially acts as a retry, ensuring that the most recent data is always processed.

Handling Duplicated Events

A key challenge for us was managing the large volume of duplicated events that we didn’t want to process. Laravel provides a Rate Limiting feature that allows you to throttle the execution of jobs. However, we couldn’t simply rate-limit all events because we couldn’t afford to skip the final message in the queue. The last event typically signifies that the user has completed all updates, and the latest state needs to be synchronized with third-party services.
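For context, Laravel's built-in rate limiting for jobs uses the RateLimited middleware, roughly as below (the "user-sync" limiter name and per-user key are hypothetical examples); this is the approach we decided against applying wholesale, because a throttled final message could be delayed or skipped:

```php
use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Queue\Middleware\RateLimited;
use Illuminate\Support\Facades\RateLimiter;

// In a service provider's boot() method: define the limiter.
RateLimiter::for('user-sync', function (object $job) {
    return Limit::perMinute(10)->by($job->userId);
});

// In the job class: jobs over the limit are released back onto the queue.
public function middleware(): array
{
    return [new RateLimited('user-sync')];
}
```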

The Solution: Marking Outdated Messages

To overcome this challenge, we implemented a custom middleware: MarkOutdatedMiddleware. This middleware checks for the presence of newer messages with the same key in the queue. If such a message exists, the current message is marked as outdated and skipped, ensuring only the latest message is processed.

Here’s the implementation of MarkOutdatedMiddleware:

use Illuminate\Support\Facades\Cache;

class MarkOutdatedMiddleware
{
    public function __construct(
        protected readonly string $uniqueId,
        protected readonly int $dispatchedAt,
    ) {}

    public function handle($job, $next)
    {
        // The job's constructor records the dispatch time of the newest
        // message for this key; if a newer message exists, skip this job.
        $lastAddedTime = Cache::driver('redis')->get($this->uniqueId);
        if ($lastAddedTime && $lastAddedTime > $this->dispatchedAt) {
            return;
        }

        return $next($job);
    }
}

This middleware ensures that only the most recent message for a given key is processed, while older, outdated messages are skipped. This solution fit well with our business logic, where the final state update is the most critical.

Below is an example of a job class that wires everything together:

use App\Models\User;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\Middleware\WithoutOverlapping;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Cache;

class SomeJob implements ShouldQueue
{
    use Dispatchable;
    use InteractsWithQueue;
    use Queueable;
    use SerializesModels;

    // Unlimited attempts: releases from WithoutOverlapping never
    // exhaust the retry count, avoiding MaxAttemptsExceededException.
    public $tries = 0;

    // ...but a single unhandled exception fails the job immediately.
    public $maxExceptions = 1;

    private int $userId;
    private int $dispatchedAt;

    public function __construct(int $userId, ?int $dispatchedAt = null)
    {
        $this->userId = $userId;
        $this->onQueue('user-queue');
        $this->dispatchedAt = $dispatchedAt ?? time();

        // Record the dispatch time of the newest message for this key;
        // MarkOutdatedMiddleware compares against it to skip stale jobs.
        Cache::driver('redis')->put($this->uniqueId(), time(), 600);
    }

    public function middleware(): array
    {
        return [
            new MarkOutdatedMiddleware($this->uniqueId(), $this->dispatchedAt),
            (new WithoutOverlapping($this->uniqueId()))
                ->releaseAfter(5)
                ->expireAfter(30),
        ];
    }

    public function handle(User $user): void
    {
        // Some business logic
    }

    public function uniqueId(): string
    {
        return sprintf('unique-job-for-user-id-%s', $this->userId);
    }
}
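Dispatching then needs nothing special. As a usage sketch (assuming the SomeJob class above and a $userId from your application), each update simply queues a new job, and the middleware decides which one actually runs:

```php
// Each dispatch refreshes the "last added" timestamp for the user's key
// in Redis. When a worker later picks a job up, MarkOutdatedMiddleware
// skips it if a newer message for the same user was queued in the meantime.
SomeJob::dispatch($userId);
```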

Conclusion

By combining Laravel's WithoutOverlapping feature with a custom middleware solution, we were able to effectively handle duplicated events and avoid processing outdated messages. The result was a more efficient queue system, free from the problems of overwriting or losing data.


Final Thoughts

This approach may not be suitable for every use case, but it worked well for our high-frequency, user-generated events. If you face similar challenges in managing queues, consider using custom middleware alongside Laravel's existing features.

If you have any questions, feedback, or better solutions, feel free to share them in the comments or reach out directly!

Thank you for reading!