79327108

Date: 2025-01-03 16:56:07
Score: 1
Natty:

Can this achieve zero-downtime deployment?

Yep, it can! I've done it before. My setup was similar to yours.

At a company I used to work at, we did zero-downtime deployments using Deployer (PHP). It would create a new numbered release directory (1, 2, 3, 4, and so on), copy the code in, then update the live/ symlink to point to the new release. We would then run systemctl reload nginx, followed by a custom script that cleared the opcache.
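As a rough illustration (not the actual Deployer recipe, which I no longer have), the flow looked something like the shell sketch below; the paths, release numbering, and the opcache-clearing endpoint are placeholders for whatever your setup uses:

    #!/usr/bin/env bash
    # Hypothetical sketch of the symlink-switch deployment flow.
    set -euo pipefail

    DEPLOY_ROOT=/var/www/myapp            # placeholder base directory
    mkdir -p "$DEPLOY_ROOT/releases"
    LAST=$(ls "$DEPLOY_ROOT/releases" | sort -n | tail -n 1)
    NEW_RELEASE="$DEPLOY_ROOT/releases/$(( ${LAST:-0} + 1 ))"

    # 1. Copy the new code into its own numbered release directory.
    mkdir -p "$NEW_RELEASE"
    rsync -a --exclude=.git ./ "$NEW_RELEASE/"

    # 2. Repoint the live/ symlink at the new release (the ln -sfT switch).
    ln -sfT "$NEW_RELEASE" "$DEPLOY_ROOT/live"

    # 3. Reload nginx so workers pick up the change without dropping connections.
    systemctl reload nginx

    # 4. Clear the opcache. Running opcache_reset() from the CLI only clears the
    #    CLI's own cache, so hit a (hypothetical) local-only endpoint served by
    #    PHP-FPM that calls opcache_reset().
    curl -fsS http://127.0.0.1/opcache-clear.php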

After modifying the configuration, requests that were already in flight still return results based on the old project’s logic once they finish processing. However, from my observation, the change doesn’t take effect immediately after running nginx -s reload; the old project’s logic seems to persist for a while.

The opcache still has the old code loaded, and it will take some time for that to expire depending on the settings (opcache.revalidate_freq and opcache.validate_timestamps). Even after adjusting those, it may still not update as expected.
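For reference, these are the two directives in question (they live in php.ini or the FPM pool config); the values shown are just the usual defaults, not a recommendation:

    ; opcache timestamp checking (example values only)
    opcache.validate_timestamps=1  ; 0 means opcache never re-checks files on disk
    opcache.revalidate_freq=2      ; seconds between timestamp checks when validation is on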

This method is common, but I have two issues. First, on macOS Sequoia 15.2, symlinks don’t seem to work with Nginx: I set root in the Nginx config to a symlink path, updated the symlink with ln -sfT to point to a different project, and reloaded Nginx (nginx -s reload), but it still serves the old PHP code. Second, this works on Linux (which my DevOps colleague uses), but could it cause issues under high traffic or in edge cases?

There is an "issue" with the opcache where it doesn't pick up the updated files despite clearing the cache, but there is a workaround by using $realpath_root instead of $document_root (https://stackoverflow.com/a/23904770/6055465, you can also see an example script in the question for clearing the opcache). I don't fully understand how that all worked and don't have access to the code as it has been a few years and I no longer work for that company.

Why don’t more people adopt this technique for automated deployment, especially in scenarios where there aren’t many redundant hosts? Is it because they haven’t thought of it, or are there potential risks involved?

I think the main reason it isn't more common is that companies that require zero-downtime deployments have only one or two people figure it out, implement it once, and then it is never touched or taught again.

Or they use a different method of zero-downtime deployment, such as containers + Kubernetes, or load balancers: take each host out of the rotation, let its connections finish, make the updates, then add it back into the rotation. These two methods are arguably superior because they let you update the operating system and other things without downtime, not just the application.

What are the advantages, disadvantages, and points to pay attention to, especially during high-traffic periods?

A little long, but hopefully that helps you achieve your goal and also answers some of your "why" questions.

Reasons:
  • Blacklisted phrase (1): stackoverflow
  • Long answer (-1):
  • Has code block (-0.5):
  • Contains question mark (0.5):
  • Starts with a question (0.5): Can this
  • Low reputation (0.5):
Posted by: Magnie Mozios