Bailiwick Express approached System Labs to audit their existing setup and to run their Linux-based server infrastructure as an extension of their team. As part of that engagement we helped optimise their cloud environment, expanding the disks on several of their Linux servers; the issue they faced stemmed from incorrectly configured LVM volumes, which we quickly rectified.
We designed a new stack comprising NGINX, Apache and CentOS 7, leveraging Varnish as an in-memory cache to reduce the load on their database servers. Because the three stacks were very similar, we could script the deployment as configuration as code (CaC) using Ansible, creating playbooks for the application and database servers. This means we can replicate the environment in about 10 minutes, which also acts as a DR plan in the event of something going wrong.
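A deployment like this can be sketched as a pair of Ansible plays, one per server type. The host groups, role names and structure below are illustrative assumptions, not the actual Bailiwick playbooks:

```yaml
# Illustrative sketch only -- hosts and role names are assumptions,
# not the production playbooks.
- name: Configure application servers
  hosts: app_servers
  become: true
  roles:
    - nginx      # front-end TLS termination / reverse proxy
    - varnish    # in-memory HTTP cache in front of the backend
    - apache     # application backend

- name: Configure database servers
  hosts: db_servers
  become: true
  roles:
    - mariadb    # example database role
```

Because each play is declarative and idempotent, re-running it against a fresh inventory rebuilds an identical environment, which is what makes a fast, repeatable DR rebuild possible.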
We set the cache lifetime to 2 hours. Any page or asset is rendered normally on the first request and the rendered response is stored in the cache; each subsequent request within the 2-hour lifetime is served directly from memory, so it never hits the backend or the database. This caching mechanism produced a significantly lower and more consistent load on the infrastructure, which allowed us to reduce the resources required by each server and in turn lowered the total running costs by 40% per month.
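The first-request-renders, later-requests-hit-memory behaviour can be sketched in a few lines of Python. Varnish implements this (and far more) natively; the function and variable names here are illustrative, not part of any real API:

```python
import time

CACHE_TTL = 2 * 60 * 60   # 2-hour lifetime, in seconds
_cache = {}               # path -> (rendered_body, stored_at)

def render_backend(path):
    """Stand-in for the expensive Apache/database render."""
    return f"<html>rendered {path}</html>"

def handle_request(path, now=None):
    """Serve from cache when a fresh entry exists, else render and store."""
    now = time.time() if now is None else now
    hit = _cache.get(path)
    if hit is not None and now - hit[1] < CACHE_TTL:
        return hit[0], "HIT"       # served from memory, backend untouched
    body = render_backend(path)    # cache miss: backend does the work
    _cache[path] = (body, now)
    return body, "MISS"
```

The first request for a path returns `MISS`, every request within the next two hours returns `HIT`, and once the TTL expires the next request renders and re-caches the page.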
Pages served from the cache also load significantly faster for the end-user. Before the switchover, our monitoring platform showed an average HTTPS response time of 3 to 4 seconds. Once we completed the migration and went live on the new solution, the same monitoring showed an average HTTPS response time of 200 to 400 ms, roughly a 10x performance gain for the average user landing on the homepage.
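The size of the gain follows directly from the monitoring figures; taking the range endpoints gives the bounds on the speedup:

```python
# Speedup implied by the monitoring figures:
# ~3-4 s average before vs ~200-400 ms average after.
before_s = (3.0, 4.0)   # seconds, pre-migration
after_s = (0.2, 0.4)    # seconds, post-migration

low = before_s[0] / after_s[1]    # slowest case: 3.0 / 0.4 = 7.5x
high = before_s[1] / after_s[0]   # best case:    4.0 / 0.2 = 20.0x
```

So the measured improvement sits somewhere between 7.5x and 20x, i.e. around an order of magnitude.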