Methodologies ranging from Lean Startup to Agile advocate building a minimum viable product (MVP) and then scaling it into a full-scale solution based on user feedback. Quite a few of these approaches advocate "build fast, fail fast": discarding pieces of code that do not meet market expectations is the norm. An aspect that many of these approaches fail to capture is how to scale a product when its beta release does meet market expectations and needs to grow without many changes. In fact, it is important for a corporation to keep certain factors in mind even while building the beta version of its product, for example ensuring that the design of the solution supports a distributed architecture even at the POC stage.
What steps can a corporation take when looking at scaling from an MVP to a full-scale, market-ready solution?
One of the first things a corporation needs to do is review the current solution stack: is it built to support a distributed or a monolithic architecture? A monolithic solution is easier to start with because it is simple to build, test, and deploy, but the key challenge is that once the solution becomes large, making changes and resolving bugs becomes a tough ask. Building a distributed system is more challenging: the application must be divided into smaller, distributed services connected through adapters. The benefit of a distributed architecture is that it is easy to scale; if any one module creates a bottleneck, it can be switched to a more robust and scalable component.

To identify the bottlenecks of a solution, run a profiler at both the application layer and the database layer. Some of the questions a profiler should answer are:

- How many concurrent users is the current MVP setup supporting?
- To support a full-scale solution, does the system need a redesign, or will scaling up the backend and enhancing the front-end stack suffice?
- If the existing solution has latency, which layer is introducing it?
- Do the constraints that exist at the MVP stage live in the code or in the database?
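As a minimal sketch of application-layer profiling, the snippet below wraps an arbitrary function with Python's built-in `cProfile` and prints the functions sorted by cumulative time; the `slow_lookup` function is a hypothetical stand-in for a hot spot in your own code.

```python
import cProfile
import io
import pstats


def slow_lookup(items, targets):
    # Deliberately quadratic: scans the whole list for every target.
    return [t for t in targets if t in items]


def profile_call(fn, *args):
    """Run fn under cProfile and return (result, stats report as text)."""
    profiler = cProfile.Profile()
    profiler.enable()
    result = fn(*args)
    profiler.disable()
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
    return result, buf.getvalue()


if __name__ == "__main__":
    data = list(range(10_000))
    hits, report = profile_call(slow_lookup, data, list(range(0, 10_000, 7)))
    print(report)  # the top entries point at the hot spots
```

Database-layer profiling works the same way in spirit, but uses the database's own tooling (slow-query logs, `EXPLAIN` plans) rather than a code profiler.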
It is important to check memory consumption at frequent intervals in the profiler, and the profiler should identify whether any process or service is consuming resources beyond the limit prescribed to it. Once the profiler has been run, the next step is load testing. A well-run load test reveals how an application behaves when a very large number of transactions arrive in a short window; if the load test shows a high degree of latency, the bottlenecks need to be identified. One of the factors to consider is speed: if page load time is an issue, it is important to analyze what constitutes a user session. On which sections of the website or application does a user spend the most time? Validate whether there are redundant steps that can be removed, or modules where faster algorithms can be used.
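The idea of a load test can be sketched with nothing more than a thread pool and a timer. The example below fires many concurrent calls at a target function and summarizes the latency distribution; `fake_request` is a hypothetical stand-in for a real HTTP call to the MVP, and the percentile math is deliberately simple.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def timed_call(fn):
    """Return the latency in seconds of a single call to fn."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start


def load_test(fn, concurrency=50, requests=500):
    """Fire `requests` calls to fn across `concurrency` workers and
    summarize the observed latencies."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(lambda _: timed_call(fn), range(requests)))
    latencies.sort()
    return {
        "p50": statistics.median(latencies),
        "p95": latencies[int(len(latencies) * 0.95)],
        "max": latencies[-1],
    }


if __name__ == "__main__":
    def fake_request():
        time.sleep(0.01)  # stand-in for a real request to the MVP

    print(load_test(fake_request))
```

A real load test would use a dedicated tool and hit the deployed system over the network, but the report shape is the same: watch the tail (p95, max), not just the median.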
Once you have identified the constraints and bottlenecks in general, there are a few basic tenets to get things started.
- Use a content delivery network (CDN) such as Cloudflare to serve static assets.
- Use HTTP caching to store local copies of web resources for faster retrieval the next time the resource is required.
- Leverage serverless architecture so that an application can scale horizontally.
- Utilize tools like AWS Elastic Load Balancing (ELB) to distribute traffic automatically, paired with auto scaling of the instances behind it.
- In a large-scale application, it is important that its components, from the database and data structures to the APIs, are properly optimized. In quite a few cases a developer must choose between faster write performance and faster read performance.
- In most cases, ensure that the application serves hot data from RAM instead of hard drive space. Prefer algorithms that trade higher space complexity for lower time complexity.
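The space-for-time trade-off from the last tenet has a classic minimal example: memoization. The decorated function below keeps every computed result in RAM so repeated work is never redone, turning an exponential recursion into a linear one at the cost of memory.

```python
from functools import lru_cache


@lru_cache(maxsize=None)  # trade RAM for speed: keep every result in memory
def fib(n):
    """Naive recursive Fibonacci, made fast by caching prior results."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)


if __name__ == "__main__":
    print(fib(30))
```

The same principle is what in-memory caches like Redis apply at the system level: pay with RAM to avoid recomputing or re-fetching.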
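To make the HTTP caching tenet concrete, here is a small hypothetical helper (the function name and signature are this sketch's own invention) that builds the standard response headers a server can send so that browsers and proxies reuse a local copy of a resource until it expires.

```python
from email.utils import formatdate


def cache_headers(max_age_seconds, etag=None):
    """Build HTTP response headers that allow clients and shared caches
    to reuse a local copy of the resource for max_age_seconds."""
    headers = {
        "Cache-Control": f"public, max-age={max_age_seconds}",
        "Date": formatdate(usegmt=True),  # RFC-formatted current time
    }
    if etag:
        # An ETag lets the client revalidate cheaply with If-None-Match.
        headers["ETag"] = f'"{etag}"'
    return headers
```

Whatever framework serves the application, the headers themselves (`Cache-Control`, `ETag`) are the standard HTTP mechanism; only the helper around them is assumed here.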
With the evolution of tools like Docker, try building the application on newer stacks that make it fault tolerant and easily scalable.