How do you Scale up? — Interview & Self Analysis

Kumar Nellipudi
4 min read · Mar 12, 2021

What is meant by typical system design? Okay, I will find some articles on the internet first. (Okay, I referred to some — it's never-ending. But system design for our kind of system is something we can't find on the internet, at least for our current problem.) So, on the contrary, let's generalize and write.
So, system design is basically bridging the gap between a specific problem domain and its implementation on software infrastructure. Put simply, it's the ability to scale up a system to handle a really different and unique real-time problem.

How do we get the system into fine shape so that we can give our users a seamless experience of our service? It's always recommended to Keep It Simple and Stupid (KISS). What baby steps are involved in getting to such a good enough system? Sometimes how we scale up in the initial stages matters.

First, we need to run a load test (watch a YouTube video on this). Before making any topological change, the foremost thing is to identify the workload on a single server: how does it respond to normal traffic? How performant is it under irregular usage? Once we are ready with the full report, let's get set and go.

Ideally a system would provide all three aspects mentioned in the CAP theorem, but any server or system can satisfy at most two of the three: Consistency and Availability (CA), Consistency and Partition tolerance (CP), or Availability and Partition tolerance (AP). Start from a lawn chair: given that Consistency (C), Availability (A), and Partition tolerance (P) cannot all be served together, what is the number of requests a single instance could handle in production? If we later feel that number is far too low and the system is not going to provide a better experience (of course, hiccups are unavoidable), that's where scaling up comes in. When the "fair enough" moment becomes the "need to do something" moment, run a code walk-through; after a few reviews we should be able to identify the next set of optimizations and improve the system phase-wise, considering code refactorings and architectural bottlenecks. Going forward, based on user throughput, we keep rearranging the network topology on a timely basis.
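As a rough illustration, a single-server load test can be sketched in a few lines of Python. Here `handle_request` is a hypothetical stand-in for the real endpoint; in practice you would point a dedicated tool (JMeter, wrk, Locust) at an actual URL:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload: int) -> int:
    """Hypothetical stand-in for a single server's request handler;
    a real load test would issue HTTP requests instead."""
    time.sleep(0.01)  # simulate ~10 ms of server-side work
    return payload * 2

def load_test(num_requests: int, concurrency: int) -> dict:
    """Fire num_requests with the given concurrency and report
    throughput, i.e. what one box could handle."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(handle_request, range(num_requests)))
    elapsed = time.perf_counter() - start
    return {
        "requests": num_requests,
        "errors": sum(1 for r in results if r is None),
        "seconds": round(elapsed, 3),
        "req_per_sec": round(num_requests / elapsed, 1),
    }

report = load_test(num_requests=200, concurrency=20)
print(report)
```

The `req_per_sec` figure from a run like this is exactly the "number of requests it could handle" baseline the text asks for before any topology change.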

Vertical Scaling: Vertical scaling happens when the server is struggling to serve responses to the client in a typical client-server architecture. What vertical scaling means is upgrading the system or infrastructure to one with higher physical disk capacity, more CPU cores, and more RAM. Of course, we could upgrade virtually in the case of cloud computing, thanks to cloud technology; if we are using a VM, we can do it with ease. But some cases might not give a proper scale-up, or the system may fail to keep service downtime low as the number of users increases rapidly. In that case, horizontal scaling is what serves the seamless experience.

Horizontal Scaling: Balancing the load and distributing the incoming traffic across the peers in a network will definitely help meet the needs of our system. Most large-scale applications from the tech giants use this method to give their users a smooth experience. Horizontal scaling means adding more systems alongside the main server, i.e., adding more nodes to the cluster. We could use the cloud to provide application load balancing; more on this later.
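To make the idea concrete, here is a minimal round-robin sketch of spreading requests over the extra nodes. The node names are placeholders, not real hosts, and a production balancer would also do health checks:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distribute incoming requests evenly across the cluster's nodes."""

    def __init__(self, nodes):
        self._ring = cycle(nodes)  # endless iterator over the node list

    def route(self, request_id: str) -> str:
        node = next(self._ring)  # each call advances to the next node
        return f"{request_id} -> {node}"

# Hypothetical pool of nodes added by horizontal scaling.
lb = RoundRobinBalancer(["app-node-1", "app-node-2", "app-node-3"])
for i in range(5):
    print(lb.route(f"req-{i}"))
```

Each node sees roughly one third of the traffic, which is the whole point of adding nodes instead of a bigger box.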

Having centralized control over the systems — commencement of LB: This reminds me of Kafka's In-Sync Replicas (I don't know this well at present :( — need to get over it and go in depth), an indirect case of horizontal scaling. There are two common types of load balancers: the Application Load Balancer (ALB) and the Network Load Balancer (NLB). I guess the Classic Load Balancer and Elastic Load Balancing are specific to AWS; I will try to learn about these things later.
We could also do this programmatically, I mean by defining routes that map different URLs to different ports or domains. Let's say /user/* is forwarded to api.something.com, and /admin/* goes to admin.something.com. Using a proxy server we can establish communication toward our LBs; it typically acts at the network level.
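A minimal sketch of that path-based routing, reusing the /user/* and /admin/* examples from above. The routing table and the fallback backend are assumptions for illustration; a real setup would put these rules in a reverse proxy such as nginx or an ALB listener:

```python
# Hypothetical path-prefix routing table mirroring the examples above.
ROUTES = {
    "/user/": "api.something.com",
    "/admin/": "admin.something.com",
}
DEFAULT_BACKEND = "www.something.com"  # assumed fallback host

def resolve_backend(path: str) -> str:
    """Pick the backend whose prefix matches the request path,
    the way a proxy forwards traffic to different LBs."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend
    return DEFAULT_BACKEND

print(resolve_backend("/user/42/profile"))   # api.something.com
print(resolve_backend("/admin/dashboard"))   # admin.something.com
```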

Next set of topics to cover

Brief about Orchestration design in microservices: Yet to write

Database Indexing: Before we think of Redis, let's focus on the fundamentals too.
Yet to write

Database Caching: Yet to write

Database sharding: Yet to write

Database throttle balancing: Yet to write

12 factors
Codebase — the system should be in version control
Dependencies — jars are not part of the code; declare dependencies explicitly
Config — everything that is static shouldn't be config; only deploy-specific values belong there
Port binding — need a config to run on whichever port the deploy assigns
Backing Services — should support switching between backing services
Build, Release, Run — we should follow strictly separated build, release, and run stages
Processes — capable enough to deploy across multiple systems as stateless processes
Concurrency — don't rely on too much threading; keep it simple (KISS) and scale out via processes
Disposability — processes should be less time-consuming to start and should shut down gracefully
Dev/Prod parity — keep the gap between prod and dev small
Logs — treat logs as event streams
Admin Processes — run jobs and migration tasks as one-off processes
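As a small illustration of the Config and Port binding factors above, here is a hedged Python sketch: deploy-specific values come from environment variables instead of being hard-coded. The variable names (APP_PORT, DB_URL) are assumptions for the example, not a standard:

```python
import os

# Simulate values a deploy would inject; setdefault keeps any real
# environment values if they are already set.
os.environ.setdefault("APP_PORT", "8080")
os.environ.setdefault("DB_URL", "postgres://localhost/dev")

config = {
    "port": int(os.environ["APP_PORT"]),  # Port binding: port comes from config
    "db_url": os.environ["DB_URL"],       # Backing service as an attached resource
}
print(config)
```

Swapping the database or the port then means changing the deploy's environment, not the code, which is exactly the switch-between-backing-services idea in the list.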
