Our Journey from a Single Monolithic System to a Microservices-Based Architecture. How did we do it? Why did we do it?

It was one of those nights of “IPL 2017” when all our metrics were improving. We had just launched a new feature that provided a much faster and simpler scorecard than anything else on the market. Suddenly, all of it went down: servers were crashing on a particular request, the uninstall rate touched a new high, and users switched to competitors within a fraction of a second. Panicked and scared, we reverted our codebase, debugged and debugged, and found a silly bug in one of the add-on features of the new scorecard module because of which the entire monolith had crashed.

As we started succeeding, people started to replicate us, and suddenly there was a plethora of live prediction platforms in the market. To keep our advantage and to grow exponentially, we had to conduct experiments at breakneck speed as part of our fail-fast strategy. Our only bottleneck was now the single system, which had to be refactored after every major addition to the codebase and was getting messier and more complex with each new feature launch.

The team size grew, and so did the variety of coding styles, even though we tried to enforce a common set of coding principles. Because of the monolith, there was no clear set of ownership boundaries.

As we scaled, our AWS bill almost doubled: to cater to a higher RPM on the prediction feature, the entire monolithic service had to be scaled up.

After months of procrastination, we decided to swallow the pill and solve the above-mentioned bottlenecks by adopting a microservices-based architecture. The move promised several things:

1. A bug in one feature could no longer bring down the entire product, the way the scorecard add-on had crashed our monolith.

2. Smaller, independent codebases could be modified and deployed separately, letting us experiment at speed without refactoring the whole system after every major addition.

3. Independent codebases also promised to provide mutually exclusive ownership.

4. The biggest advantage of microservices in our case was the reduction in scaling costs. With separate services, we would scale only the overloaded services, not the entire system.

However, the implementation was easier said than done. Experiments could not be stopped, our competition was gaining on us, and the shortage of manpower was the biggest pain.

So, we decided to do it service by service, instead of breaking the monolith all in one go.

The first service we segregated was our notification delivery system. For a product that provides live engagement for sports events, notifications are obviously a crucial part of the experience.

Earlier, we had a module in our monolith that called the GCM/APNS services directly with the notification payload. While segregating the notification delivery system, we also decided to develop a logging and monitoring system through which we could micro-monitor notification delivery to each user's device.
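To make the shape of such a worker concrete, here is a minimal sketch, not our production code: it assumes the firebase-admin SDK (FCM being GCM's successor) and a hypothetical NotificationJob shape, and it emits a structured log per delivery attempt so a monitoring pipeline can pick it up.

```typescript
// Hedged sketch of a notification delivery worker. Everything here
// (NotificationJob, the log schema) is illustrative, not Rooter's code.
import * as admin from "firebase-admin";

admin.initializeApp(); // assumes GOOGLE_APPLICATION_CREDENTIALS is set

interface NotificationJob {
  deviceToken: string; // FCM registration token for the target device
  title: string;
  body: string;
}

async function deliver(job: NotificationJob): Promise<void> {
  try {
    // send() resolves to a message ID on successful handoff to FCM.
    const messageId = await admin.messaging().send({
      token: job.deviceToken,
      notification: { title: job.title, body: job.body },
    });
    // Structured success log: a monitoring system can aggregate these
    // to micro-monitor delivery per device.
    console.log(JSON.stringify({ event: "delivered", messageId, token: job.deviceToken }));
  } catch (err) {
    console.error(JSON.stringify({ event: "failed", token: job.deviceToken, error: String(err) }));
  }
}
```

Logging both outcomes per token is what makes micro-monitoring possible: delivery rates can then be sliced per device, per campaign, or per app version.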

Subsequently, we also moved our MySQL unique keys to Redis hashes using the same ETL service, which increased write throughput multifold.
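For illustration, here is a hedged sketch of what an insert-if-new check can look like after such a move: a MySQL UNIQUE constraint becomes an HSETNX on a Redis hash, which writes a field only if it does not already exist. The key and field names are assumptions, and ioredis is used as the client.

```typescript
// Hedged sketch: uniqueness enforcement moved from a MySQL UNIQUE key
// to a Redis hash. Key/field layout is illustrative.
import Redis from "ioredis";

const redis = new Redis(); // assumes a reachable Redis instance

async function registerDeviceToken(userId: string, token: string): Promise<boolean> {
  // HSETNX returns 1 if the field was newly created, 0 if it already
  // existed, mirroring "insert succeeded / duplicate key" in MySQL.
  const isNew = await redis.hsetnx("device_tokens", token, userId);
  return isNew === 1;
}
```

The check becomes a single in-memory operation, which is where the multifold throughput gain over a disk-backed unique index comes from.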

This service powers the Rooter Live Fantasy game. It is an auto-scaling fleet of EC2 machines running Node.js servers that acts as the internet-facing API server for the game.

Our Live Fantasy Service handles all the heavy lifting of the Rooter Live Fantasy game, providing a seamless experience to an average of 40k concurrent users during a live match. The service incorporates a Redis cache, MySQL persistent storage, and a fleet of auto-scalable EC2 servers, optimized for high network I/O and concurrent connections, behind an Elastic Load Balancer.

This service provides real-time (latency of ~100 ms) scoring for users in a LIVE fantasy game. Suppose a user has selected and powered up a player (say, Lionel Messi) and Messi scores a goal, which he does consistently. The game being a “LIVE” fantasy, all the users who hold this player (potentially on the order of 100k) need to be awarded the points, timelines need to be updated, and leaderboards need to reflect the event. All of this has to happen within milliseconds of the goal being scored at Camp Nou, for the perfect user experience that is the trademark of Rooter.
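A rough sketch of that fan-out, with an assumed key layout (a Redis set of user IDs per picked player, a sorted set per match leaderboard) rather than our actual schema:

```typescript
// Hedged sketch of the scoring fan-out. Key names are assumptions.
import Redis from "ioredis";

const redis = new Redis();

async function onPlayerScored(matchId: string, playerId: string, points: number): Promise<void> {
  // "pick:<match>:<player>" holds the IDs of every user who picked this player.
  const userIds = await redis.smembers(`pick:${matchId}:${playerId}`);

  // Batch all leaderboard increments into one pipeline so the whole
  // fan-out costs a single round trip, even for ~100k users.
  const pipeline = redis.pipeline();
  for (const userId of userIds) {
    pipeline.zincrby(`leaderboard:${matchId}`, points, userId);
  }
  await pipeline.exec();
}
```

Keeping the picks and the leaderboard in Redis is what makes a ~100 ms budget plausible; the same event could then be written to the MySQL persistent storage off the hot path.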

We update the collections on Firestore after listening to the MySQL changelogs of the stats relational DB, which in turn updates the scores on all client scorecards that are subscribed to that document.
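A minimal sketch of the push side under assumed names: a consumer of the MySQL changelog (the CDC reader itself is elided here) mirrors each stat row into a Firestore document, and every subscribed client scorecard updates automatically. The document layout is hypothetical.

```typescript
// Hedged sketch: mirror a stats changelog event into Firestore so that
// subscribed clients update in real time. Collection/field names are assumed.
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

interface StatChange {
  matchId: string;
  playerId: string;
  goals: number;
  assists: number;
}

async function onChangelogEvent(change: StatChange): Promise<void> {
  // merge: true updates only the changed fields on the player document;
  // every client listening on this document receives the new snapshot.
  await db
    .collection("matches").doc(change.matchId)
    .collection("players").doc(change.playerId)
    .set({ goals: change.goals, assists: change.assists }, { merge: true });
}
```

On the client side, a scorecard would subscribe with Firestore's onSnapshot listener on the relevant documents, which replaces any hand-rolled socket layer.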

Gamification is an essential part of our user experience, and a good gamification system can do wonders for user retention. By a good gamification system, we mean one where all the profile data points and accomplishments are updated in real time (<100 ms) for a user base that is now touching the million mark.

This service handles real-time gratifications for the users when they redeem their Rooter coins for the variety of coupons available on our app, provided by our numerous partner brands.
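As an illustration of the core invariant such a service must hold (coins can never be spent twice), here is a hedged sketch of a redemption as a single MySQL transaction. The table and column names are assumptions, and mysql2 is used as the client.

```typescript
// Hedged sketch of an atomic coin redemption. Schema is illustrative.
import mysql from "mysql2/promise";

async function redeem(userId: number, couponId: number, price: number): Promise<void> {
  const conn = await mysql.createConnection({ host: "localhost", user: "root", database: "rooter" });
  try {
    await conn.beginTransaction();

    // Conditional decrement: affects 0 rows if the balance is insufficient,
    // so overdrafts are impossible even under concurrent redemptions.
    const [result] = await conn.execute(
      "UPDATE wallets SET coins = coins - ? WHERE user_id = ? AND coins >= ?",
      [price, userId, price]
    );
    if ((result as { affectedRows: number }).affectedRows === 0) {
      throw new Error("insufficient balance");
    }

    await conn.execute(
      "INSERT INTO redemptions (user_id, coupon_id) VALUES (?, ?)",
      [userId, couponId]
    );
    await conn.commit();
  } catch (err) {
    await conn.rollback();
    throw err;
  } finally {
    await conn.end();
  }
}
```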

Here is a detailed diagram of our microservices-based architecture.

If you are a passionate techie and a sports fan and want to be a part of Rooter Tech as we enter the next level of scale, drop a mail to arpan@rooter.io or akshat@rooter.io with your GitHub profile link.

