How to get started with system design. Read these articles (part 2):

10) Scaling Hashnode feed: https://lnkd.in/eaf4K_-4
9) Microservices at Netflix: https://lnkd.in/eZSM3CRB
8) Service discovery: https://lnkd.in/eCYYwQfU
7) Quotient filter: https://lnkd.in/e6nTXKDY
6) What happens when you type a URL: https://lnkd.in/eusuDn5z
5) Hinted handoff: https://lnkd.in/eM-CqK9f
4) Gossip protocol: https://lnkd.in/eBtqUVB6
3) Troubleshooting access to a website: https://lnkd.in/eER3Ci-X
2) Tips to pass the system design interview: https://lnkd.in/eb-PS2nc
1) Code splitting: https://lnkd.in/edwCKJgQ

What would you add to this list?

Also, I write a weekly newsletter to teach system design. Join 36,001+ like-minded software engineers: https://lnkd.in/ed4URjuY

Part 1: https://lnkd.in/euS_bPK9

---

If you liked this post:

Source: @Neo K.
Repost to help others find it.
Save this post for future reference.

#coding #programming #softwaredevelopment #systemdesign
-
**Scaling Real-Time Messaging at Slack**

I spent a couple of hours reading the engineering blog post on Slack's real-time messaging architecture: https://lnkd.in/e4EKEkRw. Here is a summary and key takeaways:

- **Java, Hacklang & JavaScript:** The core services of Slack's real-time messaging architecture, such as Channel Servers, Gateway Servers, Admin Servers, and Presence Servers, are written in Java. Hacklang powers the Webapp backend, which handles user authentication (user token fetching) and websocket connection setup, while JavaScript runs the front end, ensuring a smooth and interactive user experience.

- **Channel Servers (CS):** These in-memory powerhouses manage channel histories and are mapped via consistent hashing, serving up to 16 million channels each. Their resilience? A CS can be replaced in under 20 seconds, keeping latency to a bare minimum.

- **Gateway Servers (GS):** The GSs are the diligent gatekeepers, holding user info and websocket subscriptions. Spread across regions, they ensure swift connections and seamless failovers, thanks to a robust draining mechanism.

- **Admin Servers (AS) & Presence Servers (PS):** ASs are stateless, in-memory servers managing the communication between the Webapp backend and the CSs; an AS processes and routes messages to the appropriate CS. PSs track who's online, lighting up those green dots we all look for.

- **Envoy:** This open-source proxy is the unsung hero, balancing loads and terminating TLS, ensuring secure and efficient client-server handshakes.

- **Pub/Sub Architecture:** At the heart of it all, a publish-subscribe model ensures messages and events are delivered in real time, keeping everyone on the same page. But what about transient events, like the "user is typing" indicator in the chat? They follow a reverse flow and are not persisted in the database, showcasing Slack's flexibility in handling various types of real-time data.

**Scalability Insight:** Each component, from Envoy to the edge regions, is a cog in the wheel of scalability. The pub/sub model, combined with consistent hashing, allows Slack to scale linearly, ready to serve an ever-growing customer base.

The use of different servers in Slack's real-time messaging architecture is essential for handling the various tasks involved in efficient, scalable, and reliable communication.

Follow Ganesh Sadanala for more on System Design and Cloud Computing. Thank you! Share your thoughts on the complexities of maintaining such a robust real-time communication system.

#SlackEngineering #SystemDesign #Scalability #RealTimeMessaging #CloudComputing #TechInnovation

image source: https://lnkd.in/e4EKEkRw
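Consistent hashing is what makes the sub-20-second Channel Server swap cheap: only the channels owned by the replaced node move, everything else keeps its assignment. Here is a minimal sketch of a consistent-hash ring in Go; the names (`Ring`, `AddServer`, `cs-1`) and the FNV hash are my own illustrative choices, not Slack's actual implementation:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// Ring is a minimal consistent-hash ring: servers sit on a 32-bit
// circle, and each channel maps to the first server at or after
// the channel's own hash position.
type Ring struct {
	keys    []uint32          // sorted server positions on the circle
	servers map[uint32]string // position -> server name
}

func NewRing() *Ring { return &Ring{servers: map[uint32]string{}} }

func hash32(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

func (r *Ring) AddServer(name string) {
	pos := hash32(name)
	r.servers[pos] = name
	r.keys = append(r.keys, pos)
	sort.Slice(r.keys, func(i, j int) bool { return r.keys[i] < r.keys[j] })
}

// Lookup returns the server that owns the given channel.
func (r *Ring) Lookup(channelID string) string {
	pos := hash32(channelID)
	i := sort.Search(len(r.keys), func(i int) bool { return r.keys[i] >= pos })
	if i == len(r.keys) { // wrap around the circle
		i = 0
	}
	return r.servers[r.keys[i]]
}

func main() {
	ring := NewRing()
	for _, s := range []string{"cs-1", "cs-2", "cs-3"} {
		ring.AddServer(s)
	}
	// Removing cs-2 would only remap channels whose hashes fall
	// in cs-2's arc of the circle.
	fmt.Println(ring.Lookup("channel-general"))
}
```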
-
Profiles are one of the most underrated Docker Compose features. They let you keep multiple configurations in the same file and run only a subset of the defined services. That means you can start just the containers you need: for instance, if you're working on the frontend, run only the backend services, and the other way around. In my latest article I describe in practice both how to do it and why you should, backed by a recent refactoring in my samples. Curious if you're using Docker Compose Profiles and, if you do, for which use cases. https://lnkd.in/d48TGnKK
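For context, a minimal sketch of what profiles look like in a compose file (the service and profile names here are illustrative, not taken from the article):

```yaml
services:
  api:
    image: my-api:latest       # no profiles key: always started
  postgres:
    image: postgres:16
    profiles: ["backend"]      # only started when the profile is active
  frontend:
    image: my-frontend:latest
    profiles: ["frontend"]
```

Running `docker compose --profile backend up` starts `api` and `postgres` but skips `frontend`; services without a `profiles` key are started regardless of which profiles are active.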
Docker Compose Profile, one of the most useful and underrated features - Event-Driven.io
event-driven.io
-
Entry Level DevOps Engineer | Passionate about Infrastructure as Code | Building Secure & Scalable Systems
Docker Port Expose: A Practical Guide

Docker has revolutionized the way we develop, deploy, and manage applications. One of the key aspects of Docker is its networking capabilities, which allow containers to communicate with each other and the outside world. A fundamental part of this networking is exposing ports. In this guide, we'll explore what it means to expose ports in Docker, why it's important, and how to do it effectively.

What Does Exposing Ports Mean?

When you run a Docker container, it's isolated from the rest of your system and other containers by default. This isolation is great for security and resource management, but it also means your container can't communicate with the outside world unless you explicitly allow it to. Exposing ports is the process of making a container's internal ports accessible on the host machine or to other containers.

Why Expose Ports?

There are several reasons why you might need to expose ports in Docker:

1. **External Access**: If your container runs a web server, database, or any service that needs to be accessed from outside the Docker environment, you need to expose the relevant ports.
2. **Inter-Container Communication**: In multi-container applications, services need to talk to each other. For example, a web application might need to connect to a database container.
3. **Debugging and Monitoring**: Exposing ports can also be useful for debugging or monitoring containerized applications.

How to Expose Ports

##### Using the EXPOSE Instruction

The `EXPOSE` instruction in a Dockerfile indicates that the container listens on the specified network ports at runtime. Here's an example:

```dockerfile
FROM node:14

# Copy application code
COPY . /app
WORKDIR /app

# Install dependencies
RUN npm install

# Expose port 3000
EXPOSE 3000

# Start the application
CMD ["npm", "start"]
```

In this Dockerfile, we're creating a Node.js application. The `EXPOSE 3000` line indicates that the application listens on port 3000. However, this doesn't actually publish the port to the host machine; it's more of a documentation step that other developers can see.

##### Using the -p or --publish Flag

To make the container's port accessible from the host machine, you need to use the `-p` or `--publish` flag with `docker run`. This maps a port on your host to a port on the container. Here's how you can run the above Docker container and expose port 3000:

```sh
docker run -p 3000:3000 my-node-app
```

In this command, `-p 3000:3000` maps port 3000 on the host to port 3000 on the container. This means you can access the application running in the container via `http://localhost:3000`.

You can also map different ports, if needed:

```sh
docker run -p 8080:3000 my-node-app
```

This command maps port 8080 on the host to port 3000 on the container. Now, the application is accessible via `http://localhost:8080`.
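One related standard `docker run` flag worth knowing, not covered in the post above: `-P` (capital P) publishes every port declared with `EXPOSE` onto randomly chosen high host ports, which is handy for quick tests:

```sh
# Publish all EXPOSEd ports to ephemeral host ports
docker run -P my-node-app

# See which host port was mapped to container port 3000
docker port <container-id> 3000
```

This is also why `EXPOSE` is more than pure documentation: it is the list of ports `-P` publishes.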
-
This is the last post in the tutorial "develop-a-phone-validator-with-go-using-dev-container". In this post, we implement a phone validator API with the concepts we learned in part 2, and leverage an N-tier architecture to structure the API. This is the end of the tutorial, but there are still things we can explore with this project. For example:

- Add OpenAPI documentation.
- Write unit tests for the API service.
- Validate the user input in the controller.

My Blog: https://lnkd.in/gKE5RYHg
Medium: https://lnkd.in/gKj3M-t5

Previous posts:
Part 1: https://lnkd.in/gg69hhpH
Part 2: https://lnkd.in/gkwPrSYq

#docker #golang #golangdeveloper #backend #backenddeveloper #backenddevelopment #engineering #softwaredevelopment #containerization #vscode #microsoft #softwareengineering
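As a rough illustration of the N-tier layering the post describes, here is a sketch in Go where each tier only talks to the one below it; the types, names, and regex are my own invention, not the tutorial's actual code:

```go
package main

import (
	"fmt"
	"regexp"
)

// --- Service tier: business logic, no HTTP concerns ---

type PhoneService struct {
	pattern *regexp.Regexp
}

func NewPhoneService() *PhoneService {
	// Naive E.164-style check, purely illustrative.
	return &PhoneService{pattern: regexp.MustCompile(`^\+?[1-9]\d{6,14}$`)}
}

func (s *PhoneService) IsValid(number string) bool {
	return s.pattern.MatchString(number)
}

// --- Controller tier: translates transport input into service calls ---

type PhoneController struct {
	svc *PhoneService
}

func (c *PhoneController) Validate(number string) string {
	if c.svc.IsValid(number) {
		return fmt.Sprintf("%s is valid", number)
	}
	return fmt.Sprintf("%s is invalid", number)
}

func main() {
	ctrl := &PhoneController{svc: NewPhoneService()}
	fmt.Println(ctrl.Validate("+14155550123"))
}
```

The payoff of the layering is testability: the service can be unit-tested without any HTTP plumbing, and the controller can be tested against a stubbed service.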
Develop an API with Go using dev container - 3
daskinnyman.vercel.app
-
SDE Intern @Onehash | Ex-SDE Intern @Southguild Tech | @FirstCotton | @Temples of India | @JugaadHai |Freelance Developer | NextJs | Nodejs | Python | Flutter | AWS | GCP | Azure | CSE'24
Day 21 System Design & DSA Journey Update

Today's exploration delved into various communication patterns, each offering unique capabilities and advantages in building robust systems. Here's a detailed breakdown of the communication patterns learned:

1. REST (Representational State Transfer):
- Description: REST is an architectural style for designing networked applications. It relies on stateless communication between clients and servers, using standard HTTP methods like GET, POST, PUT, and DELETE for data manipulation.
- Advantages: Simplicity, scalability, and compatibility with existing web standards.
- Documentation: https://restfulapi.net/

2. GraphQL:
- Description: GraphQL is a query language for APIs and a runtime for executing those queries. It allows clients to request only the data they need, facilitating efficient data fetching.
- Advantages: Increased flexibility, reduced over-fetching and under-fetching, and improved performance for client-server communication.
- Documentation: https://graphql.org/learn/

3. gRPC (gRPC Remote Procedure Call):
- Description: gRPC is a high-performance RPC framework developed by Google. It uses Protocol Buffers (protobuf) as the Interface Definition Language (IDL) for describing both the service interface and the structure of the payload messages.
- Advantages: Strongly typed contracts, efficient binary serialization, and support for bidirectional streaming.
- Documentation: https://grpc.io/docs/

4. tRPC (Typed RPC):
- Description: tRPC is a TypeScript-first RPC framework that emphasizes type safety and developer productivity. It simplifies API development by automatically generating TypeScript types for endpoints and payloads.
- Advantages: Type safety, ease of use, and seamless integration with TypeScript projects.
- Documentation: https://trpc.io/docs

By exploring these communication patterns, I gained insight into how different systems communicate and interact, each with its own set of strengths and best use cases. Excited to continue this journey of learning and exploration!

#systemdesign #sde #restapi #graphql #grpc #fullstackdeveloper #learninginprogress #codingcommunity
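To make the REST item concrete, here is a minimal sketch using Go's standard `net/http` package; the route and `User` payload are invented for illustration:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type User struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

// GET /users returns a fixed list; a real service would hit a datastore.
func listUsers(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodGet {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode([]User{{ID: 1, Name: "Ada"}})
}

func main() {
	// Stateless: every request carries everything the server needs,
	// which is the REST property that makes horizontal scaling easy.
	http.HandleFunc("/users", listUsers)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```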
What is REST?: REST API Tutorial
restfulapi.net
-
DevOps @Onsurity || AWS Certified Cloud Practitioner || DevOps Engineer || Cloud || Technology || AWS || Business
Docker Compose Cheatsheet

Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to describe your application's services in a YAML file and then have Docker Compose set up the containers and networks needed to run your app. This cheatsheet provides a quick reference to the key concepts and commands you'll need to use Docker Compose effectively.

Key Concepts

- Services: The building blocks of a Docker Compose application. A service is defined in a YAML file and specifies the image, ports, volumes, environment variables, and other configuration options for a container.
- Images: Docker Compose can use images from Docker Hub or a private registry.
- Ports: You can publish ports from your containers to the host machine so that you can access them from your development environment.
- Environment Variables: You can set environment variables for your containers to provide configuration data.
- Volumes: Volumes allow you to persist data outside of containers. This is important for data that you don't want to lose when a container is restarted.
- Networks: Networks allow containers to communicate with each other.

Basic commands you need to know:

- docker-compose up: Starts all of the services defined in your docker-compose.yaml file.
- docker-compose down: Stops and removes all of the containers created by Docker Compose.
- docker-compose build: Builds the images for your services.
- docker-compose logs: Shows the logs from your containers.
- docker-compose exec: Runs a command inside a container.

Benefits of Using Docker Compose

- Simplifies development: Docker Compose makes it easy to develop and test multi-container applications.
- Reduces boilerplate: You don't need to write a lot of shell scripts to manage your containers.
- Improves reproducibility: Docker Compose ensures that your application runs the same way on every machine.

Getting Started

To get started with Docker Compose, you'll need to have Docker installed on your machine. You can then download the Docker Compose binary from the Docker website. Once you have Docker Compose installed, you can create a docker-compose.yaml file to define your application's services. Here is a simple example of a docker-compose.yaml file for a web application:

```yaml
version: '3'
services:
  web:
    build: .
    ports:
      - "80:8000"
    volumes:
      - ./:/app
```

This YAML file defines a service called web that builds its image from the current directory, maps port 80 on the host machine to port 8000 in the container, and mounts the current directory as a volume at the /app directory inside the container.

I hope this cheatsheet is helpful!

#docker #dockercompose #devops #webdevelopment
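A typical day-to-day workflow with the commands above might look like this; the `web` service name comes from the example file, and all flags shown are standard Compose options:

```sh
docker-compose build            # build the images for all services
docker-compose up -d            # start everything in the background
docker-compose logs -f web      # follow the web service's logs
docker-compose exec web sh      # open a shell inside the running container
docker-compose down             # stop and remove the containers
```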
-
Head of Academy, Business Operations & Engineering @ SELISE Digital Platforms | Training Industry 4.0 Developers | Creating Customer Success Stories
In my engineering journey, I have developed backend solutions in different architectures. Among them, clean architecture has always been my favorite. It is a design principle that enforces modular layering of code, respecting the single responsibility principle and separation of concerns.

I have created a small monolithic application whose architectural design is deeply influenced by the principles of clean architecture, ensuring a structured and organized development process. Check out my repo to delve deep into the code: https://lnkd.in/gBpYt7YU

In this repo, you will find some basic modules such as email, blob, identity, language, and notification, with minimum viable endpoints and functionalities.

A short explanation of clean architecture:

- Domain Layer: At the core of the solution lies the Domain layer. It serves as the bedrock upon which modules are constructed, defining fundamental elements such as entities, value objects, aggregates, domain events, exceptions, and repository interfaces.

- Application Layer: Positioned directly above the Domain layer, the Application layer orchestrates the business logic and vital use cases of the application. It encompasses the definition of commands, queries, DTOs (Data Transfer Objects), services, extensions, providers, constants, attributes, and middlewares, along with their corresponding interfaces and implementations. This solution follows the CQRS pattern, so you will find commands and queries, complete with their definitions, handlers, and validators, in dedicated folders.

- Infrastructure Layer: The Infrastructure layer serves as the bridge between repositories and database connections. This layer primarily houses repository implementations and database contexts.

- Presentation Layer: As the outermost layer, the Presentation layer serves as the entry point to the system. It comprises controllers that facilitate the execution of commands and queries. In line with the CQRS pattern, you will find command and query controllers within this layer, each action adorned with relevant attributes and endpoint routes derived from a constant class.

Let me know your thoughts on my take on clean architecture. This is based on Microsoft's official guidelines for creating DDD-oriented microservices: https://lnkd.in/gMT9-aVq

#cleanarchitecture #domaindrivendesign #singleresponsibilityprinciple
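The dependency rule behind these layers is easier to see as a folder layout. This is my own illustrative sketch of such a solution, not the repo's actual structure; the key point is that dependencies only point inward, toward Domain:

```
src/
├── Domain/          # entities, value objects, domain events, repository interfaces
├── Application/     # commands, queries, handlers, validators, DTOs (depends on Domain)
├── Infrastructure/  # repository implementations, DB contexts (depends on Application)
└── Presentation/    # command/query controllers, endpoint routes (depends on Application)
```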
GitHub - asadullahrifat89/dotnet-essential-services: Some essential services required to build a software that involves identity, authentication, blob storage, emails, notifications, and ui language management.
github.com
-
Still looking for that product idea? Here is an often-overlooked but more than suitable field for your next project: Developer Experience.

In theory, you don't need to invent something new to start something on your own. Instead:

1. Use building blocks that are already available
2. Improve the experience people have when using them by a huge margin
3. Charge for the value you create on top of something already existing, or for the time you save by performing otherwise complex processes

One of the best examples: Vercel. Vercel is a DX layer on top of otherwise well-known AWS infrastructure, with a thin layer on top and some well-crafted additions. In theory, you can achieve the same with relatively complex CI/CD pipelines and open-source monitoring software, but why bother implementing those yourself if you can just let a platform do it for you?

Need another Vercel example?

Vercel Postgres -> a thin layer on top of Neon.
Vercel KV -> a thin layer on top of Upstash.

The markup? Not bad.

Need an original idea? Ask yourself:

1. What do I use every day that I wish was way easier to use?
2. Iterate on the idea and think about whether you can abstract it away
3. Provide a solution that is so easy to use out of the box that people are willing to pay for it
4. Now you've got yourself a business idea

What you probably don't realize: many of us have already built custom layers on top of the infrastructure we use every day, be it shell or Python scripts, CI/CD pipeline definitions, or even fully-fledged custom software. You can take inspiration from that and build it into a fully-fledged product. Abstract away and generalize the obvious parts, add configuration, think about sensible defaults.

It's definitely neither for everyone, nor is it "super easy" to build in a weekend, but it's an interesting field to think about.
-
**The Great Repository Debate: Monorepos vs. Microrepos**

How do companies like Google & Meta manage vast seas of code? Or perhaps you're figuring out the best way to arrange your own project's code? The way we handle our code has monumental consequences, from team efficiency to reliability.

**Monorepo Insights**

*Surprising fact:* The monorepo isn't a newcomer. Both Linux and Windows use a monorepo approach.

*Visualize this:* a big library housing every line of code. That's the route giants like Google, Facebook, and Uber took. Google's monorepo, for instance, boasts billions of lines of code!

*Pros:*
1. **Ease of Cross-Project Changes:** Altering a utility function across multiple services? A single commit in one spot does the trick.
2. **Unified Dependency Management:** No more juggling varying package versions across multiple repositories.
3. **Consistency:** Code reviews in a monorepo follow a set standard, removing the chaos.
4. **Code Reuse:** Spotting and repurposing pre-existing functionality is straightforward.

*Cons:*
1. **Demands Careful Planning:** Google's dedicated build tool, Blaze, is a testament to this.
2. **Potentially Sluggish CI/CD Pipelines:** As the repository grows, managing it becomes challenging.
3. **Initial Overwhelm for Newcomers:** A gigantic repository can be daunting, but methodical documentation and onboarding can remedy this.
4. **Limited Customization:** Conflicting tools or major changes necessitate a more holistic view of the codebase.

However, tools like Google's Bazel and transparent communication can restore the balance.

**Microrepo Insights**

*Shifting focus:* microrepos treat each component as an independent unit with a dedicated repository. Big names like Amazon & Netflix are fans of this approach.

*Pros:*
1. **Independence:** Teams have the autonomy to oversee and evolve their repositories.
2. **Risk Isolation:** A hiccup in one repository doesn't jeopardize the others.
3. **Flexibility:** Teams can cherry-pick their tools, creating a custom environment.
4. **Clear Ownership:** Every team owns and manages its repository.

*Cons:*
1. **Coordination Hiccups:** Synchronizing changes across diverse repositories demands excellent collaboration and specific tooling.
2. **Dependency Challenges:** Artifact registries like Nexus or Artifactory can help.
3. **Variable Code Standards:** Organization-level coding norms are pivotal.
4. **Code Duplication:** Avoid this pitfall by extracting shared libraries.

**Monorepo or Microrepos?**

The ideal repository approach is influenced by team dynamics, organizational scale, project type, and corporate ethos. Monorepos excel in collaborative environments requiring uniformity, ideal for sizable teams handling intertwined projects. They ensure consistent code and ease cross-project alterations. Conversely, microrepos suit firms valuing team autonomy; they ensure flexibility and risk containment.

So, which resonates with you? Share your repository strategy and its underlying reasons in the comments.
-
Happy Sunday LI! I wanted to start this week off right, since my side project work didn't end the way I wanted. I love that reading is a habit, workout, and accomplishment all in the same act. With each article or book I finish, I increase brain functionality, deepen my vocabulary, build a healthy habit, and expand my knowledge. Most of all, reading means I completed a goal I set for myself.

With the new week comes a new topic within our API-centered theme, so we are moving on to another type of RPC protocol, titled gRPC. Just a quick recap: RPC stands for remote procedure call, and this is the protocol that isn't quite a protocol but rather a connection point/function to underlying backend servers. The 'g' in gRPC stands for Google; it was originally built as an internal framework called Stubby to connect the many microservices running in Google's different data centers. They later rebuilt Stubby as an open-source project, and gRPC was born. I love learning that some software started as a company's internal project to improve its systems, and then they said "Hey, let's share this with the world since it worked so well for us".

The author did a great job breaking down how gRPC differs from XML-RPC and JSON-RPC through its use of 'protocol buffers', which produce a parsable byte stream representing structured data. The article goes on to cover the three main components of gRPC: the service interface definition, the server, and the client. I can see the need to read a few more articles to get a clear grasp on the entirety of the gRPC protocol, but I'm already intrigued by its internal Google roots. This article is a great entry point into the author's series covering the architecture of microservices, but I will leave that for you to dive into on your own.

Have you worked with gRPC yet? If so, what did you think about it? Happy Coding, Y'all!

#softwareengineer #softwaredeveloper #frontenddeveloper #codinglife #skillsdevelopment #bettereveryday #js #ts

https://lnkd.in/g3fUuG6y
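For a feel of the "service interface definition" component, this is the kind of Protocol Buffers IDL a gRPC service starts from; it's the generic hello-world shape, not code from the linked article:

```proto
syntax = "proto3";

// The service interface definition: the contract from which both
// the server and client stubs are generated.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1; // field numbers drive the binary wire format
}

message HelloReply {
  string message = 1;
}
```

Running this file through `protoc` with a gRPC plugin generates the typed server and client code in your language of choice, which is where the other two components come from.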
Microservice Architecture - Communications Patterns Part 4 - gRPC APIs
medium.com