
[00:00 – 00:00]

Hello everyone and welcome to my channel.

[00:00 – 00:03]

Today we’re diving into four real-time communication technologies.

[00:03 – 00:05]

HTTP polling, Server-Sent Events (SSE), WebSocket, and webhooks.

[00:05 – 00:06]

These technologies play a crucial role in our daily online world.

[00:06 – 00:08]

Ever wondered how they work?

[00:08 – 00:09]

What are their strengths and limitations?

[00:09 – 00:10]

Or what scenarios are they best suited for?

[00:10 – 00:12]

If these questions catch your interest, then this video is made just for you.

[00:13 – 00:15]

In this video, we’ll break down how these technologies operate and use interactive diagrams to help you understand how they facilitate real-time data transfer behind the scenes.

[00:15 – 00:17]

We’ll also explore the advantages and limitations of each technology.

[00:17 – 00:19]

as well as their appropriate use cases, and demonstrate how they’re applied through real-life examples.

[00:19 – 00:22]

Whether you’re a developer, a tech enthusiast, or just someone curious about technology, I’m confident you’ll find valuable insights in this video.

[00:22 – 00:23]

So, stay tuned as we delve deeper into these fascinating technologies.

[00:24 – 00:24]

Let’s first explore HTTP polling.



With HTTP polling, the client periodically sends requests to the server to fetch the latest data. There are two main types of HTTP polling: short polling and long polling. Short polling works quite straightforwardly: the client regularly sends HTTP requests to the server to check for new data, and the server, upon receiving each request, immediately returns a response regardless of whether there is new data. Here’s how short polling works. The client sends a request to the server at set intervals, say every 15 seconds. When the server receives a request, it checks for new data and sends it back if available; if not, it sends an empty response. The client processes any received data and then waits for the next interval to send another request. This method offers only near real-time data retrieval, meaning there can be some delay between updates. Long polling improves upon short polling by reducing the number of requests. In this method, after the client sends a request, if there is no new data, the server doesn’t respond immediately. Instead, it holds the request open until new data becomes available or a certain timeout limit is reached. Here’s the interaction for long polling.

The client sends a request to the server. If there is no data, the server doesn’t respond immediately but waits while keeping the connection open. Once new data is available, the server responds right away. If a timeout occurs, for example after 100 seconds with no data, it sends a timeout notification. The client, upon receiving data or a timeout notification, immediately makes another request. This method allows clients to receive updates in real time. Now, let’s examine the advantages and limitations of HTTP polling. Advantages: Wide compatibility. Based on the HTTP protocol, polling is supported on a broad range of platforms. Easy implementation. It’s straightforward to implement without special tools or libraries. Timely communication. Short polling offers near real-time communication, while long polling provides actual real-time updates. Limitations: Resource consumption. Short polling can lead to numerous ineffective requests, wasting bandwidth and server resources; long polling reduces the frequency of requests but still involves repeated connections. Data delay. With short polling, updates might occur between requests, leading to delays. Server load. Although long polling reduces the number of requests, maintaining open connections under high concurrency can burden the server.
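The two polling loops described above can be sketched as a minimal client. This is a sketch only: the hypothetical fetch functions below stand in for real HTTP calls to a server endpoint, stubbed here with an in-memory queue so the example runs standalone.

```python
import time

# Stub for the server's pending data; a real client would hit an HTTP endpoint.
server_queue = []

def short_poll_once():
    # Short polling: the server answers immediately,
    # returning data if present and an empty response (None) otherwise.
    return server_queue.pop(0) if server_queue else None

def long_poll_once(timeout_s, step_s=0.01):
    # Long polling: the server holds the request open until data
    # arrives or the timeout is reached, then answers.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if server_queue:
            return server_queue.pop(0)
        time.sleep(step_s)
    return "timeout"

# Short polling with no data queued yields an empty response.
assert short_poll_once() is None

# Long polling with no data blocks briefly, then reports a timeout.
assert long_poll_once(timeout_s=0.05) == "timeout"
```

In a real client, the short-polling call would sit inside a loop with a sleep between iterations, while the long-polling call would be reissued immediately after each response or timeout.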

Suitable scenarios. Considering these factors, both short and long polling are viable options for scenarios like user notifications or status updates where concurrency is limited. Short polling is more suitable for scenarios that don’t require stringent real-time updates and have small data volumes; long polling is better for scenarios needing real-time updates. In practice, the polling intervals and timeout settings should be adjusted based on business needs to balance resource expenditure against response timeliness. Now, let’s look at two typical cases of HTTP polling in action. Case one: configuration management system. The first example is a configuration or feature-flag management system, typically comprising a feature-flag service and a database. Administrators can modify settings through a user interface, with changes saved to the database. Many clients need to sync these settings quickly. HTTP polling, particularly short polling, is suitable here: clients regularly check for configuration updates to ensure they receive the latest settings. For more real-time synchronization, long polling could also be used. This example demonstrates how HTTP polling can facilitate timely synchronization of configuration data, providing a simple, easy-to-implement solution for scenarios with limited concurrency, small data volumes, and infrequent updates. Case two: social sharing app.

This case looks at the back-end design of a social sharing system. Users post pictures or videos to social sites, often a time-consuming operation. To avoid blocking the system, we use an asynchronous design. The back end consists of several parts:

1. Post service. Handles API requests from the front end, including job submissions and job status queries.

2. Post job consumer. Processes job messages, publishes content to social sites, and updates job status.

3. Message queue (MQ). Manages the asynchronous handling of jobs.

4. Database (DB). Stores job details and their processing status.

Here’s how it operates:

1. The user initiates a post request from the front end.

2. The post service receives the request, converts it into a job, stores it in the database, and sends a job message to the MQ.

3. At this point, the job has been successfully submitted, and the post service returns a job ID to the front end. Since inserting data into the database and MQ involves only lightweight operations, job submission is quick and does not block the front end.

4. The front end, having received a job ID, periodically polls the post service to check the job’s execution status.

5. If the job is queued or in progress, the front end receives an “in queue” or “in progress” status.

6. Concurrently, the post job consumer picks up the job from the MQ, begins publishing to the social site, and updates the final status, success or failure, in the database.

Seven, after one or several polling rounds, the front end eventually receives the result status of the posting operation. This design, combining asynchronous operations with polling, not only avoids blocking the post service but also supports near real-time responses on the front end. It is particularly suitable for handling long-running back-end tasks while maintaining a smooth user experience on the front end.

Now, let’s explore Server-Sent Events (SSE). SSE is an HTTP-based server push technology that allows clients to automatically receive updates from a server. With SSE, a client establishes a persistent, long-lived connection with the server, and through this connection the server continuously sends data to the client. It’s important to note that clients cannot send data back to the server via SSE. Here’s how SSE works. The client creates a new EventSource object to request and open a persistent connection to the server. Once the server accepts this connection, it keeps it open. Whenever there’s an update on the server side, it sends data update events through this connection. As the connection remains open, the server can send more events at any time, and the client listens for these events to receive data updates. Now, let’s delve into the advantages and limitations of SSE. Advantages: One, simplicity. Implementing SSE on the client side is straightforward, as most modern browsers natively support the EventSource interface. Two, efficiency. Since the connection is persistent, the server can send data anytime without frequent requests from the client, reducing HTTP request overhead. Three, automatic reconnection. If the connection is accidentally closed, the EventSource interface automatically tries to reconnect. Limitations: One, limited browser support. Although most modern browsers support SSE, some older or certain mobile browsers may not. Two, text only. SSE supports only text data, making it unsuitable for transferring binary data. Three, unidirectional communication. SSE supports only one-way data flow from server to client; if bidirectional communication is needed, other technologies like WebSocket might be considered. Suitable scenarios. Given these points, SSE is particularly suitable for applications where the server needs to continuously push data to the client but there is no need for data to flow from client to server, such as: One, real-time news feeds: automatically updated news headlines or blog posts. Two, live sports updates: real-time scores and stats during games. Three, real-time data in stock or forex markets: live streaming of prices or trading information.


Application examples. Let’s explore two typical scenarios where SSE is used. An interesting example is ChatGPT, a well-known artificial intelligence product of recent years. Many might not realize that SSE is the technology used behind the scenes when interacting with ChatGPT. For instance, suppose you ask ChatGPT to write an introductory article about Server-Sent Events. Once you submit your request, the server begins processing and gradually generates the article, typically in batches rather than all at once. During this process, ChatGPT’s server uses SSE to push parts of the article to the client in real time. Thus, if the response content is extensive, you might notice ChatGPT’s replies appearing step by step, which is SSE technology at work. Another typical application is the dashboard of an enterprise monitoring system, which can implement real-time data updates via SSE. Take Netflix’s open-source Hystrix as an example, a widely recognized component for microservice monitoring and circuit breaking. Hystrix comes with a web dashboard that displays real-time performance metrics of monitored services, along with details on circuit breaking. This dashboard uses SSE to push performance data in real time. For a specific demonstration of the Hystrix dashboard, you can refer to a video posted on YouTube 10 years ago by Ben Christensen, the creator of Hystrix, in which he shows how Netflix internally used the Hystrix dashboard to monitor real-time performance metrics and circuit-breaking situations of core services. The demonstration is very visual and effectively showcases the capabilities of SSE for real-time data pushing.

Next, let’s discuss WebSocket technology. WebSocket allows full-duplex communication over a single persistent connection. Unlike HTTP, once a WebSocket connection is established, it remains open, allowing data to flow bidirectionally between the client and server at any time, without needing to reestablish the connection for each exchange. Let’s look at the WebSocket interaction flowchart. The client begins the handshake process by sending a special HTTP request that informs the server the client wishes to upgrade the connection to WebSocket. If the server accepts the request, it confirms with an HTTP response, signaling the upgrade from HTTP to WebSocket. Once the protocol is upgraded, a full-duplex communication channel is established, and the client and server can start transmitting data in both directions. This connection remains open until either the client or the server decides to close it. It’s important to note that WebSocket uses its own protocol for communication, identified by URLs beginning with ws:// for the unsecured version and wss:// for the secured version, instead of http:// or https://.

Now, let’s analyze the advantages and limitations of WebSocket.

Advantages. One, low-latency communication. The persistent connection of WebSocket reduces overhead and latency, making it particularly suited for applications that require fast responses. Two, full-duplex communication. Clients and servers can send and receive information simultaneously, enhancing interactive efficiency. Three, reduced server load. Maintaining one persistent connection uses fewer resources than repeatedly establishing connections for each interaction. Limitations. One, compatibility issues. Although most modern browsers support WebSocket, some older systems might not. Two, security considerations. WebSocket may introduce security risks, as the persistently open connection is more susceptible to certain types of network attacks. Three, resource usage. If not managed properly, long-lived connections can lead to excessive use of server resources. Finally, let’s look at webhooks.

Webhooks are server-initiated, event-driven pushes of data in real time. This means that with webhooks, the client must also be able to handle HTTP requests, effectively acting as a server with an externally accessible URL. Here’s how webhooks work. The user first registers a webhook on the server, specifying the target URL for receiving data and subscribing to the events of interest. When a registered event occurs, the server automatically sends an HTTP POST request to the target URL, and the client receives and processes the data from this POST request accordingly. Let’s look at the advantages and limitations of webhooks. Advantages.

Real-time efficiency. Webhooks provide an extremely efficient way to transfer data, notifying the recipient immediately when events occur. Simplified architecture. By using webhooks, complex polling logic is avoided, reducing the client’s querying load on the server. Easy to implement and maintain. Webhooks are relatively simple to set up and maintain. Limitations. Dependence on external services. If the server receiving webhooks is unavailable, event information might be lost, so the sending side usually needs to support retries on error. Security concerns. It’s crucial to ensure the security of data transmitted via webhooks to prevent man-in-the-middle attacks and data breaches. Capacity limits. If events are triggered too frequently, the receiving server must have adequate processing capacity to handle potentially high request concurrency; additionally, slow client reception might consume significant server resources or even lead to blockages.
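The register-then-POST flow described earlier can be sketched in a few lines. This is an in-memory sketch only: the URL and event name are hypothetical, and deliver() stands in for the HTTP POST (with retries) that a real provider would send to each registered target.

```python
from collections import defaultdict

registrations = defaultdict(list)   # event name -> list of registered target URLs
inboxes = defaultdict(list)         # target URL -> payloads it has received

def register_webhook(event: str, target_url: str) -> None:
    """Step 1: the client registers a target URL for an event of interest."""
    registrations[event].append(target_url)

def deliver(target_url: str, payload: dict) -> None:
    """Stand-in for the HTTP POST a real provider sends (with error retries)."""
    inboxes[target_url].append(payload)

def fire_event(event: str, payload: dict) -> None:
    """Step 2: when the event occurs, push the payload to every subscriber."""
    for url in registrations[event]:
        deliver(url, payload)

# Hypothetical subscriber and event, for illustration only.
register_webhook("order.created", "https://example.com/hooks/orders")
fire_event("order.created", {"order_id": 1001})
assert inboxes["https://example.com/hooks/orders"] == [{"order_id": 1001}]
```

The key inversion compared with polling is visible here: the subscriber never asks; it only receives when something actually happens.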

Suitable scenarios. Webhooks are particularly suited for scenarios such as:

1. Automated tasks. For instance, automatically updating social media after an article is published in a content management system.

2. Integrating third-party services. Notifying merchants to process order statuses after transactions are completed on a payment gateway.

3. Monitoring systems. For real-time system alerts such as server downtime or performance anomalies. Application examples. Consider Shopify, a globally renowned e-commerce platform that supports webhooks through its API, enabling real-time data synchronization from Shopify stores to external systems. Imagine a merchant operating an online store on Shopify, wishing to synchronize store data in real time with another external system.

Typically, the external system would need to continuously poll Shopify’s API to check for new orders. Although this method is direct, it generates a high volume of ineffective requests and places significant load on Shopify servers, particularly when thousands of external systems are polling simultaneously.

To address this, Shopify recommends using webhooks. In this setup, the external system first registers a URL on Shopify’s platform to receive data and subscribes to events of interest, like order creation. Once an order is placed in the store, Shopify automatically pushes information about the new order, usually in JSON or XML format, directly to the registered URL of the external system. This ensures timely event notifications while significantly reducing network overhead and server load. This example illustrates how webhooks can effectively optimize the data synchronization process, making event-driven notifications more timely and efficient.
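The security concern raised earlier is commonly addressed by signing each delivery: the provider computes an HMAC over the raw request body with a shared secret and sends it in a header, and the receiver recomputes it before trusting the payload (Shopify, for example, uses a base64-encoded HMAC-SHA256 for this). A minimal sketch of such verification, with a hypothetical shared secret:

```python
import base64
import hashlib
import hmac

def sign_payload(secret: bytes, body: bytes) -> str:
    # Base64-encoded HMAC-SHA256 of the raw request body.
    return base64.b64encode(hmac.new(secret, body, hashlib.sha256).digest()).decode("ascii")

def verify_signature(secret: bytes, body: bytes, received_sig: str) -> bool:
    # compare_digest performs a constant-time comparison,
    # avoiding timing side channels during verification.
    return hmac.compare_digest(sign_payload(secret, body), received_sig)

secret = b"shared-webhook-secret"   # hypothetical; issued when the webhook is registered
body = b'{"order_id": 1001}'
sig = sign_payload(secret, body)

assert verify_signature(secret, body, sig)
assert not verify_signature(secret, b'{"order_id": 9999}', sig)  # tampered body rejected
```

Note that the signature must be computed over the raw bytes of the body, before any JSON parsing, since re-serialization can change the byte sequence and break verification.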
