





In this video, I’ll focus on how you can pick the right event-driven API standards based on your needs. And before I jump into event-driven APIs, let’s look at a typical request-response API. Let’s say a client wants to get the status of a particular resource. The client makes a request to the API. The API might retrieve this information from a database, then return the response back to the client. But what if the client is really interested in knowing when the resource reaches a completed state? The client would have to keep sending requests until the status is completed. Usually, this kind of thing is done through polling, where a client keeps making requests at a predefined interval until a certain desirable state is reached. This approach can be highly inefficient if the state changes unpredictably or with long delays. A lot of resources are wasted both on the client side and the server side. And this is where event-driven APIs come in: they are all about solving this kind of inefficiency. This is one of the typical problems that event-driven APIs can solve, but there are quite a few more. So let’s start looking at the different styles of event-driven APIs and the different problems they solve. There are three well-known standards for building pure event-driven APIs. Now, keep in mind that while there are several ways to build event-driven APIs, these three are quite commonly used to solve the kind of inefficiencies I just talked about. These are your webhooks, your WebSockets, and your HTTP streaming style APIs. All right then, let’s have a closer look at each one of them. So in webhooks, we’ve got our client and we’ve got our webhook API provider. The client usually…
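To make the polling inefficiency concrete, here’s a minimal Python sketch of a polling client. `get_status` is a hypothetical stand-in for a real status endpoint (it just simulates a resource that completes on the third check); the interval and attempt limit are illustrative:

```python
import time

def get_status(resource_id):
    """Stand-in for a real GET /resources/{id} call (hypothetical API).

    Simulates a resource that only reaches 'completed' on the third check.
    """
    get_status.calls += 1
    return "completed" if get_status.calls >= 3 else "pending"

get_status.calls = 0

def poll_until_complete(resource_id, interval=0.01, max_attempts=10):
    """Poll at a fixed interval until the resource reports 'completed'."""
    for _ in range(max_attempts):
        status = get_status(resource_id)
        if status == "completed":
            return status
        time.sleep(interval)  # every wait here is wasted client and server work
    raise TimeoutError("resource never completed")
```

Every iteration that returns "pending" is a wasted round trip, which is exactly the cost event-driven APIs avoid.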


As for the pros of HTTP streaming, first of all, it’s simple HTTP. No other protocols are necessary, which is a huge benefit for consumers. Secondly, there’s native browser support, whereas with WebSockets, certain older browsers don’t support it. However, one of the cons, especially when compared to WebSockets, is that bidirectional communication is challenging. And on top of that, buffering-related issues can arise: client proxies usually have buffer limits, and they might not start rendering data until those thresholds are met. All right, so we briefly explored what each standard is. This video is by no means a deep dive into each standard, but more a video that helps you identify which type of event-driven API suits you best. So let’s recap some of these scenarios. If your use case requires clients to be updated on certain events and you want to optimize performance and avoid polling-style endpoints, then consider using webhooks. If you want low-latency, bidirectional communication, then WebSockets are the way to go. If you don’t really care about bidirectional communication, but you still want the benefits of a long-lived connection, then consider using HTTP streaming. So that’s it for this video, guys.

The client will usually need to do a one-time registration. In this registration, the client defines two key pieces of information: the events the client is interested in, and the callback URL the API provider sends updates to. The URL is basically an endpoint that the client exposes for the API provider to send updates to. Simply put, the client tells the API provider, hey, these are the events that I’m interested in, and this is where you should send me this information. Whenever there are updates for events the client is interested in, the API provider sends a request, usually a POST request, to that URL along with the relevant information. And that’s it. That’s what webhooks do. I’ve used SendGrid as an example of a webhook provider. Feel free to look them up; they’re basically a mailing system. If you use SendGrid to send emails, you can register with their email events webhook to know whenever an email bounces. And that’s a lot better than having to poll for each email that has been sent out. Because if you think about it, your system might need to send thousands of emails, maybe more, but maybe only a handful of them actually bounce. You end up being far more efficient by using webhooks, both on the client side and on the server side. So the pros of webhooks are pretty obvious at this point, but they do come with some pitfalls. First of all, as an API provider, you need to be responsible for failures. You’re essentially taking over the responsibility of delivering updates to the client, so the API provider has to deal with retry policies to the best of their ability. Another issue is firewalls. If your client exposes an endpoint to register with a webhook provider, that endpoint has to be publicly accessible, and that means dealing with all the security concerns that come along with it. Depending on your use case, this may or may not be a problem, but it’s certainly tricky and one you really have to consider. Finally, webhooks can be super noisy.
Typically, each webhook represents a single event, so a spike in events can translate into a flood of requests…

And your server needs to be capable of handling them. So these are some of the pitfalls you should be aware of whenever you’re thinking about using webhooks. Okay, so we just talked about webhooks and their pitfalls. Now let’s look at WebSockets. In the world of WebSockets, you’ve got your client and you’ve got your server. The client sends an HTTP request, commonly referred to as the handshake, to the server. The server acknowledges this request and sends an upgrade response. Basically, the client says, hey, I want to use your WebSocket API. Are we good? The server responds, yep, let’s communicate through WebSockets. The client and the server then upgrade their communication to a long-lived TCP connection. With this connection established, both the client and the server can communicate bidirectionally. And that’s it. That’s the basic idea behind WebSockets. So let’s look at a common use case. Think of chat applications, where the client sends messages at will, and the server updates all the other members involved in the chat. Imagine building a chat application using request-response APIs. That would be pretty inefficient. So this approach is good for the kind of use case we just talked about. But there are always cons along with the pros, so let’s get an idea of those pros and cons. Now, as for the pros, bidirectional low-latency communication is a huge plus. As the client and server maintain a single TCP connection, the latency is pretty low. On top of this, the client and server can both exchange messages at the same time using the same channel, which is huge for certain applications like games and communication-driven applications. And also, because you don’t have to send multiple requests, you end up saving on the overhead of HTTP requests, like the headers. So that’s data saved from being redundantly sent over the wire. Now, as for the cons, the clients become responsible for driving the connection’s lifespan.
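The handshake described above is cheap to illustrate: per RFC 6455, the server proves it understood the upgrade request by hashing the client’s Sec-WebSocket-Key together with a fixed GUID and echoing the result back. A small Python sketch:

```python
import base64
import hashlib

# Fixed GUID defined in RFC 6455; every conforming server appends it
# to the client's Sec-WebSocket-Key before hashing.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def ws_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value for the 101 upgrade response."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()
```

The client checks this value in the `101 Switching Protocols` response before treating the TCP connection as an open WebSocket.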
They need to initiate recovery if the connection dies. On the other hand, the server has to deal with certain challenges associated with scalability. Since each client holds open a persistent connection, it can be difficult to scale things on the server side. So WebSockets are great for certain use cases, but they come with their challenges, and you need to be careful when you pick WebSockets. Finally, let’s talk about HTTP streaming. Now, with typical HTTP,

the server returns an HTTP response of a finite length. But it’s possible to make this response indefinite, and this is the idea behind HTTP streaming. The server continues to push data over a single, long-lived connection. So the client sends a single request to the server. The server responds, but the response is indefinite, so the server can keep sending more and more data. Now, there are two ways in which the server can do this. The first option is for the server to set the Transfer-Encoding header to chunked. This indicates to the client that the data will be arriving in chunks. This option is common for non-browser clients, so essentially two back-end servers communicating with each other. Another option is to stream data via server-sent events. This is great for clients that consume data through the browser, especially because they can use the standardized EventSource API web interface. Twitter is a good example of this: they use HTTP streaming to push new tweets over a single HTTP connection to their API consumers. And once again, this saves resources not just for the consumer, but also for Twitter in this case. All right. So now let’s talk about some of the pros and cons of HTTP streaming.
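The two framing options described above are easy to sketch. Here’s a minimal Python illustration of each: one helper formats a message in the server-sent events wire format (what an EventSource client parses), and the other frames a chunk for `Transfer-Encoding: chunked`:

```python
def sse_event(data, event=None, event_id=None):
    """Format one message in the server-sent events wire format."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")
    if event is not None:
        lines.append(f"event: {event}")
    # A multi-line payload becomes multiple data: fields per the SSE format.
    for part in str(data).splitlines() or [""]:
        lines.append(f"data: {part}")
    return "\n".join(lines) + "\n\n"  # a blank line terminates the event

def chunk_frame(payload: bytes) -> bytes:
    """Frame one chunk for chunked transfer coding: hex size, CRLF, data, CRLF."""
    return f"{len(payload):x}\r\n".encode() + payload + b"\r\n"

# A chunked stream ends with a zero-length chunk.
LAST_CHUNK = b"0\r\n\r\n"
```

In both cases the server simply keeps writing frames to the same open connection, which is the whole trick behind HTTP streaming.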