You should be careful about using TcpStream::read_to_end, or any other function that fully drains the buffer for you, on a non-blocking stream. If a call returns an error of the io::ErrorKind::WouldBlock kind, the whole operation is reported as an error, even if several reads succeeded before the error occurred. The only way to know how much data was read successfully is to observe the changes made to the &mut Vec you passed in.
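To make the point concrete, here is a minimal sketch of draining a non-blocking stream by calling read in a loop ourselves. The function name and buffer size are made up for illustration; the idea is simply that we return the number of bytes we managed to read before WouldBlock occurred, instead of surfacing it as an error:

```rust
use std::io::{self, Read};
use std::net::TcpStream;

// Illustrative helper: drain whatever is currently readable on a
// non-blocking stream and report how many bytes we actually got.
fn drain_nonblocking(stream: &mut TcpStream, buf: &mut Vec<u8>) -> io::Result<usize> {
    let mut total = 0;
    let mut chunk = [0u8; 4096];
    loop {
        match stream.read(&mut chunk) {
            // 0 means the peer closed the connection
            Ok(0) => return Ok(total),
            Ok(n) => {
                buf.extend_from_slice(&chunk[..n]);
                total += n;
            }
            // No more data right now; return what we read so far
            Err(e) if e.kind() == io::ErrorKind::WouldBlock => return Ok(total),
            // Interrupted reads are safe to retry
            Err(e) if e.kind() == io::ErrorKind::Interrupted => continue,
            Err(e) => return Err(e),
        }
    }
}
```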
Now, if we run our program, we should get the following output:
RECEIVED: Event { events: 1, epoll_data: 4 }
HTTP/1.1 200 OK
content-length: 9
connection: close
content-type: text/plain; charset=utf-8
date: Wed, 04 Oct 2023 15:29:09 GMT
request-4
——
RECEIVED: Event { events: 1, epoll_data: 3 }
HTTP/1.1 200 OK
content-length: 9
connection: close
content-type: text/plain; charset=utf-8
date: Wed, 04 Oct 2023 15:29:10 GMT
request-3
——
RECEIVED: Event { events: 1, epoll_data: 2 }
HTTP/1.1 200 OK
content-length: 9
connection: close
content-type: text/plain; charset=utf-8
date: Wed, 04 Oct 2023 15:29:11 GMT
request-2
——
RECEIVED: Event { events: 1, epoll_data: 1 }
HTTP/1.1 200 OK
content-length: 9
connection: close
content-type: text/plain; charset=utf-8
date: Wed, 04 Oct 2023 15:29:12 GMT
request-1
——
RECEIVED: Event { events: 1, epoll_data: 0 }
HTTP/1.1 200 OK
content-length: 9
connection: close
content-type: text/plain; charset=utf-8
date: Wed, 04 Oct 2023 15:29:13 GMT
request-0
——
FINISHED
As you can see, the responses arrive in reverse order. You can easily confirm this by looking at the terminal output of the running delayserver instance, which should look like this:
#1 – 5000ms: request-0
#2 – 4000ms: request-1
#3 – 3000ms: request-2
#4 – 2000ms: request-3
#5 – 1000ms: request-4
The ordering might differ slightly from time to time, since the server receives the requests almost simultaneously and may choose to handle them in a slightly different order.
Say we track events on the stream with ID 4:
- In send_requests, we assigned the ID 4 to the last stream we created.
- Socket 4 sends a request to delayserver, setting a delay of 1,000 ms and a message of request-4 so we can identify it on the server side.
- We register socket 4 with the event queue, making sure to set the epoll_data field to 4 so we can identify which stream the event occurred on (see the sketch after this list).
- delayserver receives that request and delays the response for 1,000 ms before it sends an HTTP/1.1 200 OK response back, together with the message we originally sent.
- epoll_wait wakes up, notifying us that an event is ready. In the epoll_data field of the Event struct, we get back the same data that we passed in when registering the event. This tells us that it was an event on stream 4 that occurred.
- We then read data from stream 4 and print it out.
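As a reminder of how the registration and wait steps fit together, here is a rough sketch using the libc crate instead of the ffi module we wrote ourselves. The function names are made up, and libc calls the user-data field u64 rather than epoll_data, but the flow is the same: the id we register is echoed back to us when the event is ready.

```rust
use std::net::TcpStream;
use std::os::unix::io::AsRawFd;

// Register a stream with an existing epoll instance. The id we pass in
// comes back unchanged when epoll_wait reports an event, which is how
// we know which stream is ready.
fn register(epfd: i32, stream: &TcpStream, id: u64) -> std::io::Result<()> {
    let mut event = libc::epoll_event {
        events: libc::EPOLLIN as u32, // we only care about read-readiness here
        u64: id,                      // libc's name for the epoll_data field
    };
    let res = unsafe {
        libc::epoll_ctl(epfd, libc::EPOLL_CTL_ADD, stream.as_raw_fd(), &mut event)
    };
    if res < 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(())
}

// Block until at least one event is ready and return the ids of the
// streams that woke us up.
fn wait(epfd: i32) -> std::io::Result<Vec<u64>> {
    let mut events = vec![libc::epoll_event { events: 0, u64: 0 }; 10];
    let n = unsafe { libc::epoll_wait(epfd, events.as_mut_ptr(), events.len() as i32, -1) };
    if n < 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(events[..n as usize].iter().map(|e| e.u64).collect())
}
```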
In this example, we’ve kept things at a very low level, even though we used the standard library to handle the intricacies of establishing a connection. You’ve made a raw HTTP request to your own local server, set up an epoll instance to track events on a TcpStream, and used epoll and syscalls to handle the incoming events.
That’s no small feat – congratulations!
Before we leave this example, I want to point out how few changes we need to make for our example to use mio as the event loop instead of the one we created.
In the repository under ch04/b-epoll-mio, you’ll find an example that does the exact same thing using mio instead. It only requires importing a few types from mio instead of our own modules and making five minor changes to our code!
Not only have you replicated what mio does, but you pretty much know how to use mio to create an event loop as well!
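If you just want a feel for what those changes amount to, here is a rough sketch of the mio side (using the mio 0.8-style API with the os-poll and net features enabled). The exact changes in ch04/b-epoll-mio may differ in detail, but the shape mirrors our hand-rolled version: Poll replaces our epoll instance, Token plays the role of the epoll_data field, and poll.poll replaces our epoll_wait call.

```rust
use std::io;
use mio::net::TcpStream;
use mio::{Events, Interest, Poll, Token};

// Illustrative sketch only: connect one stream, register it, and wait
// for a single batch of events.
fn mio_sketch(addr: std::net::SocketAddr) -> io::Result<()> {
    let mut poll = Poll::new()?;                // stands in for our epoll instance
    let mut stream = TcpStream::connect(addr)?; // mio's non-blocking TcpStream

    // The Token value is handed back to us with each event, just like
    // the epoll_data field in our own version.
    poll.registry()
        .register(&mut stream, Token(4), Interest::READABLE)?;

    let mut events = Events::with_capacity(10);
    poll.poll(&mut events, None)?;              // stands in for our epoll_wait call

    for event in events.iter() {
        println!("RECEIVED: event on {:?}", event.token());
    }
    Ok(())
}
```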