Discussion topics:
- Stream notifications for errors
- Stream close notification
- Cascading notifications throughout streams (up the stream and/or down the stream)
- Multiple future types
- Lightweight versus easy-to-use futures
- Timing out promises efficiently
- Helpers for managing tasks on the event loop
These topics are explained in order below:
Error stream notifications
Currently, stream notifications are handled by registering a closure. You can set a closure on the receiving/draining end of the stream (the result), but registering anywhere else breaks the error chain unless the error is explicitly passed on. This also prevents errors from cascading back up: an error in TCP will surface later while handling the HTTP request, but the reverse does not work (database errors do not cascade to the HTTP layer). That would still be a useful feature, for example, for throwing 500 pages by default.
let requests = socket.stream(to: HTTPParser)
let responses = requests.stream(to: myApp)
responses.stream(to: socket)

responses.catch { error in
    print("500 error")
}
// At this point, the socket does not receive any errors, nor do the parser or serializer
Stream close notifications
Closing streams is currently handled the same way, by setting a single closure. We should decide whether we want to keep it this way. Note that each assignment below replaces the previously registered closure, so only the last handler runs.
// WebSocket.swift
socket.onClose = {
    websocket.sendCloseMessage()
}

// WebSocket+SSL.swift
socket.onClose = {
    tls.deinitialize()
}

// Chat.swift
socket.onClose = {
    friends.notifyOfflineStatus()
}
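Since a single settable closure means each file above silently overwrites the previous handler, one possible direction is a fan-out registration API. A minimal sketch, assuming a hypothetical `CloseNotifier` type and `onClose(_:)` method (neither is existing API):

```swift
/// Hypothetical helper that fans a close event out to many listeners.
final class CloseNotifier {
    private var listeners: [() -> Void] = []

    /// Registers an additional close handler instead of replacing the previous one.
    func onClose(_ listener: @escaping () -> Void) {
        listeners.append(listener)
    }

    /// Called once by the owner of the socket when it closes.
    func notifyClosed() {
        listeners.forEach { $0() }
        listeners = []
    }
}
```

With this shape, WebSocket.swift, WebSocket+SSL.swift, and Chat.swift could all register their handlers without clobbering each other.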
Cascading notifications
As discussed under error stream notifications, cascading these errors can interrupt a process, avoiding unnecessary additional work. But this must be implemented with care: interrupting in the middle of a registration flow (adding the database entry, sending the email, and 2 other ops) can prevent some ops from executing while others have already executed. There should be sensible helpers for that.
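For the registration-flow example above, one possible shape for such a helper is a grouped runner that stops at the first failure and reports which steps already ran. A sketch under those assumptions; `GroupResult` and `runGroup` are illustrative names, not existing API:

```swift
/// Illustrative result of running a group of related operations:
/// which steps finished before a failure interrupted the rest.
struct GroupResult {
    var completed: [String]
    var failed: String?
}

/// Hypothetical helper: runs named steps in order and stops at the first
/// failure, reporting which steps already executed (so e.g. a registration
/// flow knows the database entry exists even though the email was not sent).
func runGroup(_ steps: [(String, () throws -> Void)]) -> GroupResult {
    var result = GroupResult(completed: [], failed: nil)
    for (name, step) in steps {
        do {
            try step()
            result.completed.append(name)
        } catch {
            result.failed = name
            break
        }
    }
    return result
}
```

A real helper would be future-based rather than synchronous, but the reporting shape is the interesting part: the caller can distinguish "never ran" from "already executed".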
drop.get {
    return database.all(User.self, in: "SELECT * FROM users")
}
class ErrorMiddleware: Middleware {
    func handle(request: Request, chainingTo responder: Responder) -> Future<Response> {
        let promise = Promise<Response>()
        responder.respond(to: request).do(promise.complete).catch { error in
            promise.complete(Response(status: 500))
        }
        return promise.future
    }
}
Multiple future types
As of right now, futures are really easy to use and sensible. They conform to FutureType, allowing extensibility and more (generic) integration. One such example is the conformance of Response to FutureType, effectively making Future(Response(...)) unnecessary bloat.
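A rough sketch of how that protocol-based design could look. The member names here (`Expectation`, `onComplete`) are assumptions about the shape, not Vapor's actual definitions:

```swift
/// Sketch of a future protocol: anything that can deliver a value later.
protocol FutureType {
    associatedtype Expectation
    func onComplete(_ handler: @escaping (Expectation) -> Void)
}

/// An already-available value can conform directly, so generic code that
/// accepts any FutureType takes a plain Response without Future(...) wrapping.
struct Response {
    var status: Int
}

extension Response: FutureType {
    typealias Expectation = Response
    func onComplete(_ handler: @escaping (Response) -> Void) {
        handler(self) // the value is already here; complete immediately
    }
}
```

This is what makes `Future(Response(...))` unnecessary: the responder can return the `Response` itself wherever a `FutureType` is expected.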
Pseudocode Problem
let users = database.all(User.self, in: "SELECT * FROM users LIMIT 100") // [User].count == 100
If User is a struct of 10KB, [User] will be copied around twice with overhead: once into the future, and once from the future to the handler. That copies around 2MB of extra data.
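If copying large value-type payloads through futures is a concern, one conventional mitigation is to move the result through a reference-type box, so only a pointer is handed from the future to the handler. The `Box` type below is illustrative, not existing API:

```swift
/// Illustrative reference-type box: handing it to a future and on to a
/// handler copies one pointer, not the large value-type payload inside.
final class Box<T> {
    let value: T
    init(_ value: T) { self.value = value }
}
```

Whether this is worth the indirection depends on the payload; it mainly matters for large structs that are not already backed by copy-on-write storage.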
Timing out promises
Promises can currently be timed out while blocking, but you cannot easily dispatch a task like this without manually interfacing with the DispatchQueue in a worker's event loop. Having to block while waiting is not sensible and goes against the async design. If an HTTP request has been waiting for more than 30, and especially more than 60, seconds, it has almost certainly timed out. And if any client-side operation done by Vapor (such as calling Stripe) times out after X seconds, we shouldn't keep the unfulfilled promise in memory.
Use case
// Throws a timeout after 5 seconds of waiting for a result
return websocket.connect(to: "ws://localhost:8080/").map(timeout: .seconds(5)) { websocket in
    ...
}
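A rough sketch of how a non-blocking timeout could sit on top of GCD, assuming a promise with a first-settle-wins rule so a late timeout cannot overwrite a real result. All names (`Promise`, `onResult`, `timeout(after:on:)`) are illustrative, not the framework's API, and this sketch ignores thread safety:

```swift
import Dispatch

enum TimeoutError: Error { case timedOut }

/// Illustrative promise with a single completion-or-error callback.
final class Promise<T> {
    private var handler: ((Result<T, Error>) -> Void)?
    private var settled = false

    func onResult(_ h: @escaping (Result<T, Error>) -> Void) { handler = h }

    func complete(_ value: T) { settle(.success(value)) }
    func fail(_ error: Error) { settle(.failure(error)) }

    private func settle(_ result: Result<T, Error>) {
        guard !settled else { return } // first settle wins; later settles are ignored
        settled = true
        handler?(result)
    }

    /// Fails the promise after `seconds` unless it already settled, so an
    /// unfulfilled promise does not linger in memory indefinitely.
    func timeout(after seconds: Double, on queue: DispatchQueue) {
        queue.asyncAfter(deadline: .now() + seconds) { [weak self] in
            self?.fail(TimeoutError.timedOut)
        }
    }
}
```

The key point is that the timeout is just another attempt to settle the promise on the worker's queue; no thread blocks while waiting.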
Helpers for managing tasks
Finally, tying into the previous topic: many people use WebSockets or other features that require cron-like jobs. Some may want to send a notification every minute for new chat messages, regardless of circumstances, or send a ping to the client every 15 seconds to keep the connection open. These are very common use cases and should be covered from the start, especially with async.
Use case
websocket.worker.every(.seconds(15)) {
    // keep the connection alive
    websocket.ping()
}
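A minimal sketch of what such an `every` helper could look like on top of DispatchSourceTimer. The `Worker` wrapper here is an assumption about the eventual API, not Vapor's actual worker type:

```swift
import Dispatch

/// Illustrative worker exposing a repeating-task helper on its queue.
final class Worker {
    let queue: DispatchQueue
    private var timers: [DispatchSourceTimer] = []

    init(queue: DispatchQueue) {
        self.queue = queue
    }

    /// Runs `task` every `interval` on the worker's queue until cancelled.
    @discardableResult
    func every(_ interval: DispatchTimeInterval,
               _ task: @escaping () -> Void) -> DispatchSourceTimer {
        let timer = DispatchSource.makeTimerSource(queue: queue)
        timer.schedule(deadline: .now() + interval, repeating: interval)
        timer.setEventHandler(handler: task)
        timer.resume()
        timers.append(timer) // keep a strong reference so the timer stays alive
        return timer
    }
}
```

Because the timer fires on the worker's own queue, the task runs on the same event loop as the connection it services, which avoids cross-thread access to the socket.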