JMM’s notes on

Server-sent events

Server-sent events (SSE) are like an early, one-directional version of WebSockets: data flows only from server to client. One nice benefit of SSE is that the browser’s EventSource client will automatically reconnect and try to resume by sending the ID of the last received event.
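
For reference, the wire format is plain text served with Content-Type text/event-stream: each event is a few “field: value” lines (event, data, id), terminated by a blank line. On reconnect, the browser sends the last seen ID back in a Last-Event-ID request header. A stream might look like:

```http
HTTP/1.1 200 OK
Content-Type: text/event-stream

event: some-event-type
data: This is message 0
id: 0

event: some-event-type
data: This is message 1
id: 1
```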

JavaScript (client)

Here’s an example of how to start receiving Server-sent events.

function startEventStreamStuff() {
    const source = new EventSource("/ssedemo1/eventsource");

    source.addEventListener("open", (event) => {
        addmessage(`EventSource stream opened.`);
    });
    source.addEventListener("error", (event) => {
        addmessage(`Some EventSource error occurred, maybe the stream closed.`);
    });
    // Add your own custom message types:
    source.addEventListener("some-event-type", (event) => {
        addmessage(`some-event-type: ${event.data}`);
    });

    function addmessage(text) {
        const el = document.createElement("li");
        el.textContent = text;
        const messagelist = document.getElementById("messagelist");
        messagelist.appendChild(el);
    }
}
document.addEventListener("DOMContentLoaded", startEventStreamStuff);
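
The addmessage helper above assumes the page contains a list element with id “messagelist”, something like:

```html
<ul id="messagelist"></ul>
```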

Service Worker issue

I’ve noticed an issue with service workers (see also service worker notes) intercepting server-sent events. At least with Firefox Nightly on Android on 2024-05-01 (and I’d assume other browsers, but I haven’t tested), if a service worker handles an event stream, eventually that fetch will error out. What I’m guessing is happening is that the service worker eventually gets killed because the browser thinks the service worker is taking too long to handle the request. I haven’t confirmed that, though. But some evidence for this is that I haven’t observed this error when I’m debugging the service worker, since I think Firefox might not kill the service worker if debugging is open. (I’ve heard that Chrome will purposely kill the service worker so you don’t get complacent. Haven’t tried it. Seems like a good idea.)

The way I’m dealing with this for now is for my service worker to refuse to handle any fetch requests that look like an event stream. That is, when the Accept header is text/event-stream, I tell the service worker not to handle the request. So, in your service worker’s 'fetch' event handler, you’d test something like:

    if (event.request.headers.get("accept") === "text/event-stream") {
        console.log(`Not handling event stream`);
        // For some reason Server-sent events seem to error out.
        // I’m guessing the worker is just getting killed or something.
        return false;
    }
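
For fuller context, here’s a sketch of how that check might sit in a complete 'fetch' handler. The shouldHandleFetch helper is a name I made up for illustration, not any real API; the registration itself is shown as a comment since it only runs inside a service worker.

```javascript
// Decide whether the service worker should intercept a request,
// based on its Accept header. (Helper name is hypothetical.)
function shouldHandleFetch(acceptHeader) {
    // Event streams seem to error out when proxied through the worker,
    // so let the browser handle those directly.
    return acceptHeader !== "text/event-stream";
}

// In the service worker (sketch):
// self.addEventListener("fetch", (event) => {
//     if (!shouldHandleFetch(event.request.headers.get("accept"))) {
//         console.log(`Not handling event stream`);
//         return; // No respondWith: the browser performs the fetch itself.
//     }
//     event.respondWith(fetch(event.request));
// });
```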

Pedestal (server)

Pedestal supports Server-sent Events as well as WebSockets. One reason you might prefer SSE over WebSockets is that SSE can be used with normal interceptors and routers, whereas WebSockets require setting up a :context-configurator in :io.pedestal.http/container-options.

Simple Pedestal SSE stream example

In Pedestal’s SSE API, you supply a function that takes in a core.async channel and the Pedestal interceptor context. You write events (maps with :name, :data, and optionally :id) to the channel. If the channel is closed, the connection to the client closes.

Here’s an example:

(defn jmm-sse-stream-ready-example
  "An SSE example where we send 50 events, spaced at 1 second intervals.
  If disconnected, the stream is resumed at the last position.

  This function is passed to `io.pedestal.http.sse/start-event-stream` as the
  “stream-ready-fn”.  As such, it takes two arguments:
  - event-chan (a core.async channel) where it writes events.
  - context (a Pedestal interceptor context)"
  [event-chan {:keys [request] :as context}]
  ;; Assumes (:require [clojure.core.async :refer [go >! <! timeout close!]]).
  (go
    (println "SSE stream started.")
    (loop [x (or (some-> (get-in request [:headers "last-event-id"])
                         parse-long
                         ;; Make sure we don’t get some weird negative value.
                         (max 0)
                         ;; Since we got the last event, start at the next ID.
                         ;; Without this, we repeat ourselves.
                         inc)
                 0)]
      ;; Only send 50 events.
      (when (< x 50)
        ;; Send the event, or bail if sending fails.
        (when (>! event-chan
                  {:name "some-event-type"
                   :data (str "This is message " x)
                   ;; Optional ID allows reconnecting and resuming
                   :id x})

          ;; Wait a bit before sending the next message.
          (<! (timeout 1000))

          (recur (inc x)))))
    (println "SSE stream ending (but not closing).")
    ;; I’m not sure if we should close the event-chan.
    ;; Closing it will cause the client to try to reconnect.
    #_(close! event-chan)))

Then you use (sse/start-event-stream jmm-sse-stream-ready-example), which returns an interceptor that (when entered) starts the stream.
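
For example, as a table-syntax route (the path matches the client example above; the route name is my own choice):

```clojure
;; Assumes (:require [io.pedestal.http.sse :as sse]).
["/ssedemo1/eventsource" :get (sse/start-event-stream jmm-sse-stream-ready-example)
 :route-name ::sse-demo]
```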

If you want to redefine the stream-ready-fn during development, you can do something like the following:

(def sse-dev-interceptor
  {:name ::sse-dev-interceptor
   :enter (fn [context]
            ;; Normally you’d just use `sse/start-event-stream` directly.
            ;; This just lets me more easily reload `jmm-sse-stream-ready-example`
            ;; during development. Note that `start-event-stream` returns an
            ;; interceptor, so we invoke its :enter fn on the context ourselves.
            ((:enter (sse/start-event-stream #'jmm-sse-stream-ready-example))
             context))})

Another option is to use sse/start-stream, which returns a context instead of an interceptor.

Buffering issue

My server-sent events were arriving as one single chunk at the end instead of streaming incrementally. I checked this by doing:

curl -v --no-buffer

Since my Pedestal backend is behind a reverse-proxy, I had to remember to set the header “X-Accel-Buffering” (see my HTTP headers notes) to “no”. Here’s an example of how to do this:

(def sse-fix-buffer-headers-intc
  "An interceptor that adds an X-Accel-Buffering header to fix SSE streaming issues behind a reverse proxy."
  {:name ::sse-fix-buffer-headers
   :leave (fn [context]
            (assoc-in context [:response :headers "X-Accel-Buffering"] "no"))})