Web API History Series • Post 99 of 240
Chapter 99: Before Server-Sent Events—How 1995–1998 Web “Streaming” Hacks Shaped Modern API Updates
A chronological guide to Server-Sent Events and streaming updates, and their role in the long evolution of web APIs.
Server-Sent Events (SSE) feels like a clean, modern promise: open one HTTP connection, let the server stream updates, and handle them in the browser with a simple event-based API. But if you zoom your timeline back to 1995–1998—the era of CGI scripts, HTML forms, early browser scripting, and the first wave of “dynamic” sites—the web already contained the seeds of streaming updates.
SSE itself would be standardized much later as part of the HTML platform, but its core idea—the server continuously sending incremental data to the browser—was being chased in the mid-to-late 1990s with whatever tools developers could bend into shape. This chapter of web API history is about that chase: the “almost streaming” patterns that emerged from forms, CGI, frames, and early scripting, and how those patterns helped define what streaming web APIs needed to become.
1995–1998: The web wants to be live, but the toolset is rigid
In 1995, much of the web’s dynamism came from a simple loop:
- A user submits a form.
- The server runs a CGI program.
- The CGI program returns a brand-new HTML page.
That loop made interactivity possible, but it didn’t naturally support continuous updates. HTTP responses were usually treated as “done” once the full HTML arrived. Browsers rendered pages as documents, not as live views into an ongoing stream of application state.
Yet the demand for “live” experiences was already present: news tickers, sports scores, auction prices, chat-like pages, and dashboards for server status. If you wanted a web UI that updated itself without a user constantly clicking Refresh, you had to improvise.
Pattern #1: Client pull—refresh loops as the first “update API”
The earliest widespread technique for continuous updates was not true streaming; it was client pull. A page would periodically request a new version of itself or a portion of itself.
In that era, the “API surface” was often just another URL endpoint that returned HTML. Developers used timed reloads (including techniques like meta refresh) to approximate a feed. Conceptually, this acted like an extremely primitive polling API:
- Resource: a URL that represents “the latest state”
- Method: GET
- Update model: replace the old view with the new view
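The shape of that loop is easy to sketch. Here is a minimal, illustrative Python function that builds the kind of self-refreshing page a mid-1990s CGI script might have emitted; the timer-driven reload via a meta refresh tag is the whole "API" (the function name and the "latest value" source are assumptions for the example):

```python
import html
import time


def polling_page(latest_value: str, interval_seconds: int = 30) -> str:
    """Build an HTML page that re-requests itself on a timer
    (client pull via <meta http-equiv="refresh">)."""
    return (
        "<html><head>"
        f'<meta http-equiv="refresh" content="{interval_seconds}">'
        "</head><body>"
        f"<p>Latest value: {html.escape(latest_value)}</p>"
        f"<p>Generated at {time.strftime('%H:%M:%S')}</p>"
        "</body></html>"
    )
```

Every request rebuilds and re-renders the whole document, which is exactly why this pattern was "bursty": most refreshes fetched a page that hadn't changed.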
From a history-of-APIs perspective, this matters because it framed a core question that SSE later answered elegantly: should clients repeatedly ask for updates, or should servers push updates as they happen?
Pattern #2: Server push experiments—streaming before “streaming APIs” were named
While client pull was easy, it was inefficient and “bursty.” Developers and browser vendors also explored server push approaches, especially for use cases like continually updating images (think early webcam feeds or auto-updating charts).
One historically important technique involved multipart MIME responses (often associated with the content type multipart/x-mixed-replace). The server would keep a connection open and send multiple “parts” over time. Browsers that supported it could replace an image or a piece of content as new parts arrived.
This wasn’t SSE, and it wasn’t a standardized web API you could rely on across browsers. But architecturally it’s a close ancestor: one request, many updates, time-ordered data flowing from server to client.
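To make the mechanism concrete, here is a hedged sketch of how a server might frame such a push stream: one response header announcing the multipart content type, then repeated "parts," each of which replaces the previous one in a supporting browser. The boundary string and helper names are illustrative, not from any particular server:

```python
def multipart_header(boundary: str) -> str:
    """Response header line announcing a multipart server-push stream."""
    return f"Content-Type: multipart/x-mixed-replace; boundary={boundary}\r\n\r\n"


def multipart_part(boundary: str, content_type: str, body: bytes) -> bytes:
    """Frame one part of the stream; each new part replaces the
    previously displayed content in browsers that supported the type."""
    head = (
        f"--{boundary}\r\n"
        f"Content-Type: {content_type}\r\n"
        f"Content-Length: {len(body)}\r\n\r\n"
    ).encode("ascii")
    return head + body + b"\r\n"
```

The server would write `multipart_header(...)` once, then loop, writing a new `multipart_part(...)` whenever fresh data (a webcam frame, a regenerated chart) was available.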
Why it was a near miss
Multipart push was clever, but it tended to be:
- Media-centric: great for images, awkward for structured data.
- Browser-dependent: implementation differences made it hard to build on.
- Not event-first: it delivered “replacement content,” not typed events with names and fields.
SSE’s later success came from making streaming updates text- and event-oriented, with a clear browser API and predictable reconnection semantics.
Pattern #3: Hidden frames and incremental script output—the DIY event channel
In the mid-to-late 1990s, browsers gained scripting capabilities (notably JavaScript) and features like frames. That combination enabled a workaround that looks surprisingly modern in spirit: open a long-lived request in a hidden frame and have the server send back small chunks that the browser executes.
The “API” in this case was often a CGI endpoint that never quite finished. Instead of returning a complete HTML page, it would periodically emit additional HTML or <script> blocks. If the browser processed incremental output during download (behavior varied), each script chunk could call into the parent frame to update the visible UI.
What’s important historically is not whether every browser reliably supported this (they didn’t), but that developers were converging on an idea that later reappeared as Comet and then as standardized streaming APIs:
- Keep a connection open.
- Send incremental updates.
- Let the browser react without a full page reload.
If you squint, that’s the same story SSE tells—just without today’s safety rails.
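In spirit, each chunk the server dripped out was a tiny executable "event." A minimal sketch, assuming a Python handler writing into the never-finishing hidden-frame response (the parent-frame handler name `handleUpdate` is a made-up example, not a real API):

```python
import json


def script_chunk(message: str) -> str:
    """One incremental <script> block for a long-lived hidden-frame
    response. As the browser parses it, it calls a JavaScript handler
    in the parent frame, updating the visible UI without a reload."""
    # json.dumps gives us a safely quoted JavaScript string literal.
    return f"<script>parent.handleUpdate({json.dumps(message)});</script>\n"
```

The server would keep the connection open and emit one `script_chunk(...)` per update, relying on the browser executing scripts during download, which, as noted, was far from guaranteed.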
HTTP and the late-1990s groundwork: persistent connections and chunked transfers
Streaming APIs don’t live solely in the browser—they depend on how HTTP connections behave. In the late 1990s, HTTP/1.1 work introduced more robust ideas around persistent connections and mechanisms like chunked transfer encoding. Those changes weren’t “SSE features,” but they helped normalize the notion that a response could arrive in pieces over time.
In practical terms, the shift mattered because early “keep it open” techniques were fragile. Proxies, servers, and clients often assumed responses were short-lived. Improvements in the ecosystem made long-lived responses more plausible, which later streaming approaches would rely on.
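Chunked transfer encoding is simple enough to show directly: each piece of the body is framed with its length in hexadecimal, and a zero-length chunk ends the stream. A minimal sketch of that framing:

```python
def encode_chunk(data: bytes) -> bytes:
    """Frame one piece of a response body in HTTP/1.1 chunked
    transfer encoding: hex length, CRLF, data, CRLF."""
    return f"{len(data):x}".encode("ascii") + b"\r\n" + data + b"\r\n"


# A zero-length chunk (plus the trailing blank line) terminates the stream.
LAST_CHUNK = b"0\r\n\r\n"
```

Because each chunk declares its own length, the response as a whole needs no Content-Length up front, which is precisely what makes "send pieces as they become available" a first-class behavior rather than a hack.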
What this era contributed to SSE (even though SSE came later)
From 1995–1998, developers didn’t get a single, blessed standard for streaming updates. Instead, they accumulated lessons—some technical, some product-driven—that directly map to what SSE eventually formalized.
1) The web needed a low-friction push channel
Polling was easy but wasteful. Push was efficient but messy. SSE’s later appeal is that it offers push with relatively low implementation overhead: plain HTTP, simple text framing, and an API most developers can understand quickly.
2) “Data” needed to be separate from “HTML”
Many 1990s techniques shipped HTML as the update format because that’s what browsers consumed best. But that blurred presentation and data. The path toward web APIs involves separating concerns: send structured data/events, let the client render.
3) Reliability and reconnection are part of the API
Early long-lived connections broke often. When they broke, the client usually had no standard way to resume. SSE later addressed this with built-in reconnection behavior and event IDs for resuming a stream—features shaped by years of “we tried to keep a connection open and it died” experience.
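The fix SSE settled on is visible in its wire format: plain text lines, an optional event name, and an optional id that a reconnecting client echoes back (via the Last-Event-ID header) so the server can resume the stream. A small serializer for that format, per the text/event-stream framing:

```python
from typing import Optional


def sse_event(data: str, event: Optional[str] = None,
              event_id: Optional[str] = None) -> str:
    """Serialize one event in the text/event-stream format:
    optional "event:" and "id:" lines, one "data:" line per line
    of payload, and a blank-line terminator."""
    lines = []
    if event is not None:
        lines.append(f"event: {event}")
    if event_id is not None:
        lines.append(f"id: {event_id}")
    for part in data.split("\n"):
        lines.append(f"data: {part}")
    return "\n".join(lines) + "\n\n"
```

Compare this with the 1990s patterns above: the id line alone encodes years of hard-won experience about connections that die mid-stream.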
Reading SSE today with 1997 eyes
When you look at the modern SSE browser interface (the EventSource API), you can see it as a retroactive cleanup of late-1990s experimentation: a single-purpose, server-to-browser event stream that doesn’t require plugins, doesn’t require full-duplex complexity, and doesn’t require you to smuggle scripts through hidden frames.
If you want a current, authoritative overview of SSE itself (syntax, reconnection rules, browser behavior), the best starting point is the developer documentation for server-sent events: https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events.
Takeaway: SSE is a standard answer to a 1990s question
The mid-1990s web asked, “How can a page stay updated without constant user action?” The answers at the time were clever but inconsistent: periodic reloads, multipart server push, long-lived CGI responses, frames, and early scripting hacks.
SSE eventually arrived as a standardized, developer-friendly response to that exact pressure—one that acknowledges the realities of HTTP and browsers while finally turning “streaming updates” into a reliable web API.
If you’re tracking how these building blocks evolve into modern automation-friendly integrations, you may also like the broader experiments and write-ups collected at https://automatedhacks.com/.
FAQ: Server-Sent Events and the 1995–1998 web
Did Server-Sent Events exist in 1995–1998?
No. SSE as a standardized browser feature (via EventSource) came later. But several 1995–1998 techniques—server push via multipart responses, hidden frames, and long-lived CGI output—explored the same goal: server-to-browser incremental updates.
What was the most common “live update” approach in the mid-1990s?
Client pull (periodic refresh or re-requesting a URL) was common because it worked with the least special support. It was effectively early polling, usually returning whole HTML pages.
Why weren’t the early streaming techniques enough?
They were inconsistent across browsers and fragile across networks (proxies and timeouts). They also mixed presentation and data, making it hard to build reusable, developer-friendly APIs.
How is SSE different from the old “hidden frame” trick?
SSE defines a standard event stream format and a standard browser API, plus built-in reconnection behavior. The hidden-frame approach depended on browser quirks and often required sending executable HTML or scripts rather than clean event data.
