Web API History Series • Post 46 of 240
Chapter 46 (1990–1994): Before CORS — How Early HTTP Interfaces Shaped Cross-Origin API Access
A chronological guide to CORS and cross-origin API access, and their place in the long evolution of web APIs.
When developers talk about CORS today, they’re usually dealing with modern browsers enforcing a security boundary between one site and another. But if you rewind to the earliest public years of the Web—roughly 1990 through 1994—you find something surprising: the core ingredients of web APIs were already forming (URLs, request/response, headers, status codes), yet cross-origin browser API calls were not the central problem they would become.
This chapter follows a simple chronological point: CORS didn’t “arrive” in those years, but the Web’s first HTTP interfaces created the conditions that made CORS necessary later. In other words, the origin boundary problem is easiest to understand by looking at the era when the Web didn’t meaningfully have it.
The early Web’s “API” was the URL and the response
In the early 1990s, a web client (a browser) primarily did one thing: fetch a document by URL over HTTP and render it. If you squint, that is already an API call: a standardized request (method + URL + headers) returns standardized response metadata (status code + headers) and a body. The difference from today is that the “consumer” wasn’t JavaScript code running in a page; it was the browser itself, acting on the user’s behalf.
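That contract — method + URL + headers in, status + headers + body out — is easy to make concrete. The sketch below is purely illustrative (the request path and canned response are made up): it builds an HTTP/1.0-style request by hand and parses a response into the three parts the text describes.

```python
# Illustrative only: serialize an HTTP/1.0-style request by hand
# and parse a canned response, to show that "fetch a document"
# was already a structured API call.

def build_request(method: str, path: str, headers: dict) -> str:
    """Serialize method + path + headers into an HTTP/1.0 request."""
    lines = [f"{method} {path} HTTP/1.0"]
    lines += [f"{name}: {value}" for name, value in headers.items()]
    return "\r\n".join(lines) + "\r\n\r\n"

def parse_response(raw: str):
    """Split a raw response into (status code, headers, body)."""
    head, _, body = raw.partition("\r\n\r\n")
    status_line, *header_lines = head.split("\r\n")
    status = int(status_line.split()[1])
    headers = dict(line.split(": ", 1) for line in header_lines)
    return status, headers, body

print(build_request("GET", "/index.html", {"Accept": "text/html"}))

canned = "HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n<h1>Hello</h1>"
status, headers, body = parse_response(canned)
print(status, headers["Content-Type"])  # → 200 text/html
```

The point of the toy parser is that every piece the browser consumed in 1992 — status code, headers, body — is the same metadata CORS would later express policy through.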
That framing matters for cross-origin access. Modern CORS is mostly about scripts in one origin trying to read data from another origin. In 1990–1994, the browser was not yet a rich application runtime. There wasn’t a dominant model of in-page scripting calling remote APIs and reading structured data. So the Web could be wildly cross-site in navigation terms (links everywhere) without needing a fine-grained cross-origin data access policy.
1990–1992: Hypertext first, programmable clients later
The first wave of web usage centered on publishing and retrieving documents. Early HTTP interactions were simple by design: get a resource, display it. If a site linked to another site, your browser simply navigated there. That’s cross-origin in the sense of switching domains, but it’s not cross-origin data access in the modern “AJAX” sense.
In this period, many “integrations” happened outside the browser. If you wanted data from another system, you often pulled it server-to-server or through specialized clients. A lot of what we’d now call an API integration happened through command-line tools, academic software, or gateway services. The browser wasn’t yet a universal programmable agent.
1993–1994: Forms and CGI made the Web interactive (and API-like)
Around the time graphical browsers such as Mosaic helped popularize the Web, the experience shifted from passive reading to interacting with servers. HTML forms and early server-side interfaces (commonly implemented via CGI programs) enabled workflows that look a lot like API interactions:
- A user submits input from a page (parameters).
- The browser sends an HTTP request to a server endpoint.
- The server runs logic (often a script or compiled program).
- The server returns a response (often HTML, sometimes plain text).
This is an important milestone in web API history because it created a repeatable pattern: clients send inputs, servers return computed outputs through standardized HTTP. That’s basically the core idea behind APIs—just without today’s conventions (JSON, typed schemas, client libraries, auth standards, and so on).
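As a rough sketch of that pattern (names are hypothetical; a real CGI program reads QUERY_STRING from its process environment and writes the response to stdout):

```python
# A hypothetical CGI-style handler: the web server passes the
# form submission in via QUERY_STRING; the program computes an
# answer and emits headers plus a body, which the server relays
# as the HTTP response.
from urllib.parse import parse_qs

def handle_cgi(environ: dict) -> str:
    params = parse_qs(environ.get("QUERY_STRING", ""))
    name = params.get("name", ["world"])[0]
    # Header block, blank line, then the body — the response
    # shape an early-1990s server expected from a CGI program.
    return "Content-Type: text/html\r\n\r\n" + f"<p>Hello, {name}!</p>"

# Simulate a form submission of name=Tim:
print(handle_cgi({"QUERY_STRING": "name=Tim"}))
```

Parameters in, computed output out: the endpoint-and-parameters habit that later API design inherited.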
Yet even with forms and CGI, the cross-origin issue still looked different. When a user submits a form to another domain, the browser is essentially navigating or posting data; it is not executing a script that then tries to read and process a cross-site response inside the original page context. The browser’s security needs were present, but the threat model was less about in-page code exfiltrating data from another origin and more about basic transport and trust.
Why CORS wasn’t “needed” yet: the missing JavaScript-era data plane
The key concept behind CORS is that a browser must decide whether a page from Origin A is allowed to read responses from Origin B. That question becomes urgent when pages run rich client-side code and when the browser becomes a general-purpose application platform.
In 1990–1994, the Web was not yet dominated by the idea of client-side applications calling remote endpoints for data to be rendered dynamically inside the same page. Pages were mostly documents. “API calls” were mostly navigations and form submissions. So cross-origin access was less about readable data responses and more about:
- Linking (any page could link anywhere, and the user could follow links freely).
- Embedding early media types (a precursor to later cross-origin resource inclusion).
- Server-side aggregation (a server could fetch remote resources and present them locally).
That last item—server-side aggregation—is a crucial “proto-solution” to cross-origin constraints. Even today, when CORS blocks a browser from reading a response, teams often create a server-side proxy endpoint that fetches the data and returns it to the browser from the same origin. In the early 1990s, that pattern wasn’t a workaround; it was frequently the default way to integrate remote systems.
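The proxy pattern is simple to sketch. In the hypothetical example below, a page calls an endpoint on its own origin and the server performs the cross-origin fetch, so the browser never does a cross-origin read at all (endpoint and upstream URL are made up):

```python
# A same-origin proxy sketch: the browser asks its own origin
# for the data; the server fetches the remote resource and
# relays it, so the browser's same-origin policy never applies.
from urllib.request import urlopen

UPSTREAM = "https://weather.example.com/today"  # hypothetical remote API

def proxy_weather(fetch=lambda url: urlopen(url).read()):
    # The fetcher is injectable so the handler can be exercised
    # without network access.
    body = fetch(UPSTREAM)
    # Served back to the page from its own origin.
    return {"status": 200, "body": body}

# Demo with a stub standing in for the remote service:
result = proxy_weather(fetch=lambda url: b'{"temp_c": 21}')
print(result["status"], result["body"])
```

The same shape — same-origin endpoint in front, server-side fetch behind — is still the standard answer when a third-party API offers no CORS headers.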
The origin boundary starts as a browser security principle
As browsers became more capable, the industry needed a rule for what a page is allowed to read. This is the philosophical seed that later grows into the same-origin policy and then into CORS as a controlled relaxation mechanism.
Even if you don’t pin an exact “birth date” for these policies in 1990–1994, you can see the pressure building: once the browser is not merely a viewer but a runtime, the Web needs guardrails. Cross-origin navigation is fine; cross-origin data access is a different category because it can silently move information across site boundaries.
Early HTTP interfaces set the API conventions CORS depends on
CORS works by using HTTP headers and standardized request flows (including preflight requests in later implementations). That means CORS is built on the idea that HTTP is not just a transport; it’s an application interface.
In 1990–1994, the community was already discovering that HTTP metadata matters. Status codes communicated whether an operation worked. Headers carried content types and other signals. Servers and clients negotiated capabilities. Those are the mechanics that later made it possible to express security policy in-band, using headers such as Access-Control-Allow-Origin.
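To make the in-band idea concrete, here is a minimal sketch (the allow-list and origins are hypothetical) of a server attaching CORS policy to a response as ordinary header metadata; note that it is the browser, not the server, that enforces the outcome:

```python
# Hypothetical allow-list: access policy expressed as plain
# HTTP response headers, the extensible metadata channel that
# the early Web established.
ALLOWED_ORIGINS = {"https://app.example.com"}

def response_headers(request_origin: str) -> dict:
    headers = {"Content-Type": "application/json"}
    if request_origin in ALLOWED_ORIGINS:
        # The browser reads this header and decides whether the
        # requesting page may see the response body.
        headers["Access-Control-Allow-Origin"] = request_origin
        headers["Vary"] = "Origin"  # caches must key on Origin
    return headers

print(response_headers("https://app.example.com"))
print(response_headers("https://elsewhere.example.net"))
```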
From “document web” to “application web”: the inevitable collision
The 1990–1994 era can be summarized as: the Web learned how to address things (URLs), how to transfer representations (HTTP), and how to accept user input (forms + scripts). Those steps quietly created a future where browsers would execute applications.
Once that transition happened, the Web needed a standard, interoperable way to say: “This resource is allowed to be read by code from that other site.” That’s the niche CORS fills, and it’s why the best way to understand CORS is to understand that it is not a random restriction—it’s a response to a browser becoming a platform.
For a modern, authoritative explanation of how CORS works in today’s browsers—simple requests, preflight, and the key headers—see the documentation at MDN Web Docs: Cross-Origin Resource Sharing (CORS).
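As a taste of the preflight flow, the sketch below (origins and the allowed-method list are hypothetical) mimics how a server might answer one: the browser sends an OPTIONS request describing what it intends to do, and the server replies with Access-Control-Allow-* headers that the browser then checks before sending the real request.

```python
# Hypothetical preflight responder: the browser's OPTIONS request
# carries the page's Origin and the intended method; the server
# answers with its policy, and the browser decides whether the
# actual request may proceed.
ALLOWED_ORIGIN = "https://app.example.com"
ALLOWED_METHODS = {"GET", "POST", "PUT"}

def handle_preflight(origin: str, requested_method: str) -> dict:
    if origin != ALLOWED_ORIGIN or requested_method not in ALLOWED_METHODS:
        return {"status": 403, "headers": {}}
    return {
        "status": 204,  # a preflight answer needs no body
        "headers": {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Methods": ", ".join(sorted(ALLOWED_METHODS)),
            "Access-Control-Max-Age": "600",  # browser may cache this answer
        },
    }

print(handle_preflight("https://app.example.com", "PUT")["status"])       # → 204
print(handle_preflight("https://elsewhere.example.net", "PUT")["status"])  # → 403
```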
What to remember about 1990–1994 in the CORS timeline
- No CORS yet: the Web’s first years didn’t revolve around browser scripts calling APIs across sites.
- HTTP already acted like an API interface: requests and responses were standardized, even when the body was mostly HTML.
- Interactivity arrived via forms and server scripts: an early “API pattern” that taught developers to think in endpoints and parameters.
- Cross-origin was mostly navigation: linking was the primary cross-site behavior, not in-page data access.
- The foundation for header-based policy was laid: later CORS mechanisms depend on HTTP’s extensible metadata model.
FAQ
Did CORS exist in 1990–1994?
Not in the modern sense. CORS is a browser-enforced mechanism that relies on conventions and headers standardized later. In 1990–1994, the Web was primarily document retrieval and basic interactivity, so the typical “AJAX cross-origin read” scenario wasn’t yet the main driver.
How did developers do cross-site integration before CORS?
Common approaches were server-to-server fetching, gateway scripts, and proxies that pulled remote data and then served it from the same origin as the site. Those patterns predate CORS and remain common today as a way to centralize authentication, caching, and security.
What early Web feature most resembles an API call?
HTML forms paired with server-side scripts are a strong early analog. A form submission sends parameters to an endpoint, the server runs logic, and the response returns computed output—often HTML, but the request/response contract is the key API-like element.
Why is CORS implemented with HTTP headers?
Because HTTP already provided a standardized way to attach metadata to requests and responses. Once browsers needed an interoperable way to express “who may read this response,” headers became the most compatible mechanism across servers, clients, and intermediaries.
