Optimizing global latency: how Commerce Layer supercharged CDN performance.
How we reduced global response times by up to 47% through smart CDN optimizations — synthetic edge responses and origin shielding across distant regions.
At Commerce Layer, we live and breathe performance. Every millisecond matters when you’re running commerce APIs at global scale. Our customers expect near-instant responses, and that means constantly tuning the network so that the "distance" between user and data keeps shrinking — even when they’re halfway around the world.
Recently, we introduced two key CDN-level improvements that have significantly reduced response times across all regions:
- Synthetic responses to browsers’ pre-flight requests at the edge
- Origin shielding for distant regions
Each tackles a different type of latency, but together they’ve driven measurable gains in our core performance metric — time to first byte (TTFB) — for both average and tail latencies.
Synthetic responses to pre-flight requests at the edge
Modern web applications rely heavily on cross-origin resource sharing (CORS). Whenever a browser makes an API call that uses non-safelisted headers (such as `Authorization`) or non-simple methods (like PUT, PATCH, or DELETE), it first sends a pre-flight request — an HTTP OPTIONS call — to verify that the target server permits the operation.
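As a quick illustration, the browser's decision roughly follows the Fetch spec's "CORS-safelisted" rules. Here is a simplified sketch (real browsers also check header value restrictions, which we skip here):

```typescript
// Sketch: decide whether a browser would send a CORS pre-flight for a
// given request, per the Fetch spec's safelists (simplified).

const SAFE_METHODS = new Set(["GET", "HEAD", "POST"]);
const SAFE_HEADERS = new Set([
  "accept",
  "accept-language",
  "content-language",
  "content-type",
]);

function needsPreflight(method: string, headers: string[]): boolean {
  if (!SAFE_METHODS.has(method.toUpperCase())) return true;
  // Any non-safelisted header (e.g. Authorization) triggers a pre-flight.
  return headers.some((h) => !SAFE_HEADERS.has(h.toLowerCase()));
}

// A typical authenticated API call: PATCH plus an Authorization header
// forces an OPTIONS pre-flight before the real request is sent.
console.log(needsPreflight("PATCH", ["Authorization", "Content-Type"])); // true
console.log(needsPreflight("GET", ["Accept"])); // false
```

For an authenticated commerce API, nearly every call carries an `Authorization` header, which is why pre-flights make up such a large slice of traffic.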
Pre-flight requests are small and informational, but the irony is that they often travel as far as a real API request would. That round trip adds latency to every subsequent call and inflates perceived response times, especially for UI elements such as micro-frontends (MFEs) or Drop-Ins.
Even with caching, this setup means unnecessary network hops for millions of lightweight requests every day. Given that pre-flights represented 13.26% of total traffic, this overhead had a real, measurable impact on global latency.
The optimization
We implemented synthetic pre-flight responses directly at the CDN edge. In practice, this means the edge node instantly recognizes a valid pre-flight request and generates the appropriate OPTIONS response without ever contacting the origin.
Here’s what happens now:
- The CDN edge inspects incoming `OPTIONS` requests.
- If the request matches the expected CORS patterns (verified domain, methods, headers), it immediately returns a synthetic `204 No Content` with the correct `Access-Control-Allow-*` headers.
- The request never leaves the edge, eliminating the origin round trip.
- The browser receives the response in near-zero time and proceeds to send the actual API call.
This logic runs at the CDN’s edge scripting layer, integrated into our caching configuration. The synthetic response includes dynamically maintained values for:
- `Access-Control-Allow-Origin` — based on the request’s `Origin` header and our configured allowlist.
- `Access-Control-Allow-Methods` and `Access-Control-Allow-Headers` — aligned with API contract metadata.
- `Access-Control-Max-Age` — to let browsers cache the permission check.
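The edge check can be sketched as a small handler. This is a minimal illustration of the idea, not our production edge code: the allowlist, methods, and header values below are hypothetical stand-ins.

```typescript
// Minimal sketch of a synthetic pre-flight responder at the edge.
// Allowlist and header values are illustrative, not production config.

type EdgeRequest = { method: string; headers: Record<string, string> };
type EdgeResponse = { status: number; headers: Record<string, string> };

const ALLOWED_ORIGINS = new Set(["https://shop.example.com"]); // hypothetical

function handlePreflight(req: EdgeRequest): EdgeResponse | null {
  const origin = req.headers["origin"];
  // Only intercept OPTIONS requests from allowlisted origins;
  // everything else falls through to the normal request path.
  if (req.method !== "OPTIONS" || !origin || !ALLOWED_ORIGINS.has(origin)) {
    return null;
  }
  return {
    status: 204, // synthetic "204 No Content", served without touching origin
    headers: {
      "access-control-allow-origin": origin,
      "access-control-allow-methods": "GET, POST, PUT, PATCH, DELETE, OPTIONS",
      "access-control-allow-headers": "Authorization, Content-Type",
      "access-control-max-age": "86400", // browser caches the check for a day
    },
  };
}
```

Returning `null` for non-matching requests keeps the handler composable: the edge runtime simply continues with its usual cache lookup and origin fetch.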
Benefits beyond latency
- Reduced origin load: Millions of pre-flight requests no longer hit the origin, freeing resources for actual API operations.
- Lower bandwidth cost: Terminating pre-flights at the edge also removes their traffic from the edge-to-origin path.
- Consistent behavior: The edge logic ensures CORS responses are uniform worldwide, even if origin clusters differ slightly in configuration.
- Instantaneous response times: The TTFB for pre-flights is effectively zero, since the edge serves the request immediately.
The results
While these requests represent only a fraction of total traffic, their optimization had an outsized global impact:
- Pre-flight TTFB reduced to near zero.
- Global average TTFB decreased by 23.4%.
- 95th-percentile TTFB improved by 13.3%.
In short, we eliminated a category of latency that had been silently slowing down a significant share of requests worldwide.
Origin shielding for regions distant from US/EU
While synthetic responses improved performance everywhere, customers in regions far from our origins — like Asia-Pacific, Latin America, and Africa — still experienced longer response times due to simple physics: distance. To fix that, we implemented the CDN’s origin shield feature to bring data effectively “closer” to those users.
How origin shielding works
A shield is a designated CDN Point of Presence (POP) that sits between the outer edge POPs and the origin. Rather than each edge contacting the origin on a cache miss, edges forward those requests to the shield POP.
If the shield has the object cached, it replies immediately. If not, it fetches it once from the origin, caches it, and serves subsequent requests locally.
This architecture reduces the number of direct origin fetches and consolidates traffic, creating a single optimized path per region.
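The consolidation effect can be modeled with a toy cache: however many edge POPs miss on the same object, the origin is fetched only once. A sketch, with a made-up resource key and origin function:

```typescript
// Toy model of origin shielding: many edge POPs miss on the same
// object, but the origin is contacted only once via the shield.

class Shield {
  private cache = new Map<string, string>();
  originFetches = 0;

  constructor(private origin: (key: string) => string) {}

  get(key: string): string {
    const hit = this.cache.get(key);
    if (hit !== undefined) return hit; // shield hit: no origin trip
    this.originFetches++;              // shield miss: one fetch to origin
    const value = this.origin(key);
    this.cache.set(key, value);
    return value;
  }
}

const shield = new Shield((key) => `body-of-${key}`);

// Five distant edge POPs all miss on the same object and forward
// their requests to the shield...
for (let i = 0; i < 5; i++) {
  shield.get("/api/skus/123"); // hypothetical resource path
}

// ...yet the origin was fetched exactly once.
console.log(shield.originFetches); // 1
```

Real shields also collapse concurrent in-flight requests for the same object, which this sequential sketch glosses over.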
Our implementation
We identified the best POP to act as a shield for each of our origins, evaluating factors such as distance from the origin, capacity, and level of congestion. Since each request is assigned a specific origin as its destination, we configured edge POPs in distant regions to forward misses to that origin’s shield POP.
By consolidating origin requests this way, we achieved:
- Lower round-trip times between edge and shield (as both are on the CDN backbone).
- Fewer duplicate origin requests, improving cache efficiency.
- Reduced variability, especially for 95th-percentile latency, since shield fetches travel over optimized internal links.
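The shield-selection step could be expressed as a simple weighted score over candidate POPs. The factors come from the evaluation described above; the weights and POP data below are purely illustrative, and the real evaluation is more nuanced.

```typescript
// Hypothetical scoring of candidate shield POPs by distance from the
// origin, capacity, and congestion. Weights and data are made up.

type Pop = { id: string; distanceMs: number; capacity: number; congestion: number };

function shieldScore(pop: Pop): number {
  // Lower is better: penalize RTT and congestion, reward capacity.
  return pop.distanceMs + 50 * pop.congestion - 10 * pop.capacity;
}

function pickShield(candidates: Pop[]): Pop {
  return candidates.reduce((best, p) => (shieldScore(p) < shieldScore(best) ? p : best));
}

const candidates: Pop[] = [
  { id: "fra", distanceMs: 8, capacity: 9, congestion: 0.3 },
  { id: "ams", distanceMs: 12, capacity: 10, congestion: 0.1 },
  { id: "lhr", distanceMs: 20, capacity: 8, congestion: 0.2 },
];

console.log(pickShield(candidates).id); // "ams"
```

Note that the POP nearest the origin is not automatically the winner: a slightly farther POP with more headroom and less congestion can score better.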
The results
In all involved regions:
- Average TTFB improved by 47.1%
- 95th-percentile TTFB improved by 44.9%
That’s nearly half the latency gone for users previously farthest from our origins.
Delivering performance that scales with growth
Together, these two improvements — synthetic pre-flight responses and origin shielding — attack latency at both the micro and macro levels. One trims unnecessary network hops globally; the other re-architects how data flows across continents.
Enabling the next generation of AI-driven and agentic commerce
These optimizations don’t just make human-driven commerce faster — they also lay the groundwork for AI agentic commerce, where autonomous systems make, manage, and optimize transactions on behalf of users.
In this emerging model, speed, predictability, and data freshness become even more critical. Agent-based systems — such as shopping bots, inventory optimizers, and personalization engines — operate in real time and depend on rapid API feedback loops to act intelligently.
By minimizing network latency at both the edge and the origin, our CDN improvements unlock several key advantages for AI-driven commerce:
- Real-time decisioning: Agents can evaluate offers, prices, and stock availability with near-zero lag, enabling dynamic purchasing or repricing at scale.
- Higher reliability for autonomous workflows: Reduced tail latency (95th-percentile TTFB) ensures that long or chained API sequences complete without timeouts or bottlenecks.
- Improved data coherence: With faster propagation through shields and consistent edge responses, AI systems maintain a more accurate view of global commerce states.
- Lower compute overhead: By cutting network latency, AI agents spend less time idling between calls, improving overall throughput and cost efficiency.
These capabilities turn performance from a metric into an enabler — allowing both human developers and intelligent agents to build responsive, adaptive, and high-frequency commerce experiences on top of Commerce Layer’s APIs.