Routing

Requests Scheduling

Each of our front web servers applies strict round-robin scheduling across the different containers of your application.

Our frontend servers also include a quarantine mechanism: if one of your containers unexpectedly cuts a connection, it is moved into quarantine and no more requests are routed to it. Our frontend servers then regularly try to reach the application again, with an exponential backoff. As soon as the container answers successfully, it leaves quarantine and starts receiving requests again.
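
To give an idea of the mechanism, here is a minimal sketch of an exponential backoff probe in Python. The delays and the requests library are purely illustrative: they are not the values or tools the platform actually uses.

    import time

    import requests  # third-party HTTP client, used here only for illustration

    def probe_until_healthy(url, base_delay=1.0, max_delay=60.0):
        """Probe a quarantined backend with exponential backoff (illustrative values)."""
        delay = base_delay
        while True:
            try:
                if requests.get(url, timeout=5).ok:
                    return  # the container answers again: it can leave quarantine
            except requests.RequestException:
                pass  # still unreachable, wait longer before the next attempt
            time.sleep(delay)
            delay = min(delay * 2, max_delay)  # double the wait, capped at max_delay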

Headers

  • X-Forwarded-For: IP address of the end user
  • X-Real-Ip: IP address of the end user
  • X-Forwarded-Proto: Either http or https
  • X-Forwarded-Port: Either 80 or 443
  • X-Request-ID: UUID identifying the request; set by the router if not already present. More information on this page.
  • X-Request-Start: Unix timestamp (with millisecond resolution) at which the request was received by the front server. The value looks like t=1693406590.527.
  • X-Scalingo-Error: Detailed error message when an error occurs on the router

Note that the HTTP library or framework you're using may downcase all header names, so be cautious.
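
For instance, a WSGI application can read these headers as follows. This is a minimal sketch using Python's standard library; WSGI exposes request headers as upper-cased HTTP_* keys, which avoids the casing issue altogether.

    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        # WSGI normalizes request header names to upper-cased HTTP_* keys.
        client_ip = environ.get("HTTP_X_REAL_IP", "")
        request_id = environ.get("HTTP_X_REQUEST_ID", "")
        proto = environ.get("HTTP_X_FORWARDED_PROTO", "http")

        body = f"request {request_id} from {client_ip} over {proto}\n".encode()
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        return [body]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()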

Guide: Detecting HTTPS requests

Max Headers Size

The header size limit is set to 32kB. This limit applies to the entire header section of the request. In addition to this global limit, a single header is limited to 8kB.

HTTPS Support

HTTPS support is included by default and managed per application. When a custom domain name is added, our Let’s Encrypt integration generates a valid certificate for this domain name, so that HTTPS works automatically.
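
If you also want to redirect plain HTTP traffic to HTTPS at the application level, the X-Forwarded-Proto header described above can be used. Here is a hypothetical WSGI middleware sketch; query string handling is deliberately omitted.

    def force_https(app):
        # Hypothetical middleware: redirect when the router reports plain HTTP.
        def middleware(environ, start_response):
            if environ.get("HTTP_X_FORWARDED_PROTO", "http") != "https":
                url = "https://" + environ.get("HTTP_HOST", "") + environ.get("PATH_INFO", "/")
                start_response("301 Moved Permanently", [("Location", url)])
                return [b""]
            return app(environ, start_response)
        return middleware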

Uploads - Max Request Size

The request size limit is set to 75MB. If you have a bigger file to upload, consider using a third-party storage service like Amazon S3 (or any other provider) and uploading files directly to it (with a signed URL for instance).

Our front servers return a “413 Payload Too Large” error when a request exceeds this limit.
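
As a sketch of the signed-URL approach, assuming an Amazon S3 bucket and the boto3 SDK (the bucket name and expiry below are placeholders, and valid AWS credentials are expected in the environment): the backend only hands out a short-lived URL, and the client uploads the file directly to S3, so the 75MB limit never comes into play.

    import boto3  # AWS SDK for Python, assumed to be installed and configured

    s3 = boto3.client("s3")

    def signed_upload_url(key, expires_in=3600):
        """Return a pre-signed URL the client can PUT the file to directly."""
        return s3.generate_presigned_url(
            "put_object",
            Params={"Bucket": "my-upload-bucket", "Key": key},  # placeholder bucket
            ExpiresIn=expires_in,
        )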

Long Running Connections - SSE - Websockets

Long running connections and websockets are fully supported, but they are subject to the same timeout constraints as any other kind of request.

HTTP/2

HTTP/2 is enabled by default for all applications. If the client supports it and reaches our servers over HTTPS, the connection is upgraded to HTTP/2 by our routing nodes.

The traffic between the routing nodes and the application containers is then done using HTTP/1.1. Hence your application should not expect HTTP/2 traffic.

The performance improvement brought by HTTP/2 is still there: the protocol was designed to be faster over slow connections (e.g. mobile, xDSL), and those clients will use HTTP/2. The multiplexed requests are then forwarded in parallel to the containers over HTTP/1.1. As the routers and containers share the same network, HTTP/2 would not have a significant impact on that leg.

Sticky Sessions

Sticky sessions can be enabled per application in the Routing Settings of the dashboard. They are enabled by default for Meteor applications; browse to the dedicated page to get a deeper insight into their implementation.

Timeouts

Your application has to accept the connection within 60 seconds and, once connected, has 60 seconds to send the first data to the client. If one of these conditions is not met, our servers return a 504 Gateway Timeout error to the end user:

  • Connection timeout: 60s
  • Read timeout: 60s

These rules also apply to long running connections like websockets. Ensure your application sends at least one ping every 59 seconds to keep the connection open; otherwise it will be closed.
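
For example, with the third-party Python websockets package (assumed here purely for illustration), the built-in ping interval can be set so the connection always shows activity before the 60-second read timeout:

    import asyncio

    import websockets  # third-party package; recent versions use a single-argument handler

    async def echo(websocket):
        # Echo every message back to the client.
        async for message in websocket:
            await websocket.send(message)

    async def main():
        # Send a protocol-level ping every 50 seconds, well under the router timeout.
        async with websockets.serve(echo, "0.0.0.0", 8080, ping_interval=50):
            await asyncio.Future()  # run forever

    asyncio.run(main())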

Requests Queue

Each router of the infrastructure keeps a local request queue for each application running on the platform. For each application, this queue is limited to 50 requests per web container. If the queue is full and a new request is received, the router returns a 503 Service Unavailable.

For instance, if your application is using 2 containers of type web, our proxies will queue up to 100 requests and reject any further requests.

When requests are being queued, the application cannot cope with the amount of traffic it receives, and piling up more requests to handle would not help. It usually means your application is not sized correctly for the traffic it needs to handle: either add more web containers, or optimize the application to respond to requests faster.

Requests Compression

Scalingo routers handle the gzip compression of responses when the incoming request contains the HTTP header Accept-Encoding: gzip, for a limited list of content types: text/html, text/plain, text/css, application/x-javascript, text/xml, application/xml, application/rss+xml, application/atom+xml, text/javascript, application/javascript, application/json, text/mathml.

If you need a different compression algorithm, or compression for other types of files, your application has to handle it itself.
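
For instance, a response type that is not on the list above, such as image/svg+xml, can be compressed by the application itself when the client accepts gzip. A minimal WSGI sketch (the file name is a placeholder):

    import gzip

    def app(environ, start_response):
        with open("logo.svg", "rb") as f:  # placeholder file
            body = f.read()
        headers = [("Content-Type", "image/svg+xml")]
        if "gzip" in environ.get("HTTP_ACCEPT_ENCODING", ""):
            body = gzip.compress(body)  # compress in the application
            headers.append(("Content-Encoding", "gzip"))
        headers.append(("Content-Length", str(len(body))))
        start_response("200 OK", headers)
        return [body]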

