TCP Gateway Addon
Introduction
By default, Scalingo only lets you run applications that communicate over HTTP or HTTPS. Sometimes this is not enough.
This addon lets you deploy applications that accept raw TCP connections.
Setup of the Addon
Provision the Addon
First, you need to add the TCP addon to your application. This can be done from the dashboard, in the Addons section of your app, under the Network category.
Setup Configuration
In order to use the Scalingo TCP addon, you need to define a new container type. This can be done in your Procfile.
Here is a sample Procfile for a Node.js application:
web: npm start
tcp: npm run start-tcp
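For reference, the start-tcp script would then need to exist in your package.json. A hypothetical example (the script name and file names below are only an illustration):
{
  "scripts": {
    "start": "node server-web.js",
    "start-tcp": "node server-tcp.js"
  }
}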
Contact your Application
When your application starts, a PORT environment variable is defined in your container. Your application must listen on this port in order to be accessible from the outside. This is the exact same mechanism, and the exact same environment variable, as the one used to expose an HTTP server in a web container.
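As an illustration, here is a minimal sketch of a raw TCP server using Node.js' built-in net module, assuming it is the file run by the hypothetical start-tcp script above:
// server-tcp.js — minimal sketch of a raw TCP server bound to $PORT
const net = require('net');

// Scalingo injects the port to bind to through the PORT environment variable.
const port = process.env.PORT || 3000;

const server = net.createServer((socket) => {
  // Echo back whatever the client sends, just to illustrate a raw TCP exchange.
  socket.on('data', (chunk) => socket.write(chunk));
});

server.listen(port, () => {
  console.log(`TCP server listening on port ${port}`);
});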
Once up and running, your application will be accessible at the URL defined by the TCP_URL environment variable. This TCP_URL looks like:
tcp://tcp-1.osc-fr1.scalingo.com:20003
You can find this environment variable on our dashboard or by using our CLI:
scalingo --app my-app env-get TCP_URL
This URL is also accessible on the addon dashboard.
If you need to contact a tcp container from a web container, you must also use this URL: there is no direct link between your containers, every communication must go through our load balancer. The same applies if you need to contact a web container from a tcp container: you must use the public URL of the web container.
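For instance, a web container could reach the tcp container by parsing TCP_URL and opening a client connection through the load balancer. A minimal sketch with Node.js' net module:
// Sketch: contacting the tcp container from another container through TCP_URL
const net = require('net');

// TCP_URL looks like tcp://tcp-1.osc-fr1.scalingo.com:20003
const { hostname, port } = new URL(process.env.TCP_URL);

const client = net.connect({ host: hostname, port: Number(port) }, () => {
  client.write('hello from the web container\n');
});

client.on('data', (data) => {
  console.log('received:', data.toString());
  client.end();
});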
Application Restart and New Deployments
When a new version of the application is pushed or when the tcp containers are restarted, new tcp containers are started. TCP handshakes are then sent to your application. Once a TCP handshake is successful, newly incoming TCP connections are routed to the new container and old containers receive a SIGTERM signal, notifying them to stop. See container-management for more information.
The server application is responsible for closing open connections with its clients once the SIGTERM signal is caught. If the server does not close the connections, the routing rules will be updated anyway and subsequent packets will be dropped, causing the clients to time out.
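Building on the server sketch above, a possible way to honour SIGTERM is to stop accepting new connections and then close the remaining ones:
// Sketch: graceful shutdown on SIGTERM (reuses the server from the earlier example)
const sockets = new Set();

server.on('connection', (socket) => {
  sockets.add(socket);
  socket.on('close', () => sockets.delete(socket));
});

process.on('SIGTERM', () => {
  // Stop accepting new connections, then close the ones still open.
  server.close(() => process.exit(0));
  sockets.forEach((socket) => socket.end());
});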
Load Balancing
If you have multiple tcp containers running, Scalingo distributes connections across those containers. Our load balancer distributes connections during the TCP handshake. Once established, a connection is always routed to the same container.
gRPC
As described in the routing documentation, application nodes cannot receive HTTP/2 traffic. Thus, you need to deploy your application on the TCP Gateway:
tcp: <run your grpc app>
Your gRPC server will be accessible at the address defined in TCP_URL.
Don’t forget to remove unused web / worker containers from your app.
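As a sketch only, a gRPC server built with the @grpc/grpc-js package would bind to the port provided in PORT; the service definition and handlers below are placeholders you would load from your own .proto files:
// Sketch: gRPC server bound to $PORT with @grpc/grpc-js
const grpc = require('@grpc/grpc-js');

const server = new grpc.Server();
// server.addService(yourServiceDefinition, yourHandlers); // placeholders, loaded from your .proto files

server.bindAsync(
  `0.0.0.0:${process.env.PORT}`,
  grpc.ServerCredentials.createInsecure(),
  (err, boundPort) => {
    if (err) throw err;
    server.start(); // required on older @grpc/grpc-js versions, a deprecated no-op on recent ones
    console.log(`gRPC server listening on port ${boundPort}`);
  }
);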
Rate Limiting
There is currently no rate limiting in place. It may be added later.
Limitations
To be able to load balance TCP connections, connections are proxied to your containers via NAT. This process comes with a downside: the TCP source IP is replaced by our front servers' internal IP, which means that a tcp container won't be able to access the real client IP. Thus, device identification should be part of the protocol you design over the TCP connection.
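As an illustration only (this framing is an assumption, not a Scalingo convention), the client could send an identifier as the first line of the connection and the server could rely on it instead of the source IP:
// Sketch: client identification carried by the protocol instead of the source IP
const net = require('net');

const server = net.createServer((socket) => {
  socket.once('data', (chunk) => {
    // Assume the first line sent by the client is its device identifier.
    // A real implementation would buffer until a full line is received.
    const deviceId = chunk.toString().split('\n')[0].trim();
    console.log(`connection identified as device ${deviceId}`);
  });
});

server.listen(process.env.PORT || 3000);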
Samples
You can find a sample Node.js application on GitHub that uses this addon to expose an MQTT server built with Mosca.