SPM-2025-001 DDoS attack
TL;DR
On March 11th, the osc-fr1 region experienced multiple successive waves of DDoS attacks throughout the day. These attacks resulted in a cumulative service disruption of 49 minutes for this region. This incident occurred within a broader context in which several French companies and public entities were targeted by similar attacks during the same period. The attacks were sudden and large-scale, requiring us to take network configuration measures to isolate the impacted IP addresses and protect the overall stability of the platform.
While this incident affected network access to your application and databases, the integrity and confidentiality of your data and applications were maintained.
What happened?
On Tuesday morning, starting at 11:08, we began observing early signs of unusual traffic patterns in the osc-fr1 region, including a small drop in request volume (requests per minute) on the platform. However, these initial signals were not significant enough to trigger an alert. At 11:17, several monitoring probes triggered alerts simultaneously, and the osc-fr1 region became unreachable.
We immediately contacted our infrastructure provider to investigate whether the issue originated on their end. The service recovered at 11:30, and upon analyzing platform metrics, we identified that we had been targeted by a SYN flood DDoS attack on one of our public IP addresses: 148.253.75.120.
A SYN flood DDoS attack is a type of Denial of Service (DoS) attack where a large number of TCP connection requests (SYN packets) are sent to a server, but the handshake is never completed. This causes the server to allocate resources for incomplete connections, eventually exhausting its capacity and making it unresponsive to legitimate traffic.
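To make the symptom more concrete, here is a minimal, illustrative Python sketch (not our production tooling) that counts half-open connections on a Linux host by reading /proc/net/tcp; a sustained, abnormally high count is one sign of a SYN flood. The alert threshold is an arbitrary assumption.

```python
#!/usr/bin/env python3
"""Illustrative sketch only (not our production tooling): count half-open
TCP connections on a Linux host by reading /proc/net/tcp. A sustained,
abnormally high count is one symptom of a SYN flood."""

SYN_RECV = "03"  # hex code of the SYN_RECV state in /proc/net/tcp

def count_half_open(proc_file: str = "/proc/net/tcp") -> int:
    count = 0
    with open(proc_file) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            # The 4th column ("st") holds the connection state.
            if len(fields) > 3 and fields[3] == SYN_RECV:
                count += 1
    return count

if __name__ == "__main__":
    half_open = count_half_open()
    print(f"half-open connections: {half_open}")
    # The threshold below is an arbitrary assumption for illustration.
    if half_open > 1000:
        print("warning: possible SYN flood in progress")
```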
The situation repeated shortly after, with a second wave of unavailability from 11:40 to 11:53. We re-engaged with our infrastructure provider, escalating the support ticket. The attack was of such scale that it led to a full regional outage of osc-fr1.
In the early afternoon, a new wave of attacks began. We identified that the same IP address (148.253.75.120) was once again being targeted. As a result, we decided to exclude this IP from our public IP pool. DNS entries were reconfigured accordingly, and the IP was attached to a traffic offloading VPC to both protect the platform and allow for in-depth network traffic analysis.
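As a side note, a change of this kind can be verified from the outside by checking which addresses a platform domain resolves to. The short Python sketch below is purely illustrative: the domain name is a hypothetical placeholder, not an actual platform record.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: check whether a domain still resolves to an
IP address that was excluded from the pool. The domain below is a
hypothetical placeholder, not an actual platform record."""
import socket

EXCLUDED_IPS = {"148.253.75.120", "109.232.233.130"}

def resolved_addresses(hostname: str) -> set:
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples;
    # the first element of sockaddr is the IP address.
    infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    return {info[4][0] for info in infos}

if __name__ == "__main__":
    try:
        addresses = resolved_addresses("my-app.osc-fr1.example.com")  # placeholder
    except socket.gaierror as err:
        print(f"resolution failed: {err}")
    else:
        overlap = addresses & EXCLUDED_IPS
        if overlap:
            print(f"still resolving to excluded IP(s): {overlap}")
        else:
            print(f"resolves to: {addresses}")
```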
Later in the day, yet another and final DDoS wave targeted a different public IP address, 109.232.233.130. We applied the same mitigation strategy, redirecting traffic for this IP to the previously created isolated VPC.
To restore our usual level of resiliency, we proactively added two temporary public IP addresses to our pool. By 18:32, after confirming that the situation had stabilized, we restored normal traffic routing to the two affected IPs. At this point, we also improved our observability tooling to identify targeted infrastructure components faster and established an incident response procedure for quicker mitigation of similar attacks.
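To illustrate the kind of signal involved, the sketch below flags a sudden drop in requests per minute against a rolling baseline. The metric source, window, and threshold are assumptions for illustration and do not describe the platform's actual probes or alerting rules.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: flag a sudden drop in requests per minute (RPM)
against a rolling baseline. The window and threshold are assumptions and do
not describe the platform's actual probes or alerting rules."""
from collections import deque

class RpmDropDetector:
    def __init__(self, window: int = 15, drop_ratio: float = 0.5):
        # window: number of recent samples forming the baseline
        # drop_ratio: alert when the current RPM falls below this fraction of it
        self.samples = deque(maxlen=window)
        self.drop_ratio = drop_ratio

    def observe(self, rpm: float) -> bool:
        """Record a sample; return True if it looks like an abnormal drop."""
        alert = False
        if len(self.samples) == self.samples.maxlen:
            baseline = sum(self.samples) / len(self.samples)
            alert = rpm < baseline * self.drop_ratio
        self.samples.append(rpm)
        return alert

if __name__ == "__main__":
    detector = RpmDropDetector()
    # 15 minutes of steady traffic, a normal fluctuation, then a sharp drop.
    for rpm in [1000.0] * 15 + [980.0, 350.0]:
        if detector.observe(rpm):
            print(f"alert: RPM dropped to {rpm}")
```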
The following day, after confirming that no further attack waves were occurring, the public IP pool configuration was restored to its original state.
Timeline of the incident
All times given are in CET (UTC+1)
11:08 | Initial signs of unusual network traffic appear in the osc-fr1 region.
11:17 | A DDoS attack targeting public IP 148.253.75.120 triggers alerts, indicating the region is unreachable.
11:25 | An incident ticket is opened with our infrastructure provider to assess whether the issue is on their end and to obtain any relevant insights.
11:30 | The region is fully reachable again.
11:35 | Analysis of our metrics indicates that we are facing a DDoS attack.
11:40 | A new attack wave begins, again targeting 148.253.75.120.
11:44 | The incident is escalated with our infrastructure provider, and we reach out to our Technical Account Manager (TAM) for support.
11:53 | The region is fully reachable again.
14:16 | A new attack wave is detected, still targeting 148.253.75.120.
14:23 | IP 148.253.75.120 is removed from the public IP pool.
14:26 | The region is fully reachable again.
14:41 | IP 148.253.75.120 is moved back into the pool.
14:46 | IP 148.253.75.120 is still under attack; it is removed from the pool again.
15:23 | A temporary VPC is provisioned, and the targeted IP is attached to it for traffic offloading and analysis.
16:28 | A new attack wave occurs, this time targeting public IP 109.232.233.130.
16:34 | Traffic for this IP is redirected to the previously created isolated VPC.
16:36 | The platform is fully operational.
16:45 | A new public IP is added to the pool.
16:59 | An additional public IP is added to the pool.
18:32 | Traffic routing to the previously excluded IPs is restored.
Impact
- On osc-fr1, the cumulative impact amounts to 49 minutes of unavailability over the course of March 11th.
- On osc-secnum-fr1, there was no impact for our customers.
Immediate actions taken
- The targeted public IP addresses were isolated into a dedicated VPC to preserve platform stability and allow for detailed analysis of the attack traffic.
- The attacked IP addresses were replaced with new public IPs that were unknown to the attackers, in order to restore service resilience and mitigate further disruption.
- Observability tools were improved to enable faster identification of the targeted infrastructure components in the event of future attacks. A corresponding incident response procedure was also established to allow for quicker mitigation in similar scenarios.
Actions in progress
- A redesign of the platform’s overall architecture is underway to achieve greater resilience against this type of incident, with a more segmented and distributed approach. This project has been in progress for several months and continues as part of our long-term infrastructure improvement efforts.
- We are also evaluating third-party solutions, including options from our infrastructure provider and external vendors, to introduce additional layers of protection against large-scale DDoS attacks.
Financial compensation
Our service level agreement (SLA) guarantees 99.9% availability for applications deployed across two or more containers, which translates to a maximum of 43 minutes of downtime per month.
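For reference, the 43-minute figure follows from applying the 99.9% target to a 30-day month, as the quick calculation below shows.

```python
# Quick check of the downtime budget quoted above, assuming a 30-day month.
minutes_per_month = 30 * 24 * 60                    # 43,200 minutes
allowed_downtime = minutes_per_month * (1 - 0.999)  # 0.1% of the month
print(f"{allowed_downtime:.1f} minutes")            # ~43.2 minutes
```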
If you believe that some of your applications were affected, please contact our support team.
Changelog
2025-03-28: Initial version