Gigalixir Status Page - Notice history

Dashboard - Operational · 100% uptime (Aug-Oct 2025)

US-Central1 (Google Cloud) - Operational · 100% uptime (Aug-Oct 2025)

Europe-West1 (Google Cloud) - Operational · 100% uptime (Aug-Oct 2025)

US-East-1 (AWS) - Operational · 100% uptime (Aug-Oct 2025)

US-West-2 (AWS) - Operational · 100% uptime (Aug-Oct 2025)

Notice history

Oct 2025

No notices reported this month

Sep 2025

GCP-US Central - Degraded performance on the shared ingress systems
  • Postmortem

    2025-09-19 us-central1 Ingress Incident Report

    Updated September 22, 2025

    Description of Issue

    One of our shared ingress systems was hit by a DDoS attack originating from many distinct sources.

    A distributed denial-of-service (DDoS) attack is a malicious attempt to disrupt the normal traffic of a targeted server, service, or network by overwhelming the target or its surrounding infrastructure with a flood of Internet traffic. DDoS attacks are effective because they use multiple compromised computer systems as sources of attack traffic.


    Scope of the Issue

    The attack degraded our ability to process incoming requests on one of our shared ingress systems in the us-central1 GCP region. It prevented or degraded incoming traffic to applications on the same shared ingress setup. Applications on our dedicated ingress infrastructure were not affected, nor were applications on adjacent shared ingress systems.

    We identified and mitigated the effects relatively quickly, restoring traffic to affected applications, and blocked the attacker(s) after applying additional measures.


    Prevention Measures

    We have applied several layers of protections to make our system more resilient to this type of attack in the future. These measures include, but are not limited to:

    • dynamic rate limiting

    • dynamic firewall rules

    • additional monitors and alerts

    Additionally, we are working to split our shared ingress systems into smaller pieces to limit the scope of any similar attack in the future.
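    Gigalixir has not published the internals of its dynamic rate limiting; as an illustration only, a per-source token bucket (a common building block for this kind of mitigation) can be sketched as follows. All names here are hypothetical.

    ```python
    class TokenBucket:
        """Per-source rate limiter: each source may burst up to `capacity`
        requests, then is throttled to `rate` requests per second."""

        def __init__(self, rate: float, capacity: float):
            self.rate = rate          # tokens refilled per second
            self.capacity = capacity  # maximum burst size
            self.state = {}           # source -> (tokens remaining, last seen time)

        def allow(self, source: str, now: float) -> bool:
            tokens, last = self.state.get(source, (self.capacity, now))
            # Refill tokens for the time elapsed since this source's last request.
            tokens = min(self.capacity, tokens + (now - last) * self.rate)
            allowed = tokens >= 1.0
            if allowed:
                tokens -= 1.0
            self.state[source] = (tokens, now)
            return allowed
    ```

    Because each source has its own bucket, a flood from one set of addresses exhausts only its own tokens while well-behaved clients continue to be served.
    
    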


    Customer Recommendations

    Gigalixir is always working to improve our infrastructure to prevent and lessen the impact of attacks like this one.

    However, for general application protection, we would recommend the following:


    1. Run more than one Replica

    When you run more than one replica, we run them on multiple servers and across multiple zones.

    This helps prevent an issue on a single server or zone from completely taking your application offline.

    2. Consider Dedicated Ingress

    Dedicated Ingress gives your applications their own load balancer and ingress resources.

    Applications on dedicated ingress also run in a separate runtime server pool from our common runtime server pool, which provides further isolation.

    3. Consider DDoS Protections and/or WAF


    There are a handful of good products that offer protections for traffic coming through your domain and/or hostnames.

    (We can provide individual recommendations upon request.)

    4. Source Filtering

    We commonly work with customers to add source filtering in our system to ensure traffic only comes from trusted sources. The details of this vary depending on the customer's domain setup. 


    One of our preferred solutions is to utilize Cloudflare for your DNS with a rule that applies a signature to all requests. If you couple this with our Dedicated Ingress, we can limit all traffic through the ingress system to only traffic coming from your Cloudflare setup. Cloudflare offers DDoS protections for all plan levels, including their free plan.
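    As a minimal sketch of the origin-side half of this setup (the header name and secret below are hypothetical, not Gigalixir's actual implementation): a Cloudflare Transform Rule adds a custom request header at the edge, and the origin rejects any request that lacks it, since direct-to-origin traffic bypassing Cloudflare will not carry the header.

    ```python
    import hmac

    # Hypothetical header set by a Cloudflare Transform Rule at the edge.
    SIGNATURE_HEADER = "X-Origin-Signature"
    # Hypothetical shared secret; in practice, store this in a secret manager.
    SHARED_SECRET = "replace-with-a-long-random-value"

    def request_allowed(headers: dict) -> bool:
        """Accept only requests carrying the signature added at the edge."""
        supplied = headers.get(SIGNATURE_HEADER, "")
        # Constant-time comparison avoids leaking the secret via timing.
        return hmac.compare_digest(supplied, SHARED_SECRET)
    ```

    Combined with source filtering at the ingress (only Cloudflare's published IP ranges allowed through), this ensures all traffic reaching the application has passed through Cloudflare's DDoS protections.
    
    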


    Incident Timeline

    19 September - 16:40 UTC / 11:40 CDT

    We started to receive elevated levels of ingress traffic on the affected shared ingress system. Our system handled this scenario, scaling to account for the additional traffic. Traffic remained elevated but manageable until 12:50pm CDT, when it settled back to a nominal level.

    19 September - 18:10 UTC / 13:10 CDT

    A new flood of traffic came into the affected ingress system. This time our system began to drop packets in the affected ingress system. Our team increased the resources we were dedicating to ingress and worked to identify the patterns and sources of the traffic. After these changes, our system was delivering traffic to customer applications again, but the threat was still active.

    19 September - 18:40 UTC / 13:40 CDT

    With the additional resources we had added, customer traffic was healthy until 1:40pm CDT. At this time a few of our ingress servers restarted, but the large majority remained online and healthy. We increased our resource limits further at this point to help with the situation.

    19 September - 18:50 UTC / 13:50 CDT

    At this time we were able to uniquely identify the traffic pattern used in the attack. We were able to isolate this traffic away from the other applications on the affected ingress system. We added some additional identification systems to help us block/reject the traffic further upstream. We began to block the traffic at our firewall.


    Our team continued to monitor the traffic and our system’s response.

    19 September - 21:30 UTC / 16:30 CDT

    We considered the incident to be resolved. We were able to block the attack at our firewall and had implemented several measures to identify and block similar attacks in the future.


    After resolution

    We continued to implement measures, alerts, and strategies to prevent the problem as well as to improve our response and recovery times.



  • Resolved

    This incident has been resolved and we will continue to monitor.

  • Monitoring

    We have applied mitigations. Our systems have recovered. We are actively monitoring the situation.

  • Identified

    GCP us-central region. The increase in traffic started at approximately 1:10 PM Central.

    Request delivery is intermittent, but we have put in some fixes to relieve the system.


    We are monitoring and continuing to investigate the performance.

Aug 2025

No notices reported this month
