Service partially disrupted
Incident Report for Adjust
Postmortem

On June 19, 2019 at 14:00 UTC, an update to our firewall configuration, performed as part of our regular maintenance, revealed a previously undiscovered bug in the configuration files that had not been flagged by our automated testing environment. This triggered a cascading effect in our core network.

While the issue was recognized immediately and the update was rolled back, we discovered that the backup configuration contained the same bug. This made a manual override of all configurations necessary, which took around 10 minutes. Systems started recovering at 14:15 UTC, and tracking systems were fully back online at 14:20 UTC.

All incoming traffic was back to normal at 15:00 UTC.

As a result, we have significantly expanded our automated testing environment, revised our deployment procedures, and are auditing all of our backup configurations.

We sincerely apologize for the disruption.

Posted Jun 20, 2019 - 10:39 UTC

Resolved
All services are restored.
Posted Jun 19, 2019 - 15:08 UTC
Update
The Adjust Dashboard is fully operational, as well as the LAX/AMS data centers. The FRA data center is still being monitored.
Posted Jun 19, 2019 - 15:02 UTC
Monitoring
We are currently monitoring partial disruptions across all components.
Posted Jun 19, 2019 - 14:26 UTC
This incident affected: Engagement Redirect Endpoint, SDK Endpoint, Dashboard, KPI Service, Server to Server Endpoint, Raw Data Export Service, and Uploading Service.