Incidents | Pyto
Incidents reported on status page for Pyto
https://status.pyto.com/

Internal rate limiter blocking call triggering under high concurrency
Fri, 27 Feb 2026 14:23:00 -0000
https://status.pyto.com/incident/835251

We identified an issue that prevented calls from being triggered when a large volume was submitted within a short period of time. In such cases, an internal rate-limiting mechanism activated and blocked the calls from being triggered.

As a workaround, we provide a dedicated batch webhook endpoint that allows a large number of calls to be triggered at once. Clients should use this endpoint instead of the individual call-triggering endpoint to avoid hitting rate limits.

To address this more robustly, we are developing a new gateway infrastructure designed specifically for call triggering. This gateway will support higher rate limits and forward calls to downstream systems in a controlled manner, allowing clients to submit high volumes of calls while preserving downstream rate constraints. This work is currently in progress; in the meantime, clients are advised to use the dedicated batch webhook endpoint.

Misconfiguration of a TTL resulting in a small subset of calls silently crashing
Fri, 27 Feb 2026 14:21:00 -0000
https://status.pyto.com/incident/835244

Until 2026-02-24 19:15:00 UTC, a misconfigured TTL caused a subset of calls to crash silently. This occurred only when the system was under high load, which is why it affected some calls but not all. By “silent”, we mean that the calls did crash but were not marked as such; they were effectively “lost” instead of being sent back to our clients’ systems as crashed calls.
As soon as the issue and its root cause were identified, all affected calls were manually resent to our clients’ systems with the correct “crashed” status, so no data loss occurred. However, there was a longer-than-expected delay between the time a call was triggered and the time it was returned to the client’s system.

Once the issue was identified, we immediately released a patch that increased the faulty TTL value, resolving the issue even under high load. Additionally, we plan to revise the way audio streams are initialized after a call is created, to ensure that even a very short TTL cannot block the system as it did during this incident.

Hard bounces were not handled correctly, resulting in calls being marked as crashed instead of bounced
Fri, 27 Feb 2026 14:11:00 -0000
https://status.pyto.com/incident/835235

From 2026-02-13 01:00:00 UTC to 2026-02-20 21:30:00 UTC, calls that hard bounced were incorrectly marked (and therefore routed back) as crashed instead of bounced. This impacted a small subset of calls, as hard bounces are very rare. A hard bounce, unlike a soft bounce, occurs when a call cannot connect to the destination phone number at all; a soft bounce occurs when the network can reach the phone number, but the recipient is unavailable (e.g., the call goes to voicemail).

Because these calls were still sent back to clients (as crashed), no data loss or operational impact is expected. This was strictly a labeling and classification issue. To prevent similar issues in the future, we will strengthen our unit and end-to-end testing infrastructure and improve our monitoring to detect this type of issue much more quickly.
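The workaround recommended in the first incident above, triggering many calls through a single batch request rather than one request per call, can be sketched as follows. Note that Pyto's actual API is not quoted in these reports, so the endpoint URL, payload shape, and batch size limit below are hypothetical placeholders, not the real interface.

```python
# Minimal sketch of batch call triggering, under assumed API details.
# BATCH_ENDPOINT, MAX_BATCH_SIZE, and the {"calls": [...]} payload shape
# are illustrative assumptions, not Pyto's documented contract.
import json
from urllib import request

BATCH_ENDPOINT = "https://api.pyto.example/v1/calls/batch"  # hypothetical URL
MAX_BATCH_SIZE = 500  # hypothetical per-request limit

def chunk_calls(calls, size=MAX_BATCH_SIZE):
    """Split a large list of call requests into batch-sized chunks."""
    return [calls[i:i + size] for i in range(0, len(calls), size)]

def trigger_calls_in_batches(calls, send=None):
    """Submit all calls using one POST per chunk, instead of one POST
    per call, so high volumes do not trip the per-request rate limiter.

    `send` is injectable for testing; by default it performs the HTTP POST.
    """
    if send is None:
        def send(batch):
            req = request.Request(
                BATCH_ENDPOINT,
                data=json.dumps({"calls": batch}).encode(),
                headers={"Content-Type": "application/json"},
            )
            # One network round trip covers the whole batch.
            return request.urlopen(req).status
    return [send(batch) for batch in chunk_calls(calls)]
```

The point of the sketch is the shape of the traffic: 1,201 calls become three requests (500 + 500 + 201) instead of 1,201, which is what keeps high-concurrency submissions under the limiter until the new gateway ships.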