segment_analytics 1.1.11: UI jank / apparent freezes on track #199

@angshuman-acko

Description

Summary

After upgrading segment_analytics from 1.1.10 → 1.1.11, our Flutter app sees severe UI jank / long stalls when Segment track runs on hot paths (e.g. from a Dio onError interceptor). Pinning 1.1.10 removes the problem.

A diff between the two releases shows lib/utils/store/io.dart is essentially the only behavioral change in lib/ between 1.1.10 and 1.1.11, so the regression is very likely tied to the new queue persistence / recovery logic there.

Environment

Package: segment_analytics
Good: 1.1.10
Bad: 1.1.11
Flutter: 3.41.6
Platforms observed: iOS (iPhone 15, iOS 26.3.2); the issue does not occur on Android.

What we’re doing when it happens

Analytics client initialized with default / typical options (storageJson enabled).
High-frequency or latency-sensitive track calls (e.g. after failed HTTP requests from an interceptor).
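For context, the call pattern looks roughly like this (a simplified sketch of our interceptor, not the exact app code; `analytics` stands in for our client instance and the event name and properties are illustrative):

```dart
class AnalyticsErrorInterceptor extends Interceptor {
  @override
  void onError(DioException err, ErrorInterceptorHandler handler) {
    // Fire-and-forget track on a hot path. Under 1.1.11 these calls
    // correlate with main-isolate stalls; under 1.1.10 they do not.
    analytics.track('http_request_failed', properties: {
      'url': err.requestOptions.uri.toString(),
      'type': err.type.name,
    });
    handler.next(err);
  }
}
```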

Symptom: main thread feels blocked; downgrading to 1.1.10 restores smooth behavior with the same app code.
Expected behavior

track should not cause disproportionately more main-isolate work than it did in 1.1.10: persistence should remain bounded and avoid long bursts of CPU and synchronous file I/O during normal operation.

Actual behavior (1.1.11)

Noticeable stalls / jank correlated with Segment persistence; 1.1.10 does not exhibit the same severity.

Analysis (1.1.10 vs 1.1.11)

Comparing lib/utils/store/io.dart between 1.1.10 and 1.1.11:

Per-write overhead for the queue file
1.1.11 adds special handling for the queue file key (queue_flushing_plugin): it builds a copy of the queue list and does extra json.encode / utf8.encode work on the path to disk. This adds cost to every queue persist relative to 1.1.10, which used a simpler single-pass write.

Large-queue / large-payload path
When the serialized size exceeds _kMaxQueueBytes (~512 KB), 1.1.11 runs trimming (_trimQueueToTargetMap) that repeatedly json.encodes the full queue, which is CPU-heavy on the isolate that triggered persistence.
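To make the cost concrete, the trim path appears to have roughly this shape (our paraphrase of the 1.1.11 code, not the literal source; the drop policy shown is an assumption):

```dart
// Paraphrased sketch of the suspected trim loop. The key cost is that the
// ENTIRE queue is re-encoded on every iteration, so trimming a large queue
// can approach quadratic work in the number of queued events.
var encoded = json.encode(queueMap);
while (utf8.encode(encoded).length > _kMaxQueueBytes && queue.isNotEmpty) {
  queue.removeAt(0);               // which events are dropped is our guess
  encoded = json.encode(queueMap); // full re-encode of the whole queue
}
```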

FileSystemException recovery (while (true))
On write failure for the queue file, 1.1.11 enters a loop that re-encodes, re-locks, and re-writes, dropping events until the write succeeds or the queue is empty. That can mean many synchronous lockSync / writeFromSync / truncateSync cycles within a single persistence operation, a strong candidate for the long freezes observed when the disk is stressed or writes keep failing.
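As we read it, the recovery path is shaped roughly like this (paraphrased pseudocode, not the literal library source; which events get dropped first is our guess):

```dart
// Each failed iteration redoes a full-queue encode plus synchronous file
// I/O on the calling isolate, so a persistently failing write can pin
// that isolate for the duration of many lock/write/truncate cycles.
while (true) {
  final bytes = utf8.encode(json.encode(queueMap)); // full re-encode per pass
  try {
    raf.lockSync();
    raf.writeFromSync(bytes);
    raf.truncateSync(bytes.length);
    raf.unlockSync();
    break; // write succeeded
  } on FileSystemException {
    if (queue.isEmpty) break;
    queue.removeAt(0); // drop an event and retry (drop order assumed)
  }
}
```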

1.1.10 did not implement this recovery/trim loop; it used a straightforward write path.
