feat: archive stale convo volatile entries to extra_data #77

Open
Bilb wants to merge 5 commits into session-foundation:dev from Bilb:feat/archive-volatile-stale-entries

Conversation

@Bilb (Collaborator) commented Feb 18, 2026

No description provided.

Comment on lines +525 to +526
arch.insert_or_assign(key, c);
_needs_dump = true;
@Bilb (Collaborator, Author):

I think we'd want something like this instead:

        auto [it, inserted] = arch.try_emplace(key, c);
        // avoid marking `_needs_dump` as true when the change didn't do anything
        if (inserted) {
            _needs_dump = true;
        } else if (it->second != c) {
            it->second = c;
            _needs_dump = true;
        }

But I need to write the comparison operator for it first.
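A defaulted comparison would probably be enough. A minimal sketch, assuming C++20 and that the archived fields are only what this PR touches (member names besides `pro_expiry_unix_ts` are illustrative):

#include <chrono>
#include <string>

namespace convo {
struct one_to_one {
    std::string session_id;  // hex session id, as constructed from the key elsewhere
    std::chrono::sys_time<std::chrono::milliseconds> pro_expiry_unix_ts{};
    // ... any other archived fields would have to be listed here too

    // Memberwise equality; in C++20 this also gives us `!=`, so the
    // `it->second != c` check above works as written.
    friend bool operator==(const one_to_one&, const one_to_one&) = default;
};
}  // namespace convo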

Comment on lines +843 to +851
enum class ArchPhase : uint8_t {
    s_1to1 = 0,
    s_legacy = 1,
    s_blinded = 2,
    s_group = 3,
    s_comm = 4,
    done = 5
};
ArchPhase _arch_section = ArchPhase::done;
@Bilb (Collaborator, Author):

Not a fan of this and of how it's used in `archive_stale`, but I couldn't find a better option.

Comment on lines +744 to +746
info.erase(); // remove from active dict if present
_arch_comm[c.base_url()].insert_or_assign(c.room_norm(), c);
_needs_dump = true;
@Bilb (Collaborator, Author):

Same comment as above: we should avoid marking `_needs_dump` when nothing has changed.
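Concretely, the same `try_emplace` pattern from the first comment could apply here; a sketch, assuming the comparison operator mentioned above also exists for the community type:

info.erase();  // remove from active dict if present
auto& rooms = _arch_comm[c.base_url()];
auto [it, inserted] = rooms.try_emplace(c.room_norm(), c);
if (inserted) {
    _needs_dump = true;
} else if (it->second != c) {  // needs operator== on the community type
    it->second = c;
    _needs_dump = true;
}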

Comment on lines +599 to +613
void ConvoInfoVolatile::load_extra_data(oxenc::bt_dict_consumer&& extra) {
    // "1": one_to_one — skip if already in active config (handles re-activation)
    if (extra.skip_until("1")) {
        auto section = extra.consume_dict_consumer();
        while (!section.is_finished()) {
            auto [key, val] = section.next_dict_consumer();
            if (key.size() != 33)
                continue;
            if (data["1"][std::string{key}].dict())
                continue;
            convo::one_to_one c{oxenc::to_hex(key)};
            if (val.skip_until("e")) {
                c.pro_expiry_unix_ts = std::chrono::sys_time<std::chrono::milliseconds>(
                        std::chrono::milliseconds(val.consume_integer<int64_t>()));
            }
@Bilb (Collaborator, Author) commented Feb 19, 2026:

This whole function needs some cleanup. I'll check how the other wrappers read the extra data, and whether there is a better/nicer way.
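One possible shape for the cleanup: factor the repeated skip_until/iterate boilerplate into a helper so each typed section only supplies its key and a loader lambda. A hypothetical sketch reusing the same oxenc calls as above (the helper name and signature are mine, not an existing API):

// Hypothetical helper: runs load(key, val) for each entry of the typed
// section stored under section_key, if that section is present at all.
template <typename Load>
static void for_each_archived(
        oxenc::bt_dict_consumer& extra, std::string_view section_key, Load&& load) {
    if (!extra.skip_until(section_key))
        return;
    auto section = extra.consume_dict_consumer();
    while (!section.is_finished()) {
        auto [key, val] = section.next_dict_consumer();
        load(key, val);
    }
}

load_extra_data would then shrink to one call per section, e.g. `for_each_archived(extra, "1", [&](std::string_view key, auto val) { ... });`.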

Comment on lines +1030 to +1037
// Dict phase exhausted — scan typed archive sections in extra_data key order
// ("1" < "C" < "b" < "g" < "o").
while (_arch_section != ArchPhase::done) {
    switch (_arch_section) {
        case ArchPhase::s_1to1:
            if (_arch_1to1 && _arch_1to1_it != _arch_1to1->end()) {
                _val = std::make_shared<convo::any>(_arch_1to1_it->second);
                ++_arch_1to1_it;
@Bilb (Collaborator, Author):

As said above, not a fan of this `ArchPhase`; there's got to be a nicer way of doing this.
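One alternative sketch: replace the enum/switch with a flat list of per-section steppers walked in the same key order. The std::function plumbing and names here are illustrative, not a concrete proposal, and communities would still need their nested room walk:

// Each stepper yields the next entry from its section into _val, or
// returns false once that section is exhausted.
template <typename Map, typename It>
bool step(const Map* map, It& it) {
    if (!map || it == map->end())
        return false;
    _val = std::make_shared<convo::any>(it->second);
    ++it;
    return true;
}

std::array<std::function<bool()>, 4> _arch_steps{
        [this] { return step(_arch_1to1, _arch_1to1_it); },
        [this] { return step(_arch_legacy, _arch_legacy_it); },
        [this] { return step(_arch_blinded, _arch_blinded_it); },
        [this] { return step(_arch_group, _arch_group_it); }};
size_t _arch_step = 0;

// Advancing is then just:
//     while (_arch_step < _arch_steps.size() && !_arch_steps[_arch_step]())
//         ++_arch_step;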

const ConvoInfoVolatile::arch_legacy_map_t* arch_legacy,
const ConvoInfoVolatile::arch_blinded_map_t* arch_blinded,
const ConvoInfoVolatile::arch_group_map_t* arch_group,
const ConvoInfoVolatile::arch_comm_map_t* arch_comm) {
@Bilb (Collaborator, Author):

This is kind of getting out of hand; maybe we should keep those iterators separate?
We already have `begin_archived`, but it's reusing this one.
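A rough sketch of that split (assuming an arch_1to1_map_t alias exists alongside the four shown here; all names illustrative):

// Dedicated iterator over archived entries only; begin_archived() would
// construct this directly instead of piggybacking on the main iterator,
// which then keeps only the active-dict state.
class archived_iterator {
  public:
    archived_iterator(
            const ConvoInfoVolatile::arch_1to1_map_t* arch_1to1,
            const ConvoInfoVolatile::arch_legacy_map_t* arch_legacy,
            const ConvoInfoVolatile::arch_blinded_map_t* arch_blinded,
            const ConvoInfoVolatile::arch_group_map_t* arch_group,
            const ConvoInfoVolatile::arch_comm_map_t* arch_comm);

    const convo::any& operator*() const;
    archived_iterator& operator++();
    bool operator==(const archived_iterator& other) const;
};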

@Bilb force-pushed the feat/archive-volatile-stale-entries branch from 64cc9a1 to 128ec7a on February 19, 2026 at 12:01