We can possibly avoid rolling back chunk files if the size of the target dataset's append dimension is one.
In this case, the override in `zappend/zappend/rollbackstore.py` (lines 53 to 59 at commit `c228788`):

```python
def __setitem__(self, key: str, value: bytes):
    old_value = self._store.get(key)
    self._store[key] = value
    if old_value is not None:
        self._rollback_cb("replace_file", key, old_value)
    else:
        self._rollback_cb("delete_file", key, None)
```
could simply fall back to `super().__setitem__(key, value)`, because we know that no chunk for `key` can exist yet.
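The optimization could look like the following sketch. Note this is not the actual zappend implementation: the `append_dim_size` constructor argument, the dict-backed store, and the direct store assignment are assumptions for illustration only.

```python
from collections.abc import MutableMapping


class RollbackStore(MutableMapping):
    """Sketch of a store that records rollback actions for each write."""

    def __init__(self, store, rollback_cb, append_dim_size=None):
        self._store = store
        self._rollback_cb = rollback_cb
        # Hypothetical: size of the target dataset's append dimension,
        # if the caller knows it.
        self._append_dim_size = append_dim_size

    def __setitem__(self, key: str, value: bytes):
        if self._append_dim_size == 1:
            # No chunk for `key` can exist yet, so skip the potentially
            # expensive lookup and record a plain delete for rollback.
            self._store[key] = value
            self._rollback_cb("delete_file", key, None)
            return
        old_value = self._store.get(key)
        self._store[key] = value
        if old_value is not None:
            self._rollback_cb("replace_file", key, old_value)
        else:
            self._rollback_cb("delete_file", key, None)

    # Remaining mapping methods simply delegate to the wrapped store.
    def __getitem__(self, key):
        return self._store[key]

    def __delitem__(self, key):
        del self._store[key]

    def __iter__(self):
        return iter(self._store)

    def __len__(self):
        return len(self._store)
```

With `append_dim_size == 1`, a write records a `delete_file` rollback action without ever calling `self._store.get(key)`.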
A first step should be to verify that the statement `old_value = self._store.get(key)` actually performs a filesystem operation, e.g., a `stat` call for a local POSIX filesystem. If it does not, this optimization would not be applicable.