Conversation

Force-pushed from d1684f5 to 371e9a0
bit_depth_image: AttrR[int]
compression: AttrRW[str]
trigger_mode: AttrR[str]
nimages: AttrR[int]
May be better to get this out of OD.FP.frames?
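A hedged one-liner of what that might look like (the OD.FP.frames path comes from this comment; the .get() style follows snippets elsewhere in this thread):

# Hypothetical: read the frame count from the OD controller on demand,
# rather than mirroring it as a separate top-level attribute.
nimages = self.OD.FP.frames.get()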
GDYendell left a comment:
This looks pretty good! Something I should have mentioned is that we need to create multiple virtual datasets. This should be easy enough; it will just need a bit of re-ordering to avoid repeating the calculations.
I think there should be a for loop over dataset_names (now passed in) inside the with File block, where all creation of v_source and v_layout is done, with pre-computation of frame layouts done beforehand if possible.
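A minimal sketch of that structure, assuming h5py's virtual dataset API, block_size 1, and illustrative names and shapes (create_interleave_vds and the dataset list come from this thread; the writer-file naming and frame-count arithmetic are assumptions):

from pathlib import Path

import h5py
import numpy as np


def create_interleave_vds(
    path: Path,
    prefix: str,
    datasets: list[str],
    frames: int,
    n_writers: int,
    frame_shape: tuple[int, int],
    dtype=np.uint16,
) -> None:
    # Pre-compute the per-writer frame counts once; they are identical for
    # every dataset, so they stay outside the loop.
    counts = [
        frames // n_writers + (1 if i < frames % n_writers else 0)
        for i in range(n_writers)
    ]
    with h5py.File(path / f"{prefix}_vds.h5", "w", libver="latest") as f:
        for dataset in datasets:
            layout = h5py.VirtualLayout(shape=(frames, *frame_shape), dtype=dtype)
            for i, count in enumerate(counts):
                source = h5py.VirtualSource(
                    str(path / f"{prefix}_{i + 1:06d}.h5"),
                    dataset,
                    shape=(count, *frame_shape),
                )
                # Writer i holds frames i, i + n_writers, i + 2 * n_writers, ...
                layout[i::n_writers] = source
            f.create_virtual_dataset(dataset, layout)

This mirrors the block_size 1 mappings shown in the h5dump output further down; larger block sizes would change the stride arithmetic.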
Where can I get the dataset names from? I couldn't find them anywhere in the Phoebus screens, but I probably missed them.
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

@@            Coverage Diff              @@
##   top-level-attributes     #86    +/-  ##
============================================
+ Coverage        82.00%   83.72%   +1.72%
============================================
  Files               14       15       +1
  Lines              450      510      +60
============================================
+ Hits               369      427      +58
- Misses              81       83       +2
Force-pushed from 9e7fd1a to 7672841
This looks really good! I have rebased this on the latest stream2 branch so that we can try it on i13-1 today.
I can't reproduce the divide-by-zero error, and it does look like it is handled.
The problem with there being one too few frames confuses me; the mappings look fine. A 10-frame acquisition:
❯ h5dump -Hp test_vds.h5
HDF5 "test_vds.h5" {
GROUP "/" {
DATASET "data" {
DATATYPE H5T_STD_U16LE
DATASPACE SIMPLE { ( 10, 4148, 4362 ) / ( 10, 4148, 4362 ) }
STORAGE_LAYOUT {
MAPPING 0 {
VIRTUAL {
SELECTION REGULAR_HYPERSLAB {
START (0,0,0)
STRIDE (4,1,1)
COUNT (3,1,1)
BLOCK (1,4148,4362)
}
}
SOURCE {
FILE "/tmp/test_000001.h5"
DATASET "data"
SELECTION REGULAR_HYPERSLAB {
START (0,0,0)
STRIDE (1,1,1)
COUNT (1,1,1)
BLOCK (3,4148,4362)
}
}
}
MAPPING 1 {
VIRTUAL {
SELECTION REGULAR_HYPERSLAB {
START (1,0,0)
STRIDE (4,1,1)
COUNT (3,1,1)
BLOCK (1,4148,4362)
}
}
SOURCE {
FILE "/tmp/test_000002.h5"
DATASET "data"
SELECTION REGULAR_HYPERSLAB {
START (0,0,0)
STRIDE (1,1,1)
COUNT (1,1,1)
BLOCK (3,4148,4362)
}
}
}
MAPPING 2 {
VIRTUAL {
SELECTION REGULAR_HYPERSLAB {
START (2,0,0)
STRIDE (4,1,1)
COUNT (2,1,1)
BLOCK (1,4148,4362)
}
}
SOURCE {
FILE "/tmp/test_000003.h5"
DATASET "data"
SELECTION REGULAR_HYPERSLAB {
START (0,0,0)
STRIDE (1,1,1)
COUNT (1,1,1)
BLOCK (2,4148,4362)
}
}
}
MAPPING 3 {
VIRTUAL {
SELECTION REGULAR_HYPERSLAB {
START (3,0,0)
STRIDE (4,1,1)
COUNT (2,1,1)
BLOCK (1,4148,4362)
}
}
SOURCE {
FILE "/tmp/test_000004.h5"
DATASET "data"
SELECTION REGULAR_HYPERSLAB {
START (0,0,0)
STRIDE (1,1,1)
COUNT (1,1,1)
BLOCK (2,4148,4362)
}
}
}
}
FILLVALUE {
FILL_TIME H5D_FILL_TIME_IFSET
VALUE H5D_FILL_VALUE_DEFAULT
}
}
}
}
But if I try to read it fails:
>>> import h5py
>>> f = h5py.File("/tmp/test_vds.h5", "r")
>>> f["data"][0]
array([[2, 1, 1, ..., 2, 3, 2],
[1, 1, 3, ..., 3, 2, 3],
[2, 1, 2, ..., 4, 3, 5],
...,
[1, 1, 1, ..., 2, 1, 3],
[1, 3, 5, ..., 5, 1, 3],
[3, 3, 2, ..., 3, 3, 2]], dtype=uint16)
>>> f["data"][1]
array([[2, 1, 1, ..., 2, 3, 2],
[1, 1, 3, ..., 3, 2, 3],
[2, 1, 2, ..., 4, 3, 5],
...,
[1, 1, 1, ..., 2, 1, 3],
[1, 3, 5, ..., 5, 1, 3],
[3, 3, 2, ..., 3, 3, 2]], dtype=uint16)
>>> f["data"][2]
array([[2, 1, 1, ..., 2, 3, 2],
[1, 1, 3, ..., 3, 2, 3],
[2, 1, 2, ..., 4, 3, 5],
...,
[1, 1, 1, ..., 2, 1, 3],
[1, 3, 5, ..., 5, 1, 3],
[3, 3, 2, ..., 3, 3, 2]], dtype=uint16)
>>> f["data"][3]
array([[2, 1, 1, ..., 2, 3, 2],
[1, 1, 3, ..., 3, 2, 3],
[2, 1, 2, ..., 4, 3, 5],
...,
[1, 1, 1, ..., 2, 1, 3],
[1, 3, 5, ..., 5, 1, 3],
[3, 3, 2, ..., 3, 3, 2]], dtype=uint16)
>>> f["data"][4]
array([[2, 1, 1, ..., 2, 3, 2],
[1, 1, 3, ..., 3, 2, 3],
[2, 1, 2, ..., 4, 3, 5],
...,
[1, 1, 1, ..., 2, 1, 3],
[1, 3, 5, ..., 5, 1, 3],
[3, 3, 2, ..., 3, 3, 2]], dtype=uint16)
>>> f["data"][5]
array([[2, 1, 1, ..., 2, 3, 2],
[1, 1, 3, ..., 3, 2, 3],
[2, 1, 2, ..., 4, 3, 5],
...,
[1, 1, 1, ..., 2, 1, 3],
[1, 3, 5, ..., 5, 1, 3],
[3, 3, 2, ..., 3, 3, 2]], dtype=uint16)
>>> f["data"][6]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "/dls_sw/apps/python/miniforge/4.10.0-0/envs/python3.10/lib/python3.10/site-packages/h5py/_hl/dataset.py", line 768, in __getitem__
return self._fast_reader.read(args)
File "h5py/_selector.pyx", line 376, in h5py._selector.Reader.read
OSError: Can't read data (Could not allocate output buffer.)
Am I doing something wrong here?
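As an aside, h5py can report the same mappings directly, which may help cross-check against the h5dump output; a minimal sketch using the file path from the session above:

import h5py

with h5py.File("/tmp/test_vds.h5", "r") as f:
    dset = f["data"]
    print(dset.is_virtual)  # True for a virtual dataset
    for vmap in dset.virtual_sources():
        # Each entry is a VDSmap namedtuple:
        # (vspace, file_name, dset_name, src_space)
        print(vmap.file_name, vmap.dset_name)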
create_interleave_vds(
    path=Path(self.OD.file_path.get()),
    prefix=self.OD.file_prefix.get(),
    datasets=["data1", "data2", "data3"],
I told you the wrong thing. This should actually be:

- datasets=["data1", "data2", "data3"],
+ datasets=["data", "data2", "data3"],
raise TimeoutError("File writers failed to start") from e

if self.enable_vds_creation.get():
    create_interleave_vds(
path, prefix and frames actually get cleared after file writing starts, so unfortunately we need to take local copies of these before the start_writing call.
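A hedged sketch of that ordering (attribute and method names follow snippets in this thread; the await call, the frame-count source, and the argument list are assumptions):

from pathlib import Path

# Take local copies first: these values are cleared once writing starts.
path = Path(self.OD.file_path.get())
prefix = self.OD.file_prefix.get()
frames = self.OD.FP.frames.get()  # assumed source of the frame count

await self.start_writing()  # clears path, prefix and frames on the controller

if self.enable_vds_creation.get():
    create_interleave_vds(
        path=path,
        prefix=prefix,
        datasets=["data", "data2", "data3"],
        frames=frames,  # plus whatever other arguments the real call takes
    )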
No, I don't think you're doing anything wrong. I'm confused why it doesn't work, as the mapping does look correct. What were the block size and blocks per file in this example?
This was with block_size 1 and blocks_per_file 0.
Fixes #85
Requires DiamondLightSource/fastcs-odin#98