ewoksdata.data.hdf5.dataset_writer.StackDatasetWriter

class ewoksdata.data.hdf5.dataset_writer.StackDatasetWriter(parent, name, npoints=None, nstack=None, attrs=None, flush_period=None)[source]

Bases: _DatasetWriterBase

Append same-shaped arrays to the items of a new HDF5 dataset, sequentially per item: each item of the HDF5 dataset is a stack to which data can be appended in order.

Instead of creating and filling a dataset directly with the h5py API, for example (a minimal sketch; shapes, names and sizes are illustrative, not from the original documentation):
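    import h5py
    import numpy

    nstack, npoints = 4, 100  # illustrative sizes

    with h5py.File("result.h5", "w") as h5file:
        # Pre-allocate the full (nstack, npoints, ...) dataset and assign
        # each frame by hand; chunking, compression and flushing are all
        # left to the caller.
        dset = h5file.create_dataset(
            "data", shape=(nstack, npoints, 10, 20), dtype="f8"
        )
        for point_index in range(npoints):
            for stack_index in range(nstack):
                dset[stack_index, point_index] = numpy.random.random((10, 20))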

it can be done like this (again a sketch, assuming the writer is used as a context manager so that the remaining buffer is flushed on exit):
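    import h5py
    import numpy
    from ewoksdata.data.hdf5.dataset_writer import StackDatasetWriter

    nstack, npoints = 4, 100  # illustrative sizes

    with h5py.File("result.h5", "w") as h5file:
        with StackDatasetWriter(
            h5file, "data", npoints=npoints, nstack=nstack
        ) as writer:
            for point_index in range(npoints):
                for stack_index in range(nstack):
                    # Append one frame to the stack at stack_index
                    writer.add_point(numpy.random.random((10, 20)), stack_index)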

Chunk size determination, chunk-aligned writing, compression and flushing are handled automatically.

Parameters:
  • parent (Group)

  • name (str)

  • npoints (Optional[StrictPositiveIntegral])

  • nstack (Optional[StrictPositiveIntegral])

  • attrs (Optional[dict])

  • flush_period (Optional[float])

add_point(data, stack_index)[source]

Append one array to one stack of the dataset.

Parameters:
  • data (ArrayLike)

  • stack_index (int)

Return type:

bool

add_points(data, stack_index)[source]

Append several arrays at once to one stack of the dataset (see the sketch below).

Parameters:
  • data (ArrayLike)

  • stack_index (int)

Return type:

bool
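
For example, a block of frames can be appended to a single stack in one call. This is a minimal sketch; the assumption that the first axis of data enumerates the appended points is illustrative, not taken from the original documentation:

    import h5py
    import numpy
    from ewoksdata.data.hdf5.dataset_writer import StackDatasetWriter

    with h5py.File("result.h5", "w") as h5file:
        with StackDatasetWriter(h5file, "data", npoints=10, nstack=2) as writer:
            # Append five frames at once to the stack at index 0; the first
            # axis of the data block is assumed to enumerate the points.
            writer.add_points(numpy.random.random((5, 10, 20)), 0)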

property dataset: Dataset | None

property dataset_name: str

flush_buffer(align=False)[source]
Parameters:
  • align (bool)

Return type:

bool