ewoksdata.data.hdf5.dataset_writer.DatasetWriter#

class ewoksdata.data.hdf5.dataset_writer.DatasetWriter(parent, name, npoints=None, attrs=None, flush_period=None)[source]#

Bases: _DatasetWriterBase

Sequentially append arrays of the same shape to a new HDF5 dataset.

Instead of creating and filling the dataset directly with the h5py API, arrays can be appended one at a time or in batches, as in the sketch below.

Chunk size determination, chunk-aligned writing, compression and flushing are handled by the writer.
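For illustration, a minimal usage sketch (assuming the writer is used as a context manager so that the buffer is flushed when the block exits; the file and group names are placeholders):

    import h5py
    import numpy

    from ewoksdata.data.hdf5.dataset_writer import DatasetWriter

    with h5py.File("example.h5", "w") as root:
        parent = root.create_group("measurement")

        # Append 2D frames one by one; chunking, compression and flushing
        # are handled by the writer.
        with DatasetWriter(parent, "frames", npoints=10) as writer:
            for _ in range(10):
                writer.add_point(numpy.random.random((64, 64)))

        # The appended frames are expected to be stacked along a new first axis.
        print(root["measurement/frames"].shape)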

Parameters:
  • parent (Group)

  • name (str)

  • npoints (Optional[StrictPositiveIntegral])

  • attrs (Optional[dict])

  • flush_period (Optional[float])

add_point(data)[source]#

Append one array to the dataset.

Parameters:

data (ArrayLike)

Return type:

bool

add_points(data)[source]#

Append several arrays at once to the dataset.

Parameters:

data (ArrayLike)

Return type:

bool
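A hedged sketch of batch appending (assuming the first axis of the passed array enumerates the individual points; the file and group names are placeholders):

    import h5py
    import numpy

    from ewoksdata.data.hdf5.dataset_writer import DatasetWriter

    with h5py.File("batches.h5", "w") as root:
        parent = root.create_group("measurement")

        with DatasetWriter(parent, "stack") as writer:
            # Five frames of shape (64, 64) appended in one call.
            writer.add_points(numpy.random.random((5, 64, 64)))
            # Further batches are assumed to need the same per-point shape.
            writer.add_points(numpy.random.random((3, 64, 64)))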

property dataset: Dataset | None#

property dataset_name: str#

flush_buffer(align=False)[source]#

Parameters:

align (bool)

Return type:

bool

property npoints_added: int#
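For completeness, a hypothetical sketch that combines explicit calls to flush_buffer() with the introspection properties; flush_period is assumed to trigger time-based flushing on its own, so the explicit flushes here are only illustrative:

    import h5py
    import numpy

    from ewoksdata.data.hdf5.dataset_writer import DatasetWriter

    with h5py.File("signals.h5", "w") as root:
        parent = root.create_group("measurement")

        with DatasetWriter(parent, "signal", flush_period=1.0) as writer:
            for i in range(100):
                writer.add_point(numpy.array([float(i)]))

                # Push buffered points to the file every 25 points, in addition
                # to the time-based flushing configured above.
                if (i + 1) % 25 == 0:
                    writer.flush_buffer()

            # Introspection: number of points appended so far, the dataset name
            # and the underlying h5py dataset (None before the first write).
            print(writer.npoints_added)
            print(writer.dataset_name)
            if writer.dataset is not None:
                print(writer.dataset.shape)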