qcodes.data.gnuplot_format

class qcodes.data.gnuplot_format.GNUPlotFormat(extension='dat', terminator='\n', separator='\t', comment='# ', number_format='.15g', metadata_file=None)[source]

Bases: qcodes.data.format.Formatter

Saves data in one or more gnuplot-format files. We make one file for each set of matching dependent variables in the loop.

Parameters
  • extension (str) – file extension for data files. Defaults to ‘dat’

  • terminator (str) – newline character(s) to use on write; not used for reading, where any combination of ‘\r’ and ‘\n’ is accepted. Defaults to ‘\n’.

  • separator (str) – field (column) separator, must be whitespace. Only used for writing, we will read with any whitespace separation. Defaults to ‘\t’.

  • comment (str) – lines starting with this string are not data. Comments are written with this full string, and identified on read by just the string after stripping whitespace. Defaults to ‘# ’.

  • number_format (str) – a specifier from the format mini-language describing how to format numeric data into a string. Defaults to ‘.15g’.

  • always_nest (bool) – whether to always make a folder for files, or just make a single data file if all data has the same setpoints.
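For example (an illustrative sketch; the argument values are arbitrary), a formatter that keeps six significant digits and writes .txt files could be constructed as:

from qcodes.data.gnuplot_format import GNUPlotFormat

# the separator must remain whitespace, so we keep the default tab
formatter = GNUPlotFormat(extension='txt', number_format='.6g')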

These files are basically tab-separated values, but any quantity of any whitespace characters is accepted.

Each row represents one setting of the setpoint variable(s): the setpoint variable(s) are in the first column(s), and the measured variable(s) come after.

The data is preceded by comment lines (starting with #). We use three:

  • one for the variable names (array ids)

  • one for the (longer) axis labels, in quotes so a label can contain whitespace

  • one giving, for each independent (setpoint) variable, the (max) number of points in that dimension (this also tells us how many setpoint columns are in this file)

# id1   id2     id3...
# "label1"      "label2"        "label3"...
# 100   250
1       2       3...
2       3       4...

For data with 2 independent (setpoint) variables, gnuplot puts each inner loop into one block, then increments the outer loop in the next block, separated by a blank line.

We extend this to an arbitrary number of independent variables by using one blank line for each loop level that resets. (gnuplot does seem to use 2 blank lines sometimes, to denote a whole new dataset, which roughly corresponds to our situation.)
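As an illustrative sketch of this layout (the array names, labels, and values are made up, and the file is written by hand here rather than through GNUPlotFormat), a 3×4 loop over setpoints x_set (outer) and y_set (inner) could be written as:

import numpy as np

x = np.linspace(0, 1, 3)     # outer setpoint, 3 points
y = np.linspace(0, 10, 4)    # inner setpoint, 4 points
z = np.outer(x, y)           # made-up measured values, shape (3, 4)

with open('example.dat', 'w') as f:
    f.write('# x_set\ty_set\tz\n')
    f.write('# "X"\t"Y"\t"Z"\n')
    f.write('# 3\t4\n')      # (max) points per setpoint dimension
    for i, xv in enumerate(x):
        for j, yv in enumerate(y):
            f.write('{:.15g}\t{:.15g}\t{:.15g}\n'.format(xv, yv, z[i, j]))
        if i < len(x) - 1:
            f.write('\n')    # blank line: the inner loop resets here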

read_one_file(data_set, f, ids_read)[source]

Called by Formatter.read to bring one data file into a DataSet. Setpoint data may be duplicated across multiple files, but each measured DataArray must only map to one file.

Parameters
  • data_set – the DataSet we are reading into

  • f – a file-like object to read from

  • ids_read – a set of array_ids that have already been read. When you read an array, check that it is not in this set (except setpoints, which can appear in several files with different inner loops), then add it to the set so other files know not to read it again.
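To make the column and header conventions concrete, here is a hand-rolled sketch of parsing one such file outside qcodes. It is not GNUPlotFormat.read_one_file, and the function name is hypothetical; it simply follows the format described above (three comment lines, setpoint columns first, blank lines only marking loop resets).

import numpy as np

def parse_gnuplot_file(path, comment='#'):
    ids, labels, shape = None, None, None
    rows = []
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line:
                continue                      # blank lines only mark loop resets
            if line.startswith(comment):
                content = line[len(comment):].strip()
                if ids is None:
                    ids = content.split()     # first comment line: array ids
                elif labels is None:
                    labels = content.split('\t')   # second: quoted labels
                elif shape is None:
                    shape = tuple(int(s) for s in content.split())  # third: sizes
                continue
            rows.append([float(v) for v in line.split()])
    columns = np.array(rows).T                # one row of this matrix per column id
    return ids, labels, shape, columns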

write(data_set: DataSet, io_manager, location, force_write=False, write_metadata=True, only_complete=True, filename=None)[source]

Write updates in this DataSet to storage.

Will choose append if possible, overwrite if not.

Parameters
  • data_set – the data we’re storing

  • io_manager (io_manager) – the base location to write to

  • location (str) – the file location within io_manager

  • only_complete (bool) – passed to match_save_range; answers the question: should we write all available new data, or only complete rows? Used to make sure that everything gets written when the DataSet is finalised, even if some data arrays are strange (e.g. full of NaNs).

  • filename (Optional[str]) – Filename to save to. Will override the usual naming scheme and possibly overwrite files, so use with care. The file will be saved in the normal location.
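A minimal usage sketch, assuming data_set is a populated legacy qcodes DataSet whose io and location attributes are already set (see read below):

from qcodes.data.gnuplot_format import GNUPlotFormat

formatter = GNUPlotFormat()

# incremental save while data is still arriving: append where possible,
# writing only complete rows so the next call can append cleanly
formatter.write(data_set, data_set.io, data_set.location)

# final save once the sweep is done (or aborted): also flush partial rows
formatter.write(data_set, data_set.io, data_set.location,
                only_complete=False)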

write_metadata(data_set: DataSet, io_manager, location, read_first=True, **kwargs)[source]

Write all metadata in this DataSet to storage.

Parameters
  • data_set – the data we’re storing

  • io_manager (io_manager) – the base location to write to

  • location (str) – the file location within io_manager

  • read_first (Optional[bool]) – read previously saved metadata before writing? The current metadata will still be used where there are changes, but if the saved metadata has information not present in the current metadata, it will be retained. Default True.
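A hedged sketch of the call (data_set assumed as in the write example above): save the current metadata, merging in any keys that exist only in the previously saved copy.

from qcodes.data.gnuplot_format import GNUPlotFormat

GNUPlotFormat().write_metadata(data_set, data_set.io, data_set.location,
                               read_first=True)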

read_metadata(data_set)[source]

Read this DataSet’s metadata from storage.

Subclasses must override this method.

Parameters

data_set – the data to read metadata into

class ArrayGroup(shape, set_arrays, data, name)

Bases: tuple

Create new instance of ArrayGroup(shape, set_arrays, data, name)

count(value, /)

Return number of occurrences of value.

property data

Alias for field number 2

index(value, start=0, stop=9223372036854775807, /)

Return first index of value.

Raises ValueError if the value is not present.

property name

Alias for field number 3

property set_arrays

Alias for field number 1

property shape

Alias for field number 0

group_arrays(arrays)

Find the sets of arrays which share all the same setpoint arrays.

Some Formatters use this grouping to determine which arrays to save together in one file.

Parameters

arrays (Dict[DataArray]) – all the arrays in a DataSet

Returns

namedtuples giving:

  • shape (Tuple[int]): dimensions as in numpy

  • set_arrays (Tuple[DataArray]): the setpoints of this group

  • data (Tuple[DataArray]): measured arrays in this group

  • name (str): a unique name of this group, obtained by joining the setpoint array ids.

Return type

List[Formatter.ArrayGroup]
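A hedged usage sketch, showing how a DataSet’s arrays would be split across files (data_set is assumed to be a legacy qcodes DataSet; one file is written per group):

from qcodes.data.gnuplot_format import GNUPlotFormat

for group in GNUPlotFormat().group_arrays(data_set.arrays):
    print(group.name, group.shape,
          [arr.array_id for arr in group.data])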

match_save_range(group, file_exists, only_complete=True)

Find the save range that will join all changes in an array group.

Matches all full-sized arrays: the data arrays plus the inner loop setpoint array.

Note: if an outer loop has changed values (without the inner loop or measured data changing) we won’t notice it here. We assume that before an iteration of the inner loop starts, the outer loop setpoint gets set and then does not change later.

Parameters
  • group (Formatter.ArrayGroup) – a namedtuple containing the arrays that go together in one file, as tuple group.data.

  • file_exists (bool) – Does this file already exist? If True, and all arrays in the group agree on last_saved_index, we assume the file has been written up to this index and we can append to it. Otherwise we will set the returned range to start from zero (so if the file does exist, it gets completely overwritten).

  • only_complete (bool) – Should we write all available new data, or only complete rows? If True, we write only the range of array indices which all arrays in the group list as modified, so that future writes will be able to do a clean append to the data file as more data arrives. Default True.

Returns

the first and last raveled indices that should be saved. Returns None if:
  • no data is present

  • no new data can be found

Return type

Tuple[int, int]
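A simplified sketch of the idea (not the actual implementation; the function name and the plain (last_saved_index, last_modified_index) tuples standing in for real DataArrays are hypothetical): when the file exists and all arrays agree on the last saved index, the range appends after that point, and with only_complete=True it stops at the smallest modified index so only complete rows are written.

def sketch_save_range(arrays, file_exists, only_complete=True):
    saved = {last_saved for last_saved, _ in arrays}
    if file_exists and len(saved) == 1:
        start = saved.pop() + 1        # all arrays agree: append after this
    else:
        start = 0                      # disagreement or no file: rewrite from zero
    modified = [last_mod for _, last_mod in arrays if last_mod is not None]
    if not modified:
        return None                    # no new data to save
    stop = min(modified) if only_complete else max(modified)
    return (start, stop) if stop >= start else None

# e.g. three arrays saved through index 9, modified through 14, 14 and 12:
print(sketch_save_range([(9, 14), (9, 14), (9, 12)], file_exists=True))
# -> (10, 12): only the complete rows are appended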

read(data_set: DataSet) → None

Read the entire DataSet.

Find all files matching data_set.location (using io_manager.list) and call read_one_file on each. Subclasses may either override this method (if they use only one file or want to do their own searching) or override read_one_file to use the search and initialization functionality defined here.

Parameters

data_set – the data to read into. Should already have attributes io (an io manager), location (string), and arrays (dict of {array_id: array}; it can be empty or can already have some or all of the arrays present, and those expect to be overwritten)
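A hedged sketch of reloading a previously written dataset, assuming the legacy qcodes load_data helper (the location string is hypothetical); load_data constructs the DataSet with io, location and empty arrays, then calls the formatter’s read:

from qcodes.data.data_set import load_data
from qcodes.data.gnuplot_format import GNUPlotFormat

data = load_data('data/2023-01-01/#001_sweep', formatter=GNUPlotFormat())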