qcodes.dataset

The dataset module contains code related to storage and retrieval of data to and from disk.

Classes:

AbstractSweep()

Abstract sweep class that defines an interface for concrete sweep classes.

ArraySweep(param, array[, delay, ...])

Sweep the values of a given array.

ConnectionPlus(sqlite3_connection)

A class to extend the sqlite3.Connection object.

DataSetProtocol(*args, **kwargs)

DataSetType(value)

An enumeration.

InterDependencies_([dependencies, ...])

Object containing a group of ParamSpecs and the information about their internal relations to each other

LinSweep(param, start, stop, num_points[, ...])

Linear sweep.

LogSweep(param, start, stop, num_points[, ...])

Logarithmic sweep.

Measurement([exp, station, name])

Measurement procedure container.

ParamSpec(name, paramtype[, label, unit, ...])

Specification of a parameter: its name, SQL storage type, label, unit, and its relations to other parameters.

ParamSpecTree

alias of dict[ParamSpecBase, tuple[ParamSpecBase, ...]]

RunDescriber(interdeps[, shapes])

The object that holds the description of each run in the database.

SQLiteSettings()

Class that holds the machine's sqlite options.

SequentialParamsCaller(*param_meas)

ThreadPoolParamsCaller(*param_meas[, ...])

Context manager for calling given parameters in a thread pool.

TogetherSweep(*sweeps)

A combination of multiple sweeps that are performed in parallel, such that all parameters in the TogetherSweep are set to the next value before any parameter is read.

DataSetDefinition(name, independent, dependent)

A specification for the creation of a Dataset or Measurement object

LinSweeper(*args, **kwargs)

An iterable version of the LinSweep class

Exceptions:

BreakConditionInterrupt

Functions:

call_params_threaded(param_meas)

Function to create threads per instrument for the given set of measurement parameters.

connect(name[, debug, version])

Connect to or create a database.

datasaver_builder(dataset_definitions, *[, ...])

A utility context manager intended to simplify the creation of datasavers

do0d(*param_meas[, write_period, ...])

Perform a measurement of a single parameter.

do1d(param_set, start, stop, num_points, ...)

Perform a 1D scan of param_set from start to stop in num_points measuring param_meas at each step.

do2d(param_set1, start1, stop1, num_points1, ...)

Perform a 2D scan of param_set1 from start1 to stop1 in num_points1 and param_set2 from start2 to stop2 in num_points2, measuring param_meas at each step.

dond(*params[, write_period, ...])

Perform an n-dimensional scan from the slowest (first) to the fastest (last) axis, measuring m measurement parameters.

dond_into(datasaver, *params[, ...])

A doNd-like utility function that writes gridded data to the supplied DataSaver

experiments([conn])

List all the experiments in the container (database file from config)

extract_runs_into_db(source_db_path, ...[, ...])

Extract a selection of runs into another DB file.

get_data_export_path()

Get the path (from config) to which data is exported at the end of a measurement.

get_default_experiment_id(conn)

Returns the latest created/loaded experiment's exp_id as the default experiment.

get_guids_by_run_spec(*[, captured_run_id, ...])

Get a list of matching guids from one or more pieces of run specification.

guids_from_dbs(db_paths)

Extract all guids from the supplied database paths.

guids_from_dir(basepath)

Recursively find all db files under basepath and extract guids.

guids_from_list_str(s)

Get tuple of guids from a python/json string representation of a list.

import_dat_file(location[, exp])

This imports a legacy QCoDeS qcodes.data.data_set.DataSet into the database.

initialise_database([journal_mode])

Initialise a database in the location specified by the config object and set atomic commit and rollback mode of the db.

initialise_or_create_database_at(...[, ...])

This function sets up QCoDeS to refer to the given database file.

initialised_database_at(db_file_with_abs_path)

Initializes or creates a database and restores the 'db_location' afterwards.

load_by_counter(counter, exp_id[, conn])

Load a dataset given its counter in a given experiment

load_by_guid(guid[, conn])

Load a dataset by its GUID

load_by_id(run_id[, conn])

Load a dataset by run id

load_by_run_spec(*[, captured_run_id, ...])

Load a run from one or more pieces of run specification.

load_experiment(exp_id[, conn])

Load experiment with the specified id (from database file from config)

load_experiment_by_name(name[, sample, ...])

Try to load experiment with the specified name.

load_from_netcdf(path[, path_to_db])

Create an in-memory dataset from a netCDF file.

load_last_experiment()

Load last experiment (from database file from config)

load_or_create_experiment(experiment_name[, ...])

Find and return an experiment with the given name and sample name, or create one if not found.

new_data_set(name[, exp_id, specs, values, ...])

Create a new dataset in the currently active/selected database.

new_experiment(name, sample_name[, ...])

Create a new experiment (in the database file from config)

plot_by_id(run_id[, axes, colorbars, ...])

Construct all plots for a given run_id.

plot_dataset(dataset[, axes, colorbars, ...])

Construct all plots for a given dataset

reset_default_experiment_id([conn])

Resets the default experiment id to the last experiment in the db.

rundescriber_from_json(json_str)

Deserialize a JSON string into a RunDescriber of the current version

class qcodes.dataset.AbstractSweep[source]

Bases: ABC, Generic[T]

Abstract sweep class that defines an interface for concrete sweep classes.

Methods:

get_setpoints()

Returns an array of setpoint values for this sweep.

Attributes:

param

Returns the Qcodes sweep parameter.

delay

Delay between two consecutive sweep points.

num_points

Number of sweep points.

post_actions

actions to be performed after setting param to its setpoint.

get_after_set

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

abstract get_setpoints() ndarray[Any, dtype[T]][source]

Returns an array of setpoint values for this sweep.

abstract property param: ParameterBase

Returns the Qcodes sweep parameter.

abstract property delay: float

Delay between two consecutive sweep points.

abstract property num_points: int

Number of sweep points.

abstract property post_actions: Sequence[Callable[[], None]]

actions to be performed after setting param to its setpoint.

property get_after_set: bool

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

This defaults to False for backwards compatibility, but subclasses should override it to implement this correctly.

class qcodes.dataset.ArraySweep(param: ParameterBase, array: Sequence[Any] | ndarray[Any, dtype[T]], delay: float = 0, post_actions: Sequence[Callable[[], None]] = (), get_after_set: bool = False)[source]

Bases: AbstractSweep, Generic[T]

Sweep the values of a given array.

Parameters:
  • param – Qcodes parameter for sweep.

  • array – array with values to sweep.

  • delay – Time in seconds between two consecutive sweep points.

  • post_actions – Actions to do after each sweep point.

  • get_after_set – Should we perform a get on the parameter after setting it and store the value returned by get rather than the set value in the dataset.

Methods:

get_setpoints()

Returns an array of setpoint values for this sweep.

Attributes:

param

Returns the Qcodes sweep parameter.

delay

Delay between two consecutive sweep points.

num_points

Number of sweep points.

post_actions

actions to be performed after setting param to its setpoint.

get_after_set

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

get_setpoints() ndarray[Any, dtype[T]][source]

Returns an array of setpoint values for this sweep.

property param: ParameterBase

Returns the Qcodes sweep parameter.

property delay: float

Delay between two consecutive sweep points.

property num_points: int

Number of sweep points.

property post_actions: Sequence[Callable[[], None]]

actions to be performed after setting param to its setpoint.

property get_after_set: bool

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

This defaults to False for backwards compatibility, but subclasses should override it to implement this correctly.

exception qcodes.dataset.BreakConditionInterrupt[source]

Bases: Exception

args
with_traceback()

Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.

class qcodes.dataset.ConnectionPlus(sqlite3_connection: Connection)[source]

Bases: ObjectProxy

A class to extend the sqlite3.Connection object. Since sqlite3.Connection has no __dict__, we cannot add attributes to its instances directly.

It is not allowed to instantiate a new ConnectionPlus object from a ConnectionPlus object.

It is recommended to create a ConnectionPlus using the function connect().

Attributes:

atomic_in_progress

a bool describing whether the connection is currently in the middle of an atomic block of transactions, which allows atomic context managers to be nested

path_to_dbfile

Path to the database file of the connection.

atomic_in_progress: bool = False

a bool describing whether the connection is currently in the middle of an atomic block of transactions, which allows atomic context managers to be nested

path_to_dbfile: str = ''

Path to the database file of the connection.

class qcodes.dataset.DataSetProtocol(*args, **kwargs)[source]

Bases: Protocol

Attributes:

persistent_traits

pristine

running

completed

run_id

captured_run_id

counter

captured_counter

guid

number_of_results

name

exp_name

exp_id

sample_name

run_timestamp_raw

completed_timestamp_raw

snapshot

metadata

path_to_db

paramspecs

description

parent_dataset_links

export_info

cache

dependent_parameters

Methods:

prepare(*, snapshot, interdeps[, shapes, ...])

mark_completed()

run_timestamp([fmt])

completed_timestamp([fmt])

add_snapshot(snapshot[, overwrite])

add_metadata(tag, metadata)

export([export_type, path, prefix, ...])

get_parameter_data(*params[, start, end, ...])

get_parameters()

to_xarray_dataarray_dict(*params[, start, end])

to_xarray_dataset(*params[, start, end])

to_pandas_dataframe_dict(*params[, start, end])

to_pandas_dataframe(*params[, start, end])

the_same_dataset_as(other)

persistent_traits: tuple[str, ...] = ('name', 'guid', 'number_of_results', 'exp_name', 'sample_name', 'completed', 'snapshot', 'run_timestamp_raw', 'description', 'completed_timestamp_raw', 'metadata', 'parent_dataset_links', 'captured_run_id', 'captured_counter')
prepare(*, snapshot: Mapping[Any, Any], interdeps: InterDependencies_, shapes: dict[str, tuple[int, ...]] | None = None, parent_datasets: Sequence[Mapping[Any, Any]] = (), write_in_background: bool = False) None[source]
property pristine: bool
property running: bool
property completed: bool
mark_completed() None[source]
property run_id: int
property captured_run_id: int
property counter: int
property captured_counter: int
property guid: str
property number_of_results: int
property name: str
property exp_name: str
property exp_id: int
property sample_name: str
run_timestamp(fmt: str = '%Y-%m-%d %H:%M:%S') str | None[source]
property run_timestamp_raw: float | None
completed_timestamp(fmt: str = '%Y-%m-%d %H:%M:%S') str | None[source]
property completed_timestamp_raw: float | None
property snapshot: dict[str, Any] | None
add_snapshot(snapshot: str, overwrite: bool = False) None[source]
add_metadata(tag: str, metadata: Any) None[source]
property metadata: dict[str, Any]
property path_to_db: str | None
property paramspecs: dict[str, ParamSpec]
property description: RunDescriber
export(export_type: DataExportType | str | None = None, path: Path | str | None = None, prefix: str | None = None, automatic_export: bool = False) None[source]
property export_info: ExportInfo
property cache: DataSetCache[DataSetProtocol]
get_parameter_data(*params: str | ParamSpec | ParameterBase, start: int | None = None, end: int | None = None, callback: Callable[[float], None] | None = None) ParameterData[source]
get_parameters() list[ParamSpec][source]
property dependent_parameters: tuple[ParamSpecBase, ...]
to_xarray_dataarray_dict(*params: str | ParamSpec | ParameterBase, start: int | None = None, end: int | None = None) dict[str, xr.DataArray][source]
to_xarray_dataset(*params: str | ParamSpec | ParameterBase, start: int | None = None, end: int | None = None) xr.Dataset[source]
to_pandas_dataframe_dict(*params: str | ParamSpec | ParameterBase, start: int | None = None, end: int | None = None) dict[str, pd.DataFrame][source]
to_pandas_dataframe(*params: str | ParamSpec | ParameterBase, start: int | None = None, end: int | None = None) pd.DataFrame[source]
the_same_dataset_as(other: DataSetProtocol) bool[source]
class qcodes.dataset.DataSetType(value)[source]

Bases: str, Enum

An enumeration.

Attributes:

DataSet

DataSetInMem

DataSet = 'DataSet'
DataSetInMem = 'DataSetInMem'
class qcodes.dataset.InterDependencies_(dependencies: dict[ParamSpecBase, tuple[ParamSpecBase, ...]] | None = None, inferences: dict[ParamSpecBase, tuple[ParamSpecBase, ...]] | None = None, standalones: tuple[ParamSpecBase, ...] = ())[source]

Bases: object

Object containing a group of ParamSpecs and the information about their internal relations to each other

Methods:

validate_paramspectree(paramspectree)

Validate a ParamSpecTree.

what_depends_on(ps)

Return a tuple of the parameters that depend on the given parameter.

what_is_inferred_from(ps)

Return a tuple of the parameters that are inferred from the given parameter.

extend([dependencies, inferences, standalones])

Create a new InterDependencies_ object that is an extension of this instance with the provided input

remove(parameter)

Create a new InterDependencies_ object that is similar to this instance, but has the given parameter removed.

validate_subset(parameters)

Validate that the given parameters form a valid subset of the parameters of this instance, meaning that all the given parameters are actually found in this instance and that there are no missing dependencies/inferences.

Attributes:

paramspecs

Return the ParamSpecBase objects of this instance

non_dependencies

Return all parameters that are not dependencies of other parameters, i.e. return the top level parameters.

names

Return all the names of the parameters of this instance

static validate_paramspectree(paramspectree: dict[ParamSpecBase, tuple[ParamSpecBase, ...]]) tuple[type[Exception], str] | None[source]

Validate a ParamSpecTree. Apart from adhering to the type, a ParamSpecTree must not have any cycles.

Returns:

A tuple of an exception type and an error message, or None if the tree is valid
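The acyclicity requirement can be illustrated with a short standalone sketch. This is a generic depth-first cycle check over a ParamSpecTree-like mapping, not qcodes' actual validation code; string keys stand in for ParamSpecBase objects.

```python
# Hypothetical sketch of the invariant validate_paramspectree enforces:
# a mapping {parameter: (parameters it depends on, ...)} must not
# contain cycles. Not qcodes' implementation.
def has_cycle(tree: dict[str, tuple[str, ...]]) -> bool:
    visiting: set[str] = set()  # nodes on the current DFS path
    done: set[str] = set()      # nodes fully explored

    def visit(node: str) -> bool:
        if node in done:
            return False
        if node in visiting:
            return True  # back-edge: a cycle
        visiting.add(node)
        for dep in tree.get(node, ()):
            if visit(dep):
                return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(visit(n) for n in tree)
```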

what_depends_on(ps: ParamSpecBase) tuple[ParamSpecBase, ...][source]

Return a tuple of the parameters that depend on the given parameter. Returns an empty tuple if nothing depends on the given parameter.

Parameters:

ps – the parameter to look up

Raises:

ValueError if the parameter is not part of this object

what_is_inferred_from(ps: ParamSpecBase) tuple[ParamSpecBase, ...][source]

Return a tuple of the parameters that are inferred from the given parameter. Returns an empty tuple if nothing is inferred from the given parameter.

Parameters:

ps – the parameter to look up

Raises:

ValueError if the parameter is not part of this object

property paramspecs: tuple[ParamSpecBase, ...]

Return the ParamSpecBase objects of this instance

property non_dependencies: tuple[ParamSpecBase, ...]

Return all parameters that are not dependencies of other parameters, i.e. return the top level parameters. Returned tuple is sorted by parameter names.

property names: tuple[str, ...]

Return all the names of the parameters of this instance

extend(dependencies: dict[ParamSpecBase, tuple[ParamSpecBase, ...]] | None = None, inferences: dict[ParamSpecBase, tuple[ParamSpecBase, ...]] | None = None, standalones: tuple[ParamSpecBase, ...] = ()) InterDependencies_[source]

Create a new InterDependencies_ object that is an extension of this instance with the provided input

remove(parameter: ParamSpecBase) InterDependencies_[source]

Create a new InterDependencies_ object that is similar to this instance, but has the given parameter removed.

validate_subset(parameters: Sequence[ParamSpecBase]) None[source]

Validate that the given parameters form a valid subset of the parameters of this instance, meaning that all the given parameters are actually found in this instance and that there are no missing dependencies/inferences.

Parameters:

parameters – The collection of ParamSpecBases to validate

Raises:
  • DependencyError, if a dependency is missing

  • InferenceError, if an inference is missing

class qcodes.dataset.LinSweep(param: ParameterBase, start: float, stop: float, num_points: int, delay: float = 0, post_actions: Sequence[Callable[[], None]] = (), get_after_set: bool = False)[source]

Bases: AbstractSweep[float64]

Linear sweep.

Parameters:
  • param – Qcodes parameter to sweep.

  • start – Sweep start value.

  • stop – Sweep end value.

  • num_points – Number of sweep points.

  • delay – Time in seconds between two consecutive sweep points.

  • post_actions – Actions to do after each sweep point.

  • get_after_set – Should we perform a get on the parameter after setting it and store the value returned by get rather than the set value in the dataset.
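The resulting setpoints are an evenly spaced array over [start, stop], the same shape numpy.linspace produces; a quick standalone sketch (using numpy directly rather than qcodes):

```python
import numpy as np

# LinSweep's setpoints are described as a linear (evenly spaced) array
# for the supplied start, stop and num_points -- i.e. np.linspace.
start, stop, num_points = 0.0, 1.0, 5
setpoints = np.linspace(start, stop, num_points)
# setpoints: 0.0, 0.25, 0.5, 0.75, 1.0
```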

Methods:

get_setpoints()

Linear (evenly spaced) numpy array for supplied start, stop and num_points.

Attributes:

param

Returns the Qcodes sweep parameter.

delay

Delay between two consecutive sweep points.

num_points

Number of sweep points.

post_actions

actions to be performed after setting param to its setpoint.

get_after_set

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

get_setpoints() ndarray[Any, dtype[float64]][source]

Linear (evenly spaced) numpy array for supplied start, stop and num_points.

property param: ParameterBase

Returns the Qcodes sweep parameter.

property delay: float

Delay between two consecutive sweep points.

property num_points: int

Number of sweep points.

property post_actions: Sequence[Callable[[], None]]

actions to be performed after setting param to its setpoint.

property get_after_set: bool

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

This defaults to False for backwards compatibility, but subclasses should override it to implement this correctly.

class qcodes.dataset.LogSweep(param: ParameterBase, start: float, stop: float, num_points: int, delay: float = 0, post_actions: Sequence[Callable[[], None]] = (), get_after_set: bool = False)[source]

Bases: AbstractSweep[float64]

Logarithmic sweep.

Parameters:
  • param – Qcodes parameter for sweep.

  • start – Sweep start value.

  • stop – Sweep end value.

  • num_points – Number of sweep points.

  • delay – Time in seconds between two consecutive sweep points.

  • post_actions – Actions to do after each sweep point.

  • get_after_set – Should we perform a get on the parameter after setting it and store the value returned by get rather than the set value in the dataset.
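For comparison with LinSweep, a standalone sketch of logarithmically spaced setpoints. Assuming LogSweep follows numpy.logspace semantics, start and stop are interpreted as exponents, so the sweep runs from 10**start to 10**stop (verify against the qcodes source if this matters for your use):

```python
import numpy as np

# Sketch of logarithmically spaced setpoints via np.logspace, where
# start and stop are exponents (an assumption about LogSweep's semantics).
start, stop, num_points = 0, 3, 4
setpoints = np.logspace(start, stop, num_points)
# setpoints: 1.0, 10.0, 100.0, 1000.0
```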

Methods:

get_setpoints()

Logarithmically spaced numpy array for supplied start, stop and num_points.

Attributes:

param

Returns the Qcodes sweep parameter.

delay

Delay between two consecutive sweep points.

num_points

Number of sweep points.

post_actions

actions to be performed after setting param to its setpoint.

get_after_set

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

get_setpoints() ndarray[Any, dtype[float64]][source]

Logarithmically spaced numpy array for supplied start, stop and num_points.

property param: ParameterBase

Returns the Qcodes sweep parameter.

property delay: float

Delay between two consecutive sweep points.

property num_points: int

Number of sweep points.

property post_actions: Sequence[Callable[[], None]]

actions to be performed after setting param to its setpoint.

property get_after_set: bool

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

This defaults to False for backwards compatibility, but subclasses should override it to implement this correctly.

class qcodes.dataset.Measurement(exp: Experiment | None = None, station: Station | None = None, name: str = '')[source]

Bases: object

Measurement procedure container. Note that multiple measurement instances cannot be nested.

Parameters:
  • exp – Specify the experiment to use. If not given the default one is used. The default experiment is the latest one created.

  • station – The QCoDeS station to snapshot. If not given, the default one is used.

  • name – Name of the measurement. This will be passed down to the dataset produced by the measurement. If not given, a default value of ‘results’ is used for the dataset.

Attributes:

parameters

write_period

Methods:

register_parent(parent, link_type[, description])

Register a parent for the outcome of this measurement

register_parameter(parameter[, setpoints, ...])

Add QCoDeS Parameter to the dataset produced by running this measurement.

register_custom_parameter(name[, label, ...])

Register a custom parameter with this measurement

unregister_parameter(parameter)

Remove a custom/QCoDeS parameter from the dataset produced by running this measurement

add_before_run(func, args)

Add an action to be performed before the measurement.

add_after_run(func, args)

Add an action to be performed after the measurement.

add_subscriber(func, state)

Add a subscriber to the dataset of the measurement.

set_shapes(shapes)

Set the shapes of the data to be recorded in this measurement.

run([write_in_background, in_memory_cache, ...])

Returns the context manager for the experimental run

property parameters: dict[str, ParamSpecBase]
property write_period: float
register_parent(parent: DataSetProtocol, link_type: str, description: str = '') T[source]

Register a parent for the outcome of this measurement

Parameters:
  • parent – The parent dataset

  • link_type – A name for the type of parent-child link

  • description – A free-text description of the relationship

register_parameter(parameter: ParameterBase, setpoints: Sequence[str | ParameterBase] | None = None, basis: Sequence[str | ParameterBase] | None = None, paramtype: str | None = None) T[source]

Add QCoDeS Parameter to the dataset produced by running this measurement.

Parameters:
  • parameter – The parameter to add

  • setpoints – The Parameter representing the setpoints for this parameter. If this parameter is a setpoint, it should be left blank

  • basis – The parameters that this parameter is inferred from. If this parameter is not inferred from any other parameters, this should be left blank.

  • paramtype – Type of the parameter, i.e. the SQL storage class, If None the paramtype will be inferred from the parameter type and the validator of the supplied parameter.

register_custom_parameter(name: str, label: str | None = None, unit: str | None = None, basis: Sequence[str | ParameterBase] | None = None, setpoints: Sequence[str | ParameterBase] | None = None, paramtype: str = 'numeric') T[source]

Register a custom parameter with this measurement

Parameters:
  • name – The name that this parameter will have in the dataset. Must be unique (will overwrite an existing parameter with the same name!)

  • label – The label

  • unit – The unit

  • basis – A list of either QCoDeS Parameters or the names of parameters already registered in the measurement that this parameter is inferred from

  • setpoints – A list of either QCoDeS Parameters or the names of parameters already registered in the measurement that are the setpoints of this parameter

  • paramtype – Type of the parameter, i.e. the SQL storage class

unregister_parameter(parameter: Sequence[str | ParameterBase]) None[source]

Remove a custom/QCoDeS parameter from the dataset produced by running this measurement

add_before_run(func: Callable[[...], Any], args: Sequence[Any]) T[source]

Add an action to be performed before the measurement.

Parameters:
  • func – Function to be performed

  • args – The arguments to said function

add_after_run(func: Callable[[...], Any], args: Sequence[Any]) T[source]

Add an action to be performed after the measurement.

Parameters:
  • func – Function to be performed

  • args – The arguments to said function

add_subscriber(func: Callable[[...], Any], state: MutableSequence[Any] | MutableMapping[Any, Any]) T[source]

Add a subscriber to the dataset of the measurement.

Parameters:
  • func – A function taking three positional arguments: a list of tuples of parameter values, an integer, and a mutable variable (list or dict) that holds state and receives updates.

  • state – The variable to hold the state.

set_shapes(shapes: dict[str, tuple[int, ...]] | None) None[source]

Set the shapes of the data to be recorded in this measurement.

Parameters:

shapes – Dictionary from names of dependent parameters to a tuple of integers describing the shape of the measurement.

run(write_in_background: bool | None = None, in_memory_cache: bool | None = True, dataset_class: DataSetType = DataSetType.DataSet, parent_span: Span | None = None) Runner[source]

Returns the context manager for the experimental run

Parameters:
  • write_in_background – if True, results added within the context manager via DataSaver.add_result will be stored in the background, without blocking the main thread that is executing the context manager. By default, the write-in-background setting is read from the qcodesrc.json config file.

  • in_memory_cache – Should measured data be kept in memory and available as part of the dataset.cache object.

  • dataset_class – Enum representing the Class used to store data with.

class qcodes.dataset.ParamSpec(name: str, paramtype: str, label: str | None = None, unit: str | None = None, inferred_from: Sequence[ParamSpec | str] | None = None, depends_on: Sequence[ParamSpec | str] | None = None, **metadata: Any)[source]

Bases: ParamSpecBase

Parameters:
  • name – name of the parameter

  • paramtype – type of the parameter, i.e. the SQL storage class

  • label – label of the parameter

  • inferred_from – the parameters that this parameter is inferred from

  • depends_on – the parameters that this parameter depends on

Attributes:

inferred_from_

depends_on_

inferred_from

depends_on

allowed_types

Methods:

copy()

Make a copy of self

__hash__()

Allow ParamSpecs in data structures that use hashing (e.g. sets).

base_version()

Return a ParamSpecBase object with the same name, paramtype, label and unit as this ParamSpec

sql_repr()

property inferred_from_: list[str]
property depends_on_: list[str]
property inferred_from: str
property depends_on: str
copy() ParamSpec[source]

Make a copy of self

__hash__() int[source]

Allow ParamSpecs in data structures that use hashing (e.g. sets)

base_version() ParamSpecBase[source]

Return a ParamSpecBase object with the same name, paramtype, label and unit as this ParamSpec

allowed_types: ClassVar[list[str]] = ['array', 'numeric', 'text', 'complex']
sql_repr() str
qcodes.dataset.ParamSpecTree

alias of dict[ParamSpecBase, tuple[ParamSpecBase, ...]]

Methods:

clear()

copy()

fromkeys([value])

Create a new dictionary with keys from iterable and values set to value.

get(key[, default])

Return the value for key if key is in the dictionary, else default.

items()

keys()

pop(k[,d])

If the key is not found, return the default if given; otherwise, raise a KeyError.

popitem()

Remove and return a (key, value) pair as a 2-tuple.

setdefault(key[, default])

Insert key with a value of default if key is not in the dictionary.

update([E, ]**F)

If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]

values()

class qcodes.dataset.RunDescriber(interdeps: InterDependencies_, shapes: dict[str, tuple[int, ...]] | None = None)[source]

Bases: object

The object that holds the description of each run in the database. This object serialises itself to a string and is found under the run_description column in the runs table.

Extension of this object is planned for the future; for now it holds the parameter interdependencies. Extensions should be objects that can convert themselves to a dictionary and be added as attributes to the RunDescriber, so that the RunDescriber can iteratively convert its attributes when converting itself to a dictionary.

Attributes:

version

shapes

interdeps

property version: int
property shapes: dict[str, tuple[int, ...]] | None
property interdeps: InterDependencies_
class qcodes.dataset.SQLiteSettings[source]

Bases: object

Class that holds the machine’s sqlite options.

Note that the settings are not dynamically updated, so changes during runtime must be updated manually. But you probably should not be changing these settings dynamically in the first place.

Attributes:

limits

settings

limits = {'MAX_ATTACHED': 10, 'MAX_COLUMN': 2000, 'MAX_COMPOUND_SELECT': 500, 'MAX_EXPR_DEPTH': 1000, 'MAX_FUNCTION_ARG': 127, 'MAX_LENGTH': 1000000000, 'MAX_LIKE_PATTERN_LENGTH': 50000, 'MAX_PAGE_COUNT': 1073741823, 'MAX_SQL_LENGTH': 1000000000, 'MAX_VARIABLE_NUMBER': 250000}
settings = {'ATOMIC_INTRINSICS': 1, 'COMPILER': 'gcc-11.3.0', 'DEFAULT_AUTOVACUUM': True, 'DEFAULT_CACHE_SIZE': '-2000', 'DEFAULT_FILE_FORMAT': 4, 'DEFAULT_JOURNAL_SIZE_LIMIT': '-1', 'DEFAULT_MMAP_SIZE': True, 'DEFAULT_PAGE_SIZE': 4096, 'DEFAULT_PCACHE_INITSZ': 20, 'DEFAULT_RECURSIVE_TRIGGERS': True, 'DEFAULT_SECTOR_SIZE': 4096, 'DEFAULT_SYNCHRONOUS': 2, 'DEFAULT_WAL_AUTOCHECKPOINT': 1000, 'DEFAULT_WAL_SYNCHRONOUS': 2, 'DEFAULT_WORKER_THREADS': True, 'ENABLE_COLUMN_METADATA': True, 'ENABLE_DBSTAT_VTAB': True, 'ENABLE_FTS3': True, 'ENABLE_FTS3_PARENTHESIS': True, 'ENABLE_FTS3_TOKENIZER': True, 'ENABLE_FTS4': True, 'ENABLE_FTS5': True, 'ENABLE_JSON1': True, 'ENABLE_LOAD_EXTENSION': True, 'ENABLE_MATH_FUNCTIONS': True, 'ENABLE_PREUPDATE_HOOK': True, 'ENABLE_RTREE': True, 'ENABLE_SESSION': True, 'ENABLE_STMTVTAB': True, 'ENABLE_UNLOCK_NOTIFY': True, 'ENABLE_UPDATE_DELETE_LIMIT': True, 'HAVE_ISNAN': True, 'LIKE_DOESNT_MATCH_BLOBS': True, 'MALLOC_SOFT_LIMIT': 1024, 'MAX_DEFAULT_PAGE_SIZE': 32768, 'MAX_MMAP_SIZE': '0x7fff0000', 'MAX_PAGE_SIZE': 65536, 'MAX_SCHEMA_RETRY': 25, 'MAX_TRIGGER_DEPTH': 1000, 'MAX_VDBE_OP': 250000000, 'MAX_WORKER_THREADS': 8, 'MUTEX_PTHREADS': True, 'OMIT_LOOKASIDE': True, 'SECURE_DELETE': True, 'SOUNDEX': True, 'SYSTEM_MALLOC': True, 'TEMP_STORE': 1, 'THREADSAFE': 1, 'USE_URI': True, 'VERSION': '3.37.2'}
class qcodes.dataset.SequentialParamsCaller(*param_meas: ParamMeasT)[source]

Bases: _ParamsCallerProtocol

class qcodes.dataset.ThreadPoolParamsCaller(*param_meas: ParamMeasT, max_workers: int | None = None)[source]

Bases: _ParamsCallerProtocol

Context manager for calling given parameters in a thread pool. Note that parameters that have the same underlying instrument will be called in the same thread.

Usage:

with ThreadPoolParamsCaller(p1, p2, ...) as pool_caller:
    ...
    output = pool_caller()
    # Output can be passed directly into DataSaver.add_result:
    # datasaver.add_result(*output)
    ...
Parameters:
  • param_meas – parameter or a callable without arguments

  • max_workers – number of worker threads to create in the pool; if None, the number of worker threads will be equal to the number of unique “underlying instruments”

Methods:

__call__()

Call parameters in the thread pool and return (param, value) tuples.

__call__() list[tuple[ParameterBase, values_type]][source]

Call parameters in the thread pool and return (param, value) tuples.
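The per-instrument threading strategy can be illustrated without qcodes: parameters are grouped by their underlying instrument, and each group is read in a single worker so that no instrument is accessed from two threads at once. A minimal stdlib sketch (the instrument names and getters below are invented stand-ins, not qcodes objects):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical (instrument, getter) pairs standing in for qcodes parameters.
params = [
    ("dmm", lambda: 1.0),     # dmm.v1
    ("dmm", lambda: 2.0),     # dmm.v2 -- same instrument, so same thread
    ("lockin", lambda: 3.0),  # lockin.x
]

# Group getters by instrument so each instrument is touched by one thread only.
groups: dict[str, list] = {}
for instrument, getter in params:
    groups.setdefault(instrument, []).append(getter)

def call_group(getters):
    return [g() for g in getters]

# One worker per unique "underlying instrument", as in the default max_workers.
with ThreadPoolExecutor(max_workers=len(groups)) as pool:
    futures = [pool.submit(call_group, g) for g in groups.values()]
    output = [value for f in futures for value in f.result()]

print(output)  # [1.0, 2.0, 3.0]
```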

class qcodes.dataset.TogetherSweep(*sweeps: AbstractSweep)[source]

Bases: object

A combination of multiple sweeps that are performed in parallel, such that all parameters in the TogetherSweep are set to their next values before any parameter is read.

Attributes:

sweeps

num_points

Methods:

get_setpoints()

property sweeps: tuple[AbstractSweep, ...]
get_setpoints() Iterable[source]
property num_points: int
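Conceptually, the setpoints of a TogetherSweep are the component sweeps' setpoints zipped together: at each step every parameter is set to its next value before anything is measured. A minimal sketch with plain lists standing in for the component sweeps' setpoints (the names are invented for illustration):

```python
# Setpoints of two hypothetical sweeps of equal length (a TogetherSweep
# requires all component sweeps to have the same number of points).
gate_setpoints = [0.0, 0.1, 0.2]
bias_setpoints = [1.0, 1.5, 2.0]

# get_setpoints() yields one tuple per step, one entry per component sweep.
setpoints = list(zip(gate_setpoints, bias_setpoints))
print(setpoints)  # [(0.0, 1.0), (0.1, 1.5), (0.2, 2.0)]
```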
qcodes.dataset.call_params_threaded(param_meas: Sequence[ParamMeasT]) OutType[source]

Function to create threads per instrument for the given set of measurement parameters.

Parameters:

param_meas – a Sequence of measurement parameters

qcodes.dataset.connect(name: str | Path, debug: bool = False, version: int = -1) ConnectionPlus[source]

Connect or create database. If debug the queries will be echoed back. This function takes care of registering the numpy/sqlite type converters that we need.

Parameters:
  • name – name or path to the sqlite file

  • debug – should tracing be turned on.

  • version – which version to create. We count from 0. -1 means ‘latest’. Should always be left at -1 except when testing.

Returns:

connection object to the database (note, it is ConnectionPlus, not sqlite3.Connection)

qcodes.dataset.datasaver_builder(dataset_definitions: Sequence[DataSetDefinition], *, override_experiment: Experiment | None = None) Generator[list[DataSaver], Any, None][source]

A utility context manager intended to simplify the creation of datasavers

The datasaver builder can be used to streamline the creation of multiple datasavers where all dependent parameters depend on all independent parameters.

Parameters:
  • dataset_definitions – A set of DataSetDefinitions to create and register parameters for

  • override_experiment – Sets the Experiment for all datasets to be written to. This argument overrides any experiments provided in the DataSetDefinition

Yields:

A list of generated datasavers with parameters registered

class qcodes.dataset.DataSetDefinition(name: str, independent: Sequence[ParameterBase], dependent: Sequence[ParameterBase], experiment: Experiment | None = None)[source]

Bases: object

A specification for the creation of a Dataset or Measurement object

Attributes:

name

The name to be assigned to the Measurement and dataset

independent

A sequence of independent parameters in the Measurement and dataset

dependent

A sequence of dependent parameters in the Measurement and dataset Note: All dependent parameters will depend on all independent parameters

experiment

An optional argument specifying which Experiment this dataset should be written to

name: str

The name to be assigned to the Measurement and dataset

independent: Sequence[ParameterBase]

A sequence of independent parameters in the Measurement and dataset

dependent: Sequence[ParameterBase]

A sequence of dependent parameters in the Measurement and dataset Note: All dependent parameters will depend on all independent parameters

experiment: Experiment | None = None

An optional argument specifying which Experiment this dataset should be written to

qcodes.dataset.do0d(*param_meas: ParamMeasT, write_period: float | None = None, measurement_name: str = '', exp: Experiment | None = None, do_plot: bool | None = None, use_threads: bool | None = None, log_info: str | None = None) AxesTupleListWithDataSet[source]

Perform a measurement of a single parameter. This is probably most useful for an ArrayParameter that already returns an array of data points.

Parameters:
  • *param_meas – Parameter(s) to measure at each step or functions that will be called at each step. The function should take no arguments. The parameters and functions are called in the order they are supplied.

  • write_period – The time after which the data is actually written to the database.

  • measurement_name – Name of the measurement. This will be passed down to the dataset produced by the measurement. If not given, a default value of ‘results’ is used for the dataset.

  • exp – The experiment to use for this measurement.

  • do_plot – should png and pdf versions of the images be saved after the run. If None the setting will be read from qcodesrc.json

  • use_threads – If True measurements from each instrument will be done on separate threads. If you are measuring from several instruments this may give a significant speedup.

  • log_info – Message that is logged during the measurement. If None a default message is used.

Returns:

The QCoDeS dataset.

qcodes.dataset.do1d(param_set: ParameterBase, start: float, stop: float, num_points: int, delay: float, *param_meas: ParamMeasT, enter_actions: ActionsT = (), exit_actions: ActionsT = (), write_period: float | None = None, measurement_name: str = '', exp: Experiment | None = None, do_plot: bool | None = None, use_threads: bool | None = None, additional_setpoints: Sequence[ParameterBase] = (), show_progress: bool | None = None, log_info: str | None = None, break_condition: BreakConditionT | None = None) AxesTupleListWithDataSet[source]

Perform a 1D scan of param_set from start to stop in num_points measuring param_meas at each step. In case param_meas is an ArrayParameter this is effectively a 2d scan.

Parameters:
  • param_set – The QCoDeS parameter to sweep over

  • start – Starting point of sweep

  • stop – End point of sweep

  • num_points – Number of points in sweep

  • delay – Delay after setting parameter before measurement is performed

  • param_meas – Parameter(s) to measure at each step or functions that will be called at each step. The function should take no arguments. The parameters and functions are called in the order they are supplied.

  • enter_actions – A list of functions taking no arguments that will be called before the measurements start

  • exit_actions – A list of functions taking no arguments that will be called after the measurements ends

  • write_period – The time after which the data is actually written to the database.

  • additional_setpoints – A list of setpoint parameters to be registered in the measurement but not scanned.

  • measurement_name – Name of the measurement. This will be passed down to the dataset produced by the measurement. If not given, a default value of ‘results’ is used for the dataset.

  • exp – The experiment to use for this measurement.

  • do_plot – should png and pdf versions of the images be saved after the run. If None the setting will be read from qcodesrc.json

  • use_threads – If True measurements from each instrument will be done on separate threads. If you are measuring from several instruments this may give a significant speedup.

  • show_progress – should a progress bar be displayed during the measurement. If None the setting will be read from qcodesrc.json

  • log_info – Message that is logged during the measurement. If None a default message is used.

  • break_condition – Callable that takes no arguments. If it returns True, the measurement is interrupted.

Returns:

The QCoDeS dataset.
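The control flow of do1d can be sketched in plain Python: at each of num_points evenly spaced setpoints, set the parameter, wait the delay, then read the measured parameters in order. The setter/getter functions below are invented stand-ins, not qcodes objects:

```python
import time

def linear_setpoints(start, stop, num_points):
    # Evenly spaced setpoints, endpoints included.
    step = (stop - start) / (num_points - 1)
    return [start + i * step for i in range(num_points)]

measured = []

def set_gate(v):        # stand-in for param_set.set
    set_gate.value = v

def read_current():     # stand-in for a measured parameter's get
    return 2.0 * set_gate.value

# The do1d loop: set, delay, measure at each setpoint.
for setpoint in linear_setpoints(0.0, 1.0, 5):
    set_gate(setpoint)
    time.sleep(0.0)  # `delay` between set and measure (0 here)
    measured.append((setpoint, read_current()))

print(measured[-1])  # (1.0, 2.0)
```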

qcodes.dataset.do2d(param_set1: ParameterBase, start1: float, stop1: float, num_points1: int, delay1: float, param_set2: ParameterBase, start2: float, stop2: float, num_points2: int, delay2: float, *param_meas: ParamMeasT, set_before_sweep: bool | None = True, enter_actions: ActionsT = (), exit_actions: ActionsT = (), before_inner_actions: ActionsT = (), after_inner_actions: ActionsT = (), write_period: float | None = None, measurement_name: str = '', exp: Experiment | None = None, flush_columns: bool = False, do_plot: bool | None = None, use_threads: bool | None = None, additional_setpoints: Sequence[ParameterBase] = (), show_progress: bool | None = None, log_info: str | None = None, break_condition: BreakConditionT | None = None) AxesTupleListWithDataSet[source]

Perform a 2D scan, sweeping param_set1 from start1 to stop1 in num_points1 in the outer loop and param_set2 from start2 to stop2 in num_points2 in the inner loop, measuring param_meas at each step.

Parameters:
  • param_set1 – The QCoDeS parameter to sweep over in the outer loop

  • start1 – Starting point of sweep in outer loop

  • stop1 – End point of sweep in the outer loop

  • num_points1 – Number of points to measure in the outer loop

  • delay1 – Delay after setting parameter in the outer loop

  • param_set2 – The QCoDeS parameter to sweep over in the inner loop

  • start2 – Starting point of sweep in inner loop

  • stop2 – End point of sweep in the inner loop

  • num_points2 – Number of points to measure in the inner loop

  • delay2 – Delay after setting parameter before measurement is performed

  • param_meas – Parameter(s) to measure at each step or functions that will be called at each step. The function should take no arguments. The parameters and functions are called in the order they are supplied.

  • set_before_sweep – if True the outer parameter is set to its first value before the inner parameter is swept to its next value.

  • enter_actions – A list of functions taking no arguments that will be called before the measurements start

  • exit_actions – A list of functions taking no arguments that will be called after the measurements ends

  • before_inner_actions – Actions executed before each run of the inner loop

  • after_inner_actions – Actions executed after each run of the inner loop

  • write_period – The time after which the data is actually written to the database.

  • measurement_name – Name of the measurement. This will be passed down to the dataset produced by the measurement. If not given, a default value of ‘results’ is used for the dataset.

  • exp – The experiment to use for this measurement.

  • flush_columns – If True, the data is written after each column is finished, independently of the elapsed time and write_period.

  • additional_setpoints – A list of setpoint parameters to be registered in the measurement but not scanned.

  • do_plot – should png and pdf versions of the images be saved after the run. If None the setting will be read from qcodesrc.json

  • use_threads – If True measurements from each instrument will be done on separate threads. If you are measuring from several instruments this may give a significant speedup.

  • show_progress – should a progress bar be displayed during the measurement. If None the setting will be read from qcodesrc.json

  • log_info – Message that is logged during the measurement. If None a default message is used.

  • break_condition – Callable that takes no arguments. If it returns True, the measurement is interrupted.

Returns:

The QCoDeS dataset.
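The nesting order of do2d (outer loop over param_set1, inner loop over param_set2) can be sketched without hardware; the `order` list below records the sequence of (outer, inner) setpoints visited, and the comments mark where the action hooks run:

```python
outer_setpoints = [0, 1]     # param_set1: num_points1 = 2
inner_setpoints = [10, 20]   # param_set2: num_points2 = 2
order = []

for v1 in outer_setpoints:          # set param_set1, wait delay1
    # before_inner_actions would run here
    for v2 in inner_setpoints:      # set param_set2, wait delay2
        order.append((v1, v2))      # measure all param_meas here
    # after_inner_actions would run here

print(order)  # [(0, 10), (0, 20), (1, 10), (1, 20)]
```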

qcodes.dataset.dond(*params: AbstractSweep | TogetherSweep | ParamMeasT | Sequence[ParamMeasT], write_period: float | None = None, measurement_name: str | Sequence[str] = '', exp: Experiment | Sequence[Experiment] | None = None, enter_actions: ActionsT = (), exit_actions: ActionsT = (), do_plot: bool | None = None, show_progress: bool | None = None, use_threads: bool | None = None, additional_setpoints: Sequence[ParameterBase] = (), log_info: str | None = None, break_condition: BreakConditionT | None = None, dataset_dependencies: Mapping[str, Sequence[ParamMeasT]] | None = None, in_memory_cache: bool | None = None) AxesTupleListWithDataSet | MultiAxesTupleListWithDataSet[source]

Perform an n-dimensional scan from the slowest (first) to the fastest (last) dimension, measuring m measurement parameters. The dimensions should be specified as sweep objects, followed by the parameters to measure.

Parameters:
  • params

    Instances of n sweep classes and m measurement parameters, e.g. if linear sweep is considered:

    LinSweep(param_set_1, start_1, stop_1, num_points_1, delay_1), ...,
    LinSweep(param_set_n, start_n, stop_n, num_points_n, delay_n),
    param_meas_1, param_meas_2, ..., param_meas_m
    

    If multiple datasets need to be created, the measurement parameters should be grouped, so that one dataset is created per group, e.g.:

    LinSweep(param_set_1, start_1, stop_1, num_points_1, delay_1), ...,
    LinSweep(param_set_n, start_n, stop_n, num_points_n, delay_n),
    [param_meas_1, param_meas_2], ..., [param_meas_m]
    

    If you want to sweep multiple parameters together:

    TogetherSweep(LinSweep(param_set_1, start_1, stop_1, num_points, delay_1),
                  LinSweep(param_set_2, start_2, stop_2, num_points, delay_2))
    param_meas_1, param_meas_2, ..., param_meas_m
    

  • write_period – The time after which the data is actually written to the database.

  • measurement_name – Name(s) of the measurement. This will be passed down to the dataset produced by the measurement. If not given, a default value of ‘results’ is used for the dataset. If more than one is given, each dataset will have an individual name.

  • exp – The experiment to use for this measurement. If you create multiple measurements using groups you may also supply multiple experiments.

  • enter_actions – A list of functions taking no arguments that will be called before the measurements start.

  • exit_actions – A list of functions taking no arguments that will be called after the measurements ends.

  • do_plot – should png and pdf versions of the images be saved and plots are shown after the run. If None the setting will be read from qcodesrc.json

  • show_progress – should a progress bar be displayed during the measurement. If None the setting will be read from qcodesrc.json

  • use_threads – If True, measurements from each instrument will be done on separate threads. If you are measuring from several instruments this may give a significant speedup.

  • additional_setpoints – A list of setpoint parameters to be registered in the measurement but not scanned/swept-over.

  • log_info – Message that is logged during the measurement. If None a default message is used.

  • break_condition – Callable that takes no arguments. If it returns True, the measurement is interrupted.

  • dataset_dependencies – Optionally describe that measured datasets only depend on a subset of the setpoint parameters. Given as a mapping from measurement names to Sequences of Parameters. Note that a dataset must depend on at least one parameter from each dimension, but can depend on one or more parameters from a dimension swept with a TogetherSweep.

  • in_memory_cache – Should a cache of the data be kept available in memory for faster plotting and exporting. Useful to disable if the data is very large in order to save on memory consumption. If None, the value for this will be read from qcodesrc.json config file.

Returns:

A tuple of QCoDeS DataSet, Matplotlib axis, Matplotlib colorbar. If more than one group of measurement parameters is supplied, the output will be a tuple of tuple(QCoDeS DataSet), tuple(Matplotlib axis), tuple(Matplotlib colorbar), in which each element of each sub-tuple belongs to one group, and the order of elements is the order of the supplied groups.
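For gridded sweeps, the setpoints visited by dond form the Cartesian product of the individual sweeps' setpoints, slowest dimension first. This can be sketched with plain lists standing in for the sweeps' setpoints:

```python
from itertools import product

sweep_1 = [0.0, 0.5]   # slowest (outermost, first) sweep
sweep_2 = [1, 2, 3]    # fastest (innermost, last) sweep

# The full grid: num_points_1 * num_points_2 points, inner sweep varying fastest.
grid = list(product(sweep_1, sweep_2))
print(len(grid))  # 6
print(grid[:3])   # [(0.0, 1), (0.0, 2), (0.0, 3)]
```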

qcodes.dataset.dond_into(datasaver: DataSaver, *params: AbstractSweep | ParameterBase | Callable[[], None], additional_setpoints: Sequence[ParameterBase] = ()) None[source]

A doNd-like utility function that writes gridded data to the supplied DataSaver

dond_into accepts AbstractSweep objects and measurement parameters or callables. It executes the specified Sweeps, reads the measurement parameters, and stores the resulting data in the datasaver.

Parameters:
  • datasaver – The datasaver to write data to

  • params

    Instances of n sweep classes and m measurement parameters, e.g. if linear sweep is considered:

    LinSweep(param_set_1, start_1, stop_1, num_points_1, delay_1), ...,
    LinSweep(param_set_n, start_n, stop_n, num_points_n, delay_n),
    param_meas_1, param_meas_2, ..., param_meas_m
    

  • additional_setpoints – A list of setpoint parameters to be registered in the measurement but not scanned/swept-over.

qcodes.dataset.experiments(conn: ConnectionPlus | None = None) list[Experiment][source]

List all the experiments in the container (database file from config)

Parameters:

conn – connection to the database. If not supplied, a new connection to the DB file specified in the config is made

Returns:

All the experiments in the container

qcodes.dataset.extract_runs_into_db(source_db_path: str | Path, target_db_path: str | Path, *run_ids: int, upgrade_source_db: bool = False, upgrade_target_db: bool = False) None[source]

Extract a selection of runs into another DB file. All runs must come from the same experiment. They will be added to an experiment with the same name and sample_name in the target db. If such an experiment does not exist, it will be created.

Parameters:
  • source_db_path – Path to the source DB file

  • target_db_path – Path to the target DB file. The target DB file will be created if it does not exist.

  • run_ids – The run_id’s of the runs to copy into the target DB file

  • upgrade_source_db – If the source DB is found to be in a version that is not the newest, should it be upgraded?

  • upgrade_target_db – If the target DB is found to be in a version that is not the newest, should it be upgraded?

qcodes.dataset.get_data_export_path() Path[source]

Get the path to export data to at the end of a measurement from config

Returns:

Path

qcodes.dataset.get_default_experiment_id(conn: ConnectionPlus) int[source]

Returns the exp_id of the most recently created or loaded experiment as the default experiment. If no default is set, the maximum exp_id is returned instead. If no experiment is found in the database, a ValueError is raised.

Parameters:

conn – Open connection to the db in question.

Returns:

exp_id of the default experiment.

Raises:

ValueError – If no experiment exists in the given db.

qcodes.dataset.get_guids_by_run_spec(*, captured_run_id: int | None = None, captured_counter: int | None = None, experiment_name: str | None = None, sample_name: str | None = None, sample_id: int | None = None, location: int | None = None, work_station: int | None = None, conn: ConnectionPlus | None = None) list[str][source]

Get a list of matching guids from one or more pieces of run specification. All fields are optional.

Parameters:
  • captured_run_id – The run_id that was originally assigned to this at the time of capture.

  • captured_counter – The counter that was originally assigned to this at the time of capture.

  • experiment_name – The name of the experiment that the run was captured in.

  • sample_name – The name of the sample given when creating the experiment.

  • sample_id – The sample_id assigned as part of the GUID.

  • location – The location code assigned as part of GUID.

  • work_station – The workstation assigned as part of the GUID.

  • conn – An optional connection to the database. If no connection is supplied a connection to the default database will be opened.

Returns:

List of guids matching the run spec.

qcodes.dataset.guids_from_dbs(db_paths: Iterable[Path]) tuple[dict[Path, list[str]], dict[str, Path]][source]

Extract all guids from the supplied database paths.

Parameters:

db_paths – An iterable of paths to database files to extract guids from.

Returns:

Tuple of Dictionary mapping paths to lists of guids as strings and Dictionary mapping guids to db paths.

qcodes.dataset.guids_from_dir(basepath: Path | str) tuple[dict[Path, list[str]], dict[str, Path]][source]

Recursively find all db files under basepath and extract guids.

Parameters:

basepath – Path or str of the directory in which to search.

Returns:

Tuple of Dictionary mapping paths to lists of guids as strings and Dictionary mapping guids to db paths.

qcodes.dataset.guids_from_list_str(s: str) tuple[str, ...] | None[source]

Get tuple of guids from a python/json string representation of a list.

Extracts the guids from a string representation of a list, tuple, or set of guids or a single guid.

Parameters:

s – input string

Returns:

Extracted guids as a tuple of strings. If a provided string does not match the format, None will be returned. For an empty list/tuple/set or empty string an empty tuple is returned.

Examples

>>> guids_from_list_str(
...     "['07fd7195-c51e-44d6-a085-fa8274cf00d6', '070d7195-c51e-44d6-a085-fa8274cf00d6']"
... )
('07fd7195-c51e-44d6-a085-fa8274cf00d6', '070d7195-c51e-44d6-a085-fa8274cf00d6')

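The documented behaviour can be approximated with the standard library; this sketch is not the actual qcodes implementation, and the helper name is invented:

```python
import ast

def parse_guid_list(s: str):
    """Rough stand-in for guids_from_list_str: return a tuple of guid
    strings, or None if the input cannot be parsed."""
    if s == "":
        return ()
    try:
        parsed = ast.literal_eval(s)
    except (ValueError, SyntaxError):
        return None
    if isinstance(parsed, str):                  # a single bare guid
        return (parsed,)
    if isinstance(parsed, (list, tuple, set)):   # list/tuple/set of guids
        return tuple(parsed)
    return None

print(parse_guid_list("['a-guid', 'b-guid']"))  # ('a-guid', 'b-guid')
print(parse_guid_list(""))                      # ()
```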
qcodes.dataset.import_dat_file(location: str | Path, exp: Experiment | None = None) list[int][source]

Import a legacy QCoDeS qcodes.data.data_set.DataSet into the database.

Parameters:
  • location – Path to file containing legacy dataset

  • exp – Specify the experiment to store data to. If None the default one is used. See the docs of qcodes.dataset.Measurement for more details.

qcodes.dataset.initialise_database(journal_mode: Literal['DELETE', 'TRUNCATE', 'PERSIST', 'MEMORY', 'WAL', 'OFF'] | None = 'WAL') None[source]

Initialise a database in the location specified by the config object and set atomic commit and rollback mode of the db. The db is created with the latest supported version. If the database already exists the atomic commit and rollback mode is set and the database is upgraded to the latest version.

Parameters:

journal_mode – Which journal_mode should be used for atomic commit and rollback. Options are DELETE, TRUNCATE, PERSIST, MEMORY, WAL and OFF. If set to None no changes are made.

qcodes.dataset.initialise_or_create_database_at(db_file_with_abs_path: str | Path, journal_mode: Literal['DELETE', 'TRUNCATE', 'PERSIST', 'MEMORY', 'WAL', 'OFF'] | None = 'WAL') None[source]

This function sets up QCoDeS to refer to the given database file. If the database file does not exist, it will be initialised.

Parameters:
  • db_file_with_abs_path – Database file name with absolute path, for example C:\mydata\majorana_experiments.db

  • journal_mode – Which journal_mode should be used for atomic commit and rollback. Options are DELETE, TRUNCATE, PERSIST, MEMORY, WAL and OFF. If set to None no changes are made.

qcodes.dataset.initialised_database_at(db_file_with_abs_path: str | Path) Iterator[None][source]

Context manager that initialises or creates a database at the given path and restores the previous ‘db_location’ config setting afterwards.

Parameters:

db_file_with_abs_path – Database file name with absolute path, for example C:\mydata\majorana_experiments.db

class qcodes.dataset.LinSweeper(*args: Any, **kwargs: Any)[source]

Bases: LinSweep

An iterable version of the LinSweep class

Iterating over this object sets the next setpoint and then waits for the delay time.

Attributes:

delay

Delay between two consecutive sweep points.

get_after_set

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

num_points

Number of sweep points.

param

Returns the QCoDeS sweep parameter.

post_actions

actions to be performed after setting param to its setpoint.

Methods:

get_setpoints()

Linear (evenly spaced) numpy array for supplied start, stop and num_points.

property delay: float

Delay between two consecutive sweep points.

property get_after_set: bool

Should we perform a call to get on the parameter after setting it and store that rather than the setpoint value in the dataset?

This defaults to False for backwards compatibility, but subclasses should override this to implement it correctly.

get_setpoints() ndarray[Any, dtype[float64]]

Linear (evenly spaced) numpy array for supplied start, stop and num_points.

property num_points: int

Number of sweep points.

property param: ParameterBase

Returns the QCoDeS sweep parameter.

property post_actions: Sequence[Callable[[], None]]

actions to be performed after setting param to its setpoint.
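The setpoints produced by a LinSweep (and hence a LinSweeper) are evenly spaced between start and stop with both endpoints included, equivalent to numpy.linspace(start, stop, num_points). A pure-Python rendering of that spacing, for illustration only:

```python
def linspace(start: float, stop: float, num_points: int) -> list[float]:
    """Evenly spaced values from start to stop, both endpoints included."""
    if num_points == 1:
        return [start]
    step = (stop - start) / (num_points - 1)
    return [start + i * step for i in range(num_points)]

print(linspace(0.0, 1.0, 5))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```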

qcodes.dataset.load_by_counter(counter: int, exp_id: int, conn: ConnectionPlus | None = None) DataSetProtocol[source]

Load a dataset given its counter in a given experiment

Lookup is performed in the database file that is specified in the config.

Note that the counter used in this function is not preserved when copying data to another db file. We recommend using load_by_run_spec() which does not have this issue and is significantly more flexible.

If the raw data is in the database this will be loaded as a qcodes.dataset.data_set.DataSet otherwise it will be loaded as a DataSetInMemory

Parameters:
  • counter – counter of the dataset within the given experiment

  • exp_id – id of the experiment where to look for the dataset

  • conn – connection to the database to load from. If not provided, a connection to the DB file specified in the config is made

Returns:

qcodes.dataset.data_set.DataSet or DataSetInMemory of the given counter in the given experiment

qcodes.dataset.load_by_guid(guid: str, conn: ConnectionPlus | None = None) DataSetProtocol[source]

Load a dataset by its GUID

If no connection is provided, lookup is performed in the database file that is specified in the config.

If the raw data is in the database this will be loaded as a qcodes.dataset.data_set.DataSet otherwise it will be loaded as a DataSetInMemory

Parameters:
  • guid – guid of the dataset

  • conn – connection to the database to load from

Returns:

qcodes.dataset.data_set.DataSet or DataSetInMemory with the given guid

Raises:
  • NameError – if no run with the given GUID exists in the database

  • RuntimeError – if several runs with the given GUID are found

qcodes.dataset.load_by_id(run_id: int, conn: ConnectionPlus | None = None) DataSetProtocol[source]

Load a dataset by run id

If no connection is provided, lookup is performed in the database file that is specified in the config.

Note that the run_id used in this function is not preserved when copying data to another db file. We recommend using load_by_run_spec() which does not have this issue and is significantly more flexible.

If the raw data is in the database this will be loaded as a qcodes.dataset.data_set.DataSet otherwise it will be loaded as a DataSetInMemory

Parameters:
  • run_id – run id of the dataset

  • conn – connection to the database to load from

Returns:

qcodes.dataset.data_set.DataSet or DataSetInMemory with the given run id

qcodes.dataset.load_by_run_spec(*, captured_run_id: int | None = None, captured_counter: int | None = None, experiment_name: str | None = None, sample_name: str | None = None, sample_id: int | None = None, location: int | None = None, work_station: int | None = None, conn: ConnectionPlus | None = None) DataSetProtocol[source]

Load a run from one or more pieces of run specification. All fields are optional, but the function will raise an error if more than one run matches the supplied specification. Along with the error, the specs of the runs found will be printed.

If the raw data is in the database this will be loaded as a qcodes.dataset.data_set.DataSet otherwise it will be loaded as a DataSetInMemory

Parameters:
  • captured_run_id – The run_id that was originally assigned to this at the time of capture.

  • captured_counter – The counter that was originally assigned to this at the time of capture.

  • experiment_name – The name of the experiment that the run was captured in.

  • sample_name – The name of the sample given when creating the experiment.

  • sample_id – The sample_id assigned as part of the GUID.

  • location – The location code assigned as part of GUID.

  • work_station – The workstation assigned as part of the GUID.

  • conn – An optional connection to the database. If no connection is supplied a connection to the default database will be opened.

Raises:

NameError – if no run or more than one run with the given specification exists in the database

Returns:

qcodes.dataset.data_set.DataSet or DataSetInMemory matching the provided specification.

qcodes.dataset.load_experiment(exp_id: int, conn: ConnectionPlus | None = None) Experiment[source]

Load experiment with the specified id (from database file from config)

Parameters:
  • exp_id – experiment id

  • conn – connection to the database. If not supplied, a new connection to the DB file specified in the config is made

Returns:

experiment with the specified id

Raises:

ValueError – If experiment id is not an integer.

qcodes.dataset.load_experiment_by_name(name: str, sample: str | None = None, conn: ConnectionPlus | None = None, load_last_duplicate: bool = False) Experiment[source]

Try to load experiment with the specified name.

Nothing stops you from having many experiments with the same name and sample name. In that case, this function will not work unless load_last_duplicate is set to True, in which case the last of the duplicate experiments is loaded.

Parameters:
  • name – the name of the experiment

  • sample – the name of the sample

  • load_last_duplicate – If True, prevent raising error for having multiple experiments with the same name and sample name, and load the last duplicated experiment, instead.

  • conn – connection to the database. If not supplied, a new connection to the DB file specified in the config is made

Returns:

The requested experiment

Raises:

ValueError – either if the name and sample name are not unique, unless load_last_duplicate is True, or if no experiment found for the supplied name and sample.

qcodes.dataset.load_from_netcdf(path: Path | str, path_to_db: Path | str | None = None) DataSetInMem[source]

Create an in-memory dataset from a netcdf file. The netcdf file is expected to contain a QCoDeS dataset that has been exported using the QCoDeS netcdf export functions.

Parameters:
  • path – Path to the netcdf file to import.

  • path_to_db – Optional path to a database where this dataset may be exported to. If not supplied the path can be given at export time or the dataset exported to the default db as set in the QCoDeS config.

Returns:

The loaded dataset.

qcodes.dataset.load_last_experiment() Experiment[source]

Load last experiment (from database file from config)

Returns:

The last experiment

Raises:

ValueError – If no experiment exists in the db.

qcodes.dataset.load_or_create_experiment(experiment_name: str, sample_name: str | None = None, conn: ConnectionPlus | None = None, load_last_duplicate: bool = False) Experiment[source]

Find and return an experiment with the given name and sample name, or create one if not found.

Parameters:
  • experiment_name – Name of the experiment to find or create.

  • sample_name – Name of the sample.

  • load_last_duplicate – If True, prevent raising error for having multiple experiments with the same name and sample name, and load the last duplicated experiment, instead.

  • conn – Connection to the database. If not supplied, a new connection to the DB file specified in the config is made.

Returns:

The found or created experiment

Raises:

ValueError – If the name and sample name are not unique, unless load_last_duplicate is True.

qcodes.dataset.new_data_set(name: str, exp_id: int | None = None, specs: list[ParamSpec] | None = None, values: Sequence[str | complex | list | ndarray | bool | None] | None = None, metadata: Any | None = None, conn: ConnectionPlus | None = None, in_memory_cache: bool = True) DataSet[source]

Create a new dataset in the currently active/selected database.

If exp_id is not specified, the last experiment will be loaded by default.

Parameters:
  • name – the name of the new dataset

  • exp_id – the id of the experiments this dataset belongs to, defaults to the last experiment

  • specs – list of parameters to create this dataset with

  • values – the values to associate with the parameters

  • metadata – the metadata to associate with the dataset

  • in_memory_cache – Should measured data be kept in memory and available as part of the dataset.cache object.

Returns:

the newly created qcodes.dataset.data_set.DataSet

qcodes.dataset.new_experiment(name: str, sample_name: str | None, format_string: str = '{}-{}-{}', conn: ConnectionPlus | None = None) Experiment[source]

Create a new experiment (in the database file from config)

Parameters:
  • name – the name of the experiment

  • sample_name – the name of the current sample

  • format_string – basic format string for table-name must contain 3 placeholders.

  • conn – connection to the database. If not supplied, a new connection to the DB file specified in the config is made

Returns:

the new experiment
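The format_string requirement ("must contain 3 placeholders") can be illustrated as below; in qcodes the three slots are filled with the experiment name, experiment id, and run counter, though the validation helper and values here are hypothetical:

```python
def format_table_name(fmt: str, name: str, exp_id: int, run_counter: int) -> str:
    """Check that the format string has exactly three '{}' placeholders,
    then fill them (illustrative helper, not part of qcodes)."""
    if fmt.count("{}") != 3:
        raise ValueError("format_string must contain exactly 3 placeholders")
    return fmt.format(name, exp_id, run_counter)

table = format_table_name("{}-{}-{}", "iv_sweep", 2, 7)  # 'iv_sweep-2-7'
```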

qcodes.dataset.plot_by_id(run_id: int, axes: Axes | Sequence[Axes] | None = None, colorbars: Colorbar | Sequence[Colorbar] | None = None, rescale_axes: bool = True, auto_color_scale: bool | None = None, cutoff_percentile: tuple[float, float] | float | None = None, complex_plot_type: Literal['real_and_imag', 'mag_and_phase'] = 'real_and_imag', complex_plot_phase: Literal['radians', 'degrees'] = 'radians', **kwargs: Any) AxesTupleList[source]

Construct all plots for a given run_id. Here run_id is an alias for captured_run_id for historical reasons. See the docs of qcodes.dataset.load_by_run_spec() for details of loading runs. All other arguments are forwarded to plot_dataset(), see this for more details.

qcodes.dataset.plot_dataset(dataset: DataSetProtocol, axes: Axes | Sequence[Axes] | None = None, colorbars: Colorbar | Sequence[Colorbar] | Sequence[None] | None = None, rescale_axes: bool = True, auto_color_scale: bool | None = None, cutoff_percentile: tuple[float, float] | float | None = None, complex_plot_type: Literal['real_and_imag', 'mag_and_phase'] = 'real_and_imag', complex_plot_phase: Literal['radians', 'degrees'] = 'radians', **kwargs: Any) AxesTupleList[source]

Construct all plots for a given dataset

Implemented so far:

  • 1D line and scatter plots

  • 2D plots on filled out rectangular grids

  • 2D scatterplots (fallback)

The function can optionally be supplied with a matplotlib axes or a list of axes that will be used for plotting. The user should ensure that the number of axes matches the number of datasets to plot. To plot several (1D) datasets in the same axes, supply the same axes several times. Colorbar axes are created dynamically. If colorbar axes are supplied, they will be reused, yet new colorbar axes will be returned.

The plot has a title that comprises run id, experiment name, and sample name.

**kwargs are passed to matplotlib’s relevant plotting functions. By default, the data in any vector plot will be rasterized for scatter plots and heatmaps if more than 5000 points are supplied. This can be overridden by supplying the rasterized kwarg.

Parameters:
  • dataset – The dataset to plot

  • axes – Optional Matplotlib axes to plot on. If not provided, new axes will be created

  • colorbars – Optional Matplotlib Colorbars to use for 2D plots. If not provided, new ones will be created

  • rescale_axes – If True, tick labels and units for axes of parameters with standard SI units will be rescaled so that, for example, a ‘0.00000005’ tick label on a ‘V’ axis is transformed to ‘50’ on an ‘nV’ axis (‘n’ is ‘nano’)

  • auto_color_scale – If True, the colorscale of heatmap plots will be automatically adjusted to disregard outliers.

  • cutoff_percentile – Percentile of data that may maximally be clipped on both sides of the distribution. If given a tuple (a,b) the percentile limits will be a and 100-b. See also the plotting tutorial notebook.

  • complex_plot_type – Method for converting complex-valued parameters into two real-valued parameters, either "real_and_imag" or "mag_and_phase". Applicable only for the cases where the dataset contains complex numbers

  • complex_plot_phase – Format of phase for plotting complex-valued data, either "radians" or "degrees". Applicable only for the cases where the dataset contains complex numbers

Returns:

A list of axes and a list of colorbars of the same length. The colorbar axes may be None if no colorbar is created (e.g. for 1D plots)

Config dependencies: (qcodesrc.json)
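The cutoff_percentile rule above — a tuple (a, b) maps to percentile limits a and 100-b, so outliers are excluded from the color scale — can be sketched in pure Python. This is an illustration of the documented rule, not the qcodes implementation (which applies it to the heatmap color normalization):

```python
def percentile(sorted_data, p):
    """Linear-interpolation percentile on pre-sorted data (inclusive method)."""
    k = (len(sorted_data) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(sorted_data) - 1)
    return sorted_data[lo] + (sorted_data[hi] - sorted_data[lo]) * (k - lo)

def color_limits(data, cutoff_percentile=(5, 5)):
    """Return (vmin, vmax) with up to a% clipped at the bottom
    and b% at the top, per the (a, 100-b) rule."""
    if isinstance(cutoff_percentile, tuple):
        a, b = cutoff_percentile
    else:
        a = b = cutoff_percentile
    s = sorted(data)
    return percentile(s, a), percentile(s, 100 - b)

data = list(range(100)) + [10_000]          # one large outlier
vmin, vmax = color_limits(data, (0, 1))     # clip the top 1%: outlier ignored
```

With (0, 1) the single outlier falls outside the 99th percentile and no longer dominates the color scale; with (0, 0) the full data range is kept.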

qcodes.dataset.reset_default_experiment_id(conn: ConnectionPlus | None = None) None[source]

Resets the default experiment id to the last experiment in the db.

qcodes.dataset.rundescriber_from_json(json_str: str) RunDescriber

Deserialize a JSON string into a RunDescriber of the current version
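The round-trip contract — a serialized RunDescriber deserializes back to an equivalent object — can be illustrated with the stdlib json module. The dict below is a hypothetical, simplified stand-in; the real RunDescriber JSON is versioned and richer (interdependencies, shapes, a version field):

```python
import json

# Hypothetical simplified run description, not the actual qcodes format
desc = {
    "version": 3,
    "interdependencies": {"x": [], "y": ["x"]},  # y depends on x
    "shapes": {"y": [101]},
}

json_str = json.dumps(desc)       # serialize (cf. serialization helpers)
restored = json.loads(json_str)   # deserialize (cf. rundescriber_from_json)
```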