This page was generated from docs/examples/DataSet/Exporting-data-to-other-file-formats.ipynb.

Exporting QCoDeS Datasets

This notebook demonstrates how we can export QCoDeS datasets to other file formats.


First, we borrow an example from the measurement notebook to have some data to work with.

%matplotlib inline
import shutil
from pathlib import Path

import numpy as np

import qcodes as qc
import qcodes.logger
from qcodes.dataset import (
    Measurement,
    load_by_run_spec,
    load_from_netcdf,
    load_or_create_experiment,
    plot_dataset,
)
from qcodes.tests.instrument_mocks import (
    DummyInstrument,
    DummyInstrumentWithMeasurement,
)

Logging hadn't been started.
Activating auto-logging. Current session state plus future input saved.
Filename       : /home/runner/.qcodes/logs/command_history.log
Mode           : append
Output logging : True
Raw input log  : False
Timestamping   : True
State          : active
Qcodes Logfile : /home/runner/.qcodes/logs/230929-5062-qcodes.log
# preparatory mocking of physical setup
dac = DummyInstrument("dac", gates=["ch1", "ch2"])
dmm = DummyInstrumentWithMeasurement("dmm", setter_instr=dac)
station = qc.Station(dmm, dac)
exp = load_or_create_experiment(
    experiment_name="exporting_data", sample_name="no sample"
)
meas = Measurement(exp)
meas.register_parameter(dac.ch1)  # register the first independent parameter
meas.register_parameter(dac.ch2)  # register the second independent parameter
meas.register_parameter(
    dmm.v2, setpoints=(dac.ch1, dac.ch2)
)  # register the dependent one
<qcodes.dataset.measurements.Measurement at 0x7f12e1c08970>

We then perform two very basic measurements using dummy instruments.

# run a 2D sweep

with meas.run() as datasaver:
    for v1 in np.linspace(-1, 0, 200, endpoint=False):
        for v2 in np.linspace(-1, 1, 201):
            dac.ch1(v1)
            dac.ch2(v2)
            val = dmm.v2.get()
            datasaver.add_result((dac.ch1, v1), (dac.ch2, v2), (dmm.v2, val))

dataset1 = datasaver.dataset
Starting experimental run with id: 1.
# run a 2D sweep

with meas.run() as datasaver:
    for v1 in np.linspace(0, 1, 200, endpoint=False):
        for v2 in np.linspace(1, 2, 201):
            dac.ch1(v1)
            dac.ch2(v2)
            val = dmm.v2.get()
            datasaver.add_result((dac.ch1, v1), (dac.ch2, v2), (dmm.v2, val))

dataset2 = datasaver.dataset
Starting experimental run with id: 2.

Exporting data manually

The dataset can be exported using the export method. Currently exporting to netcdf and csv is supported.

dataset2.export("netcdf", path=".")

The export_info attribute contains information about where the dataset has been exported to:

ExportInfo(export_paths={'nc': '/home/runner/work/Qcodes/Qcodes/docs/examples/DataSet/'})

Looking at the signature of export we can see that in addition to the file format we can set the prefix and path to export to.


Export data automatically

Datasets may also be exported automatically using the configuration options given in the dataset section of the QCoDeS config. There you can toggle whether a dataset should be exported automatically using the export_automatic option, as well as set the default type, prefix, elements in the name, and path. See the table here for the relevant configuration options.
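As an illustration, these options live in the dataset section of your qcodesrc.json. The fragment below is a sketch; the exact key names and defaults should be checked against the configuration table for your installed QCoDeS version:

```json
{
    "dataset": {
        "export_automatic": true,
        "export_type": "netcdf",
        "export_prefix": "qcodes_",
        "export_name_elements": ["captured_run_id", "guid"]
    }
}
```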

For more information about how to configure QCoDeS datasets see the page about configuration in the QCoDeS docs.

By default, datasets are exported into a folder next to the database with the same name, but with . replaced by _, e.g. if you store data to ~/experiments.db the exported files will be stored in ~/experiments_db. This folder is automatically created if it does not exist.
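The default folder-naming convention described above can be sketched with the standard library alone (the database path here is just an example):

```python
from pathlib import Path

# The export folder sits next to the database file and gets the same
# name, but with "." replaced by "_".
db_path = Path("~/experiments.db").expanduser()
export_folder = db_path.parent / db_path.name.replace(".", "_")
print(export_folder.name)  # experiments_db
```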

Automatically post-process exported datasets

QCoDeS will attempt to call any EntryPoint registered for the group "qcodes.dataset.on_export". This allows a user to set up a function that triggers post-processing such as backup to cloud or an external drive, plotting, or post-processing analysis. Functions registered for this entry point group are expected to take a Path to the file as input and return None. Please consult the Setuptools docs for more information on the use of EntryPoints. The entry point function must take the path to the exported file as a positional argument and take **kwargs for future compatibility. At the moment a single keyword argument, automatic_export, is passed to the function, indicating whether the dataset was exported automatically or manually.
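A minimal sketch of such a hook, following the contract described above (positional Path plus **kwargs, returning None); the function name and the registration shown in the comment are our own, not part of QCoDeS:

```python
from pathlib import Path


def on_export_backup(path: Path, **kwargs) -> None:
    # automatic_export tells us whether the export was triggered
    # automatically or by an explicit call to export().
    automatic_export = kwargs.get("automatic_export")
    print(f"exported {path.name} (automatic_export={automatic_export})")


# Registration would go in the package metadata, e.g. in pyproject.toml:
# [project.entry-points."qcodes.dataset.on_export"]
# backup = "mypackage.hooks:on_export_backup"

on_export_backup(Path("qcodes_1.nc"), automatic_export=True)
```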

Importing exported datasets into a new database

The above dataset has been created in the following database


Now let's imagine that we move the exported dataset to a different computer. To emulate this, we will create a new database file and set it as the active database.


We can then reload the dataset from the netcdf file as a DataSetInMem. This is a class that closely matches the regular DataSet class; however, its metadata may or may not be written to a database file, and its data is not written to a database file. See more in In memory dataset. Concretely, this means that the data captured in the dataset can be accessed via dataset.cache.data() etc. and not via the methods directly on the dataset (dataset.get_parameter_data …).

Note that it is currently only possible to reload a dataset from a netcdf export and not from a csv export. This is due to the fact that a csv export only contains the raw data and not the metadata needed to recreate a dataset.
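To illustrate the point with the standard library alone: a csv export carries only the tabular values, nothing like the guid, snapshot, or run description that appear in the netcdf attributes listing further down. The column names and values below are made-up examples:

```python
import csv
import io

# Write a csv "export" containing only raw columns.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["dac_ch1", "dac_ch2", "dmm_v2"])
writer.writerow([0.0, 1.0, 0.005865])

# Reading it back recovers values-as-strings, but no metadata from
# which a dataset (interdependencies, guid, snapshot) could be rebuilt.
buf.seek(0)
rows = list(csv.reader(buf))
print(rows[0])  # ['dac_ch1', 'dac_ch2', 'dmm_v2']
```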

loaded_ds = load_from_netcdf(dataset2.export_info.export_paths["nc"])

However, we can still export the data to Pandas and xarray.

Dimensions:  (dac_ch1: 200, dac_ch2: 201)
  * dac_ch1  (dac_ch1) float64 0.0 0.005 0.01 0.015 ... 0.98 0.985 0.99 0.995
  * dac_ch2  (dac_ch2) float64 1.0 1.005 1.01 1.015 ... 1.985 1.99 1.995 2.0
Data variables:
    dmm_v2   (dac_ch1, dac_ch2) float64 0.005865 0.005756 ... 0.0003594 0.001046
Attributes: (12/15)
    ds_name:                  results
    sample_name:              no sample
    exp_name:                 exporting_data
    snapshot:                 {"station": {"instruments": {"dmm": {"functions...
    guid:                     19a1f0da-0000-0000-0000-018ae074a8eb
    run_timestamp:            2023-09-29 10:20:56
    ...                       ...
    run_id:                   2
    run_description:          {"version": 3, "interdependencies": {"paramspec...
    parent_dataset_links:     []
    run_timestamp_raw:        1695982856.4334233
    completed_timestamp_raw:  1695982866.4953754
    export_info:              {"export_paths": {"nc": "/home/runner/work/Qcod...

And plot it using plot_dataset.

([<Axes: title={'center': 'Run #2, Experiment exporting_data (no sample)'}, xlabel='Gate ch1 (mV)', ylabel='Gate ch2 (V)'>],
 [<matplotlib.colorbar.Colorbar at 0x7f12d337f3d0>])

Note that the dataset will have the same captured_run_id and captured_counter as before:

captured_run_id = loaded_ds.captured_run_id

But do note that the run_id and counter are in general not preserved, since they represent the dataset's number in a given db file.


A loaded dataset's metadata can be written to the current db file, and subsequently the dataset, including metadata and raw data, can be reloaded from the database and netcdf file.


Now that the metadata has been written to a database the dataset can be plotted with plottr like a regular dataset.

del loaded_ds
reloaded_ds = load_by_run_spec(captured_run_id=captured_run_id)
([<Axes: title={'center': 'Run #2, Experiment exporting_data (no sample)'}, xlabel='Gate ch1 (mV)', ylabel='Gate ch2 (V)'>],
 [<matplotlib.colorbar.Colorbar at 0x7f12d12ca5c0>])

Note that loading a dataset from the database will also load the raw data into dataset.cache, provided that the netcdf file is still in the location it was in when the metadata was written to the database. load_by_run_spec and related functions will load data into a regular DataSet provided that the data can be found in the database; otherwise it will be loaded into a DataSetInMem.

If the netcdf file cannot be found the dataset will load with a warning and the raw data will not be accessible from the dataset.

If this happens because you have moved the netcdf file, you can use the set_netcdf_location method to set a new location for the netcdf file in the dataset and database file. Here we demonstrate this by copying the netcdf file and changing the location using this method.

filepath = dataset2.export_info.export_paths["nc"]
# "copy_of_export.nc" is a placeholder name for the copied file
new_file_path = str(Path(filepath).parent / "copy_of_export.nc")
shutil.copyfile(filepath, new_file_path)
dataset2.set_netcdf_location(new_file_path)
ExportInfo(export_paths={'nc': '/home/runner/work/Qcodes/Qcodes/docs/examples/DataSet/'})