In [1]: import pandas as pd
Data used for this tutorial:
  • For this tutorial, air quality data about \(NO_2\) is used, made available by OpenAQ and downloaded using the py-openaq package.

    The air_quality_no2_long.csv data set provides \(NO_2\) values for the measurement stations FR04014, BETR801 and London Westminster in Paris, Antwerp and London, respectively.

    To raw data
    In [2]: air_quality_no2 = pd.read_csv("data/air_quality_no2_long.csv",
       ...:                               parse_dates=True)
       ...: 
    
    In [3]: air_quality_no2 = air_quality_no2[["date.utc", "location",
       ...:                                    "parameter", "value"]]
       ...: 
    
    In [4]: air_quality_no2.head()
    
  • For this tutorial, air quality data about particulate matter less than 2.5 micrometers (\(PM_{25}\)) is used, made available by OpenAQ and downloaded using the py-openaq package.

    The air_quality_pm25_long.csv data set provides \(PM_{25}\) values for the measurement stations FR04014, BETR801 and London Westminster in Paris, Antwerp and London, respectively.

    To raw data
    In [5]: air_quality_pm25 = pd.read_csv("data/air_quality_pm25_long.csv",
       ...:                                parse_dates=True)
       ...: 
    
    In [6]: air_quality_pm25 = air_quality_pm25[["date.utc", "location",
       ...:                                      "parameter", "value"]]
       ...: 
    
    In [7]: air_quality_pm25.head()
    

How to combine data from multiple tables?

Concatenating objects

[Figure: ../../_images/08_concat_row.svg, schematic of row-wise concatenation of two tables]
  • I want to combine the measurements of \(NO_2\) and \(PM_{25}\), two tables with a similar structure, in a single table.

    In [8]: air_quality = pd.concat([air_quality_pm25, air_quality_no2], axis=0)
    
    In [9]: air_quality.head()
    

    The concat() function performs concatenation operations of multiple tables along one of the axes (row-wise or column-wise).

By default concatenation is along axis 0, so the resulting table combines the rows of the input tables. Let’s check the shape of the original and the concatenated tables to verify the operation:

In [10]: print('Shape of the ``air_quality_pm25`` table: ', air_quality_pm25.shape)

In [11]: print('Shape of the ``air_quality_no2`` table: ', air_quality_no2.shape)

In [12]: print('Shape of the resulting ``air_quality`` table: ', air_quality.shape)

Hence, the resulting table has 3178 = 1110 + 2068 rows.
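
As a quick, self-contained check of this behaviour, a minimal sketch with two made-up tables (hypothetical values, not the tutorial data) shows that axis=0 concatenation simply stacks the rows while leaving the columns unchanged:

import pandas as pd

# Two tiny illustrative tables with the same columns.
pm25_demo = pd.DataFrame({"location": ["FR04014"],
                          "parameter": ["pm25"],
                          "value": [18.0]})
no2_demo = pd.DataFrame({"location": ["FR04014", "BETR801"],
                         "parameter": ["no2", "no2"],
                         "value": [20.0, 26.5]})

combined = pd.concat([pm25_demo, no2_demo], axis=0)
print(pm25_demo.shape, no2_demo.shape, combined.shape)
# (1, 3) (2, 3) (3, 3): 3 = 1 + 2 rows, columns unchanged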

Note

The axis argument appears in a number of pandas methods that can be applied along an axis. A DataFrame has two corresponding axes: the first running vertically downwards across rows (axis 0), and the second running horizontally across columns (axis 1). Most operations, like concatenation or summary statistics, are by default applied across rows (axis 0), but they can be applied across columns as well.
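
A minimal sketch with two hypothetical single-column tables contrasts the two axes:

import pandas as pd

left = pd.DataFrame({"a": [1, 2]})
right = pd.DataFrame({"b": [3, 4]})

# axis=0 stacks rows; the columns are combined (union), gaps become NaN.
print(pd.concat([left, right], axis=0).shape)  # (4, 2)

# axis=1 places the tables side by side, aligned on the row index.
print(pd.concat([left, right], axis=1).shape)  # (2, 2)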

Sorting the table on the datetime information also illustrates the combination of both tables, with the parameter column defining the origin of the table (either no2 from table air_quality_no2 or pm25 from table air_quality_pm25):

In [13]: air_quality = air_quality.sort_values("date.utc")

In [14]: air_quality.head()

In this specific example, the parameter column provided by the data ensures that each of the original tables can be identified. This is not always the case. The concat function provides a convenient solution with the keys argument, adding an additional (hierarchical) row index. For example:

In [15]: air_quality_ = pd.concat([air_quality_pm25, air_quality_no2], keys=["PM25", "NO2"])

In [16]: air_quality_.head()

Note

The existence of multiple row/column indices at the same time has not been mentioned within these tutorials. Hierarchical indexing or MultiIndex is an advanced and powerful pandas feature to analyze higher dimensional data.

Multi-indexing is out of scope for this pandas introduction. For the moment, remember that the function reset_index can be used to convert any level of an index to a column, e.g. air_quality.reset_index(level=0).
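
A small sketch with made-up tables shows how keys adds an extra outer index level and how reset_index(level=0) converts that level back into a column (named level_0 here, because the level itself has no name):

import pandas as pd

pm25_demo = pd.DataFrame({"value": [18.0, 21.5]})
no2_demo = pd.DataFrame({"value": [20.0]})

# keys= labels each input table in an additional (outer) index level.
combined = pd.concat([pm25_demo, no2_demo], keys=["PM25", "NO2"])
print(combined.index)  # MultiIndex: ("PM25", 0), ("PM25", 1), ("NO2", 0)

# reset_index(level=0) turns the outer level into a regular column again.
print(combined.reset_index(level=0))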

To user guide

Feel free to dive into the world of multi-indexing at the user guide section on advanced indexing.

To user guide

More options on table concatenation (row-wise and column-wise) and how concat can be used to define the logic (union or intersection) of the indexes on the other axes are provided in the section on object concatenation.
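
The union-versus-intersection behaviour mentioned above can be sketched with two hypothetical tables that share only one column:

import pandas as pd

t1 = pd.DataFrame({"location": ["FR04014"], "value": [20.0]})
t2 = pd.DataFrame({"location": ["BETR801"], "unit": ["µg/m³"]})

# Default join="outer": the union of the columns is kept, gaps become NaN.
print(pd.concat([t1, t2]).columns.tolist())                # ['location', 'value', 'unit']

# join="inner": only the columns common to both tables survive.
print(pd.concat([t1, t2], join="inner").columns.tolist())  # ['location']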

Join tables using a common identifier

[Figure: ../../_images/08_merge_left.svg, schematic of a left join of two tables]
  • Add the station coordinates, provided by the stations metadata table, to the corresponding rows in the measurements table.

    Warning

    The air quality measurement station coordinates are stored in a data file air_quality_stations.csv, downloaded using the py-openaq package.

    In [17]: stations_coord = pd.read_csv("data/air_quality_stations.csv")
    
    In [18]: stations_coord.head()
    

    Note

    The stations used in this example (FR04014, BETR801 and London Westminster) are just three of the entries listed in the metadata table. We only want to add the coordinates of these three to the measurements table, each on the corresponding rows of the air_quality table.

    In [19]: air_quality.head()
    
    In [20]: air_quality = pd.merge(air_quality, stations_coord, how="left", on="location")
    
    In [21]: air_quality.head()
    

    Using the merge() function, for each of the rows in the air_quality table, the corresponding coordinates are added from the stations_coord table. Both tables have the column location in common, which is used as the key to combine the information. By choosing the left join, only the locations available in the air_quality (left) table, i.e. FR04014, BETR801 and London Westminster, end up in the resulting table. The merge function supports multiple join options similar to database-style operations. A small sketch with made-up tables, shown after this list, walks through the same pattern.

  • Add the parameters’ full description and name, provided by the parameters metadata table, to the measurements table.

    Warning

    The air quality parameters metadata are stored in a data file air_quality_parameters.csv, downloaded using the py-openaq package.

    In [22]: air_quality_parameters = pd.read_csv("data/air_quality_parameters.csv")
    
    In [23]: air_quality_parameters.head()
    
    In [24]: air_quality = pd.merge(air_quality, air_quality_parameters,
       ....:                        how='left', left_on='parameter', right_on='id')
       ....: 
    
    In [25]: air_quality.head()
    

    Compared to the previous example, there is no common column name. However, the parameter column in the air_quality table and the id column in the air_quality_parameters table both provide the measured variable in a common format. The left_on and right_on arguments are used here (instead of just on) to make the link between the two tables.
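
Both join patterns can be sketched with small made-up tables (illustrative values, not the OpenAQ data): first a left join on a shared location column, then a left join where the key columns carry different names and left_on/right_on is needed.

import pandas as pd

measurements = pd.DataFrame({"location": ["FR04014", "BETR801", "FR04014"],
                             "parameter": ["no2", "no2", "pm25"],
                             "value": [20.0, 26.5, 18.0]})
coords = pd.DataFrame({"location": ["FR04014", "BETR801", "London Westminster"],
                       "latitude": [48.83724, 51.20966, 51.49467]})
parameters = pd.DataFrame({"id": ["no2", "pm25"],
                           "name": ["nitrogen dioxide", "particulate matter"]})

# Left join on the shared "location" column: every measurement row is kept,
# and the matching coordinates are attached where available.
merged = pd.merge(measurements, coords, how="left", on="location")

# Left join with differently named keys: both key columns remain in the result.
merged = pd.merge(merged, parameters, how="left",
                  left_on="parameter", right_on="id")
print(merged.columns.tolist())
# ['location', 'parameter', 'value', 'latitude', 'id', 'name']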

To user guide

pandas also supports inner, outer, and right joins. More information on join/merge of tables is provided in the user guide section on database-style merging of tables. Or have a look at the comparison with SQL page.
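
A quick comparison of the how argument, using two tiny hypothetical key tables:

import pandas as pd

left = pd.DataFrame({"key": ["A", "B"], "left_value": [1, 2]})
right = pd.DataFrame({"key": ["B", "C"], "right_value": [3, 4]})

for how in ("left", "right", "inner", "outer"):
    print(how, pd.merge(left, right, on="key", how=how)["key"].tolist())
# left ['A', 'B'], right ['B', 'C'], inner ['B'], outer ['A', 'B', 'C']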

REMEMBER

  • Multiple tables can be concatenated both column-wise and row-wise using the concat function.

  • For database-like merging/joining of tables, use the merge function.

To user guide

See the user guide for a full description of the various facilities to combine data tables.