7 Geographic data I/O
7.1 Prerequisites
This chapter requires the following packages, used in previous chapters:
import fiona
import geopandas as gpd
import pandas as pd
import shapely
import matplotlib.pyplot as plt
7.2 Introduction
This chapter is about reading and writing geographic data. Geographic data import is essential for geocomputation: real-world applications are impossible without data. Data output is also vital, enabling others to use valuable new or improved datasets resulting from your work. Taken together, these processes of import/output can be referred to as data I/O.
Geographic data I/O is often done with a few lines of code at the beginning and end of projects, and is easily treated as a simple one-step process. However, mistakes made at the outset of projects (e.g., using an out-of-date or in some way faulty dataset) can lead to large problems later down the line, so it is worth putting considerable time into identifying which datasets are available, where they can be found and how to retrieve them. These topics are covered in Section 7.3, which describes various geoportals, which collectively contain many terabytes of data, and how to use them. To further ease data access, a number of packages for downloading geographic data have been developed, as described in Section 7.4.
There are many geographic file formats, each of which has pros and cons, described in Section 7.6. The process of reading and writing files in these formats efficiently is covered in Section 7.7 and Section 7.8, respectively. The final section, Section 7.9, demonstrates methods for saving visual outputs (maps), in preparation for Chapter 8 on visualization.
7.3 Retrieving open data
7.4 Geographic data packages
7.5 Geographic web services
7.6 File formats
Geographic datasets are usually stored as files or in spatial databases. File formats can either store vector or raster data, while spatial databases such as PostGIS can store both. The large variety of file formats may seem bewildering, but there has been much consolidation and standardization since the beginnings of GIS software in the 1960s when the first widely distributed program (SYMAP) for spatial analysis was created at Harvard University [@coppock_history_1991].
GDAL (which should be pronounced 'goo-dal', with the double 'o' making a reference to object-orientation), the Geospatial Data Abstraction Library, has resolved many issues associated with incompatibility between geographic file formats since its release in 2000. GDAL provides a unified and high-performance interface for reading and writing many raster and vector data formats. Many open and proprietary GIS programs, including GRASS, ArcGIS and QGIS, use GDAL behind their GUIs for doing the legwork of ingesting and spitting out geographic data in appropriate formats.
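Which drivers are actually available depends on the GDAL build your Python packages link against. As a quick check, a minimal sketch (assuming fiona and rasterio are installed) prints the GDAL version that each package uses:
import fiona
import rasterio
print(fiona.__gdal_version__)     # GDAL version used for vector I/O via fiona
print(rasterio.__gdal_version__)  # GDAL version used for raster I/O via rasterio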
GDAL provides access to more than 200 vector and raster data formats. Table 7.1 presents some basic information about selected and often used spatial file formats.
Name | Extension | Info | Type | Model |
---|---|---|---|---|
ESRI Shapefile | .shp (the main file) | Popular format consisting of at least three files. No support for: files > 2 GB; mixed types; names > 10 chars; cols > 255. | Vector | Partially open |
GeoJSON | .geojson | Extends the JSON exchange format by including a subset of the simple feature representation; mostly used for storing coordinates in longitude and latitude; it is extended by the TopoJSON format. | Vector | Open |
KML | .kml | XML-based format for spatial visualization, developed for use with Google Earth. Zipped KML file forms the KMZ format. | Vector | Open |
GPX | .gpx | XML schema created for exchange of GPS data. | Vector | Open |
FlatGeobuf | .fgb | Single-file format allowing for quick reading and writing of vector data. Has streaming capabilities. | Vector | Open |
GeoTIFF | .tif/.tiff | Popular raster format. A TIFF file containing additional spatial metadata. | Raster | Open |
Arc ASCII | .asc | Text format where the first six lines represent the raster header, followed by the raster cell values arranged in rows and columns. | Raster | Open |
SQLite/SpatiaLite | .sqlite | Standalone relational database; SpatiaLite is the spatial extension of SQLite. | Vector and raster | Open |
ESRI FileGDB | .gdb | Spatial and non-spatial objects created by ArcGIS. Allows: multiple feature classes; topology. Limited support from GDAL. | Vector and raster | Proprietary |
GeoPackage | .gpkg | Lightweight database container based on SQLite allowing an easy and platform-independent exchange of geodata. | Vector and (very limited) raster | Open |
An important development ensuring the standardization and open-sourcing of file formats was the founding of the Open Geospatial Consortium (OGC) in 1994. Beyond defining the simple features data model (see Section 1.2.4), the OGC also coordinates the development of open standards, for example as used in file formats such as KML and GeoPackage. Open file formats of the kind endorsed by the OGC have several advantages over proprietary formats: the standards are published, ensure transparency and open up the possibility for users to further develop and adjust the file formats to their specific needs.
ESRI Shapefile is the most popular vector data exchange format; however, it is not an open format (though its specification is open). It was developed in the early 1990s and has a number of limitations. First of all, it is a multi-file format, which consists of at least three files. It only supports 255 columns, column names are restricted to ten characters and the file size limit is 2 GB. Furthermore, ESRI Shapefile does not support all possible geometry types, for example, it is unable to distinguish between a polygon and a multipolygon. Despite these limitations, a viable alternative had been missing for a long time. In the meantime, GeoPackage emerged, and seems to be a more than suitable replacement candidate for ESRI Shapefile. GeoPackage is a format for exchanging geospatial information and an OGC standard. The GeoPackage standard describes the rules on how to store geospatial information in a tiny SQLite container. Hence, GeoPackage is a lightweight spatial database container, which allows the storage of vector and raster data but also of non-spatial data and extensions. Aside from GeoPackage, there are other geospatial data exchange formats worth checking out (Table 7.1).
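To illustrate that a GeoPackage really is just an SQLite container, the minimal sketch below opens the world.gpkg file used later in this chapter with Python's built-in sqlite3 module and lists the tables it stores; the output mixes GeoPackage metadata tables (such as gpkg_contents) with the actual data layers:
import sqlite3
con = sqlite3.connect('data/world.gpkg')
print(con.execute("SELECT name FROM sqlite_master WHERE type='table';").fetchall())
con.close()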
The GeoTIFF format seems to be the most prominent raster data format. It allows spatial information, such as the CRS definition and the transformation matrix (see Section 1.3.2), to be embedded within a TIFF file. Similar to ESRI Shapefile, this format was first developed in the 1990s, but as an open format. Additionally, GeoTIFF is still being expanded and improved. One of the most significant recent additions to the GeoTIFF format is its variant called COG (Cloud Optimized GeoTIFF). Raster objects saved as COGs can be hosted on HTTP servers, so other people can read only parts of the file without downloading the whole file (see Sections 8.6.2 and 8.7.2).
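A hypothetical sketch of such a partial read with rasterio is shown below; the URL is a placeholder for any valid COG, and only the requested window of band 1 is fetched from the server:
import rasterio
from rasterio.windows import Window
url = 'https://example.com/some_cog.tif'  # hypothetical COG location
with rasterio.open(url) as src:
    block = src.read(1, window=Window(0, 0, 256, 256))  # read just a 256x256 block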
There is also a plethora of other spatial data formats that we do not explain in detail or mention in Table 7.1 due to space constraints. If you need to use other formats, we encourage you to read the GDAL documentation about vector and raster drivers. Additionally, some spatial data formats can store data models (types) other than vector or raster. These include the LAS and LAZ formats for storing lidar point clouds, and NetCDF and HDF for storing multidimensional arrays.
Finally, spatial data is also often stored using tabular (non-spatial) text formats, including CSV files or Excel spreadsheets. This can be convenient to share spatial datasets with people who, or software that, struggle with spatial data formats.
7.7 Data input (I)
Executing commands such as geopandas.read_file (the main function we use for loading vector data) or rasterio.open followed by .read (the main functions used for loading raster data) silently sets off a chain of events that reads data from files. Moreover, there are many Python packages containing a wide range of geographic data or providing simple access to different data sources. All of them load the data into the Python environment or, more precisely, assign objects to your workspace, stored in RAM and accessible within the Python session.
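For orientation, the two basic patterns look roughly as follows (the file paths are placeholders; the vector workflow is covered in detail in the next section):
import rasterio
gdf = gpd.read_file('data/some_layer.gpkg')  # vector file -> GeoDataFrame
src = rasterio.open('data/some_raster.tif')  # raster file -> rasterio dataset
arr = src.read()                             # read all bands into a numpy array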
7.7.1 Vector data
Spatial vector data comes in a wide variety of file formats. Most popular representations such as .shp, .geojson, and .gpkg files can be imported directly into Python with the geopandas function read_file, which uses GDAL's vector drivers behind the scenes. geopandas uses GDAL via the fiona package (the default) or the pyogrio package (a recently developed alternative to fiona). After fiona is imported, the command fiona.supported_drivers lists the drivers available to GDAL, including the ability to read (r), append (a), and write (w):
fiona.supported_drivers
{'DXF': 'rw',
'CSV': 'raw',
'OpenFileGDB': 'raw',
'ESRIJSON': 'r',
'ESRI Shapefile': 'raw',
'FlatGeobuf': 'raw',
'GeoJSON': 'raw',
'GeoJSONSeq': 'raw',
'GPKG': 'raw',
'GML': 'rw',
'OGR_GMT': 'rw',
'GPX': 'rw',
'MapInfo File': 'raw',
'DGN': 'raw',
'S57': 'r',
'SQLite': 'raw',
'TopoJSON': 'r'}
Other, less common, drivers can be 'activated' by manually supplementing fiona.supported_drivers.
The first argument of gpd.read_file is filename, which is typically a string, but can also be a file connection. The content of the string can vary between drivers; in most cases, as with the ESRI Shapefile (.shp) or the GeoPackage format (.gpkg), filename would be a path to the file. gpd.read_file guesses the driver based on the file extension, as illustrated for a .gpkg file below:
gpd.read_file('data/world.gpkg')
 | iso_a2 | name_long | continent | ... | lifeExp | gdpPercap | geometry |
---|---|---|---|---|---|---|---|
0 | FJ | Fiji | Oceania | ... | 69.960000 | 8222.253784 | MULTIPOLYGON (((-180.00000 -16.... |
1 | TZ | Tanzania | Africa | ... | 64.163000 | 2402.099404 | MULTIPOLYGON (((33.90371 -0.950... |
2 | EH | Western Sahara | Africa | ... | NaN | NaN | MULTIPOLYGON (((-8.66559 27.656... |
... | ... | ... | ... | ... | ... | ... | ... |
174 | XK | Kosovo | Europe | ... | 71.097561 | 8698.291559 | MULTIPOLYGON (((20.59025 41.855... |
175 | TT | Trinidad and Tobago | North America | ... | 70.426000 | 31181.821196 | MULTIPOLYGON (((-61.68000 10.76... |
176 | SS | South Sudan | Africa | ... | 55.817000 | 1935.879400 | MULTIPOLYGON (((30.83385 3.5091... |
177 rows × 11 columns
For some drivers, such as a File Geodatabase (OpenFileGDB), filename could be provided as a folder name. A GeoJSON string can also be read directly, rather than from a file:
'{"type":"Point","coordinates":[34.838848,31.296301]}') gpd.read_file(
 | geometry |
---|---|
0 | POINT (34.83885 31.29630) |
Alternatively, the gpd.read_postgis function can be used to read a vector layer from a PostGIS database.
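A minimal sketch of this could use sqlalchemy to create the connection; the connection string, table name, and geometry column below are hypothetical and must match your database:
from sqlalchemy import create_engine
engine = create_engine('postgresql://user:password@localhost:5432/gisdb')  # hypothetical credentials
layer = gpd.read_postgis('SELECT * FROM some_table', con=engine, geom_col='geom')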
Some vector formats, such as GeoPackage, can store multiple data layers. By default, gpd.read_file automatically reads the first layer of the file specified in filename. However, using the layer argument you can specify any other layer.
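For example, a minimal sketch (the layer name here is hypothetical) is to list the layers of a GeoPackage with fiona.listlayers, which we also use later in this section, and then pass one of the returned names to the layer argument:
print(fiona.listlayers('data/world.gpkg'))            # names of the layers in the file
gpd.read_file('data/world.gpkg', layer='some_layer')  # read a specific layer by name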
The gpd.read_file function also allows for reading just parts of the file into RAM, with two possible mechanisms. The first one is related to the where argument, which allows specifying what part of the data to read using an SQL WHERE expression. The example below extracts data for Tanzania only (Figure 7.1), by specifying that we want all rows for which name_long equals "Tanzania":
tanzania = gpd.read_file('data/world.gpkg', where='name_long="Tanzania"')
tanzania
 | iso_a2 | name_long | continent | ... | lifeExp | gdpPercap | geometry |
---|---|---|---|---|---|---|---|
0 | TZ | Tanzania | Africa | ... | 64.163 | 2402.099404 | MULTIPOLYGON (((33.90371 -0.950... |
1 rows × 11 columns
If you do not know the names of the available columns, a good approach is to just read one row of the data using the rows argument, which can be used to read the first N rows, then use the .columns property to examine the column names:
gpd.read_file('data/world.gpkg', rows=1).columns
Index(['iso_a2', 'name_long', 'continent', 'region_un', 'subregion', 'type',
'area_km2', 'pop', 'lifeExp', 'gdpPercap', 'geometry'],
dtype='object')
The second mechanism uses the mask argument to filter data based on intersection with an existing geometry. This argument expects a geometry (GeoDataFrame, GeoSeries, or shapely geometry) representing the area where we want to extract the data. Let's try it using a small example: we want to read polygons from our file that intersect with a 50,000 m buffer around Tanzania's borders. To do it, we need to (a) transform the geometry to a projected CRS (such as EPSG:32736), (b) prepare our 'filter' by creating the buffer (Section 4.3.3), and (c) transform back to the original CRS to be used as a mask:
tanzania_buf = tanzania.to_crs(32736).buffer(50000).to_crs(4326)
tanzania_buf.iloc[0]
Now, we can apply this 'filter' using the mask argument.
tanzania_neigh = gpd.read_file('data/world.gpkg', mask=tanzania_buf)
tanzania_neigh
 | iso_a2 | name_long | continent | ... | lifeExp | gdpPercap | geometry |
---|---|---|---|---|---|---|---|
0 | MZ | Mozambique | Africa | ... | 57.099 | 1079.823866 | MULTIPOLYGON (((34.55999 -11.52... |
1 | ZM | Zambia | Africa | ... | 60.775 | 3632.503753 | MULTIPOLYGON (((30.74001 -8.340... |
2 | MW | Malawi | Africa | ... | 61.932 | 1090.367208 | MULTIPOLYGON (((32.75938 -9.230... |
... | ... | ... | ... | ... | ... | ... | ... |
6 | BI | Burundi | Africa | ... | 56.688 | 803.172837 | MULTIPOLYGON (((30.46967 -2.413... |
7 | UG | Uganda | Africa | ... | 59.224 | 1637.275081 | MULTIPOLYGON (((33.90371 -0.950... |
8 | RW | Rwanda | Africa | ... | 66.188 | 1629.868866 | MULTIPOLYGON (((30.41910 -1.134... |
9 rows × 11 columns
Our result, shown in Figure 7.1, contains Tanzania and every country within its 50,000 m buffer. Note that the last two expressions are used to add text labels with the name_long of each country, placed at the country centroid:
fig, axes = plt.subplots(ncols=2, figsize=(9,5))
tanzania.plot(ax=axes[0], color='lightgrey', edgecolor='grey')
tanzania_neigh.plot(ax=axes[1], color='lightgrey', edgecolor='grey')
tanzania_buf.plot(ax=axes[1], color='none', edgecolor='red')
axes[0].set_title('where')
axes[1].set_title('mask')
tanzania.apply(lambda x: axes[0].annotate(text=x['name_long'], xy=x.geometry.centroid.coords[0], ha='center'), axis=1)
tanzania_neigh.apply(lambda x: axes[1].annotate(text=x['name_long'], xy=x.geometry.centroid.coords[0], ha='center'), axis=1);
Figure 7.1: Reading a subset of data/world.gpkg using a where query (left) and a mask (right)
Often we need to read CSV files (or other tabular formats) which have x and y coordinate columns, and turn them into a GeoDataFrame with point geometries. To do that, we can import the file using pandas (e.g., pd.read_csv or pd.read_excel), then go from DataFrame to GeoDataFrame using the gpd.points_from_xy function, as shown earlier in the book (see Section 1.2.6 and Section 3.3.4). For example, the table cycle_hire_xy.csv, where the coordinates are stored in the X and Y columns in EPSG:4326, can be imported, converted to a GeoDataFrame, and plotted, as follows:
cycle_hire = pd.read_csv('data/cycle_hire_xy.csv')
geom = gpd.points_from_xy(cycle_hire['X'], cycle_hire['Y'], crs=4326)
geom = gpd.GeoSeries(geom)
cycle_hire = gpd.GeoDataFrame(data=cycle_hire, geometry=geom)
cycle_hire.plot();
Instead of columns describing 'XY' coordinates, a single column can also contain the geometry information. Well-known text (WKT), well-known binary (WKB), and the GeoJSON formats are examples of this. For instance, the world_wkt.csv file has a column named WKT representing polygons of the world's countries. To import and convert it to a GeoDataFrame, we can apply the shapely.wkt.loads function (Section 1.2.5) on the WKT strings, to convert them into shapely geometries:
world_wkt = pd.read_csv('data/world_wkt.csv')
world_wkt['geometry'] = world_wkt['WKT'].apply(shapely.wkt.loads)
world_wkt = gpd.GeoDataFrame(world_wkt)
world_wkt.plot();
Not all of the supported vector file formats store information about their coordinate reference system. In these situations, it is possible to add the missing information using the .set_crs method. Please refer also to Section 6.4 for more information.
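For example, the world_wkt layer created above carries no CRS; assuming its coordinates are longitude/latitude (as they are for the world dataset used in this chapter), a minimal sketch to assign EPSG:4326 is:
world_wkt = world_wkt.set_crs(4326)  # assigns (does not reproject) the CRS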
As a final example, we will show how geopandas also reads KML files. A KML file stores geographic information in XML format, a data format for the creation of web pages and the transfer of data in an application-independent way (Nolan and Lang 2014). Here, we access a KML file from the web. First, we need to 'activate' the KML driver, which isn't available by default (see above):
fiona.supported_drivers['KML'] = 'r'
This file contains more than one layer. To list the available layers, we can use the fiona.listlayers function:
u = 'https://developers.google.com/kml/documentation/KML_Samples.kml'
fiona.listlayers(u)
['Placemarks',
'Highlighted Icon',
'Paths',
'Google Campus',
'Extruded Polygon',
'Absolute and Relative']
Finally, we can choose the first layer, Placemarks, and read it using gpd.read_file with an additional layer argument:
gpd.read_file(u, layer='Placemarks')
 | Name | Description | geometry |
---|---|---|---|
0 | Simple placemark | Attached to the ground. Intelli... | POINT Z (-122.08220 37.42229 0.... |
1 | Floating placemark | Floats a defined distance above... | POINT Z (-122.08407 37.42200 50... |
2 | Extruded placemark | Tethered to the ground by a cus... | POINT Z (-122.08577 37.42157 50... |
7.7.2 Raster data
…
7.8 Data output (O)
Writing geographic data allows you to convert from one format to another and to save newly created objects for permanent storage. Depending on the data type (vector or raster), object class (e.g., GeoDataFrame), and type and amount of stored information (e.g., object size, range of values), it is important to know how to store spatial files in the most efficient way. The next two sections will demonstrate how to do this.
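As a brief preview, a minimal sketch of vector output (the output file name is arbitrary) uses the .to_file method of GeoDataFrame with an explicit driver:
world = gpd.read_file('data/world.gpkg')
world.to_file('world_copy.gpkg', driver='GPKG')  # write the layer to a new GeoPackage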
7.8.1 Vector data
…
7.8.2 Raster data
…