dave_core
- class dave_core.Converter(grid_data, infilename: str = '', basefilepath: str = '')[source]
- Converter class
- defines:
strategy - the strategy interface
basefilepath - basic file path for output files
infilename - input JSON file with DAVE structure data
- Example Usage:
Converter(infilename="myDaveFile.json", basefilepath="/tmp")
- basefilepath = ''
- class dave_core.DAVEJSONDecoder(**kwargs)[source]
- __init__(**kwargs)[source]
object_hook, if specified, will be called with the result of every JSON object decoded and its return value will be used in place of the given dict. This can be used to provide custom deserializations (e.g. to support JSON-RPC class hinting).
object_pairs_hook, if specified, will be called with the result of every JSON object decoded with an ordered list of pairs. The return value of object_pairs_hook will be used instead of the dict. This feature can be used to implement custom decoders. If object_hook is also defined, the object_pairs_hook takes priority.
parse_float, if specified, will be called with the string of every JSON float to be decoded. By default this is equivalent to float(num_str). This can be used to use another datatype or parser for JSON floats (e.g. decimal.Decimal).
parse_int, if specified, will be called with the string of every JSON int to be decoded. By default this is equivalent to int(num_str). This can be used to use another datatype or parser for JSON integers (e.g. float).
parse_constant, if specified, will be called with one of the following strings: -Infinity, Infinity, NaN. This can be used to raise an exception if invalid JSON numbers are encountered.
If strict is false (true is the default), then control characters will be allowed inside strings. Control characters in this context are those with character codes in the 0-31 range, including '\t' (tab), '\n', '\r' and '\0'.
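Example (a minimal sketch; since the decoder follows the standard json.JSONDecoder interface, it can be passed to json.loads via the cls argument; the sample string and its keys are purely illustrative):
import json
from dave_core import DAVEJSONDecoder
json_string = '{"name": "line_1", "length_km": 1.5}'  # illustrative payload
data = json.loads(json_string, cls=DAVEJSONDecoder)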
- class dave_core.DAVEJSONEncoder(isinstance_func=<function isinstance_partial>, **kwargs)[source]
- __init__(isinstance_func=<function isinstance_partial>, **kwargs)[source]
Constructor for JSONEncoder, with sensible defaults.
If skipkeys is false, then it is a TypeError to attempt encoding of keys that are not str, int, float or None. If skipkeys is True, such items are simply skipped.
If ensure_ascii is true, the output is guaranteed to be str objects with all incoming non-ASCII characters escaped. If ensure_ascii is false, the output can contain non-ASCII characters.
If check_circular is true, then lists, dicts, and custom encoded objects will be checked for circular references during encoding to prevent an infinite recursion (which would cause a RecursionError). Otherwise, no such check takes place.
If allow_nan is true, then NaN, Infinity, and -Infinity will be encoded as such. This behavior is not JSON specification compliant, but is consistent with most JavaScript based encoders and decoders. Otherwise, it will be a ValueError to encode such floats.
If sort_keys is true, then the output of dictionaries will be sorted by key; this is useful for regression tests to ensure that JSON serializations can be compared on a day-to-day basis.
If indent is a non-negative integer, then JSON array elements and object members will be pretty-printed with that indent level. An indent level of 0 will only insert newlines. None is the most compact representation.
If specified, separators should be an (item_separator, key_separator) tuple. The default is (', ', ': ') if indent is None and (',', ': ') otherwise. To get the most compact JSON representation, you should specify (',', ':') to eliminate whitespace.
If specified, default is a function that gets called for objects that can't otherwise be serialized. It should return a JSON encodable version of the object or raise a TypeError.
- default(o)[source]
Implement this method in a subclass such that it returns a serializable object for o, or calls the base implementation (to raise a TypeError).
For example, to support arbitrary iterators, you could implement default like this:
def default(self, o):
    try:
        iterable = iter(o)
    except TypeError:
        pass
    else:
        return list(iterable)
    # Let the base class default method raise the TypeError
    return super().default(o)
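Example (a minimal sketch mirroring the decoder; the payload is purely illustrative):
import json
from dave_core import DAVEJSONEncoder
payload = {"name": "line_1", "length_km": 1.5}  # illustrative payload
json_string = json.dumps(payload, cls=DAVEJSONEncoder, indent=2)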
- class dave_core.Element(element_type='None', name='')[source]
This class defines a single object of a net
- defines:
attributes: dictionary of the object's properties as string key:value pairs (key: the attribute's name, value: the attribute's value)
type: type of the attribute: n(ode), p(ipe), v(alve) or c(ompressor)
name: name of the object
- example usage: Element(element_type="p", name="Pipe_1")
- class dave_core.Elements(element_type=None, data=None)[source]
This class defines a dictionary of objects of a net as Element objects of a single type
- ignoreList = ('param', 'uncertainty', 'method', 'dave_name')
- insert(element_type, data)[source]
This function fills the dictionary with data elements from DAVE; defines:
n_ele: number of elements
type: short form for the type of the Elements: n(ode), p(ipe), v(alve) or c(ompressor)
- INPUT:
element_type (str) - short form for the type of the elements
data (dict) - all information about the grid elements (e.g. pandas.core.series.Series)
- class dave_core.FromSerializableRegistry(obj, d, dave_hook_funct)[source]
- class_name = ''
- from_serializable
- module_name = ''
- class dave_core.JSONSerializableClass(**kwargs)[source]
- add_to_net(net, element, index=None, column='object', overwrite=False, preserve_dtypes=False, fill_dict=None)[source]
- json_excludes: ClassVar[str] = ['self', '__class__']
- dave_core.add_geodata(net, buffer=10, crs='epsg:4326', save_data=True)[source]
This function extends a pandapower/pandapipes net with geodata from DAVE
- INPUT:
net (pandapower net) - A pandapower network
dave_user (str) - User name of a DAVE Account
dave_password (str) - Password of a DAVE Account
- OPTIONAL:
- buffer (float) - Buffer around the considered network elements
crs (str, default: ‘epsg:4326’) - Definition of the network coordinate reference system
save_data (boolean, default True) - if true, the resulting data will be stored in a local folder
- OUTPUT:
net (pandapower/pandapipes net) - pandapower net extended with geodata
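Example (a minimal sketch, assuming an existing pandapower net and reachable DAVE geodata; the sample network from pandapower is used only for illustration):
import pandapower.networks as pn
from dave_core import add_geodata
net = pn.mv_oberrhein()  # any pandapower net with geodata works here
net = add_geodata(net, buffer=10, crs="epsg:4326", save_data=False)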
- dave_core.add_voltage_level(plant)[source]
This function adds the voltage level to the conventional power plants
- dave_core.adress_to_coords(adress, geolocator=None)[source]
This function requests geocoordinates for a given address.
- INPUT:
- adress (string) - format: street_name house_number postal_code city
example: 'Königstor 59 34119 Kassel'
- OUTPUT:
geocoordinates (tuple) - geocoordinates for the address in the format (longitude, latitude)
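Example (a minimal sketch based on the address format above; requires a reachable geocoding service):
from dave_core import adress_to_coords
coords = adress_to_coords("Königstor 59 34119 Kassel")
print(coords)  # (longitude, latitude)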
- dave_core.aggregate_plants_con(grid_data, plants_aggr, aggregate_name=None)[source]
This function aggregates conventional power plants with the same energy source which are connected to the same trafo
- INPUT:
grid_data (dict) - all information about the target area
plants_aggr (DataFrame) - all conventional power plants that should be aggregated after Voronoi analysis
- OPTIONAL:
aggregate_name (string) - the original voltage level of the aggregated power plants
- dave_core.aggregate_plants_ren(grid_data, plants_aggr, aggregate_name=None)[source]
This function aggregates renewable power plants with the same energy source which are connected to the same trafo
- INPUT:
grid_data (dict) - all information about the target area
plants_aggr (DataFrame) - all renewable power plants that should be aggregated after Voronoi analysis
- OPTIONAL:
aggregate_name (string) - the original voltage level of the aggregated power plants
- dave_core.archiv_inventory(grid_data, read_only=False)[source]
This function checks whether the DAVE archive already contains the dataset. Otherwise the dataset name and, if necessary, the inventory list are created
- dave_core.change_empty_gpd(grid_data)[source]
This function replaces all empty geopandas objects with empty pandas objects in a DAVE dataset
- INPUT:
grid_data (attr Dict) - DAVE Dataset with empty geopandas objects
- Output:
dataset (attr Dict) - DAVE Dataset with empty pandas objects
- dave_core.change_voltage_con(plant)[source]
This function changes the voltage parameter of the conventional power plants
- dave_core.change_voltage_ren(plant)[source]
This function changes the voltage level of the renewable power plants
- dave_core.clean_disconnected_elements_gas(grid_data, min_number_nodes)[source]
This function cleans up disconnected elements for the different gas grid levels
- dave_core.clean_disconnected_elements_power(grid_data, min_number_nodes)[source]
This function cleans up disconnected elements for the different power grid levels
- dave_core.clean_up_data(grid_data, min_number_nodes=4)[source]
This function cleans up the DAVE dataset, fixing different kinds of failures
- dave_core.clean_wrong_lines(grid_data)[source]
This function drops power lines which have wrong characteristics
- dave_core.clean_wrong_piplines(grid_data)[source]
This function drops gas pipelines which have wrong characteristics
- dave_core.connect_grid_nodes(road_course, road_points, start_node, end_node)[source]
This function builds lines to connect grid nodes with each other along road courses
- dave_core.create_compressors(grid_data, scigrid_compressors)[source]
This function adds the data for gas compressors
- dave_core.create_conventional_powerplants(grid_data)[source]
This function collects the generators based on ego_conventional_powerplant from OEP. Furthermore, it assigns a grid node to each generator and aggregates them depending on the situation
- dave_core.create_ehv_topology(grid_data)[source]
This function creates a dictionary with all relevant parameters for the extra high voltage level
- INPUT:
grid_data (dict) - all information about the grid area
- OUTPUT:
Writes data in the DaVe dataset
- dave_core.create_empty_dataset()[source]
This function initializes the DAVE data structure and creates all possible data categories
- OUTPUT:
grid_data (attrdict) - dave attrdict with empty tables
Example
grid_data = create_empty_dataset()
- dave_core.create_gaslib(grid_data, output_folder, save_data=True)[source]
This function creates a network in gaslib format based on a DAVE dataset
- INPUT:
grid_data (attrdict) - calculated grid data from DAVE
output_folder (str) - path to the location where the results will be saved
- OPTIONAL:
save_data (boolean, default True) - if true, the resulting data will be stored in a local folder
- dave_core.create_grid(postalcode=None, town_name=None, federal_state=None, nuts_region=None, own_area=None, geodata=None, power_levels=None, gas_levels=None, convert_power=None, convert_gas=None, opt_model=False, combine_areas=None, transformers=True, renewable_powerplants=True, conventional_powerplants=True, loads=True, compressors=True, sinks=True, sources=True, storages_gas=True, valves=True, output_folder=PosixPath('/home/docs/Desktop/DAVE_output'), filename='dave_dataset', output_format='json', save_data=True)[source]
This is the main function of DAVE. It automatically generates grid models for power and gas networks in the defined target area
- INPUT:
One of these parameters must be set:
postalcode (List of strings) - numbers of the target postalcode areas. ['ALL'] can also be chosen for all postalcode areas in Germany
town_name (List of strings) - names of the target towns. ['ALL'] can also be chosen for all cities in Germany
federal_state (List of strings) - names of the target federal states. ['ALL'] can also be chosen for all federal states in Germany
nuts_region (tuple(List of strings, string)) - this tuple first includes a list of the target NUTS region codes (independent of the NUTS level). ['ALL'] can also be chosen for all NUTS regions in Europe. The second tuple parameter defines the NUTS year as a string. The year options are 2013, 2016 and 2021.
own_area (string / Polygon) - The first option for this parameter is to hand over a string, which can be the absolute path to a geographical file (.shp or .geojson) which includes the own target area (e.g. "C:/Users/name/test/test.shp") or a JSON string with the area information. The second option is to hand over a shapely Polygon which defines the area
- OPTIONAL:
geodata (list, default None) - this parameter defines which geodata should be considered. Options: 'roads', 'buildings', 'landuse', 'railways', 'waterways', []. One or multiple geoobjects can be chosen, or 'ALL'
power_levels (list, default None) - this parameter defines which power levels should be considered. Options: 'ehv', 'hv', 'mv', 'lv', []. One or multiple levels can be chosen, or 'ALL'
gas_levels (list, default None) - this parameter defines which gas levels should be considered. Options: 'hp' and []. One or multiple levels can be chosen, or 'ALL'
convert_power (list, default None) - this parameter defines into which formats the power grid data should be converted. Available formats are currently: 'pandapower'
convert_gas (list, default None) - this parameter defines into which formats the gas grid data should be converted. Available formats are currently: 'pandapipes', 'gaslib', 'mynts'
opt_model (boolean, default False) - if this value is true, DAVE will use an optimal power flow calculation to avoid boundary violations. Currently an experimental feature and only available for pandapower
combine_areas (list, default None) - this parameter defines on which power levels unconnected areas should be combined. Options: 'EHV', 'HV', 'MV', 'LV', []
transformers (boolean, default True) - if true, transformers are added to the grid model
renewable_powerplants (boolean, default True) - if true, renewable power plants are added to the grid model
conventional_powerplants (boolean, default True) - if true, conventional power plants are added to the grid model
loads (boolean, default True) - if true, loads are added to the grid model
compressors (boolean, default True) - if true, compressors are added to the grid model
sinks (boolean, default True) - if true, gas sinks are added to the grid model
sources (boolean, default True) - if true, gas sources are added to the grid model
storages_gas (boolean, default True) - if true, gas storages are added to the grid model
valves (boolean, default True) - if true, valves are added to the grid model
output_folder (string, default user desktop) - absolute path to the folder where the generated data should be saved. If no folder exists for this path, DAVE will create one
output_format (string, default 'json') - this parameter defines the output format. Available formats are currently: 'json', 'hdf' and 'gpkg'
filename (string, default 'dave_dataset') - this parameter defines the name of the output file
save_data (boolean, default True) - if true, the resulting data will be stored in a local folder
- OUTPUT:
grid_data (attrdict) - grid_data as an attrdict in DAVE structure
net_power
net_pipes
Example
from dave.create import create_grid
- grid_data = create_grid(town_name=['Kassel', 'Baunatal'], power_levels=['hv', 'mv'],
gas_levels=['hp'])
- dave_core.create_hp_topology(grid_data)[source]
This function creates a dictionary with all relevant parameters for the high pressure level
- INPUT:
grid_data (dict) - all information about the grid area
- OUTPUT:
Writes data in the DaVe dataset
- dave_core.create_hv_mv_substations(grid_data)[source]
This function requests data for the hv/mv substations if they are not already included in the grid data
- dave_core.create_hv_topology(grid_data)[source]
This function creates a dictionary with all relevant parameters for the high voltage level
- INPUT:
grid_data (dict) - all information about the grid area
- OUTPUT:
Writes data in the DaVe dataset
- dave_core.create_interim_area(areas)[source]
This function creates an interim area to combine unconnected areas.
- INPUT:
areas (GeoDataFrame) - all considered grid areas
- OUTPUT:
areas (GeoDataFrame) - all considered grid areas extended with interim areas
- dave_core.create_loads(grid_data)[source]
This function creates loads from OSM landuse polygons in the target area and assigns them to a suitable node on the considered voltage level via Voronoi analysis
- dave_core.create_lv_topology(grid_data)[source]
This function creates a dictionary with all relevant geographical information for the target area
- INPUT:
grid_data (attrdict) - all information about the grid
- OUTPUT:
Writes data in the DaVe dataset
- dave_core.create_mv_lv_substations(grid_data)[source]
This function requests data for the mv/lv substations if they are not already included in the grid data
- dave_core.create_mv_topology(grid_data)[source]
This function creates a dictionary with all relevant parameters for the medium voltage level
- INPUT:
grid_data (dict) - all information about the target area
- OUTPUT:
Writes data in the DaVe dataset
- dave_core.create_mynts(grid_data, output_folder, idx_ref='dave_name')[source]
This function creates a network in MYNTS format based on a DAVE dataset
- INPUT:
grid_data (attrdict) - calculated grid data from DAVE
output_folder (str) - path to the location where the results will be saved
- OPTIONAL:
idx_ref (str, default 'dave_name') - defines the parameter which should be used as the reference for setting the indices
- dave_core.create_pandapipes(grid_data, save_data=True, output_folder=None, fluid=None, idx_ref='dave_name')[source]
This function creates a pandapipes network based on the DAVE dataset
- INPUT:
grid_data (attrdict) - calculated grid data from dave
- OPTIONAL:
save_data (boolean, default True) - if true, the resulting data will be stored in a local folder
output_folder (str, default None) - path to the location where the results will be saved
idx_ref (str, default 'dave_name') - defines the parameter which should be used as the reference for setting the indices
fluid (str, default None) - a fluid that can be added to the net from the start. A fluid is required for pipeflow calculations. Existing fluids in pandapipes are "hgas", "lgas", "hydrogen", "methane", "water", "air"
- OUTPUT:
net (attrdict) - pandapipes attrdict with grid data
- dave_core.create_pandapower(grid_data, opt_model, output_folder, save_data=True)[source]
This function creates a pandapower network based on the DAVE dataset
- INPUT:
grid_data (attrdict) - calculated grid data from dave
opt_model (bool) - optimize model during model processing
output_folder (str) - path to the location where the results will be saved
save_data (boolean, default True) - if true, the resulting data will be stored in a local folder
- OUTPUT:
net (attrdict) - pandapower attrdict with grid data
- dave_core.create_power_plant_lines(grid_data)[source]
This function checks the distance between a power plant and the associated grid node. If the distance is greater than 50 meters, an auxiliary node for the power plant and a connection line to the original node are created.
This function is not for aggregated power plants, because these are close to the connection point anyway
- dave_core.create_renewable_powerplants(grid_data)[source]
This function collects the generators based on ego_renewable_powerplant from OEP and, where possible, assigns them their exact location by address, if available. Furthermore, it assigns a grid node to each generator and aggregates them depending on the situation
- dave_core.create_sinks(grid_data, scigrid_consumers)[source]
This function adds the data for gas consumers
- dave_core.create_sources(grid_data, scigrid_productions)[source]
This function adds the data for gas sources
- dave_core.create_tqdm(desc, bar_type='main_bar')[source]
This function creates a tqdm progress bar object
- INPUT:
desc (str) - name of the task (max. 33 characters)
- OPTIONAL:
bar_type (str, default "main_bar") - which style of progress bar should be used. Options: "main_bar", "sub_bar"
- OUTPUT:
tqdm_object (tqdm object) - tqdm object suitable for use in DAVE code
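Example (a minimal sketch, assuming the returned object behaves like a regular tqdm bar):
from dave_core import create_tqdm
pbar = create_tqdm(desc="create loads", bar_type="main_bar")
pbar.update(50)  # advance the bar by 50 units
pbar.close()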
- dave_core.create_transformers(grid_data)[source]
This function collects the transformers. EHV/EHV and EHV/HV trafos are based on ego_pf_hv_transformer from OEP. HV/MV trafos are based on ego_dp_hvmv_substation from OEP. MV/LV trafos are based on ego_dp_mvlv_substation from OEP
- dave_core.dave_hook(d, deserialize_pandas=True, empty_dict_like_object=None, registry_class=<class 'dave_core.io.io_utils.FromSerializableRegistry'>)[source]
- dave_core.df_lists_to_str(data_df)[source]
This function checks whether dataframes contain any lists and, in that case, converts them to strings. This is necessary for converting into geopackage format.
- INPUT:
data_df (DataFrame) - Data which includes lists
- Output:
data_df (DataFrame) - Data without including lists
- dave_core.disconnected_nodes(nodes, edges, min_number_nodes)[source]
Converts nodes and edges to a networkX graph to determine disconnected nodes
- INPUT:
nodes (DataFrame) - Dataset of nodes with DaVe name
edges (DataFrame) - Dataset of edges (lines, pipelines) with DaVe name
- OUTPUT:
- nodes (set) - all dave names for nodes which are not connected to a grid with a minimum number of nodes
- dave_core.format_input_levels(power_levels, gas_levels)[source]
This function formats the power and gas levels to get the right format for the dave processing
- dave_core.from_archiv(dataset_name)[source]
This function reads a DAVE dataset from the DAVE internal archive
- dave_core.from_hdf(file_path)[source]
This function reads a DAVE dataset given in HDF5 format from a user-given path
- INPUT:
file_path (str) - absolute path where the HDF5 file is stored.
- OUTPUT:
grid_data (attr Dict) - DAVE Dataset
Example
grid_data = from_hdf(file_path)
- dave_core.from_json(file_path, encryption_key=None)[source]
Load a dave dataset from a JSON file.
- INPUT:
file_path (str) - absolute path where the JSON file is stored
encryption_key (string, default None) - if given, the stored DAVE dataset is treated as an encrypted JSON string
- OUTPUT:
grid_data (attr Dict) - the loaded DAVE dataset
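Example (a minimal sketch; the file path is a placeholder):
from dave_core import from_json
grid_data = from_json("/path/to/dave_dataset.json")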
- dave_core.from_json_string(json_string, encryption_key=None)[source]
Load a dave dataset from a JSON string.
- INPUT:
json_string (str) - JSON string
encryption_key (string, default None) - if given, the DAVE dataset is treated as an encrypted JSON string
- OUTPUT:
grid_data (attr Dict) - the loaded DAVE dataset
- dave_core.from_osm(grid_data, pbar, roads, buildings, landuse, railways, waterways, target_geom, progress_step=None)[source]
This function searches for data on OpenStreetMap (OSM) and filters the relevant parameters for grid modeling
target_geom - geometry of the considered target
- dave_core.gas_components(grid_data, compressor, sink, source, storage_gas, valve)[source]
This function calls all the functions for creating the gas components in the right order
- dave_core.gaslib_pipe_clustering()[source]
This function clusters the gaslib pipe data and calculates the average for the parameters. The pipesUsedForData parameter describes the number of pipes within the cluster
- dave_core.geo_info_needs(power_levels, gas_levels, loads)[source]
This function decides which geographical information is necessary for the different grid levels
- dave_core.get_data_path(filename=None, dirname=None)[source]
This function returns the full os path for a given directory (and filename)
- dave_core.get_grid_area(net, buffer=10, crs='epsg:4326', convex_hull=True)[source]
Calculation of the grid area on the basis of a pandapower/pandapipes model and the inclusion of a buffer.
The CRS is temporarily projected to EPSG:3035 so the buffer can be added with meters as the unit
- Input:
net (pandapower/pandapipes net) - an energy grid in pandapower or pandapipes format
buffer (float, default 10) - buffer around the considered network elements in meters
crs (str, default 'epsg:4326') - definition of the network coordinate reference system
convex_hull (boolean, default True) - if true, the convex hull is calculated for the given lines instead of only using a buffer around the lines
- OUTPUT:
grid_area (Shapely polygon) - Polygon which defines the grid area for a given network
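Example (a minimal sketch, assuming an existing pandapower net; the sample network is used only for illustration):
import pandapower.networks as pn
from dave_core import get_grid_area
net = pn.mv_oberrhein()
grid_area = get_grid_area(net, buffer=10, convex_hull=True)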
- dave_core.get_household_power(consumption_data, household_size)[source]
This function calculates the active and reactive power consumption for a given household size based on the consumption data for a year
- INPUT:
consumption_data (Dict) - consumption data for Germany from the DAVE internal datapool
household_size (int) - size of the household, between 1 and 5 persons
- dave_core.get_osm_data(grid_data, key, border, target_geom)[source]
This function requests data from OSM and filters it
- INPUT:
grid_data (dict) - DAVE data dictionary
key (string) - name of the object type which should be considered
border (geometry) - border for the data consideration
target_geom (geometry) - geometry of the considered target
- dave_core.intersection_with_area(gdf, area, remove_columns=True, only_limit=True)[source]
This function intersects a given geodataframe with an area, taking into account mixed geometry types in both input variables
- INPUT:
gdf (GeoDataFrame) - data to be intersected with an area
area (GeoDataFrame) - considered area
remove_columns (bool, default True) - if True, the area parameters are deleted in the result
only_limit (bool, default True) - if True, only whether the data intersects the area is considered, instead of which part of the area it intersects if the area is split into multiple polygons
- OUTPUT:
gdf_over (GeoDataFrame) - data which intersects with the considered area
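Example (a minimal sketch with hand-made toy geometries):
import geopandas as gpd
from shapely.geometry import Point, Polygon
from dave_core import intersection_with_area
gdf = gpd.GeoDataFrame({"name": ["inside", "outside"]}, geometry=[Point(0.5, 0.5), Point(5.0, 5.0)], crs="epsg:4326")
area = gpd.GeoDataFrame(geometry=[Polygon([(0, 0), (0, 1), (1, 1), (1, 0)])], crs="epsg:4326")
gdf_over = intersection_with_area(gdf, area)  # keeps only the point inside the square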
- dave_core.isinstance_partial(obj, cls)[source]
This function makes sure that for the given classes no default string functions are used, but the registered ones (to_serializable registry)
- dave_core.json_to_pp(file_path)[source]
This function converts a JSON file into a pandapower model, converting geometries stored as strings back to geometry objects
- INPUT:
file_path (str) - absolute path where the pandapower file is stored in JSON format
- OUTPUT:
net (attr Dict) - pandapower network
- dave_core.json_to_ppi(file_path)[source]
This function converts a JSON file into a pandapipes model, converting geometries stored as strings back to geometry objects
- INPUT:
file_path (str) - absolute path where the pandapipes file is stored in JSON format
- OUTPUT:
net (attr Dict) - pandapipes network
- dave_core.line_connections(grid_data)[source]
This function creates the line connections between the building lines (Points on the roads) and the road junctions
- dave_core.multiline_coords(line_geometry)[source]
This function extracts the coordinates from a MultiLineString
- INPUT:
line_geometry (Shapely MultiLineString) - geometry in MultiLineString format
- OUTPUT:
line_coords (list) - coordinates of the given MultiLineString
- dave_core.nearest_road_points(points, roads)[source]
This function finds the shortest way between points (e.g. building centroids) and a road
- INPUT:
points (GeoDataSeries) - series of point geometries
roads (GeoSeries) - relevant road geometries
- OUTPUT:
near_points (GeoSeries) - nearest points on road to given points
- dave_core.oep_request(table, schema=None, where=None, geometry=None, db_update=False)[source]
This function requests data from the Open Energy Platform. The available data can be found at https://openenergy-platform.org/dataedit/schemas
- INPUT:
table (string) - table name of the searched data
- OPTIONAL:
schema (string, default None) - schema name of the searched data. By default, DAVE searches for the schema in the settings file via the table name
where (string, default None) - filter for the table of the searched data
example: 'postcode=34225'
geometry (string, default None) - name of the geometry parameter in the OEP dataset, used to transform it from WKB to WKT
db_update (boolean, default False) - if True, the data is in any case reloaded from the OEP
- OUTPUT:
requested_data (DataFrame) - table of the requested data
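Example (a minimal sketch; the table name is taken from the transformer documentation above, and whether it carries a postcode column is an assumption for illustration):
from dave_core import oep_request
substations = oep_request(table="ego_dp_mvlv_substation", where="postcode=34225")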
- dave_core.osm_request(data_type, area)[source]
This function requests OSM data from the database or directly from OSM
- dave_core.plot_geographical_data(grid_data, save_image=False, output_folder=None)[source]
This function plots the geographical information in the target area
- INPUT:
grid_data (attrdict) - all information about the target area
- OPTIONAL:
save_image (boolean, default False) - if True, the plot is saved as svg in the output folder
output_folder (string, default None) - absolute path to the folder where the plot should be saved
- OUTPUT:
target area plot (svg file) - plot as vector graphic
- dave_core.plot_grid_data(grid_data, save_image=False, output_folder=None)[source]
This function primarily plots the grid data, with the geographical information in the target area greyed out in the background
- INPUT:
grid_data (dict) - all information about the target area and the grid
- OPTIONAL:
save_image (boolean, default False) - if True, the plot is saved as svg in the output folder
output_folder (string, default None) - absolute path to the folder where the plot should be saved
- OUTPUT:
grid data plot (svg file) - plot as vector graphic
- dave_core.plot_grid_data_osm(grid_data, save_image=False, output_folder=None)[source]
This function primarily plots the grid data with an OSM map in the background
- INPUT:
grid_data (dict) - all information about the target area and the grid
- OPTIONAL:
save_image (boolean, default False) - if True, the plot is saved as svg in the output folder
output_folder (string, default None) - absolute path to the folder where the plot should be saved
- OUTPUT:
grid data osm plot (svg file) - plot as vector graphic
- dave_core.plot_land(area, only_area=False)[source]
This function plots the polygon of the target area, which can be used for the background.
- INPUT:
area (GeoDataFrame) - polygon of the target area
- OPTIONAL:
only_area (boolean, default False) - if this parameter is True, only the polygon for the area is plotted
- OUTPUT:
ax - axes of figure
- dave_core.plot_landuse(grid_data, save_image=False, output_folder=None)[source]
This function plots the landuses in the target area
- INPUT:
grid_data (dict) - all information about the target area and the grid
- OPTIONAL:
save_image (boolean, default False) - if True, the plot is saved as svg in the output folder
output_folder (string, default None) - absolute path to the folder where the plot should be saved
- OUTPUT:
landuse plot (svg file) - plot as vector graphic
- dave_core.power_processing(net, opt_model=False, min_vm_pu=0.95, max_vm_pu=1.05, max_line_loading=100, max_trafo_loading=100)[source]
This function runs a diagnosis of the pandapower network and cleans up occurring failures. Furthermore, the grid is adapted so that all boundaries are respected.
- INPUT:
net (attrdict) - pandapower attrdict
- OPTIONAL:
opt_model (bool, default False) - if True, the model is optimized to respect the defined grid limits
min_vm_pu (float, default 0.95) - minimal permissible node voltage in p.u.
max_vm_pu (float, default 1.05) - maximum permissible node voltage in p.u.
max_line_loading (int, default 100) - maximum permissible line loading in %
max_trafo_loading (int, default 100) - maximum permissible transformer loading in %
- OUTPUT:
net (attrdict) - A cleaned up and if necessary optimized pandapower attrdict
- dave_core.pp_to_json(net, file_path)[source]
This function converts a pandapower model into a JSON file, converting geometry objects to strings
- INPUT:
net (attr Dict) - pandapower network
file_path (str) - absolute path where the pandapower file will be stored in JSON format
- dave_core.ppi_to_json(net, file_path)[source]
This function converts a pandapipes model into a JSON file, converting geometry objects to strings
- INPUT:
net (attr Dict) - pandapipes network
file_path (str) - absolute path where the pandapipes file will be stored in JSON format
- dave_core.read_federal_states()[source]
This data includes the name, the length, the area, the population and the geometry for all German federal states
- OUTPUT:
federal_states (GeodataFrame) - all German federal states
Example
import dave.datapool as data
federal = data.read_federal_states()
- dave_core.read_gaslib_cs()[source]
This function reads information about gaslib compressor stations as a reference for the converter
- dave_core.read_household_consumption()[source]
This data includes information about the German average household consumption and the average household sizes per federal state
- OUTPUT:
household consumption data (dict) - information about the German average household consumption
Example
import dave.datapool as data
household_consumption = data.read_household_consumption()
- dave_core.read_json(path_or_buf: FilePath | ReadBuffer[str] | ReadBuffer[bytes], *, orient: str | None = None, typ: Literal['frame', 'series'] = 'frame', dtype: DtypeArg | None = None, convert_axes: bool | None = None, convert_dates: bool | list[str] = True, keep_default_dates: bool = True, precise_float: bool = False, date_unit: str | None = None, encoding: str | None = None, encoding_errors: str | None = 'strict', lines: bool = False, chunksize: int | None = None, compression: CompressionOptions = 'infer', nrows: int | None = None, storage_options: StorageOptions | None = None, dtype_backend: DtypeBackend | lib.NoDefault = <no_default>, engine: JSONEngine = 'ujson') DataFrame | Series | JsonReader [source]
Convert a JSON string to pandas object.
- Parameters:
path_or_buf (a valid JSON str, path object or file-like object) – Any valid string path is acceptable. The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. A local file could be: file://localhost/path/to/table.json. If you want to pass in a path object, pandas accepts any os.PathLike. By file-like object, we refer to objects with a read() method, such as a file handle (e.g. via builtin open function) or StringIO.
Deprecated since version 2.1.0: Passing json literal strings is deprecated.
orient (str, optional) – Indication of expected JSON string format. Compatible JSON strings can be produced by to_json() with a corresponding orient value. The set of possible orients is:
'split' : dict like {index -> [index], columns -> [columns], data -> [values]}
'records' : list like [{column -> value}, ... , {column -> value}]
'index' : dict like {index -> {column -> value}}
'columns' : dict like {column -> {index -> value}}
'values' : just the values array
'table' : dict like {'schema': {schema}, 'data': {data}}
The allowed and default values depend on the value of the typ parameter.
When typ == 'series', allowed orients are {'split', 'records', 'index'}; the default is 'index'. The Series index must be unique for orient 'index'.
When typ == 'frame', allowed orients are {'split', 'records', 'index', 'columns', 'values', 'table'}; the default is 'columns'. The DataFrame index must be unique for orients 'index' and 'columns'. The DataFrame columns must be unique for orients 'index', 'columns', and 'records'.
typ ({‘frame’, ‘series’}, default ‘frame’) – The type of object to recover.
dtype (bool or dict, default None) – If True, infer dtypes; if a dict of column to dtype, then use those; if False, then don’t infer dtypes at all, applies only to the data.
For all orient values except 'table', the default is True.
convert_axes (bool, default None) – Try to convert the axes to the proper dtypes. For all orient values except 'table', the default is True.
convert_dates (bool or list of str, default True) – If True then default datelike columns may be converted (depending on keep_default_dates). If False, no dates will be converted. If a list of column names, then those columns will be converted and default datelike columns may also be converted (depending on keep_default_dates).
keep_default_dates (bool, default True) – If parsing dates (convert_dates is not False), then try to parse the default datelike columns. A column label is datelike if
it ends with '_at',
it ends with '_time',
it begins with 'timestamp',
it is 'modified', or
it is 'date'.
precise_float (bool, default False) – Set to enable usage of higher precision (strtod) function when decoding string to double values. Default (False) is to use fast but less precise builtin functionality.
date_unit (str, default None) – The timestamp unit to detect if converting dates. The default behaviour is to try and detect the correct precision, but if this is not desired then pass one of ‘s’, ‘ms’, ‘us’ or ‘ns’ to force parsing only seconds, milliseconds, microseconds or nanoseconds respectively.
encoding (str, default is ‘utf-8’) – The encoding to use to decode py3 bytes.
encoding_errors (str, optional, default "strict") – How encoding errors are treated.
Added in version 1.3.0.
lines (bool, default False) – Read the file as a json object per line.
chunksize (int, optional) – Return JsonReader object for iteration. See the line-delimited json docs for more information on chunksize. This can only be passed if lines=True. If this is None, the file will be read into memory all at once.
compression (str or dict, default 'infer') – For on-the-fly decompression of on-disk data. If 'infer' and 'path_or_buf' is path-like, then detect compression from the following extensions: '.gz', '.bz2', '.zip', '.xz', '.zst', '.tar', '.tar.gz', '.tar.xz' or '.tar.bz2' (otherwise no compression). If using 'zip' or 'tar', the ZIP file must contain only one data file to be read in. Set to None for no decompression. Can also be a dict with key 'method' set to one of {'zip', 'gzip', 'bz2', 'zstd', 'xz', 'tar'} and other key-value pairs forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, zstandard.ZstdDecompressor, lzma.LZMAFile or tarfile.TarFile, respectively. As an example, the following could be passed for Zstandard decompression using a custom compression dictionary: compression={'method': 'zstd', 'dict_data': my_compression_dict}.
Added in version 1.5.0: Added support for .tar files.
Changed in version 1.4.0: Zstandard support.
nrows (int, optional) – The number of lines from the line-delimited jsonfile that has to be read. This can only be passed if lines=True. If this is None, all the rows will be returned.
storage_options (dict, optional) – Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib.request.Request as header options. For other URLs (e.g. starting with "s3://" and "gcs://") the key-value pairs are forwarded to fsspec.open. Please see fsspec and urllib for more details.
dtype_backend ({'numpy_nullable', 'pyarrow'}, default 'numpy_nullable') – Back-end data type applied to the resultant DataFrame (still experimental). Behaviour is as follows:
"numpy_nullable": returns nullable-dtype-backed DataFrame (default).
"pyarrow": returns pyarrow-backed nullable ArrowDtype DataFrame.
Added in version 2.0.
engine ({"ujson", "pyarrow"}, default "ujson") – Parser engine to use. The "pyarrow" engine is only available when lines=True.
Added in version 2.0.
- Returns:
Series, DataFrame, or pandas.api.typing.JsonReader – A JsonReader is returned when chunksize is not 0 or None. Otherwise, the type returned depends on the value of typ.
See also
DataFrame.to_json
Convert a DataFrame to a JSON string.
Series.to_json
Convert a Series to a JSON string.
json_normalize
Normalize semi-structured JSON data into a flat table.
Notes
Specific to orient='table', if a DataFrame with a literal Index name of index gets written with to_json(), the subsequent read operation will incorrectly set the Index name to None. This is because index is also used by DataFrame.to_json() to denote a missing Index name, and the subsequent read_json() operation cannot distinguish between the two. The same limitation is encountered with a MultiIndex and any names beginning with 'level_'.
Examples
>>> from io import StringIO
>>> df = pd.DataFrame([['a', 'b'], ['c', 'd']],
...                   index=['row 1', 'row 2'],
...                   columns=['col 1', 'col 2'])
Encoding/decoding a Dataframe using 'split' formatted JSON:
>>> df.to_json(orient='split')
'{"columns":["col 1","col 2"],"index":["row 1","row 2"],"data":[["a","b"],["c","d"]]}'
>>> pd.read_json(StringIO(_), orient='split')
      col 1 col 2
row 1     a     b
row 2     c     d
Encoding/decoding a Dataframe using 'index' formatted JSON:
>>> df.to_json(orient='index')
'{"row 1":{"col 1":"a","col 2":"b"},"row 2":{"col 1":"c","col 2":"d"}}'
>>> pd.read_json(StringIO(_), orient='index')
      col 1 col 2
row 1     a     b
row 2     c     d
Encoding/decoding a Dataframe using 'records' formatted JSON. Note that index labels are not preserved with this encoding.
>>> df.to_json(orient='records')
'[{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]'
>>> pd.read_json(StringIO(_), orient='records')
  col 1 col 2
0     a     b
1     c     d
Encoding with Table Schema
>>> df.to_json(orient='table')
'{"schema":{"fields":[{"name":"index","type":"string"},{"name":"col 1","type":"string"},{"name":"col 2","type":"string"}],"primaryKey":["index"],"pandas_version":"1.4.0"},"data":[{"index":"row 1","col 1":"a","col 2":"b"},{"index":"row 2","col 1":"c","col 2":"d"}]}'
The following example uses dtype_backend="numpy_nullable"
>>> data = '''{"index": {"0": 0, "1": 1},
...            "a": {"0": 1, "1": null},
...            "b": {"0": 2.5, "1": 4.5},
...            "c": {"0": true, "1": false},
...            "d": {"0": "a", "1": "b"},
...            "e": {"0": 1577.2, "1": 1577.1}}'''
>>> pd.read_json(StringIO(data), dtype_backend="numpy_nullable")
   index     a    b      c  d       e
0      0     1  2.5   True  a  1577.2
1      1  <NA>  4.5  False  b  1577.1
- dave_core.read_nuts_regions(year)[source]
This data includes the name and the geometry for the nuts regions of the years 2013, 2016 and 2021
- OUTPUT:
nuts_regions (GeodataFrame) - nuts regions of the years 2013, 2016 and 2021
Example
import dave.datapool as data
nuts = data.read_nuts_regions(year=2016)
- dave_core.read_postal()[source]
This data includes the town name, the area, the population and the geometry for all German postalcode areas
- OUTPUT:
postal areas (GeodataFrame) - all German postalcode areas
Example
import dave.datapool as data
postal = data.read_postal()
- dave_core.read_scigridgas_iggielgn()[source]
This data includes information about the European gas grid produced by scigridgas. The dataset is known as "iggielgn".
- OUTPUT:
scigridgas iggielgn data (dict) - information about the European gas grid
Example
import dave.datapool as data
scigridgas_iggielgn = data.read_scigridgas_iggielgn()
- dave_core.read_simone_file(topology_path, scenario_path=None, result_path=None, crs='epsg:4326')[source]
This function reads given SIMONE files in XML format
- INPUT:
topology_path (str) - path to the simone network XML file
- OPTIONAL:
scenario_path (str, default None) - path to the simone scenario file
result_path (str, default None) - path to the simone result file
crs (str, default “epsg:4326”) - coordinate system of the data
- OUTPUT:
data (dict) - dict which contains all data as GeoDataFrames
- dave_core.reduce_network(net, area, cross_border=True, crs='epsg:4326')[source]
Reduce a pandapower/pandapipes network to a smaller area of interest
- Input:
net (pandapower/pandapipes net) - an energy grid in pandapower or pandapipes format
area (shapely Polygon) - polygon of the considered network area
cross_border (bool, default True) - definition of how to deal with lines that go beyond the area border. If True, these lines are considered, as are their associated nodes outside the area border. If False, these lines are deleted and all network elements lie within the area border
crs (str, default 'epsg:4326') - definition of the network coordinate reference system
- OUTPUT:
net (pandapower/pandapipes net) - network reduced to considered area
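Example (a minimal sketch, assuming an existing pandapower/pandapipes net; the polygon coordinates are placeholders to be adapted to the network's extent):
from shapely.geometry import Polygon
from dave_core import reduce_network
# net: a previously loaded pandapower/pandapipes model
area = Polygon([(9.3, 51.2), (9.3, 51.4), (9.6, 51.4), (9.6, 51.2)])
net_reduced = reduce_network(net, area, cross_border=True)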
This function searches for the related substation of a given bus and returns some substation information
- INPUT:
bus (Shapely Point) - bus geometry
substations (DataFrame) - table of the possible substations
- OUTPUT:
(Tuple) - Substation information for a given bus (ego_subst_id, subst_dave_name, subst_name)
- dave_core.request_geo_data(grid_area, crs, save_data=True)[source]
This function requests all available geodata for a given area from DAVE.
- Input:
grid_area (Shapely polygon) - Polygon which defines the considered grid area
crs (str, default: ‘epsg:4326’) - Definition of the network coordinate reference system
- OPTIONAL:
save_data (boolean, default True) - if true, the resulting data will be stored in a local folder
- OUTPUT:
request_geodata (pandapower net) - geodata for the grid_area from DAVE
- dave_core.road_junctions(roads, grid_data)[source]
This function searches junctions for the relevant roads in the target area
- dave_core.save_dataset_to_archiv(grid_data)[source]
This function saves the DAVE dataset in DAVE's own archive. Hint: datasets based on own area definitions will not be saved
- dave_core.save_dataset_to_user_folder(grid_data, output_format, output_folder, filename, save_data)[source]
This function saves the DAVE dataset to an output folder.
- Input:
grid_data (attrdict) - DAVE dataset to be saved
output_format (string, default 'json') - this parameter defines the output format. Available formats are currently: 'json', 'hdf' and 'gpkg'
output_folder (string, default user desktop) - absolute path to the folder where the generated data should be saved. If no folder exists for this path, DAVE will create one
filename (string, default 'dave_dataset') - this parameter defines the name of the output file
save_data (boolean, default True) - if true, the resulting data will be stored in a local folder
- OUTPUT:
grid_data (attrdict) - grid_data as an attrdict in DAVE structure
- dave_core.set_dave_settings()[source]
This function returns a dictionary with the DAVE settings for used data and assumptions
- dave_core.simone_to_dave(data_simone)[source]
This function converts data from SIMONE into the DAVE format
- INPUT:
data_simone (dict) - all available SIMONE data, including topology and optionally scenario and result data
- dave_core.target_area(grid_data, power_levels, gas_levels, postalcode=None, town_name=None, federal_state=None, nuts_region=None, own_area=None, buffer=0, roads=True, buildings=True, landuse=True, railways=True, waterways=True)[source]
This function calculates all relevant geographical information for the target area and adds it to the grid_data
- INPUT:
grid_data (attrdict) - grid_data as an attrdict in DAVE structure
power_levels (list) - this parameter defines which power levels should be considered. Options: 'ehv', 'hv', 'mv', 'lv', []. One level, multiple levels or 'ALL' can be chosen
gas_levels (list) - this parameter defines which gas levels should be considered. Options: 'hp', 'mp', 'lp', []. One level, multiple levels or 'ALL' can be chosen
One of these parameters must be set:
postalcode (List of strings) - numbers of the target postalcode areas. ['ALL'] can also be chosen for all postalcode areas in Germany
town_name (List of strings) - names of the target towns. ['ALL'] can also be chosen for all cities in Germany
federal_state (List of strings) - names of the target federal states. ['ALL'] can also be chosen for all federal states in Germany
nuts_region (List of strings) - codes of the target NUTS regions. ['ALL'] can also be chosen for all NUTS regions in Europe
own_area (string) - full path to a shape file which includes the own target area (e.g. "C:/Users/name/test/test.shp") or a GeoDataFrame as string
- OPTIONAL:
buffer (float, default 0) - buffer for the target area
roads (boolean, default True) - obtain information about roads which are relevant for the grid model
buildings (boolean, default True) - obtain information about buildings
landuse (boolean, default True) - obtain information about landuses
railways (boolean, default True) - obtain information about railways
waterways (boolean, default True) - obtain information about waterways
Example
from dave.topology import target_area
target_area(town_name=['Kassel'], buffer=0)
- dave_core.to_archiv(grid_data)[source]
This function stores a DAVE dataset in the DAVE internal archive
- dave_core.to_gpkg(grid_data, file_path)[source]
This function stores a DAVE dataset at a given path in geopackage format
- INPUT:
grid_data (attr Dict) - DAVE Dataset
file_path (str) - absolute path where the gpkg file will be stored.
- dave_core.to_hdf(grid_data, file_path)[source]
This function stores a DAVE dataset at a given path in HDF5 format
- INPUT:
grid_data (attr Dict) - DAVE Dataset
file_path (str) - absolute path where the HDF5 file will be stored.
- dave_core.to_json(grid_data, file_path=None, encryption_key=None)[source]
This function saves a DAVE dataset in JSON format.
- INPUT:
grid_data (attr Dict) - DAVE Dataset
file_path (str, default None) - absolute path where the JSON file will be stored. If None is given, the function only returns a JSON string
encryption_key (string, default None) - if given, the DAVE dataset is stored as an encrypted JSON string
- OUTPUT:
json_string (Str) - The Data converted to a json string
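Example (a minimal round-trip sketch using the documented defaults):
from dave_core import create_empty_dataset, from_json, to_json
grid_data = create_empty_dataset()
json_string = to_json(grid_data)  # file_path=None returns a JSON string
to_json(grid_data, file_path="/tmp/dave_dataset.json")
grid_data_loaded = from_json("/tmp/dave_dataset.json")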
- dave_core.voronoi(points, polygon_param=True)[source]
This function calculates the voronoi diagram for given points
- INPUT:
points (GeoDataFrame) - all nodes for the Voronoi analysis (centroids)
polygon_param (bool, default True) - if True, the centroid and dave name for each Voronoi polygon will be searched
- OUTPUT:
voronoi polygons (GeoDataFrame) - all voronoi areas for the given points
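Example (a minimal sketch with toy centroids; the dave_name column is an assumption based on the polygon_param description):
import geopandas as gpd
from shapely.geometry import Point
from dave_core import voronoi
points = gpd.GeoDataFrame({"dave_name": ["node_0", "node_1", "node_2"]}, geometry=[Point(0, 0), Point(1, 0), Point(0.5, 1)], crs="epsg:4326")
polygons = voronoi(points, polygon_param=True)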
- dave_core.wkb_to_wkt(data_df, crs)[source]
This function converts geometry data from WKB (hexadecimal string) to WKT (geometric object) format for a given dataframe and converts it to a geodataframe
- INPUT:
data_df (DataFrame) - data whose geometry data is in hexadecimal string format
crs (str) - coordinate reference system for the data
- Output:
data_df (DataFrame) - Data with geometry as shapely objects
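Example (a minimal sketch; the geometry column name is an assumption for illustration):
import pandas as pd
from shapely import wkb
from shapely.geometry import Point
from dave_core import wkb_to_wkt
df = pd.DataFrame({"geometry": [wkb.dumps(Point(9.5, 51.3), hex=True)]})
gdf = wkb_to_wkt(df, crs="epsg:4326")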
- dave_core.wkt_to_wkb(data_df)[source]
This function converts geometry data from WKT (geometric object) to WKB (hexadecimal string) format for a given geodataframe
- INPUT:
data_df (DataFrame) - Data with geometry data as shapely objects
- Output:
data_df (DataFrame) - Data with geometry which is in hexadecimal string format
- dave_core.wkt_to_wkb_dataset(grid_data)[source]
This function converts all geometry data from WKT (geometric object) to WKB (hexadecimal string) format for a given DAVE dataset
- INPUT:
grid_data (attr Dict) - DAVE Dataset with Data that contains geometry data as shapely objects
- Output:
dataset (attr Dict) - DAVE Dataset with Data that contains geometry in hexadecimal string format