sedona.spark.stats.clustering package

Submodules

sedona.spark.stats.clustering.dbscan module

DBSCAN is a popular clustering algorithm for spatial data.

It identifies groups of records in which enough points lie close to one another. This implementation leverages Spark, Sedona, and GraphFrames to support large-scale datasets and heterogeneous geometric feature types.

sedona.spark.stats.clustering.dbscan.dbscan(dataframe: DataFrame, epsilon: float, min_pts: int, geometry: str | None = None, include_outliers: bool = True, use_spheroid=False, is_core_column_name='isCore', cluster_column_name='cluster')

Annotates a dataframe with a cluster label for each data record using the DBSCAN algorithm.

The dataframe should contain at least one GeometryType column, and rows must be unique. If exactly one geometry column is present, it is used automatically. If several are present, the one named ‘geometry’ is used; if none is named ‘geometry’, the column name must be provided, as in the sketch below.
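
For instance, when several geometry columns are present and none is named ‘geometry’, the target column can be passed explicitly (a minimal sketch; trips_df and pickup_geom are hypothetical names):

    clustered = dbscan(trips_df, epsilon=100.0, min_pts=5, geometry="pickup_geom")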

Parameters:
  • dataframe – Spark DataFrame containing the geometries

  • epsilon – distance parameter of the DBSCAN algorithm; two points are neighbors if they are within this distance of each other

  • min_pts – minimum number of neighboring points required for a point to be a core point in the DBSCAN algorithm

  • geometry – name of the geometry column; if None, the column is inferred as described above

  • include_outliers – whether to return outlier points. If True, outliers are returned with a cluster value of -1. Default is True

  • use_spheroid – whether to use a Cartesian or spheroidal distance calculation. Default is False

  • is_core_column_name – name of the output column indicating whether a point is a core point. Default is “isCore”

  • cluster_column_name – name of the output column containing the cluster id. Default is “cluster”

Returns:

A PySpark DataFrame containing the cluster label for each row
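
A minimal end-to-end sketch, assuming a SedonaContext-enabled session and that a Spark checkpoint directory is set (the GraphFrames-backed connected-components step typically requires one); the data and paths are illustrative only:

    from sedona.spark import SedonaContext
    from sedona.spark.stats.clustering.dbscan import dbscan

    # Session setup is environment-specific (e.g. spark.jars.packages for Sedona).
    config = SedonaContext.builder().getOrCreate()
    sedona = SedonaContext.create(config)
    # Assumed requirement: GraphFrames' connected components needs a checkpoint
    # directory; the path here is illustrative.
    sedona.sparkContext.setCheckpointDir("/tmp/sedona-checkpoints")

    # Toy point data; any DataFrame with a GeometryType column and unique rows works.
    df = sedona.sql(
        "SELECT ST_Point(CAST(x AS DOUBLE), CAST(y AS DOUBLE)) AS geometry "
        "FROM VALUES (0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0) AS t(x, y)"
    )

    # Points within 0.3 units of enough neighbors form a cluster; the isolated
    # point at (5.0, 5.0) comes back with cluster -1 since include_outliers
    # defaults to True.
    clustered = dbscan(df, epsilon=0.3, min_pts=3)
    clustered.select("geometry", "isCore", "cluster").show()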

Module contents

The clustering module contains Spark-based implementations of popular geospatial clustering algorithms.

These implementations are designed to scale to large datasets and to support various geometric feature types.