Python API Reference¶
sedonadb.context ¶
SedonaContext ¶
Context for executing queries using Sedona
This object keeps track of state such as registered functions, registered tables, and available memory. It is similar to a Spark SparkSession or a database connection.
Examples:
>>> sd = sedona.db.connect()
>>> sd.options.interactive = True
>>> sd.sql("SELECT 1 as one")
┌───────┐
│ one │
│ int64 │
╞═══════╡
│ 1 │
└───────┘
create_data_frame ¶
Create a DataFrame from an in-memory or protocol-enabled object.
Converts supported Python objects into a SedonaDB DataFrame so you can run SQL and spatial operations on them.
Parameters:
- obj (Any) – A supported object: a pandas DataFrame, a GeoPandas GeoDataFrame, a Polars DataFrame, or a pyarrow Table.
- schema (Any, default: None) – Optional object implementing __arrow_c_schema__ for providing an Arrow schema.
Returns:
- DataFrame (DataFrame) – A SedonaDB DataFrame.
Examples:
>>> import pandas as pd
>>> sd = sedona.db.connect()
>>> sd.create_data_frame(pd.DataFrame({"x": [1, 2]})).head(1).show()
┌───────┐
│ x │
│ int64 │
╞═══════╡
│ 1 │
└───────┘
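A GeoPandas GeoDataFrame is handled the same way. A minimal sketch (the repr output follows the same pattern as read_parquet() below):
>>> import geopandas
>>> gdf = geopandas.GeoDataFrame(
...     {"geometry": geopandas.GeoSeries.from_wkt(["POINT (0 1)"], crs=4326)}
... )
>>> sd.create_data_frame(gdf)
<sedonadb.dataframe.DataFrame object at ...>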
drop_view ¶
drop_view(name: str) -> None
Remove a named view
Parameters:
- name (str) – The name of the view
Examples:
>>> sd = sedona.db.connect()
>>> sd.sql("SELECT ST_Point(0, 1) as geom").to_view("foofy")
>>> sd.drop_view("foofy")
read_parquet ¶
read_parquet(
table_paths: Union[str, Path, Iterable[str]],
options: Optional[Dict[str, Any]] = None,
) -> DataFrame
Create a DataFrame from one or more Parquet files
Parameters:
- table_paths (Union[str, Path, Iterable[str]]) – A str, Path, or iterable of paths or URLs to Parquet files.
- options (Optional[Dict[str, Any]], default: None) – Optional dictionary of options to pass to the Parquet reader. For anonymous access to public S3 buckets, use {"aws.skip_signature": True, "aws.region": "us-west-2"}.
Examples:
>>> sd = sedona.db.connect()
>>> url = "https://github.com/apache/sedona-testing/raw/refs/heads/main/data/parquet/geoparquet-1.1.0.parquet"
>>> sd.read_parquet(url)
<sedonadb.dataframe.DataFrame object at ...>
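The options mapping supplies reader settings such as the aws.* keys documented above. A hedged sketch for anonymous access to a public bucket (the S3 path is a placeholder):
>>> df = sd.read_parquet(
...     "s3://example-bucket/data.parquet",  # placeholder path
...     options={"aws.skip_signature": True, "aws.region": "us-west-2"},
... )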
read_pyogrio ¶
read_pyogrio(
table_paths: Union[str, Path, Iterable[str]],
options: Optional[Dict[str, Any]] = None,
extension: str = "",
) -> DataFrame
Read spatial file formats using GDAL/OGR via pyogrio
Creates a DataFrame from one or more paths or URLs to a file supported by pyogrio, the same package that powers geopandas.read_file() by default. Common formats that can be opened using GDAL/OGR include FlatGeobuf, GeoPackage, Shapefile, GeoJSON, and many more. See https://gdal.org/en/stable/drivers/vector/index.html for a list of available vector drivers.
As with read_parquet(), globs and directories can be specified in addition to individual file paths. Paths ending in .zip are automatically prepended with /vsizip/ (i.e., they are automatically unzipped by GDAL). HTTP(S) URLs are supported via /vsicurl/.
Parameters:
- table_paths (Union[str, Path, Iterable[str]]) – A str, Path, or iterable of paths or URLs. Globs (e.g., path/*.gpkg), directories, and zipped versions of otherwise readable files are supported.
- options (Optional[Dict[str, Any]], default: None) – An optional mapping of key/value pairs (open options) passed to GDAL/OGR.
- extension (str, default: '') – An optional file extension (e.g., "fgb") used when table_paths specifies one or more directories or a glob that does not enforce a file extension.
Examples:
>>> import geopandas
>>> import tempfile
>>> sd = sedona.db.connect()
>>> df = geopandas.GeoDataFrame({
... "geometry": geopandas.GeoSeries.from_wkt(["POINT (0 1)"], crs=3857)
... })
>>>
>>> with tempfile.TemporaryDirectory() as td:
... df.to_file(f"{td}/df.fgb")
... sd.read_pyogrio(f"{td}/df.fgb").show()
...
┌──────────────┐
│ wkb_geometry │
│ geometry │
╞══════════════╡
│ POINT(0 1) │
└──────────────┘
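When table_paths is a directory, extension selects which files are read. A minimal sketch reusing df from above (the count of 2 assumes both single-row files are scanned into one DataFrame):
>>> with tempfile.TemporaryDirectory() as td:
...     df.to_file(f"{td}/a.fgb")
...     df.to_file(f"{td}/b.fgb")
...     sd.read_pyogrio(td, extension="fgb").count()
...
2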
register_udf ¶
register_udf(udf: Any)
Register a user-defined function
Parameters:
- udf (Any) – An object implementing the DataFusion PyCapsule protocol (i.e., __datafusion_scalar_udf__) or a function annotated with arrow_udf.
Examples:
>>> import pyarrow as pa
>>> from sedonadb import udf
>>> sd = sedona.db.connect()
>>> @udf.arrow_udf(pa.int64(), [udf.STRING])
... def char_count(arg0):
... arg0 = pa.array(arg0.to_array())
...
... return pa.array(
... (len(item) for item in arg0.to_pylist()),
... pa.int64()
... )
...
>>> sd.register_udf(char_count)
>>> sd.sql("SELECT char_count('abcde') as col").show()
┌───────┐
│ col │
│ int64 │
╞═══════╡
│ 5 │
└───────┘
sql ¶
Create a DataFrame by executing SQL
Parses a SQL string into a logical plan and returns a DataFrame that can be used to request results or further modify the query.
Parameters:
- sql (str) – A single SQL statement.
Examples:
>>> sd = sedona.db.connect()
>>> sd.sql("SELECT ST_Point(0, 1) as geom")
<sedonadb.dataframe.DataFrame object at ...>
view ¶
Create a DataFrame from a named view
Refer to a named view registered with this context.
Parameters:
- name (str) – The name of the view
Examples:
>>> sd = sedona.db.connect()
>>> sd.sql("SELECT ST_Point(0, 1) as geom").to_view("foofy")
>>> sd.view("foofy").show()
┌────────────┐
│ geom │
│ geometry │
╞════════════╡
│ POINT(0 1) │
└────────────┘
>>> sd.drop_view("foofy")
configure_proj ¶
configure_proj(
preset: Literal[
"auto", "pyproj", "homebrew", "conda", "system", None
] = None,
*,
shared_library: Union[str, Path] = None,
database_path: Union[str, Path] = None,
search_path: Union[str, Path] = None,
verbose: bool = False,
)
Configure PROJ source
SedonaDB loads PROJ dynamically so that its results and configuration align with other Python and/or system libraries. This is normally configured on package load but may need additional configuration (particularly if the automatic configuration fails).
This function may be called at any time; however, once ST_Transform has been called, subsequent configuration has no effect.
Parameters:
- preset (Literal['auto', 'pyproj', 'homebrew', 'conda', 'system', None], default: None) – One of:
  - None: Use custom values of shared_library and/or the other keyword arguments.
  - auto: Try all presets in the order pyproj, conda, homebrew, system and warn if none succeed.
  - pyproj: Attempt to use the shared libraries bundled with pyproj. This aligns transformations with those performed by geopandas and is the preset tried first.
  - conda: Attempt to load libproj and data files installed via conda install proj.
  - homebrew: Attempt to load libproj and data files installed via brew install proj. Note that the Homebrew install also includes proj-data grid files and may be able to perform more accurate transforms by default/without network capability.
  - system: Attempt to load libproj from a directory already on LD_LIBRARY_PATH (Linux), DYLD_LIBRARY_PATH (macOS), or PATH (Windows). This should find the version of PROJ installed by a Linux system package manager.
- shared_library (Union[str, Path], default: None) – Path to a PROJ shared library.
- database_path (Union[str, Path], default: None) – Path to the PROJ database (proj.db).
- search_path (Union[str, Path], default: None) – Path to the directory containing PROJ data files.
- verbose (bool, default: False) – If True, print information about the configuration process.
Examples:
>>> sedona.db.configure_proj("auto")
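A preset is not required; explicit paths may be supplied instead (the paths below are placeholders for an existing PROJ installation):
>>> sedona.db.configure_proj(
...     shared_library="/usr/lib/libproj.so",     # placeholder path
...     database_path="/usr/share/proj/proj.db",  # placeholder path
...     search_path="/usr/share/proj",            # placeholder path
... )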
sedonadb.dataframe ¶
DataFrame ¶
Representation of a (lazy) collection of columns
This object is usually constructed from a SedonaContext by importing an object, reading a file, or executing SQL.
schema
property
¶
schema
Return the column names and data types
Examples:
>>> sd = sedona.db.connect()
>>> df = sd.sql("SELECT 1 as one")
>>> df.schema
SedonaSchema with 1 field:
one: non-nullable int64<Int64>
>>> df.schema.field(0)
SedonaField one: non-nullable int64<Int64>
>>> df.schema.field(0).name, df.schema.field(0).type
('one', SedonaType int64<Int64>)
__arrow_c_schema__ ¶
__arrow_c_schema__()
ArrowSchema PyCapsule interface
Returns a PyCapsule wrapping an Arrow C Schema for interoperability with libraries that understand Arrow C data types. See the Arrow PyCapsule interface for more details.
__arrow_c_stream__ ¶
__arrow_c_stream__(requested_schema: Any = None)
ArrowArrayStream PyCapsule interface
Returns a PyCapsule wrapping an Arrow C ArrayStream for interoperability with libraries that understand Arrow C data types. See the Arrow PyCapsule interface for more details.
Parameters:
- requested_schema (Any, default: None) – A PyCapsule representing the desired output schema.
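Because a DataFrame implements both protocols, it can be passed directly to libraries that accept Arrow PyCapsule objects. A minimal sketch assuming pyarrow >= 14, which consumes schema- and stream-protocol objects:
>>> import pyarrow as pa
>>> sd = sedona.db.connect()
>>> pa.schema(sd.sql("SELECT 1 as one"))
one: int64 not null
>>> pa.table(sd.sql("SELECT 1 as one"))
pyarrow.Table
one: int64 not null
----
one: [[1]]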
count ¶
count() -> int
Compute the number of rows in this DataFrame
Examples:
>>> sd = sedona.db.connect()
>>> df = sd.sql("SELECT * FROM (VALUES ('one'), ('two'), ('three')) AS t(val)")
>>> df.count()
3
execute ¶
execute() -> None
Execute the plan represented by this DataFrame
This will execute the query without collecting results into memory, which is useful for executing SQL statements like SET, CREATE VIEW, and CREATE EXTERNAL TABLE.
Note that this is functionally similar to .count(), except that it does not apply any optimizations (e.g., it does not use statistics to avoid reading data when computing a count).
Examples:
>>> sd = sedona.db.connect()
>>> sd.sql("CREATE OR REPLACE VIEW temp_view AS SELECT 1 as one").execute()
0
>>> sd.view("temp_view").show()
┌───────┐
│ one │
│ int64 │
╞═══════╡
│ 1 │
└───────┘
explain ¶
Return the execution plan for this DataFrame as a DataFrame
Retrieves the logical and physical execution plans that will be used to compute this DataFrame. This is useful for understanding query performance and optimization.
Parameters:
- type (str, default: 'standard') – The type of explain plan to generate. Supported values are: "standard" (default), which shows the logical and physical plans; "extended", which includes additional query optimization details; and "analyze", which executes the plan and reports actual metrics.
- format (str, default: 'indent') – The format to use for displaying the plan. Supported formats are "indent" (default), "tree", "pgjson", and "graphviz".
Returns:
- DataFrame – A DataFrame containing the execution plan information with columns 'plan_type' and 'plan'.
Examples:
>>> import sedonadb
>>> con = sedonadb.connect()
>>> df = con.sql("SELECT 1 as one")
>>> df.explain().show()
┌───────────────┬─────────────────────────────────┐
│ plan_type ┆ plan │
│ utf8 ┆ utf8 │
╞═══════════════╪═════════════════════════════════╡
│ logical_plan ┆ Projection: Int64(1) AS one │
│ ┆ EmptyRelation: rows=1 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ physical_plan ┆ ProjectionExec: expr=[1 as one] │
│ ┆ PlaceholderRowExec │
│ ┆ │
└───────────────┴─────────────────────────────────┘
head ¶
Limit result to the first n rows
Note that this is non-deterministic for many queries.
Parameters:
- n (int, default: 5) – The number of rows to return
Examples:
>>> sd = sedona.db.connect()
>>> df = sd.sql("SELECT * FROM (VALUES ('one'), ('two'), ('three')) AS t(val)")
>>> df.head(1).show()
┌──────┐
│ val │
│ utf8 │
╞══════╡
│ one │
└──────┘
limit ¶
Limit result to n rows starting at offset
Note that this is non-deterministic for many queries.
Parameters:
- n (Optional[int]) – The number of rows to return
- offset (int, default: 0) – The number of rows to skip (optional)
Examples:
>>> sd = sedona.db.connect()
>>> df = sd.sql("SELECT * FROM (VALUES ('one'), ('two'), ('three')) AS t(val)")
>>> df.limit(1).show()
┌──────┐
│ val │
│ utf8 │
╞══════╡
│ one │
└──────┘
>>> df.limit(1, offset=2).show()
┌───────┐
│ val │
│ utf8 │
╞═══════╡
│ three │
└───────┘
show ¶
Print the first limit rows to the console
Parameters:
- limit (Optional[int], default: 10) – The number of rows to display. Using None will display the entire table, which may result in very large output.
- width (Optional[int], default: None) – The number of characters to use to display the output. If None, uses Options.width or detects the value from the current terminal if available. The default width is 100 characters if a width is not set by another mechanism.
- ascii (bool, default: False) – Use True to disable UTF-8 characters in the output.
Examples:
>>> sd = sedona.db.connect()
>>> sd.sql("SELECT ST_Point(0, 1) as geometry").show()
┌────────────┐
│ geometry │
│ geometry │
╞════════════╡
│ POINT(0 1) │
└────────────┘
to_arrow_table ¶
to_arrow_table(schema: Any = None) -> Table
Execute and collect results as a PyArrow Table
Executes the logical plan represented by this object and returns a PyArrow Table. This requires that pyarrow is installed.
Parameters:
- schema (Any, default: None) – The requested output schema, or None to use the inferred schema.
Examples:
>>> sd = sedona.db.connect()
>>> sd.sql("SELECT ST_Point(0, 1) as geometry").to_arrow_table()
pyarrow.Table
geometry: extension<geoarrow.wkb<WkbType>> not null
----
geometry: [[01010000000000000000000000000000000000F03F]]
to_memtable ¶
to_memtable() -> DataFrame
Collect a data frame into a memtable
Executes the logical plan represented by this object and returns a DataFrame representing the collected result. Does not guarantee ordering of rows; use to_arrow_table() if ordering is needed.
Examples:
>>> sd = sedona.db.connect()
>>> sd.sql("SELECT ST_Point(0, 1) as geom").to_memtable().show()
┌────────────┐
│ geom │
│ geometry │
╞════════════╡
│ POINT(0 1) │
└────────────┘
to_pandas ¶
to_pandas(geometry: Optional[str] = None) -> Union[DataFrame, GeoDataFrame]
Execute and collect results as a pandas DataFrame or GeoDataFrame
If this data frame contains geometry columns, collect results as a
single geopandas.GeoDataFrame. Otherwise, collect results as a
pandas.DataFrame.
Parameters:
- geometry (Optional[str], default: None) – If specified, the name of the column to use for the default geometry column. If not specified, this is inferred as the column named "geometry", the column named "geography", or the first column with a spatial data type (in that order).
Examples:
>>> sd = sedona.db.connect()
>>> sd.sql("SELECT ST_Point(0, 1) as geometry").to_pandas()
geometry
0 POINT (0 1)
to_parquet ¶
to_parquet(
path: Union[str, Path],
*,
partition_by: Optional[Union[str, Iterable[str]]] = None,
sort_by: Optional[Union[str, Iterable[str]]] = None,
single_file_output: Optional[bool] = None,
geoparquet_version: Literal["1.0", "1.1"] = "1.0",
overwrite_bbox_columns: bool = False,
)
Write this DataFrame to one or more (Geo)Parquet files
For input that contains geometry columns, GeoParquet metadata is written such that suitable readers can recreate Geometry/Geography types when reading the output and potentially read fewer row groups when only a subset of the file is needed for a given query.
Parameters:
- path (Union[str, Path]) – A filename or directory to which Parquet file(s) should be written.
- partition_by (Optional[Union[str, Iterable[str]]], default: None) – A vector of column names to partition by. If non-empty, applies hive-style partitioning to the output.
- sort_by (Optional[Union[str, Iterable[str]]], default: None) – A vector of column names to sort by. Currently only ascending sort is supported.
- single_file_output (Optional[bool], default: None) – Use True or False to force writing a single Parquet file vs. writing one file per partition to a directory. By default, a single file is written if partition_by is unspecified and path ends with .parquet.
- geoparquet_version (Literal['1.0', '1.1'], default: '1.0') – GeoParquet metadata version to write if the output contains one or more geometry columns. The default (1.0) is the most widely supported and will result in geometry columns being recognized by many readers; however, it only includes statistics at the file level. Use GeoParquet 1.1 to compute an additional bounding box column for every geometry column in the output: some readers can use these columns to prune row groups when files contain an effective spatial ordering. The extra columns appear just before their geometry column and are named "[geom_col_name]_bbox" for all geometry columns except "geometry", whose bounding box column is named simply "bbox".
- overwrite_bbox_columns (bool, default: False) – Use True to overwrite any bounding box columns that already exist in the input. This is useful in a read -> modify -> write scenario to ensure these columns are up to date. If False (the default), an error is raised if a bbox column already exists.
Examples:
>>> import tempfile
>>> sd = sedona.db.connect()
>>> td = tempfile.TemporaryDirectory()
>>> url = "https://github.com/apache/sedona-testing/raw/refs/heads/main/data/parquet/geoparquet-1.1.0.parquet"
>>> sd.read_parquet(url).to_parquet(f"{td.name}/tmp.parquet")
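To also write GeoParquet 1.1 bounding box columns (for readers that can prune row groups), pass geoparquet_version; this reuses url and td from the example above:
>>> sd.read_parquet(url).to_parquet(
...     f"{td.name}/tmp_1_1.parquet",
...     geoparquet_version="1.1",
... )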
to_view ¶
Create a view based on the query represented by this object
Registers this logical plan as a named view with the underlying context such that it can be referred to in SQL.
Parameters:
- name (str) – The name by which this query should be referred to
- overwrite (bool, default: False) – Use True to overwrite an existing view of this name
Examples:
>>> sd = sedona.db.connect()
>>> sd.sql("SELECT ST_Point(0, 1) as geom").to_view("foofy")
>>> sd.view("foofy").show()
┌────────────┐
│ geom │
│ geometry │
╞════════════╡
│ POINT(0 1) │
└────────────┘
sedonadb.testing ¶
DBEngine ¶
Engine-agnostic catalog and SQL engine
Represents a connection to an engine, abstracting the details of registering a few common types of inputs and generating a few common types of outputs. This is intended for general testing and benchmarking usage and should not be used for anything other than that purpose. Notably, generated SQL is not hardened against injection and table creators always drop any existing table of that name.
assert_query_result ¶
Assert a SQL query result matches an expected target
A wrapper around execute_and_collect() and assert_result() that captures the most common usage of the DBEngine.
assert_result ¶
assert_result(result, expected, **kwargs) -> DBEngine
Assert a result against an expected target
Supported expected targets include:
- A pyarrow.Table (compared using ==)
- A geopandas.GeoDataFrame (compared using geopandas.testing)
- A pandas.DataFrame (for non-spatial results; compared using pandas.testing)
- A list of tuples where all values have been converted to strings. For geometry results, these strings are converted to WKT using geoarrow.pyarrow (which ensures a consistent WKT output format).
- A tuple of strings as the string output of a single row
- A string as the string output of a single column of a single row
- A bool for a single boolean value
- An int or float for single numeric values (optionally with a numeric_epsilon)
- bytes for single binary values
Using Arrow table equality is the most strict (it ensures exact type equality and byte-for-byte value equality); however, string output is most useful for checking logical value equality among engines. GeoPandas/pandas expected targets generate the most useful assertion failures and are probably the best option for general usage.
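A minimal sketch of a typical assertion, constructing the engine via create_or_skip() (documented below); the positional (query, expected) form of assert_query_result is assumed here:
>>> from sedonadb.testing import PostGIS
>>> eng = PostGIS.create_or_skip()  # calls pytest.skip() if PostGIS is unavailable
>>> _ = eng.assert_query_result("SELECT 'one'", "one")  # a single string target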
create_or_skip
classmethod
¶
create_or_skip(*args, **kwargs) -> DBEngine
Create this engine or call pytest.skip()
This is the constructor that should be used in tests to ensure that integration style tests don't cause failure for contributors working on Python-only behaviour.
If SEDONADB_PYTHON_NO_SKIP_TESTS is set, this function will never skip to avoid accidentally skipping tests on CI.
create_table_arrow ¶
create_table_arrow(name, obj) -> DBEngine
Copy an Arrow readable into an engine's native table format
create_table_pandas ¶
create_table_pandas(name, obj) -> DBEngine
Copy a GeoPandas or Pandas table into an engine's native table format
create_table_parquet ¶
create_table_parquet(name, paths) -> DBEngine
Scan one or more Parquet files and bring them into the engine's native table format
This is needed for engines that can't lazily scan Parquet (e.g., PostGIS) or engines that have an optimized internal format (e.g., DuckDB). The ability of engines to push down a scan into their own table format is variable.
create_view_parquet ¶
create_view_parquet(name, paths) -> DBEngine
Create a named view of Parquet files without scanning them
This is usually the best option for a benchmark if both engines support pushing down a spatial filter into the Parquet files in question. This is not supported by the PostGIS engine.
execute_and_collect ¶
execute_and_collect(query)
Execute a query and collect results to the driver
The output type here is engine-specific (use other methods to resolve the result into concrete output formats). Current engines typically collect results as Arrow; however, result_to_table() is required to guarantee that geometry results are encoded as GeoArrow.
This is typically the execution step that should be benchmarked (although the end-to-end time that includes data loading can also be a useful number for some result types).
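For benchmarking, register the input once and time only the collection step. A sketch against a generic engine instance eng (constructed as in the earlier sketch; create_view_parquet() is not supported by PostGIS, so substitute an engine that supports it). The Parquet path is a placeholder:
>>> import time
>>> _ = eng.create_view_parquet("trips", ["data/trips.parquet"])  # placeholder path
>>> t0 = time.perf_counter()
>>> result = eng.execute_and_collect("SELECT count(*) FROM trips")
>>> elapsed = time.perf_counter() - t0  # the step typically benchmarked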
install_hint
classmethod
¶
install_hint() -> str
A short install hint printed when skipping tests due to failed construction
name
classmethod
¶
name() -> str
This engine's name
A short string used to identify this engine in error messages and work around differences in behaviour.
result_to_pandas ¶
result_to_pandas(result) -> DataFrame
Convert a query result into a pandas.DataFrame or geopandas.GeoDataFrame
result_to_tuples ¶
Convert a query result into row tuples
This option strips away fine-grained type information but is helpful for generally asserting a query result or verifying results between engines that have (e.g.) differing integer handling.
PostGIS ¶
PostGISSingleThread ¶
sedonadb.dbapi ¶
connect ¶
Connect to Sedona via Python DBAPI
Creates a DBAPI-compatible connection as a thin wrapper around the ADBC Python driver manager's DBAPI compatibility layer. Support for DBAPI is experimental.
Parameters:
- kwargs (Mapping[str, Any], default: {}) – Extra keyword arguments passed to adbc_driver_manager.dbapi.Connection().
Examples:
>>> con = sedona.dbapi.connect()
>>> with con.cursor() as cur:
... _ = cur.execute("SELECT 1 as one")
... cur.fetchall()
[(1,)]
sedonadb.udf ¶
BINARY
module-attribute
¶
BINARY: TypeMatcher = 'binary'
Match any binary argument (i.e., binary, binary view, large binary, fixed-size binary)
STRING
module-attribute
¶
STRING: TypeMatcher = 'string'
Match any string argument (i.e., string, string view, large string)
ScalarUdfImpl ¶
Scalar user-defined function wrapper
This wrapper class is used as the return value of user-defined function constructors. It allows the UDF to be registered with a SedonaDB context or any context that accepts DataFusion Python scalar UDFs. This object is not intended to be used to call the UDF directly.
TypeMatcher ¶
Bases: str
Helper class to mark type matchers that can be used as the input_types for
user-defined functions
Note that the internal storage of the type matcher (currently a string) is
arbitrary and may change in a future release. Use the constants provided by
the udf module.
arrow_udf ¶
arrow_udf(
return_type: Any,
input_types: List[Union[TypeMatcher, Any]] = None,
volatility: Literal["immutable", "stable", "volatile"] = "immutable",
name: Optional[str] = None,
)
Generic Arrow-based user-defined scalar function decorator
This decorator may be used to annotate a function that accepts arguments as Arrow array wrappers implementing the Arrow PyCapsule Interface. The annotated function must return an array of the appropriate type whose length is consistent with the input.
Warning
SedonaDB will call the provided function from multiple threads. Attempts to modify shared state from the body of the function may crash or cause unusual behaviour.
SedonaDB Python UDFs are experimental and this interface may change based on user feedback.
Parameters:
- return_type (Any) – One of:
  - A data type (e.g., pyarrow.DataType, arro3.core.DataType, nanoarrow.Schema) if this function returns the same type regardless of its inputs.
  - A function of arg_types (a list of data types) and scalar_args (a list of optional scalars) that returns a data type. This function is also responsible for returning None if the UDF does not apply to the input types.
- input_types (List[Union[TypeMatcher, Any]], default: None) – One of:
  - A list where each member is a data type or a TypeMatcher. The udf.GEOMETRY and udf.GEOGRAPHY type matchers are the most useful because otherwise the function will only match spatial data types whose coordinate reference system (CRS) also matches (i.e., based on simple equality). Using these type matchers will also ensure input CRS consistency and will automatically propagate input CRSes into the output.
  - None, indicating that this function can accept any number of arguments of any type. Usually this is paired with a functional return_type that dynamically computes a return type or returns None if the number or types of arguments do not match.
- volatility (Literal['immutable', 'stable', 'volatile'], default: 'immutable') – Use "immutable" for functions whose output is always consistent for the same inputs (even between queries); use "stable" for functions whose output is consistent for the same inputs within a single query; and use "volatile" for functions that generate random or otherwise non-deterministic output.
- name (Optional[str], default: None) – An optional name for the UDF. If not given, it will be derived from the name of the provided function.
Examples:
>>> import pyarrow as pa
>>> from sedonadb import udf
>>> sd = sedona.db.connect()
The simplest scalar UDF specifies only the return type. This implies that the function can handle input of any type.
>>> @udf.arrow_udf(pa.string())
... def some_udf(arg0, arg1):
... arg0, arg1 = (
... pa.array(arg0.to_array()).to_pylist(),
... pa.array(arg1.to_array()).to_pylist(),
... )
... return pa.array(
... (f"{item0} / {item1}" for item0, item1 in zip(arg0, arg1)),
... pa.string(),
... )
...
>>> sd.register_udf(some_udf)
>>> sd.sql("SELECT some_udf(123, 'abc') as col").show()
┌───────────┐
│ col │
│ utf8 │
╞═══════════╡
│ 123 / abc │
└───────────┘
Use the `TypeMatcher` constants where possible to specify input.
This ensures that the function can handle the usual range of input
types that might exist for a given input.
>>> @udf.arrow_udf(pa.int64(), [udf.STRING])
... def char_count(arg0):
... arg0 = pa.array(arg0.to_array())
...
... return pa.array(
... (len(item) for item in arg0.to_pylist()),
... pa.int64()
... )
...
>>> sd.register_udf(char_count)
>>> sd.sql("SELECT char_count('abcde') as col").show()
┌───────┐
│ col │
│ int64 │
╞═══════╡
│ 5 │
└───────┘
In this case, the type matcher ensures we can also use the function for string view input, which is the usual type SedonaDB emits when reading Parquet files.
>>> sd.sql("SELECT char_count(arrow_cast('abcde', 'Utf8View')) as col").show()
┌───────┐
│ col │
│ int64 │
╞═══════╡
│ 5 │
└───────┘
Geometry UDFs are best written using Shapely because pyproj (including its use
in GeoPandas) is not thread safe and can crash when attempting to look up
CRSes when importing an Arrow array. The UDF framework supports returning
geometry storage to make this possible. Coordinate reference system metadata
is propagated automatically from the input.
>>> import shapely
>>> import geoarrow.pyarrow as ga
>>> @udf.arrow_udf(ga.wkb(), [udf.GEOMETRY, udf.NUMERIC])
... def shapely_udf(geom, distance):
... geom_wkb = pa.array(geom.storage.to_array())
... distance = pa.array(distance.to_array())
... geom = shapely.from_wkb(geom_wkb)
... result_shapely = shapely.buffer(geom, distance)
... return pa.array(shapely.to_wkb(result_shapely))
...
>>>
>>> sd.register_udf(shapely_udf)
>>> sd.sql("SELECT ST_SRID(shapely_udf(ST_Point(0, 0), 2.0)) as col").show()
┌────────┐
│ col │
│ uint32 │
╞════════╡
│ 0 │
└────────┘
>>> sd.sql("SELECT ST_SRID(shapely_udf(ST_SetSRID(ST_Point(0, 0), 3857), 2.0)) as col").show()
┌────────┐
│ col │
│ uint32 │
╞════════╡
│ 3857 │
└────────┘
Annotated functions may also declare keyword arguments `return_type` and/or `num_rows`,
which will be passed the appropriate value by the UDF framework. This facilitates writing
generic UDFs and/or UDFs with no arguments.
>>> import numpy as np
>>> def random_impl(return_type, num_rows):
... pa_type = pa.field(return_type).type
... return pa.array(np.random.random(num_rows), pa_type)
...
>>> @udf.arrow_udf(pa.float32(), [])
... def random_f32(*, return_type=None, num_rows=None):
... return random_impl(return_type, num_rows)
...
>>> @udf.arrow_udf(pa.float64(), [])
... def random_f64(*, return_type=None, num_rows=None):
... return random_impl(return_type, num_rows)
...
>>> np.random.seed(487)
>>> sd.register_udf(random_f32)
>>> sd.register_udf(random_f64)
>>> sd.sql("SELECT random_f32() AS f32, random_f64() as f64;").show()
┌────────────┬─────────────────────┐
│ f32 ┆ f64 │
│ float32 ┆ float64 │
╞════════════╪═════════════════════╡
│ 0.35385555 ┆ 0.24793247139474195 │
└────────────┴─────────────────────┘