
Load data using pyarrow types #306

Merged: 15 commits merged into main from issue/279/default-pyarrow-backend on May 9, 2024
Conversation

@camposandro (Collaborator) commented on May 6, 2024

Adds the use_pyarrow_types argument to the read_hipscat interface. This allows us to pass the dtype_backend down to the calls that read the parquet leaf files and their metadata, loading both the data and the respective schema with pyarrow types.

Using the pyarrow backend (which is now the default!), we can load strings directly into a pandas DataFrame with the pyarrow string type, avoiding the creation of slow tasks that convert Python strings into pyarrow strings.
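
For context, this is the underlying pandas mechanism the PR plugs into (a minimal sketch, not LSDB code; the file path is a placeholder):

```python
import pandas as pd

# Reading a parquet file with the pyarrow dtype backend yields
# pyarrow-backed columns, e.g. string[pyarrow] instead of object.
df = pd.read_parquet("leaf.parquet", dtype_backend="pyarrow")
print(df.dtypes)  # string columns show as string[pyarrow]
```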

This change required me to also address #303, allowing catalogs to be created with pyarrow types in from_dataframe.

Closes #279 and #303.
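
A usage sketch under the names this PR introduces (the catalog path and column values are placeholders, and exact signatures may differ):

```python
import lsdb
import pandas as pd

# use_pyarrow_types is the argument added by this PR; pyarrow-backed
# loading is now the default, so passing it explicitly is optional.
catalog = lsdb.read_hipscat("/path/to/hipscat/catalog", use_pyarrow_types=True)

# from_dataframe can now also build catalogs from pyarrow-typed frames (#303).
df = pd.DataFrame({"id": [1], "ra": [10.5], "dec": [-30.2]})
df = df.convert_dtypes(dtype_backend="pyarrow")
catalog = lsdb.from_dataframe(df)
```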

@camposandro self-assigned this on May 6, 2024
github-actions bot commented on May 6, 2024

| Before [5482cf5] <v0.2.2> | After [85a46f2] | Ratio | Benchmark (Parameter) |
| --- | --- | --- | --- |
| 36.8±0.9ms | 52.5±0.6ms | 1.43 | benchmarks.time_kdtree_crossmatch |
| 12.5±0.2ms | 16.6±0.7ms | 1.33 | benchmarks.time_polygon_search |
| 6.45±0.3ms | 7.09±0.3ms | 1.10 | benchmarks.time_box_filter_on_partition |
| 494±2ms | 494±4ms | 1.00 | benchmarks.time_create_midsize_catalog |
| 3.24±0.02s | 3.21±0.01s | 0.99 | benchmarks.time_create_large_catalog |


codecov bot commented on May 6, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.08%. Comparing base (5482cf5) to head (5cd240f).

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #306      +/-   ##
==========================================
+ Coverage   99.06%   99.08%   +0.02%     
==========================================
  Files          41       41              
  Lines        1277     1306      +29     
==========================================
+ Hits         1265     1294      +29     
  Misses         12       12              


@camposandro marked this pull request as ready for review on May 7, 2024 at 13:53
@camposandro requested a review from smcguire-cmu on May 7, 2024 at 15:12
@hombit (Contributor) commented on May 7, 2024

Does it also fix #89?

@hombit linked an issue on May 7, 2024 that may be closed by this pull request
@camposandro (Collaborator, Author) replied:

> Does it also fix #89?

Yes, the string conversion tasks should not be required for catalogs loaded with pyarrow types.

I created a notebook that loads a chunk of ZTF sources including the string-typed "band" column (we saw several bottlenecks with this column before). Using a single worker and the latest version of Dask (2024.5.0), these were the compute times I obtained:

Previously (w/ pyarrow string conversion): 181.35 s
Previously (w/o pyarrow string conversion): 88.31 s
Currently (w/ pyarrow typed catalogs): 83.81 s
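
For reference, the dtype difference that made those conversion tasks necessary can be seen with plain pandas (an illustrative sketch, not from the notebook):

```python
import pandas as pd

band_object = pd.Series(["g", "r", "i"])                          # object dtype (Python strings)
band_arrow = pd.Series(["g", "r", "i"], dtype="string[pyarrow]")  # pyarrow-backed strings

print(band_object.dtype)  # object
print(band_arrow.dtype)   # string[pyarrow]
# With pyarrow-backed columns, Dask needs no extra tasks to convert
# object-dtype strings before operations that expect arrow strings.
```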

@camposandro requested a review from smcguire-cmu on May 8, 2024 at 21:42
@hombit linked an issue on May 9, 2024 that may be closed by this pull request
src/lsdb/loaders/dataframe/dataframe_catalog_loader.py (review thread resolved, outdated)
src/lsdb/loaders/dataframe/from_dataframe.py (review thread resolved, outdated)
Co-authored-by: Melissa DeLucchi <[email protected]>
@camposandro merged commit 7c49a59 into main on May 9, 2024
11 checks passed
@camposandro deleted the issue/279/default-pyarrow-backend branch on May 9, 2024 at 18:53