Conversation

timlod

Instead of converting each chunk to a StringArray after casting to array and then concatenating, use pyarrow to concatenate the chunks and convert the result to numpy.

Finally, bypass the validation logic (unneeded, as the data was already validated on parquet write) by initializing NDArrayBacked instead of StringArray.

This removes most of the performance overhead seen in #47345. There is still a slight overhead compared to object string arrays because of the None -> NA conversion. I found that leaving that conversion out still resulted in NA values in the example I gave (and would actually improve performance over the object case), but this behaviour is not consistent, so the conversion is left in.
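A minimal sketch of the approach described above - illustrative only, not the exact diff in this PR. It assumes pyarrow >= 3.0 and the pandas-internal NDArrayBacked and StringArray import paths as of the 1.x series; from_arrow_sketch is a made-up name:

```python
import pandas as pd
import pyarrow as pa

# pandas internals; these import paths are assumptions for illustration.
from pandas._libs.arrays import NDArrayBacked
from pandas.core.arrays.string_ import StringArray


def from_arrow_sketch(array):
    """Convert a pyarrow (Chunked)Array of strings to a StringArray."""
    if isinstance(array, pa.ChunkedArray):
        if array.num_chunks == 0:
            array = pa.array([], type=pa.string())
        else:
            # Concatenate on the pyarrow side (pyarrow >= 3.0) instead of
            # building one StringArray per chunk and concatenating those.
            array = pa.concat_arrays(array.chunks)
    # Single conversion to a numpy object array; copies only when needed.
    arr = array.to_numpy(zero_copy_only=False)
    # None -> pd.NA conversion, kept in for consistency (see above).
    arr[pd.isna(arr)] = pd.NA
    # Bypass StringArray's validation (the values were already validated
    # on parquet write) by initializing the NDArrayBacked base directly.
    result = StringArray.__new__(StringArray)
    NDArrayBacked.__init__(result, arr, pd.StringDtype())
    return result
```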

timlod added 3 commits July 18, 2022 18:38
Instead of converting each chunk to a StringArray after casting to
array and then concatenating, use pyarrow to concatenate chunks
and convert to numpy.

Finally, we bypass the validation logic by initializing
NDArrayBacked instead of StringArray.
@timlod changed the title from "Bypass chunking/validation logic in StringDtype.__from_arrow__" to "PERF: Bypass chunking/validation logic in StringDtype.__from_arrow__" Jul 19, 2022

Is this compatible with the minimal pyarrow version we are supporting?

@timlod

Good point, I hadn't considered this. No - afaict this code requires pyarrow 3.0 (pyarrow.concat_arrays as well as array.to_numpy(zero_copy_only=False) were both introduced in 3.0), whereas https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html states pyarrow 1.0.1 as the minimum version.

I understand this performance issue alone may not be sufficient reason to bump a version, but, in general, what would be the requirements for that? There are just five months between the two releases.
Edit: Looking at how pyarrow has been bumped alongside new pandas versions in the past, it feels like moving to pyarrow 3 for pandas 1.5 could be reasonable (current pyarrow is version 8).

@mroeschke added the Performance, Strings, and Arrow labels Jul 22, 2022
@phofl

Could you open an issue about bumping pyarrow? Then we can discuss there and move forward from that.

@phofl

Any way of doing this without requiring 3.0? Otherwise this would have to wait for a bit.

@timlod

I think it's possible to implement something that's already a little better than what's on 1.4 without requiring pyarrow 3.
However, it's probably wise to switch to how it's done in this PR once pandas does require pa3.
I could make another PR later this week, if that's not too late for this release - and this one could be kept open for 1.5.1.

@phofl

Depends on the nature of the change; we don't backport anything big to a release candidate.

This one would have to wait for 1.6 - we avoid performance changes on 1.5.x.

@timlod

In that case, I think it's fine to just wait for 1.6 and make this change directly. One can work around the performance impact by using object strings until then.
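For reference, a rough illustration of the workaround (the file name is hypothetical): the default read_parquet call keeps object-dtype strings, while use_nullable_dtypes=True goes through the slower StringDtype conversion path that #47345 reports.

```python
import pandas as pd

# Slow path until this lands: nullable dtypes route string columns
# through StringDtype.__from_arrow__ (the code path this PR speeds up).
df = pd.read_parquet("data.parquet", use_nullable_dtypes=True)

# Workaround: read with the defaults, keeping object-dtype strings.
df = pd.read_parquet("data.parquet")
```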

@github-actions

This pull request is stale because it has been open for thirty days with no activity. Please update and respond to this comment if you're still interested in working on this.

@github-actions bot added the Stale label Oct 4, 2022
@mroeschke mentioned this pull request Oct 14, 2022
@phofl

We just increased the minimum version to 6.0, so we could finish this

@timlod

Excellent, I'll revisit this soon!

Edit: I recently found that pyarrow's to_pandas() method can be the bottleneck when loading large parquet files that are read as large chunked arrays. I think implementing similar logic (using pyarrow's own methods rather than concatenating lists of numpy arrays) for other datatypes might drastically improve read performance. Would it make sense to open a larger PR containing all those changes (if I can show improvements), or add those here?
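A hypothetical micro-benchmark along those lines, on synthetic data (the shapes are made up; results will vary with chunk count and string length):

```python
import time

import pyarrow as pa

# Synthetic chunked string column standing in for a large parquet read.
chunked = pa.chunked_array([pa.array(["x"] * 100_000) for _ in range(50)])

t0 = time.perf_counter()
chunked.to_pandas()
t1 = time.perf_counter()
pa.concat_arrays(chunked.chunks).to_numpy(zero_copy_only=False)
t2 = time.perf_counter()
print(f"to_pandas: {t1 - t0:.3f}s  concat+to_numpy: {t2 - t1:.3f}s")
```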

@mroeschke

Would it make sense to open a larger PR containing all those changes (if I can show improvements), or add those here?

Smaller, single-topic PRs would be preferred.

@lithomas1 removed the Stale label Oct 23, 2022
@timlod

I think this is ready then - I just changed the whatsnew edit, the code change stays the same.

I also briefly checked the other dtypes for what I thought might improve performance there, but this wasn't the case. There may be some places where one could switch to pyarrow concatenation, but those I checked (integer/numeric) didn't yield performance improvements (and may incur some memory overhead).

@phofl

Can you merge main?

@simonjayhawkins

@timlod there is a merge conflict here, but since the RC is now cut, this would probably need the release note moved to 2.1.

@mroeschke added this to the 2.1 milestone Feb 24, 2023
@mroeschke merged commit 129108f into pandas-dev:main Feb 24, 2023
@mroeschke

Thanks for sticking with this @timlod


Successfully merging this pull request may close these issues.

PERF: using use_nullable_dtypes=True in read_parquet slows performance on large dataframes