On large datasets (24 million rows), I get `pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays` when performing string operations, with Python 3.11 and pandas 2.2.2. This seems related to this previous issue: #55606
You need to construct a column with more than `np.iinfo("int32").max` total characters (more than 2 GB) to run into those issues. See #56259 for some context (and a reproducer); that is the reason we decided to use large_string for the future default string dtype.
@corey-dawson short term solution is to cast that column to pyarrow's large string type (`df["full_path"] = df["full_path"].astype(pd.ArrowDtype(pa.large_string()))`). That uses a bit more memory, but will avoid those issues in general.
Longer term, this is something that needs to be fixed on the pyarrow side, and it actually has already been improved lately. See apache/arrow#25822 and apache/arrow#33049 for related issues. There is a WIP PR at apache/arrow#41700 which will hopefully make it into one of the next releases.
Thanks for the workaround and the information about the upcoming fix, @jorisvandenbossche. Looking forward to continuing to test the Arrow backend and realizing the performance improvements Arrow brings to my jobs. I will watch for upcoming releases and close this issue in pandas, since it lives in the Arrow repo.
Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
Issue Description
On large datasets (24 million rows), I get `pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays` when performing string operations, with Python 3.11 and pandas 2.2.2. This seems related to this previous issue: #55606
Expected Behavior
String operations should work with the Arrow backend on large datasets.
Installed Versions
INSTALLED VERSIONS
commit : d9cdd2e
python : 3.11.9.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.22621
machine : AMD64
processor : Intel64 Family 6 Model 165 Stepping 2, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.2.2
numpy : 2.1.0
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 72.1.0
pip : 24.2
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 17.0.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None