BUG: Inconsistent index type when using read_csv with string[pyarrow] dtype #61145
Comments
Thanks for the report!
I think this depends on the context. For example, I think of all the values in a CSV as "data" regardless of how pandas uses it as a column or index, but would indeed differentiate "data" from "index" in the DataFrame constructor.
Is there a reason you would want this behavior? I would think users prefer consistent string dtypes. With the above in mind, I would lean toward having
I usually save without an index and let pandas create one for me when I read in. This was an unusual situation for me, so that's why my experience is that way. Also if you do something like df.astype("string[pyarrow]"), it does not change the index dtype.
This was why I said "... or that the index dtype is always affected by the specified dtype, and that this behavior is the same for read_excel." immediately after that. It's more important that it's consistent than anything else. I did some more testing:
The index dtype does seem to be affected by the dtype parameter in read_csv for more dtypes than just "string[pyarrow]". The str dtype is perhaps the outlier here: if it followed the same logic as float and "string[pyarrow]", the index dtype would be object when using the str dtype.

There is still the issue of read_excel always giving you an int64 index regardless. So dtype behavior within read_csv is not consistent, and dtype behavior between read_csv and read_excel is not consistent. IMO they should be.

What should the behavior be standardized to? I don't have a strong preference, but I would come down on the read_excel behavior of the index being its own thing. This would also be more in line with how the astype method works, that is, it does not change the dtype of your index.
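A sketch of that test with a non-str dtype, using an in-memory CSV (the buffer and column names are illustrative stand-ins for a real file):

```python
import io
import pandas as pd

# Illustrative in-memory CSV; column names are made up for this sketch
CSV = "id,val\n1,1.5\n2,2.5\n"

# With a non-str dtype such as float, read_csv reportedly applies the
# dtype to the index column as well (observed on pandas 2.2.3)
f = pd.read_csv(io.StringIO(CSV), dtype=float, index_col=0)
print(f.index.dtype, f["val"].dtype)
```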
Currently pandas sees:

```python
df = pandas.read_csv(path, dtype=str, index_col=0)
df.index  # Index dtype is int64
```

Also, in case users aren't aware, there is also a
Let's also not forget read_excel: it always gives an int64 index regardless of the specified dtype. I'm mentioning this again because everyone seems to be focusing on read_csv. I don't see why we wouldn't want the two biggest table-reading methods to behave consistently with each other.
Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
Issue Description
I'm not sure if this is intended or not. If you use the index_col parameter in read_csv with the "string[pyarrow]" dtype, the index dtype is string, but if you use the str dtype or don't specify a dtype, it will be int64. In my experience with pandas, the dtype only refers to the data, not the index. I initially had a few bugs stemming from switching to the "string[pyarrow]" dtype and the resulting index dtype change. I also tested read_excel: it produces an int64 index even when specifying the "string[pyarrow]" dtype, so there is also inconsistency when reading different table formats.
Expected Behavior
The expected behavior is either that the index has the same dtype regardless of the dtype specified in read_csv, or that the index dtype is always affected by the specified dtype, and that this behavior is the same for read_excel.
Installed Versions
INSTALLED VERSIONS
commit : 0691c5c
python : 3.10.5
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 12, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 1.24.4
pytz : 2022.1
dateutil : 2.8.2
pip : 25.0.1
Cython : 3.0.11
sphinx : 5.1.1
IPython : 8.21.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : 1.1
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : 4.9.1
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.4
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : 2.0.9
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : 3.2.0
zstandard : None
tzdata : 2024.1
qtpy : 2.4.1
pyqt5 : None