Starting with 0.5, we follow this versioning scheme:
- We don't bump MAJOR yet.
- We bump MINOR on breaking changes.
- We increase PATCH otherwise.
- Allow NumPy ufuncs to work with `np.ndarray` outputs where operations are clearly defined (i.e. the fletcher array has no nulls).
- Fix return values for `str` functions with `pandas=1.2` and `pyarrow=1`.
- Ensure that parallel variants of `apply_binary_str` actually parallelize.
- Add tests for all `str` functions.
- Fix tests for `pyarrow=0.17.1` and add CI jobs for `0.17.1` and `1.0.1`.
- Implement a faster take for list arrays.
- Use `utf8_is_*` functions from Apache Arrow if available.
- Simplify the `factorize` implementation to work for chunked arrays with more or less than a single chunk.
- Switch to `pandas.NA` as the user-facing null value.
- Add convenience function `fletcher.algorithms.string.apply_binary_str` to apply a binary function on two string columns.
- Return the correct index in functions like `fr_str.extractall`.
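The semantics of a binary string helper like the one above can be sketched in pure Python. This is only an illustration of the intended behaviour (element-wise application with null propagation); fletcher's actual `apply_binary_str` is Numba-accelerated and works on Arrow-backed columns, and the null-handling shown here is an assumption:

```python
def apply_binary_str(left, right, func):
    """Apply ``func`` pairwise to two sequences of strings.

    ``None`` marks a missing value; if either input is missing,
    the result is missing as well (null propagation).
    """
    if len(left) != len(right):
        raise ValueError("inputs must have equal length")
    return [
        None if a is None or b is None else func(a, b)
        for a, b in zip(left, right)
    ]

# Example: element-wise concatenation with null propagation
result = apply_binary_str(
    ["foo", None, "baz"], ["x", "y", None], lambda a, b: a + b
)
# result == ["foox", None, None]
```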
- Create a shallow copy on `.astype(equal dtype, copy=True)`.
- Import `pad_1d` only in older `pandas` versions, otherwise use `get_fill_func`.
- Handle `fr_str.extractall` and similar functions correctly, returning a `pd.DataFrame` containing the corresponding `fletcher` array types.
- Use `binary_contains_exact` from `pyarrow` if available, instead of our own Numba-based implementation.
- Provide two more consistent accessors:
  - `.fr_strx`: Call efficient string functions on `fletcher` arrays, error if not available.
  - `.fr_str`: Call string functions on `fletcher` and `object`-typed arrays, convert to `object` if no `fletcher` function is available.
- Add a Numba-based implementation for `strip`, `slice`, and `replace`.
- Support `LargeListArray` as a backing structure for lists.
- Implement the `isnan` ufunc.
- Release the GIL where possible.
- Register with Dask's `make_array_nonempty` to be able to handle the extension types in `dask`.
- Implement `FletcherBaseArray.__or__` and `FletcherBaseArray.__any__` to support `pandas.Series.replace`.
- Forward the `__array__` protocol directly to Arrow.
- Add a naive implementation for `zfill`.
- Add efficient (Numba-based) implementations for `endswith`, `startswith`, and `contains`.
- Support roundtrips of `pandas.DataFrame` instances with `fletcher` columns through `pyarrow` data structures.
- Move CI to GitHub Actions.
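The split between the strict `.fr_strx` accessor and the falling-back `.fr_str` accessor described above can be sketched as a simple dispatch pattern. This is a pure-Python illustration on plain lists with assumed names; fletcher's real accessors wrap `pandas.Series` objects and dispatch to Arrow- or Numba-backed kernels:

```python
# Illustrative registry of "efficient" string kernels. In fletcher these
# would be Arrow- or Numba-backed functions operating on pyarrow data.
EFFICIENT_KERNELS = {
    "startswith": lambda values, prefix: [
        None if v is None else v.startswith(prefix) for v in values
    ],
}


def fr_strx(values, name, *args):
    """Strict accessor: use the efficient kernel or raise."""
    if name not in EFFICIENT_KERNELS:
        raise NotImplementedError(f"No efficient implementation for {name!r}")
    return EFFICIENT_KERNELS[name](values, *args)


def fr_str(values, name, *args):
    """Lenient accessor: fall back to object-dtype semantics."""
    if name in EFFICIENT_KERNELS:
        return EFFICIENT_KERNELS[name](values, *args)
    # Fallback: call the plain Python string method element-wise,
    # mirroring a conversion to an object-typed array.
    return [None if v is None else getattr(v, name)(*args) for v in values]


data = ["foo", None, "fizz"]
fr_str(data, "startswith", "f")  # uses the efficient kernel
fr_str(data, "upper")            # falls back to object semantics
```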
Major changes:
- We now provide two different extension array implementations. There is now the simpler `FletcherContinuousArray`, which is backed by a `pyarrow.Array` instance and is thus always a continuous memory segment. The initial `FletcherArray`, which is backed by a `pyarrow.ChunkedArray`, is now renamed to `FletcherChunkedArray`. While `pyarrow.ChunkedArray` allows for more flexibility in how the data is stored, the implementation of algorithms is more complex for it. As this hinders contributions and also the adoption in downstream libraries, we now provide both implementations with an equal level of support. We no longer provide the more generally named class `FletcherArray`, as there is no clear opinion on whether it should point to `FletcherContinuousArray` or `FletcherChunkedArray`. As usage increases, we might provide such an alias class again in the future.
- Support for `ArithmeticOps` and `ComparisonOps` on numerical data, as well as numeric reductions such as `sum`. This should allow the use of nullable int and float types for many use cases. Performance of nullable integer columns is on the same level as `pandas.IntegerArray`, as we have similar implementations of the masked arithmetic. In future versions, we plan to delegate the workload to the C++ code of `pyarrow` and expect significant performance improvements through the use of bitmasks instead of bytemasks.
- `any` and `all` are now efficiently implemented on boolean arrays. We blogged about this and how their performance is about twice as fast while only using 1/16 to 1/32 of the RAM of the reference boolean array with missing values in `pandas`. This is because prior to `pandas=1.0` you had to use a float array to get a boolean array that can deal with missing values. In `pandas=1.0` a new `BooleanArray` class was added that improves this situation but also changes a bit of the logic. We will adapt to this class in the next release and also publish new benchmarks.
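The memory argument can be illustrated without Arrow: an Arrow-style boolean array needs two bits per value (one data bit in a packed bitmap plus one validity bit), whereas the pre-1.0 float workaround stored eight bytes per value. Below is a pure-Python sketch of `any`/`all` over such packed bitmaps; it only illustrates the layout and is not fletcher's actual Numba kernel:

```python
def pack_bits(bools):
    """Pack a list of Python bools into a little-endian bitmap (bytes)."""
    out = bytearray((len(bools) + 7) // 8)
    for i, b in enumerate(bools):
        if b:
            out[i // 8] |= 1 << (i % 8)
    return bytes(out)


def bitmap_any(data, valid, length):
    """True if any valid entry is True (nulls are skipped).

    ``length`` is not needed here because padding bits are zero,
    so a byte-wise AND checks eight entries at a time.
    """
    return any((d & v) for d, v in zip(data, valid))


def bitmap_all(data, valid, length):
    """True if every valid entry is True (nulls are skipped)."""
    for i in range(length):
        byte, bit = i // 8, 1 << (i % 8)
        if valid[byte] & bit and not data[byte] & bit:
            return False
    return True


values = [True, None, True, False]
data = pack_bits([bool(v) for v in values])         # nulls stored as False
valid = pack_bits([v is not None for v in values])  # validity bitmap
bitmap_any(data, valid, len(values))  # True
bitmap_all(data, valid, len(values))  # False
```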
New features / performance improvements:
- For `FletcherContinuousArray` in general and all `FletcherChunkedArray` instances with a single chunk, we now provide an efficient implementation of `take`.
- Support for Python 3.8 and Pandas 1.0.
- We now check typing in CI using `mypy` and have annotated the code with type hints. We only plan to mark the package as `py.typed` when `pandas` is also marked as `py.typed`.
- You can query `fletcher` for its version via `fletcher.__version__`.
- Implemented `.str.cat` as `.fr_strx.cat` for arrays with `pa.string()` dtype.
- `unique` is now supported on all array types where `pyarrow` provides a `unique` implementation.
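Why a single chunk makes `take` cheap can be sketched as follows: on one contiguous buffer, `take` is direct indexing, while a multi-chunk array must first resolve each logical index to a (chunk, offset) pair. The pure-Python illustration below is not fletcher's implementation, just the idea:

```python
from bisect import bisect_right


def take_continuous(values, indices):
    """take on a contiguous array: one direct lookup per index."""
    return [values[i] for i in indices]


def take_chunked(chunks, indices):
    """take on a chunked array: each index is mapped to a chunk first."""
    # Cumulative chunk lengths, e.g. [3, 5] for chunks of size 3 and 2.
    offsets = []
    total = 0
    for chunk in chunks:
        total += len(chunk)
        offsets.append(total)
    out = []
    for i in indices:
        chunk_idx = bisect_right(offsets, i)  # binary search per index
        start = offsets[chunk_idx - 1] if chunk_idx else 0
        out.append(chunks[chunk_idx][i - start])
    return out


take_continuous(["a", "b", "c", "d", "e"], [4, 0, 2])   # ['e', 'a', 'c']
take_chunked([["a", "b", "c"], ["d", "e"]], [4, 0, 2])  # ['e', 'a', 'c']
```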
- Drop Python 2 support
- Support for Python 3.7
- Fixed handling of `date` columns due to new default behaviours in `pyarrow`.
Rerelease with the sole purpose of rendering Markdown on PyPI.
Load the README in setup.py to have a description on PyPI.
Initial release of fletcher, based on Pandas 0.23.3 and Apache Arrow 0.9. This release already supports any Apache Arrow type, but the unit tests are still limited to string and date.