New morton class with arithmetic and comparison operators #860


Open · wants to merge 29 commits into master

Conversation

Fletterio
Contributor

Description

Adds a new class for 2-, 3-, and 4-dimensional Morton codes, with arithmetic and comparison operators.
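
For context, a Morton code interleaves the bits of each coordinate so that nearby points map to nearby code values. A minimal standalone sketch of the 2D case (plain C++, following the same mask sequence as the coding masks in this PR; not the PR's actual API):

#include <cstdint>

// Spread the low 16 bits of x so a zero bit sits between every pair of bits
static uint32_t spreadBits(uint32_t x)
{
    x &= 0x0000FFFFu;
    x = (x | (x << 8)) & 0x00FF00FFu;
    x = (x | (x << 4)) & 0x0F0F0F0Fu;
    x = (x | (x << 2)) & 0x33333333u;
    x = (x | (x << 1)) & 0x55555555u;
    return x;
}

// 2D Morton code: even bits come from x, odd bits from y
static uint32_t morton2D(uint16_t x, uint16_t y)
{
    return spreadBits(x) | (spreadBits(y) << 1);
}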

Testing

TODO

TODO list:

Need to make sure all operators work properly before merging

Comment on lines 355 to 358
if (extractHighestBit<Bits, D, storage_t>(thisCoord) != _static_cast<storage_t>(uint64_t(0)))
thisCoord = thisCoord | ~leftShift(Mask, i);
if (extractHighestBit<Bits, D, storage_t>(rhsCoord) != _static_cast<storage_t>(uint64_t(0)))
rhsCoord = rhsCoord | ~leftShift(Mask, i);


this can be done branchlessly with

mix(coord,~impl::mask_v<Bits,D>,vector<bool,D>(sign_mask_v<Bits,D> & coord));

also the problem is that you're still performing the comparison on storage_t (which is always uint64_t) and not int64_t

and you're adding 1 bits everywhere whenever the number is negative, which makes it compare as larger.

you should have caught the broken negative-number comparisons in your example test suite.
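
A scalar sketch of the branchless sign-extension being suggested (mask and signBit stand in for impl::mask_v and sign_mask_v; hypothetical names, not the PR's helpers):

uint64_t signExtendCoord(uint64_t coord, uint64_t mask, uint64_t signBit)
{
    // all-ones when the sign bit is set, all-zeros otherwise -- no branch
    const uint64_t negative = uint64_t(0) - uint64_t((coord & signBit) != 0);
    // fill the bit positions outside the coordinate's mask with 1s
    return coord | (negative & ~mask);
}

As the comment says, the result must then be compared as a signed integer (or sign-remapped first), because filling with 1s makes the unsigned value larger.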


We actually have some code for our legacy deprecated SSE3 vectors which fashioned an unsigned comparison out of a signed one (the opposite of your problem)

if constexpr(!std::is_signed<T>::value) \

the sign mask got XORed into the operand, turning numbers greater than or equal to 2^(N-1) negative

meaning 2^(N-1) got remapped to 0 and 2^N-1 got remapped to 2^(N-1)-1, while 0 got remapped to -2^(N-1) and 2^(N-1)-1 to -1

note that it's enough to apply the same trick to your spread mortons:

  1. XOR in the sign bit (the bit at position (Bits-1)*D+i, i.e. the mask 1<<((Bits-1)*D+i))
  2. still perform the comparison as storage_t (so an unsigned comparison)

What (1) does is remap signed morton codes to unsigned in the following way:

MinSignedMortonPattern -> 0
-1 -> MaxUnsignedMortonPattern>>1
0 -> 1<<(Bits-1)*D
MaxSignedMortonPattern -> MaxUnsignedMortonPattern
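
The remap is easy to sanity-check exhaustively at a small width; a sketch for N = 8 (plain C++):

#include <cassert>
#include <cstdint>

int main()
{
    const uint8_t S = 0x80; // sign bit for N = 8
    for (int a = -128; a < 128; a++)
        for (int b = -128; b < 128; b++)
        {
            // unsigned comparison of the XORed bit patterns...
            const bool viaRemap = uint8_t(uint8_t(a) ^ S) < uint8_t(uint8_t(b) ^ S);
            // ...agrees with the signed comparison of the original values
            assert(viaRemap == (a < b));
        }
}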

…c and generic ternary operator that should work for all compatible types, address PR review comments
@@ -11,6 +11,7 @@
#define NBL_CONSTEXPR constexpr // TODO: rename to NBL_CONSTEXPR_VAR
#define NBL_CONSTEXPR_FUNC constexpr
#define NBL_CONSTEXPR_STATIC constexpr static
#define NBL_CONSTEXPR_INLINE constexpr inline
Member


constexpr implies inline for functions (not variables)

([dcl.constexpr], §7.1.5/2 in the C++11 standard): "constexpr functions and constexpr constructors are implicitly inline (7.1.2)."
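
In other words, for a header-only constexpr function the extra keyword buys nothing; a two-line illustration (assuming a plain C++ header):

// some_header.h -- safe to include from many translation units: a constexpr
// function is implicitly inline, so no ODR violation and no `inline` needed
constexpr int answer() { return 42; }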


you can use NBL_CONSTEXPR_FUNC instead


seems that NBL_CONSTEXPR_INLINE_FUNC and NBL_CONSTEXPR_STATIC_INLINE_FUNC should be removed

the constexpr inline and constexpr static inline only make sense for variables

Comment on lines 153 to 158
template<typename Condition, typename ResultType>
NBL_CONSTEXPR_INLINE_FUNC ResultType select(Condition condition, ResultType object1, ResultType object2)
{
return cpp_compat_intrinsics_impl::select_helper<Condition, ResultType>::__call(condition, object1, object2);
}


but we already have mix in the #811 branch, so shall we just make select(C,T,F) {return mix_helper<ResultType,Condition>::__call(F,T,C);} in the future?

Contributor Author


I can drop this and just use mix, yeah

Contributor Author


ok, the thing is that select can either:

- take a bool and return just one of the objects entirely
- take a vector<bool, N> and return a mix of each object (provided the objects are vectors as well)

so the latter does exactly the same as mix, but it can also act as the usual ternary ?:
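
A plain C++ stand-in for those two behaviours (std::array instead of the actual vector types; a sketch, not the Nabla helpers):

#include <array>
#include <cstddef>

// scalar condition: picks one object wholesale, like the ternary operator
template<typename T>
T select(bool condition, T a, T b)
{
    return condition ? a : b;
}

// per-component condition: blends the two vectors, which is exactly mix
template<typename T, std::size_t N>
std::array<T, N> select(const std::array<bool, N>& condition, const std::array<T, N>& a, const std::array<T, N>& b)
{
    std::array<T, N> out;
    for (std::size_t i = 0; i < N; i++)
        out[i] = condition[i] ? a[i] : b[i];
    return out;
}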


aaah ok, mix doesn't work on structs; coordinate with @Przemog1 and @keptsecret later, so then mix would need to be done in terms of select_helper

Comment on lines 238 to 246
#define NBL_EMULATED_VECTOR_OPERATOR(OP, ENABLE_CONDITION) NBL_CONSTEXPR_INLINE_FUNC enable_if_t< ENABLE_CONDITION , this_t> operator##OP (component_t val)\
{\
this_t output;\
[[unroll]]\
for (uint32_t i = 0u; i < CRTP::Dimension; ++i)\
output.setComponent(i, CRTP::getComponent(i) OP val);\
return output;\
}\
NBL_CONSTEXPR_INLINE_FUNC enable_if_t< ENABLE_CONDITION , this_t> operator##OP (this_t other)\


enable_if_t does not work if there's no "extra" unresolved/deducible template parameter
https://godbolt.org/z/h3rjcbdxd

and since templated operators are busted in DXC, you need to basically add bool IsComponentTypeIntegral in the style of bool IsComponentTypeFundamental and make 4 partial specializations (on/off for each bool)
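
A minimal reproduction of why the plain form is a hard error, plus the standard C++ fix with a dummy deducible parameter (which DXC then rules out for operators, hence the partial-specialization advice):

#include <type_traits>

template<typename T>
struct Vec
{
    // Hard error for Vec<float>: writing the return type as
    // std::enable_if_t<std::is_integral_v<T>, Vec> leaves nothing to deduce,
    // so the failure happens at class instantiation instead of SFINAE.
    // A defaulted dummy template parameter restores SFINAE:
    template<typename U = T, std::enable_if_t<std::is_integral_v<U>, int> = 0>
    Vec operator%(T) const { return *this; }
};

int main()
{
    Vec<int> a;
    Vec<int> b = a % 3;  // OK, T is integral
    Vec<float> c;        // fine to instantiate; c % 1.f would not compile
    (void)b; (void)c;
}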

@@ -428,7 +478,7 @@ namespace impl
template<typename To, typename From>
struct static_cast_helper<emulated_vector_t2<To>, vector<From, 2>, void>
{
static inline emulated_vector_t2<To> cast(vector<From, 2> vec)
NBL_CONSTEXPR_STATIC_INLINE emulated_vector_t2<To> cast(vector<From, 2> vec)


shouldn't be NBL_CONSTEXPR_STATIC_INLINE but NBL_CONSTEXPR_STATIC_FUNC or NBL_CONSTEXPR_STATIC_METHOD

@@ -132,15 +130,19 @@ struct emulated_int64_base
{
// Either the topmost bits, when interpreted with correct sign, are less than those of `rhs`, or they're equal and the lower bits are less
// (lower bits are always positive in both unsigned and 2's complement so comparison can happen as-is)
const bool MSBEqual = __getMSB() == rhs.__getMSB();
const bool MSB = Signed ? (_static_cast<int32_t>(__getMSB()) < _static_cast<int32_t>(rhs.__getMSB())) : (__getMSB() < rhs.__getMSB());


probably want bit_cast instead of _static_cast
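
i.e. reinterpret the stored bits as signed rather than value-converting them; in C++ that would be std::bit_cast (a sketch):

#include <bit>
#include <cstdint>

// signed comparison of two MSB words stored as uint32_t
bool msbLess(uint32_t a, uint32_t b)
{
    return std::bit_cast<int32_t>(a) < std::bit_cast<int32_t>(b);
}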

*/
NBL_CONSTEXPR_STATIC_INLINE_FUNC portable_vector_t<encode_t, Dim> interleaveShift(NBL_CONST_REF_ARG(decode_t) decodedValue)
{
NBL_CONSTEXPR_STATIC encode_t EncodeMasks[CodingStages + 1] = { _static_cast<encode_t>(coding_mask_v<Dim, Bits, 0>), _static_cast<encode_t>(coding_mask_v<Dim, Bits, 1>), _static_cast<encode_t>(coding_mask_v<Dim, Bits, 2>) , _static_cast<encode_t>(coding_mask_v<Dim, Bits, 3>) , _static_cast<encode_t>(coding_mask_v<Dim, Bits, 4>) , _static_cast<encode_t>(coding_mask_v<Dim, Bits, 5>) };


was that really less effort to type out than macroing your loop body and "hand unrolling" it?


{
NBL_CONSTEXPR_STATIC encode_t EncodeMasks[CodingStages + 1] = { _static_cast<encode_t>(coding_mask_v<Dim, Bits, 0>), _static_cast<encode_t>(coding_mask_v<Dim, Bits, 1>), _static_cast<encode_t>(coding_mask_v<Dim, Bits, 2>) , _static_cast<encode_t>(coding_mask_v<Dim, Bits, 3>) , _static_cast<encode_t>(coding_mask_v<Dim, Bits, 4>) , _static_cast<encode_t>(coding_mask_v<Dim, Bits, 5>) };
left_shift_operator<portable_vector_t<encode_t, Dim> > leftShift;
portable_vector_t<encode_t, Dim> interleaved = _static_cast<portable_vector_t<encode_t, Dim> >(decodedValue)& EncodeMasks[CodingStages];
Member


AFAIK we don't use static_cast to widen or truncate our scalars and vectors, use and specialize promote/truncate instead

Contributor Author


making a truncate then

Comment on lines 102 to 104
NBL_CONSTEXPR_STATIC uint16_t Stages = mpl::log2_ceil_v<Bits>;
[[unroll]]
for (uint16_t i = Stages; i > 0; i--)


this loop will never unroll: a static const will never be constexpr as a plain variable in a function, @keptsecret got screwed over by this in the HLSL Path Tracer!

Better to unroll by hand, or use mpl::log2_ceil_v<Bits> directly as the initializer of uint16_t i
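
A sketch of the suggested fix, feeding the compile-time constant straight into the loop header so [[unroll]] can actually see a constant bound:

[[unroll]]
for (uint16_t i = mpl::log2_ceil_v<Bits>; i > 0; i--)
{
    // ...same loop body as before...
}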

struct MortonEncoder
{
template<typename decode_t = conditional_t<(Bits > 16), vector<uint32_t, Dim>, vector<uint16_t, Dim> >
NBL_FUNC_REQUIRES(concepts::IntVector<decode_t> && 8 * sizeof(typename vector_traits<decode_t>::scalar_type) >= Bits)


it's actually > Bits+Dim, not >= Bits, because you will be left-shifting the components, and the last one will have its MSB at Bits+Dim-1

Contributor Author


But this is for the decode_t, which immediately gets transformed to a vector of encode_t, which does have enough bits to hold the interleaved and shifted coordinates

Contributor Author


Idk what I was thinking when I wrote this tbh, maybe the check should be the other way around?

8 * sizeof(typename vector_traits<decode_t>::scalar_type) <= max(Bits, 16)

to ensure you don't get an implicit truncation

Contributor Author


or just drop that altogether idk

Comment on lines 125 to 129
encode_t encoded = _static_cast<encode_t>(uint64_t(0));
array_get<portable_vector_t<encode_t, Dim>, encode_t> getter;
[[unroll]]
for (uint16_t i = 0; i < Dim; i++)
encoded = encoded | getter(interleaveShifted, i);


I wouldn't count on the compiler noticing that |0 is the identity and can be optimized out for emulated_uint64_t, so do

encode_t encoded = getter(interleaveShifted, 0);
[[unroll]]
for (uint32_t i = 1; i < Dim; i++)
   encoded = encoded | getter(interleaveShifted, i);

// ----------------------------------------------------------------- MORTON ENCODER ---------------------------------------------------

template<uint16_t Dim, uint16_t Bits, typename encode_t NBL_PRIMARY_REQUIRES(Dimension<Dim> && Dim * Bits <= 64 && 8 * sizeof(encode_t) == mpl::round_up_to_pot_v<Dim * Bits>)
struct MortonEncoder


morton::impl::MortonEncoder... too many mortons

Comment on lines 133 to 139
};

// ----------------------------------------------------------------- MORTON DECODER ---------------------------------------------------

template<uint16_t Dim, uint16_t Bits, typename encode_t NBL_PRIMARY_REQUIRES(Dimension<Dim> && Dim * Bits <= 64 && 8 * sizeof(encode_t) == mpl::round_up_to_pot_v<Dim * Bits>)
struct MortonDecoder
{


why not merge Decoder and Encoder into a single Transcoder?

struct MortonDecoder
{
template<typename decode_t = conditional_t<(Bits > 16), vector<uint32_t, Dim>, vector<uint16_t, Dim> >
NBL_FUNC_REQUIRES(concepts::IntVector<decode_t> && 8 * sizeof(typename vector_traits<decode_t>::scalar_type) >= Bits)


same thing with >=Bits needing to be > Bits+Dim


actually same comments as for the interleaveShift function

Comment on lines 151 to 152
setter(decoded, i, encodedValue);
decoded = rightShift(decoded, _static_cast<vector<uint32_t, Dim> >(vector<uint32_t, 4>(0, 1, 2, 3)));
Member


could just write setter(decoded, i, encodedValue>>i);

Comment on lines 60 to 76
NBL_HLSL_MORTON_SPECIALIZE_FIRST_CODING_MASK(2, 0x5555555555555555) // Groups bits by 1 on, 1 off
NBL_HLSL_MORTON_SPECIALIZE_CODING_MASK(2, 1, uint64_t(0x3333333333333333)) // Groups bits by 2 on, 2 off
NBL_HLSL_MORTON_SPECIALIZE_CODING_MASK(2, 2, uint64_t(0x0F0F0F0F0F0F0F0F)) // Groups bits by 4 on, 4 off
NBL_HLSL_MORTON_SPECIALIZE_CODING_MASK(2, 3, uint64_t(0x00FF00FF00FF00FF)) // Groups bits by 8 on, 8 off
NBL_HLSL_MORTON_SPECIALIZE_CODING_MASK(2, 4, uint64_t(0x0000FFFF0000FFFF)) // Groups bits by 16 on, 16 off

NBL_HLSL_MORTON_SPECIALIZE_FIRST_CODING_MASK(3, 0x9249249249249249) // Groups bits by 1 on, 2 off
NBL_HLSL_MORTON_SPECIALIZE_CODING_MASK(3, 1, uint64_t(0x30C30C30C30C30C3)) // Groups bits by 2 on, 4 off
NBL_HLSL_MORTON_SPECIALIZE_CODING_MASK(3, 2, uint64_t(0xF00F00F00F00F00F)) // Groups bits by 4 on, 8 off
NBL_HLSL_MORTON_SPECIALIZE_CODING_MASK(3, 3, uint64_t(0x00FF0000FF0000FF)) // Groups bits by 8 on, 16 off
NBL_HLSL_MORTON_SPECIALIZE_CODING_MASK(3, 4, uint64_t(0xFFFF00000000FFFF)) // Groups bits by 16 on, 32 off

NBL_HLSL_MORTON_SPECIALIZE_FIRST_CODING_MASK(4, 0x1111111111111111) // Groups bits by 1 on, 3 off
NBL_HLSL_MORTON_SPECIALIZE_CODING_MASK(4, 1, uint64_t(0x0303030303030303)) // Groups bits by 2 on, 6 off
NBL_HLSL_MORTON_SPECIALIZE_CODING_MASK(4, 2, uint64_t(0x000F000F000F000F)) // Groups bits by 4 on, 12 off
NBL_HLSL_MORTON_SPECIALIZE_CODING_MASK(4, 3, uint64_t(0x000000FF000000FF)) // Groups bits by 8 on, 24 off
NBL_HLSL_MORTON_SPECIALIZE_CODING_MASK(4, 4, uint64_t(0x000000000000FFFF)) // Groups bits by 16 on, 48 off (unused but here for completion + likely keeps compiler from complaining)


ull suffixes on the mask literals please

Comment on lines +162 to +163
// If `Bits` is greater than half the bitwidth of the decode type, then we can avoid `&`ing against the last mask since duplicated MSB get truncated
NBL_IF_CONSTEXPR(Bits > 4 * sizeof(typename vector_traits<decode_t>::scalar_type))


I think that > should be a >= because if you have a 16-bit morton (e.g. Dim=2 stored in a uint32_t) getting decoded into a vector of uint16_t, you'll have a shift by 8 in the final coding round

Contributor Author

Fletterio · Apr 22, 2025


But the comparison is against half the bitwidth. For example, if decoding to a vector of uint16_t, this decision is made based on whether we have more than 8 bits.

For example, if you have exactly 8 bits, the last shift is by 4. Ignore hex; let's just say the encoded number is ABCDEFGH where each letter represents a binary value. Then in the last round you'll have decoded = 0000ABCD0000EFGH, decoded >> 4 = 00000000ABCD0000 (need 16 bits to hold two 8-bit mortons), and the | between these looks like 0000ABCDABCDEFGH. Here, to get the correct value, I do need to mask off the highest 8 bits.

Now say you have more than half the bitwidth of the decode type, for example a 9-bit morton ABCDEFGHI being decoded to a uint16_t. Here, since we have more than 8 bits, the last round is a shift by 8, so the spacing between bits is also 8, so decoded will look like decoded = 000000000000000A00000000BCDEFGHI (now need 32 bits to hold two 9-bit mortons) and decoded >> 8 = 00000000000000000000000A00000000, so the | between them returns 000000000000000A0000000ABCDEFGHI. Now there's no need to mask, since taking only the lowest 16 bits correctly yields 0000000ABCDEFGHI (the same holds for any value from 10 to 16 bits).
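
That final round is easy to check numerically; a quick sketch of the Bits = 8, uint16_t case described above (plain C++):

#include <cassert>
#include <cstdint>

int main()
{
    const uint16_t abcd = 0b1011, efgh = 0b0110;      // arbitrary nibble values
    const uint16_t decoded = (abcd << 8) | efgh;      // 0000ABCD0000EFGH
    const uint16_t merged = decoded | (decoded >> 4); // 0000ABCDABCDEFGH
    const uint16_t want = (abcd << 4) | efgh;         // ABCDEFGH
    assert((merged & 0x00FF) == want); // masking the top 8 bits recovers it
    assert(merged != want);            // without the mask, garbage bits remain
}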

template<typename I NBL_FUNC_REQUIRES(Comparable<Signed, Bits, storage_t, true, I>)
NBL_CONSTEXPR_STATIC_INLINE_FUNC vector<bool, D> __call(NBL_CONST_REF_ARG(storage_t) value, NBL_CONST_REF_ARG(portable_vector_t<I, D>) rhs)
{
NBL_CONSTEXPR portable_vector_t<storage_t, D> zeros = _static_cast<portable_vector_t<storage_t, D> >(_static_cast<vector<uint64_t, D> >(vector<uint64_t, 4>(0,0,0,0)));


again, this will create a hidden variable with an initializer

Member


you literally have to use a temporary to compare against

or declare the variable as a plain const, not a static const

Contributor Author


wait, what does static const do vs using just const?


read the Discord thread

Comment on lines 218 to 219
NBL_CONSTEXPR_STATIC portable_vector_t<storage_t, D> InterleaveMasks = _static_cast<portable_vector_t<storage_t, D> >(_static_cast<vector<uint64_t, D> >(vector<uint64_t, 4>(coding_mask_v<D, Bits, 0>, coding_mask_v<D, Bits, 0> << 1, coding_mask_v<D, Bits, 0> << 2, coding_mask_v<D, Bits, 0> << 3)));
NBL_CONSTEXPR_STATIC portable_vector_t<storage_t, D> SignMasks = _static_cast<portable_vector_t<storage_t, D> >(_static_cast<vector<uint64_t, D> >(vector<uint64_t, 4>(SignMask<Bits, D>, SignMask<Bits, D> << 1, SignMask<Bits, D> << 2, SignMask<Bits, D> << 3)));


again a plain const is okay, static const is not


also, is there a prettier way to write this? or at least format it (every component on a new line?)

Comment on lines 221 to 224
// Obtain a vector of deinterleaved coordinates and flip their sign bits
const portable_vector_t<storage_t, D> thisCoord = (InterleaveMasks & value) ^ SignMasks;
// rhs already deinterleaved, just have to cast type and flip sign
const portable_vector_t<storage_t, D> rhsCoord = _static_cast<portable_vector_t<storage_t, D> >(rhs) ^ SignMasks;


why are you always flipping signs, regardless of Signed?

Comment on lines 411 to 451
NBL_CONSTEXPR_INLINE_FUNC vector<bool, D> equals(NBL_CONST_REF_ARG(vector<I, D>) rhs) NBL_CONST_MEMBER_FUNC
{
return impl::Equals<Signed, Bits, D, storage_t, BitsAlreadySpread>::__call(value, rhs);
}

NBL_CONSTEXPR_INLINE_FUNC bool operator!=(NBL_CONST_REF_ARG(this_t) rhs) NBL_CONST_MEMBER_FUNC
{
return value != rhs.value;
}

template<bool BitsAlreadySpread, typename I
NBL_FUNC_REQUIRES(impl::Comparable<Signed, Bits, storage_t, BitsAlreadySpread, I>)
NBL_CONSTEXPR_INLINE_FUNC vector<bool, D> notEquals(NBL_CONST_REF_ARG(vector<I, D>) rhs) NBL_CONST_MEMBER_FUNC
{
return !equals<BitsAlreadySpread, I>(rhs);
}

template<bool BitsAlreadySpread, typename I
NBL_FUNC_REQUIRES(impl::Comparable<Signed, Bits, storage_t, BitsAlreadySpread, I>)
NBL_CONSTEXPR_INLINE_FUNC vector<bool, D> less(NBL_CONST_REF_ARG(vector<I, D>) rhs) NBL_CONST_MEMBER_FUNC
{
return impl::LessThan<Signed, Bits, D, storage_t, BitsAlreadySpread>::__call(value, rhs);
}

template<bool BitsAlreadySpread, typename I
NBL_FUNC_REQUIRES(impl::Comparable<Signed, Bits, storage_t, BitsAlreadySpread, I>)
NBL_CONSTEXPR_INLINE_FUNC vector<bool, D> lessEquals(NBL_CONST_REF_ARG(vector<I, D>) rhs) NBL_CONST_MEMBER_FUNC
{
return impl::LessEquals<Signed, Bits, D, storage_t, BitsAlreadySpread>::__call(value, rhs);
}

template<bool BitsAlreadySpread, typename I
NBL_FUNC_REQUIRES(impl::Comparable<Signed, Bits, storage_t, BitsAlreadySpread, I>)
NBL_CONSTEXPR_INLINE_FUNC vector<bool, D> greater(NBL_CONST_REF_ARG(vector<I, D>) rhs) NBL_CONST_MEMBER_FUNC
{
return impl::GreaterThan<Signed, Bits, D, storage_t, BitsAlreadySpread>::__call(value, rhs);
}

template<bool BitsAlreadySpread, typename I
NBL_FUNC_REQUIRES(impl::Comparable<Signed, Bits, storage_t, BitsAlreadySpread, I>)
NBL_CONSTEXPR_INLINE_FUNC vector<bool, D> greaterEquals(NBL_CONST_REF_ARG(vector<I, D>) rhs) NBL_CONST_MEMBER_FUNC


spelling nitpick, those functions are usually called equal without an s at the end
https://registry.khronos.org/OpenGL-Refpages/gl4/html/equal.xhtml

`NBL_CONSTEXPR_FUNC`
Adds `OpUndef` to spirv `intrinsics.hlsl` and `cpp_compat.hlsl`
Adds an explicit `truncate` function for vectors and emulated vectors
Adds a bunch of specializations for vectorial types in `functional.hlsl`
Bugfixes and changes to Morton codes, very close to them working
properly with emulated ints