This repository was archived by the owner on Sep 13, 2025. It is now read-only.

Why convert ASCII to Trytes? #6

@sketch34

I'm trying to understand why the asciiToTrytes functions exist. Please don't interpret this as criticism; I'm just a programmer interested in understanding the tech :)

//      RESULT:
//        The ASCII char "Z" is represented as "IC" in trytes.

I want some help understanding this decision. In the above code, all that is happening is a remapping to a different encoding than ASCII. Essentially it's just encoding ASCII into an arbitrarily chosen tryte alphabet (which is itself ASCII), but it has the effect of doubling the amount of memory needed to store "Z", or indeed any char. To me it appears that this ASCII <-> tryte conversion step simply results in a more memory-hungry representation that is also still ASCII. Why is this done?
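For reference, this is roughly what I understand the conversion to be doing (a minimal sketch using the standard tryte alphabet; the function name here is my own and the real library code may differ in details like input validation):

```js
// Sketch of the ASCII -> trytes mapping: each char code (0-255) is split into
// two base-27 digits, and each digit indexes into the tryte alphabet.
const TRYTE_ALPHABET = '9ABCDEFGHIJKLMNOPQRSTUVWXYZ';

function asciiToTrytesSketch(input) {
  let trytes = '';
  for (const char of input) {
    const code = char.charCodeAt(0);       // e.g. 'Z' -> 90
    const first = code % 27;               //  90 % 27 = 9       -> 'I'
    const second = (code - first) / 27;    // (90 - 9) / 27 = 3  -> 'C'
    trytes += TRYTE_ALPHABET[first] + TRYTE_ALPHABET[second];
  }
  return trytes;
}

console.log(asciiToTrytesSketch('Z')); // "IC": two ASCII chars where there was one
```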

I believe the decision to use trytes came from a desire to anticipate future ternary hardware. But if the hardware natively supports ternary / trytes / trits, then we still don't need the asciiToTrytes functions, right? ASCII chars would be natively represented as trytes by the underlying compiler / interpreter / JIT; this doesn't happen at the software level, it is dictated by the hardware. On binary hardware, all you can do is choose some arbitrary representation of a tryte. But why are we even doing this when it will happen automatically once ternary hardware comes along? You can't force a byte to be a tryte. It feels like we're inventing a problem that doesn't need solving and making things less efficient in the process.

As a final thought, the best way I can think of to achieve this representation in JavaScript is using the DataView / ArrayBuffer interfaces to encode trytes across byte boundaries, e.g. representing the tryte sequence as a sequence of bits of arbitrary length using bitwise ops. This way you waste a maximum of 7 bits at the end of the sequence. Obviously this would still be more CPU-heavy than simply using the original ASCII encoding, plus you're opened up to endianness issues across platforms / languages. If it's data compression we're after, there are far better ways to achieve it.
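To make that concrete, here is a rough sketch of the packing I have in mind (purely illustrative, assuming 5 bits per tryte since a tryte has only 27 possible values; the helper name is made up):

```js
// Rough sketch: pack trytes into an ArrayBuffer across byte boundaries.
// A tryte has 27 possible values, so 5 bits each (2^5 = 32 >= 27).
const TRYTE_ALPHABET = '9ABCDEFGHIJKLMNOPQRSTUVWXYZ';

function packTrytes(trytes) {
  const bitsPerTryte = 5;
  const totalBits = trytes.length * bitsPerTryte;
  const view = new DataView(new ArrayBuffer(Math.ceil(totalBits / 8)));

  let bitOffset = 0;
  for (const tryte of trytes) {
    const value = TRYTE_ALPHABET.indexOf(tryte); // 0..26
    // Write the 5 bits of `value`, most significant first, spilling across bytes.
    for (let i = bitsPerTryte - 1; i >= 0; i--, bitOffset++) {
      const bit = (value >> i) & 1;
      const byteIndex = bitOffset >> 3;
      view.setUint8(byteIndex, view.getUint8(byteIndex) | (bit << (7 - (bitOffset & 7))));
    }
  }
  return view.buffer;
}

// "IC" (2 trytes) needs 10 bits, so it fits in 2 bytes with 6 bits wasted at the end.
console.log(new Uint8Array(packTrytes('IC'))); // bytes: [72, 192]
```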
