
Conversation

@OBorce OBorce (Collaborator) commented Sep 30, 2025

No description provided.

Ericson2314 and others added 30 commits January 26, 2023 09:53
 - All Pull requests are run in GitHub Actions

 - All pushes to `main` are run in GitHub Actions
use dedicated sections for each device
@OBorce OBorce force-pushed the feature/mintlayer-app branch 12 times, most recently from ab1fa35 to 877082e on October 26, 2025 23:47
@OBorce OBorce force-pushed the feature/mintlayer-app branch from 877082e to 2fa1c33 on October 28, 2025 09:02
src/main.rs Outdated
Comment on lines 173 to 197
(Ins::GET_VERSION, P1_GET_VERSION, P2_DONE) => Ok(Instruction::GetVersion),
(Ins::APP_NAME, P1_APP_NAME, P2_DONE) => Ok(Instruction::GetAppName),
(Ins::PUB_KEY, p1, P2_DONE) => {
let p1: PubKeyP1 = p1.try_into()?;
Ok(Instruction::GetPubkey {
display: p1.display(),
})
}
(Ins::SIGN_TX, p1, P2_SIGN_MORE | P2_DONE) => Ok(Instruction::SignTx {
p1: p1.try_into()?,
more: value.p2 == P2_SIGN_MORE,
}),
(Ins::SIGN_MSG, P1_SIGN_START, P2_DONE)
| (Ins::SIGN_MSG, P1_SIGN_NEXT..=P1_SIGN_MAX_CHUNKS, P2_DONE | P2_SIGN_MORE) => {
Ok(Instruction::SignMessage {
chunk: value.p1,
more: value.p2 == P2_SIGN_MORE,
})
}
(
Ins::GET_VERSION | Ins::APP_NAME | Ins::PUB_KEY | Ins::SIGN_TX | Ins::SIGN_MSG,
_,
_,
) => Err(StatusWord::WrongP1P2),
(_, _, _) => Err(StatusWord::InsNotSupported),

This is too messy. I think we should go with a more layered approach.

  1. Since p2 values are universal, we can have a lower layer that handles APDUs and collects one or more of them into a single "low-level instruction", which would consist of ins, p1, and the data (possibly collected from multiple chunks).
    This layer doesn't need to know what those ins/p1/data mean; it should just look at p2: if p2 says "more chunks", then it should expect another APDU with the same ins/p1, collecting the received data in a "context" (this "context" should be separate from any other "context" that a higher-level layer may need).
    Btw, on each APDU that says "more chunks" IMO it's better to return a special status word saying "expecting more chunks", so that the host can do sanity checks ensuring that it and the app are in sync.
  2. Once a full "low-level instruction" (ins, p1 and full data) has been collected, it can be passed to the higher-level layer (also, perhaps p1 should just be named "parameter" at this point). This layer will interpret ins/p1, decode the data accordingly and produce a "high-level instruction" (need to come up with decent terminology, e.g. one of them may be called "command" and the other "instruction"), which should be a Rust enum containing all the data in the decoded form.
  3. The "high-level instruction" is then passed to the handler, which returns Result<Response, Error>, where both Response and Error are Rust enums. This Result can then be converted to a status word and response data, which can be sent back to host.
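A rough sketch of what this layering could look like, in plain Rust. All names, instruction codes and status words below (Apdu, ChunkCollector, Command, Instruction, ExpectingMoreChunks, the 0x03..0x06 ins values) are placeholders for illustration, not the app's actual types or constants:

```rust
// Layer 1: assemble one or more APDUs into a single low-level "command"
// (ins, parameter, full data). Only p2 is inspected at this layer.
const P2_DONE: u8 = 0x00;
const P2_MORE: u8 = 0x80;

pub struct Apdu {
    pub ins: u8,
    pub p1: u8,
    pub p2: u8,
    pub data: Vec<u8>,
}

/// A fully assembled low-level command: ins, parameter and the concatenated data.
pub struct Command {
    pub ins: u8,
    pub param: u8,
    pub data: Vec<u8>,
}

pub enum StatusWord {
    ExpectingMoreChunks, // returned after each non-final chunk so the host can check it is in sync
    WrongP1P2,
    InsNotSupported,
}

#[derive(Default)]
pub struct ChunkCollector {
    pending: Option<Command>,
}

impl ChunkCollector {
    /// Feed one APDU in; returns Ok(Some(..)) once the final chunk has arrived.
    /// Ok(None) means "expecting more chunks" (the caller replies with
    /// StatusWord::ExpectingMoreChunks).
    pub fn push(&mut self, apdu: Apdu) -> Result<Option<Command>, StatusWord> {
        let Apdu { ins, p1, p2, data } = apdu;

        // Start a new command on the first chunk, or append to the pending one.
        let cmd = match self.pending.take() {
            None => Command { ins, param: p1, data },
            Some(mut cmd) => {
                // Every chunk must keep the same ins/p1 as the first APDU.
                if cmd.ins != ins || cmd.param != p1 {
                    return Err(StatusWord::WrongP1P2);
                }
                cmd.data.extend_from_slice(&data);
                cmd
            }
        };

        match p2 {
            P2_MORE => {
                // More chunks to come: stash the partial command.
                self.pending = Some(cmd);
                Ok(None)
            }
            P2_DONE => Ok(Some(cmd)), // complete; hand it to the next layer
            _ => Err(StatusWord::WrongP1P2),
        }
    }
}

// Layer 2: interpret ins/param and decode the collected data into a high-level
// instruction (the ins values here are made up).
pub enum Instruction {
    GetVersion,
    GetAppName,
    GetPubkey { display: bool },
    SignTx { data: Vec<u8> },
}

pub fn decode(cmd: Command) -> Result<Instruction, StatusWord> {
    match (cmd.ins, cmd.param) {
        (0x03, 0x00) => Ok(Instruction::GetVersion),
        (0x04, 0x00) => Ok(Instruction::GetAppName),
        (0x05, p) => Ok(Instruction::GetPubkey { display: p == 0x01 }),
        (0x06, 0x00) => Ok(Instruction::SignTx { data: cmd.data }),
        _ => Err(StatusWord::InsNotSupported),
    }
}

// Layer 3: a handler takes an Instruction and returns Result<Response, Error>;
// that Result is then converted to a status word plus response data for the host.
```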

Comment on lines 760 to 772
fn process_input_additional_info(
ctx: &mut TxContext,
additional_info: InputAdditionalInfoReq,
) -> Result<SighashInputCommitment, StatusWord> {
let inp_data = ctx
.inputs_data
.get(ctx.num_prcessed_input_commitments as usize)
.ok_or(StatusWord::WrongContext)?;

let commitment = match additional_info {
InputAdditionalInfoReq::None => SighashInputCommitment::None,
InputAdditionalInfoReq::Utxo { utxo } => {
match &utxo {

All this looks too convoluted IMO. Why don't you do it the same way as in the trezor fw? I.e. where each input already contains all the data, e.g. FillOrder has the balances etc.

I mean, I understand that currently you pass core enum types as part of the messages, but you don't have to do it. You could have a separate "TxInput" enum (with a more message-like name) specifically for the communication with the app, from which both the core's TxInput could be constructed and input commitments could be calculated. IMO it'd be cleaner that way.
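For illustration only, such a message-level input type could look roughly like the sketch below. Every type, field and variant name here is made up (it is not the actual wire format); the point is that both the core TxInput and the sighash input commitment can be derived from one self-contained message:

```rust
// Illustrative message-level input type for host<->app communication.
// It carries everything the app needs, so no separate "additional info"
// request/response round-trip is required per input.

#[derive(Clone)]
pub struct OutPointMsg {
    pub source_id: [u8; 32],
    pub index: u32,
}

#[derive(Clone)]
pub struct TxOutputMsg {
    pub value: u128,
    pub destination: Vec<u8>,
}

pub enum TxInputMsg {
    Utxo {
        outpoint: OutPointMsg,
        // The spent output is included directly, so the commitment can be
        // computed without asking the host for it separately.
        utxo: TxOutputMsg,
    },
    Account {
        nonce: u64,
        delegation_id: [u8; 32],
        amount: u128,
    },
    FillOrder {
        order_id: [u8; 32],
        fill_amount: u128,
        // Balances travel with the input itself, similar to the Trezor fw messages.
        ask_balance: u128,
        give_balance: u128,
    },
}

/// Placeholder stand-in for the core commitment type, just for this sketch.
pub enum InputCommitmentMsg {
    None,
    Utxo(TxOutputMsg),
    FillOrder { ask_balance: u128, give_balance: u128 },
}

impl TxInputMsg {
    /// The commitment is derived locally from the message; a conversion into
    /// the core TxInput type could live next to this in the same way.
    pub fn input_commitment(&self) -> InputCommitmentMsg {
        match self {
            TxInputMsg::Utxo { utxo, .. } => InputCommitmentMsg::Utxo(utxo.clone()),
            TxInputMsg::Account { .. } => InputCommitmentMsg::None,
            TxInputMsg::FillOrder { ask_balance, give_balance, .. } => {
                InputCommitmentMsg::FillOrder {
                    ask_balance: *ask_balance,
                    give_balance: *give_balance,
                }
            }
        }
    }
}
```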

@OBorce OBorce force-pushed the feature/mintlayer-app branch from 372ff0d to efb1812 on December 15, 2025 09:06
@OBorce OBorce force-pushed the feature/mintlayer-app branch from efb1812 to f0a534a on December 15, 2025 09:34
@OBorce OBorce force-pushed the feature/mintlayer-app branch from 98949a4 to 5faeaaf on December 19, 2025 02:36
@ImplOfAnImpl ImplOfAnImpl (Collaborator) left a comment

I didn't have enough time to look through all the code; I'll continue after returning from my vacation.

Comment on lines 11 to 15
# By default, heap size is enforced to 8192 bytes.
# Authorized values are [2048, 4096, 8192, 16384, 24576]
# Uncomment the following lines to set the heap size to 4096 bytes for instance
[env]
HEAP_SIZE = "16384"

Plz remove the original comment and write our own, explaining why we need this particular heap size.
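For example, the replacement could take a shape like the sketch below. The rationale in the comment is a placeholder; the actual reason for needing 16384 bytes has to come from the app's own measurements:

```toml
[env]
# The SDK default of 8192 bytes is not enough for <fill in: the largest allocation,
# e.g. the buffers used while streaming/decoding a transaction>, so we use the next
# allowed size. Allowed values: 2048, 4096, 8192, 16384, 24576.
HEAP_SIZE = "16384"
```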

bech32 = { version = "0.11", default-features = false, features = ["alloc"] }
chrono = { version = "0.4", default-features = false, features = ["alloc"] }

mintlayer-core-primitives = { git = "https://github.com/mintlayer/mintlayer-core-primitives", branch = "feature/ledger-coin-type" }

Plz don't forget to update this.
Also, IMO it's better to specify a specific revision and mention the commit message in a comment.
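For example, something along these lines (the revision hash and the commit message in the comment are placeholders, not a real commit):

```toml
# Pinned to an exact commit instead of a branch; the comment records which commit
# it is: "<commit message of the pinned revision>"
mintlayer-core-primitives = { git = "https://github.com/mintlayer/mintlayer-core-primitives", rev = "0123456789abcdef0123456789abcdef01234567" }
```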

@@ -0,0 +1,44 @@
name: Build and run functional tests using ragger through reusable workflow

Rename the file to build_and_run_functional_tests.yml?

Cargo.toml Outdated
Comment on lines 27 to 29
[profile.release]
opt-level = 'z'
lto = true

Another thing worth adding here is `codegen-units = 1`.
E.g. I tried building for apex_p. Before:

# llvm-size /app/target/apex_p/release/mintlayer-app
   text	  data	   bss	   dec	   hex	filename
 145080	  1024	 19576	165680	 28730	/app/target/apex_p/release/mintlayer-app

and with `codegen-units = 1`:

# llvm-size /app/target/apex_p/release/mintlayer-app
   text	  data	   bss	   dec	   hex	filename
 136224	  1024	 19576	156824	 26498	/app/target/apex_p/release/mintlayer-app

I.e. the app will occupy 136224+1024 bytes of flash instead of 145080+1024.
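The resulting profile section would then look roughly like this:

```toml
[profile.release]
opt-level = 'z'
lto = true
# One codegen unit per crate: slower compilation, but smaller and better-optimized code.
codegen-units = 1
```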

P.S. The BSS section is the amount of RAM that the app's static data will occupy (it will be filled with zeroes at startup). I wonder whether this value is high and, if so, whether we should look for ways to reduce it. IMO it'd make sense to at least look at other apps' binaries (e.g. Ethereum and Bitcoin) and check their BSS sections (we'd need to build those apps from source for this).
One possible way of trying to reduce it would be to go through all dependencies and turn off all unneeded default features.
