Performance: 70× faster Lambda invocation path and interpretation preferred for Eval. #370
Thank you for your great work maintaining such a high-quality open-source library. I've used it for a while and really appreciate all the effort that has gone into it.
In our scenario, we execute a small set of identical expressions under very high concurrency, on the order of 100,000 invocations per second.
To support this, we already cache the `Lambda` instance and reuse it across calls. However, we found that the current `Lambda.Invoke` path leaves some room for optimization in extremely hot paths: each invocation goes through `DynamicInvoke` and repeated LINQ allocations. This PR removes a hot LINQ query (reducing allocations and, with them, GC pressure), introduces a fast invoker path for `Lambda` that replaces `DynamicInvoke`, and adds a "prefer interpretation" option for `Eval`, reducing allocations and improving performance in high-frequency scenarios.

Benchmark Scenario
We used a BenchmarkDotNet benchmark that parses an expression once, caches the resulting `Lambda`, and invokes it repeatedly.
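A minimal sketch of that setup, assuming DynamicExpresso's public `Interpreter`/`Parameter`/`Lambda` API (the expression and parameter names are illustrative, not the exact code we measured):

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using DynamicExpresso;

[MemoryDiagnoser]
public class CachedLambdaBenchmark
{
    private Lambda _lambda;

    [GlobalSetup]
    public void Setup()
    {
        // Parse once and cache the Lambda, mirroring our production usage.
        var interpreter = new Interpreter();
        _lambda = interpreter.Parse("(x + y) * 2",
            new Parameter("x", typeof(int)),
            new Parameter("y", typeof(int)));
    }

    [Benchmark]
    public object InvokeCachedLambda() => _lambda.Invoke(1, 2);
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<CachedLambdaBenchmark>();
}
```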
Results
Before optimization:
After optimization:
We see a dramatic reduction in both latency (70×) and allocations (−99%) for this hot-path scenario after the fast invoker optimization.
By the way, `Eval()` is also about 1.7× faster using interpretation instead of compilation. This doesn't matter much for us since we cache the lambda, but it's simple to add. I see there was a related discussion in #362.
What This PR Changes
1. Optimize the `Lambda` invocation path and reduce allocations

Goal: avoid repeated LINQ allocations and heavy `DynamicInvoke` usage on every call, especially when the same lambda is invoked extremely frequently with consistent argument shapes.

Concretely:

Pre-snapshot and cache parameter metadata in `Lambda`:
- Convert `DeclaredParameters`/`UsedParameters` to arrays and cache the corresponding `ParameterExpression` instances.
- Precompute the mapping "used parameter index → declared parameter index" so we don't have to enumerate and look up parameters on each invocation.
Introduce a fast path for invocation in declared-parameter order:
- Add a fast invocation delegate (e.g. `_fastInvokerFromDeclared`) built from an expression tree that takes an `object[]` and performs strongly typed invocation logic (see the sketch after this list).
- When the number and types of the arguments exactly match the expected parameters, we go through this fast path, avoiding `DynamicInvoke`, repeated boxing/unboxing, and extra allocations.
- If the arguments do not match (wrong count or incompatible types), we safely fall back to the original `DynamicInvoke` path to preserve behavior and exception semantics.
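For illustration, here is a self-contained sketch of the technique: building a typed invoker from an expression tree so calls go through a compiled delegate instead of `DynamicInvoke`. `FastInvokerFactory` is a hypothetical name for this sketch, not the PR's actual code:

```csharp
using System;
using System.Linq.Expressions;

static class FastInvokerFactory
{
    // Builds a Func<object[], object> that invokes 'target' through a compiled
    // expression tree instead of Delegate.DynamicInvoke. Each args[i] is cast
    // to the declared parameter type, so there is no reflection on the hot path.
    public static Func<object[], object> Create(Delegate target)
    {
        var args = Expression.Parameter(typeof(object[]), "args");
        var parameters = target.Method.GetParameters();

        // (T0)args[0], (T1)args[1], ... in declared-parameter order.
        var typedArgs = new Expression[parameters.Length];
        for (int i = 0; i < parameters.Length; i++)
        {
            typedArgs[i] = Expression.Convert(
                Expression.ArrayIndex(args, Expression.Constant(i)),
                parameters[i].ParameterType);
        }

        // Invoke the strongly typed delegate; box the result back to object.
        var call = Expression.Invoke(Expression.Constant(target), typedArgs);
        Expression body = target.Method.ReturnType == typeof(void)
            ? Expression.Block(call, Expression.Constant(null, typeof(object)))
            : Expression.Convert(call, typeof(object));

        return Expression.Lambda<Func<object[], object>>(body, args).Compile();
    }
}
```

For example, `FastInvokerFactory.Create((Func<int, int, int>)((x, y) => (x + y) * 2))` yields a `Func<object[], object>` whose invocation costs a cast per argument rather than a reflective `DynamicInvoke`.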
Optimize the `Invoke` overloads:
- `Invoke(IEnumerable<Parameter>)`: replace the LINQ-based matching with an implementation based on the cached `_usedParameters` mapping. When the parameters fully match, route to the fast path; otherwise, fall back to the existing logic.
- `Invoke(object[] args)`: build the invocation argument array directly in declared-parameter order and reuse the fast path, falling back only when argument counts or types do not match. (A sketch of this routing follows.)
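A rough sketch of how the cached index mapping and the type check can route between the fast path and the fallback; the class shape and field names here are illustrative only, not the actual internals:

```csharp
using System;

// Illustrative shape of the cached metadata and fast-path routing.
sealed class CachedLambdaSketch
{
    private readonly Type[] _declaredTypes;  // declared parameter types, in order
    private readonly int[] _usedToDeclared;  // used parameter index -> declared slot
    private readonly Func<object[], object> _fastInvoker;
    private readonly Delegate _delegate;     // original compiled delegate

    public CachedLambdaSketch(Type[] declaredTypes, int[] usedToDeclared,
        Func<object[], object> fastInvoker, Delegate del)
    {
        _declaredTypes = declaredTypes;
        _usedToDeclared = usedToDeclared;
        _fastInvoker = fastInvoker;
        _delegate = del;
    }

    public object Invoke(object[] usedArgs)
    {
        // Place each used argument into its declared slot: no LINQ, no name lookups.
        var declared = new object[_declaredTypes.Length];
        for (int i = 0; i < usedArgs.Length; i++)
            declared[_usedToDeclared[i]] = usedArgs[i];

        // Fast path only when every slot matches its declared type;
        // otherwise fall back to DynamicInvoke to keep the old semantics.
        return ArgumentsMatch(declared)
            ? _fastInvoker(declared)
            : _delegate.DynamicInvoke(declared);
    }

    private bool ArgumentsMatch(object[] args)
    {
        for (int i = 0; i < args.Length; i++)
        {
            var t = _declaredTypes[i];
            if (args[i] is null)
            {
                // null is acceptable only for reference or nullable types.
                if (t.IsValueType && Nullable.GetUnderlyingType(t) is null)
                    return false;
            }
            else if (!t.IsInstanceOfType(args[i]))
            {
                return false;
            }
        }
        return true;
    }
}
```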
Overall, this significantly reduces per-call allocations and improves performance in high-frequency, cached-lambda scenarios.
2. Adjust the default `Eval` behavior to favor interpretation

Goal: improve performance for typical `Eval` scenarios, which are often one-off evaluations where compilation overhead dominates.

Changes:
- `Interpreter.Eval(string, Type, params Parameter[])` is updated to:
  - call `ParseAsLambda(..., preferInterpretation: true)`, and
  - then execute the resulting `Lambda` via `lambda.Invoke(parameters)`.

From a library user's perspective, the public API stays the same, but:
- The default evaluation strategy for `Eval` becomes interpretation-first.
- This reduces IL generation and JIT overhead, which is especially beneficial when `Eval` is used frequently in hot paths or in environments where startup latency and memory pressure matter.
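To illustrate the underlying mechanism, here is the BCL's `LambdaExpression.Compile(preferInterpretation: true)` overload, which is presumably what the option opts into; the expression below is hand-built rather than parser output:

```csharp
using System;
using System.Linq.Expressions;

class EvalInterpretationDemo
{
    static void Main()
    {
        // A small tree comparable to what the parser produces for "x * 2 + 1".
        var x = Expression.Parameter(typeof(int), "x");
        var body = Expression.Add(
            Expression.Multiply(x, Expression.Constant(2)),
            Expression.Constant(1));
        var lambda = Expression.Lambda<Func<int, int>>(body, x);

        // Compiled path: emits IL and JITs it. Fast per call, expensive up front.
        var compiled = lambda.Compile();

        // Interpretation-first path: no IL generation or JIT, so a one-off
        // evaluation pays far less startup cost.
        var interpreted = lambda.Compile(preferInterpretation: true);

        Console.WriteLine(compiled(20));    // 41
        Console.WriteLine(interpreted(20)); // 41
    }
}
```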
Compatibility

All changes are limited to internal constructors, private helpers, and invocation internals.
There are "almost" no breaking changes to the public API surface, unless I missed anything.
When the fast path cannot be used (e.g., argument count or type mismatch), the code falls back to the original `DynamicInvoke` logic, preserving:
- exception types,
- observable behavior, and
- compatibility with existing code.
Thank you again for providing and maintaining this project. I hope these optimizations are useful, and I'm happy to adjust the implementation if you have any suggestions or style preferences.