-
Please see https://github.com/GoogleChrome/lighthouse/blob/main/docs/throttling.md#types-of-network-throttling which summarizes the tradeoffs for using each throttling method. I think the following line best answers your question:
> Lighthouse uses simulated throttling because it generally offers a better approximation of real networks than DevTools throttling, but packet-level throttling is still the best approximation of real networks if you are willing to set it up. Lighthouse doesn't use packet-level throttling because it operates at the OS level and would throttle the entire device, not just the page being tested.
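As a minimal sketch of what switching methods looks like in practice (my own example, not part of the reply above; it assumes the `lighthouse` and `chrome-launcher` npm packages and uses `https://example.com` as a placeholder URL), the throttling method can be selected per run through the Node API's `throttlingMethod` flag:

```ts
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// Run a performance-only audit with the given throttling method.
// 'simulate' is Lighthouse's default; 'devtools' applies throttling in the
// browser; 'provided' assumes throttling is handled outside Lighthouse
// (e.g. packet-level throttling at the OS level).
async function runWithThrottlingMethod(method: 'simulate' | 'devtools' | 'provided') {
  const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']});
  try {
    const result = await lighthouse('https://example.com', {
      port: chrome.port,
      onlyCategories: ['performance'],
      throttlingMethod: method,
    });
    return result?.lhr;
  } finally {
    await chrome.kill();
  }
}
```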
-
Hey @adamraine, here are the results I am seeing with each throttling method:

**Simulated**

| (index) | Values |
| --- | --- |
| totalScore | 55.8 |
| fcp | 1960.72 |
| lcp | 2766.15 |
| tbt | 5049.36 |
| cls | 0.022 |
| speedIndex | 6783.28 |
| deviceBenchmarkIndex | 804.1 |

**Devtools**

| (index) | Values |
| --- | --- |
| totalScore | 59.333333333333336 |
| fcp | 1431.55 |
| lcp | 1635.09 |
| tbt | 7594.54 |
| cls | 0.039 |
| speedIndex | 12001 |
| deviceBenchmarkIndex | 804.5 |

**Provided (Network = Fast 3G & CPU slowdown factor = 4)**

| (index) | Values |
| --- | --- |
| totalScore | 57.666666666666664 |
| fcp | 1866.79 |
| lcp | 2072.45 |
| tbt | 8299.69 |
| cls | 0.028 |
| speedIndex | 14425.33 |
| deviceBenchmarkIndex | 800 |

For Simulated vs Provided, even though the overall scores are close, individual metrics like LCP, TBT and Speed Index vary considerably, as shown above. Since packet-level throttling is considered the best practice for a throttled test, I have to conclude that the results produced by Simulated LH are inaccurate. I would therefore like to request that we look into this issue so that Simulated LH can be made more accurate.
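Since the comparison above relies on aggregating many iterations, here is a minimal sketch of one way to do that (my own example: the `Lhr` interface is a hand-rolled subset of the Lighthouse report shape, and `runs` is assumed to be an array of results gathered elsewhere, e.g. via the Node API). It takes the median of each key metric across runs, which tends to be more robust to outliers than a mean:

```ts
// Hand-rolled subset of the Lighthouse result (LHR) fields used below.
interface Lhr {
  categories: {performance: {score: number | null}};
  environment: {benchmarkIndex: number};
  audits: Record<string, {numericValue?: number}>;
}

// Real Lighthouse audit IDs for the metrics compared in the tables above.
const METRIC_IDS = [
  'first-contentful-paint',
  'largest-contentful-paint',
  'total-blocking-time',
  'cumulative-layout-shift',
  'speed-index',
];

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Summarize a batch of runs: median performance score, benchmark index,
// and median numericValue for each metric audit.
function summarize(runs: Lhr[]): Record<string, number> {
  const summary: Record<string, number> = {
    performanceScore: median(runs.map(r => (r.categories.performance.score ?? 0) * 100)),
    benchmarkIndex: median(runs.map(r => r.environment.benchmarkIndex)),
  };
  for (const id of METRIC_IDS) {
    summary[id] = median(runs.map(r => r.audits[id]?.numericValue ?? 0));
  }
  return summary;
}
```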
-
I have been using Lighthouse quite extensively for the past couple of years to measure the performance of my web applications. I understand that with **Simulated throttling**, the metrics reported by LH are derived from calculations under the hood (using pessimistic + optimistic + flex dependency graphs), whereas with **Devtools throttling**, LH simply reports whatever values it actually observed during a throttled run.

Even though the two throttling methods behave differently, I believe that at the end of the day a Simulated throttling audit should produce results close to a Devtools throttling audit, provided the latter is executed in an environment with no noticeable network / server variance. Still, on many occasions I have seen the two sets of results be far apart, especially for metrics like FCP and TBT (even after running many iterations of LH and averaging the results based on DBI). While a Simulated LH audit gives us faster and more consistent results, I believe its predictions are not accurate enough because of this. So I would like to understand the possible reasons for this, and whether we should be using the Simulated or the Devtools throttling method to get accurate results.
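One way to make such a comparison more apples-to-apples is to pin the throttling parameters explicitly, so that a simulated run and a DevTools run are at least configured with the same nominal network/CPU values. This is a minimal sketch under my own assumptions (not an official recommendation); the numbers mirror Lighthouse's documented default mobile throttling (~150 ms RTT, ~1.6 Mbps throughput, 4x CPU slowdown):

```ts
// Shared nominal throttling values, based on Lighthouse's documented default
// mobile throttling; adjust as needed for your own test conditions.
const throttling = {
  rttMs: 150,
  throughputKbps: 1638.4,
  cpuSlowdownMultiplier: 4,
};

// These flags objects can be passed as the second argument to the Lighthouse
// Node API call sketched earlier in the thread.
const simulatedFlags = {throttlingMethod: 'simulate' as const, throttling};
const devtoolsFlags = {throttlingMethod: 'devtools' as const, throttling};
```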