Wrong result with Unif+ and probabilistic initial state #897

@volkm

Description

Found by @kvdglind.

We consider the following MA, which initially decides between a path with an Erlang-distributed delay and a faster path that may fail.

ma

const int K=2;
const double R=7;

module erlang
	s : [0..6];
	// s=5 sink
	// s=6 done
	chainStage : [0..K-1];

	[goToChain] s=0 -> 1 : (s'=1);
	<> s=1 -> 1 : (s'=2);
	<> s=2 & chainStage<K-1 -> R: (chainStage' = chainStage+1);
	<> s=2 & chainStage=K-1 -> R: (s' = 6);
	[goToProbs] s=0 -> 1 : (s'=3);
	<> s=3 -> 1 : (s'=4);
	<> s=4 -> 0.5 : (s'=6) + 0.5: (s'=5);
	<> s=5 -> true;
	<> s=6 -> true;
endmodule

label "done" = (s=6);

We run the following:

storm --prism erlang.ma --prop 'Pmin=?[F<=1 "done"]' --mamethod unifplus

This yields a wrong upper bound and an assertion error in debug mode:

ERROR (SparseMarkovAutomatonCslHelper.cpp:359): Upper bound 0.131893 is smaller than lower bound 0.132121.
Assertion failed: (absDiff > storm::utility::zero<ValueType>()), function checkConvergence, file SparseMarkovAutomatonCslHelper.cpp, line 390.

The computation goes wrong because the MA does not adhere to the Initial Markov Assumption (Assumption 8.1 in Timo Gros' master's thesis):

"all states included in the initial distribution are Markov states."

This assumption is needed to make the transformation to CTMDP work.
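A violation of the assumption is easy to detect mechanically. A minimal sketch in Python, assuming we can query per state whether it is Markovian (the state representation here is hypothetical, not Storm's API):

```python
def satisfies_initial_markov_assumption(initial_distribution, is_markovian):
    # Assumption 8.1: every state with positive initial probability
    # must be a Markovian state.
    return all(is_markovian(s) for s, p in initial_distribution if p > 0)

# In the model above, s=0 only has the probabilistic choices
# [goToChain] and [goToProbs], so it is the sole probabilistic state.
probabilistic_states = {0}
is_markovian = lambda s: s not in probabilistic_states

# The initial distribution puts all mass on s=0, so the check fails.
print(satisfies_initial_markov_assumption([(0, 1.0)], is_markovian))
```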

I looked deeper into the analysis behavior; the computation of the upper bound proceeds as follows:

  • In inner iteration 64, the two Markovian states s=1 and s=3 carry probability mass 0 and 0.00022797, respectively.
  • The initial state minimizes and therefore chooses state s=1 with probability mass 0.
  • In the next inner iteration 63 (the iterations count down), the two Markovian states have probability mass 0.00744704 and 0.00167178, respectively.
  • Now the initial state chooses state s=3 with probability mass 0.00167178 and adds it to its accumulated mass of 0.
  • In all subsequent iterations, state s=3 always has the lower probability mass and is therefore chosen.
  • In the end, however, the initial state has accumulated probability mass 0.131893, whereas state s=3 has 0.132121. The difference is exactly the probability mass 0.00022797 that was "lost" in inner iteration 64.
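The loss can be reproduced with a toy calculation: summing the per-iteration minimum can only ever yield less mass than committing to one successor and minimizing over the totals. The contributions for iterations 64 and 63 below are taken from the numbers above; the remaining entries are invented for illustration:

```python
# Per-iteration contributions of the two Markovian successors to the
# initial state's probability mass. The first two entries per list are
# the values of inner iterations 64 and 63; the rest are made up.
contrib_s1 = [0.0, 0.00744704, 0.01, 0.01]           # successor s=1
contrib_s3 = [0.00022797, 0.00167178, 0.002, 0.002]  # successor s=3

# What Unif+ currently does: minimize in every inner iteration
# independently and accumulate the chosen contribution.
switching = sum(min(a, b) for a, b in zip(contrib_s1, contrib_s3))

# What a decision fixed at time 0 must do: commit to one successor for
# the whole computation and minimize over the accumulated totals.
committed = min(sum(contrib_s1), sum(contrib_s3))

# The per-iteration minimum can only lose mass, so the computed "upper
# bound" drops below the true minimum -- here by exactly the 0.00022797
# of iteration 64, matching the assertion failure.
print(switching, committed, committed - switching)
```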

The initial state is thus allowed to switch its scheduler choice depending on the time. This is not correct, however, because the initial decision takes place immediately at time 0.
I suggest adding a post-processing step here that explicitly accounts for initial states. This is in line with the suggestion of Timo, who wrote in his thesis:

"As we could interpret an initial probabilistic state as an extension of the initial distribution, we can easily transform every MA with initial probabilistic states such that this assumption holds. If an initial probabilistic state would have multiple probabilistic transitions, we would obtain a set of initial distributions. We would then regard a set of MAs with one of the initial distributions each and calculate time-bounded reachability for every MA and take the best result with respect to the subject of interest, i.e. maximal time-bounded reachability. Note that we would keep these probabilistic states whenever they have incoming edges, but still substitute them in the initial distribution."
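The quoted transformation amounts to enumerating one initial distribution per probabilistic transition of the initial state, checking each resulting MA, and taking the best result. A sketch with a stubbed model checking call (check and all other names are hypothetical, not Storm's API):

```python
def best_over_initial_distributions(distributions, check, maximize=True):
    """distributions: one successor distribution per probabilistic
    transition of the initial state; check: a (stubbed) time-bounded
    reachability computation on the MA restarted in that distribution."""
    results = [check(dist) for dist in distributions]
    return max(results) if maximize else min(results)

# Toy usage: two candidate initial distributions and a stubbed checker
# that returns invented probabilities for the two branches.
dists = [{"s=1": 1.0}, {"s=3": 1.0}]
stub_check = lambda d: 0.2 if "s=1" in d else 0.3

# For Pmin we take the minimum over the per-distribution results.
print(best_over_initial_distributions(dists, stub_check, maximize=False))
```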

So after finishing the computation of the upper bound, we should add a final model checking call that computes the optimal probability from the initial state to the first layer of Markovian states. In my view, this should fix the issue.
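A sketch of that post-processing step, assuming the final Unif+ values of the first-layer Markovian states are available. The value 0.132121 for s=3 is taken from the numbers above; the value for s=1 is invented, and none of this reflects Storm's actual interface:

```python
def resolve_initial_state(actions, markovian_values, minimize=True):
    """actions: one distribution over first-layer Markovian states per
    probabilistic transition of the initial state; markovian_values:
    the final Unif+ values of those states."""
    # One time-0 decision: weight each action's successors by its
    # distribution, then optimize once over the resulting totals.
    totals = [sum(p * markovian_values[s] for s, p in dist)
              for dist in actions]
    return min(totals) if minimize else max(totals)

# Toy usage: [goToChain] leads to s=1 (invented value 0.14),
# [goToProbs] leads to s=3 (value 0.132121 from the log above).
values = {"s=1": 0.14, "s=3": 0.132121}
actions = [[("s=1", 1.0)], [("s=3", 1.0)]]
print(resolve_initial_state(actions, values, minimize=True))
```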

@tquatmann Do you agree or is there a better fix?
