The tutorial notes that the "particular behavior [of partial evaluations] is not fixed and might be different for different versions of MeTTa", so ML isn't exactly "wrong" here.
; Some facts as very basic equalities
(= (croaks Hank) True)
(= (croaks Fritz) True)
(= (eats_flies Fritz) True)
(= (croaks Sam) True)
(= (eats_flies Sam) False)
; If something croaks and eats flies, it is a frog.
; Note that if either (croaks $x) or (eats_flies $x)
; is false, (frog $x) is also false.
(= (frog $x)
   (and (croaks $x)
        (eats_flies $x)))
! (if (frog $x) ($x is Frog) ($x is-not Frog))
; (green $x) is true if (frog $x) is true,
; otherwise it is not calculated.
(= (green $x)
   (if (frog $x) True (empty)))
! (if (green $x) ($x is Green) ($x is-not Green))
HE:
[(Sam is-not Frog), (if (and True (eats_flies Hank)) (Hank is Frog) (Hank is-not Frog)), (Fritz is Frog)]
[(if (if (and True (eats_flies Hank)) True (empty)) (Hank is Green) (Hank is-not Green)), (Fritz is Green)]
ML:
[(Hank is Frog), (Fritz is Frog)]
[(Hank is Green), (Fritz is Green)]
The issue here is how the two MeTTa implementations (HE and ML) treat partial evaluation: when some information is missing, should the interpreter return a partially reduced expression, or reduce the expression all the way to a final answer? Let's compare the expected behavior with the outputs produced by HE and ML.
Understanding the Code and Expected Outcomes:
Facts and Definitions:
Hank, Fritz, and Sam all croak.
Fritz eats flies; Hank has no explicit eats_flies fact.
Sam does not eat flies.
Logical Inferences:
A subject is considered a frog if they both croak and eat flies.
The green property is derived from being a frog: (green $x) evaluates to True only if (frog $x) evaluates to True; otherwise it is not calculated (it reduces to (empty)).
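To make the distinction concrete, here is a minimal Python sketch of a "cautious" evaluator in the HE style. It is not the actual MeTTa interpreter; the facts table and the frog helper are illustrative assumptions. The key point is that a missing fact stays symbolic instead of defaulting to False, so the conjunction reduces only as far as the known facts allow:

```python
# Sketch (not the real MeTTa interpreter) of partial evaluation:
# a missing fact is left as a residual expression, not assumed False.
facts = {
    ("croaks", "Hank"): True,
    ("croaks", "Fritz"): True,
    ("eats_flies", "Fritz"): True,
    ("croaks", "Sam"): True,
    ("eats_flies", "Sam"): False,
}

def lookup(pred, subj):
    # Unknown facts stay symbolic, mirroring an unreduced MeTTa atom.
    return facts.get((pred, subj), f"({pred} {subj})")

def frog(subj):
    c = lookup("croaks", subj)
    e = lookup("eats_flies", subj)
    if c is True and e is True:
        return True
    if c is False or e is False:
        return False
    # Partial evaluation: return the residual conjunction.
    return f"(and {c} {e})"

for s in ("Sam", "Hank", "Fritz"):
    print(s, "->", frog(s))
# Sam reduces to False, Fritz to True, and Hank to the residual
# "(and True (eats_flies Hank))" -- the same shape HE reports.
```

The residual for Hank is exactly the condition that appears unreduced inside HE's (if ...) output above.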
Analyzing the Outputs:
HE Output:
Frogs:
Sam: (Sam is-not Frog) because he doesn’t eat flies.
Hank: (if (and True (eats_flies Hank)) (Hank is Frog) (Hank is-not Frog)) is a partial evaluation, since (eats_flies Hank) isn't explicitly defined. This is consistent with the notion of returning partial evaluations.
Fritz: (Fritz is Frog) correctly identifies Fritz as a frog since both conditions (croaks and eats flies) are satisfied.
Green:
Hank: (if (if (and True (eats_flies Hank)) True (empty)) (Hank is Green) (Hank is-not Green)) again reflects a partial evaluation dependent on the earlier undefined condition for Hank eating flies.
Fritz: (Fritz is Green), correctly derived since Fritz is a frog.
ML Output:
Simplifies directly to:
Frogs: (Hank is Frog), (Fritz is Frog) — assumes completeness where partiality should exist (Hank's eats_flies status isn't known).
Green: (Hank is Green), (Fritz is Green) — again assumes that Hank is a frog without knowing if he eats flies.
Conclusion:
Correctness:
HE: The partial evaluations are consistent with the logic that if information is incomplete (e.g., whether Hank eats flies), results should also reflect this uncertainty. Thus, HE's behavior is appropriate under a system designed to handle and indicate incomplete information.
ML: By not returning partial evaluations and treating every predicate as satisfied, ML overlooks the missing fact (whether Hank eats flies) and reaches a potentially incorrect conclusion. Whether this counts as a simplification or an error depends on the intended semantics.
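The ML-style collapse can be sketched as follows. This is an assumption about ML's semantics, not taken from its source: if the condition of an (if ...) does not reduce to the literal False, the then-branch is taken, so an unresolved residual counts as true:

```python
# Hedged sketch of an "eager" if: only the literal False selects the
# else branch; an unreduced residual expression counts as true.
def loose_if(cond, then_branch, else_branch):
    return else_branch if cond is False else then_branch

# The unresolved condition for Hank, carried over from the facts above:
residual = "(and True (eats_flies Hank))"
print(loose_if(residual, "(Hank is Frog)", "(Hank is-not Frog)"))
# -> (Hank is Frog), even though (eats_flies Hank) never reduced.
```

Under these assumed semantics the uncertainty is silently discarded, which is exactly the behavior the HE output avoids.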
Recommendation:
It appears that HE's behavior aligns more closely with a cautious and logically consistent approach, especially useful in environments where data may be incomplete or arriving incrementally. ML’s approach might be seen as overly deterministic, potentially leading to errors in reasoning under uncertainty.
Consistency in logical inference under partial knowledge is desired.