Reading the release notes for 0.2.0, I see talk of llguidance, which says it supports context-free grammars. What about token restriction at a level more powerful than context-free? In particular, what about restricting token generation to semantically valid tokens, for example only referring to identifiers which have already been defined? Is this possible with this project? Thank you for your time.
Hi @ahelwer! While llguidance works on context-free grammars only, you can "break out" of context-free by manually keeping track of state. See a very simple example (from the readme) here:

```python
lm = llama2 + f"Do you want a joke or a poem? A {select(['joke', 'poem'], name='answer')}.\n"
# make a choice based on the model's previous selection
if lm["answer"] == "joke":
    lm += f"Here is a one-line joke about cats: " + gen('output', stop='\n')
else:
    lm += f"Here is a one-line poem about dogs: " + gen('output', stop='\n')
```

Making it more natural to express (and efficient to run) context-sensitive languages is an open research topic, but there is a lot of activity on this, especially in the programming-language space!