Very High Token Consumption #2
Hi there, first of all this is an amazing project which you have built. But it seems that for each query, even the basic ones, token usage is in the 40-50k range for queries based on personal .csv data (the .csv contains only 30 records). Any suggestions on how to reduce the token usage?
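For context on numbers like 40-50k, one quick way to check where tokens go is to count them locally before a prompt is sent. A minimal sketch using OpenAI's tiktoken tokenizer (tiktoken and the model name here are assumptions for illustration, not part of BambooAI):

```python
# Sketch: count how many tokens a piece of prompt text would consume.
# The model name is an assumption; substitute the model you actually use.
import tiktoken

def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

prompt = "Plot the monthly totals from my 30-row CSV."
print(count_tokens(prompt))  # tokens for this prompt text alone
```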
I am glad you like it :-)
There are a few ways to reduce the token cost. Please note that BambooAI only sends the headers and the first row of the dataset to the LLM, so very large datasets incur the same cost as small ones.
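A minimal sketch of the idea just described, sending only the header and the first row instead of the full dataframe (`build_df_preview` is a hypothetical helper for illustration, not BambooAI's internal code):

```python
# Illustration of the header-plus-first-row idea described above.
# build_df_preview is a hypothetical helper, not BambooAI's actual code.
import pandas as pd

def build_df_preview(df: pd.DataFrame) -> str:
    """Return a compact, token-cheap description of a dataframe."""
    return df.head(1).to_csv(index=False)  # header line + first data row

df = pd.DataFrame({"name": ["Alice", "Bob"], "score": [91, 85]})
print(build_df_preview(df))
# name,score
# Alice,91
```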
I forgot to mention one more thing. You can set `exploratory=False`. BambooAI will then skip breaking the question down into a task list and go straight to code generation. This results in a significant reduction in token usage, particularly if you also reduce `max_conversations`.
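A sketch of how those two settings could be combined, assuming both are accepted by the BambooAI constructor (`pd_agent_converse` follows the project's README; treat the exact signature as an assumption and check the current docs):

```python
# Sketch: apply the two token-saving settings mentioned above.
# Assumes both are constructor options; verify against the current README.
import pandas as pd
from bambooai import BambooAI

df = pd.read_csv("my_data.csv")  # e.g. a small personal dataset

bamboo = BambooAI(
    df,
    exploratory=False,    # skip the task-list breakdown, generate code directly
    max_conversations=2,  # keep less conversation history in each prompt
)
bamboo.pd_agent_converse("What is the average score per month?")
```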
@pgalko Your code runs like a charm. It sometimes feels like I'm using OpenAI's own Code Interpreter API, it is so damn good.
@Murtuza-Chawala great to hear that :-).
@Murtuza-Chawala The issue has now been resolved, and the fix has been committed to the repository. A new version, 0.3.21, has been pushed to PyPI. You can install the updated version via pip, and you should see significantly improved performance and reduced token usage.
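For reference, upgrading would typically be `pip install --upgrade bambooai`, assuming the PyPI package name matches the repository name.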
Wow, that's great! So this also means the generated code has a higher chance of success, rather than going into a loop again, right?
Yes, that is correct. You should see far fewer error corrections, and hence reduced token usage.
Use this https://twitter.com/raunakdoesdev/status/1700215444542750923 or Zep memory https://www.getzep.com/
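The suggestion above is about managing conversation memory so that old turns stop inflating every prompt. A generic sketch of the rolling-window idea in plain Python (illustrative only; this is not Zep's API, which additionally handles summarization and storage):

```python
# Generic rolling-window memory: keep only the most recent turns,
# so the prompt size stays bounded. Illustrative only, not Zep's API.
from collections import deque

class RollingMemory:
    def __init__(self, max_turns: int = 4):
        # deque with maxlen silently evicts the oldest turn when full
        self.turns: deque = deque(maxlen=max_turns)

    def add(self, role: str, content: str) -> None:
        self.turns.append((role, content))

    def as_prompt(self) -> str:
        return "\n".join(f"{role}: {content}" for role, content in self.turns)

memory = RollingMemory(max_turns=2)
memory.add("user", "Summarise my CSV.")
memory.add("assistant", "Here is a summary...")
memory.add("user", "Now plot it.")  # oldest turn is evicted here
print(memory.as_prompt())
```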
Thanks mate, I will take a look.