Conversation
Summary of Changes

Hello @Etherll, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request introduces a comprehensive renaming effort across numerous Jupyter notebooks and Python scripts. The primary goal is to standardize and clarify the naming conventions for saved machine learning models, specifically the merged 16-bit, merged 4-bit, and LoRA-adapter versions. By appending descriptive suffixes such as '-merged' and '-merged-4bit', and by using 'lora_model' for adapters, the changes make the type of saved model immediately apparent from its directory or repository name, improving project organization and maintainability.
Code Review
This pull request addresses an important issue by renaming the output directories for saving models, which prevents different model versions from overwriting each other. The change is applied consistently across a large number of notebooks and scripts.
However, this highlights a significant amount of code duplication across the repository. The same saving logic is repeated in dozens of files. This makes the codebase difficult to maintain and error-prone, as any change to the saving logic needs to be manually applied to every file.
For a long-term solution, I would strongly recommend abstracting this saving logic into a shared utility function that can be imported and used by all notebooks and scripts.
In the short term, and within the scope of this PR, you can improve maintainability by defining the model paths as constants at the top of each saving cell/block. This avoids using 'magic strings' and makes it easier to manage paths. I've added a couple of specific comments to illustrate this.
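To make the long-term recommendation concrete, a shared utility could look roughly like the sketch below. The function name `save_all_formats`, the `hub_path` helper, the keyword defaults, and any module placement are assumptions for illustration; only the `save_pretrained_merged`, `push_to_hub_merged`, `save_pretrained`, and `push_to_hub` calls are taken from the existing notebook cells.

```python
# Hypothetical shared utility (e.g. a save_utils.py importable from every
# notebook/script). A sketch only -- names and defaults are assumptions.

def hub_path(org: str, local_path: str) -> str:
    """Build the Hugging Face Hub repo id from a local save directory."""
    return f"{org}/{local_path}"

def save_all_formats(model, tokenizer, org="hf",
                     merged_16bit_path="model-merged",
                     merged_4bit_path="model-merged-4bit",
                     lora_path="lora_model",
                     push=False, token=""):
    """Save merged 16-bit, merged 4-bit, and LoRA-adapter copies under
    distinct directories so the versions never overwrite each other."""
    model.save_pretrained_merged(merged_16bit_path, tokenizer,
                                 save_method="merged_16bit")
    model.save_pretrained_merged(merged_4bit_path, tokenizer,
                                 save_method="merged_4bit")
    model.save_pretrained(lora_path)
    tokenizer.save_pretrained(lora_path)
    if push:
        model.push_to_hub_merged(hub_path(org, merged_16bit_path), tokenizer,
                                 save_method="merged_16bit", token=token)
        model.push_to_hub_merged(hub_path(org, merged_4bit_path), tokenizer,
                                 save_method="merged_4bit", token=token)
        model.push_to_hub(hub_path(org, lora_path), token=token)
        tokenizer.push_to_hub(hub_path(org, lora_path), token=token)
```

With such a helper, each notebook's saving cell collapses to a single guarded call (e.g. `if False: save_all_formats(model, tokenizer)`), and any future renaming happens in one place.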
Notebook saving cell (diff: `-` lines are the old, overwriting paths; `+` lines are the renamed ones):

  "# Merge to 16bit\n",
- "if False: model.save_pretrained_merged(\"model\", tokenizer, save_method = \"merged_16bit\",)\n",
- "if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"merged_16bit\", token = \"\")\n",
+ "if False: model.save_pretrained_merged(\"model-merged\", tokenizer, save_method = \"merged_16bit\",)\n",
+ "if False: model.push_to_hub_merged(\"hf/model-merged\", tokenizer, save_method = \"merged_16bit\", token = \"\")\n",
  "\n",
  "# Merge to 4bit\n",
- "if False: model.save_pretrained_merged(\"model\", tokenizer, save_method = \"merged_4bit\",)\n",
- "if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"merged_4bit\", token = \"\")\n",
+ "if False: model.save_pretrained_merged(\"model-merged-4bit\", tokenizer, save_method = \"merged_4bit\",)\n",
+ "if False: model.push_to_hub_merged(\"hf/model-merged-4bit\", tokenizer, save_method = \"merged_4bit\", token = \"\")\n",
  "\n",
  "# Just LoRA adapters\n",
  "if False:\n",
- "    model.save_pretrained(\"model\")\n",
- "    tokenizer.save_pretrained(\"model\")\n",
+ "    model.save_pretrained(\"lora_model\")\n",
+ "    tokenizer.save_pretrained(\"lora_model\")\n",
  "if False:\n",
- "    model.push_to_hub(\"hf/model\", token = \"\")\n",
- "    tokenizer.push_to_hub(\"hf/model\", token = \"\")\n"
+ "    model.push_to_hub(\"hf/lora_model\", token = \"\")\n",
+ "    tokenizer.push_to_hub(\"hf/lora_model\", token = \"\")\n"
While the change to use unique names is good, this cell has a lot of duplicated 'magic strings' for model paths. This can be improved for better maintainability by defining constants at the top of the cell and reusing them. This would also make it easier to update paths in the future. This principle applies to all similar saving cells across the modified notebooks.
For example, you could refactor this cell's source code to look something like this:
# Define constants for model paths
MODEL_MERGED_16BIT_PATH = "model-merged"
HF_HUB_MERGED_16BIT_PATH = f"hf/{MODEL_MERGED_16BIT_PATH}"
MODEL_MERGED_4BIT_PATH = "model-merged-4bit"
HF_HUB_MERGED_4BIT_PATH = f"hf/{MODEL_MERGED_4BIT_PATH}"
LORA_MODEL_PATH = "lora_model"
HF_HUB_LORA_PATH = f"hf/{LORA_MODEL_PATH}"
# Merge to 16bit
if False: model.save_pretrained_merged(MODEL_MERGED_16BIT_PATH, tokenizer, save_method = "merged_16bit",)
if False: model.push_to_hub_merged(HF_HUB_MERGED_16BIT_PATH, tokenizer, save_method = "merged_16bit", token = "")
# Merge to 4bit
if False: model.save_pretrained_merged(MODEL_MERGED_4BIT_PATH, tokenizer, save_method = "merged_4bit",)
if False: model.push_to_hub_merged(HF_HUB_MERGED_4BIT_PATH, tokenizer, save_method = "merged_4bit", token = "")
# Just LoRA adapters
if False:
model.save_pretrained(LORA_MODEL_PATH)
tokenizer.save_pretrained(LORA_MODEL_PATH)
if False:
model.push_to_hub(HF_HUB_LORA_PATH, token = "")
    tokenizer.push_to_hub(HF_HUB_LORA_PATH, token = "")

References
- Magic strings (and numbers) should be replaced by named constants to improve readability and maintainability. This makes the code easier to understand and modify, as the value is defined in one place.
Script saving block (diff: `-` lines are the old, overwriting paths; `+` lines are the renamed ones):

  # Merge to 16bit
- if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit",)
- if False: model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_16bit", token = "")
+ if False: model.save_pretrained_merged("model-merged", tokenizer, save_method = "merged_16bit",)
+ if False: model.push_to_hub_merged("hf/model-merged", tokenizer, save_method = "merged_16bit", token = "")

  # Merge to 4bit
- if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit",)
- if False: model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_4bit", token = "")
+ if False: model.save_pretrained_merged("model-merged-4bit", tokenizer, save_method = "merged_4bit",)
+ if False: model.push_to_hub_merged("hf/model-merged-4bit", tokenizer, save_method = "merged_4bit", token = "")

  # Just LoRA adapters
  if False:
-     model.save_pretrained("model")
-     tokenizer.save_pretrained("model")
+     model.save_pretrained("lora_model")
+     tokenizer.save_pretrained("lora_model")
  if False:
-     model.push_to_hub("hf/model", token = "")
-     tokenizer.push_to_hub("hf/model", token = "")
+     model.push_to_hub("hf/lora_model", token = "")
+     tokenizer.push_to_hub("hf/lora_model", token = "")
To improve maintainability and avoid magic strings, it's good practice to define these model paths as constants at the beginning of this block. That keeps the paths readable and lets you update them in one place. The same principle applies to all similar saving blocks across the modified files.
# Define constants for model paths
MODEL_MERGED_16BIT_PATH = "model-merged"
HF_HUB_MERGED_16BIT_PATH = f"hf/{MODEL_MERGED_16BIT_PATH}"
MODEL_MERGED_4BIT_PATH = "model-merged-4bit"
HF_HUB_MERGED_4BIT_PATH = f"hf/{MODEL_MERGED_4BIT_PATH}"
LORA_MODEL_PATH = "lora_model"
HF_HUB_LORA_PATH = f"hf/{LORA_MODEL_PATH}"
# Merge to 16bit
if False: model.save_pretrained_merged(MODEL_MERGED_16BIT_PATH, tokenizer, save_method = "merged_16bit",)
if False: model.push_to_hub_merged(HF_HUB_MERGED_16BIT_PATH, tokenizer, save_method = "merged_16bit", token = "")
# Merge to 4bit
if False: model.save_pretrained_merged(MODEL_MERGED_4BIT_PATH, tokenizer, save_method = "merged_4bit",)
if False: model.push_to_hub_merged(HF_HUB_MERGED_4BIT_PATH, tokenizer, save_method = "merged_4bit", token = "")
# Just LoRA adapters
if False:
model.save_pretrained(LORA_MODEL_PATH)
tokenizer.save_pretrained(LORA_MODEL_PATH)
if False:
model.push_to_hub(HF_HUB_LORA_PATH, token = "")
    tokenizer.push_to_hub(HF_HUB_LORA_PATH, token = "")

References
- Magic strings (and numbers) should be replaced by named constants to improve readability and maintainability. This makes the code easier to understand and modify, as the value is defined in one place.