
not able to Load the Fine Tuned Model and Run Inference in Fine_Tune_Llama_2_by_generating_data_from_the_LLM_OpenAI #7

@Opperessor

Description


Hi, very helpful tutorial. I followed all the steps, but I'm not able to complete step 12, "Load the Fine Tuned Model and Run Inference on GPU," in Fine_Tune_Llama_2_by_generating_data_from_the_LLM_OpenAI.
It throws an out-of-memory error.
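A quick back-of-the-envelope calculation shows why this step commonly runs out of GPU memory: the weights alone of a 7B-parameter Llama 2 model need roughly 13 GiB in fp16, before accounting for activations or the KV cache. A minimal sketch of the arithmetic (the 7B parameter count is an assumption about which base model the notebook uses):

```python
# Rough VRAM estimate for a model's weights at different precisions.
# Weights only -- activations and the KV cache add more on top.
def weight_memory_gib(n_params: float, bytes_per_param: float) -> float:
    """Memory needed to hold n_params weights at the given precision, in GiB."""
    return n_params * bytes_per_param / 1024**3

n = 7e9  # assumed: Llama-2-7b parameter count

fp16_gib = weight_memory_gib(n, 2.0)   # 16-bit weights: ~13 GiB
int4_gib = weight_memory_gib(n, 0.5)   # 4-bit quantized: ~3.3 GiB

print(f"fp16: {fp16_gib:.1f} GiB, 4-bit: {int4_gib:.1f} GiB")
```

If the GPU has less memory than the fp16 figure, loading the merged model in 4-bit (e.g. passing a `BitsAndBytesConfig(load_in_4bit=True)` quantization config and `device_map="auto"` to `from_pretrained` in `transformers`) is the usual workaround, though whether that fits the notebook's exact loading code depends on how step 12 is written.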
