Commit 21f9b4e
Comment out context-length-based model selection logic

Since we switched to gpt-4o-mini, which has a long context window and is cheap, the existing logic that chose an older model based on context length is no longer needed, so it has been commented out.
1 parent 0c96e96 commit 21f9b4e

File tree

1 file changed: +12 −12 lines changed


8_streamlit/transcribe_summary.py (+12 −12)
@@ -417,19 +417,19 @@ def generate_content(content_type, content_language, transcript_language_code, t
     if content_type == "Translation":
         return translate(transcript, transcript_language_code, content_language)
 
-    tokenizer = get_tokenizer("gpt-4o-mini")
-    num_tokens = len(tokenizer.encode(transcript))
-
-    margin = 3000 if content_type in ["Detailed Summary", "Essay", "Blog article"] else 1000
-
-    preferred_model = ""
-    if num_tokens < 16385 - margin:
-        preferred_model = "gpt-4o-mini"
-    elif num_tokens < 32768 - margin:
-        preferred_model = "solar-mini"
-    else:
-        preferred_model = "gpt-4o"
+    # Select a model that fits the context length
+    # tokenizer = get_tokenizer("gpt-4o-mini")
+    # num_tokens = len(tokenizer.encode(transcript))
+    # margin = 3000 if content_type in ["Detailed Summary", "Essay", "Blog article"] else 1000
+    # preferred_model = ""
+    # if num_tokens < 16385 - margin:
+    #     preferred_model = "gpt-3.5-turbo"
+    # elif num_tokens < 32768 - margin:
+    #     preferred_model = "solar-mini"
+    # else:
+    #     preferred_model = "gpt-4o"
 
+    preferred_model = "gpt-4o-mini"
     fallback_model = "gpt-4o"
 
     temperature = 0.5
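For reference, the selection logic being commented out can be sketched as a standalone helper. This is a minimal sketch, not the project's code: `pick_model` is a hypothetical name, and taking `num_tokens` as a parameter (rather than calling the project's `get_tokenizer` helper) is an assumption made to keep the example self-contained.

```python
def pick_model(num_tokens: int, content_type: str) -> str:
    """Sketch of the commented-out logic: pick the cheapest model whose
    context window fits the transcript plus a reserved completion margin."""
    # Longer-form outputs need a larger margin reserved for the completion.
    margin = 3000 if content_type in ["Detailed Summary", "Essay", "Blog article"] else 1000
    if num_tokens < 16385 - margin:
        return "gpt-3.5-turbo"   # fits the 16k context window
    elif num_tokens < 32768 - margin:
        return "solar-mini"      # fits the 32k context window
    return "gpt-4o"              # largest-context fallback
```

Because gpt-4o-mini's much larger context window covers all of these cases at lower cost, the branching collapses to a single `preferred_model = "gpt-4o-mini"` assignment, which is exactly what this commit does.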

Comments (0)