forked from microsoft/BitNet
Feature implementation from commits caf17ec..c9e752c #2
Open
yashuatla wants to merge 15 commits into feature-base-2 from feature-head-2
Commits (15):
4f2e41a  add support for bitnet2b_2501 model (potassiummmm)
09f9106  add conversion logic for new model (potassiummmm)
fa854cf  Merge pull request #167 from potassiummmm/bitnet-25 (tsong-ms)
fd3f355  update readme and setup script to support official BitNet b1.58 model… (sd983527)
0e7dadb  Update README.md (sd983527)
8f75f99  Update README.md (#172) (sd983527)
1c77bd8  Update README.md (sd983527)
71fdd94  add third-party demo (tsong-ms)
034b34c  Merge pull request #175 from microsoft/readme-dev (tsong-ms)
874e6bd  refine readme (tsong-ms)
fd9f1d6  Merge pull request #176 from microsoft/readme-dev (tsong-ms)
488dc1e  Fix model architecture name
c17d1c5  Merge pull request #212 from microsoft/arch-name-dev
1792346  Add run_inference_server.py for Running llama.cpp Built-in Server (#204) (Benjamin-Wegener)
c9e752c  Fix build error with GCC by forcing Clang compiler in CMake on androi… (Benjamin-Wegener)
Submodule llama.cpp updated (3 files):

| File | Changes |
|---|---|
| gguf-py/gguf/constants.py | +24 −0 |
| gguf-py/gguf/tensor_mapping.py | +5 −0 |
| src/llama.cpp | +332 −1 |
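The submodule bump registers the new architecture on both the conversion side (gguf-py) and the inference side (src/llama.cpp). As a rough sketch of the shape such a registration takes in gguf-py/gguf/constants.py: the MODEL_ARCH enum and MODEL_ARCH_NAMES mapping mirror gguf-py's actual layout, but the specific entries below are illustrative assumptions, since the diff body is not shown on this page:

```python
# Illustrative sketch only: how a new architecture is typically registered in
# gguf-py/gguf/constants.py. The LLAMA/BITNET entries are stand-ins for the
# real diff, which is not rendered here.
from enum import IntEnum, auto


class MODEL_ARCH(IntEnum):
    LLAMA = auto()
    BITNET = auto()  # hypothetical new entry for the BitNet architecture


# Maps the enum to the architecture string written into the GGUF header;
# the inference code (src/llama.cpp) must recognize the same string.
MODEL_ARCH_NAMES: dict[MODEL_ARCH, str] = {
    MODEL_ARCH.LLAMA: "llama",
    MODEL_ARCH.BITNET: "bitnet",
}

print(MODEL_ARCH_NAMES[MODEL_ARCH.BITNET])  # -> "bitnet"
```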
run_inference_server.py (new file, +64 −0):

```python
import os
import sys
import signal
import platform
import argparse
import subprocess


def run_command(command, shell=False):
    """Run a system command and ensure it succeeds."""
    try:
        subprocess.run(command, shell=shell, check=True)
    except subprocess.CalledProcessError as e:
        print(f"Error occurred while running command: {e}")
        sys.exit(1)


def run_server():
    build_dir = "build"
    if platform.system() == "Windows":
        server_path = os.path.join(build_dir, "bin", "Release", "llama-server.exe")
        if not os.path.exists(server_path):
            server_path = os.path.join(build_dir, "bin", "llama-server")
    else:
        server_path = os.path.join(build_dir, "bin", "llama-server")

    command = [
        server_path,
        '-m', args.model,
        '-c', str(args.ctx_size),
        '-t', str(args.threads),
        '-n', str(args.n_predict),
        '-ngl', '0',
        '--temp', str(args.temperature),
        '--host', args.host,
        '--port', str(args.port),
        '-cb'  # Enable continuous batching
    ]

    if args.prompt:
        command.extend(['-p', args.prompt])

    # Note: -cnv flag is removed as it's not supported by the server

    print(f"Starting server on {args.host}:{args.port}")
    run_command(command)


def signal_handler(sig, frame):
    print("Ctrl+C pressed, shutting down server...")
    sys.exit(0)


if __name__ == "__main__":
    signal.signal(signal.SIGINT, signal_handler)

    parser = argparse.ArgumentParser(description='Run llama.cpp server')
    parser.add_argument("-m", "--model", type=str, help="Path to model file", required=False, default="models/bitnet_b1_58-3B/ggml-model-i2_s.gguf")
    parser.add_argument("-p", "--prompt", type=str, help="System prompt for the model", required=False)
    parser.add_argument("-n", "--n-predict", type=int, help="Number of tokens to predict", required=False, default=4096)
    parser.add_argument("-t", "--threads", type=int, help="Number of threads to use", required=False, default=2)
    parser.add_argument("-c", "--ctx-size", type=int, help="Size of the context window", required=False, default=2048)
    parser.add_argument("--temperature", type=float, help="Temperature for sampling", required=False, default=0.8)
    parser.add_argument("--host", type=str, help="IP address to listen on", required=False, default="127.0.0.1")
    parser.add_argument("--port", type=int, help="Port to listen on", required=False, default=8080)

    args = parser.parse_args()
    run_server()
```
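Once built, the script starts llama-server on the configured host and port, and the server can then be queried over HTTP. A minimal client sketch, assuming the script's default 127.0.0.1:8080 and llama-server's standard /completion endpoint (the endpoint comes from llama.cpp itself, not from this PR; the prompt below is arbitrary sample input):

```python
import json
import urllib.request

# POST a completion request to the server launched by run_inference_server.py.
payload = json.dumps({
    "prompt": "What is 1-bit quantization?",
    "n_predict": 64,  # cap the number of generated tokens
}).encode("utf-8")

req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    # llama-server returns a JSON object whose "content" field holds the text.
    print(json.loads(resp.read())["content"])
```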
Review comment:
🐛 Correctness Issue
Hardcoded compiler breaks cross-platform compatibility.
Hardcoding clang/clang++ will cause build failures on systems where these compilers aren't available or where different compilers are required.
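The reviewer's committable suggestion does not render on this page. As an illustration only, one hypothetical way to scope such an override in a top-level CMakeLists.txt is sketched below; the project name and cache-variable names are placeholders, not the PR's actual code, and this is not the reviewer's literal suggestion:

```cmake
# Hypothetical sketch: scope the Clang override instead of hardcoding it
# unconditionally. Compiler variables must be set before the first project()
# call, and CMAKE_SYSTEM_NAME=Android is assumed to be supplied on the
# command line or by a toolchain file.
cmake_minimum_required(VERSION 3.14)

if(CMAKE_SYSTEM_NAME STREQUAL "Android" AND NOT DEFINED CMAKE_C_COMPILER)
    # Only force Clang when it is actually available on PATH.
    find_program(BITNET_CLANG_C clang)
    find_program(BITNET_CLANG_CXX clang++)
    if(BITNET_CLANG_C AND BITNET_CLANG_CXX)
        set(CMAKE_C_COMPILER   "${BITNET_CLANG_C}")
        set(CMAKE_CXX_COMPILER "${BITNET_CLANG_CXX}")
    else()
        message(WARNING "clang/clang++ not found; using the default toolchain")
    endif()
endif()

project(bitnet_example C CXX)  # placeholder project name
```

The `NOT DEFINED CMAKE_C_COMPILER` guard also means an NDK toolchain file or an explicit `-DCMAKE_C_COMPILER=...` on the command line still wins, which addresses the reviewer's portability concern.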