diff --git a/README.md b/README.md
index 3f72ef0..bcaff9e 100644
--- a/README.md
+++ b/README.md
@@ -179,9 +179,6 @@ pip install -r requirements.txt
 huggingface-cli download microsoft/BitNet-b1.58-2B-4T-gguf --local-dir models/BitNet-b1.58-2B-4T
 python setup_env.py -md models/BitNet-b1.58-2B-4T -q i2_s
-# Or you can download a model from Hugging Face, convert it to quantized gguf format, and build the project
-python setup_env.py --hf-repo tiiuae/Falcon3-7B-Instruct-1.58bit -q i2_s
-
 ```
usage: setup_env.py [-h] [--hf-repo {1bitLLM/bitnet_b1_58-large,1bitLLM/bitnet_b1_58-3B,HF1BitLLM/Llama3-8B-1.58-100B-tokens,tiiuae/Falcon3-1B-Instruct-1.58bit,tiiuae/Falcon3-3B-Instruct-1.58bit,tiiuae/Falcon3-7B-Instruct-1.58bit,tiiuae/Falcon3-10B-Instruct-1.58bit}] [--model-dir MODEL_DIR] [--log-dir LOG_DIR] [--quant-type {i2_s,tl1}] [--quant-embd]
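The usage line above can be mirrored with a small `argparse` sketch. The option names, short aliases, and choice lists are taken from the usage text and the commands in the diff; everything else (defaults, help strings) is an assumption, not the actual `setup_env.py` implementation:

```python
import argparse

# Repo choices copied from the usage line above.
HF_REPOS = [
    "1bitLLM/bitnet_b1_58-large",
    "1bitLLM/bitnet_b1_58-3B",
    "HF1BitLLM/Llama3-8B-1.58-100B-tokens",
    "tiiuae/Falcon3-1B-Instruct-1.58bit",
    "tiiuae/Falcon3-3B-Instruct-1.58bit",
    "tiiuae/Falcon3-7B-Instruct-1.58bit",
    "tiiuae/Falcon3-10B-Instruct-1.58bit",
]


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical reconstruction of the interface shown in the usage line;
    # the real setup_env.py may wire these options up differently.
    parser = argparse.ArgumentParser(prog="setup_env.py")
    parser.add_argument("--hf-repo", choices=HF_REPOS)
    parser.add_argument("--model-dir", "-md")  # -md alias seen in the diff's commands
    parser.add_argument("--log-dir")
    parser.add_argument("--quant-type", "-q", choices=["i2_s", "tl1"])
    parser.add_argument("--quant-embd", action="store_true")
    return parser


# Parse the retained command from the diff:
#   python setup_env.py -md models/BitNet-b1.58-2B-4T -q i2_s
args = build_parser().parse_args(["-md", "models/BitNet-b1.58-2B-4T", "-q", "i2_s"])
print(args.model_dir, args.quant_type)
```

Note that with this layout, only the local-path route (`-md` plus `-q`) survives the diff; the `--hf-repo` download-and-convert route is the one being removed.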