diff --git a/README.md b/README.md
index 8b6fce8..95f6096 100644
--- a/README.md
+++ b/README.md
@@ -88,6 +88,104 @@ This project is based on the [llama.cpp](https://github.com/ggerganov/llama.cpp)
 ✔
 ✘
+
+Falcon3-1B-Instruct-1.58bit
+1.0B
+x86
+✔
+✘
+✔
+
+
+ARM
+✔
+✔
+✘
+
+
+Falcon3-3B-1.58bit
+3.0B
+x86
+✔
+✘
+✔
+
+
+ARM
+✔
+✔
+✘
+
+
+Falcon3-3B-Instruct-1.58bit
+3.0B
+x86
+✔
+✘
+✔
+
+
+ARM
+✔
+✔
+✘
+
+
+Falcon3-7B-1.58bit
+7.0B
+x86
+✔
+✘
+✔
+
+
+ARM
+✔
+✔
+✘
+
+
+Falcon3-7B-Instruct-1.58bit
+7.0B
+x86
+✔
+✘
+✔
+
+
+ARM
+✔
+✔
+✘
+
+
+Falcon3-10B-1.58bit
+10.0B
+x86
+✔
+✘
+✔
+
+
+ARM
+✔
+✔
+✘
+
+
+Falcon3-10B-Instruct-1.58bit
+10.0B
+x86
+✔
+✘
+✔
+
+
+ARM
+✔
+✔
+✘
+
@@ -160,11 +258,6 @@ optional arguments:
 
 ```bash
 # Run inference with the quantized model
 python run_inference.py -m models/Falcon3-7B-Instruct-1.58bit/ggml-model-i2_s.gguf -p "You are a helpful assistant" -cnv
-
-# Output:
-# Daniel went back to the the the garden. Mary travelled to the kitchen. Sandra journeyed to the kitchen. Sandra went to the hallway. John went to the bedroom. Mary went back to the garden. Where is Mary?
-# Answer: Mary is in the garden.
-
 ```
 
 usage: run_inference.py [-h] [-m MODEL] [-n N_PREDICT] -p PROMPT [-t THREADS] [-c CTX_SIZE] [-temp TEMPERATURE] [-cnv]
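The usage line above can be mirrored with a minimal `argparse` sketch. This is a hypothetical stand-in, not the repository's actual `run_inference.py`; the default values are illustrative assumptions, and only the flag names and types are taken from the usage line:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Mirrors the usage line above; defaults here are assumptions,
    # not values read from the real run_inference.py.
    parser = argparse.ArgumentParser(prog="run_inference.py")
    parser.add_argument("-m", "--model", default="models/ggml-model-i2_s.gguf",
                        help="path to the quantized GGUF model")
    parser.add_argument("-n", "--n-predict", type=int, default=128,
                        help="number of tokens to predict")
    parser.add_argument("-p", "--prompt", required=True,
                        help="prompt text (required)")
    parser.add_argument("-t", "--threads", type=int, default=2,
                        help="number of CPU threads to use")
    parser.add_argument("-c", "--ctx-size", type=int, default=2048,
                        help="context window size in tokens")
    parser.add_argument("-temp", "--temperature", type=float, default=0.8,
                        help="sampling temperature")
    parser.add_argument("-cnv", action="store_true",
                        help="run in conversation (chat) mode")
    return parser


# Parse the same invocation shown in the README example above.
args = build_parser().parse_args(
    ["-m", "models/Falcon3-7B-Instruct-1.58bit/ggml-model-i2_s.gguf",
     "-p", "You are a helpful assistant", "-cnv"]
)
print(args.model, args.cnv)
```

Note that only `-p`/`--prompt` is marked required, matching the usage line, where it appears without surrounding brackets; `-cnv` is a boolean flag that simply toggles conversation mode on.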