* Update CMakeLists.txt
I added a CMake option to compile the llama.cpp server. This update lets us easily build and deploy the server with BitNet.
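A minimal sketch of what such an option could look like in CMakeLists.txt (the option name `LLAMA_BUILD_SERVER` and the `examples/server` subdirectory are assumptions for illustration, not the exact contents of the patch):

```cmake
# Hypothetical option name; the actual flag in this change may differ
option(LLAMA_BUILD_SERVER "Build the llama.cpp HTTP server" ON)

if (LLAMA_BUILD_SERVER)
    # Pull the server target into the build only when requested
    add_subdirectory(examples/server)
endif()
```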
* Create run_inference_server.py
Same as run_inference, but for use with llama.cpp's built-in server, which adds some convenience.
In particular:
- The build directory is selected based on whether the script is running on Windows.
- A list of server arguments (`--model`/`-m`, etc.) is assembled.
- The argument list is parsed and passed to `subprocess.run()` to execute the command.
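The steps above can be sketched as follows. This is a simplified illustration, not the script itself: the binary paths, the `llama-server` name, and the `server_command`/`run_server` helpers are assumptions made for the example.

```python
import platform
import subprocess

def server_command(model_path, host="127.0.0.1", port=8080):
    # Pick the build directory based on the platform, as the script does;
    # these paths are illustrative and may differ from the actual layout.
    if platform.system() == "Windows":
        binary = "build/bin/Release/llama-server.exe"
    else:
        binary = "build/bin/llama-server"
    # Assemble the argument list (--model/-m, host, port, ...)
    return [binary, "-m", model_path, "--host", host, "--port", str(port)]

def run_server(model_path, **kwargs):
    # Hand the assembled argument list to subprocess.run() to launch the server
    subprocess.run(server_command(model_path, **kwargs))
```

Keeping argument assembly separate from the `subprocess.run()` call makes the command easy to inspect or log before launching the server.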