[VicPiMakers Projects] Running the new llamafile (llama.cpp) app
Craig Miller
cvmiller at gmail.com
Sat Apr 25 08:31:18 PDT 2026
Hi All,
We were chatting before the most recent NetSIG about the new llamafile
app, which has excellent support for IPv6. The app runs a web server
(which is IPv6 accessible). The new llamafile app takes a -m parameter
that points to a GGUF LLM model file.
*Old way*
./google_gemma-3-4b-it-Q6_K.llamafile --server -v2 --host lxcllama.example.com
*New way*
llamafile -m model.gguf --server --port 8080
Find the new llamafile at:
https://github.com/mozilla-ai/llamafile/releases/tag/0.10.0
You can find GGUF LLM models at:
https://huggingface.co/models?library=gguf
I start my llamafile using this command:
./llamafile-0.10.0 -m Qwen3.5-9B.Q4_K_M.gguf --server --port 8080 --host lxcllama.example.com
This way, any web browser at my house can access the LLM.
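Besides the browser UI, the llama.cpp-based server behind llamafile also speaks an OpenAI-style HTTP API, so you can script against it. Here is a minimal Python sketch of that; the hostname and port are the ones from my command above, and the exact endpoint path and payload shape are assumptions based on the llama.cpp server's OpenAI-compatible API, so check your version's docs:

```python
"""Minimal client sketch for llamafile's OpenAI-compatible chat endpoint.

Assumptions: the server was started with --host lxcllama.example.com
--port 8080 (as in the post), and it exposes the llama.cpp-style
/v1/chat/completions endpoint. Adjust for your own setup.
"""
import json
import urllib.request


def build_chat_request(base_url: str, prompt: str) -> urllib.request.Request:
    # Payload follows the OpenAI chat-completions shape that the
    # llama.cpp server accepts: a list of role/content messages.
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    # IPv6 works transparently here as long as the hostname
    # resolves to an AAAA record.
    req = build_chat_request("http://lxcllama.example.com:8080", "Hello!")
    with urllib.request.urlopen(req, timeout=60) as resp:
        reply = json.loads(resp.read())
        print(reply["choices"][0]["message"]["content"])
```

Since the name resolution is ordinary DNS, the same script reaches the server over IPv6 or IPv4, whichever the host advertises.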
Happy LLM-ing,
Craig...
--
IPv6 is the future, the future is here
http://ipv6hawaii.org/