[VicPiMakers Projects] Running the new llamafile (llama.cpp) app

Deid Reimer deid at drtr.net
Sun Apr 26 07:13:17 PDT 2026


Hey Craig, 

Why did you pick that particular LLM?

Deid   VA7REI

On Apr 25, 2026, at 8:32 a.m., Craig Miller <cvmiller at gmail.com> wrote:
>Hi All,
>
>We were chatting before the most recent NetSIG about the new llamafile
>app, which has excellent support for IPv6. The app runs a webserver
>(which is IPv6 accessible). The new llamafile app takes a -m parameter,
>which points to the gguf LLM model.
>
>*Old way*
>      ./google_gemma-3-4b-it-Q6_K.llamafile --server -v2 --host lxcllama.example.com
>
>*New way*
>      llamafile -m model.gguf --server --port 8080
>
>Find the new llamafile at:
>
>https://github.com/mozilla-ai/llamafile/releases/tag/0.10.0
>
>You can find gguf (LLM models) at:
>
>https://huggingface.co/models?library=gguf
>
>I start my llamafile using this command:
>
>     ./llamafile-0.10.0 -m Qwen3.5-9B.Q4_K_M.gguf --server --port 8080 --host lxcllama.example.com
>
>This way, any web browser at my house can access the LLM.
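
[Editor's note: beyond a browser, the server can be queried programmatically. A minimal sketch in Python, assuming the hostname and port from the command above (lxcllama.example.com:8080 -- adjust for your setup) and that the server exposes the llama.cpp-style /completion endpoint, which accepts a JSON body with "prompt" and "n_predict" fields:]

```python
# Sketch: build an HTTP completion request for a llamafile/llama.cpp server.
# Hostname, port, and prompt below are illustrative assumptions, not
# values confirmed by the original post.
import json
import urllib.request

def build_completion_request(host: str, port: int, prompt: str,
                             n_predict: int = 64) -> urllib.request.Request:
    """Build (but do not send) a POST request for the /completion endpoint."""
    body = json.dumps({"prompt": prompt, "n_predict": n_predict}).encode()
    return urllib.request.Request(
        f"http://{host}:{port}/completion",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request("lxcllama.example.com", 8080,
                               "Why is IPv6 important?")
print(req.full_url)  # http://lxcllama.example.com:8080/completion
# To actually send it: urllib.request.urlopen(req).read()
```

[Because urllib resolves the hostname normally, this works equally well when lxcllama.example.com has an AAAA record and the request goes over IPv6.]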
>
>Happy LLM-ing,
>
>Craig...
>
>-- 
>IPv6 is the future, the future is here
>http://ipv6hawaii.org/
>
>
>------------------------------------------------------------------------
>
>-- 
>Projects mailing list
>Projects at vicpimakers.ca
>http://vicpimakers.ca/mailman/listinfo/projects_vicpimakers.ca