<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>Hi Deid,</p>
    <p>Looking at the gguf models on HuggingFace:</p>
    <p><a class="moz-txt-link-freetext" href="https://huggingface.co/models?library=gguf">https://huggingface.co/models?library=gguf</a></p>
    <p>There were a few criteria I was looking for:</p>
    <ol>
      <li>Not too big, somewhere between 5 and 10 GB in size (a quick
        size check is sketched after this list)</li>
      <li>Relatively recent</li>
      <li>Doesn't core dump right away</li>
    </ol>
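    <p>For the size check, a HEAD request against the Hugging Face
      download URL reports the file size without downloading anything.
      A minimal sketch, assuming the usual
      https://huggingface.co/&lt;repo&gt;/resolve/main/&lt;file&gt;
      URL pattern; the repo and file names below are placeholders:</p>
    <pre>
# Hypothetical repo/file -- substitute the model you are eyeing.
REPO="someuser/SomeModel-GGUF"
FILE="somemodel.Q4_K_M.gguf"

# -sIL: silent, HEAD request only, follow redirects (HF hands off to a CDN).
curl -sIL "https://huggingface.co/$REPO/resolve/main/$FILE" \
  | grep -i '^content-length' | tail -1 \
  | awk '{ printf "%.1f GB\n", $2 / 1e9 }'
</pre>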
    <p>I had the best luck running the Qwen models. I am running
      Qwen2.5-VL-7B-Instruct-abliterated.Q4_K_M.gguf on my PN-50, and it
      seems to run reasonably fast. Some of the other models were quite
      slow on the PN-50.</p>
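    <p>To compare speed between models, one rough test is to time a
      short one-shot generation with the same prompt and token budget.
      A sketch, assuming llamafile accepts the usual llama.cpp-style
      -p (prompt) and -n (number of tokens) flags:</p>
    <pre>
# Same prompt and token budget for each candidate model,
# so the wall-clock times are roughly comparable.
time ./llamafile-0.10.0 -m Qwen2.5-VL-7B-Instruct-abliterated.Q4_K_M.gguf \
  -p "Briefly explain IPv6 link-local addresses." -n 64
</pre>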
    <p>Have fun!</p>
    <p>Craig...</p>
    <div class="moz-cite-prefix">On 4/26/26 07:13, Deid Reimer wrote:<br>
    </div>
    <blockquote type="cite"
      cite="mid:af03078a-062d-4443-af9c-9ff760cd00e9@drtr.net">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      <div dir="auto">Hey Craig, <br>
        <br>
      </div>
      <div dir="auto">Why did you pick that particular LLM?<br>
        <br>
      </div>
      <div dir="auto"><!-- tmjah_g_1299s -->Deid   VA7REI<!-- tmjah_g_1299e --></div>
      <div class="gmail_quote">On Apr 25, 2026, at 8:32 a.m., Craig
        Miller &lt;<a href="mailto:cvmiller@gmail.com" target="_blank"
          moz-do-not-send="true" class="moz-txt-link-freetext">cvmiller@gmail.com</a>&gt;
        wrote:
        <blockquote class="gmail_quote"
style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
          <p>Hi All,</p>
          <p>We were chatting before the most recent NetSIG about the
            new llamafile app, which has excellent support for IPv6.
            The app runs a web server that is reachable over IPv6, and
            it takes a -m parameter that points to the gguf LLM model.</p>
          <p><b>Old way</b></p>
          <pre>
./google_gemma-3-4b-it-Q6_K.llamafile --server -v2 --host lxcllama.example.com
</pre>
          <p><b>New way</b></p>
          <pre>
llamafile -m model.gguf --server --port 8080
</pre>
          <p>Find the new llamafile at:</p>
          <p>    <a class="moz-txt-link-freetext"
href="https://github.com/mozilla-ai/llamafile/releases/tag/0.10.0"
              moz-do-not-send="true">https://github.com/mozilla-ai/llamafile/releases/tag/0.10.0</a></p>
          <p>You can find gguf LLM models at:</p>
          <p>     <a class="moz-txt-link-freetext"
              href="https://huggingface.co/models?library=gguf"
              moz-do-not-send="true">https://huggingface.co/models?library=gguf</a></p>
          <p>I start my llamafile using this command:</p>
          <pre>
./llamafile-0.10.0 -m Qwen3.5-9B.Q4_K_M.gguf --server --port 8080 --host lxcllama.example.com
</pre>
          <p>This way, any web browser at my house can access the LLM.</p>
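          <p>For a quick check from another machine that the server
            really is reachable over IPv6, curl with -6 forces an IPv6
            connection. A sketch, assuming the server exposes the
            llama.cpp-style OpenAI-compatible chat endpoint:</p>
          <pre>
# Force IPv6 (-6) and send a one-line chat request to the server.
curl -6 http://lxcllama.example.com:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Say hello over IPv6"}]}'
</pre>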
          <p>Happy LLM-ing,</p>
          <p>Craig...</p>
          <pre class="moz-signature" cols="72">-- 
IPv6 is the future, the future is here
<a class="moz-txt-link-freetext" href="http://ipv6hawaii.org/"
          moz-do-not-send="true">http://ipv6hawaii.org/</a></pre>
          <pre class="blue">-- 
Projects mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Projects@vicpimakers.ca">Projects@vicpimakers.ca</a>
<a href="http://vicpimakers.ca/mailman/listinfo/projects_vicpimakers.ca"
          moz-do-not-send="true" class="moz-txt-link-freetext">http://vicpimakers.ca/mailman/listinfo/projects_vicpimakers.ca</a>
</pre>
        </blockquote>
      </div>
      <br>
      <fieldset class="moz-mime-attachment-header"></fieldset>
    </blockquote>
    <pre class="moz-signature" cols="72">-- 
IPv6 is the future, the future is here
<a class="moz-txt-link-freetext" href="http://ipv6hawaii.org/">http://ipv6hawaii.org/</a></pre>
  </body>
</html>