<div dir="ltr">Thanks Craig. I have it on my list to experiment more with self-hosting LLMs. I think there will be calls for self-hosting once AI fervor has peaked and labs have to show profitability.<br><br>Not on topic, but related to our NetSIG discussion on odd industry behaviours around LLM resource consumption:<br><br><a href="https://newsletter.pragmaticengineer.com/p/the-pulse-tokenmaxxing-as-a-weird-6b2">https://newsletter.pragmaticengineer.com/p/the-pulse-tokenmaxxing-as-a-weird-6b2</a><div><br></div><div>We&#39;re back to the days of  &quot;more K-LOCs!&quot; <br><br></div></div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Sun, Apr 26, 2026 at 8:21\u202fAM Craig Miller &lt;<a href="mailto:cvmiller@gmail.com">cvmiller@gmail.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><u></u>

  
    
  
  <div>
    <p>Hi Greg,</p>
<p>No, I haven&#39;t. I think you could run &#39;strace&#39; to see what the
      model was doing at the time, but it would be slow, and I&#39;m not
      sure it would tell you much.</p>
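If you ever do want to peek, here is a rough sketch (untested, and the process name is an assumption) of attaching strace to a running llamafile and summarizing its syscalls rather than logging every call:

```shell
# Find the llamafile server process (the name pattern is an assumption;
# adjust it to match however you launched the binary).
pid=$(pgrep -f llamafile | head -n 1)
if [ -n "$pid" ]; then
  # -c prints a syscall count summary when you hit Ctrl-C;
  # -f follows threads, which llamafile uses heavily.
  strace -c -f -p "$pid"
else
  echo "no llamafile process found"
fi
```

The -c summary keeps the slowdown smaller than a full trace, though as you say it may not reveal much about why a model core dumps.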
<p>I don&#39;t think it was a RAM issue, since the container I am
      running the LLMs in is unrestricted (it can use all of the host&#39;s
      memory, which is 32 GB), and the kernel is fairly recent (6.18.19-0-lts).</p>
<p>I didn&#39;t spend much time on it, because my objective at the
      time was to get a local LLM running, not to debug the model.</p>
    <p>Craig...</p>
    <div>On 4/26/26 07:56, Greg H wrote:<br>
    </div>
    <blockquote type="cite">
      
      <div dir="ltr">I was curious whether you do any troubleshooting for the
        models that core dump. I don&#39;t have any experience with this, and
        I&#39;m wondering if there&#39;s much you can do other than
        increasing the resources (i.e. more RAM). Maybe upgrading the
        kernel? I&#39;m guessing some models need the latest and greatest
        kernel versions to do their thing.
        <div> </div>
      </div>
      <br>
      <div class="gmail_quote">
        <div dir="ltr" class="gmail_attr">On Sun, Apr 26, 2026 at
          7:34\u202fAM Craig Miller &lt;<a href="mailto:cvmiller@gmail.com" target="_blank">cvmiller@gmail.com</a>&gt;
          wrote:<br>
        </div>
        <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
          <div>
            <p>Hi Deid,</p>
            <p>Looking at the gguf models on HuggingFace:</p>
            <p><a href="https://huggingface.co/models?library=gguf" target="_blank">https://huggingface.co/models?library=gguf</a></p>
            <p>There were a couple of parameters I was looking at:</p>
            <ol>
              <li>Not too big, somewhere between 5 and 10 GB in size</li>
              <li>Relatively recent</li>
              <li>Doesn&#39;t core dump right away</li>
            </ol>
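For criterion 1, here is a quick sketch (GNU find assumed) for checking which models you already have on disk fall in roughly that size band:

```shell
# List .gguf files between 5 and 10 GiB in the current directory.
# (GNU find: sizes with a G suffix are rounded up to whole gibibytes,
# and -printf prints the byte size next to each path.)
find . -maxdepth 1 -name '*.gguf' -size +5G -size -10G -printf '%s\t%p\n'
```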
            <p>I had the best luck running the Qwen models. I am
              running Qwen2.5-VL-7B-Instruct-abliterated.Q4_K_M.gguf on
              my PN-50, and it seems reasonably fast. Some of the
              other models were quite slow on the PN-50.</p>
            <p>Have fun!</p>
            <p>Craig...</p>
            <div>On 4/26/26 07:13, Deid Reimer wrote:<br>
            </div>
            <blockquote type="cite">
              <div dir="auto">Hey Craig, <br>
                <br>
              </div>
              <div dir="auto">Why did you pick that particular LLM?<br>
                <br>
              </div>
              <div dir="auto">Deid   VA7REI</div>
              <div class="gmail_quote">On Apr 25, 2026, at 8:32 a.m.,
                Craig Miller &lt;<a href="mailto:cvmiller@gmail.com" target="_blank">cvmiller@gmail.com</a>&gt;
                wrote:
                <blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                  <p>Hi All,</p>
                  <p>We were chatting before the most recent NetSIG
                    about the new llamafile app, which has excellent
                    support for IPv6. The app runs a webserver (which is
                    IPv6 accessible) and takes a -m parameter that
                    points to the gguf LLM model.</p>
                  <p><b> Old way</b><br>
                         ./google_gemma-3-4b-it-Q6_K.llamafile --server
                    -v2 --host <a href="http://lxcllama.example.com" target="_blank">lxcllama.example.com</a><br>
                     <b>New way</b><br>
                         llamafile -m model.gguf --server --port 8080</p>
                  <p>Find the new llamafile at:</p>
                  <p>    <a href="https://github.com/mozilla-ai/llamafile/releases/tag/0.10.0" target="_blank">https://github.com/mozilla-ai/llamafile/releases/tag/0.10.0</a></p>
                  <p>You can find gguf (LLM models) at:</p>
                  <p>     <a href="https://huggingface.co/models?library=gguf" target="_blank">https://huggingface.co/models?library=gguf</a></p>
                  <p>I start my llamafile using this command:</p>
                  <p>    ./llamafile-0.10.0 -m Qwen3.5-9B.Q4_K_M.gguf
                    --server --port 8080 --host <a href="http://lxcllama.example.com" target="_blank">lxcllama.example.com</a> </p>
                  <p>This way, any web browser at my house can access the
                    LLM.</p>
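You can also hit it from the command line. This sketch assumes llamafile exposes the llama.cpp server&#39;s /completion endpoint and its JSON fields (host and port match the command above):

```shell
# Ask the llamafile server for a short completion.
# Endpoint name and JSON fields are assumptions based on the
# llama.cpp server API that llamafile is built on.
curl -s http://lxcllama.example.com:8080/completion \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "Say hello in five words.", "n_predict": 32}'
```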
                  <p>Happy LLM-ing,</p>
                  <p>Craig...</p>
                  <pre cols="72">-- 
IPv6 is the future, the future is here
<a href="http://ipv6hawaii.org/" target="_blank">http://ipv6hawaii.org/</a></pre>
                  <pre>-- 
Projects mailing list
<a href="mailto:Projects@vicpimakers.ca" target="_blank">Projects@vicpimakers.ca</a>
<a href="http://vicpimakers.ca/mailman/listinfo/projects_vicpimakers.ca" target="_blank">http://vicpimakers.ca/mailman/listinfo/projects_vicpimakers.ca</a>
</pre>
                </blockquote>
              </div>
              <br>
              <fieldset></fieldset>
            </blockquote>
          </div>
        </blockquote>
      </div>
      <br>
      <fieldset></fieldset>
    </blockquote>
  </div>

</blockquote></div>