<div dir="ltr">I was curious if you do any troubleshooting for the models that core dump. I don't have any experience with this and I'm wondering if there's much that you can do other than increase the resources (i.e. more RAM). Maybe upgrade the kernel? Guessing some models need the latest / greatest kernel versions to do their thing. <div> </div></div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Sun, Apr 26, 2026 at 7:34\u202fAM Craig Miller <<a href="mailto:cvmiller@gmail.com">cvmiller@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><u></u>

On Sun, Apr 26, 2026 at 7:34 AM Craig Miller <cvmiller@gmail.com> wrote:
> Hi Deid,
>
> Looking at the gguf models on HuggingFace:
>
> https://huggingface.co/models?library=gguf
>
> There were a few criteria I was looking at (a fetch sketch follows
> the list):
>
> 1. Not too big, somewhere between 5 and 10 GB in size
> 2. Relatively recent
> 3. Doesn't core dump right away
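>
> One way to pull a gguf that fits these criteria is the huggingface_hub
> CLI (a sketch; the repo and file names are placeholders, not a
> specific recommendation):
>
>     # one-time install: pip install -U huggingface_hub
>     huggingface-cli download <repo-id> <model-file>.gguf --local-dir .
>
>     # quick size check against the 5-10 GB budget
>     ls -lh <model-file>.gguf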
>
> I had the best luck running the Qwen models. I am running
> Qwen2.5-VL-7B-Instruct-abliterated.Q4_K_M.gguf on my PN-50, and it
> seems to run reasonably fast. Some of the other models were quite
> slow on the PN-50.
>
> Have fun!
>
> Craig...
>
> On 4/26/26 07:13, Deid Reimer wrote:
<blockquote type="cite">
<div dir="auto">Hey Craig, <br>
<br>
</div>
<div dir="auto">Why did you pick that particular LLM?<br>
<br>
</div>
<div dir="auto">Deid VA7REI</div>
<div class="gmail_quote">On Apr 25, 2026, at 8:32 a.m., Craig
Miller <<a href="mailto:cvmiller@gmail.com" target="_blank">cvmiller@gmail.com</a>>
wrote:
<blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
>>> Hi All,
>>>
>>> We were chatting before the most recent NetSIG about the new
>>> Llamafile app, which has excellent support for IPv6. The app runs a
>>> webserver (which is IPv6 accessible). The new llamafile takes a -m
>>> parameter that points to the gguf LLM model.
>>>
>>> Old way:
>>>
>>>     ./google_gemma-3-4b-it-Q6_K.llamafile --server -v2 --host lxcllama.example.com
>>>
>>> New way:
>>>
>>>     llamafile -m model.gguf --server --port 8080
>>>
>>> Find the new llamafile at:
>>>
>>>     https://github.com/mozilla-ai/llamafile/releases/tag/0.10.0
>>>
>>> You can find gguf LLM models at:
>>>
>>>     https://huggingface.co/models?library=gguf
>>>
>>> I start my llamafile using this command:
>>>
>>>     ./llamafile-0.10.0 -m Qwen3.5-9B.Q4_K_M.gguf --server --port 8080 --host lxcllama.example.com
>>>
>>> This way, any web browser in my house can access the LLM.
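>>>
>>> The same server can also be queried from the command line (a sketch,
>>> assuming the bundled llama.cpp server's /completion endpoint is
>>> unchanged in the 0.10.0 release; the host name is the example one
>>> above):
>>>
>>>     curl http://lxcllama.example.com:8080/completion \
>>>         -H "Content-Type: application/json" \
>>>         -d '{"prompt": "Name three IPv6 transition mechanisms.", "n_predict": 128}'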
>>>
>>> Happy LLM-ing,
>>>
>>> Craig...
<pre cols="72">--
IPv6 is the future, the future is here
<a href="http://ipv6hawaii.org/" target="_blank">http://ipv6hawaii.org/</a></pre>
<pre cols="72">--
IPv6 is the future, the future is here
<a href="http://ipv6hawaii.org/" target="_blank">http://ipv6hawaii.org/</a></pre>

--
Projects mailing list
Projects@vicpimakers.ca
http://vicpimakers.ca/mailman/listinfo/projects_vicpimakers.ca