13 min read
Running local models on an M4 with 24 GB of memory
Experiments in getting usable output from local models on a standard MacBook
elixir
llm
qwen
llmstudio