Robin van Veen - AI Automation
I bought a second Mac Mini for a weird reason
Last month I did something slightly ridiculous.
I bought a second Mac Mini just to see if I could run AI models across two machines at once.
Turns out: you can.
I’m using Exo Labs to pool both machines together so they act like one larger system.
Instead of being limited by the memory of a single Mac Mini, the models can use the combined memory across both.

Which means you can run bigger models locally.
No API costs. No sending data to external servers.
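To make that concrete: exo exposes a ChatGPT-compatible HTTP API on the cluster, so an agent can talk to the pooled machines the same way it would talk to a cloud provider. The sketch below just builds an OpenAI-style request payload; the port, endpoint path, and model name are assumptions for illustration, so check them against your own exo install before sending anything.

```python
import json

# exo serves a ChatGPT-compatible API from any node in the cluster, with
# the model sharded across peers. The URL and model name here are
# assumptions for illustration -- verify them against your install.
EXO_URL = "http://localhost:52415/v1/chat/completions"

def build_request(prompt: str, model: str = "llama-3.2-3b") -> dict:
    """Build an OpenAI-style chat payload that an exo node accepts."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

payload = build_request("Summarize this client note in two bullets.")
print(json.dumps(payload, indent=2))
```

The point is that nothing in your agent code has to know there are two machines: you point it at one node's URL and the cluster handles the rest.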
That gets interesting fast for AI agents.
Cloud APIs are still great. I use them every day.
But local models start making sense when:
• You're handling sensitive client data
• You're running thousands of agent requests
• You need extremely low latency
• Or you simply want full control over the stack
The setup still has rough edges.
Some models run beautifully. Others… not so much.
But the direction is obvious.
Running capable models locally across two $700 machines would've sounded unrealistic 18 months ago.
Now it's possible.
I'm currently testing where local models actually fit inside real agent workflows.
Which models are worth hosting yourself. Which are still better in the cloud. Where the tradeoffs break.
One thing is very clear though:
Local AI is improving insanely fast.
If you're experimenting with local setups, reply to this email. Curious what you're running.
Robin