Apple for the experience. x86 for scale.
You are not selling boxes; you are selling a coordinated range of nodes. Apple offers the cleanest, most plug-and-play local experience. x86 brings price flexibility at the low end and GPU expansion at the high end. You need both.
Obsidian Personal
Apple Mac mini M2 · 16 GB unified · 1 TB
Best Apple entry
Affordable entry node that runs small local LLMs smoothly with almost no setup friction.
Unified memory behaves like VRAM, so Ollama and LM Studio feel unusually smooth for the size and price.
- Platform: Apple Silicon
- AI acceleration: Unified memory
- Model capability: 7B-8B local models
- Upgradeability: None
Silent desktop AI for personal assistants, Q&A and offline drafting.
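The 7B-8B ceiling follows from unified-memory arithmetic. A minimal back-of-envelope sketch, assuming roughly 8 bits per weight (Q8-class quantization) plus a few GB of runtime and OS overhead; all constants are illustrative assumptions, not vendor figures:

```python
# Rough sizing sketch (constants are assumptions, not benchmarks):
# a quantized model's resident footprint is roughly its weights plus
# a couple of GB of KV-cache/runtime overhead.

def model_footprint_gb(params_b: float, bits_per_weight: float = 8.0,
                       overhead_gb: float = 2.0) -> float:
    """Estimate resident memory for a params_b-billion-parameter model."""
    return params_b * bits_per_weight / 8 + overhead_gb

def fits(ram_gb: float, params_b: float, os_reserve_gb: float = 4.0) -> bool:
    """Does the model fit in unified memory, leaving room for macOS?"""
    return model_footprint_gb(params_b) <= ram_gb - os_reserve_gb

print(fits(16, 8))   # 8B model on the 16 GB Personal node -> True
print(fits(16, 13))  # 13B model -> False, hence the 7B-8B ceiling
```

At 4-bit quantization the footprint roughly halves, which is why aggressive quants can stretch a 16 GB node, at some quality cost.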
Obsidian Flex
GMKtec Ryzen 7 8845HS · 32 GB
Flexible non-Apple budget node
A lower-cost Windows/Linux node with more RAM flexibility and a strong integrated GPU for local AI.
This is the strongest value-oriented alternative if you want higher RAM headroom without moving into Apple pricing.
- Platform: x86 Ryzen
- AI acceleration: Radeon 780M iGPU
- Model capability: 7B-13B local models
- Upgradeability: Moderate
Budget-conscious local AI, Linux-first setups and users who want more tweakability.
Obsidian Core
Apple Mac mini M4 · 32 GB unified
Minimum serious sellable configuration
Balanced Apple-first node for continuous local AI workloads, retrieval and multi-agent prototypes.
The base M4 starts around 929 EUR, but the 32 GB build is the first configuration worth selling for sustained local AI.
- Platform: Apple Silicon
- AI acceleration: Neural Engine
- Model capability: 7B-13B local models
- Upgradeability: None
The best balance of simplicity, silence and useful local model headroom.
Obsidian Pro
Apple Mac mini M4 Pro · 64 GB unified
Best Apple high-end
High-memory compact node for larger local models, multi-agent orchestration and heavier professional workflows.
Apple Silicon is fixed at purchase time. RAM and GPU are not field-upgradeable, so this is the tier to buy once and size correctly.
- Platform: Apple Silicon
- AI acceleration: Neural Engine + stronger GPU
- Model capability: 13B-30B local models
- Upgradeability: None
Team-level local AI, orchestration-heavy work and advanced secure inference on the desk.
Obsidian Edge
Minisforum AI X1 Pro · Ryzen AI 9
Scale-first non-Apple path
Expandable node for heavier inference, shared usage and future GPU-backed scaling beyond the Mac mini ceiling.
This is the node to choose when you care more about expansion, eGPU support and scale than absolute simplicity.
- Platform: x86 Ryzen AI
- AI acceleration: NPU + eGPU path
- Model capability: 13B-70B+ with expansion
- Upgradeability: High
Server-grade local AI, multi-user deployments and the path toward 70B-scale setups.
Reality check
Mac minis make excellent AI appliances precisely because they are silent, stable and friction-free. The trade-off is that they do not expand: RAM and GPU are fixed at purchase. That is why the Apple line has to be sold with the right memory from the start.
It is also why the 32 GB M4 configuration is the first genuinely serious Core, and why the x86 Edge node sits at the top of the stack.
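Because Apple memory is fixed at purchase, tier selection reduces to sizing the target model up front. A hedged sketch of that mapping, using the lineup's RAM figures and a rough rule of thumb of ~8 bits per weight plus fixed overhead and OS reserve (all constants are assumptions, not benchmarks):

```python
# RAM per tier (GB), taken from the capability summary table.
LINEUP = [("Personal", 16), ("Flex", 32), ("Core", 32), ("Pro", 64), ("Edge", 96)]

def smallest_tier(params_b: float, bits_per_weight: float = 8.0,
                  overhead_gb: float = 2.0, os_reserve_gb: float = 4.0) -> str:
    """Return the first tier whose RAM holds the model with headroom."""
    need_gb = params_b * bits_per_weight / 8 + overhead_gb
    for name, ram_gb in LINEUP:
        if ram_gb - os_reserve_gb >= need_gb:
            return name
    return "Edge + eGPU expansion"

for p in (8, 13, 30, 70):
    print(f"{p}B -> {smallest_tier(p)}")
# 8B -> Personal, 13B -> Flex, 30B -> Pro, 70B -> Edge
```

Dropping to 4-bit quants roughly halves `need_gb` and shifts everything about one tier down; longer contexts or several resident models push it back up, which is why sizing conservatively at purchase matters.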
Packaging rule
Apple is the right path when the priority is silence, simplicity and a premium desktop experience. x86 is the right path when the priority is expansion, shared usage and future GPU-backed scaling.
- Apple = experience + simplicity
- x86 = power + scalability
- The Obsidian line should present both as first-class nodes
Capability summary
| Capability | Personal: Apple Mac mini M2 · 16 GB unified · 1 TB (579,00 €) | Flex: GMKtec Ryzen 7 8845HS · 32 GB (359,99 €) | Core: Apple Mac mini M4 · 32 GB unified (1143,97 €) | Pro: Apple Mac mini M4 Pro · 64 GB unified (1903,99 €) | Edge: Minisforum AI X1 Pro · Ryzen AI 9 (799,00 €) |
|---|---|---|---|---|---|
| Architecture | Apple Silicon | x86 Ryzen | Apple Silicon | Apple Silicon | x86 Ryzen AI |
| Memory | 16 GB | 32 GB | 32 GB | 64 GB | 96 GB |
| GPU path | Integrated only | Integrated only | Integrated only | Integrated only | eGPU-ready |
| AI acceleration | Unified memory | Radeon 780M iGPU | Neural Engine | Neural Engine + stronger GPU | NPU + eGPU path |
| Model capability | 7B-8B local models | 7B-13B local models | 7B-13B local models | 13B-30B local models | 13B-70B+ with expansion |
| Upgradeability | None | Moderate | None | None | High |
| Noise / efficiency | 5/5 quiet | 4/5 quiet | 5/5 quiet | 5/5 quiet | 3/5 workstation |
| Best use | Silent desktop AI for personal assistants, Q&A and offline drafting. | Budget-conscious local AI, Linux-first setups and users who want more tweakability. | The best balance of simplicity, silence and useful local model headroom. | Team-level local AI, orchestration-heavy work and advanced secure inference on the desk. | Server-grade local AI, multi-user deployments and the path toward 70B-scale setups. |