MatX: faster chips for LLMs
- Be the compute platform for AGI.
- Make AI better, faster, and cheaper by building more powerful hardware.
- We target just LLMs, whereas GPUs target all ML models.
- LLMs are different, so our hardware and software can be much simpler.
- We combine deep domain experience, a few key ideas, and a lot of careful engineering.
- Strong support from prominent investors:
- Outset Capital
- SV Angel
- Rajiv Khemani: Intel, Cavium, Innovium, AIspace, Auradine
- Amit Singh: Oracle, Google, Palo Alto Networks
- Leading LLM and AI researchers:
- Swyx: Latent.Space, AI.Engineer, AWS, Netlify, Temporal