MatX: faster chips for LLMs
Mission:
- Be the compute platform for AGI.
- Make AI better, faster, and cheaper by building more powerful hardware.
Approach:
- We target just LLMs, whereas GPUs target all ML models.
- LLMs are different. Our hardware and software can be much simpler.
- We combine deep domain experience, a few key ideas, and a lot of careful engineering.
Team:
Traction:
- Strong support from prominent LLM companies.
Investors include:
- Outset Capital
- SV Angel
- Homebrew
- Rajiv Khemani: Sun, Intel, Cavium, Innovium, Auradine
- Amit Singh: Oracle, Google, Palo Alto Networks
- Leading LLM and AI researchers:
  - Swyx: Latent.Space, AI.Engineer, AWS, Netlify, Temporal
We're hiring!