Blog
Phind's new "glow up"
The details behind making the new Phind frontend fast, beautiful, and easy to use.

How we built the models for Phind 2
Our goal was to build a system that can handle tricky queries, incorporate multiple data sources, display rich, interactive outputs, and act like a robust, next-gen search-and-answer platform. Here is an inside look at why we built it this way, what we learned, and how we overcame the big, gnarly challenges that come with post-training our own large language model.

Phind 2: Reinventing AI search with visual answers and multi-step reasoning
Phind now seeks out additional information as needed and renders answers in a visual format with images, diagrams, cards, and interactive widgets.

Introducing Phind-405B and faster, high-quality AI answers for everyone
We're introducing a new flagship model, Phind-405B, along with a new Phind Instant model that offers extremely fast search speeds for all of your programming and curiosity questions.

Introducing Phind-70B – closing the code quality gap with GPT-4 Turbo while running 4x faster
We're excited to announce Phind-70B, our largest and most performant model to date. Running at up to 80 tokens per second, it offers the best overall user experience for developers amongst state-of-the-art models.

Phind Model beats GPT-4 at coding, with GPT-3.5-like speed and 16k context
We're excited to announce that Phind now defaults to our own model, which matches or exceeds GPT-4's coding abilities while running 5x faster. You can now get high-quality answers to technical questions in 10 seconds instead of 50.

Beating GPT-4 on HumanEval with a fine-tuned CodeLlama-34B
We've fine-tuned CodeLlama-34B and CodeLlama-34B-Python on an internal Phind dataset; the resulting models achieve 67.6% and 69.5% pass@1 on HumanEval, respectively, while GPT-4 achieves 67%. We've applied OpenAI's decontamination methodology to our dataset to ensure result validity.
