Forestwalk Labs is helping teams build better products on LLMs, faster.

Allen and Jenn are obsessed with making remarkable products, but today’s workflows for iterating on LLM-powered software are slow and janky.

So, we’re researching how to do better. We’re learning from the best AI engineering teams, finding gaps, sharing what we learn, and building product experiments.

To start, we’ve prototyped a tool, codenamed ScoutEvals, for running evals and grading LLM-powered products. If evals, testing, and prompt iteration are important to your team, email us or book a time to chat.

We’re just getting started. There sure is a lot to do.

If you’d like to be notified when we have more to share, sign up.