I want to know the tools and methods you use for the observability and monitoring of your ML (LLM) performance and responses in production.
Hey, we recently rolled out Nebuly, a tool focused on LLM observability in production. Thought it might be of interest to some here, and potentially useful for your needs. Here are some highlights:
- Deep User Analytics: Goes beyond thumbs-up/down feedback to analyze how users actually interact with the LLM.
- Easy Integration: Simply include our API key and a user_id parameter in your model call.
- User Journeys: Gain insights into user interactions with LLMs using autocapture.
- FAQ Insights: Identifies the most frequently asked questions by LLM users.
- Cost Monitoring: Tracks LLM spend so you can balance user satisfaction against cost and ROI.
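To make the "Easy Integration" point above concrete, here is a minimal sketch of what passing an API key and a `user_id` alongside a model call might look like. This is an illustrative assumption, not the actual Nebuly SDK: the field names (`nebuly_api_key`, `user_id`, `metadata`) and the payload shape are hypothetical.

```python
# Hypothetical sketch of tagging an LLM request with per-user
# observability metadata. Field names are illustrative assumptions,
# not the real Nebuly SDK interface.

def build_llm_request(prompt: str, user_id: str, api_key: str) -> dict:
    """Build a model-call payload carrying the tracking fields an
    observability layer would read (user journeys, FAQ insights, cost)."""
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": prompt}],
        # Extra fields an observability backend could capture:
        "metadata": {
            "nebuly_api_key": api_key,  # assumed key name
            "user_id": user_id,         # ties the call to one user's journey
        },
    }

payload = build_llm_request("What is user analytics?", "user-42", "nb-test-key")
print(payload["metadata"]["user_id"])  # → user-42
```

The idea is that the observability layer autocaptures these calls, so per-user analytics come from the `user_id` tag rather than from any extra instrumentation in your application code.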
For a deeper dive, here’s our latest blog post on the topic: What is User Analytics for LLMs.