I want to know the tools and methods you use for the observability and monitoring of your ML (LLM) performance and responses in production.

    • synthphreak@alien.topB

      TBH I didn’t completely understand the question, but it is clear that you didn’t either.

  • kennysong@alien.topB

    If you’re open to using an open source library, you can use LangCheck to monitor and visualize text quality metrics in production.

    For example, you can compute & plot the toxicity of user prompts and LLM responses from your logs. (A very simple example here.)

    (Disclaimer: I’m one of the contributors to LangCheck.)
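
    For readers who want to try this, here is a minimal sketch based on the usage shown in the LangCheck README (`langcheck.metrics.toxicity` and threshold comparisons on the returned metric); treat the exact names as assumptions and check the current docs:

    ```python
    # Minimal sketch of scoring logged prompts/responses with LangCheck.
    # Assumes the langcheck.metrics.toxicity API shown in the project README;
    # verify exact names against the current documentation.
    # pip install langcheck
    import langcheck

    # Prompts and responses pulled from your production logs (placeholder data)
    prompts = [
        "How do I reset my password?",
        "Write an insult about my coworker.",
    ]
    responses = [
        "You can reset it from the account settings page.",
        "Sorry, I can't help with that.",
    ]

    # Toxicity scores range from 0 (benign) to 1 (toxic)
    prompt_toxicity = langcheck.metrics.toxicity(prompts)
    response_toxicity = langcheck.metrics.toxicity(responses)
    print(prompt_toxicity)
    print(response_toxicity)

    # The README shows threshold comparisons on metric results, which you can
    # turn into alerts or CI-style assertions over a batch of logs
    assert response_toxicity < 0.5
    ```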

  • scorpfromhell@alien.topB

    Just trying to understand the term “observability” here, in the context of LLMs.

    What is observed?

    Why is it observed?

  • Traditional_Swan_326@alien.topB

    Hi there, Langfuse founder here. We’re building open source (MIT) observability & analytics for LLM applications. You can instrument your LLM via our SDKs (JS/TS & Python) or integrations (e.g. LangChain) and collect all the data you want to observe. The product is model-agnostic & customizable.

    We’ve pre-built dashboards you can use to analyze e.g. cost, latency, and token usage in detailed breakdowns.

    We’re now starting to build (model-based) evaluations to get a grip on quality. You can also manually ingest scores via our SDKs, and export everything as CSV or via the GET API.

    Would love to hear feedback from folks here on what we’ve built. Feel free to message me here or at contact at langfuse dot com.

    We have an open demo so you can have a look around a project with sample data.
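
    For context, here is a rough sketch of manual instrumentation with the Langfuse Python SDK; the trace / generation / score method names follow the SDK docs around the time of this thread and may have changed since, so treat them as assumptions and check the current documentation:

    ```python
    # Rough sketch of manually instrumenting one LLM request with Langfuse.
    # Method names (trace, generation, score) are assumptions based on the
    # SDK docs at the time; confirm against the current API reference.
    # pip install langfuse
    from langfuse import Langfuse

    langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY from env

    # One trace per user request; nest the LLM call as a "generation"
    trace = langfuse.trace(name="support-chat", user_id="user-123")
    generation = trace.generation(
        name="answer",
        model="gpt-3.5-turbo",
        input=[{"role": "user", "content": "How do I reset my password?"}],
    )

    response_text = "You can reset it from the account settings page."  # your actual LLM call here
    generation.end(output=response_text)

    # Manually ingest a quality score (e.g. user feedback or an evaluator result)
    trace.score(name="user-feedback", value=1)

    langfuse.flush()  # make sure events are sent before the process exits
    ```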

  • Serquet1@alien.topB

    Hey, we recently rolled out Nebuly, a tool focused on LLM observability in production. Thought it might be of interest to some here, and potentially useful for your needs. Here are some highlights:
    - Deep User Analytics: More insightful than thumbs up/down, it delves into LLM user interactions.
    - Easy Integration: Simply include our API key and a user_id parameter in your model call (see the sketch below).
    - User Journeys: Gain insights into user interactions with LLMs using autocapture.
    - FAQ Insights: Identifies the most frequently asked questions by LLM users.
    - Cost Monitoring: Strives to find the sweet spot between user satisfaction and ROI.

    For a deeper dive, here’s our latest blog post on the topic: What is User Analytics for LLMs.
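
    The comment doesn’t show the actual SDK surface, so below is a generic, hypothetical sketch of the integration pattern described above (attach an API key once, tag every model call with a user_id so interactions can be grouped per user). The endpoint, payload shape, and helper name are placeholders, not Nebuly’s documented API:

    ```python
    # Hypothetical illustration of per-user LLM analytics: call the model,
    # then forward the interaction plus a user_id to an analytics backend.
    # ANALYTICS_ENDPOINT and the payload shape are placeholders.
    import os
    import requests
    from openai import OpenAI

    client = OpenAI()
    ANALYTICS_ENDPOINT = "https://example.com/interactions"  # placeholder URL
    ANALYTICS_API_KEY = os.environ.get("ANALYTICS_API_KEY", "")

    def tracked_chat(user_id: str, prompt: str) -> str:
        """Call the LLM, then log the interaction with its user_id for analytics."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content
        requests.post(
            ANALYTICS_ENDPOINT,
            headers={"Authorization": f"Bearer {ANALYTICS_API_KEY}"},
            json={"user_id": user_id, "prompt": prompt, "response": answer},
            timeout=5,
        )
        return answer

    print(tracked_chat("user-123", "How do I reset my password?"))
    ```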

  • Commercial_Baker_463@alien.topB

    I am a data scientist at Fiddler. Fiddler AI (https://www.fiddler.ai) provides a nice set of tools for LLMOps and MLOps observability. It supports pre-production and post-production monitoring for both predictive models and generative AI.
    Specifically, Fiddler Auditor (https://github.com/fiddler-labs/fiddler-auditor) is an open source package that can be used to evaluate LLMs and NLP models. In addition to that, Fiddler provides helpful tools for monitoring and visualization of NLP data (e.g., text embeddings), which can be used for data drift detection, user/model feedback analysis, and evaluation of safety metrics as well as custom metrics.
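
    For illustration, here is a generic sketch of embedding-based drift detection, the technique described above. This is not Fiddler’s API, just a minimal numpy example that compares a reference window of prompt embeddings against a production window:

    ```python
    # Generic embedding-drift check: compare the centroid of a reference sample
    # of prompt embeddings to the centroid of recent production embeddings.
    # The embeddings below are random placeholders; in practice they would come
    # from your embedding model applied to logged prompts.
    import numpy as np

    rng = np.random.default_rng(0)
    reference = rng.normal(loc=0.0, scale=1.0, size=(500, 384))   # e.g. last month's prompts
    production = rng.normal(loc=0.3, scale=1.0, size=(500, 384))  # e.g. today's prompts

    def centroid_cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine distance between the mean embeddings of two text samples."""
        ca, cb = a.mean(axis=0), b.mean(axis=0)
        cos = np.dot(ca, cb) / (np.linalg.norm(ca) * np.linalg.norm(cb))
        return 1.0 - float(cos)

    drift = centroid_cosine_distance(reference, production)
    print(f"centroid cosine distance: {drift:.3f}")
    if drift > 0.1:  # threshold chosen for illustration only
        print("possible data drift: alert and inspect recent prompts")
    ```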