Starts with evaluation metric

The starts with evaluation metric measures whether the rows in the prediction start with a specified substring.

Metric details

Starts with is a content validation metric that uses string-based functions to analyze and validate generated LLM output text. The metric is available only when you use the Python SDK to calculate evaluation metrics.
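
The SDK call itself varies by environment, so the following minimal sketch illustrates only the per-row string check that the metric performs; it is not the Python SDK API, and the substring and sample predictions are hypothetical.

  # Per-row check behind the starts with metric (illustration only, not the SDK API).
  # The substring and sample predictions are hypothetical.
  predictions = [
      "Summary: sales rose 4% in Q3.",
      "Summary: churn fell slightly.",
      "Sales were flat quarter over quarter.",
  ]
  substring = "Summary:"

  # Each row scores 1.0 if it starts with the substring, 0.0 otherwise.
  row_scores = [1.0 if p.startswith(substring) else 0.0 for p in predictions]
  print(row_scores)  # [1.0, 1.0, 0.0]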

Scope

The starts with metric evaluates generative AI assets only.

  • Types of AI assets: Prompt templates
  • Generative AI tasks:
    • Text summarization
    • Content generation
    • Question answering
    • Entity extraction
    • Retrieval augmented generation (RAG)
  • Supported languages: English

Scores and values

The starts with metric score indicates whether the prediction rows start with the specified substring.

  • Range of values: 0.0-1.0
  • Ratios:
    • At 0: None of the prediction rows start with the specified substring.
    • At 1: All of the prediction rows start with the specified substring.
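
Scores between 0 and 1 arise when only some rows pass the check. A minimal sketch, assuming the aggregate score is the fraction of prediction rows that start with the substring (an assumption for illustration, not documented SDK behavior):

  # Hedged sketch: assumes the aggregate score is the fraction of rows
  # that pass the per-row check. The data values are hypothetical.
  predictions = [
      "Answer: Paris",
      "Answer: Berlin",
      "I am not sure.",
  ]
  substring = "Answer:"

  passing = sum(1 for p in predictions if p.startswith(substring))
  score = passing / len(predictions)
  print(round(score, 2))  # 0.67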

Settings

  • Thresholds:
    • Lower limit: 0
    • Upper limit: 1
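
The listed limits span the full score range. As a hedged illustration, you might tighten the lower limit so that evaluations where too few rows pass the check are flagged; the 0.8 value below is a hypothetical configuration, not a documented default.

  # Hedged sketch of threshold checking; the 0.8 lower limit is a
  # hypothetical configuration, not a documented default.
  lower_limit = 0.8
  score = 0.67  # example aggregate score from the previous sketch

  if score < lower_limit:
      print(f"starts with score {score} is below the lower limit {lower_limit}")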

Parent topic: Evaluation metrics