Length greater than evaluation metric
The Length greater than metric measures whether the length of each row in the prediction is greater than a specified maximum value.
Metric details
Length greater than is a content validation metric that uses string-based functions to analyze and validate generated LLM output text. The metric is available only when you use the Python SDK to calculate evaluation metrics.
Scope
The length greater than metric evaluates generative AI assets only.
- Types of AI assets: Prompt templates
- Generative AI tasks:
- Text summarization
- Content generation
- Question answering
- Entity extraction
- Retrieval augmented generation (RAG)
- Supported languages: English
Scores and values
The length greater than metric score indicates whether the length of each row in the prediction is greater than a specified maximum value.
- Range of values: 0.0-1.0
- Ratios:
- At 0: The length of the row in the prediction is not greater than the specified value.
- At 1: The length of the row in the prediction is greater than the specified value.
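The per-row scoring logic can be sketched in plain Python. This is an illustrative sketch of the comparison the metric performs, not the SDK's actual API; the function name `length_greater_than` and the `max_length` parameter are hypothetical.

```python
def length_greater_than(predictions, max_length):
    """Score each prediction row: 1.0 if its character length is
    greater than max_length, otherwise 0.0 (hypothetical helper,
    not the SDK API)."""
    return [1.0 if len(p) > max_length else 0.0 for p in predictions]

rows = ["short", "a much longer generated response"]
scores = length_greater_than(rows, max_length=10)
print(scores)  # per-row scores, each 0.0 or 1.0
```

Averaging the per-row scores yields an aggregate value in the 0.0-1.0 range shown above.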
Settings
- Thresholds:
- Lower limit: 0
- Upper limit: 1
Parent topic: Evaluation metrics