
IR_evaluation: Information retrieval evaluation metrics in pure Python
Information Retrieval Evaluation
This project provides simple, tested, pure-Python implementations of popular information retrieval metrics without any library dependencies (not even NumPy!). The source code is clear and easy to understand, and all functions have pydoc help strings.
The metrics can be used to determine the quality of rankings that are returned by a retrieval or recommender system.
Installation
Requires: Python >=3.11
`ir_evaluation` can be installed from PyPI with:

```
pip install ir_evaluation
```
Usage
Metric functions generally accept the following arguments:

- `actual` (list[int]): A list of ground-truth relevant items.
- `predicted` (list[int]): A list of predicted items, ordered by relevance.
- `k` (int): The number of top predictions to consider.

Functions return the computed metric value as a float.
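A minimal sketch of this calling convention, using made-up data and assuming positional arguments in the order listed above:

```python
from ir_evaluation.metrics import precision, recall

actual = [1, 2, 3]           # ground-truth relevant items
predicted = [3, 5, 1, 6, 2]  # predicted items, ordered by relevance
k = 3                        # evaluate only the top-3 predictions

print(recall(actual, predicted, k))     # returns a float
print(precision(actual, predicted, k))  # returns a float
```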
Unit tests
Unit tests with easy-to-follow scenarios and sample data are included. Run them with:

```
uv run pytest
```
Metrics
- Recall
- Precision
- F1 Score
- Average Precision (AP)
- Mean Average Precision (MAP)
- Normalized Discounted Cumulative Gain (nDCG)
- Reciprocal Rank (RR)
- Mean Reciprocal Rank (MRR)
Recall
Recall is defined as the ratio of the total number of relevant items retrieved within the top-k predictions to the total number of relevant items in the entire database.
Usage scenario: Prioritize returning all relevant items from the database. Early retrieval stages, where many candidates are returned, should focus on this metric.
```python
from ir_evaluation.metrics import recall
```
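A minimal sketch of recall@k on made-up data, assuming the (actual, predicted, k) convention from the Usage section:

```python
from ir_evaluation.metrics import recall

actual = [1, 2, 3]           # 3 relevant items in total
predicted = [3, 5, 1, 6, 2]  # the top-3 predictions contain 2 of them (3 and 1)

print(recall(actual, predicted, 3))  # 2/3 under the definition above
```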
Precision
Precision is defined as the ratio of the total number of relevant items retrieved within the top-k predictions to the total number of returned items (k).
Usage scenario: Minimize false positives in predictions. Later ranking stages should focus on this metric.
```python
from ir_evaluation.metrics import precision
```
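The same made-up data illustrates precision@k, which divides by k rather than by the total number of relevant items (again assuming the (actual, predicted, k) convention):

```python
from ir_evaluation.metrics import precision

actual = [1, 2, 3]
predicted = [3, 5, 1, 6, 2]  # top-3 is [3, 5, 1]: 2 of the 3 returned items are relevant

print(precision(actual, predicted, 3))  # 2/3 under the definition above
```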
F1 Score
The F1-score is calculated as the harmonic mean of precision and recall, providing a balanced view of a system's performance by taking both into account.
Usage scenario: Use when finding all relevant documents is just as important as minimizing irrelevant ones (e.g. in information retrieval).
```python
from ir_evaluation.metrics import f1_score
```
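A sketch checking f1_score against the harmonic-mean formula, on made-up data and assuming the (actual, predicted, k) convention:

```python
from ir_evaluation.metrics import f1_score, precision, recall

actual = [1, 2, 3, 4]
predicted = [1, 9, 2, 8, 7]
k = 5

p = precision(actual, predicted, k)  # 2 relevant items in the top-5 -> 2/5
r = recall(actual, predicted, k)     # 2 of the 4 relevant items found -> 1/2

print(f1_score(actual, predicted, k))
print(2 * p * r / (p + r))  # should match the value above, per the definition
```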
Average Precision (AP)
Average Precision is calculated as the mean of precision values at each rank where a relevant item is retrieved within the top-k predictions.
Usage scenario: Evaluates how well relevant items are ranked within the top-k returned list.
```python
from ir_evaluation.metrics import average_precision
```
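A sketch showing that AP rewards rankings that place relevant items early (made-up data; exact values depend on the implementation's normalization, so none are asserted here):

```python
from ir_evaluation.metrics import average_precision

actual = [1, 2, 3]
hits_early = [1, 2, 9, 8, 3]  # relevant items concentrated near the top
hits_late = [9, 8, 1, 2, 3]   # same relevant items, retrieved later

# precision is higher at the ranks where hits occur in the first list,
# so its AP@5 should be higher than the second list's
print(average_precision(actual, hits_early, 5))
print(average_precision(actual, hits_late, 5))
```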
Mean Average Precision (MAP)
MAP is the mean of the Average Precision (AP - see above) scores computed for multiple queries.
Usage scenario: Reflects overall AP performance across multiple queries. A good holistic metric that balances the trade-off between recall and precision.
```python
from ir_evaluation.metrics import mean_average_precision
```
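A sketch of what MAP computes, averaging per-query average_precision by hand; the exact input format mean_average_precision expects (e.g. parallel lists of per-query lists) should be checked in its pydoc:

```python
from ir_evaluation.metrics import average_precision

# (actual, predicted) pairs for two hypothetical queries
queries = [
    ([1, 2, 3], [3, 5, 1, 6, 2]),
    ([4, 7], [9, 4, 8, 7, 1]),
]
k = 3

# MAP@k is the mean of the per-query AP@k values
map_at_k = sum(average_precision(actual, predicted, k) for actual, predicted in queries) / len(queries)
print(map_at_k)
```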
Normalized Discounted Cumulative Gain (nDCG)
nDCG evaluates the quality of a predicted ranking by comparing it to an ideal ranking (i.e., perfect ordering of relevant items). It accounts for the position of relevant items in the ranking, giving higher weight to items appearing earlier.
Usage scenario: Prioritize returning relevant items higher in the returned top-k list. A good holistic metric.
```python
from ir_evaluation.metrics import ndcg
```
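A sketch showing nDCG's sensitivity to position, on made-up data with binary relevance and the same (actual, predicted, k) convention (exact values depend on the implementation, so none are asserted):

```python
from ir_evaluation.metrics import ndcg

actual = [1, 2, 3]
well_ranked = [1, 2, 3, 8, 9]    # relevant items at the top
poorly_ranked = [8, 9, 1, 2, 3]  # same items, pushed down the list

# the first ordering should score higher, since later positions are discounted
print(ndcg(actual, well_ranked, 5))
print(ndcg(actual, poorly_ranked, 5))
```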
Reciprocal Rank (RR)
Reciprocal Rank (RR) assigns a score based on the reciprocal of the rank at which the first relevant item is found.
Usage scenario: Useful when the topmost recommendation holds significant value. Use this when users are presented with one or very few returned results.
```python
from ir_evaluation.metrics import reciprocal_rank
```
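A minimal sketch on made-up data, assuming the (actual, predicted, k) convention:

```python
from ir_evaluation.metrics import reciprocal_rank

actual = [7]
predicted = [3, 5, 7, 6, 2]  # the first relevant item appears at rank 3

print(reciprocal_rank(actual, predicted, 5))  # 1/3 under the definition above
```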
Mean Reciprocal Rank (MRR)
MRR calculates the mean of the Reciprocal Rank (RR) scores for a set of queries.
Usage scenario: Reflects overall RR performance across multiple queries.
```python
from ir_evaluation.metrics import mean_reciprocal_rank
```
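A sketch of what MRR computes, averaging per-query reciprocal_rank by hand; as with MAP, check mean_reciprocal_rank's pydoc for the exact input format it expects:

```python
from ir_evaluation.metrics import reciprocal_rank

# (actual, predicted) pairs for two hypothetical queries
queries = [
    ([1, 2], [9, 1, 8, 2, 7]),  # first hit at rank 2 -> RR = 1/2
    ([4], [4, 6, 5, 3, 2]),     # first hit at rank 1 -> RR = 1
]
k = 5

# MRR@k is the mean of the per-query RR@k values
mrr_at_k = sum(reciprocal_rank(actual, predicted, k) for actual, predicted in queries) / len(queries)
print(mrr_at_k)  # 0.75 under the definition above
```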
Online Resources
- Pinecone - Evaluation Measures in Information Retrieval
- Spot Intelligence - Mean Average Precision