A Benchmark Framework for Anomaly Detection
Welcome to Open Anomaly Benchmark!
Open Anomaly Benchmark (OAB) lets you evaluate your anomaly detection algorithm on standard datasets or your own datasets in a reproducible way. It focuses on tabular and image data for unsupervised and semi-supervised anomaly detection.

OAB’s features include:
- Python Package: Simply pip-install OAB in your environment, and you can test your algorithm or dataset in no time (see the quickstart sketch after this list).
- Reproducibility: All operations performed on a dataset are stored in a file so that they, like the experiment itself, can easily be reproduced.
- Datasets: Standard benchmark datasets as well as other real-life datasets are readily available and can be loaded with a single command.
- Metrics: OAB comes with all important anomaly detection metrics.
- Extensibility: If the functionality you need is not provided by OAB, it can simply be included in the pipeline without compromising reproducibility.
- Benchmarking: Submit your algorithm’s results and see how it performs compared to other SOTA anomaly detection algorithms!
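
To give a concrete feel for the workflow described above, here is a minimal quickstart sketch. The module name `oab`, the functions `load_dataset` and `roc_auc`, and the dataset name `"shuttle"` are illustrative assumptions rather than OAB's documented API; consult the package documentation for the actual calls.

```python
# Hypothetical quickstart sketch: `oab`, `load_dataset`, `roc_auc`, and the
# dataset name "shuttle" are assumed names for illustration, not OAB's real API.
from sklearn.ensemble import IsolationForest

import oab  # assumed import name after `pip install oab`

# Load a standard tabular dataset with one command (assumed function name).
x_train, x_test, y_test = oab.load_dataset("shuttle", setting="unsupervised")

# Fit any anomaly detector; here scikit-learn's IsolationForest as an example.
detector = IsolationForest(random_state=0).fit(x_train)
scores = -detector.score_samples(x_test)  # higher score = more anomalous

# Evaluate with one of OAB's metrics (assumed function name).
print(oab.roc_auc(y_test, scores))
```

The same pattern is intended to work for your own datasets and detectors: load or register the data, score the test samples, and hand the scores to OAB's metrics.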
Our Leaderboard keeps track of the best anomaly detection algorithms. You can submit your algorithm’s results and compete for the top spot in the ranking below.