Python package · Actively running

FIMeval

Project Brief · Updated May 8, 2026

FIMeval is a framework for evaluating flood-inundation mapping (FIM) predictions against benchmark databases. It compares model outputs with reference products using standardized metrics and repeatable preprocessing logic.

Python · Benchmarking · Evaluation · FIM

What It Does

  • Standardizes FIM model evaluation with multi-metric benchmarking (CSI, FAR, POD, and related skill scores).
  • Supports benchmark comparison across 200+ real and synthetic flood events.
  • Reduces manual preprocessing and evaluation errors.
  • Provides consistent quality assurance for flood inundation predictions at scale.
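The skill scores listed above (CSI, POD, FAR) are standard contingency-table metrics computed from cell-by-cell agreement between a predicted and a benchmark flood extent. The sketch below shows how they are derived from binary rasters; the function name and signature are illustrative, not FIMeval's actual API.

```python
import numpy as np

def contingency_metrics(predicted, benchmark):
    """Compute CSI, POD, and FAR from binary flood-extent rasters.

    `predicted` and `benchmark` are same-shape arrays of 0/1
    (dry/wet) cells. Illustrative only; not FIMeval's API.
    """
    pred = np.asarray(predicted, dtype=bool)
    bench = np.asarray(benchmark, dtype=bool)
    tp = np.sum(pred & bench)   # hits: wet in both model and benchmark
    fp = np.sum(pred & ~bench)  # false alarms: wet only in the model
    fn = np.sum(~pred & bench)  # misses: wet only in the benchmark
    return {
        "CSI": tp / (tp + fp + fn),  # critical success index
        "POD": tp / (tp + fn),       # probability of detection
        "FAR": fp / (tp + fp),       # false alarm ratio
    }

# Toy 2x3 extents: 2 hits, 1 false alarm, 1 miss
metrics = contingency_metrics([[1, 1, 0], [0, 1, 0]],
                              [[1, 0, 0], [0, 1, 1]])
```

For the toy grids above, CSI = 2/4, POD = 2/3, and FAR = 1/3. A perfect prediction gives CSI = POD = 1 and FAR = 0.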

Why It Matters

Evaluation is where flood models earn trust. FIMeval makes comparison workflows transparent, repeatable, and easy to connect to peer-reviewed research. With over 15,000 downloads, it has established itself as a community standard for systematic quality assurance across a wide range of flood scenarios.

Related Work