diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 55d2281..0271ff3 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -93,6 +93,13 @@ jobs:
           uv run coverage report -m --fail-under=80
           uv run coverage html
           uv run coverage xml
+      - name: Upload coverage to Codecov
+        uses: codecov/codecov-action@v5
+        with:
+          token: ${{ secrets.CODECOV_TOKEN }}
+          files: ./coverage.xml
+          fail_ci_if_error: true
+          slug: euranova/muppet
       - name: Upload HTML report
         uses: actions/upload-artifact@v4
         with:
diff --git a/README.md b/README.md
index 1f14191..6374c01 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
+
+  Principle •
+  Quick Start •
+  Installation •
+  Docs
+
+ [](https://pypi.org/project/Muppet-XAI/) [](https://pypi.org/project/Muppet-XAI/) [](https://codecov.io/gh/euranova/muppet)
@@ -31,7 +38,7 @@ The perturbation-based methods follow four steps:
4. Finally, aggregate the attributions into the final local explanation, such as feature importance, heat-maps, ...
+
+
+
-
+
+
Muppet is designed for simplicity and transparency: it enables easy explanation of ML models with minimal coding. As shown in the figure below, one simply initializes the desired explainer with the model under investigation and provides the input data; Muppet does the rest to generate an explanation. The format of the generated explanation depends on the chosen explainer (e.g., a saliency map for images). Muppet components are implemented to support diverse data modalities and can therefore be shared among different explainers.
-
+
As an open-source project, Muppet is designed with modularity in mind, taking advantage of the theoretical decomposition of PXAI methods to provide the research community with an easy-to-contribute-to framework. It offers a standardized API to ensure that contributions are not only straightforward but also adhere to a well-defined structure. Integrating a new method into the framework involves decomposing it into four components: Explorer, Perturbator, Attributor, and Aggregator.
Each one represents a distinct step in the explanation process; each step corresponds to an independent module in our API, and as such can be easily swapped and re-used as building blocks for other methods.
As depicted in the figure below, once the modules are implemented or selected from the library, Muppet seamlessly manages the internal communication between components to create a ready-to-use explainer. This modular approach enhances component reusability, allowing for experimental analysis of their behavior. It also enables the instantiation of different but compatible modules to yield new PXAI methods and variants.
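The four-component decomposition above can be sketched in plain Python. The component names (Explorer, Perturbator, Attributor, Aggregator) come from the text, but the class and method names below are illustrative assumptions, not Muppet's actual API:

```python
import numpy as np

class ZeroMaskPerturbator:
    """Hypothetical Perturbator: zeroes out one feature at a time."""
    def perturb(self, x, idx):
        x_pert = x.copy()
        x_pert[idx] = 0.0
        return x_pert

class OcclusionExplainer:
    """Hypothetical explainer composing the four steps (occlusion-style)."""
    def __init__(self, model, perturbator):
        self.model = model
        self.perturbator = perturbator

    def explain(self, x):
        base = self.model(x)
        attributions = []
        # Explorer: enumerate every feature position.
        for idx in range(len(x)):
            # Perturbator: generate a perturbed sample.
            x_pert = self.perturbator.perturb(x, idx)
            # Attributor: attribute the prediction change to the feature.
            attributions.append(base - self.model(x_pert))
        # Aggregator: here a trivial pass-through to per-feature importances.
        return np.array(attributions)

# Toy linear model: prediction is a weighted sum of the features.
weights = np.array([3.0, 0.0, 1.0])
model = lambda x: float(weights @ x)

explainer = OcclusionExplainer(model, ZeroMaskPerturbator())
importance = explainer.explain(np.array([1.0, 1.0, 1.0]))
# For this linear model on an all-ones input, the occlusion attributions
# recover the model weights.
print(importance)
```

Swapping in a different Perturbator (e.g., mean imputation instead of zero masking) or a different Aggregator yields a new method variant without touching the rest of the pipeline, which is the reuse the modular design aims for.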
-
+
Moreover, we acknowledge the crucial role of benchmarking XAI methods and the research community's demand for such capabilities. At the time of writing, Muppet offers a first set of functionalities to evaluate XAI methods, such as faithfulness and robustness metrics; a full benchmarking toolkit for evaluating XAI methods on state-of-the-art models and datasets is under active development.
@@ -200,7 +207,7 @@ For support please write to :
# Contributions
-Yes! We welcome all kinds of contributions. Check out our guide to contributing to [Muppet-XAI](docs/dev_documentation.md).
+Yes! We welcome all kinds of contributions. Check out our guide to contributing to [Muppet-XAI](https://euranova.github.io/muppet/dev_documentation/).
# 📝 Citations
If you find MUPPET-XAI useful in your research or projects, please cite our work: