index.json
1 lines (1 loc) · 43.3 KB
[{"authors":null,"categories":null,"content":"I am a machine learning researcher and received my PhD in 2022 from the Institute of Mathematics of the Technische Universität Berlin. My research focuses on topics in applied and computational mathematics and scientific machine learning with a specialization in problems related to computer vision. I am particularly interested in studying the accuracy and robustness of deep learning methods for inverse problems and the interpretability of deep learning classifier decisions.\n","date":1672531200,"expirydate":-62135596800,"kind":"term","lang":"en","lastmod":1672531200,"objectID":"2525497d367e79493fd32b198b28f040","permalink":"","publishdate":"0001-01-01T00:00:00Z","relpermalink":"","section":"authors","summary":"I am a machine learning researcher and received my PhD in 2022 from the Institute of Mathematics of the Technische Universität Berlin. My research focuses on topics in applied and computational mathematics and scientific machine learning with a specialization in problems related to computer vision.","tags":null,"title":"Jan Macdonald","type":"authors"},{"authors":["Martin Genzel","Jan Macdonald","Maximilian März"],"categories":null,"content":"","date":1672531200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1672531200,"objectID":"964bcae7f2fa592a1859f20b799964a3","permalink":"https://jmaces.github.io/publication/genzel-solving-2023/","publishdate":"2023-01-23T17:38:07+01:00","relpermalink":"/publication/genzel-solving-2023/","section":"publication","summary":"In the past five years, deep learning methods have become state-of-the-art in solving various inverse problems. Before such approaches can find application in safety-critical fields, a verification of their reliability appears mandatory. Recent works have pointed out instabilities of deep neural networks for several image reconstruction tasks. 
In analogy to adversarial attacks in classification, it was shown that slight distortions in the input domain may cause severe artifacts. The present article sheds new light on this concern, by conducting an extensive study of the robustness of deep-learning-based algorithms for solving underdetermined inverse problems. This covers compressed sensing with Gaussian measurements as well as image recovery from Fourier and Radon measurements, including a real-world scenario for magnetic resonance imaging (using the NYU-fastMRI dataset). Our main focus is on computing adversarial perturbations of the measurements that maximize the reconstruction error. A distinctive feature of our approach is the quantitative and qualitative comparison with total-variation minimization, which serves as a provably robust reference method. In contrast to previous findings, our results reveal that standard end-to-end network architectures are not only resilient against statistical noise, but also against adversarial perturbations. 
All considered networks are trained by common deep learning techniques, without sophisticated defense strategies.","tags":["Deep Neural Networks","MRI","Compressed Sensing","l1-Regularization","Iterative Reconstruction Algorithm","CT Reconstruction","Adversarial Examples","Inverse Problems"],"title":"Solving Inverse Problems With Deep Neural Networks - Robustness Included?","type":"publication"},{"authors":["Theophil Trippe","Martin Genzel","Jan Macdonald","Maximilian März"],"categories":null,"content":"","date":1668729600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1668729600,"objectID":"47116903ad0d99c998d2abdf239e089c","permalink":"https://jmaces.github.io/publication/trippe-lets-enhance-2022/","publishdate":"2023-01-22T21:11:34.703609Z","relpermalink":"/publication/trippe-lets-enhance-2022/","section":"publication","summary":"This work presents a novel deep-learning-based pipeline for the inverse problem of image deblurring, leveraging augmentation and pre-training with synthetic data. Our results build on our winning submission to the recent Helsinki Deblur Challenge 2021, whose goal was to explore the limits of state-of-the-art deblurring algorithms in a real-world data setting. The task of the challenge was to deblur out-of-focus images of random text, thereby in a downstream task, maximizing an optical-character-recognition-based score function. A key step of our solution is the data-driven estimation of the physical forward model describing the blur process. This enables a stream of synthetic data, generating pairs of ground-truth and blurry images on-the-fly, which is used for an extensive augmentation of the small amount of challenge data provided. The actual deblurring pipeline consists of an approximate inversion of the radial lens distortion (determined by the estimated forward model) and a U-Net architecture, which is trained end-to-end. 
Our algorithm was the only one passing the hardest challenge level, achieving over 70% character recognition accuracy. Our findings are well in line with the paradigm of data-centric machine learning, and we demonstrate its effectiveness in the context of inverse problems. Apart from a detailed presentation of our methodology, we also analyze the importance of several design choices in a series of ablation studies. The code of our challenge submission is available under https://github.com/theophil-trippe/HDC_TUBerlin_version_1.","tags":["Deep Neural Networks","Image Deblurring","Inverse Problems"],"title":"Let's Enhance: A Deep Learning Approach to Extreme Deblurring of Text Images","type":"publication"},{"authors":["Jan Macdonald"],"categories":null,"content":"","date":1665619200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1665619200,"objectID":"e26a9faa852caed624628a9055217cca","permalink":"https://jmaces.github.io/publication/macdonald-thesis-2022/","publishdate":"2023-01-07T15:49:22.532263Z","relpermalink":"/publication/macdonald-thesis-2022/","section":"publication","summary":"This thesis investigates several aspects of using data-driven methods for image and signal processing tasks, particularly those aspects related to the reliability of approaches based on deep learning. It is organized in two parts. The first part studies the interpretability of predictions made by neural network classifiers. A key component for achieving interpretable classifications is the identification of relevant input features for the predictions. While several heuristic approaches towards this goal have been proposed, there is yet no generally agreed-upon definition of relevance. Instead, these heuristics typically rely on individual (often not explicitly stated) notions of interpretability, making comparisons of results difficult. 
The contribution of the first part of this thesis is the introduction of an explicit definition of relevance of input features for a classifier prediction and an analysis thereof. The formulation is based on a rate-distortion trade-off and derived from the observation and identification of common questions that practitioners would like to answer with relevance attribution methods. It turns out that answering these questions is extremely challenging: A computational complexity analysis reveals the hardness of determining the most relevant input features (even approximately) for Boolean classifiers as well as for neural network classifiers. This hardness in principle justifies the adoption of heuristic strategies and the explicit rate-distortion formulation inspires a novel approach that specifically aims at answering the identified questions of interest. Furthermore, it allows for a quantitative evaluation of relevance attribution methods, revealing that the newly proposed heuristic performs best in identifying the relevant input features compared to previous methods. The second part studies the accuracy and robustness of deep learning methods for the reconstruction of signals from undersampled indirect measurements. Such inverse problems arise for example in medical imaging, geophysics, communication, or astronomy. While widely used classical variational solution methods come with reconstruction guarantees (under suitable assumptions), the underlying mechanisms of data-driven methods are mostly not well understood from a mathematical perspective. Nevertheless, they show promising results and frequently empirically outperform classical methods in terms of reconstruction quality and speed. However, several doubts remain regarding their reliability, in particular questions concerning their robustness to perturbations. 
Indeed, for classification tasks it is well known that neural networks are vulnerable to adversarial perturbations, i.e., tiny modifications that are visually imperceptible but mislead the neural network to make a wrong prediction. This raises the question if similar effects also occur in the context of signal recovery. The contribution of the second part of this thesis is an extensive numerical study of the robustness of a representative selection of end-to-end neural networks for solving inverse problems. It is demonstrated that for such regression problems (in contrast to classification) neural networks can be remarkably robust to adversarial and statistical perturbations. Furthermore, they show state-of-the-art performance resulting in highly accurate reconstructions: In the idealistic scenario of synthetic and perturbation-free data neural networks have the potential to achieve near-perfect reconstructions, i.e., their reconstruction error is close to numerical precision.","tags":["Deep Neural Networks","Explainable Neural Networks","Inverse Problems","Adversarial Examples","MRI","CT Reconstruction"],"title":"The Reliability of Deep Learning for Signal and Image Processing: Interpretability, Robustness, and Accuracy","type":"publication"},{"authors":["Jan Macdonald","Mathieu Besançon","Sebastian Pokutta"],"categories":null,"content":"","date":1658534400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1658534400,"objectID":"e809a4517e05e569e0a333676f44e7fc","permalink":"https://jmaces.github.io/publication/macdonald-interpretable-2022/","publishdate":"2023-01-23T17:00:03.310664Z","relpermalink":"/publication/macdonald-interpretable-2022/","section":"publication","summary":"We study the effects of constrained optimization formulations and Frank-Wolfe algorithms for obtaining interpretable neural network predictions. 
Reformulating the Rate-Distortion Explanations (RDE) method for relevance attribution as a constrained optimization problem provides precise control over the sparsity of relevance maps. This enables a novel multi-rate as well as a relevance-ordering variant of RDE that both empirically outperform standard RDE in a well-established comparison test. We showcase several deterministic and stochastic variants of the Frank-Wolfe algorithm and their effectiveness for RDE.","tags":["Deep Neural Networks","Explainable Neural Networks"],"title":"Interpretable Neural Networks with Frank-Wolfe: Sparse Relevance Maps and Relevance Orderings","type":"publication"},{"authors":["Martin Genzel","Ingo Gühring","Jan Macdonald","Maximilian März"],"categories":null,"content":"","date":1658534400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1658534400,"objectID":"96bc3e9b6b99f56eefcac9c71611699a","permalink":"https://jmaces.github.io/publication/genzel-near-exact-2022/","publishdate":"2023-01-23T11:00:08.896161Z","relpermalink":"/publication/genzel-near-exact-2022/","section":"publication","summary":"This work is concerned with the following fundamental question in scientific machine learning: Can deep-learning-based methods solve noise-free inverse problems to near-perfect accuracy? Positive evidence is provided for the first time, focusing on a prototypical computed tomography (CT) setup. We demonstrate that an iterative end-to-end network scheme enables reconstructions close to numerical precision, comparable to classical compressed sensing strategies. Our results build on our winning submission to the recent AAPM DL-Sparse-View CT Challenge. Its goal was to identify the state-of-the-art in solving the sparse-view CT inverse problem with data-driven techniques. A specific difficulty of the challenge setup was that the precise forward model remained unknown to the participants. 
Therefore, a key feature of our approach was to initially estimate the unknown fanbeam geometry in a data-driven calibration step. Apart from an in-depth analysis of our methodology, we also demonstrate its state-of-the-art performance on the open-access real-world dataset LoDoPaB CT.","tags":["Deep Neural Networks","CT Reconstruction","Inverse Problems"],"title":"Near-Exact Recovery for Tomographic Inverse Problems via Deep Learning","type":"publication"},{"authors":["Martin Genzel","Ingo Gühring","Jan Macdonald","Maximilian März"],"categories":null,"content":"","date":1658433600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1658433600,"objectID":"d730f44a5d6839d7b0223d1f0250fc1b","permalink":"https://jmaces.github.io/talk/near-exact-recovery-for-tomographic-inverse-problems-via-deep-learning/","publishdate":"2022-07-03T13:06:54+02:00","relpermalink":"/talk/near-exact-recovery-for-tomographic-inverse-problems-via-deep-learning/","section":"event","summary":"**Long Oral \u0026 Poster:** Presentation of results from our [paper](/publication/genzel-near-exact-2022).","tags":["Deep Neural Networks","Inverse Problems","CT Reconstruction"],"title":"Near-Exact Recovery for Tomographic Inverse Problems via Deep Learning","type":"event"},{"authors":["Jan Macdonald","Mathieu Besançon","Sebastian Pokutta"],"categories":null,"content":"","date":1658327400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1658327400,"objectID":"473787ba5f2908845e8655be1d1607e1","permalink":"https://jmaces.github.io/talk/interpretable-neural-networks-with-frank-wolfe-sparse-relevance-maps-and-relevance-orderings/","publishdate":"2022-07-03T13:07:01+02:00","relpermalink":"/talk/interpretable-neural-networks-with-frank-wolfe-sparse-relevance-maps-and-relevance-orderings/","section":"event","summary":"**Spotlight \u0026 Poster:** Presentation of results from our [paper](/publication/macdonald-interpretable-2022).","tags":["Deep Neural Networks","Explainable Neural 
Networks"],"title":"Interpretable Neural Networks with Frank-Wolfe: Sparse Relevance Maps and Relevance Orderings","type":"event"},{"authors":["Martin Genzel","Jan Macdonald","Maximilian März"],"categories":null,"content":"","date":1653551400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1653551400,"objectID":"08a633cdc417b50c5df4e851e5c5f91a","permalink":"https://jmaces.github.io/talk/solving-inverse-problems-with-deep-neural-networks-robustness-included/","publishdate":"2022-04-10T13:26:58+02:00","relpermalink":"/talk/solving-inverse-problems-with-deep-neural-networks-robustness-included/","section":"event","summary":"**Invited Talk:** Presentation of results from our [paper](/publication/genzel-solving-2023).","tags":["Deep Neural Networks","MRI","Compressed Sensing","l1-Regularization","Iterative Reconstruction Algorithm","CT Reconstruction","Adversarial Examples","Inverse Problems"],"title":"Solving Inverse Problems With Deep Neural Networks - Robustness Included?","type":"event"},{"authors":["Jan Macdonald","Stephan Wäldchen"],"categories":null,"content":"","date":1651363200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1651363200,"objectID":"0fe132833571785e7047a4411ddc8101","permalink":"https://jmaces.github.io/publication/macdonald-complete-2022/","publishdate":"2022-05-14T09:29:40.555887Z","relpermalink":"/publication/macdonald-complete-2022/","section":"publication","summary":"We give a complete characterisation of families of probability distributions that are invariant under the action of ReLU neural network layers (in the same way that the family of Gaussian distributions is invariant to affine linear transformations). The need for such families arises during the training of Bayesian networks or the analysis of trained neural networks, e.g., in the context of uncertainty quantification (UQ) or explainable artificial intelligence (XAI). 
We prove that no invariant parametrised family of distributions can exist unless at least one of the following three restrictions holds: First, the network layers have a width of one, which is unreasonable for practical neural networks. Second, the probability measures in the family have finite support, which basically amounts to sampling distributions. Third, the parametrisation of the family is not locally Lipschitz continuous, which excludes all computationally feasible families. Finally, we show that these restrictions are individually necessary. For each of the three cases we can construct an invariant family exploiting exactly one of the restrictions but not the other two.","tags":["Deep Neural Networks","Probability Distributions"],"title":"A Complete Characterisation of ReLU-Invariant Distributions","type":"publication"},{"authors":["Jan Macdonald","Stephan Wäldchen"],"categories":null,"content":"","date":1648629900,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1648629900,"objectID":"822a272c66c631ef8e3163970e9bc847","permalink":"https://jmaces.github.io/talk/a-complete-characterisation-of-relu-invariant-distributions/","publishdate":"2022-04-10T13:26:34+02:00","relpermalink":"/talk/a-complete-characterisation-of-relu-invariant-distributions/","section":"event","summary":"**Contributed Talk \u0026 Poster:** Presentation of results from our [paper](/publication/macdonald-complete-2022).","tags":["Deep Neural Networks","Probability Distributions"],"title":"A Complete Characterisation of ReLU-Invariant Distributions","type":"event"},{"authors":["Theophil Trippe","Martin Genzel","Jan Macdonald","Maximilian 
März"],"categories":null,"content":"","date":1639554600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1639554600,"objectID":"70ecf07c71ff312372e91bd4ff366cd1","permalink":"https://jmaces.github.io/talk/learning-to-invert-defocus-blur-a-data-driven-approach-to-the-helsinki-deblur-challenge/","publishdate":"2022-01-07T17:37:11+01:00","relpermalink":"/talk/learning-to-invert-defocus-blur-a-data-driven-approach-to-the-helsinki-deblur-challenge/","section":"event","summary":"**Invited Talk:** Presentation of our 1st place winning submission to the Helsinki Deblur Challenge.","tags":["Deep Learning","Deep Neural Networks","Inverse Problems","Challenge Submission","Defocus Deblurring"],"title":"Learning to Invert Defocus Blur: A Data-Driven Approach to the Helsinki Deblur Challenge","type":"event"},{"authors":["Martin Genzel","Ingo Gühring","Jan Macdonald","Maximilian März"],"categories":null,"content":"","date":1639428300,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1639428300,"objectID":"bf82600ea77206fd2fd638757028e180","permalink":"https://jmaces.github.io/talk/near-exact-recovery-for-sparse-view-ct-via-data-driven-methods/","publishdate":"2022-01-07T17:20:47+01:00","relpermalink":"/talk/near-exact-recovery-for-sparse-view-ct-via-data-driven-methods/","section":"event","summary":"**Poster:** Presentation of a detailed [analysis](/publication/genzel-near-exact-2021) of our 1st place winning AAPM DL-Sparse-View CT challenge [submission](/publication/genzel-aapm-2021).","tags":["Deep Learning","Deep Neural Networks","Inverse Problems","Challenge Submission","CT Reconstruction"],"title":"Near-Exact Recovery for Sparse-View CT via Data-Driven Methods","type":"event"},{"authors":["Martin Genzel","Ingo Gühring","Jan Macdonald","Maximilian 
März"],"categories":null,"content":"","date":1633046400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1633046400,"objectID":"19c2f8cf4bb07f3e305486ff5688aa94","permalink":"https://jmaces.github.io/publication/genzel-near-exact-2021/","publishdate":"2022-01-07T16:10:35.55163Z","relpermalink":"/publication/genzel-near-exact-2021/","section":"publication","summary":"This work presents an empirical study on the design and training of iterative neural networks for image reconstruction from tomographic measurements with unknown geometry. It is based on insights gained during our participation in the recent AAPM DL-Sparse-View CT challenge and a further analysis of our winning submission (team name: robust-and-stable) subsequent to the competition period. The goal of the challenge was to identify the state of the art in sparse-view CT with data-driven techniques, thereby addressing a fundamental research question: Can neural-network-based solvers produce near-perfect reconstructions for noise-free data? We answer this in the affirmative by demonstrating that an iterative end-to-end scheme enables the computation of near-perfect solutions on the test set. 
Remarkably, the fanbeam geometry of the used forward model is completely inferred through a data-driven geometric calibration step.","tags":["Deep Neural Networks","CT Reconstruction","Inverse Problems","Challenge Report"],"title":"Near-Exact Recovery for Sparse-View CT via Data-Driven Methods","type":"publication"},{"authors":["Luis Oala","Cosmas Heiß","Jan Macdonald","Maximilian März","Gitta Kutyniok","Wojciech Samek"],"categories":null,"content":"","date":1630454400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1630454400,"objectID":"e9a4957788b5ae32234a585c0d7488d4","permalink":"https://jmaces.github.io/publication/oala-interval-2021/","publishdate":"2021-09-19T14:53:28.745387Z","relpermalink":"/publication/oala-interval-2021/","section":"publication","summary":"**Purpose** The quantitative detection of failure modes is important for making deep neural networks reliable and usable at scale. We consider three examples for common failure modes in image reconstruction and demonstrate the potential of uncertainty quantification as a fine-grained alarm system. \n\n**Methods** We propose a deterministic, modular and lightweight approach called Interval Neural Network (INN) that produces fast and easy to interpret uncertainty scores for deep neural networks. Importantly, INNs can be constructed post hoc for already trained prediction networks. We compare it against state-of-the-art baseline methods (MCDROP, PROBOUT). \n\n**Results** We demonstrate on controlled, synthetic inverse problems the capacity of INNs to capture uncertainty due to noise as well as directional error information. On a real-world inverse problem with human CT scans, we can show that INNs produce uncertainty scores which improve the detection of all considered failure modes compared to the baseline methods. \n\n**Conclusion** Interval Neural Networks offer a promising tool to expose weaknesses of deep image reconstruction models and ultimately make them more reliable. 
The fact that they can be applied post hoc to equip already trained deep neural network models with uncertainty scores makes them particularly interesting for deployment.","tags":["Deep Neural Networks","CT Reconstruction","Uncertainty Quantification","Adversarial Examples","Inverse Problems"],"title":"Detecting Failure Modes in Image Reconstructions with Interval Neural Network Uncertainty","type":"publication"},{"authors":["Martin Genzel","Jan Macdonald","Maximilian März"],"categories":null,"content":"","date":1627482600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1627482600,"objectID":"02f685ec14e968e8ccdbd30394a3ec2b","permalink":"https://jmaces.github.io/talk/aapm-dl-sparse-view-ct-challenge-submission-report-designing-an-iterative-network-for-fanbeam-ct-with-unknown-geometry/","publishdate":"2021-07-21T21:19:03+02:00","relpermalink":"/talk/aapm-dl-sparse-view-ct-challenge-submission-report-designing-an-iterative-network-for-fanbeam-ct-with-unknown-geometry/","section":"event","summary":"**Invited Talk:** Presentation of our 1st place winning AAPM DL-Sparse-View CT challenge [submission](/publication/genzel-aapm-2021).","tags":["Deep Learning","Deep Neural Networks","Inverse Problems","Challenge Submission","CT Reconstruction"],"title":"AAPM DL-Sparse-View CT Challenge Submission Report: Designing an Iterative Network for Fanbeam-CT with Unknown Geometry","type":"event"},{"authors":["Martin Genzel","Jan Macdonald","Maximilian März"],"categories":null,"content":"","date":1622505600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1622505600,"objectID":"35a2bf54d72bca6a9427f6dc761d975a","permalink":"https://jmaces.github.io/publication/genzel-aapm-2021/","publishdate":"2021-06-04T10:16:25.200837Z","relpermalink":"/publication/genzel-aapm-2021/","section":"publication","summary":"This report is dedicated to a short motivation and description of our contribution to the AAPM DL-Sparse-View CT Challenge (team name: 
\"robust-and-stable\"). The task is to recover breast model phantom images from limited view fanbeam measurements using data-driven reconstruction techniques. The challenge is distinctive in the sense that participants are provided with a collection of ground truth images and their noiseless, subsampled sinograms (as well as the associated limited view filtered backprojection images), but not with the actual forward model. Therefore, our approach first estimates the fanbeam geometry in a data-driven geometric calibration step. In a subsequent two-step procedure, we design an iterative end-to-end network that enables the computation of near-exact solutions.","tags":["Deep Neural Networks","CT Reconstruction","Inverse Problems","Challenge Report"],"title":"AAPM DL-Sparse-View CT Challenge Submission Report: Designing an Iterative Network for Fanbeam-CT with Unknown Geometry","type":"publication"},{"authors":["Jan Macdonald","Luis Oala","Maximilian März","Wojciech Samek"],"categories":null,"content":"","date":1615293900,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1615293900,"objectID":"23035d6df1963c2fc574590c0addca08","permalink":"https://jmaces.github.io/talk/interval-neural-networks-as-instability-detectors-for-image-reconstructions/","publishdate":"2021-02-01T18:05:33+01:00","relpermalink":"/talk/interval-neural-networks-as-instability-detectors-for-image-reconstructions/","section":"event","summary":"**Contributed Talk:** Presentation of results from our [paper](/publication/macdonald-interval-2021).","tags":["Deep Learning","Deep Neural Networks","Inverse Problems","Uncertainty Quantification","Adversarial Perturbations"],"title":"Interval Neural Networks as Instability Detectors for Image Reconstructions","type":"event"},{"authors":["Jan Macdonald","Maximilian März","Luis Oala","Wojciech 
Samek"],"categories":null,"content":"","date":1612137600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1612137600,"objectID":"97795915887dcd78169e1f43e4a7000d","permalink":"https://jmaces.github.io/publication/macdonald-interval-2021/","publishdate":"2021-05-18T19:52:03.364909Z","relpermalink":"/publication/macdonald-interval-2021/","section":"publication","summary":"This work investigates the detection of instabilities that may occur when utilizing deep learning models for image reconstruction tasks. Although neural networks often empirically outperform traditional reconstruction methods, their usage for sensitive medical applications remains controversial. Indeed, in a recent series of works, it has been demonstrated that deep learning approaches are susceptible to various types of instabilities, caused for instance by adversarial noise or out-of-distribution features. It is argued that this phenomenon can be observed regardless of the underlying architecture and that there is no easy remedy. Based on this insight, the present work demonstrates, how uncertainty quantification methods can be employed as instability detectors. In particular, it is shown that the recently proposed Interval Neural Networks are highly effective in revealing instabilities of reconstructions. 
Such an ability is crucial to ensure a safe use of deep learning-based methods for medical image reconstruction.","tags":["Deep Neural Networks","CT Reconstruction","Uncertainty Quantification","Adversarial Examples","Inverse Problems"],"title":"Interval Neural Networks as Instability Detectors for Image Reconstructions","type":"publication"},{"authors":["Stephan Wäldchen","Jan Macdonald","Sascha Hauch","Gitta Kutyniok"],"categories":null,"content":"","date":1609459200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1609459200,"objectID":"0dd6496c5e0433d2f1fbda0af1ffd04a","permalink":"https://jmaces.github.io/publication/waldchen-computational-2021/","publishdate":"2021-02-01T21:41:42.910644Z","relpermalink":"/publication/waldchen-computational-2021/","section":"publication","summary":"For a $d$-ary Boolean function $\\Phi\\colon\\\\{0,1\\\\}^d\\to\\\\{0,1\\\\}$ and an assignment to its variables $\\mathbf{x}=(x\\_1, x\\_2, \\dots, x\\_d)$ we consider the problem of finding those subsets of the variables that are sufficient to determine the function value with a given probability $\\delta$. This is motivated by the task of interpreting predictions of binary classifiers described as Boolean circuits, which can be seen as special cases of neural networks. We show that the problem of deciding whether such subsets of relevant variables of limited size $k \\leq d$ exist is complete for the complexity class $\\mathsf{NP}^\\mathsf{PP}$ and thus, generally, unfeasible to solve. We then introduce a variant, in which it suffices to check whether a subset determines the function value with probability at least $\\delta$ or at most $\\delta-\\gamma$ for $0\u003c\\gamma\u003c\\delta$. This promise of a probability gap reduces the complexity to the class $\\mathsf{NP}^\\mathsf{BPP}$. Finally, we show that finding the minimal set of relevant variables cannot be reasonably approximated, i.e. 
with an approximation factor $d^{1−\\alpha}$ for $\\alpha \u003e 0$, by a polynomial time algorithm unless $\\mathsf{P}=\\mathsf{NP}$. This holds even with the promise of a probability gap.","tags":["Complexity Theory","Explainable Neural Networks"],"title":"The Computational Complexity of Understanding Binary Classifier Decisions","type":"publication"},{"authors":["Jan Macdonald","Stephan Wäldchen","Sascha Hauch","Gitta Kutyniok"],"categories":null,"content":"","date":1594944e3,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1594944e3,"objectID":"8339032ad54fe96f86a9621a429b5d83","permalink":"https://jmaces.github.io/talk/explaining-neural-network-decisions-is-hard/","publishdate":"2020-08-17T22:02:01+02:00","relpermalink":"/talk/explaining-neural-network-decisions-is-hard/","section":"event","summary":"**Poster:** Presentation of results based on our papers [A Rate-Distortion Framework for Explaining Neural Network Decisions](/publication/macdonald-rate-distortion-2019) and [The Computational Complexity of Understanding Network Decisions](/publication/waldchen-computational-2021).","tags":["Deep Learning","Deep Neural Networks","Explainable Neural Networks"],"title":"Explaining Neural Network Decisions Is Hard","type":"event"},{"authors":["Jan Macdonald","Stephan Wäldchen","Sascha Hauch","Gitta Kutyniok"],"categories":null,"content":"","date":1593561600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1593561600,"objectID":"b1984c047c90720956f73fbfe433c66a","permalink":"https://jmaces.github.io/publication/macdonald-explaining-2020/","publishdate":"2022-01-07T15:49:22.532263Z","relpermalink":"/publication/macdonald-explaining-2020/","section":"publication","summary":"We connect the widespread idea of interpreting classifier decisions to probabilistic prime implicants. A set of input features is deemed relevant for a classification decision if the classifier score remains nearly constant when randomising the remaining features. 
This introduces a rate-distortion trade-off between the set size and the deviation of the score. We explain how relevance maps can be interpreted as a greedy strategy to calculate the rate-distortion function. For neural networks we show that approximating this function even in a single point up to any non-trivial approximation factor is NP-hard. Thus, no algorithm will provably find small relevant sets of input features even if they exist. Finally, as a numerical comparison we express a Boolean function, for which the prime implicant sets are known, as a neural network and investigate which relevance mapping methods are able to highlight them.","tags":["Deep Neural Networks","Explainable Neural Networks"],"title":"Explaining Neural Network Decisions Is Hard","type":"publication"},{"authors":["Jan Macdonald","Stephan Wäldchen","Sascha Hauch","Gitta Kutyniok"],"categories":null,"content":"","date":1571875200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1571875200,"objectID":"258daa4655b0fdd4d3076fd3d0d2fd4a","permalink":"https://jmaces.github.io/talk/a-rate-distortion-framework-for-explaining-neural-network-decisions/","publishdate":"2020-08-17T22:00:58+02:00","relpermalink":"/talk/a-rate-distortion-framework-for-explaining-neural-network-decisions/","section":"event","summary":"**Poster:** Presentation of results from our [paper](/publication/macdonald-rate-distortion-2019).","tags":["Deep Learning","Deep Neural Networks","Explainable Neural Networks"],"title":"A Rate-Distortion Framework for Explaining Neural Network Decisions","type":"event"},{"authors":["Jan Macdonald","Stephan Wäldchen","Sascha Hauch","Gitta 
Kutyniok"],"categories":null,"content":"","date":1567987200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1567987200,"objectID":"a00761dfdb315af664432ef9e3ef8038","permalink":"https://jmaces.github.io/talk/a-rate-distortion-framework-for-explaining-neural-network-decisions/","publishdate":"2020-08-17T21:59:48+02:00","relpermalink":"/talk/a-rate-distortion-framework-for-explaining-neural-network-decisions/","section":"event","summary":"**Poster:** Presentation of results from our [paper](/publication/macdonald-rate-distortion-2019).","tags":["Deep Learning","Deep Neural Networks","Explainable Neural Networks"],"title":"A Rate-Distortion Framework for Explaining Neural Network Decisions","type":"event"},{"authors":["Jan Macdonald","Stephan Wäldchen","Sascha Hauch","Gitta Kutyniok"],"categories":null,"content":"","date":1563384600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1563384600,"objectID":"8c7d7bb249881ccef61611070960432c","permalink":"https://jmaces.github.io/talk/a-rate-distortion-framework-for-explaining-deep-neural-network-decisions/","publishdate":"2020-08-17T21:59:26+02:00","relpermalink":"/talk/a-rate-distortion-framework-for-explaining-deep-neural-network-decisions/","section":"event","summary":"**Invited Talk:** Presentation of results from our [paper](/publication/macdonald-rate-distortion-2019).","tags":["Deep Learning","Deep Neural Networks","Explainable Neural Networks"],"title":"A Rate-Distortion Framework for Explaining Deep Neural Network Decisions","type":"event"},{"authors":["Jan Macdonald","Stephan Wäldchen","Sascha Hauch","Gitta 
Kutyniok"],"categories":null,"content":"","date":1562112000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1562112000,"objectID":"a8f474332e6d696b867792c6283d0640","permalink":"https://jmaces.github.io/talk/a-rate-distortion-framework-for-explaining-neural-network-decisions/","publishdate":"2020-08-17T21:59:21+02:00","relpermalink":"/talk/a-rate-distortion-framework-for-explaining-neural-network-decisions/","section":"event","summary":"**Poster:** Presentation of results from our [paper](/publication/macdonald-rate-distortion-2019).","tags":["Deep Learning","Deep Neural Networks","Explainable Neural Networks"],"title":"A Rate-Distortion Framework for Explaining Neural Network Decisions","type":"event"},{"authors":["Jan Macdonald","Stephan Wäldchen","Sascha Hauch","Gitta Kutyniok"],"categories":null,"content":"","date":1556668800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1556668800,"objectID":"ef22235b1128580709e65d4b52800523","permalink":"https://jmaces.github.io/publication/macdonald-rate-distortion-2019/","publishdate":"2020-08-13T21:11:34.703609Z","relpermalink":"/publication/macdonald-rate-distortion-2019/","section":"publication","summary":"We formalise the widespread idea of interpreting neural network decisions as an explicit optimisation problem in a rate-distortion framework. A set of input features is deemed relevant for a classification decision if the expected classifier score remains nearly constant when randomising the remaining features. We discuss the computational complexity of finding small sets of relevant features and show that the problem is complete for $\\mathsf{NP}^\\mathsf{PP}$, an important class of computational problems frequently arising in AI tasks. Furthermore, we show that it even remains $\\mathsf{NP}$-hard to only approximate the optimal solution to within any non-trivial approximation factor. 
Finally, we consider a continuous problem relaxation and develop a heuristic solution strategy based on assumed density filtering for deep ReLU neural networks. We present numerical experiments for two image classification data sets, where we outperform established methods, in particular for sparse explanations of neural network decisions.","tags":["Deep Neural Networks","Explainable Neural Networks"],"title":"A Rate-Distortion Framework for Explaining Neural Network Decisions","type":"publication"},{"authors":["Jan Macdonald","Stephan Wäldchen","Sascha Hauch","Gitta Kutyniok"],"categories":null,"content":"","date":1554206400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1554206400,"objectID":"e44fa3531f86623b767789e2df974e1c","permalink":"https://jmaces.github.io/talk/a-rate-distortion-framework-for-explaining-deep-neural-network-decisions/","publishdate":"2020-08-17T21:59:16+02:00","relpermalink":"/talk/a-rate-distortion-framework-for-explaining-deep-neural-network-decisions/","section":"event","summary":"**Contributed Talk:** Presentation of results from our [paper](/publication/macdonald-rate-distortion-2019).","tags":["Deep Learning","Deep Neural Networks","Explainable Neural Networks"],"title":"A Rate-Distortion Framework for Explaining Deep Neural Network Decisions","type":"event"},{"authors":["Dominik Alfke","Weston Baines","Jan Blechschmidt","Mauricio J. 
del Razo Sarmina","Amnon Drory","Dennis Elbrächter","Nando Farchmin","Matteo Gambara","Silke Glas","Philipp Grohs","Peter Hinz","Danijel Kivaranovic","Christian Kümmerle","Gitta Kutyniok","Sebastian Lunz","Jan Macdonald","Ryan Malthaner","Gregory Naisat","Ariel Neufeld","Philipp Christian Petersen","Rafael Reisenhofer","Jun-Da Sheng","Laura Thesing","Philipp Trunschke","Johannes von Lindheim","David Weber","Melanie Weber"],"categories":null,"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1546300800,"objectID":"56abb997a8f45e80f6d02e7bd7319be9","permalink":"https://jmaces.github.io/publication/alfke-oracle-2019/","publishdate":"2020-08-13T21:11:34.710235Z","relpermalink":"/publication/alfke-oracle-2019/","section":"publication","summary":"We present a novel technique based on deep learning and set theory which yields exceptional classification and prediction results. Having access to a sufficiently large amount of labeled training data, our methodology is capable of predicting the labels of the test data almost always even if the training data is entirely unrelated to the test data. 
In other words, we prove in a specific setting that as long as one has access to enough data points, the quality of the data is irrelevant.","tags":["Deep Neural Networks"],"title":"The Oracle of DLphi","type":"publication"},{"authors":["Jan Macdonald","Rafael Reisenhofer"],"categories":null,"content":"","date":1539648000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1539648000,"objectID":"2e936cdb92816f13f703679c5622ec1a","permalink":"https://jmaces.github.io/talk/practical-session-on-approximations-with-deep-neural-networks/","publishdate":"2020-08-17T21:58:45+02:00","relpermalink":"/talk/practical-session-on-approximations-with-deep-neural-networks/","section":"event","summary":"**Practical Session:** Introduction to TensorFlow and hands-on tutorial on approximating smooth functions with neural networks *(joint with Rafael Reisenhofer)*.","tags":["Deep Learning","Deep Neural Networks","Approximation Theory"],"title":"Practical Session on Approximations with (Deep) Neural Networks","type":"event"},{"authors":["Jan Macdonald"],"categories":null,"content":"","date":1517648400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1517648400,"objectID":"61b3034f92aa4b469cbd1cc9058583eb","permalink":"https://jmaces.github.io/talk/image-classification-using-wavelet-und-shearlet-based-scattering-transforms/","publishdate":"2018-02-03T09:00:00Z","relpermalink":"/talk/image-classification-using-wavelet-und-shearlet-based-scattering-transforms/","section":"event","summary":"**Contributed Talk:** An analysis of the generalization error for multi-class multinomial logistic regression classifiers.","tags":["Statistical Learning","Image Classification"],"title":"Image Classification Using Wavelet and Shearlet Based Scattering Transforms","type":"event"},{"authors":["Jan Macdonald","Lars 
Ruthotto"],"categories":null,"content":"","date":1517443200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1517443200,"objectID":"aaf3a22349bc7eb0793de016e6679b2e","permalink":"https://jmaces.github.io/publication/macdonald-improved-2018/","publishdate":"2020-08-13T21:11:34.702131Z","relpermalink":"/publication/macdonald-improved-2018/","section":"publication","summary":"We present an improved technique for susceptibility artifact correction in echo-planar imaging (EPI), a widely used ultra-fast magnetic resonance imaging (MRI) technique. Our method corrects geometric deformations and intensity modulations present in EPI images. We consider a tailored variational image registration problem incorporating a physical distortion model and aiming at minimizing the distance of two oppositely distorted images subject to invertibility constraints. We derive a novel face-staggered discretization of the variational problem that renders the discretized distance function and constraints separable. Motivated by the presence of a smoothness regularizer, which leads to global coupling, we apply the alternating direction method of multipliers (ADMM) to split the problem into simpler subproblems. We prove the convergence of ADMM for this non-convex optimization problem. 
We show the superiority of our scheme compared to two state-of-the-art methods both in terms of correction quality and time-to-solution for 13 high-resolution 3D imaging datasets.","tags":["MRI","ADMM","Non-convex Optimization","Inverse Problems","Echo-Planar MRI"],"title":"Improved Susceptibility Artifact Correction of Echo-Planar MRI using the Alternating Direction Method of Multipliers","type":"publication"},{"authors":["Jan Macdonald"],"categories":null,"content":"","date":1495065600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1495065600,"objectID":"ff6d610fbf7b32ec65da36efb3daf55c","permalink":"https://jmaces.github.io/publication/macdonald-thesis-2017/","publishdate":"2023-01-21T15:49:22.532263Z","relpermalink":"/publication/macdonald-thesis-2017/","section":"publication","summary":"","tags":["Deep Neural Networks","Image Classification","Scattering Transforms"],"title":"Image Classification with Wavelet and Shearlet Based Scattering Transforms","type":"publication"},{"authors":["Jan Macdonald"],"categories":null,"content":"","date":1410825600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1410825600,"objectID":"30c080868b4e903fbb62efd8d94fd6b2","permalink":"https://jmaces.github.io/publication/macdonald-thesis-2014/","publishdate":"2023-01-20T15:49:22.532263Z","relpermalink":"/publication/macdonald-thesis-2014/","section":"publication","summary":"","tags":["Graph Algorithms","Combinatorial Optimization","Shortest Paths Algorithms"],"title":"Preprocessing for Shortest Path Algorithms on Road Networks","type":"publication"}]