
Explainable AI with Python by Leonida Gianfagna


£49.99
Condition - New
Only 3 left

Summary

This book provides a complete presentation of the current concepts and available techniques for making machine learning systems more explainable. The approaches presented can be applied to almost all current machine learning models: linear and logistic regression, deep learning neural networks, natural language processing and image recognition, among others.
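As a taste of the simplest of these cases, the sketch below (not taken from the book; the dataset and model settings are illustrative assumptions) shows how a logistic regression trained with scikit-learn exposes its learned coefficients as directly readable feature effects:

# A minimal, illustrative sketch of an intrinsically interpretable
# model: a logistic regression whose coefficients can be read
# directly as feature effects. Dataset choice is an assumption.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# Each coefficient is the change in log-odds per standard deviation
# of the scaled feature -- a direct, human-readable explanation.
coefs = model.named_steps["logisticregression"].coef_[0]
for name, coef in sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name}: {coef:+.2f}")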

Progress in Machine Learning is increasing the use of artificial agents to perform critical tasks previously handled by humans (in healthcare, legal and finance, among others). While the principles that guide the design of these agents are understood, most of the current deep-learning models are opaque to human understanding. Explainable AI with Python fills the current gap in the literature on this emerging topic by taking both a theoretical and a practical perspective, quickly enabling the reader to work with the tools and code of Explainable AI.

Beginning with examples of what Explainable AI (XAI) is and why it is needed in the field, the book details different approaches to XAI depending on specific context and need. Hands-on work on interpretable models, with specific examples leveraging Python, is then presented, showing how intrinsically interpretable models can be interpreted and how to produce human-understandable explanations. Model-agnostic methods for XAI are shown to produce explanations without relying on the internals of opaque ML models. Using examples from Computer Vision, the authors then look at explainable models for Deep Learning and prospective methods for the future. Taking a practical perspective, the authors demonstrate how to effectively use ML and XAI in science. The final chapter explains Adversarial Machine Learning and how to do XAI with adversarial examples.
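As a taste of the model-agnostic workflow described above, here is a minimal sketch (not taken from the book) using the open-source shap library to explain a tree-ensemble model; the dataset and model choices are illustrative assumptions:

# A minimal sketch of a model-agnostic explanation with SHAP,
# applied to an opaque scikit-learn model on a toy dataset.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an opaque model on a toy regression dataset
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-feature attributions for the first prediction
for name, value in zip(X.columns, shap_values[0]):
    print(f"{name}: {value:+.3f}")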

About Leonida Gianfagna and Antonio Di Cecco

Leonida Gianfagna (PhD, MBA) is a theoretical physicist currently working in Cyber Security as R&D Director for Cyber Guru. Before joining Cyber Guru, he worked at IBM for 15 years, holding leading roles in software development for ITSM (IT Service Management). He is the author of several publications in theoretical physics and computer science, and is an accredited IBM Master Inventor (15+ filings).

Antonio Di Cecco is a theoretical physicist with a strong mathematical background who is fully engaged in delivering education on AI/ML at all levels, from beginners to experts, both in face-to-face classes and remotely. The main strength of his approach is a deep dive into the mathematical foundations of AI/ML models, which opens new angles for presenting AI/ML knowledge and reveals room for improvement over the existing state of the art. Antonio also holds a Master's in Economics with a focus on innovation, and has teaching experience. He leads the School of AI in Italy, with chapters in Rome and Pescara.

Table of Contents

1. The Landscape
1.1 Examples of what Explainable AI is
1.1.1 Learning Phase
1.1.2 Knowledge Discovery
1.1.3 Reliability and Robustness
1.1.4 What have we learnt from the 3 examples
1.2 Machine Learning and XAI
1.2.1 Machine Learning taxonomy
1.2.2 Common Myths
1.3 The need for Explainable AI
1.4 Explainability and Interpretability: different words to say the same thing or not?
1.4.1 From World to Humans
1.4.2 Correlation is not causation
1.4.3 So what is the difference between interpretability and explainability?
1.5 Making Machine Learning systems explainable
1.5.1 The XAI flow
1.5.2 The big picture
1.6 Do we really need to make Machine Learning Models explainable?
1.7 Summary
1.8 References
2. Explainable AI: needs, opportunities and challenges
2.1 Human in the loop
2.1.1 Centaur XAI systems
2.1.2 XAI evaluation from the Human in the Loop perspective
2.2 How to make Machine Learning models explainable
2.2.1 Intrinsic Explanations
2.2.2 Post-Hoc Explanations
2.2.3 Global or Local Explainability
2.3 Properties of Explanations
2.4 Summary
2.5 References
3. Intrinsic Explainable Models
3.1 Loss Function
3.2 Linear Regression
3.3 Logistic Regression
3.4 Decision Trees
3.5 K-Nearest Neighbors (KNN)
3.6 Summary
3.7 References
4. Model-agnostic methods for XAI
4.1 Global Explanations: Permutation Importance and Partial Dependence Plot
4.1.1 Ranking features by Permutation Importance
4.1.2 Permutation Importance on the train set
4.1.3 Partial Dependence Plot
4.1.4 Properties of Explanations
4.2 Local Explanations: XAI with Shapley Additive explanations
4.2.1 Shapley Values: a game-theoretical approach
4.2.2 The first use of SHAP
4.2.3 Properties of Explanations
4.3 The road to KernelSHAP
4.3.1 The Shapley formula
4.3.2 How to calculate Shapley values
4.3.3 Local Linear Surrogate Models (LIME)
4.3.4 KernelSHAP is a unique form of LIME
4.4 KernelSHAP and interactions
4.4.1 The New York Cab scenario
4.4.2 Train the Model with preliminary analysis
4.4.3 Making the model explainable with KernelSHAP
4.4.4 Interactions of features
4.5 A faster SHAP for boosted trees
4.5.1 Using TreeSHAP
4.5.2 Providing explanations
4.6 A naive criticism of SHAP
4.7 Summary
4.8 References
5. Explaining Deep Learning Models
5.1 Agnostic Approach
5.1.1 Adversarial Features
5.1.2 Augmentations
5.1.3 Occlusions as augmentations
5.1.4 Occlusions as an Agnostic XAI Method
5.2 Neural Networks
5.2.1 The neural network structure
5.2.2 Why is the neural network deep? (vs shallow)
5.2.3 Rectified activations (and Batch Normalization)
5.2.4 Saliency Maps
5.3 Opening Deep Networks
5.3.1 Different layer explanation
5.3.2 CAM (Class Activation Maps) and Grad-CAM
5.3.3 DeepSHAP / DeepLIFT
5.4 A critique of Saliency Methods
5.4.1 What the network sees
5.4.2 Explainability, batch normalizing layer by layer
5.5 Unsupervised Methods
5.5.1 Unsupervised Dimensional Reduction
5.5.2 Dimensional reduction of convolutional filters
5.5.3 Activation Atlases: How to tell a wok from a pan
5.6 Summary
5.7 References
6. Making science with Machine Learning and XAI
6.1 Scientific method in the age of data
6.2 Ladder of Causation
6.3 Discovering physics concepts with ML and XAI
6.3.1 The magic of autoencoders
6.3.2 Discover the physics of a damped pendulum with ML and XAI
6.3.3 Climbing the ladder of causation
6.4 Science in the age of ML and XAI
6.5 Summary
6.6 References
7. Adversarial Machine Learning and Explainability
7.1 Adversarial Examples (AE) crash course
7.1.2 Hands-on Adversarial Examples
7.2 Doing XAI with Adversarial Examples
7.3 Defending against Adversarial Attacks with XAI
7.4 Summary
7.5 References
8. A proposal for a sustainable model of Explainable AI
8.1 The XAI fil rouge
8.2 XAI and GDPR
8.2.1 FAST XAI
8.3 Conclusions
8.4 Summary
8.5 References
Index

Additional information

SKU: NGR9783030686390
ISBN 13: 9783030686390
ISBN 10: 3030686396
Title: Explainable AI with Python by Leonida Gianfagna
Condition: New
Binding: Paperback
Publisher: Springer Nature Switzerland AG
Date published: 2021-04-29
Pages: 202
Book picture is for illustrative purposes only; actual binding, cover or edition may vary.
This is a new book - be the first to read this copy. With untouched pages and a perfect binding, your brand new copy is ready to be opened for the first time.
