Most architectural tradeoffs are hard, and the one between the accuracy of deep machine learning (ML) and the explainability/transparency of explainable AI (XAI) is no exception. That is one reason DARPA’s initial XAI program is expected to run through 2021.
Few explainability mechanisms have been extensively tested on humans, but current R&D suggests that at least some (hybrid) technology will arrive within a couple of years.
Several well-tried ML technologies work their way, step by step, from a first random solution to satisfactory ones and, where possible, to an optimal one:
· Genetic Algorithms, by creating additional generations (sketched in code right after this list)
· Forests, by creating additional trees from the same data
· Deep Neural Networks, by a cost function applied during training to the outputs, based on how far they differ from the labeled data, and propagated back across all neurons/synapses to adjust the weights
· Hybrids, by combination (the “new kid on the block”).
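To make the stepwise refinement concrete, here is a minimal GA sketch in Python. The bit-string target, the toy fitness function, and all names are my own illustration, not code from any project mentioned here: the loop starts from a first random population and breeds additional generations until a satisfactory (here, optimal) candidate appears.

import random

TARGET = [1] * 20                      # toy goal: a string of twenty 1-bits

def fitness(candidate):
    # count matching bits; higher is better
    return sum(c == t for c, t in zip(candidate, TARGET))

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def mutate(candidate, rate=0.02):
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

# a first random population ...
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]

# ... refined generation by generation
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                          # optimal candidate found
    parents = population[:10]          # keep the fittest as parents
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]

print(generation, fitness(population[0]))

The NASA antenna below was evolved in essentially this spirit: selection, crossover, and mutation over many generations, scored by a fitness function derived from the antenna’s requirements.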
(Figure: NASA Space Technology 5 antenna created by a GA. Source: NASA.gov)
GAs and Forests are, by their very nature, more than semi-intelligible. However, now that deep learning feeds big data through NNs consisting of multiple hidden layers of neurons (DNNs), it arrives at very accurate solutions to complex multidimensional problems, but at the same time at an inherent black box.
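To unpack the cost-function bullet in the list above, here is a minimal training sketch in plain numpy; the layer sizes, learning rate, and toy data are made up for illustration. The gap between the outputs and the labels becomes a cost, and the error is propagated back across both weight matrices to adjust every weight a little on each step.

import numpy as np

rng = np.random.default_rng(0)

# toy labeled data: 4 samples, 3 inputs, 1 label each
X = rng.random((4, 3))
y = rng.integers(0, 2, (4, 1)).astype(float)

# one hidden layer of 5 neurons, one output neuron
W1 = rng.normal(size=(3, 5))
W2 = rng.normal(size=(5, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # forward pass: inputs -> hidden layer -> output
    hidden = sigmoid(X @ W1)
    out = sigmoid(hidden @ W2)

    # cost: how far the outputs are from the labels
    cost = np.mean((out - y) ** 2)

    # backward pass: propagate the error back across both layers
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)

    # adjust the weights in the direction that lowers the cost
    W2 -= 0.5 * hidden.T @ d_out
    W1 -= 0.5 * X.T @ d_hidden

print(round(float(cost), 4))

Even in this tiny network the learned weights carry no direct human-readable meaning, and that is precisely the black-box property that grows with depth and data.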
Part of my post from August is about random forests, which offer both more transparency than DNNs do and quite a degree of accuracy. Moreover, trees and NNs are even cross-fertilized to offer explainability without impeding accuracy, and there are already several flavors of this; to pick a handful:
· adding the extraction of simplified explainable models (e.g. trees) onto black-box DNNs (a minimal surrogate-tree sketch follows this list)
· local (instance-based) explanation of one use case at a time, with its input values, for example animating and explaining its path layer by layer (much as most test tools do)
· soft trees, with NN-based leaf nodes, that perform better than trees induced directly from the same data
· adaptive neural trees and deep neural decision forests that create trees (edges, splits, and leaves), to outperform “standalone” NNs as well as trees/forests that skip the combination.
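As a concrete illustration of the first flavor, the sketch below (assuming scikit-learn and its bundled Iris sample data) trains a small neural network as the black box and then induces a shallow decision tree from the network’s own predictions rather than from the raw labels; the tree becomes an approximate but fully inspectable surrogate.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# the "black box": a small multi-layer neural network
black_box = MLPClassifier(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
black_box.fit(X_train, y_train)

# the surrogate: a shallow tree fitted to the network's predictions, not to the labels
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

print("black-box accuracy:", black_box.score(X_test, y_test))
print("surrogate fidelity:", surrogate.score(X_test, black_box.predict(X_test)))
print(export_text(surrogate, feature_names=list(load_iris().feature_names)))

Such a distilled surrogate trades some fidelity for a tree a human can read end to end; the soft-tree and neural-decision-forest flavors above go further and train the tree structure and the NN jointly.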
Explainability, transparency, and verification & validation (V&V) are absolutely essential to users’ reliance on, and confidence in, mission-critical AI. Therefore, whichever path or paths take us to up-and-running products, XAI is welcome.
Trainer at Informator, senior modeling and architecture consultant at Kiseldalen’s, main author: UML Extra Light (Cambridge University Press) and Growing Modular (Springer), Advanced UML2 Professional (OCUP cert level 3/3). Milan and Informator have collaborated since 1996 on architecture, modeling, UML, requirements, rules/AI, and design. You can meet him at public courses (in English or Swedish) on AI, Architecture, and ML (T1913, in December or February), Architecture (T1101, T1430), or Modeling (T2715, T2716).