Developers, more often than architects, tend to get
frustrated by declarative programming, because it boosts expressive power at
the cost of testability. Yet both the boost and the drop are a mild breeze
compared to those of self-modifying, sub-symbolic Machine Learning (ML).
Reviews by humans
You can hardly get away without ML in today’s AI. It
would be like a mainstream relational DB without SQL, or like a rule engine without
declarative rules. And you’ll run into auditability issues in all three, but
they tend to be an order of magnitude bigger in “sub-symbolic” ML. It’s much
easier to check whether the learning works
accurately (a black-box test of a “black organism”) than to see why or how it works. A clear-box view relates to
the generated logic almost as loosely as an MRI scan relates to what a patient
is thinking.
Japanese industrial robots started to assemble other robots back
in 2001. Today, robotics vendors offer complete lights-out
manufacturing systems (earlier referred to in other sectors as “no man in the loop”).
So, needless to say, auditability and reviews by humans are a key architectural issue, be it in discrete
precision manufacturing, avionics, automotive, power generation, medical
equipment, or any other application where approval criteria (ISO etc.) are
strict.
Decision-tree induction from data
When the work
of professor Donald Michie (a distinguished codebreaker ahead of D-Day) resulted in the first ML
algorithms for rule mining and decision-tree induction from data, those algorithms were
able to show the learnt logic as diagrams or rules, and even to generate conditional
constructs in a mainstream programming language. This became particularly
useful in the elicitation and processing of tacit knowledge (the implicit “know-why”
underlying the explicit know-how of a human).
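To make the idea concrete, here is a minimal sketch of decision-tree induction from data, in the spirit of those early algorithms. It is a toy ID3-style learner on a hypothetical quality-control dataset (the attribute names and labels are invented for illustration); the point is that the learnt logic can be printed back as readable if/then rules.

```python
from collections import Counter
import math

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def induce(rows, labels, features):
    # Stop when the labels are pure or no features remain: emit a leaf.
    if len(set(labels)) == 1 or not features:
        return Counter(labels).most_common(1)[0][0]
    # Pick the feature with the highest information gain.
    def gain(feature):
        remaining = entropy(labels)
        for value in set(r[feature] for r in rows):
            subset = [l for r, l in zip(rows, labels) if r[feature] == value]
            remaining -= len(subset) / len(labels) * entropy(subset)
        return remaining
    best = max(features, key=gain)
    tree = {}
    for value in set(r[best] for r in rows):
        sub_rows = [r for r in rows if r[best] == value]
        sub_labels = [l for r, l in zip(rows, labels) if r[best] == value]
        rest = [f for f in features if f != best]
        tree[(best, value)] = induce(sub_rows, sub_labels, rest)
    return tree

def show(tree, indent=0):
    # Render the induced tree as human-reviewable if/then rules.
    if not isinstance(tree, dict):
        print(" " * indent + "-> " + tree)
        return
    for (feature, value), subtree in tree.items():
        print(" " * indent + f"if {feature} == {value!r}:")
        show(subtree, indent + 2)

# Hypothetical training data: should a manufactured part be rejected?
rows = [
    {"surface": "scratched", "weight": "ok"},
    {"surface": "clean", "weight": "ok"},
    {"surface": "clean", "weight": "off"},
    {"surface": "scratched", "weight": "off"},
]
labels = ["reject", "accept", "reject", "reject"]

tree = induce(rows, labels, ["surface", "weight"])
show(tree)
```

Unlike the sub-symbolic techniques discussed below, the output of this induction is itself symbolic: an auditor can read the rules directly, or a code generator can turn them into conditional constructs.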
Many IT roles are quite familiar with declarative
rules, although in narrower contexts: in DB schemas (or UML models), the rules
are typically about referential integrity,
along the lines of “on delete cascade” in SQL DDL (see also the exclusive-or, {xor}, in the UML diagram). In design, rules are often invariants and
preconditions/postconditions. In modeling, a UML state diagram visually defines
the rules that govern event handling and state transitions. Etc.
A role model, UML pun intended, with an either-or
referential integrity.
From the Informator course Avancerad Modellering, T2716 (structure-models chapter).
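The referential-integrity rule mentioned above can be demonstrated end to end with Python’s built-in SQLite driver. This is a minimal sketch (the table names are invented for illustration): the DDL declares the rule once, and the engine enforces it thereafter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customer(id) ON DELETE CASCADE
    )
""")
conn.execute("INSERT INTO customer VALUES (1)")
conn.execute("INSERT INTO orders VALUES (10, 1)")

# Deleting the parent row cascades to the dependent order row.
conn.execute("DELETE FROM customer WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(remaining)  # -> 0
```

Note how the rule is declarative: nowhere is there imperative cleanup code for orders; the single DDL clause carries the whole policy, which is exactly what makes such rules easy to audit.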
Case Bases
The technology that followed decision-tree induction in
inductive ML, Case-Based Reasoning (CBR), was instance-based and used
lazy generalization: the induced commonalities (between cases in
the training set) were expressed only later, at query time. Although less explicit than decision
trees or rules, this was still fairly comprehensible for humans to review.
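A minimal sketch of that instance-based, lazy style (the case attributes and similarity measure are invented for illustration): nothing is generalized at storage time; the case base is simply kept, and similarity is computed only when a query arrives.

```python
# A tiny case base of hypothetical process runs and their outcomes.
case_base = [
    ({"temp": 210, "pressure": 5}, "pass"),
    ({"temp": 260, "pressure": 9}, "fail"),
    ({"temp": 215, "pressure": 6}, "pass"),
]

def similarity(a, b):
    # Negative Manhattan distance over shared attributes (a toy measure).
    return -sum(abs(a[k] - b[k]) for k in a)

def retrieve(query, k=1):
    # Lazy generalization: rank stored cases against the query only now,
    # at query time, and reuse the outcome of the best match.
    ranked = sorted(case_base,
                    key=lambda case: similarity(query, case[0]),
                    reverse=True)
    return ranked[:k]

best_case, outcome = retrieve({"temp": 220, "pressure": 6})[0]
print(best_case, outcome)
```

The reviewability the text mentions comes from exactly this: an auditor can inspect which concrete retrieved case drove a given answer, rather than deciphering weights.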
Sub-symbolic “representation” of knowledge
However,
the commercial breakthrough of self-modifying technologies, mainly neural
networks and evolutionary algorithms, made audits both hot and tricky. In a
neural network, the logic learnt during training is sub-symbolic, “packed” into
weight values on the connections and artificial neurons. Roughly speaking, a clear-box view relates to the generated
logic almost as loosely as an EEG record relates to what a patient is thinking.
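The opacity is easy to demonstrate. Below is a minimal fixed-weight network that happens to compute XOR (the weights are handcrafted for illustration, not trained): its behaviour passes a black-box test, yet nothing in the weight values reads as a rule.

```python
import math

# Handcrafted weights for a 2-2-1 network computing XOR (hypothetical values).
# The learnt "logic" of a real trained net lives in numbers just like these.
W_hidden = [[20.0, 20.0], [-20.0, -20.0]]
b_hidden = [-10.0, 30.0]
W_out = [20.0, 20.0]
b_out = -30.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x1, x2):
    hidden = [sigmoid(w[0] * x1 + w[1] * x2 + b)
              for w, b in zip(W_hidden, b_hidden)]
    return sigmoid(sum(w * h for w, h in zip(W_out, hidden)) + b_out)

# Black-box test: the behaviour checks out as XOR...
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, round(forward(x1, x2)))
# ...but the clear-box view (W_hidden, W_out) "says" XOR to no human reviewer.
```

This is the auditability gap in miniature: verifying *whether* the net works is cheap; explaining *why* from the weights alone is not.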
Therefore,
many researchers argue that the audit-related abilities of a sub-symbolic ML system,
like reasoning about itself and meaningfully visualizing, animating, or explaining
its logic or its conclusions, improve greatly when its ML is combined with
other (typically symbolic) AI techniques. If so, this is one of the rare cases in
architecture where complexity and testability actually enhance each other
rather than restrict one another.
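One common form of such a combination is a symbolic surrogate: probe the opaque model with sample inputs and induce readable rules from its answers. The sketch below uses a stand-in function as the “opaque model” (in practice it would be a trained network; all names and thresholds here are invented for illustration).

```python
# Stand-in for an opaque, sub-symbolic model (hypothetical).
def opaque_model(temp, pressure):
    return "fail" if temp > 240 or pressure > 8 else "pass"

# Probe the black box on a grid of inputs...
samples = [(t, p, opaque_model(t, p))
           for t in range(200, 281, 20)
           for p in range(4, 11, 2)]

# ...and extract one crude single-threshold rule per attribute.
def best_threshold(index, name):
    for threshold in sorted({s[index] for s in samples}):
        above = [s[2] for s in samples if s[index] > threshold]
        if above and all(label == "fail" for label in above):
            return f"{name} > {threshold} -> fail"
    return None

for rule in (best_threshold(0, "temp"), best_threshold(1, "pressure")):
    if rule:
        print(rule)
```

The surrogate rules are approximations, but they give auditors a symbolic artifact to review, which is the essence of the hybrid approach the paragraph above describes.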
Summing up
Architecture
is not exempt from ML’s impact on AI, IT, and business processes. Although
auditability was brought to the table already when robots started to assemble other
robots, it is crucial today in all kinds of ML systems in a vast variety of
applications and industry sectors.
Trainer at Informator, senior modeling and
architecture consultant at Kiseldalen.com, Advanced UML2
Professional (OCUP certification, level 3/3). Main author: UML Extra
Light (Cambridge University Press) and Growing Modular (Springer).