
CRC 901 – On-The-Fly Computing (OTF Computing)



Talk given by Prof. Plamen P. Angelov (Lancaster University, UK)

Begin: Thursday, January 10, 2019, 6:00 PM
Location: Warburger Str. 100, lecture hall L2.202

On January 10, 2019, Prof. Plamen P. Angelov (Lancaster University, UK) will give a talk on "Explainable AI through Interpretable Deep Rule-based Learning".


We are witnessing an explosion of data streams being generated and growing exponentially. Nowadays we carry Gigabytes of data in our pockets, in the form of USB flash drives, smartphones, smartwatches, etc. Extracting useful, human-intelligible information and knowledge from these big data streams is of immense importance for society, the economy and science. Mainstream deep learning has quickly become synonymous with a powerful method for equipping items and processes with elements of AI, in the sense that it enables human-like performance in recognising images and speech. However, the currently used deep learning methods, which are based on neural networks (recurrent, belief, etc.), are opaque (not transparent), require huge amounts of training data and computing power (hours of training using GPUs), and are offline, while their online versions, based on reinforcement learning, have no proven convergence. They do not guarantee the same result for the same input (they lack repeatability) and, more importantly, they provide no insight or transparency (they are "black-box" models).

In this talk a new, recently introduced approach will be presented which offers highly efficient classifiers, predictive models, etc., yet is fully interpretable, transparent and human-intelligible. Moreover, the local optimality as well as the convergence (and, respectively, the stability) of the proposed systems have been theoretically proven and illustrated with examples. The proposed method is prototype-based and non-iterative. It is based on density and is therefore computationally very efficient (learning from a large set of images takes a few seconds and does not require GPUs or other accelerators, as mainstream deep learning does). Nonetheless, the performance of the proposed method is on par with or better than that of competitive alternatives. A major advantage of this new paradigm is the liberation from restrictive and often unrealistic assumptions and requirements concerning the nature of the data (random, deterministic, fuzzy), the need to formulate and assume a priori the type of distribution models or membership functions, the independence of individual data observations, their large (theoretically infinite) number, etc. From a pragmatic point of view, this direct path from data (streams) to a complex, layered model representation is fully automated and leads to very efficient model structures. In addition, the proposed concept learns in a way similar to the way people learn: it can start from a single example.

Thus, the proposed approach is anthropomorphic in nature.
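To give a flavour of the general idea, the following is a minimal, illustrative sketch of a non-iterative, prototype- and density-based classifier. It is not Prof. Angelov's actual Deep Rule-Based algorithm; the class name, the single-prototype-per-class choice and the density estimate (inverse mean distance to class members) are simplifying assumptions made here for illustration only.

```python
import numpy as np

class PrototypeSketchClassifier:
    """Illustrative sketch only, not the DRB algorithm itself.

    Training is a single, non-iterative pass: for each class, the sample
    with the highest local density (smallest mean distance to the other
    samples of that class) is kept as the prototype. Prediction assigns
    a point to the class of the nearest prototype.
    """

    def fit(self, X, y):
        self.prototypes_ = {}
        for label in np.unique(y):
            Xc = X[y == label]
            if len(Xc) == 1:
                # As in the described paradigm, learning can start
                # from a single example per class.
                self.prototypes_[label] = Xc[0]
                continue
            # Local density: inverse of the mean distance to class members.
            dists = np.linalg.norm(Xc[:, None, :] - Xc[None, :, :], axis=-1)
            density = 1.0 / (dists.mean(axis=1) + 1e-12)
            self.prototypes_[label] = Xc[np.argmax(density)]
        return self

    def predict(self, X):
        labels = list(self.prototypes_)
        protos = np.stack([self.prototypes_[l] for l in labels])
        # Distance from every test point to every prototype.
        d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=-1)
        return np.array([labels[i] for i in d.argmin(axis=1)])
```

Because such a model is just a set of labelled prototypes, each decision can be explained by pointing at the prototype that triggered it, which is the kind of transparency the abstract contrasts with black-box neural networks.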

