Machines we do not understand

Human beings have begun to create programs and machines whose inner workings are neither transparent nor interpretable. It would be advisable to regulate their development.

26 December 2017

An article by Andrés Ortega Klein, published in the Spanish news outlet «eldiario.es», examines the future of Artificial Intelligence and, in particular, the challenges posed by the "interpretability" and "auditability" of computer programs and machines. To ensure their "transparency", new machines should not be mere "black boxes", in which we know what goes in and what comes out but not what happens inside. Users should be able to understand the skills, intentions and situational constraints of these programs. The article mentions current research on this topic by Rafael García, a Research Engineer at IMDEA Networks Institute.


Source(s): eldiario.es