Human-AI Interaction: Designing Future Human-AI Interfaces

Summer Term 2019 > Start: Monday, March 11, 2019, 14:00, Room IDEG 134, Inffeldgasse 16c, 8010 Graz
AK HCI, 19S, 706.046, 3 VU, 5 ECTS

Human-AI Interaction

See a short (9 min) video intro here: https://goo.gl/dB9heh

> TUGRAZonline: 706.046, 19S, 3 SH, 5 ECTS

GOAL: In this research-based teaching course you will learn principles of explainable AI and how to design, develop and test interfaces through which humans and machine learning systems can interact and collaborate for effective decision making. You will learn the difference between explainable AI and explainability and experiment with explanation user-interface frameworks.

MOTIVATION: Artificial Intelligence (AI) and Machine Learning (ML) have demonstrated impressive successes. Deep learning (DL) approaches in particular hold great promise (see the differences between AI/ML/DL here). Unfortunately, the best-performing methods turn out to be non-transparent, so-called "black boxes". Such models have no explicit declarative knowledge representation and therefore have difficulty generating explanatory and contextual structures. This considerably limits the achievement of their full potential in certain application domains. Consequently, in safety-critical systems and domains (e.g. the medical domain) we must raise the questions: "Can we trust these results?" and "Can we explain how and why a result was achieved?". This is not only crucial for user acceptance (e.g. in medicine the ultimate responsibility remains with the human); it has also been mandatory since 25 May 2018 under the European GDPR, which includes a "right to explanation".
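To make the black-box problem concrete, here is a minimal, purely illustrative sketch (assuming Python with scikit-learn, which the course does not prescribe): it trains a well-performing but non-transparent model and then probes it with one simple post-hoc explanation technique, permutation feature importance.

# Minimal sketch: an opaque ("black-box") model probed with a post-hoc explanation.
# Assumes Python with scikit-learn >= 0.22 (for permutation_importance); illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A small medical data set, matching the safety-critical setting above.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A well-performing but non-transparent ensemble model: it offers no
# explicit declarative representation of why it decides as it does.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))

# Post-hoc probe: shuffle each feature and measure the drop in accuracy.
# Features whose permutation hurts performance most are those the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
feature_names = load_breast_cancer().feature_names
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.3f}")

Note that such a post-hoc view indicates which inputs the model relies on, but not the contextual why and how; that gap between post-hoc explanation and genuine explainability is exactly what this course addresses.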

RELEVANCE: There is growing industrial demand for machine learning approaches that are not only well performing but also transparent, interpretable and trustworthy, e.g. in medicine, but also in production (Industry 4.0), robotics, autonomous driving, recommender systems, etc.

BACKGROUND: Methods to re-enact the machine decision-making process, and to reproduce and comprehend the learning and knowledge-extraction process, need effective user interfaces. For decision support it is necessary to understand the causality of learned representations. If human intelligence is complemented, and in some cases even overruled, by machine learning, humans must still be able to understand, and above all to interactively influence, the machine decision process on demand. This requires context awareness and sensemaking to close the gap between human thinking and "machine thinking".

SETTING: In this course students will have the unique opportunity to work on mini-projects addressing real-world problems within our digital pathology project. Students will learn basic principles of human-computer interaction, interaction design, usability engineering and evaluation methods, and get an introduction to causability research. Building on this course there are opportunities for further work (software development tasks, bachelor's, master's and PhD positions; see open work).

Intro Slides 2019 > [Slide deck Mo, 11.3.2019] > [Slide deck Mo, 18.3.2019]

Last updated by Andreas Holzinger 18.03.2019, 19:00 CET

Human-Computer Interaction meets Artificial Intelligence

Intelligent User Interfaces (IUI) are where human-computer interaction (HCI) meets artificial intelligence (AI). AI is often defined as the design of intelligent agents, which is also the core essence of machine learning (ML). In interactive machine learning (iML) these agents can also be humans (see the sketch after the reference list below):

Holzinger, A. 2016. Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Springer Brain Informatics (BRIN), 3, (2), 119-131, doi:10.1007/s40708-016-0042-6.
Online: https://link.springer.com/article/10.1007/s40708-016-0042-6

Holzinger, A. 2016. Interactive Machine Learning (iML). Informatik-Spektrum, 39, (1), 64-68, doi:10.1007/s00287-015-0941-6.
Online: https://link.springer.com/article/10.1007/s00287-015-0941-6

Holzinger, A., et al. 2017. A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. arXiv:1708.01104.
Online: https://arxiv.org/abs/1708.01104

Holzinger, A., et al. 2017. What do we need to build explainable AI systems for the medical domain? arXiv:1712.09923.
Online: https://www.groundai.com/project/what-do-we-need-to-build-explainable-ai-systems-for-the-medical-domain

Holzinger, A. 2018. Explainable AI (ex-AI). Informatik-Spektrum, 41, (2), 138-143, doi:10.1007/s00287-018-1102-5.
Online: https://link.springer.com/article/10.1007/s00287-018-1102-5

Holzinger, A., et al. 2018. Interactive machine learning: experimental evidence for the human in the algorithmic loop. Applied Intelligence, doi:10.1007/s10489-018-1361-5.
Online: https://link.springer.com/article/10.1007/s10489-018-1361-5
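The papers above treat the human as an active agent in the learning loop. As a concrete illustration, here is a minimal, hypothetical sketch of one such loop, pool-based active learning with uncertainty sampling, assuming Python with scikit-learn and NumPy (this is illustrative code, not code from the cited papers):

# Minimal human-in-the-loop sketch: pool-based active learning with
# uncertainty sampling. Assumes Python with scikit-learn and NumPy;
# illustrative only, not the method of the cited papers.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y_true = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

labeled = list(rng.choice(len(X), size=10, replace=False))  # tiny seed set
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):
    model.fit(X[labeled], y_true[labeled])

    # The query step: pick the pool instance the model is least certain about.
    proba = model.predict_proba(X[pool])
    query = pool[int(np.argmin(proba.max(axis=1)))]

    # A human oracle would now inspect and label this instance on demand;
    # in this sketch we read the label from the ground truth instead.
    labeled.append(query)
    pool.remove(query)

model.fit(X[labeled], y_true[labeled])
print("Accuracy on the remaining pool:", model.score(X[pool], y_true[pool]))

The design point is the query step: instead of labeling data indiscriminately, the human is consulted on demand, exactly where the machine's uncertainty is highest.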

In this practically oriented course, software engineering is seen as a dynamic, interactive and cooperative process that facilitates an optimal mixture of standardization and tailor-made solutions. Here you have the chance to work on real-world problems (in the digital pathology project).

Previous knowledge expected

Interest in experimental software engineering in the sense of:
Science is about testing crazy ideas – engineering is about putting these ideas into business.

Interest in cross-disciplinary work, particularly in the HCI-KDD approach: many novel discoveries and insights are found at the intersection of two domains, see: A. Holzinger, 2013. "Human–Computer Interaction and Knowledge Discovery (HCI-KDD): What is the benefit of bringing those two fields to work together?", in Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127, A. Cuzzocrea, C. Kittl, D. E. Simos, E. Weippl, and L. Xu, Eds., Heidelberg, Berlin, New York: Springer, pp. 319-328. [DOI] [Download pdf]

Objective

After successful completion of this course:

  • Students can autonomously apply a selection of the most important scientific HCI methods and practical usability engineering (UE) methods, and understand the importance of causability research
  • Students understand the most essential problems that end-users face in our modern, complex and dynamic environment
  • Students are able to apply the most important experimental designs
  • Students learn to deal with the problems in modern user interface design
  • Students are able to conduct elementary research experiments and carry out solid evaluations in HCI research
Grading

1. One scientific paper per group (50%)
2. Project presentations during the semester – EVERY student in a group has to present one part of the work! (50%)

To counter arguments against paper writing, please have a look at this sample from a Harvard master's (!) course:

A sample from a Harvard master's course

Schedule

This VU is a very practice-led course, so the majority of the work will take place at home and in the field (field work with end-users, guided by tutors). The course room (IDEG 134, Inffeldgasse 16c) is reserved from 14:00 to 18:00, but that does not mean we always need the full time! Please make sure you are on time that day, as we will be presenting the projects (principle: first come, first served!).

Syllabus-HOLZINGER-Human-AI-Interfaces-706046-TUGraz-2019

General guidelines for the scientific paper

Holzinger, A. (2010). Process Guide for Students for Interdisciplinary Work in Computer Science/Informatics. Second Edition. Norderstedt: BoD (128 pages, ISBN 978-3-8423-2457-2)

Also available at the Fachbibliothek Inffeldgasse.

Scientific paper templates

Please use the following templates for your scientific paper:

(new) A general LaTeX template can be found on Overleaf > https://www.overleaf.com/4525628ngbpmv

Further information and templates available at: Springer Lecture Notes in Computer Science (LNCS)

Paper review template

PowerPoint template for the final presentation:

Some pointers to interesting sources in intelligent HCI:

  • Visual Turing Test, see: Lake, B. M., Salakhutdinov, R. & Tenenbaum, J. B. 2015. Human-level concept learning through probabilistic program induction. Science, 350, (6266), 1332-1338. [https://web.mit.edu/cocosci/Papers/Science-2015-Lake-1332-8.pdf]
    You can try out some online experiments ("visual Turing tests") to see whether you can tell the difference between human and computer behavior. The code and images for running these experiments are available on GitHub.
    https://cims.nyu.edu/~brenden/supplemental/turingtests/turingtests.html
  • The Human Kernel, see: Wilson, A. G., Dann, C., Lucas, C. & Xing, E. P. 2015. The Human Kernel. Advances in Neural Information Processing Systems (NIPS), 2836-2844. [papers.nips.cc/paper/5765-the-human-kernel.pdf]
    You can try out some online experiments for the Human Kernel here:
    https://functionlearning.com/
  • Hernández-Orallo, J. 2016. The measure of all minds: evaluating natural and artificial intelligence, Cambridge University Press.
    https://allminds.org/
  • Trust building with explanation interfaces, see: https://hci.epfl.ch/members/pearl/index.html
    Pu, P. & Chen, L. 2006. Trust building with explanation interfaces. Proceedings of the 11th International Conference on Intelligent User Interfaces (IUI '06). Sydney, Australia: ACM, 93-100, doi:10.1145/1111449.1111475.
  • The importance of Human-Computer Interaction for Explainable AI, see: David Gunning 2016. Explainable artificial intelligence (XAI): Technical Report Defense Advanced Research Projects Agency DARPA-BAA-16-53, Arlington, USA, DARPA.

Check out the latest news entries on the HCI-KDD blog: https://human-centered.ai/news