Janin Koch, Aalto University

What is your opinion on Human-Centered AI?

Well, the thing is, when I look at collaborations between machines and humans, there is a huge aspect of how humans think and how humans represent the way they understand and make sense of what they're doing. If we want machines that actually work together with humans in this collaboration, then we have to respect the process by which meaning is made and reflection is done in the way we build these AI structures and in the way we use machine learning for this purpose. In my work I actually use AI technology from very different fields and apply it, because it made sense in the process, and not necessarily because it is the common way to do it with machine learning.

So, the human AI project, as I understand it, is basically about human-centered AI. If we want to build collaborators that actually make meaningful improvements to the world, meaningful innovations, we have to look at how this can be meaningful for humans. That is where I see we should focus: on what is meaningful to humans, and not necessarily to the system. That has certain implications for explainability: how can systems explain themselves, how can systems explain their reasoning? Reasoning is important, for example, when you gather ideas in brainstorming, but so is the whole aspect of sense-making and the semantics related to it, especially when we talk about abstract ideas or concepts that we are developing. There are a lot of aspects to that. And then there is the whole aspect of learning: what does it actually mean to learn in a context where we don't know where we are going? We do some brainstorming, we don't know where we will end up, and we learn over the course of talking to each other. How is that done in the system? All these aspects relate closely to how humans naturally understand and carry out this process. That doesn't mean that AIs have to do the same, but they have to acknowledge it and be built towards it.

What would be your blue sky project in AI for Europe?

That's a good question. Especially when we talk about collaborative AI, I certainly see a need to integrate, for example, more psychologists and social scientists into the work, basically working on how we actually do collaboration with machines but also among humans. Projects could look at what grounding actually is, what sense-making actually is in human and human-AI interactions, and things like that. Also, investing a lot of money in developing new approaches based on, for example, human behaviour. I, for example, did a lot of design studies before I started designing systems, basically to see how humans do it. If we look at several of these processes, we might actually find patterns that allow us to build better AI and machine learning approaches in this context.

Do you have a one-liner on that?

No. To be honest, no. I think there is so much work to be done on different aspects. On the machine learning side, we have to stop thinking in terms of a single goal that we ultimately want to reach, because the value is basically in developing ourselves together with machines. That means developing the human and the machine at the same time, so we need new approaches there. But we also have to better understand how we actually think, how we actually gather this information, how we actually do innovation. Mapping all of this onto interaction is far more complex than anything I could put into a one-liner.