Holger Hoos, Leiden University

People are understandably pretty concerned that AI might have adverse effects on their jobs and their lives; they don’t really understand the technology very well, but they see that it brings big changes. People want to be reassured, and to be honest, I want to be reassured too, that AI is going to happen in a way that is socially compatible and that makes our lives better, rather than just different, and maybe worse. Humane AI really aims at a kind of AI that makes people’s lives better, rather than the kind of AI that simply makes more profit. It’s very important for Humane AI to look very carefully at people’s needs and limitations, which AI can perhaps help us overcome, like implicit bias for example. It’s also very important to help balance the interests of society as a whole, of individuals and of industry as we deploy AI broadly.

Bluesky project: I think the biggest problem we are facing right now is the critical shortage of expertise in AI. One of the things that needs to be done to change this is, of course, to train more AI experts, but that is going to be too slow and will not meet the demand. Therefore, in my research what I would really like to do is automate, to a large extent, the development, customisation, deployment and running of AI. I think that will make it much easier to ensure that AI systems meet certain minimum standards of quality. Automated AI would go all the way to making it easy for people who know the job that needs to be done to do that job better with AI.