The jump into AI has, to some extent, been a jump into a technology that can replace humans, influence humans, and in some ways undermine them. There have obviously been many important advances and very exciting developments in the technology, but the effect of this technology on humanity has not always been thought through, nor has enough been done to ensure that potential negative effects are addressed or avoided. This is completely understandable, as many of the negative effects were unforeseen.
The ability to influence people, and the ways we have seen this possibly misused in election processes, is something I think people would not have foreseen, and for that reason there is no guilt on the part of companies for failing to address these issues upfront. Perhaps they have been slow to respond in many cases, but that really comes down to their business model: a business is, in many respects, responsible for ensuring profit for its shareholders.
Of course a business should take social responsibility seriously, and perhaps that is sometimes taken on board a little later than would be desirable. However, I think the HumaneAI project is here to address these issues head on: to take these questions through to the fundamental research required to address them, to understand them properly, and to think about their implications for humanity as a whole. And there, I think, we are potentially hitting on incredibly interesting questions.
Questions such as "what does it mean to be a human being?" and "what is now the essence of being human?" Perhaps it isn't just intelligence. If we can reproduce that intelligence in a machine, then why are we different from a machine? We have to ask that question, and I think it is one of the questions HumaneAI can shed light on, by understanding the implications of the way humans think, act, and experience, and the way AI systems act and do not experience, for example.
One of the beauties of Europe is its cultural diversity, and to some extent I am a great believer in not undermining that diversity through unification. I think we want to encourage that diversity and individuality in communities, obviously not at the expense of unity in the sense of understanding each other. But that again goes to the heart of one of the problems AI has created: the potential for information bubbles, where people only hear the information, the news, and the perspectives that agree with their current thinking. And that is not the kind of individuality that we want to grow in Europe.
We want to grow an individuality that respects differences, understands differences, and celebrates differences, but equally celebrates individuality. So we do not get rid of differences by creating a uniform mish-mash; we create a collaborative set of experiences that brings together very different communities, but ones that respect and encourage that individuality in others. That requires tools that enable people to communicate across cultural divides, to understand different cultures, to see what the other side thinks, and to see what the differences are. In that way, different communities can be appreciated and supported in their individuality.