We envision a human-AI ecosystem in which AI-enabled devices act as proxies for humans and collectively learn a model in a decentralized way. Each device learns a local model that must be combined with the models learned by the other nodes, in order to improve both local and global knowledge. The challenge of doing so in a fully decentralized AI system entails understanding how to compose models coming from heterogeneous sources and, in the presence of potentially untrustworthy nodes, deciding who can be trusted and why. In this micro-project, we focus on the specific scenario of model “gossiping” for accomplishing a decentralized learning task, and we study which models emerge from the combination of local models, where the combination takes into account the social relationships between the humans associated with the AI devices. We use synthetic graphs to represent social relationships and large-scale simulations for performance evaluation.
Paper (most likely at a conference/workshop, possibly a journal)
Simulator (fallback plan if a paper cannot be produced by the end of the micro-project)
- Consiglio Nazionale delle Ricerche (CNR), Andrea Passarella
- Central European University (CEU), Gerardo Iniguez
Primary Contact: Andrea Passarella, CNR-IIT
Main results of micro project:
As of now, the micro-project has developed a modular simulation framework to test decentralised machine learning algorithms on top of large-scale complex social networks. The framework is written in Python, exploiting state-of-the-art libraries such as networkx (to generate network models) and PyTorch (to implement ML models). The simulator is modular: it accepts networks both in the form of datasets and of synthetic models. Local data are allocated to each node, which trains a local ML model of choice. Communication rounds are implemented, during which local models are aggregated and re-trained on local data. Benchmarks are included, namely federated learning and centralised learning. Initial simulation results assess the accuracy of decentralised learning (social AI gossiping) on Barabasi-Albert networks, showing that social AI gossiping achieves accuracy comparable to the centralised and federated learning versions (which, however, rely on centralised elements).
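The workflow described above (synthetic network generation, local data allocation, communication rounds with neighbour aggregation and local re-training) can be sketched as follows. This is a minimal illustrative toy, not the SAIsim code: it uses networkx to build a Barabasi-Albert graph as in the simulations, but replaces PyTorch models with plain NumPy least-squares regressors for brevity, and all variable names and the toy task are assumptions.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
N, DIM, ROUNDS = 50, 5, 20

# Synthetic social network (Barabasi-Albert preferential attachment)
G = nx.barabasi_albert_graph(N, m=2, seed=0)

# Each node receives local data drawn from a common ground-truth linear model
w_true = rng.normal(size=DIM)
data = {}
for v in G.nodes:
    X = rng.normal(size=(10, DIM))
    y = X @ w_true + 0.1 * rng.normal(size=10)
    data[v] = (X, y)

# Local "models" are parameter vectors, initialised to zero
models = {v: np.zeros(DIM) for v in G.nodes}

def local_step(w, X, y, lr=0.05):
    # one gradient step on the local least-squares loss
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

for _ in range(ROUNDS):
    # aggregation: each node averages its model with its neighbours'
    aggregated = {
        v: np.mean([models[u] for u in list(G[v]) + [v]], axis=0)
        for v in G.nodes
    }
    # re-training: one local update on the node's own data
    models = {v: local_step(aggregated[v], *data[v]) for v in G.nodes}

# average distance of the local models from the ground truth
err = np.mean([np.linalg.norm(models[v] - w_true) for v in G.nodes])
```

After the communication rounds, `err` should be well below the initial distance `np.linalg.norm(w_true)`, illustrating how gossip-style aggregation lets local models improve without any central server.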
Contribution to the objectives of HumaneAI-net WPs
The simulation engine is modular and can be exploited (also by the other project partners) to test decentralised ML solutions. The weighted network used to connect nodes can represent social relationships between users; thus, one of the main objectives of the obtained results is to understand the effects of the social network on decentralised ML tasks.
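Since the network is weighted, aggregation need not treat all neighbours equally: a node can weight each neighbour's model by the strength of the social tie to it. A minimal sketch of such trust-aware averaging is below; the function name, the self-weight convention, and the toy numbers are all illustrative assumptions, not part of SAIsim.

```python
import numpy as np

def weighted_aggregate(own_model, neighbour_models, tie_strengths, self_weight=1.0):
    """Combine a node's model with its neighbours', weighting each
    neighbour's contribution by the strength of the social tie to it.
    Weights are normalised so they sum to one (a convex combination)."""
    weights = np.array([self_weight] + list(tie_strengths), dtype=float)
    weights /= weights.sum()
    stacked = np.vstack([own_model] + list(neighbour_models))
    return weights @ stacked

# Toy example: two neighbours with equally strong ties
own = np.array([1.0, 1.0])
neigh = [np.array([3.0, 1.0]), np.array([1.0, 3.0])]
combined = weighted_aggregate(own, neigh, tie_strengths=[1.0, 1.0])
# equal weights 1/3 each -> combined = [5/3, 5/3]
```

With unequal tie strengths, the same function skews the combined model toward socially closer (presumably more trusted) neighbours, which is one way the social network can shape the decentralised learning outcome.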
- Program/code: SAIsim – Chiara Boldrini, Lorenzo Valerio, Andrea Passarella