Contact person: András Lőrincz (lorincz@inf.elte.hu)

Internal Partners:

  1. ELTE, András Lőrincz  

External Partners:

  1. Siemens, Sonja Zillner, sonja.zillner@siemens.com
  2. Volkswagen, Patrick van der Smagt, smagt@argmax.ai


The goals of the workshop are twofold. We want to: (a) bring together experts to design a roadmap for Assistive AI that complements ongoing AI-regulation efforts, and (b) start new micro-projects along the lines determined by the workshop as soon as possible.

The 2003 NSF call (NSF 03-611), though ultimately canceled, recognized that “The future and well-being of the Nation depend on the effective integration of Information Technologies (IT) into its various enterprises and social fabric.” The call also stated, “We have great difficulty predicting or even clearly assessing social and economic implications and we have limited understanding of the processes by which these transformations occur.”

These arguments are even stronger for AI: the transition is occurring rapidly, and forecasting is prone to significant uncertainty. Even if forecasts fail to a large extent, the impact will be serious, and beyond regulatory (societal) policy there will be a huge demand for assisting people and limiting societal turbulence.

Regulated AI

Regulation of public use of AI might succeed. Present efforts, such as “MEPs push to bring chatbots like ChatGPT in line with EU’s fundamental rights”, have, however, several shortcomings. Consider the easy-to-retrain Alpaca, “A Strong, Replicable Instruction-Following Model”, and the similar open-source efforts that will follow, offering effective and inexpensive tools for “rule breakers”.

Rule breakers can use peer-to-peer BitTorrent methods to hide the origin of content, create artificial identities, enter social networks, find echo chambers, and spread fake information efficiently. The automation of misinformation (trolling, conspiracy theories, improved tools for influencing) across social networks of different kinds casts doubt on the effectiveness of regulation efforts. While regulation seems necessary, regulations are control mechanisms, and delay in control may cause instabilities.

Assistive AI

Regulations for a community-serving “Assistive AI” (AAI), however, can be developed. AAI could diminish the harmful effects provided that efficient verification methods are available. Our early method [1] is a promising starting point: it preserves anonymity for contributing participants who need or want it, for as long as the rules of the community and the law allow. Accountability can also be included to hold contributors responsible.
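The combination of anonymity and accountability described above can be illustrated with a simple hash-commitment scheme. This is a minimal sketch under our own assumptions, not the method of [1]: a contributor publishes under a pseudonym derived from a commitment to their identity, a community "Board" escrows the opening of the commitment, and only if the community's rules are broken is the identity revealed, in a way anyone can verify. All names (`register`, `reveal_and_verify`) are illustrative.

```python
import hashlib
import os

# Illustrative sketch: anonymity with escrowed accountability.
# Assumed flow (not from the source document): the Board stores
# (identity, salt) privately; only the commitment is public.

def register(identity: str):
    """Contributor registers. A random salt binds the identity to a
    public commitment; the pseudonym is derived from the commitment
    and reveals nothing about the identity."""
    salt = os.urandom(16)
    commitment = hashlib.sha256(salt + identity.encode()).hexdigest()
    pseudonym = commitment[:12]  # public handle used for contributions
    return pseudonym, salt, commitment

def reveal_and_verify(identity: str, salt: bytes, commitment: str) -> bool:
    """If community rules are broken, the Board reveals (identity, salt);
    anyone can check the revealed pair against the public commitment."""
    return hashlib.sha256(salt + identity.encode()).hexdigest() == commitment

pseudonym, salt, commitment = register("alice@example.org")
assert reveal_and_verify("alice@example.org", salt, commitment)        # accountable
assert not reveal_and_verify("mallory@example.org", salt, commitment)  # cannot be framed
```

The salt prevents dictionary attacks on the public commitment, so anonymity holds until the escrow is opened; a production scheme would add signatures so contributions are bound to the pseudonym, which this sketch omits.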

Results Summary

Regulatory frameworks for the use of AI are emerging. However, they trail behind the fast-evolving malicious AI technologies that can quickly cause lasting societal damage. In response, we introduce a pioneering Assistive AI framework designed to enhance human decision-making capabilities. This framework aims to establish a trust network across various fields, especially within legal contexts, serving as a proactive complement to ongoing regulatory efforts. Central to the framework are the principles of privacy, accountability, and credibility: the reliability of information and information sources rests on the ability to uphold accountability, enhance security, and protect privacy. This approach supports, filters, and potentially guides communication, empowering individuals and communities to make well-informed decisions based on cutting-edge advancements in AI. The framework uses the concept of Boards as proxies to collectively ensure that AI-assisted decisions are reliable, accountable, and aligned with societal values and legal standards. Through a detailed exploration of the framework's main components, operations, and sample use cases, we show how AI can assist in the complex process of decision-making while maintaining human oversight. The proposed framework not only extends the regulatory landscape but also highlights the synergy between AI technology and human judgement, underscoring the potential of AI to serve as a vital instrument in discerning reality from fiction and thus enhancing the decision-making process. Furthermore, we provide domain-specific use cases to demonstrate the applicability of the framework.

Tangible Outcomes

  1. [arxiv] “Assistive AI for augmenting human decision-making” Natabara Máté Gyöngyössy, Bernát Török, Csilla Farkas, Laura Lucaj, Attila Menyhárd, Krisztina Menyhárd-Balázs, András Simonyi, Patrick van der Smagt, Zsolt Ződi, András Lőrincz https://arxiv.org/abs/2410.14353