The goals of the workshop are as follows. We want
(a) to bring together experts to design a roadmap for Assistive AI that complements ongoing efforts at AI regulation, and
(b) to start new micro-projects along the lines determined by the workshop as soon as possible.
The 2003 NSF call (NSF 03-611), which was ultimately canceled, recognized that "The future and well-being of the Nation depend on the effective integration of Information Technologies (IT) into its various enterprises and social fabric." The call also stated: "We have great difficulty predicting or even clearly assessing social and economic implications and we have limited understanding of the processes by which these transformations occur."

These arguments apply even more strongly to AI: the transition is occurring rapidly, forecasts are subject to significant uncertainty, and even where they fail to a large extent, the impact will be serious. Beyond regulatory (societal) policy, there will be a huge demand for assisting people and limiting societal turbulence.
Regulated AI.

Regulation of the public use of AI might succeed. Present efforts, such as "MEPs push to bring chatbots like ChatGPT in line with EU's fundamental rights," have, however, several shortcomings. Consider the easy-to-retrain Alpaca, "A Strong, Replicable Instruction-Following Model," and the similar open-source efforts that will follow, offering effective and inexpensive tools for "rule breakers."

Rule breakers can use peer-to-peer BitTorrent methods to hide the origin of a source, create artificial identities, enter social networks, find echo chambers, and spread fake information efficiently. The automation of misinformation (trolling, conspiracy theories, improved tools for influencing) across social networks of various kinds casts doubt on the effectiveness of regulation efforts. While regulations seem necessary, they are control mechanisms, and delay in a control loop can cause instabilities.
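To make the control-theoretic point concrete, the following minimal simulation (an illustrative sketch; the gain and delay values are assumptions, not workshop material) shows a proportional feedback loop that converges when feedback is immediate but oscillates into instability once the same feedback acts on delayed observations:

def simulate(gain: float, delay: int, steps: int = 40) -> list[float]:
    """Discrete-time loop x[t+1] = x[t] - gain * x[t - delay]."""
    x = [1.0] * (delay + 1)  # history buffer; initial deviation of 1.0
    for _ in range(steps):
        x.append(x[-1] - gain * x[-1 - delay])
    return x

if __name__ == "__main__":
    no_delay = simulate(gain=0.9, delay=0)
    delayed = simulate(gain=0.9, delay=3)
    print(f"no delay, final |x|: {abs(no_delay[-1]):.4f}")  # near 0: stable
    print(f"delay=3,  final |x|: {abs(delayed[-1]):.1f}")   # grows: unstable

The same qualitative effect is the concern with delayed regulation: the corrective action arrives after the state it was computed from has already changed.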
Assistive AI.
Regulations for a community-serving "Assistive AI" (AAI), however, can be developed. AAI could diminish the harmful effects, provided that efficient verification methods are available. Our early method [1] is a promising starting point: it preserves anonymity for contributing participants who need or want it, for as long as the rules of the community and the law allow. Accountability can also be included to hold contributors responsible.
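The protocol of [1] is not reproduced here; purely as an illustration of the general anonymous-but-accountable idea, the sketch below lets participants act under per-context pseudonyms while a community-held escrow key can re-link a pseudonym to its owner when the community's rules or the law require it. The escrow design, function names, and identities are all assumptions for illustration.

# Illustrative sketch only: participants post under unlinkable pseudonyms,
# while a community-held escrow key can re-link a pseudonym to its owner
# if the rules require it. This is NOT the protocol of [1].
import hashlib
import hmac
import secrets

ESCROW_KEY = secrets.token_bytes(32)  # held by the community, not by users

def pseudonym(real_id: str, context: str) -> str:
    """Per-context pseudonym: deterministic for the escrow holder,
    unlinkable for everyone else (HMAC is a keyed one-way function)."""
    msg = f"{real_id}|{context}".encode()
    return hmac.new(ESCROW_KEY, msg, hashlib.sha256).hexdigest()[:16]

def deanonymize(members: list[str], context: str, pseud: str) -> str | None:
    """Accountability step: the escrow holder re-derives pseudonyms for
    known members and matches; returns the responsible identity, if any."""
    for real_id in members:
        if hmac.compare_digest(pseudonym(real_id, context), pseud):
            return real_id
    return None

members = ["alice@example.org", "bob@example.org"]
alias = pseudonym("alice@example.org", context="forum-2023")
print(alias)                                      # looks random to peers
print(deanonymize(members, "forum-2023", alias))  # escrow recovers alice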

No matter whether regulated AI succeeds, it is time to develop qualified AAI tools for society.
To-do's.
• Assistance in overcoming the trauma of unemployment and in finding suitable activities.
• Improving our ability to filter out fake news and promoting high-quality (verifiable) sources.
• Assistance with training tailored to individual goals, skills, and realities.
• Help in overcoming stress-related problems, or at least in mitigating their effects.
• Assistance with planning under uncertainty.
• Assistance in education and learning [2].
• Assistance in using AI and in understanding the moral versus utilitarian consequences of AI-related decisions.
• Development of inherently consistent LLMs, e.g., by means of Composite AI [3] or autoformalization [4] (a minimal consistency-check sketch follows this list).
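As a toy illustration of the consistency theme in the last item (not the Composite AI [3] or autoformalization [4] pipelines themselves), the sketch below applies a simple self-consistency filter: the same question is asked through several paraphrases, and an answer is accepted only if a large majority agrees. The ask_llm function is a hypothetical stub, and the 0.8 threshold is an assumption.

# Toy self-consistency filter for LLM answers. `ask_llm` is a
# hypothetical stand-in for any chat API; the threshold is illustrative.
from collections import Counter

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client."""
    raise NotImplementedError

def consistent_answer(question: str, paraphrases: list[str],
                      threshold: float = 0.8) -> str | None:
    """Ask the same question several ways; return the majority answer
    only if enough paraphrases agree, otherwise abstain (None)."""
    answers = [ask_llm(p.format(q=question)) for p in paraphrases]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer if votes / len(answers) >= threshold else None

Abstaining when paraphrased prompts disagree is a cheap proxy for the stronger guarantees that autoformalization [4] would provide by checking a formal translation with a proof assistant.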
Plans
Eötvös Loránd University has a history of addressing AI-related ethical and legal issues and has experts in social psychology and the labor market.

Planned method: SWOT analysis.
Planned venue: Budapest; the workshop can also be followed online.
Planned date: early September.
References
[1] Ziegler, G., et al. "A framework for anonymous but accountable self-organizing communities." Information and Software Technology 48 (2006): 726.
[2] Sal Khan on Khanmigo. https://www.youtube.com/watch?v=hJP5GqnTrNo (2023).
[3] Gartner Research. "Innovation Insight for Composite AI" (2022).
[4] Wu, Y., et al. "Autoformalization with large language models." NeurIPS 35 (2022): 32353.

Output

Expert opinion and plan for future micro-projects

Project Partners

  • Siemens, same
  • Volkswagen AG, same
  • Law School, Eötvös Loránd University, same
  • Faculty of Education and Psychology, Eötvös Loránd University, same
  • WP5 people, TBD

Primary Contact

András Lőrincz, Siemens