Human knowledge can be incorporated into a learning model in diverse ways, e.g., by learning from expert data in imitation learning or through reward engineering in deep reinforcement learning. In many applications, however, the expert data covers only part of the search space, or only "normal" behaviors/scenarios. Learning a driving policy from such a limited dataset can leave it vulnerable to novel or out-of-distribution (OOD) inputs and thus produce overconfident, dangerous actions. In this micro-project, we aim to learn a policy from expert training data while allowing it to go beyond the data by interacting with an environment dynamics model and accounting for uncertainty in the state estimation with virtual sensors. To avoid a dramatic distribution shift, we propose to use the uncertainty of the environment dynamics model to penalize the policy in states that deviate from human behavior.
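
A minimal sketch of this uncertainty penalty, assuming an ensemble of learned one-step dynamics models whose disagreement serves as an OOD proxy (all names, such as DynamicsEnsemble and penalized_reward, and the penalty weight lam are illustrative assumptions, not the project's implementation):

```python
import numpy as np

class DynamicsEnsemble:
    """K independently trained one-step dynamics models f_k(s, a) -> s'."""

    def __init__(self, models):
        self.models = models  # list of callables (state, action) -> next state

    def uncertainty(self, state, action):
        # Epistemic proxy: spread of the ensemble predictions. A large spread
        # suggests (s, a) lies outside the expert data distribution.
        preds = np.stack([m(state, action) for m in self.models])  # (K, state_dim)
        return np.linalg.norm(preds.std(axis=0))


def penalized_reward(reward, ensemble, state, action, lam=1.0):
    """Pessimistic reward: r~(s, a) = r(s, a) - lam * u(s, a)."""
    return reward - lam * ensemble.uncertainty(state, action)


# Toy usage with two "models" that disagree slightly:
models = [lambda s, a: s + a, lambda s, a: s + 1.1 * a]
ens = DynamicsEnsemble(models)
r_pen = penalized_reward(1.0, ens, np.array([0.0, 0.0]), np.array([1.0, 1.0]), lam=0.5)
```

Training the policy on r~ instead of r discourages it from exploiting model error in regions far from human demonstrations.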

Output

  • Paper at the IEEE Intelligent Vehicles Symposium (IV 2021): https://2021.ieee-iv.org/ and/or
  • Paper at the Conference on Uncertainty in Artificial Intelligence (UAI 2020): http://www.auai.org/uai2020/

Project Partners:

  • German Research Centre for Artificial Intelligence (DFKI), Christian Müller
  • Volkswagen AG, Andrii Kleshchonok

Primary Contact: Christian Müller, DFKI

Main results of micro project:

The project has run for less than 50% of its allocated time. To date, the partners have spent the time refining the project goals and have held several meetings on the formulation of the project problem and possible approaches to solving it. We intend to begin the actual project phase in November.

Contribution to the objectives of HumaneAI-net WPs

Our key idea in this project is to learn representations with deep models in a way that incorporates rules (e.g., the physics equations governing the dynamics of an autonomous vehicle) or distributions that humans can simply define in advance. The representations learned in the source domain (the domain whose samples follow the predefined equations/distributions) are then transferred to a target domain with different distributions/rules, and the model adapts itself by including target-specific features that best explain the variations of target samples with respect to the underlying source rules/distributions. In this way, human knowledge is incorporated implicitly in the feature space; a sketch of this prior-plus-residual idea follows below.
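
One way to realize this, sketched here purely for illustration, is a dynamics model that combines a human-specified physics prior with a learned, target-specific residual; the prior encodes the source-domain rules and only the residual adapts to the target domain (the kinematic rule in physics_prior, the class name ResidualDynamics, and all dimensions are hypothetical):

```python
import torch
import torch.nn as nn

def physics_prior(state, action, dt=0.1):
    # Hypothetical human-defined rule: state = [position, velocity],
    # action = acceleration, integrated with a simple Euler step.
    pos, vel = state[..., 0:1], state[..., 1:2]
    new_pos = pos + vel * dt
    new_vel = vel + action * dt
    return torch.cat([new_pos, new_vel], dim=-1)

class ResidualDynamics(nn.Module):
    """Prediction = known source-domain rule + learned target-specific correction."""

    def __init__(self, state_dim=2, action_dim=1, hidden=64):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        # The residual captures deviations of target samples from the prior.
        return physics_prior(state, action) + self.residual(
            torch.cat([state, action], dim=-1)
        )
```

Under this split, only the residual network needs to be fit on target-domain data, while the human-defined equations remain fixed and interpretable.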

Tangible outputs

  • Other: TBD – TBD
    TBD