What are these reasons?
There is the worry of waking up in a smart everyday world where technology does everything better than we do, while we gradually lose control. Part of the discomfort is that people are measured against smart machines, i.e. confronted with demands for perfection that we cannot meet. In such a world, it is conceivable that more and more decisions will be handed over to artificial intelligence. In the USA, for example, there is a debate about whether machines could be the better judges, since they are not swayed by prejudice or mood. At the moment, however, the technology is still highly error-prone in many areas. It is therefore obvious that humans must have the last word. But we are only at the beginning of this development.
Who would be liable if a self-driving car caused an accident?
In principle, the answer is simple: if the vehicle drives itself, the manufacturer or the operator of the technical systems is liable; if a human takes control of the vehicle, the driver is liable. In practice, however, the line is hard to draw. With automated driving at level 3, drivers may temporarily turn their attention away from the traffic, but they must take over the steering again when the vehicle prompts them to. From a legal point of view, it is therefore important to define clear handover times. Presumably, cars will have to be equipped with a black box, similar to those in airplanes, that records driving data. After a collision, it would then be possible to analyze who was in charge at the decisive moment.
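A black box of this kind essentially amounts to a time-stamped log of handover events; determining who was in charge at the moment of a collision then becomes a lookup in that log. The following is a minimal sketch of this idea, assuming a simplified event format (the names `HandoverEvent` and `controller_at` are illustrative, not any actual recorder standard):

```python
from bisect import bisect_right
from dataclasses import dataclass

@dataclass
class HandoverEvent:
    t: float          # seconds since trip start (hypothetical time base)
    controller: str   # "human" or "system"

def controller_at(log: list[HandoverEvent], t: float) -> str:
    """Return who was in control at time t, given a time-sorted handover log."""
    times = [e.t for e in log]
    i = bisect_right(times, t) - 1  # last event at or before t
    if i < 0:
        raise ValueError("time precedes first recorded event")
    return log[i].controller

# Example trip: the system drives from the start; the vehicle requests a
# takeover, and the human assumes control at t = 125.0 seconds.
log = [
    HandoverEvent(0.0, "system"),
    HandoverEvent(125.0, "human"),
]
```

With this log, a collision at t = 60.0 would be attributed to the system, one at t = 130.0 to the human driver, which mirrors the legal point above: liability hinges on precisely recorded handover times.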