The development work draws on artificial intelligence, specifically machine learning with deep neural networks (deep learning). Developers feed the neural networks a huge number of different traffic situations. “In order to correctly assess traffic situations, the computer needs to have already seen many different situations and be able to correctly identify individual aspects of a given situation,” says Uwe Franke, Head of Image Understanding at Daimler. “Our engineers stipulate the curriculum here, so to speak, since the system doesn’t decide for itself that it should take a look at what’s beyond the next hill, for example.” With this approach, the systems learn which conclusions need to be drawn in each situation – much as people do.
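To make the idea concrete, the sketch below shows supervised learning in miniature: a small neural network is repeatedly shown labelled examples and adjusts its weights to reduce its classification error. This is a minimal illustration only, not Daimler's pipeline; the class names, feature sizes and random placeholder data are assumptions standing in for the large sets of labelled traffic situations described above.

```python
# Minimal sketch of supervised training on labelled examples (illustrative only).
import torch
import torch.nn as nn

CLASSES = ["pedestrian", "cyclist", "car", "commercial_vehicle", "street_light"]

# A small classifier network; real systems are far larger and work on sensor data.
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, len(CLASSES)),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder data standing in for many human-labelled traffic situations.
features = torch.randn(512, 64)                  # one feature vector per detected object
labels = torch.randint(0, len(CLASSES), (512,))  # the "curriculum": labels set by engineers

for epoch in range(5):
    optimiser.zero_grad()
    logits = model(features)          # network's current guesses
    loss = loss_fn(logits, labels)    # how far the guesses are from the labels
    loss.backward()                   # compute corrections
    optimiser.step()                  # adjust the network
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```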
In one test, for example, a developer’s computer displays a traffic situation that has been detected and recognised by a lidar system while driving. Two pedestrians on the pavement are displayed in red. A cyclist is also shown in red, while his bicycle is dark red. Cars are blue, trucks and other commercial vehicles are dark blue, and street lights are depicted in grey. All the other sensors monitor the scene simultaneously and contribute their own data.
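The colour coding in that display amounts to a simple lookup from recognised object class to display colour. The sketch below mirrors the scheme described in the article; the exact RGB values and the helper function are illustrative assumptions, not Daimler's code.

```python
# Illustrative mapping from recognised object class to display colour (RGB).
DISPLAY_COLOURS = {
    "pedestrian": (255, 0, 0),          # red
    "cyclist": (255, 0, 0),             # red
    "bicycle": (139, 0, 0),             # dark red
    "car": (0, 0, 255),                 # blue
    "commercial_vehicle": (0, 0, 139),  # dark blue
    "street_light": (128, 128, 128),    # grey
}

def colour_for(detected_class: str) -> tuple[int, int, int]:
    """Return the display colour for a recognised class, defaulting to grey."""
    return DISPLAY_COLOURS.get(detected_class, (128, 128, 128))

# Example: colour the objects from the scene described above.
for obj in ["pedestrian", "cyclist", "bicycle", "car", "street_light"]:
    print(obj, colour_for(obj))
```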