
Night-blindness

ADAS fail to detect pedestrians at night

Published 2016-08-23

Emergency brake assistance systems claim to detect pedestrians and bicycles. The German automobile club ADAC has tested several cars equipped with such systems and found them lacking.

(Photo: ADAC)

The results of this ADAC test are sobering: especially at night, most of the tested cars failed to detect pedestrians. In daylight, even slowly moving cyclists (less than 10 km/h) were not reliably detected. Of course, the OEMs do not claim that the tested vehicles drive autonomously. The driver retains full responsibility for the car at all times; the advanced driver assistance systems (ADAS) only support the driver. Subaru and Audi performed considerably better in the tests than the others (for detailed results see the table). At night, most of the systems are “blind” – a few of them even switch off without any warning.

Subaru’s ADAS, based on the company’s Eyesight technology, is the only system that is not night-blind. It detected pedestrians even in dark clothes in total darkness – more than the human eye can do, said the ADAC testers. Nevertheless, Eyesight failed to detect moving bicycles. Cyclists are therefore still in danger, even if they ride slowly.

Table 1: ADAC test results; 100 % means that the dummy was not touched, 0 % means that the car did not reduce its speed

| Car | Sensors | Crossing adult (up to 60 km/h) | Adult along (up to 60 km/h) | Child behind car (up to 50 km/h) | Slow cyclist (up to 40 km/h) | Night with reflective vest (up to 45 km/h) | Night with dark clothes (up to 45 km/h) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Audi A4 | Mono camera | 72 % | 88 % | 93 % | 50 % | 71 % | 17 % |
| Subaru Outback | Stereo camera | 89 % | 100 % | 46 % | 0 % | 100 % | 100 % |
| KIA Optima | Radar and camera | 72 % | 75 % | 54 % | 0 % | 50 % | 0 % |
| Daimler C-Class | Stereo camera and radar | 67 % | 75 % | 43 % | 25 % | 0 % | 0 % |
| Volvo V60 | Radar and camera | 39 % | 50 % | 21 % | 0 % | 0 % | 0 % |
| BMW 3 Series | Mono camera | 28 % | 38 % | 7 % | 13 % | 0 % | 0 % |

In Germany, 30 % of fatal traffic accidents involve crossing pedestrians and cyclists. In other countries, the share is even higher. Half of these accidents happen during twilight or at night. If autonomous vehicles were to use the currently available pedestrian detection systems, pedestrians and cyclists would have to stay at home after sunset and not leave before sunrise. Children would have to come home when the street lights are turned on.

Emergency braking assistance systems, or pre-collision braking systems, are normally combined with other ADAS sub-systems such as lane keeping, lane departure assistance, and adaptive cruise control. The pedestrian detection functions within these ADAS use different sensor technologies (radar, Lidar, or cameras). The simplest are based on a single mono camera; others combine radar and camera(s); Subaru has implemented a stereo camera. Sensor fusion combining several technologies is the future. Most of these ADAS connect to the CAN-based in-vehicle networks (IVN). In the end, it is the brake request that goes to the ECU controlling the brakes. Daimler (E-Class), KIA, and Volvo are already using sensor fusion including radar sensors in their new platforms, which improves such systems.
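
To make the last step concrete: the ADAS controller's decision ultimately becomes a brake request frame on the CAN bus. The following is a minimal sketch using the python-can library; the CAN identifier 0x120 and the payload layout (requested deceleration plus an "active" flag) are hypothetical illustrations, not any OEM's real message definition.

```python
# Minimal sketch: an ADAS ECU transmitting a (hypothetical) brake request frame
# on the in-vehicle CAN network using python-can.
import struct
import can

def send_brake_request(bus: can.BusABC, decel_mps2: float, active: bool = True) -> None:
    """Encode and transmit a hypothetical emergency-brake request frame."""
    raw = int(round(decel_mps2 / 0.01))            # assumed scaling: 0.01 m/s^2 per bit
    payload = struct.pack(">HB", raw, 1 if active else 0)
    msg = can.Message(arbitration_id=0x120, data=payload, is_extended_id=False)
    bus.send(msg)

if __name__ == "__main__":
    # A SocketCAN interface "can0" is assumed (a virtual vcan0 works for testing).
    with can.interface.Bus(channel="can0", bustype="socketcan") as bus:
        send_brake_request(bus, decel_mps2=6.0)    # request 6 m/s^2 deceleration
```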

Eyesight and Mobileye: Not perfect yet

There is not much information available on Subaru’s Eyesight: the two cameras can detect objects at a distance of up to 110 m. The superimposed images of the two cameras allow the system to “see” in three dimensions, which speeds up the determination of distances. Depending on vehicle movement, steering angle, and yaw rate, the system gives optical and acoustic warnings in case of an expected collision. Additionally, the vehicle changes the characteristic of the stability program, called VDC (Vehicle Dynamics Control), and the system initiates three-level automatic braking. According to Subaru, the emergency braking system works at speeds of up to 200 km/h and at speed differences between car and object of up to 50 km/h. The system was installed for the first time in the Outback Lineartronic model introduced in 2015. Of course, this system also needs to be improved in order to detect cyclists.
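
The distance estimate behind such a stereo setup comes from triangulation: the same object appears at slightly different horizontal positions in the left and right image, and this disparity translates directly into depth. A minimal sketch for an idealized, rectified camera pair follows; the focal length, baseline, and disparity values are illustrative assumptions, not Eyesight’s actual parameters.

```python
# Depth from stereo disparity for a rectified camera pair:
#   Z = f * B / d
# where f is the focal length in pixels, B the baseline between the two cameras
# in metres, and d the disparity in pixels. All numbers below are assumptions.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the distance (m) of a point observed with the given disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    f = 1400.0   # focal length in pixels (assumed)
    b = 0.35     # camera baseline in metres (assumed)
    for d in (4.5, 10.0, 40.0):
        print(f"disparity {d:5.1f} px -> distance {depth_from_disparity(f, b, d):6.1f} m")
```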

Mobileye, an Israeli company headquartered in the Netherlands, was a pioneer in providing vision systems for ADAS solutions. Several Tier-1 suppliers and OEMs have adopted the EyeQ system-on-chip. The second generation, EyeQ2, debuting in 2007, was combined with general-purpose CPUs and CAN interfaces. It supported vehicle detection (forward collision warning, adaptive headlight control) and lane detection (for lane departure warning or headway monitoring and warning). Mobileye’s approach to pedestrian detection is based on monocular cameras, using advanced pattern recognition and classifiers combined with image processing and optical flow analysis.

Additionally, the system can support pedestrian recognition with visible-light or IR images. The second generation was six times more powerful than the first one, and the EyeQ3 adds a further factor of six, according to the provider. It allows processing of multiple high-resolution sensors in parallel, resulting in an extended range and enhanced features. STMicroelectronics produces the chips.

Mobileye EyeQ2 architecture

Both static and moving pedestrians can be detected up to a range of around 30 m using VGA-resolution imagers, said the supplier. As higher-resolution imagers become available, the range will scale with the image resolution, making detection ranges of up to 60 m feasible. To achieve 360° all-round pedestrian detection, eight such sub-systems are needed. But in the ADAC test, this system failed not only in darkness but also during the day: slowly riding cyclists were not detected. Mobileye still claims on its website: “In the Volvo system, the Mobileye pedestrian Collision Warning system takes the lead and alerts the driver if there is a potential collision between the host vehicle and the pedestrian. If the driver does not react and a collision is unavoidable, then the mono camera in fusion to the radar system will make a braking decision once the radar has confirmed the target.”

Mobileye states on its website that in its autonomous emergency braking application, detected pedestrians are ‘held’ until the point of unavoidable impact. The new acquisition of targets is currently limited to fully visible pedestrians, but is being extended to detect pedestrians at ultra-close range, where parts of the body are beyond the image boundaries. This is of particular importance for rear-looking camera applications and for stop-and-go forward applications. Pedestrian detection faces four major challenges that require special technical developments:

  • Figure size: Distant pedestrians appear very small in the image. For example, with VGA resolution and a 36° vertical FOV, the figure of a 1-m child at a distance of 30 m is only about 25 pixels high (the arithmetic is sketched after this list). The lateral figure dimension is even smaller.
  • Fast dynamics: The detection latency must be small, and decisions must be made within a few frames.
  • Heavy clutter: Pedestrian detection typically takes place in urban scenes with a lot of background texture.
  • Articulation: Pedestrians are non-rigid objects, spanning high variability in appearance, which causes tracking difficulties.
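
The figure-size numbers above can be checked with simple pinhole-camera geometry: the 1-m child at 30 m subtends a small angle that is mapped onto the imager’s 480 rows spread over a 36° vertical field of view. A quick sketch of that arithmetic (all values taken from the example):

```python
# Pinhole-camera check of the "figure size" example: a 1-m child at 30 m,
# imaged with VGA resolution (480 rows) over a 36-degree vertical FOV.
import math

def figure_height_px(object_height_m: float, distance_m: float,
                     fov_vertical_deg: float, image_rows: int) -> float:
    """Approximate vertical size of an object in pixels."""
    # Focal length in pixels for the given vertical FOV and image height.
    focal_px = (image_rows / 2) / math.tan(math.radians(fov_vertical_deg) / 2)
    return focal_px * object_height_m / distance_m

if __name__ == "__main__":
    px = figure_height_px(object_height_m=1.0, distance_m=30.0,
                          fov_vertical_deg=36.0, image_rows=480)
    print(f"child figure height: {px:.1f} px")   # roughly 25 px, as stated
```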

Early detection of people who run into the drive path (“crossing pedestrians”) is associated with the fast-dynamics challenge. Here Mobileye uses optical flow analysis in order to distinguish laterally moving objects from their background. Background optical flow, as seen by a forward-moving camera, is always expanding and directed outward from the focus of expansion toward the image boundaries. Hence, detecting inward optical flow is strong evidence for the existence of a moving object, which might be a crossing pedestrian.
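
To illustrate the inward-flow cue (this is a generic sketch with OpenCV, not Mobileye’s implementation), one can compute dense optical flow between two consecutive frames and flag pixels whose motion points toward the focus of expansion instead of away from it. The frame file names and the focus-of-expansion position at the image centre are placeholder assumptions.

```python
# Illustration of the inward-flow cue: dense optical flow between two frames,
# compared against the outward pattern expected from ego-motion around the
# focus of expansion (FOE). Requires OpenCV (pip install opencv-python).
import cv2
import numpy as np

def inward_flow_mask(prev_gray: np.ndarray, curr_gray: np.ndarray,
                     foe_xy: tuple, min_mag: float = 1.0) -> np.ndarray:
    """Return a boolean mask of pixels whose flow points toward the FOE."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Radial direction from the FOE to each pixel (outward for background flow).
    radial = np.dstack((xs - foe_xy[0], ys - foe_xy[1]))
    dot = (flow * radial).sum(axis=2)          # negative => flow points inward
    mag = np.linalg.norm(flow, axis=2)
    return (dot < 0) & (mag > min_mag)

if __name__ == "__main__":
    prev_img = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
    curr_img = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
    mask = inward_flow_mask(prev_img, curr_img,
                            foe_xy=(prev_img.shape[1] / 2, prev_img.shape[0] / 2))
    print(f"pixels with inward flow: {int(mask.sum())}")
```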

Optical flow is used as a secondary detection cue for close stationary objects, where it is possible to distinguish the motion pattern of a solid object from that of the road plane. In this case, the motion cue is not as strong as for crossing pedestrians and hence for stationary object detection it is associated with a delay and acts as a secondary mechanism.

Today’s system operates in daytime only (based on Mobileye’s core day-night decision mechanism), but development is underway to extend this to dusk environments. Moving forward and based on NIR filtering on the imager, Mobileye is developing nighttime pedestrian detection.

The current Mobileye EyeQ2 architecture consists of two floating-point, hyper-threaded 32-bit micro-controllers, five Vision Computing Engines (VCE), three Vector Microcode Processors, a 64-bit DDR controller, a 128-bit internal Sonics interconnect, dual 16-bit video input and 18-bit video output controllers, 16 DMA channels, and several peripherals including two CAN controllers. In late 2009, the first 360° multi-camera pedestrian detection system used the EyeQ2. In mid-2010, the Volvo V60 was equipped with this hardware for different ADAS functions, including automatic emergency braking when pedestrians cross the driving path.

In the OEM market, Mobileye operates as a Tier-2 supplier and cooperates with Tier-1 companies. These system integrators incorporate the EyeQ processors and the provided algorithms into their cameras. Customers and partners include Autoliv, Calsonic, Delphi, Gentex, Kostal, Magna, Mando, and ZF (formerly TRW). Each Tier-1 brings a unique set of know-how, integration strategies, and complementary technologies to the relationship. Partners have subsequently won series development programs within six months of signing a development agreement, claims Mobileye. Mobileye owns all of its Intellectual Property (IP) and is therefore free to cooperate with any Tier-1 partner it chooses, with no restrictions or limitations.

No doubt, sensor fusion needs high-bandwidth networks to interconnect sensors such as cameras, radars, etc. with the ADAS controllers. Nevertheless, the ECUs providing autonomous emergency braking functions in order to avoid collisions with pedestrians and cyclists require CAN, and in the near future CAN FD, connectivity.

Lidar sensors may be another option to improve data fusion systems for pedestrian detection. Quanergy Systems has developed a solid-state Lidar sensor. Small enough to fit into a hand, the compact sensor can be mounted behind a grill, inside a bumper, inside a side-view mirror, or behind a rear-view mirror. The S3 creates an accurate, real-time, long-range 3D view of the environment and provides the ability to recognize objects. Quanergy and Daimler formed a strategic partnership in 2014 to develop, test, and deploy advanced Lidar-based systems specifically designed to enable enhanced automotive safety and autonomous driving features.

The ADAS system with Quanergy’s Lidar sensor provides an interface to the CAN-based IVNs on the data fusion and analysis layer. Continental’s Lidar sensor, the SRL 1C, has also been developed for emergency brake assistance systems. It features CAN connectivity to connect several Lidar sensors to the control and evaluation units. The sensor uses a laser source to send a signal in the frontal direction; an obstacle close in front of the vehicle reflects the laser signal, and the reflected signal is measured by a photodiode. The time difference between the sent and received signal is used to calculate the distance and closing velocity to obstacles. The distance and velocity of each channel are transmitted on CAN, together with status information. The maximum measured distance per channel is 14 m, the minimum is 1 m. The sensor can be configured with 16 CAN-IDs (from 0 to 15), three thresholds per channel, and one hysteresis per channel. The sensor issues a warning message on CAN if at least one distance of the three channels is below its configured threshold.
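
The time-of-flight principle and the threshold logic described above can be sketched in a few lines. The threshold, hysteresis, and warning-handling details below are illustrative assumptions, not the SRL 1C’s real configuration or CAN signal definitions.

```python
# Sketch of the time-of-flight distance calculation and a per-channel
# threshold-with-hysteresis warning, as described for a Lidar emergency-brake sensor.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_tof(delta_t_s: float) -> float:
    """Distance in metres from the round-trip time of the laser pulse."""
    return SPEED_OF_LIGHT * delta_t_s / 2.0

class ChannelMonitor:
    """Warn when a channel's distance drops below its threshold (with hysteresis)."""
    def __init__(self, threshold_m: float, hysteresis_m: float):
        self.threshold_m = threshold_m
        self.hysteresis_m = hysteresis_m
        self.warning = False

    def update(self, distance_m: float) -> bool:
        if not self.warning and distance_m < self.threshold_m:
            self.warning = True
        elif self.warning and distance_m > self.threshold_m + self.hysteresis_m:
            self.warning = False
        return self.warning

if __name__ == "__main__":
    # Round-trip time for an obstacle at about 6 m: 2 * 6 m / c ≈ 40 ns.
    print(f"distance: {distance_from_tof(40e-9):.2f} m")
    monitor = ChannelMonitor(threshold_m=5.0, hysteresis_m=0.5)  # assumed values
    for d in (8.0, 6.0, 4.5, 4.8, 5.2, 5.8):
        print(f"{d:4.1f} m -> warning={monitor.update(d)}")
```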

But Lidar sensors can be manipulated so that self-driving cars are tricked into taking evasive action, which is another reason why sensor fusion is necessary. The currently installed collision avoidance systems are not safe enough to be used in self-driving vehicles. The fatal accident with a Tesla e-car this spring demonstrated this, too. Tesla has since removed the “self-driving” advertisement from its Chinese website; on other websites, the carmaker had not advertised it. Avoiding collisions with other cars and objects is one thing; avoiding injury to other road users, including pedestrians and cyclists, is another.

Ford: Autonomous driving car in 2021

The US carmaker Ford has announced its intention to have a high-volume, fully autonomous SAE level 4-capable vehicle in commercial operation in 2021 in a ride-hailing or ride-sharing service. To get there, the company is investing in or collaborating with four startups to enhance its autonomous vehicle development, doubling its Silicon Valley team and more than doubling its Palo Alto campus. “The next decade will be defined by automation of the automobile, and we see autonomous vehicles as having as significant an impact on society as Ford’s moving assembly line did 100 years ago,” said Mark Fields, Ford president and CEO. “We’re dedicated to putting on the road an autonomous vehicle that can improve safety and solve social and environmental challenges for millions of people – not just those who can afford luxury vehicles.”

The self-driving car by Ford (Photo: Ford)

Autonomous vehicles in 2021 are part of Ford Smart Mobility, the company’s plan to be a leader in autonomous vehicles, as well as in connectivity, mobility, customer experience, and data and analytics. The carmaker plans to design a fully autonomous driving car, operating without a steering wheel, gas or brake pedal. “Ford has been developing and testing autonomous vehicles for more than 10 years,” said Raj Nair, Ford executive vice president, Global Product Development, and chief technical officer. “We have a strategic advantage because of our ability to combine the software and sensing technology with the sophisticated engineering necessary to manufacture high-quality vehicles. That is what it takes to make autonomous vehicles a reality for millions of people around the world.”

This year, Ford will triple its autonomous vehicle test fleet to be the largest test fleet of any automaker – bringing the number to about 30 self-driving Fusion Hybrid sedans on the roads in California, Arizona and Michigan, with plans to triple it again next year.

Ford was the first automaker to begin testing its vehicles at Mcity, the University of Michigan’s simulated urban environment, and the first OEM to publicly demonstrate autonomous vehicle operation in the snow. It is also the first automaker to test its autonomous research vehicles at night, in complete darkness, as part of Lidar sensor development.

Four key investments and collaborations

To deliver an autonomous vehicle in 2021, Ford is expanding its strong research in advanced algorithms, 3D mapping, Lidar, and radar and camera sensors:

  • Velodyne: Ford has invested in Velodyne, the Silicon Valley-based leader in light detection and ranging (Lidar) sensors. The aim is to quickly mass-produce a more affordable automotive Lidar sensor. Ford has a longstanding relationship with Velodyne and was among the first to use Lidar for both high-resolution mapping and autonomous driving, beginning more than 10 years ago. Recently, Ford and Baidu, the Chinese Google, invested US-$150 million in this company, which was established in 1993.
  • SAIPS: Ford has acquired the Israel-based computer vision and machine learning company to further strengthen its expertise in artificial intelligence and enhance computer vision. The company has developed algorithmic solutions in image and video processing, deep learning, signal processing, and classification. This expertise will help Ford autonomous vehicles learn and adapt to their surroundings.
  • Nirenberg Neuroscience LLC: Ford has an exclusive licensing agreement with Nirenberg Neuroscience, a machine vision company founded by neuroscientist Dr. Sheila Nirenberg, who cracked the neural code the eye uses to transmit visual information to the brain. This has led to a powerful machine vision platform for performing navigation, object recognition, facial recognition and other functions, with many potential applications. For example, it is already being applied by Dr. Nirenberg to develop a device for restoring sight to patients with degenerative diseases of the retina. Ford’s partnership with Nirenberg Neuroscience will help bring human-like intelligence to the machine learning modules of its autonomous vehicle virtual driver system.
  • Civil Maps: Ford has invested in California-based Civil Maps to further develop high-resolution 3D mapping capabilities. Civil Maps has pioneered a 3D mapping technique that is scalable and more efficient than existing processes. This provides Ford with another way to develop high-resolution 3D maps of autonomous vehicle environments.

Adding two new buildings adjacent to Ford's current Research and Innovation Center, the expanded campus grows the company’s local footprint and supports plans to double the size of the Palo Alto team by the end of 2017. “Our presence in Silicon Valley has been integral to accelerating our learning and deliverables driving Ford Smart Mobility,” said Ken Washington, Ford vice president, Research and Advanced Engineering. “Our goal was to become a member of the community. Today, we are actively working with more than 40 startups, and have developed a strong collaboration with many incubators, allowing us to accelerate development of technologies and services.”

Since the new Ford Research and Innovation Center Palo Alto opened in January 2015, the facility has grown to be one of the largest automotive manufacturer research centers in the region. Today, it is home to more than 130 researchers, engineers, and scientists, who are increasing Ford’s collaboration with the Silicon Valley ecosystem. Research and Innovation Center Palo Alto’s multi-disciplinary research and innovation facility is the newest of nearly a dozen of Ford’s global research, innovation, IT, and engineering centers. The expanded Palo Alto campus opens in mid-2017.

hz