Years ago, crash-test dummies were modeled almost exclusively on the average adult male body.

That is, female-figure dummies or those mirroring the size and physique of young children were often excluded from critical vehicle safety testing.

“What ended up happening is that these safety settings, crash settings, airbag settings, were systematically suboptimally set for these other types of occupants because of the bias that was there in the design of the problem,” said Neda Cvijetic, senior manager of autonomous vehicles and computer vision at chipmaker Nvidia.

The industry seems to be long past the days of overlooking the needs of different body types in crash testing — but that doesn’t mean bias can’t show up in vehicles in other ways.

The amount of artificial intelligence integrated into vehicles is only increasing, as AI makes today’s advanced driver-assistance systems — and the fully autonomous vehicles of the future — possible.

But as with the crash-test dummies, bias integrated into AI, even if unintentional, could impact a safety-critical system.

Not only is it about the data itself, “it’s about who is training the data,” said Indu Vijayan, director of product management at lidar-maker AEye Inc.

“Different people have different ideas,” she said. “Bringing them together will give you a more holistic approach, thereby reducing any kind of biases in the system, which inadvertently will enable the system to be more safe and to take the right action when things happen.”

Cars are being equipped with a growing array of AI-heavy software and sensors that enable them to assess complex situations on the roads, detect and distinguish pedestrians from other moving objects and obstacles, and even monitor occupants inside a vehicle.

And the need for AI is only going to grow with industrywide ambitions for fully autonomous vehicles.

“The safety implications are huge, not only for the user, but for people who may come into contact with that vehicle,” said Cheryl Thompson, CEO of the nonprofit Center for Automotive Diversity, Inclusion & Advancement.

Using AI, an automated vehicle can recognize or predict unusual road scenarios as well as perceive changes in the driving environment and navigate around them — so long as the system is appropriately trained.

A 2019 Georgia Institute of Technology study is a case in point. It found that automated crash-avoidance systems were less likely to detect pedestrians with darker skin tones.

“It’s very critical for these autonomous vehicles to be able to recognize humans both inside of the car and outside the car,” Nvidia’s Cvijetic said. “There needs to be data for the car to be able to recognize pedestrians, all kinds of people across different genders, racial backgrounds, age groups. … We cannot bias a system towards detecting one type of pedestrian but not another type of pedestrian.”

Not only that, but “that handoff between the human and the machine is very important, which means that we also have to be able to recognize different types of drivers,” Cvijetic said.

Bias in AI can surface in a few different ways. It’s often born from a lack of understanding about the type of data needed to solve the problem at hand — or not supplying enough diversity of data or scenarios to the system.

“If you do not have data that accurately represents the real world, let’s say, in terms of weather conditions, in terms of different types of highway structures, in terms of different types of urban intersections, then that means that the vehicle will not be properly prepared to react in those situations,” Cvijetic said. “If your system has not been trained on this type of data, then you’re introducing ambiguity into the situation that the vehicle is not trained for, and that needs to be addressed.”
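One way teams can catch the underrepresentation Cvijetic describes is to audit the training data before a model is ever trained. The sketch below is a hypothetical illustration, not any company's actual tooling: it counts how often each group tag appears in a toy annotation log and flags groups whose share falls below a chosen threshold, since heavily underrepresented groups are a warning sign that the model may perform worse on them.

```python
from collections import Counter

def audit_coverage(labels, min_share=0.10):
    """Return the groups whose share of the labeled data falls below min_share.

    A simple pre-training audit: classes that barely appear in the training
    set are candidates for targeted data collection before training begins.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()
            if n / total < min_share}

# Toy annotation log: each entry is the group tag of one labeled pedestrian.
annotations = ["group_a"] * 900 + ["group_b"] * 80 + ["group_c"] * 20
print(audit_coverage(annotations))  # → {'group_b': 0.08, 'group_c': 0.02}
```

In practice the same idea extends beyond demographic tags to the conditions Cvijetic lists, such as weather, road type and intersection geometry, with a separate coverage check per dimension.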

Bias is also born from a lack of diversity in the development team.

It could be as simple as a younger engineering team that might not consider the needs of a 100-year-old end user, or a team in San Francisco not considering that the technology also needs to be applicable in China — or as safety-critical as a room full of male engineers not considering the need for differently shaped dummies.

“It’s about who’s looking at this data, who’s annotating the data,” AEye’s Vijayan said. “It’s so important that the design happens in a way that it is adapted for different kinds of people.”

Reducing bias requires diverse engineering teams, frequent training about the possibilities of bias in AI and, in some cases, regulatory measures.

“The more diversified your team is, the better,” Vijayan said. “As people, we need to be aware: Every person is biased in his or her own way. Knowing that, acknowledging that and being conscious about it also enables these [biases] to be removed.”

German megasupplier Bosch, for instance, conducts frequent “lunch and learns” with key stakeholders across the company to educate its associates. Recently, the supplier addressed artificial intelligence and inclusion.

“Once we understand our own selves and our own self-perspectives, we can truly try to be conscious enough to mitigate that,” said Carmalita Yeizman, chief diversity, equity and inclusion officer for Bosch in North America.

The Center for Automotive Diversity, Inclusion & Advancement encourages “trying to build diversity into the team so that you don’t have that groupthink,” Thompson said, “but also building diversity in that design team so that you’re getting as much representation as possible to avoid blind spots.”

It’s a combination of “if you don’t have diversity on that team, you’re not even going to be aware of what those blind spots are,” she said, and “being aware of all of the different conditions [or use cases] that can come up.”

Ongoing efforts in the European Union would create regulatory frameworks to assess the risk of bias in artificial intelligence and propose best practices to ensure that the AI implemented in systems, including those in vehicles, is comprehensive.

“This is so important to the core business that we do, and to doing it the right way, and to the success of the product, to aligning with regulation, to making our end customers comfortable and empowered to use these products,” Cvijetic said. “I think it underpins a lot of the reasons why we do this in the first place.”
