The National Institute of Standards and Technology (NIST) has a long list of companies participating in a special artificial intelligence advisory group called the AI Safety Institute Consortium. Members advise NIST on a variety of issues. One of the newest members is the Human Factors and Ergonomics Society. For more, the Federal Drive with Tom Temin spoke with Dr. Mica Endsley, president of SA Technologies and the society's leader in outreach and government relations.
Tom Temin I should also point out that you are a former Air Force chief scientist. So you've been involved in automation and human augmentation and human-machine relationships for quite some time.
Mica Endsley Exactly.
Tom Temin Let's go to the Human Factors and Ergonomics Society. What exactly does it do, and what is your role there?
Mica Endsley The Human Factors and Ergonomics Society is a professional organization. It is the leading professional society for human factors and ergonomics professionals in the United States and around the world. We focus on developing and designing systems that support human performance. This is extremely important in many areas, including driving, flight, military systems, and medicine, areas where we don't want people to make mistakes. We design systems around how people process information and make decisions, as opposed to systems that actually exploit our weaknesses.
Tom Temin In other words, it sounds like artificial intelligence, which is widely touted as a way to enhance human thinking and decision-making, often in situations involving verdicts and judgments, or dynamic situations like flying a plane or driving a bus. It can bring about a lot of good, but you can also mess things up if you're not careful. There's a flip side to it.
Mica Endsley Yes. I've been thinking about the role of artificial intelligence and how it impacts human performance for about 35 years, ever since I started researching how to bring AI into the cockpit in a program called Pilot's Associate in the '80s. And what we found is that while AI can perform certain tasks and do certain things, it, like traditional automation, leaves people even more disconnected from how the system is working and what's going on. It has the same problem of pushing humans out of the loop. And we found that that lack of engagement reduces people's situational awareness. It reduces their ability to understand what's going on, effectively monitor the automation, and jump in when needed. And that's the real challenge we see with AI.
Tom Temin Right. So it can act as an advisor. But again, thinking of it in a motor or control context, it doesn't necessarily have to perform the function itself. Let me make up an example: suppose you have a sensor system that knows when ice is about to form. Well, it could turn on the de-icing automatically, but it might be better to instead warn the pilot and have the pilot flip the switch, so the pilot isn't dozing off while the wings ice up.
Mica Endsley Well, it turns out to be very interesting what happens when you automate actual control tasks, such as actually flying the airplane or driving the car. What we've found is that people are most out of the loop at those higher levels of automation. And if you keep them out of the loop, the biggest problem is that they can't figure out when something has gone wrong. So it's good to keep them in the loop, actually performing their tasks. Now, as you say, many systems are instead designed as recommendation systems. They provide guidance and recommendations, and people think, "That will be better, because the person will be more informed and have a better understanding of what's going on." And when the system is correct, that turns out to be true; you get many benefits. But when the system is wrong, it actually leads people further astray. So if it misclassifies a situation, for example, and people are depending on it, they're much more likely to make a mistake than if they had no aid at all. Therefore, even simple recommender systems can negatively impact human performance unless they are completely perfect. And in reality, they're usually not perfect.
Tom Temin We're talking with Dr. Mica Endsley. She is president of SA Technologies and leader of outreach and government relations for the Human Factors and Ergonomics Society. Let's talk about the NIST consortium. What does the Human Factors and Ergonomics Society bring to it? And what does NIST want from you as it puts this consortium together and learns what it needs to learn? This all ultimately stems from Congress, I believe; it's a mandate.
Mica Endsley Yes. So Congress and the White House are very concerned about what's happening with AI. They believe it's a highly transformative technology in terms of how it could impact our society, employment, and legal issues such as liability. And in fact, even if you're just looking at adding AI to help people at work, it can have a dramatic impact on human performance, on people's ability to do that job. So they're looking to the National Institute of Standards and Technology for guidance on this; it's a very technical issue. And NIST established this AI Safety Institute Consortium. The consortium includes industry, academia, people doing research on this, and even non-profit organizations that are deeply involved in issues like AI performance and ethics. They brought all these groups together and asked: what are the important issues we need to consider here, and how do we develop guidelines and testing standards to ensure that AI can be implemented safely in our society? We're trying to provide the best guidance possible.
Tom Temin Maybe we should talk about the connection, and what the society brings in terms of ergonomics. From an ergonomics perspective, I think people tend to think about curved keyboards to prevent things like carpal tunnel syndrome, probably the most banal example known to mankind. But how does that extend to the point where we're interacting with AI?
Mica Endsley Human factors and ergonomics actually covers a wide range of areas where people interact with systems. We can talk about physical ergonomics: people are used to thinking about ergonomic chairs and ergonomic workstations, where we strive to prevent physical injury. However, human factors has long been involved in other types of problems as well, such as perceptual and cognitive performance. So we have a long history of working on issues like automation and artificial intelligence and how people interact with these technologies as they make decisions in their work. This dates back to World War II. At that time, people were flying planes and planes were falling out of the sky. And people realized, oh, we hadn't really thought about how the cockpit instruments and dials should be designed so that people understand them and can make very quick decisions even in complex and stressful situations. That was truly the birth of the entire human factors and ergonomics movement, which has spawned decades of research.
Tom Temin Yeah. I remember years ago a truck manufacturer advertising the fact that on its semis, the many gauges all had their dials oriented so that zero, or normal, was in the same relative position. So it was much easier for the driver to see at a glance that something was wrong than if zero were placed randomly anywhere on the circumference.
Mica Endsley That was a very early human factors finding. Researchers did studies to establish that practice and showed that it actually improves human performance. So very small details of how technology is designed can have a huge impact on how people interact with it. That applies to instruments, and it also applies to things like artificial intelligence. So we're really focused on what we can do to improve the transparency and understandability of these systems, which turns out to be very important to how people interact with them.
Tom Temin And also regarding human performance: sometimes people have an expectation bias, or they think they know something and have the answer in advance. I was talking the other day with a federal practitioner whose agency is applying AI in many areas. So I asked, does it simply confirm the decision the decision-maker was going to make anyway, or does the AI sometimes throw something at you that makes you say, "Oh, I never thought of it that way, and it's right and I was wrong"? There's an element of that as well. Is that fair to say?
Mica Endsley There is. And what we've found is that traditionally, AI or automation gives its recommendation up front, before the person has made their own assessment. When that happens, it actually creates a decision-making bias. Now, if you can be 100% confident that the automation is correct and accurate, that's a huge advantage. But when it gets things wrong, it can actually lead people in the wrong direction; it makes them more dependent on the system and more prone to errors than if they had no aid at all. It can be implemented in other ways, though. As you described, at the end of the decision-making process, a system can ask, "Have you thought about this? Have you considered that?" That can help, and perhaps help people consider a broader set of possibilities than they would have on their own. So how AI is implemented makes a real difference.
Tom Temin So it seems imperative that operators of AI systems be properly trained, so they don't accept results that simply aren't applicable, and that the systems be monitored over time to ensure they stay within certain limits.
Mica Endsley That's one of the real challenges. The people actually using AI often know very little about how it works, how it was developed, in which contexts it works and in which contexts it doesn't. Even the AI developers responsible for it often don't know what's going on under the hood. The way AI works, it's basically a pattern matcher: it's trained to recognize patterns and it executes those patterns, but the mechanism is very opaque, a black box. So developers don't always know. Biases can creep in, like those found in many employment screening systems that turned out to be biased against women and minorities. The developers didn't know that, and the people using those systems didn't know it either. And that's a real challenge. We need to be more transparent about what these systems do, how they operate, and how they function. These are very important considerations.
Tom Temin And really, the short answer: how does the consortium interact with NIST? Will there be face-to-face meetings? A huge Zoom meeting? How do you actually work with NIST?
Mica Endsley Yes, we're just getting started, so we'll see how it unfolds. But I think the majority of the work will be done virtually, through a Slack channel where people exchange information and ideas. We're very excited to be part of the consortium. A large number of organizations appear to be participating, from people actually developing AI for different types of applications, to academic institutions, to safety institutes and organizations, all with different concerns about how AI can be implemented and the various issues that need to be addressed. And we're really looking forward to being a part of that.
Copyright © 2024 Federal News Network. All rights reserved.