When choosing a teammate to help accomplish a task, artificial intelligence agents may soon be viable candidates. Ongoing research by Nathan McNeese, the McQueen Quattlebaum Assistant Professor in Human Computing, suggests that such an option is fast becoming a reality.
At the Oct. 5 and 6 board of trustees quarterly meeting, McNeese presented his research on “Creating Human-Centered AI,” briefing the board on the rise of AI and how Clemson is moving quickly to work with the technology, not against it.
The overarching goal for McNeese and his team is to construct human-centered AI systems designed with their human counterparts in mind.
“(AI systems) are traditionally built with the idea of optimizing performance, not with the impact that they’re going to have on humans and society,” McNeese told The Tiger in an interview on Tuesday.
Within the framework of human-centered AI, McNeese specifically focuses on improving human-AI teaming. While such a partnership may seem far-fetched now, it will likely become increasingly normal over time and must be managed properly.
“Everyone knows about human teamwork, but very few people have ever been on human-AI teams, and within the next decade, this is going to be something that a lot of people get experience with,” McNeese said. “Hopefully, our research will make that interaction much more seamless, effective and safe. We want to make that interaction work for humans.”
Part of the difficulty in creating efficient human-AI teams, according to McNeese, is that AI systems have historically had no concept of collaboration, which hinders their usefulness to humans.
“Until recently, we would take a human and say, ‘You’re working with an AI agent. Go team together.’ That’s not teamwork because one of those entities doesn’t understand what teamwork is,” McNeese told The Tiger. “We have to design AI agents that understand characteristics of teamwork: communication, awareness, shared understanding and information knowledge, to make sure AI agents are working effectively for the human.”
From the human standpoint, McNeese’s team is studying how people perceive AI agents as colleagues and the degree of trust between the two.
“We’re really interested in studying people’s perceptions of AI teammates: what they think an AI teammate should look like, how it should be designed, what capacities and abilities it should have,” McNeese said. “The other area we’re really focused on is trust because we know that’s going to be paramount to the success of human-AI interactions moving forward; if you don’t have trust, humans are not going to accept AI systems, and if they don’t accept that system, they won’t interact with it.”
McNeese said his most challenging yet most meaningful work in human-AI partnership has centered on ethics. Ensuring that both sides conform to ethical behavior standards, and are in situations where they can easily do so, ultimately improves the productivity of the relationship.
“We look at the impacts of unethical behavior from an AI agent on humans and how to avoid that unethical behavior, take the lessons from it, and study them for designing ethical behavior,” McNeese said. “Our goal is making sure AI agents acting as teammates are working and interacting in an ethical manner and that the whole environment of human-AI teaming is an ethical environment.”
The next generation of college graduates will be entering a world permeated with artificial intelligence unlike anything ever seen before. McNeese aims to improve that world and the human-AI interconnectivity coming with it.
“AI is going to be ingrained in many different facets of society,” McNeese told The Tiger. “You’re going to interact with AI systems moving forward, so hopefully, through my team’s research, those interactions are safe for humans, they’re good interactions, and they provide the proper benefits for humans.”
Ultimately, however, through all of their investigation, McNeese and his team strive to keep what truly matters at the forefront of their minds: humans.
“Artificial intelligence can do things that we as humans simply cannot do, but we don’t do (our research) because of AI,” McNeese said. “We do it because we want humans to hopefully experience a better life because of the benefits that are coming from artificial intelligence. It all comes back to humans and caring about humans.”