Community Conversation: “AI is as important to the future of national security as the invention of the airplane.”

On Wednesday, June 28, the Dayton Daily News held a Community Conversation on the Dayton Daily News Facebook Page to discuss how artificial intelligence might affect the future of the Miami Valley. The event was co-hosted by Community Impact Editor Nick Hrkman and reporter London Bishop. Panelists included:

  • Layla Akilan, cognitive systems engineer with Mile 2
  • Dr. Tanvi Banerjee, associate professor in the Department of Computer Science and Engineering at Wright State University
  • James Pate, artist and Black Palette Gallery owner
  • Steve Rogers, Department of the Air Force Senior Scientist for Artificial Intelligence
  • Dr. Hui Wang, assistant professor at the University of Dayton and Director of UD’s Applied Artificial Intelligence Lab Team

Editor’s Note: The transcript below has been edited for brevity and clarity. You can watch the full recording of the Community Conversation on our website or the Dayton Daily News Facebook Page.

First, what are we talking about when we talk about artificial intelligence?

Hui Wang: AI is a big concept that refers to any computational technique that can make progress on human tasks based on rules and data. Machine learning is a subset of AI; machine learning techniques allow a system to optimize itself, extract meaningful patterns, and interpret data. Deep learning is a small portion of machine learning that extends certain machine learning capabilities by using “deep neural networks,” where a very complicated, nonlinear surrogate function approximates the functional relationship in a high-dimensional space. A third type of machine learning is reinforcement learning, where we define a problem in terms of a game and define the rewards, so the agent can play the game to maximize the reward.
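To make Wang’s taxonomy concrete, here is a minimal sketch, not something shown on the panel, of the “nonlinear surrogate function” idea: a tiny neural network that optimizes itself to approximate an unknown relationship purely from example data. The target function (np.sin), the network size, and the training settings are all arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: noisy samples of an unknown nonlinear relationship.
# (np.sin stands in for whatever function the data actually follows.)
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

# One hidden layer of 16 tanh units: the "nonlinear surrogate function."
W1 = 0.5 * rng.standard_normal((1, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass: nonlinear transform, then linear readout.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y

    # Backpropagation: gradients of the mean squared error.
    n = len(x)
    dW2 = h.T @ err / n
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)
    dW1 = x.T @ dh / n
    db1 = dh.mean(axis=0)

    # Gradient descent: the "optimize itself" step Wang mentions.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("mean squared error after training:", float((err**2).mean()))
```

Scaled up from 16 hidden units to millions of parameters, this same fit-from-data loop is what the “deep” in deep learning refers to.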

Tanvi Banerjee: I see it as a tool. When we talk about AI in healthcare, we always say it’s a “clinical decision support system.” So a part of the process of decision-making can be automated by looking at the patterns in previous data, or, even if we don’t have preexisting data, by looking at some of the similarities in patient behavior, which can help inform the clinician. With ChatGPT, we are able to query it just as easily as we would do a Google search, and have it be a little more informed, have it have a much richer knowledge base to explore and give a response that might be more applicable to the particular question.

Steve Rogers: The big thing is don’t think of AI as a thing. It’s a set of academic disciplines, which you can combine to create things. And most often those things are delegated bounded authority to do something on behalf of a human. And it’s all in how we use those AI-created aids, if you will, to do useful things for humanity. There’s a whole range of courses that professors teach, all of which could be considered AI courses, but they’re all wrapped around this idea of capturing and creating knowledge to do things on behalf of humans.

Why is it now that we’re suddenly hearing so much about AI?

Tanvi Banerjee: Among the various factors that may have led to its growing influence, I think the biggest one is big data technologies: the ability for systems to ingest humongous amounts of data to gather meaning. Anything that was written or put on the internet is now the knowledge base it bases its responses on. And that’s why we also have to be cautious about how the data is validated. Anything that’s been posted on the internet is now fair game for a knowledge system, which can be used for good and for evil.

Layla Akilan: When IBM Watson and Deep Blue were introduced, they were sensationalized in the media, and it caught everyone’s attention. We’ve had this fascination with robots and AI ever since. And I think the important thing about the framing is that people want to know how it will be used and whether it will be used for good or evil. But I’d like to reshape that framing. With IBM Watson and Deep Blue, we showed everybody the power and the capability behind AI, but we framed the AI itself as our adversary, as the agent on the other side. I think it now makes a lot more sense to think about agents in terms of teaming. They are teammates, they’re not our opponent. And they’ll do exactly what we designed them to do.

As an artist, what are you excited for and what concerns you about AI?

James Pate: Someone who’s a non-artist can now generate artwork, submit it in a juried competition, and win Best in Show. You get people who are pretty upset about that, because they’ve spent money on their education to be an artist, a painter, a designer and so on. And now you have this tool that could circumvent them and their profession, and that allows people to enter the art world who haven’t done all the studying and classes, or shed tears from being critiqued by their instructors. But personally, I see it as a tool. I know of artists that utilize it as a tool, and they don’t fear it; they’re not panicking about how it’s going to affect their bottom line. I see it as a way to generate ideas. I don’t use it to make my art, but I do use it as part of the process. Traditionally, as an illustrator or designer, you would do the same thing that AI does; it’s just that AI does it a little quicker. I’m just looking forward to what unfolds, to what this tool is and what it’s capable of.

Tanvi Banerjee: An AI system can create something that’s a combination of Van Gogh and Monet, but it cannot create art that is “James Pate.” That creativity comes from us, and it cannot be replaced. There are a lot of limitations to what AI can do. It is always learning from human-generated data, and it cannot circumvent that particular path.

James Pate: We’re supposed to embrace this, start the process of application, and be creative about how it can be utilized and how people flesh out ideas. That kind of puts artists at ease, and reminds us that we’re in a creative class: we have ideas, and we should embrace technology. I know an artist in Indianapolis who’s creating works of art that are very interactive, where you put a cell phone up to it and it changes the art. Music starts, the colors change and do different things. It’s that interactive component; you get participants to not just glance at your art anymore, not just walk by it, but to use it, too. One tool begets another, so you pull out your cell phone and you interact with it. And that’s just the beginning. That’s very exciting, that exploration of the human capacity to create and entertain and push our imaginations forward. But I’m always going to be pro-artist when it comes to generating an income. Your creativity has a value, and I’m all for protecting the value of one’s creativity. It can affect your livelihood; we have families to feed. And I think the people who are actually performing these kinds of rip-offs, so to speak, should be more honorable about wanting to support the artist and wanting to include the artist in the money that’s being generated. I’m all for trying to protect that and to make sure we get our fair share.

Layla Akilan: There’s a certain responsibility on the people who design these AI systems to design them in a way that creates trust with users. If a system is generating certain artwork, and you can’t trace where that artwork was generated from, you can’t trace the copyright. That’s something we need to get better at. A lot of focus is being put on trust and explainability in these systems. If you’ve ever worked with an AI system, you might have thought: how did it do that? How did it make that recommendation? What was that recommendation based on? In some situations, the consequences are harmless when users don’t understand what these AI agents are doing. In other scenarios, we’ve seen accidents happen that cost people their lives when operators were not aware of what the AI systems were doing. I work a lot for the DOD and for local companies designing different types of decision support tools. Going forward, we care a lot about making sure that we are developing techniques and best practices for designing these systems so that people trust them. If they don’t trust them, we could have the best technology and nobody will use it.

How is the Air Force looking at AI? Are there any projects underway that you’d be able to share with us?

Steve Rogers: This is as important to the future of national security as the invention of the airplane. It’s as important as aerospace, space flight, nukes, cyber or biotech. It’s that important. And just like we don’t envision going against an adversary who has some advantage over us, we do not accept the premise that we might have to fight an adversary who’s better with AI than us. So our vision is an enduring asymmetric AI advantage, end of story. We can’t accept anything less. We’ve laid out a set of strategic goals around mission areas. We have a project called “autonomous air combat operations.” And in that we’ve publicly said that we’ve flown Air Force jets out at Edwards Air Force Base with AI that was created right here in Dayton, Ohio. AI is doing combat-related activities, controlling modern Air Force jets. The point is the future of national security is all wrapped around this.

We received a question from a reader: “We have an Amazon Alexa system that easily understood our then-five-year-old but was unable to understand our younger daughter, who exhibited a minor speech impediment. The limitations and bias of AI can also affect kids in households where English may not be the primary language. And this could contribute to children feeling somehow broken or wrong. These are risks all parents should be aware of. How do you recommend talking to young kids about the bias they may experience as AI becomes commonplace in the home?”

Tanvi Banerjee: I think this is such a great question, because especially in the healthcare world, we deal with this on a daily basis. The crux of any AI system is the data it learns from. We can blame Alexa here, but the truth is that it has been exposed to only so much data. It has not had much experience working with a speech impediment, in this case. As a parent myself, I would caution that this has to be a learning experience. And that’s not on you; it’s on the developers. The thing with AI is that it’s an iterative process; it is never going to be a completely done system. As AI educators, we are constantly learning. It’s our job, on the device side, to question it and report it to the developers, and then, on the parent side, to say, “Hey, we now know this is something the system clearly doesn’t work on, and this is not the user’s fault.”

Steve Rogers: This is a conversation I’ve actually had with the Amazon chief scientists about the issues they have with people who have accents or speech impediments, and it’s something they’re working on. So I love the advice of reporting it; they love that data. But it’s part of a bigger problem. The bigger problem is that the way we currently develop and field AI is based on aggregate measures. When we create the AI, we try to have an objective function that reduces overall error. And the result of that is we don’t optimize for personal use, whether that’s Alexa or a medical application where I need to know the best advice for this particular patient with these symptoms. Now, none of what I just said tells you what to say to that child.
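To make the aggregate-measures point concrete, here is a minimal sketch with invented numbers, not anything presented on the panel: when a model is tuned to minimize overall error, it can look accurate on average while serving a small subgroup, such as speakers with an accent or impediment, far worse.

```python
import numpy as np

rng = np.random.default_rng(1)

# 95% of users produce one speech pattern, 5% produce another.
# (The numbers are invented purely for illustration.)
majority = rng.normal(loc=0.0, scale=1.0, size=950)
minority = rng.normal(loc=4.0, scale=1.0, size=50)
everyone = np.concatenate([majority, minority])

# Among constant predictions, the global mean minimizes overall
# squared error: exactly the kind of "aggregate measure" at issue.
model = everyone.mean()

def mse(group: np.ndarray) -> float:
    """Mean squared error of the aggregate model on one group."""
    return float(((group - model) ** 2).mean())

print(f"overall error:  {mse(everyone):.2f}")  # looks acceptable on average
print(f"majority error: {mse(majority):.2f}")  # well served
print(f"minority error: {mse(minority):.2f}")  # far worse, hidden by the average
```

The overall number looks fine precisely because the minority group is too small to move the average, which is why per-group, and ultimately per-person, evaluation matters.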

Layla Akilan: It’s important to realize that there are limitations. Just like Tanvi mentioned, it’s really about the data that autonomous agent was trained on. So, as a mother, if I were going to talk to a child about what’s going on with Alexa, I would educate them about the technology, and hopefully inspire that kid to one day grow up, become a machine learning engineer, and improve the technology so it serves us better.

How do you see the job market changing in the Miami Valley as AI becomes part of our daily lives?

Layla Akilan: A lot of people are worried about jobs. Do I think that AI is going to change the way people think, the way they live, the way they work? Absolutely. Every time you introduce new technology of any kind, it is going to change the very nature of your work, the way you think about your work, your life, and so on. But I’m optimistic about the future, because yes, we might eliminate some jobs, but let me give you an example. In the government there are a lot of people we call “data wranglers” who work for the Air Force. I work with these people all the time. It’s really hard for them to get access to large amounts of data and then put that data together into a picture that makes sense. Instead of that person spending hours and hours going into different databases and exporting and importing and concatenating, we can build tools that make that easy. And instead of making them a data wrangler, we can make them a data analyst. So yes, we might eliminate one position, but we’re going to create so many new jobs.

Hui Wang: Using UD as an example, we have several research projects, and some ongoing projects with local companies, sponsored by the Office of Naval Research, the Air Force base, and the Air Force Research Lab. We are actively developing new technologies that can be transferred to local companies. Just last week, a construction company came to my lab and asked me to help them develop a technique called “virtual construction.” Basically, we virtually create a construction site. We can use virtual reality to show the construction plan to clients. Before they put everything into action, we can identify potential risks and develop a mitigation plan. This is a new opportunity that will create a new kind of job: we call them “virtual construction engineers.”

What advice would you give someone who might be interested in getting into AI?

Steve Rogers: Having been in this area for over 40 years, I think the world has fundamentally changed. I think we have a democratizing opportunity. As Wayne Gretzky used to say, “Skate to where the puck is going to be, not where it is.” So you could dive in right now and try to study what AI is and how to program, but that’s a mistake. What you need to do is learn how to live in a world with AI. If I were an artist and had not gone through all the education that James has, I could, for the first time, create art that I like without that educational background. I can create apps on my cell phone without learning how to program. Those tools are now available to the inner-city kids of Dayton, Ohio. And if we’re not using them to address our social and economic and health care disparities, we’re missing a great opportunity.

Layla Akilan: And Dayton, in particular, is ripe with opportunities to enter any one of these fields, whether you’re an artist or whether you want to develop algorithms. You don’t have to be in computer science to do this; human factors engineering is a booming industry. Here we have the Air Force base, and there are a ton of people in this community who not only want to continue this work, but want to encourage people to come work with us, come learn. And my best advice to anybody who wants to do that? Yes, there are tons of resources online that you have access to, and there are educational programs at Wright State, the University of Dayton, and Sinclair Community College where you can get your foot in the door. But the best thing you can do is reach out to somebody who’s already doing what you want to do in this field and ask for help, just like I did with Steve Rogers five years ago. I said, “Hey, I think I’m really interested in this stuff.” And it just took one or two good mentors to help me break into this field and get into this community. So feel free to reach out to me; I’m more than happy to talk to anybody who wants to enter this field.