It’s been more than five years since Mike Capps cut ties with Epic Games, a company he oversaw as president for a decade while hit titles like Fortnite were incubated and Gears of War and Infinity Blade demonstrated what Unreal Engine is capable of. Capps, understandably, wanted to focus on being a new dad and spend more time with his family. In his semi-retirement, he still served on the boards of the Academy of Interactive Arts & Sciences (AIAS) and GDC while advising companies and developers in unofficial capacities. Now, however, he’s felt the pull of something even greater than video games: moral responsibility.
Enter Diveplane, a company Capps co-founded with Dr. Chris Hazard and Mike Resnick to keep the humanity in artificial intelligence. In some ways, Capps’ mission is not all that different from that of non-profit organization OpenAI, whose team of neural nets is actively training and competing against players in Dota 2. Unlike OpenAI, however, Diveplane is focused on making AI that is understandable, easier to tweak, and therefore less dangerous.
“We built a type of machine learning that’s inherently understandable. Rather than the traditional neural net being a simulation of how we think brains might work and neurons might work in a brain and how that might make decisions and using that to make a decision process, ours is a bit more mathematically-based rather than simulation-based,” Capps explained on the phone.
If you follow pundits, you know as well as I do that it’s not very long from the time we have AGI to the time we have artificial super intelligence.
Capps said that the AI can give a full explanation of why it made a certain decision, and then it’s completely editable on the fly.
Using a driving simulation as an example, he continued: “You can ask it, ‘Why did you hit that rock in the middle of the road?’ It will say, ‘Ah, here’s three training points where we went through puddles that weren’t rocks.’ You can say, ‘Ah, here’s the problem, let me fix this by adding a training data point.’ [You can] emulate the test and then fix it right on the fly.”
Diveplane is a startup with only 17 people currently, but it already has a dozen pilot projects across numerous industries, including NASCAR racing simulations, healthcare, agriculture, and drone training. Diveplane is not an altruistic endeavor. Capps wants the business to succeed, but he’s putting the moral imperative first.
“It’s probably the main reason I’m doing this,” he affirmed. “I believe that [within] the near future [we’ll have] an issue with bias in AI. I think that’s important to be able to look inside and fix it. I think autonomous vehicles based on current tech are great, but I don’t feel like we should stop with neural net based models that we don’t really know why it makes a mistake every now and then and it’s really hard to fix. The real reason I’m passionate about this is the concern that we’re charging as fast as we can towards artificial general intelligence (AGI). If you follow pundits, you know as well as I do that it’s not very long from the time we have AGI to the time we have artificial super intelligence.
“Our simple assumption is that if the people who are rushing towards AGI are writing that in a non-debuggable code, it’s going to be full of bugs. We’ve built a debuggable type of AI and if people building artificial general intelligence are at least building with a debugger, we’ll at least have some idea as to what’s going on inside, some notion of inspection. Maybe that makes for the better future. Ideally this is all just crazy science fiction and we just continue on our happy way and there’s no such thing as [AI] singularity. That’d be great.”
Many scientists in recent years, including the late Stephen Hawking, have issued stern warnings about the great threat AI poses to humanity. Capps sees mass extinction from an AI singularity as more likely than nuclear war or climate change.
“It’s something that’s worth my time, worth me being away from the kids this week for. That’s why I came out of retirement. If I have a chance to make a difference in that, how can I not?” he added.
The bias present in AI-driven systems is accidental, but that doesn’t make it any less dangerous. Capps cited parole systems that have denied inmates parole due to built-in racial bias, and he’s seen the same in lending.
“I think it’s [an] unnatural outcropping of using real training data. If you take 60 years of loan data from the great state of North Carolina for making residential loans… That’s before the Civil Rights movement. You’re going to find some racial bias in that training data. If you built an AI with all of that training data, you’re encoding bias accidentally,” he noted.
“The trick is how do you discover that other than empirically? Other than just well, try to give a lot of people loans, or not, then start noticing strange patterns. With our tech you can ask, ‘Why did you make a decision here?’ ‘Oh, based on their zip code and their street address, not their finances, not their age, not their credit rating.’ Right? Okay, well that’s wrong, let’s fix it. Dig right in, roll up your sleeves and fix it. As opposed to wondering. So, I think that’s really important, I think we can help a lot of folks. The end game is what gets me up every morning.”
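Diveplane has not published its algorithm, but the behavior Capps describes — justifying a decision by pointing to the most similar training examples, and correcting it by adding a data point rather than retraining a black box — resembles instance-based (nearest-neighbor) learning. Here is a minimal, purely illustrative sketch of that idea; the loan features, labels, and class are invented for the example and are not Diveplane's API:

```python
import math

# Toy instance-based model: predict from the k nearest training points,
# and return those points as the explanation for the decision.
class ExplainableKNN:
    def __init__(self, k=3):
        self.k = k
        self.points = []  # list of (features, label, note)

    def add(self, features, label, note=""):
        self.points.append((features, label, note))

    def predict(self, query):
        # Rank training points by Euclidean distance to the query.
        ranked = sorted(self.points, key=lambda p: math.dist(query, p[0]))
        nearest = ranked[: self.k]
        labels = [label for _, label, _ in nearest]
        decision = max(set(labels), key=labels.count)  # majority vote
        return decision, nearest  # the decision plus its "why"

# Hypothetical features: (income in $10k, credit score / 100)
model = ExplainableKNN(k=3)
model.add((5.0, 7.2), "approve", "similar income, good credit")
model.add((4.8, 6.9), "approve", "similar income, good credit")
model.add((2.0, 4.0), "deny", "low income, poor credit")

decision, because = model.predict((5.1, 7.0))
print(decision)                       # which way the model leaned
for features, label, note in because:
    print(features, label, note)      # the training points behind it

# "Fix it on the fly": a wrong decision is corrected by adding or
# editing a training point, not by retraining an opaque network.
model.add((5.1, 7.0), "approve", "manually corrected case")
```

The point of the sketch is the return value: the model hands back the specific cases that drove the decision, which is what lets a reviewer notice, say, that zip code rather than finances is doing the work.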
Ironically, keeping the humanity in AI is what can lead to some of these biases. I’m reminded of the Microsoft chatbot, Tay, from a couple years ago. Within 24 hours of being released onto Twitter and absorbing the worst of humanity, Tay became an insufferable racist. It was a terrible show for both AI and humanity.
“Humanity is the best and worst of us, right?” Capps remarked. “Perhaps maybe we need the mission shift to be ‘keeping the best of humanity in AI’ … I hope alien visitors don’t start with Reddit when they’re trying to figure out how to communicate with us.”
He continued, “We’ve been chatting with representatives from the Vatican, actually. They’re excited about the potential of AI, but nervous about its inhuman calculation. Imagine you asked an AI how to solve the problem of hunger in Africa when they have a food supply that is half the size of the population. Well, a very simple computer approach would be to eliminate half of the population. That’s obviously not what’s desired. So there’s a system where there’s no bias, at all, it’s pure math. That’s a perfectly reasonable solution except for [the fact that] it’s inhumane.”
Diveplane isn’t seeking contracts with game companies just yet, but with Capps’ background in games, it’s certainly something that could happen once the firm is more established.
“Some of my excitement from this comes out of games,” Capps said. “My expectations of how AI works is you have a big logo over the monster’s head saying what it’s thinking and what it’s doing all the time, while you’re building the game, so you have some notion of how to make a fun experience. It’s frustrating that we don’t have that same transparency and visibility into what autonomous agents are doing in much more safety critical spaces. For feeding back in, we hooked up to Unreal and Unity really early on. We’re doing work in the military space with drones and we’ve done some driving space projects and that’s all being simulated in game engines.
“So, the notion of next step of bringing some of this to the games world, it’s entirely possible. Our focus right now is more on high value projects at enterprise. I don’t think we’re a good fit for video games right now, but as we learn how to scale wider and wider, then why not? Sure.”
Capps acknowledges that his deep contacts in the games industry would make it the easiest field for Diveplane to pursue business in, but he sees especially great potential to do good work in healthcare.
“I think healthcare is perfect, because healthcare is an area where they really want to be using AI more than they are. It’s the biggest expense in our country, by far. It’s just going up, outpacing the economy. So we’ve got a real problem. They can’t use black box neural nets to make decisions; it just doesn’t fit either the Hippocratic oath or our legal system,” Capps noted.
“There’s no way to say, ‘Sorry, we denied this surgery because the black box said so.’ The liability from a mistaken decision from that would be awful. So, we’re able to come in and say, ‘No, we can provide you with clear explanation, assist the medical director making that decision, with a list of five patients who are very, very similar to this one and some counterexamples that would explain why it shouldn’t be the other answer.’ Any place we can get into the healthcare system and make it faster, more scalable, less expensive, I think just has so many great downstream effects.”
What I love about [Fortnite] is the gender balance of players. I think that’s huge. That was the intention all along. That guys who had kids who were not 18 and boys — which is the kind of games that we were mostly making — they wanted to be able to play with their 10-year-old daughter.
AI has begun, and will continue, to have a wide-ranging impact on video games too, as Spirit AI’s Mitu Khandaker discussed with us recently. While Capps is not pursuing AI in games at the moment, he remains in awe of how the field keeps progressing.
“Every time you fire up a game like an Assassin’s Creed Origins or something, you just have to stop and be flabbergasted about how the hell they did that,” he enthused. “It’s so amazing to think back to the covers for Unreal 1, where it had 250-polygon monsters with taglines saying, ‘Can this be reality?’ Just the fidelity of the experience [we see now], it’s so fun to not have to put four years of painful work in to be able to enjoy a work product like that.”
Watching Fortnite completely explode from afar has been very gratifying for Capps as well — not merely because it’s been monumentally successful, but because it’s achieved milestone after milestone while giving players proper representation. It’s proof positive that diversity is good for business.
“What I love about that game is the gender balance of players. I think that’s huge. That was the intention all along,” Capps said. “That guys who had kids who were not 18 and boys — which is the kind of games that we were mostly making — they wanted to be able to play with their 10-year-old daughter, to create an experience [they could share]. Obviously, they made many changes after I left, but it feels like that core remains, of making games look attractive to the whole population. I love seeing that.”
I think AGI is entirely weaponizable itself… It’s a lot cheaper to build than nuclear missiles, right? And easier to hide than all those other things. I think it’s a significant concern.
AI, gaming, and tech firms in general have not had a great track record when it comes to diversity, but companies like Microsoft are starting to invest more in STEM to change that. And if America wants to lead on AI, then any push for STEM is welcome, especially given the anti-science mindset sadly present within the US government.
“It’s impossible to have people at the sociopolitical top of the country decrying science as false, and then also have a STEM priority for the country, right? You can’t do both at the same time,” Capps said. “So, it’s unfortunate when both our allies and our competitors are making it such a focus. When you read China’s focus on AI documents and the money they’re putting into it, they believe it’s a national priority. I think that makes perfect sense.
“We are cutting funding in the United States, hoping that private sector will do what needs to be done. I can’t imagine that’s going to end the way that we would like, with us always being on top because we’re America, but there you go.”
STEM is necessary for great games development, but before long it’ll become a critical component of national security. America needs more AI scientists and can’t afford to fall behind countries like North Korea.
“I think AGI is entirely weaponizable itself,” Capps warned. “It’s not a glory thing. It’s from a cyber warfare perspective having asymmetric capability. That’s power in and of itself. It’s a lot cheaper to build than nuclear missiles, right? And easier to hide than all those other things. I think it’s a significant concern.”