Niantic exec on mapping the future of gaming

AR creatures come to life in Niantic's Peridot game.

Almost everyone was swept up in the Pokémon GO craze when it launched in 2016. Niantic Inc., the company behind it, continues to push the boundaries of augmented reality.

Following the acquisition of web-based AR platform 8th Wall in March 2022, Niantic launched its first mixed-reality experience, Wol, in May 2023. More recently, the company partnered with games publisher Capcom to launch the AR mobile game Monster Hunter Now. 

Brian McClendon, Niantic’s SVP of Engineering, sat down with Frontier Enterprise to talk about his storied career, Niantic’s evolution, and the future of AR and gaming.

What exactly excites you about the intersection of mapping, AI, and robotics?

I got into computers because I love computer graphics. When video games like Missile Command, Pac-Man, and Space Invaders came out, I was one of the first users in the arcade. My passion for computer graphics led me to build computers and work at Silicon Graphics. There, I was exposed to various applications in the VR space. We were probably the first company to provide the views that people see in their headsets. However, as I often tell people, our headsets cost US$100,000 per eye. As a result, experimenting with our high-end machines was costly, but large companies like Disney created some very interesting VR experiences on SGI back in the day.

Brian McClendon, Senior Vice President of Engineering, Niantic Inc. Image courtesy of Niantic Inc.

Once you have high-resolution screens and graphics, you can start to represent complex datasets. Maps and satellite imagery are particularly hard to render quickly and smoothly. One of my earliest projects, in the late ’80s, before I focused purely on graphics, was building a panning satellite map. This sparked my interest in the potential of satellite imagery, even though little of it was available at the time.

By the late ’90s, satellite imagery had become more accessible, but it was still hard to use. We developed an app called Keyhole to make loading satellite imagery easier, so that anyone with a regular PC could view any location worldwide. This changed how people perceived the world and satellite imagery. We added map data to that, got acquired by Google, and discovered that map data is not very accurate: once you put satellite imagery on top of maps, you find all the places where they disagree. And pictures are worth a thousand words if you know how to interpret them.

At Google, we started building maps based on these images, initially using satellite imagery and later incorporating Street View. This led to a large-scale mapping effort. After my time at Google, I joined Uber to work on self-driving cars, which require even more precise maps than humans do. This taught me a lot about localisation, or the ability to find yourself on a map. You do this every day when you look at a map, see your blue dot, look around, and try to figure out where you’re standing. Cars need to do the same, and they need to do it even if GPS doesn’t work.

This ability to localise oneself precisely using map data is the basis of all self-driving cars. It’s also the foundation of what we’re building at Niantic with our visual positioning system. We are less interested in cars and more focused on pedestrians and the places where humans walk. We’re building these maps around many of the locations that Niantic games find interesting: the PokéStops of the world. We’re working on expanding those maps so that people can know exactly where they’re standing. If they want to see someone else in the scene, we’ll be able to provide an incredibly accurate map for them.
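To make the localisation idea concrete, here is a minimal sketch of the flow McClendon describes: GPS narrows the search to nearby place maps, and matching what the camera sees against a map yields a precise pose. Every type and function below is a hypothetical illustration, not Niantic’s actual visual positioning system API.

```typescript
// Hypothetical sketch of visual localisation: GPS gives a coarse hint,
// feature matching against a prebuilt map gives the precise pose.
// None of these names belong to Niantic's real VPS.

interface Pose {
  position: [number, number, number];             // metres, map-local frame
  rotationQuat: [number, number, number, number]; // orientation
}

interface PlaceMap {
  id: string;              // e.g. a point-of-interest identifier
  lat: number;
  lon: number;
  descriptors: number[][]; // visual features captured during mapping
}

// Step 1: GPS is only a coarse filter; it narrows which maps to try.
function nearbyMaps(lat: number, lon: number, maps: PlaceMap[], radiusDeg = 0.001): PlaceMap[] {
  return maps.filter((m) => Math.hypot(m.lat - lat, m.lon - lon) < radiusDeg);
}

// Step 2: match the current frame's features against each candidate map.
// Enough matches means "we are confident you are standing at this place".
// A real system solves a full 6-DoF pose; this stub only counts matches.
function localise(frameDescriptors: number[][], candidates: PlaceMap[]): { map: PlaceMap; pose: Pose } | null {
  for (const map of candidates) {
    const matches = frameDescriptors.filter((d) =>
      map.descriptors.some((m) => distance(d, m) < 0.2)
    ).length;
    if (matches >= 30) {
      // Placeholder pose: a real solver returns a position and orientation
      // relative to the map, accurate enough for shared AR.
      return { map, pose: { position: [0, 0, 0], rotationQuat: [0, 0, 0, 1] } };
    }
  }
  return null; // no confident localisation; fall back to GPS alone
}

function distance(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0));
}
```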

Could you share a bit about what Niantic is doing with this real-world mapping? How do you see it evolving?

All our games depend on some form of mapping, and some require very precise mapping or a strong certainty that you’re at a specific location.

One game in particular, Ingress, places great importance on whether you’re at a specific location at a specific time, because you can take over that location. We aim to verify that you, standing in front of a place, are genuinely there. Our localisation system enables us to do this, providing greater security and certainty when you’re standing in front of a statue or one of our points of interest. However, I believe the long-term goal revolves around shared experiences and augmenting the world.

Augmented reality is really about understanding reality well enough that you can place objects on top of it and they’ll stay in place. What I mean is, the objects are placed with such high precision that if you attach something to, say, the base of a statue right in the corner, the next person who comes along will see it exactly where you left it. You can hide things, or virtually write on the statue, and it will appear exactly the same for the next viewer. If multiple people are looking at the same time, they’ll have a shared experience, seeing the exact same thing simultaneously.
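One way to picture the persistence he describes is an anchor stored relative to a shared map rather than to any single phone’s session, so every device that localises against that map resolves the anchor to the same physical spot. A hypothetical sketch, not Niantic’s actual data format:

```typescript
// Hypothetical anchor record illustrating shared persistence: the anchor's
// pose lives in the shared map's coordinate frame, not in one phone's
// session, so everyone who localises against the map sees the same spot.

interface SharedAnchor {
  anchorId: string;
  mapId: string;                            // the place map it is attached to
  positionInMap: [number, number, number];  // metres, map-local frame
  payload: { kind: "note" | "object"; data: string }; // what was left there
}

// Resolving an anchor on a second device: once that device has localised
// and knows its own position in the map, the anchor's map-local position
// can be expressed relative to the device for rendering.
function resolveAnchor(
  anchor: SharedAnchor,
  devicePositionInMap: [number, number, number]
): [number, number, number] {
  // Simplified: offset relative to the device (rotation omitted for brevity).
  return [
    anchor.positionInMap[0] - devicePositionInMap[0],
    anchor.positionInMap[1] - devicePositionInMap[1],
    anchor.positionInMap[2] - devicePositionInMap[2],
  ];
}
```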

AI, particularly in the form of machine learning, is absolutely at the forefront of our semantic mapping efforts. We utilise neural networks for both semantic and localisation mapping, and we have a neural net-based mapping implementation that is both novel and powerful. The semantic extraction techniques we employ are used not only for mapping but also run in real time.

If you’re familiar with Peridot, another of our titles, you’ll notice that your phone can now recognise 15 to 20 different kinds of objects, and the Peridot character will interact based on what it identifies. This capability is what we refer to as real-time semantics, and it’s also a component of the ARDK.
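Conceptually, real-time semantics amounts to a per-frame set of recognised labels that drives character behaviour. A hypothetical sketch of that dispatch loop, not the ARDK’s actual interface:

```typescript
// Hypothetical sketch of real-time semantics driving a character: each
// camera frame yields a set of labels, and labels trigger behaviours.
// The class list and API are illustrative, not the actual ARDK.

type SemanticLabel = "grass" | "water" | "sky" | "food" | "pet_toy"; // ~15-20 classes in practice

interface CharacterBehaviour {
  trigger: SemanticLabel;
  action: string;
}

const behaviours: CharacterBehaviour[] = [
  { trigger: "grass", action: "roll around" },
  { trigger: "water", action: "splash" },
  { trigger: "food", action: "beg" },
];

// Called once per camera frame with whatever the segmentation model found.
function onFrame(labels: Set<SemanticLabel>): string[] {
  return behaviours
    .filter((b) => labels.has(b.trigger))
    .map((b) => b.action);
}

console.log(onFrame(new Set<SemanticLabel>(["grass", "sky"]))); // ["roll around"]
```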

How is Niantic leveraging generative AI?

We developed a generative AI character named Wol, which you can interact with at meetwol.com. Built on 8th Wall, it takes the form of an interactive owl: you can engage it in conversation, and it will respond to you. The owl is particularly focused on trees in the forest. While the conversation occurs in AR, the underlying language model driving the interaction is built by a company called Inworld.

Peridot in action. Image courtesy of Niantic Inc.

We learned a lot from that experience. We recently announced a module for 8th Wall that enables any of our developers to integrate ChatGPT, DALL-E, or Inworld into their 8th Wall applications, letting any developer shape what those experiences look like.
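In spirit, such a module is a thin bridge from a web AR scene to a hosted model. Below is a hedged sketch of what wiring a chat model into a browser-based experience might look like; the endpoint, route, and payload shapes are assumptions for illustration, not 8th Wall’s actual module API.

```typescript
// Hypothetical wiring of a chat model into a browser-based AR experience.
// The backend route and message shapes below are illustrative only.

interface ChatTurn {
  role: "user" | "assistant";
  content: string;
}

async function askCharacter(history: ChatTurn[], userText: string): Promise<string> {
  const response = await fetch("/api/generative-chat", { // hypothetical backend route
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [...history, { role: "user", content: userText }],
      // Wol-style grounding: constrain the character to its persona.
      persona: "a friendly owl who loves talking about forests",
    }),
  });
  const { reply } = (await response.json()) as { reply: string };
  return reply; // feed this into the character's speech and animation system
}
```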

We absolutely believe generative AI will change how everybody does business at some level. The key issue is identifying which problems generative AI can solve first, so that it’s worthwhile to build a production system around them. Right now, there are many great demos like Meet Wol, but the real win will come when you can start solving interesting problems in gaming architecture or player interaction with generative AI, and deploy solutions that genuinely improve the experience. At the end of the day, if these advancements don’t enhance user or player happiness, they’re essentially window dressing.

If you consider the sheer number of PokéStops worldwide, which run into the tens of millions, it’s impractical for any game designer or developer to custom-design experiences for all 20 million of them. But generative AI can. With sufficient semantic and visual information about each location, generative AI can begin to create content tailored to each specific locale. In this way, I see generative AI driving localisation, rather than personalisation, of game experiences and environments in the future.
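That scaling argument becomes concrete as prompt construction: feed each location’s semantic and visual metadata into a template, and a single generative pipeline can produce distinct content for millions of places. The fields and function below are hypothetical:

```typescript
// Hypothetical illustration of location-tailored generation: per-place
// metadata from semantic mapping becomes part of the prompt, so one
// pipeline can cover millions of points of interest.

interface PlaceMetadata {
  name: string;
  category: string;      // e.g. "statue", "mural", "fountain"
  visualTags: string[];  // from semantic mapping, e.g. ["bronze", "park"]
}

function questPrompt(place: PlaceMetadata): string {
  return [
    `Design a short in-game encounter set at "${place.name}", a ${place.category}.`,
    `Visual context: ${place.visualTags.join(", ")}.`,
    `The encounter must reference a physical detail a player standing there could verify.`,
  ].join("\n");
}

console.log(
  questPrompt({ name: "Harbor Lion", category: "statue", visualTags: ["bronze", "waterfront"] })
);
```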

How do you see the gaming revolution evolving from where it is right now?

I believe the future of gaming will be closely tied to the devices people use. Game developers are ultimately seeking users, and currently there are 2 to 3 billion smartphones available as potential platforms. The Apple and Google app ecosystems are well-defined but somewhat rigid, and China has its own set of app ecosystems. Our 8th Wall web-based system doesn’t have those limitations: it allows anyone to access a game via their browser, a QR code, or a link, so I think there’s opportunity there.

We’re particularly excited about upcoming mixed reality headsets like the Quest 3 and Vision Pro, which combine AR and VR capabilities. In the case of the Vision Pro, its cameras and screens are good enough that the experience closely mimics AR, to the extent that what you see could look very much like you’re looking straight through a window. What you see through these headsets can be augmented in the same way as with AR glasses. We view these mixed reality headsets as stepping stones to the true outdoor AR glasses that are on the horizon.