New Visualization Tool Helps Non-Experts Understand Neural Networks

“Visual analytics helps people make sense of complex systems that use large data and discover insights by effectively visualizing them and letting people interact with them,” said Minsuk (Brian) Kahng, a Ph.D. student in the School of Computational Science and Engineering (CSE).

Kahng has recently been working alongside Associate Professor Polo Chau to build visualization tools for deep learning models, with an emphasis on public accessibility. Two examples of his work are ActiVis, a visualization system for industry-scale deep neural network models that is deployed at Facebook, and the newly released GAN Lab, which is now available to the public online.

GAN Lab: Understanding Complex Deep Generative Models Using Interactive Visual Experimentation

GAN Lab is Kahng’s latest research project and was produced in collaboration with Google Brain and Chau. It is an open-source interactive tool created for a wide audience – including non-experts – to learn about, experiment with, and play with GANs.

GT Computing Ph.D. student Minsuk (Brian) Kahng describes GAN Lab, an interactive tool for understanding generative adversarial networks.

“A GAN, or a generative adversarial network, is one of the popular, but hard-to-understand deep learning models that can be destructive to a machine learning (ML) system,” explained Kahng. “GANs take a small piece of input – such as a few random numbers – and produce a complex output, like an image of a realistic-looking face.”

A generative model or system that can produce realistic images or simulations can be extraordinarily useful for applications ranging from art to enhancing blurry images. However, such models can also be dangerous if maliciously used against systems that depend on image recognition, such as self-driving cars.

According to the GAN Lab creators, “GAN Lab enables people to learn about the training process of this complex generative model. Users can interactively train a GAN model by experimenting with several options, and the tool visualizes the model’s inner workings, helping people understand the complex model training process.”
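The alternating training process the tool visualizes can be sketched as a toy one-dimensional GAN. The sketch below uses plain NumPy with a linear generator and a logistic-regression discriminator; the data distribution, model sizes, and learning rate are illustrative assumptions, not GAN Lab's actual TensorFlow.js internals.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: samples from N(4, 0.5) that the generator must learn to imitate.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator G(z) = wg*z + bg maps random noise z to a sample.
wg, bg = 1.0, 0.0
# Discriminator D(x) = sigmoid(wd*x + bd) scores how "real" a sample looks.
wd, bd = 0.0, 0.0

lr, batch = 0.01, 64
for step in range(5000):
    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    x_real = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = wg * z + bg
    d_real = sigmoid(wd * x_real + bd)
    d_fake = sigmoid(wd * x_fake + bd)
    # Gradient of binary cross-entropy w.r.t. the logit is (D - label).
    g_wd = np.mean((d_real - 1) * x_real) + np.mean(d_fake * x_fake)
    g_bd = np.mean(d_real - 1) + np.mean(d_fake)
    wd -= lr * g_wd
    bd -= lr * g_bd

    # --- Generator update: push D(fake) -> 1 (fool the discriminator) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = wg * z + bg
    d_fake = sigmoid(wd * x_fake + bd)
    # Chain rule through the discriminator's logit: d(logit)/dG = wd.
    g_wg = np.mean((d_fake - 1) * wd * z)
    g_bg = np.mean((d_fake - 1) * wd)
    wg -= lr * g_wg
    bg -= lr * g_bg

# After training, generated samples should drift toward the real mean of 4.
samples = wg * rng.normal(0.0, 1.0, 1000) + bg
print(round(float(samples.mean()), 2))
```

Each iteration mirrors what GAN Lab animates: the discriminator and generator take turns updating, and the generated distribution gradually moves toward the real one.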

The how and why

“When I was a master’s student in South Korea, I worked on recommender systems, one of the many applications of machine learning,” said Kahng. 

Recommender systems are information filtering systems that aim to predict ratings or user preferences, such as the suggestions given on your favorite movie streaming site or monthly grab bag. Through his work in this area, Kahng found that beyond simply producing results, he was increasingly interested in helping others make sense of their results.
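Recommender systems like those Kahng worked on are often built on collaborative filtering, where a user's missing rating is inferred from similar users. A minimal user-based sketch (the ratings matrix and cosine-similarity weighting here are illustrative assumptions, not Kahng's actual system):

```python
import numpy as np

# Toy user-by-item ratings matrix; 0 means "not yet rated" (illustrative data).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def predict(user, item):
    """Predict a missing rating as a similarity-weighted average of
    other users' ratings for the same item (user-based collaborative filtering)."""
    raters = np.where(ratings[:, item] > 0)[0]   # users who rated this item
    raters = raters[raters != user]
    u = ratings[user]
    # Cosine similarity between the target user and each rater.
    sims = np.array([
        ratings[r] @ u / (np.linalg.norm(ratings[r]) * np.linalg.norm(u))
        for r in raters
    ])
    return float(sims @ ratings[raters, item] / sims.sum())

print(round(predict(0, 2), 2))  # user 0's predicted rating for item 2
```

The prediction leans on the most similar raters, so user 0 (whose tastes match user 1) inherits a low score for item 2, which user 1 disliked.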

“I realized, for me, it’s equally as important to explain how such a system works and explain what is going on inside the model. That is why, as a Ph.D. student, I am focused on how people interact with these systems to help them interpret what’s going on behind-the-scenes and change it for themselves,” said Kahng. 

He said that interactive tools like GAN Lab could significantly broaden people’s access to AI technologies.

Contact: 

Kristen Perez

Communications Officer I

College of Computing - School of Computational Science and Engineering