Several Georgia Tech faculty members and Ph.D. students recently took part in the 31st Annual Conference on Neural Information Processing Systems (NIPS 2017), the premier conference for machine learning (ML) and computational neuroscience.
School of Computational Science and Engineering (CSE) Associate Professor and Center for Machine Learning at Georgia Tech (ML@GT) Associate Director Le Song and CSE Ph.D. students Hanjun Dai and Bo Dai, along with Steven Skiena and Yingtao Tian of Stony Brook University, won the best paper award at the NIPS 2017 workshop Machine Learning for Molecules and Materials for their paper titled “Syntax-Directed Variational Autoencoder for Molecule Generation.”
When discussing the award-winning paper, Song said, “Although this is a molecule and material workshop, we have an extended version of this study for computer programs. The extended version has been accepted to the premier deep learning conference, ICLR 2018.”
“Much in the same way that molecules can be valid or invalid, programs too can be valid or invalid. Validity in both cases comes down to syntax and semantics, and the same technique developed for molecules can be applied to programs as well.”
“We build upon deep learning models, which have no prior knowledge and do not use logic – they simply map an input to an output. We then augment the deep learning model with graph structure, prior knowledge, and the ability to reason with logic,” said Song.
This approach proves significantly better than current methods at capturing representations of discrete structures governed by formal grammars and semantics. In other words, by combining a deep learning model with prior knowledge encoded as graph structure, it yields a more efficient algorithm that requires less training data – a combination that has seldom been achieved effectively before.
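The core idea – constraining a generative model so that every decoding step obeys a formal grammar – can be sketched without any neural network machinery. The toy arithmetic grammar and the random rule choice below are illustrative stand-ins for the paper's molecular grammar and learned decoder, not the actual model:

```python
import random

# A toy context-free grammar for arithmetic expressions, standing in for a
# molecular grammar such as one over SMILES strings.  Uppercase symbols are
# nonterminals; everything else is a terminal.
GRAMMAR = {
    "E": [["E", "+", "T"], ["T"]],
    "T": [["x"], ["y"], ["(", "E", ")"]],
}

def generate(seed=0):
    """Stack-based decoding in which, at every step, only productions whose
    left-hand side matches the current nonterminal are offered as choices,
    so every completed output is syntactically valid by construction.  A
    syntax-directed decoder would rank the candidate rules with a neural
    network; uniform random choice is used here as a placeholder."""
    rng = random.Random(seed)
    stack, output = ["E"], []
    while stack:
        symbol = stack.pop()
        if symbol not in GRAMMAR:         # terminal: emit it
            output.append(symbol)
        else:                             # nonterminal: expand a valid rule
            rule = rng.choice(GRAMMAR[symbol])
            stack.extend(reversed(rule))  # leftmost symbol ends up on top
    return "".join(output)

expr = generate()
print(expr)
```

Because invalid productions are never even offered, the decoder cannot produce a syntactically malformed string – the same guarantee the syntax-directed approach provides for molecules and programs.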
Song became the associate director for ML@GT in 2016 and an associate professor for CSE in 2017. His primary research focuses on embedding, dynamic processes over networks, large-scale machine learning, and interdisciplinary problems. He also co-authored and submitted – alongside several Georgia Tech faculty members and students – five papers to the main conference this year:
- Wasserstein Learning of Deep Generative Point Process Models
- Deep Hyperspherical Learning
- On the Complexity of Learning Neural Networks
- Predicting User Activity Level in Point Processes With Mass Transport Equation
- Learning Combinatorial Optimization Algorithms over Graphs
Learning Combinatorial Optimization Algorithms over Graphs creates a framework for using deep learning to develop graph optimization algorithms. The framework is already being used by global industry titans. Tencent, China’s technology and investment holding conglomerate, which owns three of the world’s five biggest social networks, is using the algorithm for advertising placement, matching ads to their most effective target audience. Similarly, Alibaba, China’s biggest online commerce company, is using it for fraud detection.
However, the framework has uses beyond fraud detection and social media marketing. Song said, “Many real-world problems are constrained on a graph, such as the traveling salesman problem appearing in logistics, and the assignment of computing tasks to a network of computing nodes.”
“We want to learn an intelligent algorithm which can become smarter as it solves more problems. [This applies to a wide variety of applications] For instance, we have a new algorithm for improving robot planning which can learn and become smarter.”
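The greedy meta-algorithm behind such frameworks can be sketched in a few lines. In the paper, the scoring function is a graph-embedding network trained with reinforcement learning; the hand-coded degree-based score and the toy minimum-vertex-cover instance below are hypothetical stand-ins for that learned component:

```python
def greedy_solve(adj, score):
    """Greedy construction: repeatedly add the highest-scoring node to the
    partial solution until every edge is covered.  In a learned framework,
    `score` would be a trained Q-function over graph embeddings; here it is
    an ordinary callable."""
    solution = set()

    def uncovered_edges():
        return [(u, v) for u in adj for v in adj[u]
                if u < v and u not in solution and v not in solution]

    while uncovered_edges():
        candidates = [n for n in adj if n not in solution]
        best = max(candidates, key=lambda n: score(n, adj, solution))
        solution.add(best)
    return solution

# Hand-coded stand-in for the learned score: how many still-uncovered
# edges a node would cover if added.
def degree_score(node, adj, solution):
    return sum(1 for v in adj[node] if v not in solution)

# Toy graph: a star on node 0 plus the extra edge (2, 3).
adj = {0: {1, 2, 3}, 1: {0}, 2: {0, 3}, 3: {0, 2}}
cover = greedy_solve(adj, degree_score)
print(cover)  # a valid vertex cover of the toy graph
```

Swapping the hand-coded score for a trained network is what lets the same greedy loop "become smarter as it solves more problems": the construction procedure stays fixed while the scoring function improves with experience.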