Research

Cybersecurity

As society becomes more connected to and reliant on modern technology, it also grows more vulnerable to cyber attacks. CSE research has improved cybersecurity in a broad range of settings, from the detection of abnormal and malicious activity to the identification of fraud and malware. A prominent area of CSE research, emerging graph technology at Georgia Tech has the potential to process massive amounts of data and respond to cyber threats in near real time.
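As a minimal sketch of the kind of analysis such graph technology enables, the Python snippet below flags hosts whose connection fan-out in a traffic window suddenly exceeds their historical baseline. The data, threshold, and the fanout_anomalies helper are hypothetical illustrations, not part of any Georgia Tech system.

```python
from collections import defaultdict

# Hypothetical illustration: flag hosts whose distinct-destination count
# (fan-out) in the current window far exceeds their historical average,
# one simple form of graph-based anomaly detection on network traffic.

def fanout_anomalies(edges, history, threshold=5.0):
    """edges: (src, dst) pairs observed in the current time window.
    history: host -> average fan-out over past windows."""
    fanout = defaultdict(set)
    for src, dst in edges:
        fanout[src].add(dst)  # count distinct destinations per source
    flagged = []
    for host, dests in fanout.items():
        baseline = history.get(host, 1.0)
        if len(dests) > threshold * baseline:  # sudden burst of new contacts
            flagged.append((host, len(dests), baseline))
    return flagged

# Example: a host that normally talks to ~2 machines contacts 12 at once.
window = [("10.0.0.7", f"10.0.1.{i}") for i in range(12)]
print(fanout_anomalies(window, {"10.0.0.7": 2.0}))
```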

In addition to advancing data analysis, we are also actively researching the computer architecture requirements for maximizing the performance of graph analyses across a variety of problem types. Our research can be applied to developing enterprise-level computing platforms that are more resistant and adaptive to cyber attacks, and that can help organizations predict or stop attacks to protect customers and our nation’s critical infrastructure.

Data Science and Engineering

Georgia Tech is a prominent leader in the rapidly emerging field of big data, particularly in developing new methods to analyze large and complex data sets and transform them into value. For example, applying data analytics to social networks may help industries understand trends in consumer behavior. Big data is also useful for addressing grand challenges in areas such as genomics, precision medicine, materials, manufacturing, and the management of physical and cyber resources.

The focus of big data research in CSE spans foundational topics (machine learning, data analytics, high performance computing, visualization) and multiple scientific domains (computational biology, materials science, and urban infrastructure, among others).

Data Visualization

Massive amounts of data are created by Internet use, an expanding number of sensors in the environment, and scientific research such as large-scale simulations. Data visualization presents data in ways that best yield insight and support decisions—even as computational science pushes toward exascale capacity and new devices add to the data tsunami via the “Internet of Things.”

Developing visualizations requires creating a tractable representation of the data, then interactively manipulating and querying it. Often researchers must enable users to traverse data sets ranging in size from terabytes to petabytes. To design visualizations, researchers combine techniques from several disciplines, including data mining, machine learning, and human-computer interaction.
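As a minimal sketch of that pipeline, assuming a purely numeric data set (synthetic here), the snippet below aggregates a million points into a small density grid: the kind of tractable representation an interactive view can re-query cheaply, with a zoom expressed as re-aggregation over a smaller region. The density_grid helper and its parameters are illustrative choices.

```python
import numpy as np

# Minimal sketch: reduce one million (x, y) points to a fixed-size 2D
# density grid, a tractable summary an interactive view can redraw
# at any resolution without touching the raw points again.

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1_000_000)
y = 0.5 * x + rng.normal(0.0, 0.5, 1_000_000)

def density_grid(x, y, bins=64, bounds=None):
    """Aggregate raw points into a bins x bins count grid."""
    grid, xedges, yedges = np.histogram2d(x, y, bins=bins, range=bounds)
    return grid, xedges, yedges

full, _, _ = density_grid(x, y)                              # overview
zoom, _, _ = density_grid(x, y, bounds=[[-1, 1], [-1, 1]])   # interactive zoom
print(full.shape, int(zoom.sum()))
```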

High Performance Computing

High performance computing (HPC) researchers devise computing solutions at the absolute limits of scale and speed to keep pace with the demanding computational needs of our rapidly evolving society. In this compelling field, technical knowledge and ingenuity combine to drive systems using the largest number of processors at the fastest speeds with the least amount of storage and energy. Attempting to operate at these scales pushes the limits of the underlying technologies, including the capabilities of the programming environment and the reliability of the system.

HPC researchers develop efficient, reliable, and fast algorithms, software, tools, and applications. The software that runs on these systems must be carefully constructed, balancing many factors to achieve the best performance for each specific computing challenge.
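A minimal sketch of this balancing act, using Python's standard multiprocessing module on a toy summation (the chunking scheme and worker count are illustrative choices, not a recommendation):

```python
from multiprocessing import Pool
import math

# Sketch of a basic HPC pattern: partition a large computation into
# chunks, evaluate them in parallel, then combine the partial results.
# Chunk size is one of the "many factors" to balance: too small and
# scheduling overhead dominates; too large and load imbalance grows.

def partial_sum(bounds):
    lo, hi = bounds
    return sum(math.sqrt(i) for i in range(lo, hi))

def parallel_sum(n, workers=4):
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs the remainder
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(10_000_000))
```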

Machine Learning

Machine learning (ML) focuses on developing computer programs that can teach themselves, learning and acting on new information or examples without explicit programming. Research in this area explores the construction and study of algorithms that build models and make data-driven predictions or decisions.
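To make "learning without explicit programming" concrete, here is a minimal sketch on synthetic data: a model starts with no knowledge of the relationship between inputs and outputs and recovers it purely from examples via gradient descent. The data, learning rate, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

# Minimal sketch of learning from examples: fit y = w*x + b by gradient
# descent on synthetic data. The true relationship (w=3.0, b=0.5) is
# never coded into the model; its parameters are driven by data alone.

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, 200)
y = 3.0 * X + 0.5 + rng.normal(0.0, 0.1, 200)

w, b = 0.0, 0.0          # start with no knowledge
lr = 0.1                 # learning rate
for _ in range(500):
    err = (w * X + b) - y
    w -= lr * (2 * err * X).mean()  # gradient of mean squared error wrt w
    b -= lr * (2 * err).mean()      # gradient of mean squared error wrt b

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=3.00, b=0.50
```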

Applications span a wide range, including recognizing images, characters, and spoken language; categorizing messages; diagnosing and treating complex diseases such as asthma and cancer; detecting fraud; and predicting the responses of humans, natural events, and other dynamic processes.