Now, her human-centered approach to improving machine learning interpretability is set to be applied to the financial technology sector through a newly awarded 2021 JPMorgan Chase AI Research Ph.D. Fellowship.
According to Park, “Society faces fundamental barriers to learning, understanding, and ultimately trusting AI technologies.”
“Not only does a lack of transparency make people hesitant to trust and deploy them, but when AI models do not perform satisfactorily or are harmed by malicious attacks, people lack actionable guidance for understanding their vulnerabilities and how to fix them,” she said.
This is where Park’s research in creating interactive visualization tools for machine learning models can help address some of the significant challenges seen across the fintech industry.
Specifically, the fellowship program aims to apply innovative AI and data science tools to support financial businesses. These efforts include securing AI for privacy, cryptography in financial services, safe human-AI interaction, and more.
This research could be applied by JPMorgan Chase customers in many ways, such as explaining how an AI service reaches a decision. With such explanations, stakeholders can better understand, for example, whether an algorithm is adequately capturing their needs or assessing their risk tolerance.
“In the risk management case, I could help a customer understand which features are considered most for the AI decision. For example, there could be many factors that decide that risk, but we can detail which features are higher priority than others and help a customer visualize their data by understanding how key factors are correlated or weighted,” she said.
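The idea Park describes can be sketched in a few lines of code. This is a hypothetical illustration, not her actual tooling: the feature names and weights below are invented, standing in for the learned parameters of a simple linear risk model, and the text bars stand in for a richer visualization.

```python
# Hypothetical illustration of ranking the features a simple linear risk
# model weighs most heavily. Feature names and weights are invented for
# this sketch; a real model's weights would come from training.
risk_weights = {
    "credit_history_length": 0.12,
    "debt_to_income_ratio": -0.85,
    "recent_missed_payments": -1.40,
    "annual_income": 0.55,
}

# Sort features by the magnitude of their learned weight: the larger the
# magnitude, the more the model's decision depends on that feature.
ranked = sorted(risk_weights.items(), key=lambda kv: abs(kv[1]), reverse=True)

for name, weight in ranked:
    # A simple text "bar" conveys relative importance at a glance.
    bar = "#" * int(abs(weight) * 10)
    print(f"{name:<24} {weight:+.2f} {bar}")
```

Here the customer would see immediately that recent missed payments dominate the model's risk score, while credit history length barely registers.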
Park continued, “Through my research in information visualization, machine learning, and data analytics over the past few years, I have realized that the key to promote trust in AI and protect the models is to bring humans into the loop.”
Since beginning her graduate research three years ago, Park has worked with advisor and CSE Associate Professor Polo Chau and Interactive Computing Assistant Professor Diyi Yang to create similar methods with that specific goal in mind.
Currently, the team is designing a visual user interface where users, such as AI experts, can quickly view and edit an AI model’s vulnerable areas.
“Some models can be attacked,” explained Park. “So, what we want to do is help users easily identify and fix the model – which usually takes a considerable amount of time.”
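The kind of attack Park refers to can be demonstrated with a minimal sketch. The model below is an invented logistic classifier, not one from her research; the perturbation follows the well-known fast-gradient-sign idea, where each feature is nudged slightly in the direction that most raises the model's score.

```python
import numpy as np

# Invented weights for a toy logistic model (for illustration only).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    # Positive score -> class 1, negative score -> class 0.
    return 1 if x @ w + b > 0 else 0

# A borderline input: score = 0.2 - 0.6 + 0.2 + 0.1 = -0.1, so class 0.
x = np.array([0.2, 0.3, 0.4])
original = predict(x)

# Fast-gradient-sign-style attack: shift each feature by a small epsilon
# in the direction that increases the score, flipping the decision.
epsilon = 0.1
x_adv = x + epsilon * np.sign(w)
attacked = predict(x_adv)
```

A perturbation of at most 0.1 per feature is enough to flip this model's decision, which is exactly the kind of fragility a visual debugging interface would help experts spot and repair.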
While the audience for Park’s fellowship research may not consist of AI experts, the fundamental need to enhance AI models’ interpretability in a timely manner is the same.