Cynthia Rudin, a professor of computer science and of electrical and computer engineering at Duke, has become the second recipient of the $1 million Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence (AAAI).
Rudin, who has spent 15 years applying machine learning to important societal problems, has learned through her work that Artificial Intelligence (AI) is best utilised when people can closely observe what it is doing. Her sustained effort advocating for and developing "interpretable" machine learning algorithms, which allow humans to see inside AI, has earned Rudin the prestigious award, considered to be the 'new Nobel' in the field of AI.
The award was started last year by the AAAI, which, founded in 1979, serves as the international scientific society that brings together AI researchers, practitioners and educators. The award is funded by the online education company Squirrel AI and seeks to acknowledge AI achievements along the same lines as similar awards in more traditional fields.
Rudin's first notable project, a collaboration with Con Edison, the energy company responsible for powering New York City, used machine learning (ML) to predict which manholes in the city were at risk of exploding due to degrading and overloaded electrical circuitry.
“We were getting more accuracy from simple classical statistics techniques and a better understanding of the data as we continued to work with it,” Rudin said in an official statement on the Duke University website. “If we could understand what information the predictive models were using, we could ask the Con Edison engineers for useful feedback that improved our whole process. It was the interpretability in the process that helped improve accuracy in our predictions, not any bigger or fancier machine learning model. That’s what I decided to work on, and it is the foundation upon which my lab is built.”
She went on to build new methodologies for interpretable ML: models that explain themselves, so that humans can understand how they arrive at their predictions. For example, in collaboration with Brandon Westover and Aaron Struck at Massachusetts General Hospital, and her former student Berk Ustun, she designed a system that predicts which patients are at the greatest risk of destructive seizures after a stroke or other brain injury.
She is also the brains behind the New York Police Department’s Patternizr algorithm, a powerful piece of code that determines whether a new crime committed in the city is related to past crimes.
She is being cited for “pioneering scientific work in the area of interpretable and transparent AI systems in real-world deployments, the advocacy for these features in highly sensitive areas such as social justice and medical diagnosis, and serving as a role model for researchers and practitioners.”
“Only world-renowned recognition, such as the Nobel Prize and the A.M. Turing Award from the Association for Computing Machinery, carry monetary rewards at the million-dollar level,” said AAAI awards committee chair and past president Yolanda Gil. “Professor Rudin's work highlights the importance of transparency for AI systems in high-risk domains. Her courage in tackling controversial issues calls out the importance of research to address critical challenges in the responsible and ethical use of AI."
“Cynthia’s commitment to solving important real-world problems, desire to work closely with domain experts, and ability to distil and explain complex models is unparalleled,” said Daniel Wagner, deputy superintendent of the Cambridge Police Department. “Her research resulted in significant contributions to the field of crime analysis and policing. More impressively, she is a strong critic of potentially unjust ‘black box’ models in criminal justice and other high-stakes fields, and an intense advocate for transparent interpretable models where accurate, just and bias-free results are essential.”
“Cynthia is changing the landscape of how AI is used in societal applications by redirecting efforts away from black-box models and toward interpretable models by showing that the conventional wisdom—that black boxes are typically more accurate—is very often false,” said Jun Yang, chair of the computer science department at Duke. “This makes it harder to justify subjecting individuals (such as defendants) to black-box models in high-stakes situations. The interpretability of Cynthia's models has been crucial in getting them adopted in practice, since they enable human decision-makers, rather than replace them.”
Rudin earned undergraduate degrees in mathematical physics and music theory from the University at Buffalo and a PhD in applied and computational mathematics from Princeton. She was a National Science Foundation postdoctoral research fellow at New York University and an associate research scientist at Columbia University. She has been with Duke University since 2017, where she holds appointments in computer science, electrical and computer engineering, biostatistics and bioinformatics, and statistical science.