Can “Transparent” AI Eliminate Bias?

Researchers believe Bayesian techniques could someday replace the black box of deep learning.

Deep learning has revolutionized many fields of science, and its influence will continue to grow as computers become more powerful and deep learning techniques become more accessible. However, deep learning poses a problem in many scenarios because it leaves so much hidden: the connections among the predictors that influence a decision are often difficult for a human to uncover and understand. In some cases, the hidden factors built into an AI system can introduce bias.

Scientists at Mederrata Research and Sound Prediction Inc. have developed a more “transparent” AI that “thinks” in a way that is easier for humans to follow. The research, carried out on the Bridges-2 system at the Pittsburgh Supercomputing Center (PSC), used multilevel Bayesian statistical models to mimic the behavior of a conventional AI system. In the future, this method could make it easier to uncover bias and to build a better understanding of computer modeling results.
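The article does not include the researchers' code, but the core idea of a multilevel (hierarchical) Bayesian model can be sketched. The example below is a minimal, hypothetical illustration using the open-source PyMC library and synthetic grouped data; the variable names (mu_beta, sigma_beta, beta) and the data-generating setup are illustrative assumptions, not the team's actual model.

```python
import numpy as np
import pymc as pm

# Hypothetical data: a continuous outcome y measured across 8 groups,
# where each group may respond to the predictor x slightly differently.
rng = np.random.default_rng(0)
n_groups, n_per_group = 8, 50
group = np.repeat(np.arange(n_groups), n_per_group)
x = rng.normal(size=n_groups * n_per_group)
true_slopes = rng.normal(1.0, 0.3, size=n_groups)
y = true_slopes[group] * x + rng.normal(0, 0.5, size=x.size)

with pm.Model() as model:
    # Population-level ("hyper") parameters: every coefficient is an
    # explicit, inspectable random variable rather than a hidden weight.
    mu_beta = pm.Normal("mu_beta", mu=0.0, sigma=1.0)
    sigma_beta = pm.HalfNormal("sigma_beta", sigma=1.0)

    # Group-level slopes drawn from the population distribution --
    # the "multilevel" part of the model.
    beta = pm.Normal("beta", mu=mu_beta, sigma=sigma_beta, shape=n_groups)
    sigma_y = pm.HalfNormal("sigma_y", sigma=1.0)

    # Likelihood linking parameters to the observed data.
    pm.Normal("y_obs", mu=beta[group] * x, sigma=sigma_y, observed=y)

    # Posterior sampling: the result is a full probability distribution
    # over every parameter, not a single opaque point estimate.
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

# Each group's estimated slope can be read off and compared directly.
print(idata.posterior["beta"].mean(dim=("chain", "draw")))
```

Because every parameter in such a model is an explicit random variable with a full posterior distribution, an analyst can inspect how each group's coefficient differs from the population average, and how uncertain each estimate is. That kind of transparency is difficult to extract from the millions of entangled weights inside a deep network.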

According to the report on the PSC website, the new technique compared favorably with alternative AI methods, producing results that were easier for humans to interpret.