Digital & Technology

Kim Polese: “To unlock AI’s true potential we need systems that can explain their decisions”


At their core, computers are number crunchers. It took researchers decades to teach machines that can only handle zeros and ones how to deal with the uncertainties of the human world. But now, all of a sudden, there’s a burst of artificial intelligence applications everywhere – from voice assistants and self-driving cars to cancer-spotting algorithms and AI artists painting like Picasso.

At DLD 2019 in Munich, Kim Polese will discuss the implications of the AI revolution for business and society. A distinguished computer scientist and entrepreneur, Kim led the launch of Java while she was at Sun Microsystems and now serves as the Chairman of CrowdSmart. The San Francisco-based company aims to improve the success rate of startup investments by combining the intelligence of humans and machines – a strategy that could hold great promise for applications in many other areas as well.

Where do you see AI’s biggest potential?

A much-needed development will be the emergence of explanatory AI – that is, AI systems that can show how they are making decisions, enabling us to correct and adjust them as needed. In so doing, we can train machine learning systems to continually improve in order to better assist us, which will unlock the true potential of artificial intelligence.

Why do we need this next step?

The current wave of commercial AI – often called “deep learning” – has focused on generating insights from huge datasets, and iterating on these insights to form new conclusions. The results can be very impressive, often leading to a blind faith in what the “magic AI machine” says. The downside is that these systems are typically constrained, narrowly focused and require enormous amounts of data. They’re also very complex and resemble an AI “black box” whose results heavily depend on the integrity of the data – and too often the data is flawed or incomplete. For example, automated decision-making on bank loans or job applications can be based on historically unfair practices or societal prejudices. If these flaws remain hidden from the AI developers, the system may bake those historically unfair practices into present-day decisions.
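The loan example above can be made concrete with a minimal, entirely hypothetical sketch: a naive model that learns approval rates from invented historical records will faithfully reproduce whatever unfairness those records contain, and a transparent per-feature breakdown is what makes the bias visible. The dataset, feature names, and scoring rule here are all illustrative assumptions, not a real system.

```python
from collections import defaultdict

# Hypothetical historical loan records: (zip_code, income_band, approved).
# Applicants from zip "94110" were denied across the board in the past.
history = [
    ("94110", "high", False), ("94110", "high", False),
    ("94110", "low", False), ("94110", "low", False),
    ("90210", "high", True), ("90210", "high", True),
    ("90210", "low", True), ("90210", "low", False),
]

def train(records):
    """Learn per-feature-value approval rates (a deliberately transparent 'model')."""
    counts = defaultdict(lambda: [0, 0])  # (feature, value) -> [approvals, total]
    for zip_code, income, approved in records:
        for key in (("zip", zip_code), ("income", income)):
            counts[key][0] += int(approved)
            counts[key][1] += 1
    return {key: a / n for key, (a, n) in counts.items()}

def predict_with_explanation(model, zip_code, income):
    """Average the per-feature rates and return each feature's contribution."""
    factors = {("zip", zip_code): model[("zip", zip_code)],
               ("income", income): model[("income", income)]}
    score = sum(factors.values()) / len(factors)
    return score > 0.5, factors

model = train(history)
approved, factors = predict_with_explanation(model, "94110", "high")
# A high-income applicant is denied purely because of the biased zip-code
# history, and the factors dict shows exactly which feature is responsible.
```

Nothing in the arithmetic is “unfair” by itself; the unfairness lives in the training data, and only the exposed explanation lets a developer spot and correct it.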

What’s the best way to solve this problem?

We need systems where humans and machines can exchange information in order to complement each other. For example, humans are good at inferring results from small data sets, making leaps of insight and filling in the blanks – we make really quick judgments reading body language and using our five senses. Computers, on the other hand, excel at tasks like pattern recognition and reconstructing complete information. What’s new, and important, is that explanatory systems will make transparent how they reached a conclusion, so that humans can better understand the system and adjust assumptions when they are not accurate. That way, we can gradually turn qualitative information into quantitative information – with users continually improving the machines that assist them, not the other way around.
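The adjust-the-assumptions loop described above can be sketched in a few lines. This is a hypothetical toy, not any real product: the feature weights and the applicant data are invented, and the “explanation” is simply the per-feature contribution to a weighted score, which a human reviewer then corrects.

```python
def score(applicant, weights):
    """Weighted sum over normalized features, returned with a per-feature breakdown."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    return sum(contributions.values()), contributions

# Assumed initial model: the system has learned to weight location heavily.
weights = {"income": 0.6, "zip_code": 0.4}

# Hypothetical applicant with strong income but a low-scoring zip code.
applicant = {"income": 0.9, "zip_code": 0.1}

before, why = score(applicant, weights)
# The breakdown ("why") shows zip_code dragging down a strong earner.
# A human reviewer decides location should not matter and shifts the
# weight onto income -- the adjustment the transparent system enables.
weights["zip_code"] = 0.0
weights["income"] = 1.0

after, _ = score(applicant, weights)
```

The point is not the arithmetic but the interface: because the system reports *why* it scored the applicant as it did, the human can correct a faulty assumption instead of blindly trusting the output.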

Which topic deserves more of our attention in 2019?

One area I’m interested in is how pervasive computing is affecting human brain development, social interactions and emotional health. While computing is revolutionizing and profoundly improving healthcare in countless ways, our continual consumption of information via smartphones and connected devices is also changing us in ways that may not be so healthy. We are just beginning to understand this, and I believe it deserves more attention.