AI, "black boxes" whose functioning researchers are trying to decipher.

AI, “black boxes” whose functioning researchers are trying to decipher.

The Green Thumb of Artificial Intelligence Designers?

Do artificial intelligence (AI) designers have a green thumb? This spring, an essay by Dario Amodei, co-founder of the AI company Anthropic, compared their work to the art of growing plants. One selects the species and the soil, and decides how much water and sunlight to provide, carefully following the advice of the most influential botanists, in order to create “the optimal conditions to guide their shape and their growth,” he observed. “But the exact structure that emerges is unpredictable,” he added, and our understanding of how it works is “poor.” That is quite the opposite of a conventional computer program, whose designers can explain its mechanisms in the smallest detail.

Another, less bucolic image also comes up often among scientists to describe AIs: the “black box.” It is an analogy that amuses Thomas Fel, a French researcher at Harvard University who specializes in understanding them. “Paradoxically, they are rather transparent,” he says with a smile, because they are made up entirely of perfectly readable numerical values.
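To make that remark concrete, here is a minimal sketch, assuming the PyTorch library (which the article does not name): every parameter of even a small network can be printed out as plain numbers. Nothing is hidden from view; the difficulty lies in interpreting what those numbers mean.

import torch
import torch.nn as nn

# A tiny, untrained network used purely for illustration.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

for name, param in model.named_parameters():
    # Each parameter tensor is just an array of readable numerical values.
    print(name, tuple(param.shape), param.flatten()[:3].tolist())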

In theory, an AI should be easier to understand than a human brain, because its “neurons” are more rudimentary: each amounts to a simple little calculator storing hundreds of numerical values that tell it when to react to the signals from its neighbors. Not to mention that we can “open them up and mistreat them” without obtaining the patient’s consent, adds Ikram Chraibi Kaadoud, a researcher in “trustworthy AI” at the National Institute for Research in Computer Science and Automation (Inria) at the University of Bordeaux.
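As an illustration of that description, here is a minimal sketch of a single artificial “neuron” in Python; the signal and weight values are invented for the example and come from no real model. The neuron simply weights the signals arriving from its neighbors, adds a bias, and “reacts” only when the total is positive.

def neuron(inputs, weights, bias):
    # Weighted sum of the incoming signals from neighboring neurons.
    total = sum(x * w for x, w in zip(inputs, weights))
    # ReLU activation: the neuron reacts only if the total exceeds zero.
    return max(0.0, total + bias)

signals = [0.5, -1.2, 0.3]   # illustrative signals from neighboring neurons
weights = [0.8, 0.1, -0.4]   # the stored numerical values of this neuron
print(neuron(signals, weights, bias=0.05))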

Studying the Behavior of Neurons
