WBHM 90.3

Scientists make a pocket-sized AI brain with help from monkey neurons


Researchers using data from macaque monkeys were able to shrink an AI vision model to a tiny fraction of its original size.

A human brain consumes less power than a light bulb, while artificial intelligence systems guzzle electricity to do the same tasks.

Now, scientists have created a highly efficient AI model that hints at how living brains are able to do so much with so little, a team reports in the journal Nature.

The model, which mimics a part of the brain’s visual system, started out using 60 million variables. But the team was able to compress it into a version that performed nearly as well using just 10,000 variables.

“That is incredibly small,” says Ben Cowley, an author of the study and an assistant professor at Cold Spring Harbor Laboratory. “This is something we could send in a tweet or an email.”

The compact model also appears to work more like a living brain, which could help scientists study what goes wrong in diseases like Alzheimer’s, Cowley says.

More broadly, if the AI model really does replicate strategies found in nature, it could help scientists understand the inner workings of human brains, says Mitya Chklovskii, a group leader at the Simons Foundation’s Flatiron Institute, who was not involved in the study.

Compact, biology-inspired models of the brain could also lead to “more powerful and more humanlike artificial intelligence,” says Chklovskii, who is also on the faculty at NYU Medical Center.

Monkey data

The study is part of an effort to understand the human visual system, which takes in bits of light and transforms them into something we recognize, like grandma or the Grand Canyon.

Cowley says scientists who study the visual system have been trying to answer questions like, “How do you recognize a cat?” or “How do you recognize a dog?”

There’s no good way to watch a human brain do this. So Cowley has been looking at artificial intelligence systems able to accomplish the same sort of tasks.

But there’s a problem: “We’re very impoverished in our understanding of how these AI systems work,” Cowley says, “much like our own brain.”

Working with researchers at Carnegie Mellon University and Princeton University, Cowley created an AI model that his team could understand. It simulates just one part of the visual system, which features cells called V4 neurons.

“They encode colors and textures and curves and very complicated proto-objects,” Cowley says.

Existing AI systems can do the same thing using deep neural network models, which require powerful computers and learn by considering a huge range of possibilities. But Cowley’s team was after something more efficient.

“We want to take these big clunky models and try to compress it down into a much smaller, compact form,” he says.

They started with a model trained on data from macaque monkeys. Then they looked for parts of the model that were redundant or unnecessary. They also applied statistical techniques like those used to compress digital photos.

The result: a model small enough to put in an email attachment.
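The two compression steps described above — pruning redundant parts of the model, then applying statistical techniques like those used to compress digital photos — can be sketched in a toy example. The matrix size, pruning threshold, and rank below are illustrative assumptions for a single layer, not the study's actual procedure; here magnitude pruning and a low-rank SVD approximation stand in for the team's methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy weight matrix standing in for one layer of the large model.
# (Shape and thresholds are illustrative, not from the study.)
W = rng.normal(size=(256, 256))

# Step 1 -- pruning: zero out redundant, near-zero weights by magnitude,
# keeping only the largest 10%.
threshold = np.quantile(np.abs(W), 0.90)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Step 2 -- statistical compression: a low-rank SVD approximation,
# akin in spirit to techniques used for image compression, keeps only
# the top-k components of the pruned layer.
k = 8
U, s, Vt = np.linalg.svd(W_pruned, full_matrices=False)
W_compact = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Parameter counts: storing the dense layer vs. the factored
# (U_k, s_k, Vt_k) form -- the compact version needs far fewer numbers.
dense_params = W.size
compact_params = U[:, :k].size + k + Vt[:k, :].size
print(dense_params, compact_params)
```

In this sketch the factored form stores about 4,100 numbers instead of 65,536 for one layer; applied across a whole network, reductions of this kind are how a 60-million-variable model can shrink by orders of magnitude.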

A compact model with fewer secrets

Because the model is so small and simple, the team was able to get a glimpse of what its artificial neurons were doing.

Some V4 neurons, for example, were responding to shapes with strong edges and lots of curves — the sort of shapes you might see in the produce section of the grocery store.

“When you go into the supermarket and you see the arranged fruit, your V4 neurons love that,” Cowley says. “They love arranged fruit. They love all the curves of the apples [and] oranges.”

Other V4 neurons seemed to respond only to small dots in an image.

“This was quite interesting to us because primates are very drawn to eyes,” Cowley says.

The specialized nature of these V4 neurons may help explain how human and other primate brains are able to make sense of what they see without relying on massive computing power.

The findings also may have implications for artificial intelligence.

“If our brains have less complex models and yet can do more than these AI systems, that tells us something about our AI systems,” Cowley says. Namely, they could probably be smaller and simpler yet still do a better job interpreting what they see.

For example, self-driving cars might be able to run on less powerful computers, he says, while correctly distinguishing a pedestrian from an airborne plastic bag.

But AI systems need to do more than shrink in order to perform as well as a human brain, Chklovskii says.

For example, he says, a person can easily recognize a friend’s face in any setting and from many angles, even if that friend has acquired a suntan or is sporting a new haircut.

AI systems struggle with this sort of task, even when powered by supercomputers.

That may be because current AI models are based on an understanding of the human brain from the 20th century, Chklovskii says.

“Since then, we learned a lot more about the brain,” he says. “So maybe we should update the foundations of the artificial networks.”

Transcript:

JUANA SUMMERS, HOST:

A human brain consumes less power than a light bulb. An artificial intelligence system uses vastly more energy to do the same tasks. NPR’s Jon Hamilton reports on new research that hints at how living brains do more with less.

JON HAMILTON, BYLINE: The brain’s visual system takes in bits of light and transforms them into something we recognize, like grandma or the Grand Canyon. Ben Cowley of Cold Spring Harbor Laboratory has been studying how this happens.

BEN COWLEY: Getting at the question, for example, how do you recognize a cat? How do you recognize a dog, etc.?

HAMILTON: There’s no good way to watch a human brain do this, so Cowley has been looking at artificial intelligence systems. But he says there’s a problem.

COWLEY: We’re very impoverished in our understanding of how these AI systems work, much like our own brain.

HAMILTON: Cowley’s team created an AI model they could understand. It simulates the neurons in just one part of the visual system.

COWLEY: They encode colors and textures and curves and very complicated proto-objects. So you wouldn’t say, oh, this cares about a dog, but it might care about a weird nose and an eyeball or something like this.

HAMILTON: Existing AI systems can do this using large models that consider every possibility. Cowley’s team was after something more efficient.

COWLEY: So we want to take these big, clunky models and try to compress it down into a much smaller, compact form.

HAMILTON: They started with a model trained on data from macaque monkeys. It worked, but it included about 60 million variables. Next, Cowley’s team pruned the model and applied statistical techniques like those used to compress digital photos. The result, a version with only about 10,000 variables.

COWLEY: And that is incredibly small. This is something we could send in a tweet or an email.

HAMILTON: Yet, it still does most of what monkey neurons do. And because the model is so small and simple, the team was able to get a glimpse of what its artificial neurons were doing. For example, Cowley says cells called V4 neurons were responding to shapes with strong edges and lots of curves.

COWLEY: The best comparison I could find is when you go into the supermarket and you see the arranged fruit, your V4 neurons love that. They love arranged fruit. They love all the curves of the apples, oranges and things like this.

HAMILTON: Cowley says the AI model also included a group of V4 neurons that seem to be looking only for small dots in an image.

COWLEY: And this was quite interesting to us because, especially for primates, we are very drawn to eyes. You can see a big scene, and if there’s a little human there with little eyes, you are directly going to that face, and you’re looking at their eyes.

HAMILTON: Cowley says the results, which appear in the journal Nature, suggest how human brains may have found efficient ways to make sense of what they see. He says the findings also have implications for artificial intelligence.

COWLEY: If our brains have less-complex models and yet can do much more than these AI systems, that tells us something about our AI systems.

HAMILTON: Namely, that they could probably be smaller and simpler, yet do a better job interpreting what they see. Self-driving cars, for example, could run on smaller computers and still not confuse a plastic bag with a pedestrian. Mitya Chklovskii of New York University and the Simons Foundation's Flatiron Institute says the new study shows one way AI can become more like its natural counterpart. But he says a living brain still outperforms an artificial one in some tasks.

MITYA CHKLOVSKII: You can recognize a face of your friend, regardless of the distance to the face, the orientation of the face, maybe a suntan, maybe a new haircut. You still get the face of your friend.

HAMILTON: Chklovskii says that’s because the neurons in biological brains aren’t limited to snapshot images. Instead, they take in videos that show the same face at different times in different places. Jon Hamilton, NPR News.

(SOUNDBITE OF MUSIC)
