The Science of Baby’s First Sight

When a newborn opens her eyes, she does not see well at all. You, the parent, are a blurry shape of light and dark. Soon, though, her vision comes online. Your baby recognizes you, and you can see it in her eyes. Then she looks beyond you, and that flash of recognition fades. She can’t quite make out what’s out the window. It’s another blurry world of shapes and light. But within a few months, she can see the trees outside. Her entire world is coming into focus.

UNC School of Medicine scientists have found more clues about what happens in the brains of baby mammals as they try to make visual sense of the world. The study in mice, published in the journal Nature Neuroscience, is part of an ongoing project in the lab of Spencer Smith, PhD, assistant professor of cell biology and physiology, to map the functions of the brain areas that play crucial roles in vision. Proper function of these brain areas is likely critical for vision restoration.

“There’s this remarkable biological operation that plays out during development,” Smith said. “Early on, there are genetic programs and chemical pathways that position cells in the brain and help wire up a ‘rough draft’ of the circuitry. Later, after birth, this circuitry is actively sculpted by visual experience: simply looking around our world helps developing brains wire up the most sophisticated visual processing circuitry the world has ever known. Even the best supercomputers and our latest algorithms still can’t compete with the visual processing abilities of humans and animals. We want to know how neural circuitry does this.”

If cures for partial or total blindness can be developed through, say, gene therapy or retinal implants, then researchers will need to understand the totality of visual brain circuitry to ensure people can recover useful visual function.

“Most work on restoring vision has focused on the retina and the primary visual cortex,” Smith said. “Less work has explored the development of the higher visual areas of the brain, and their potential for recovery from early deficits. I want to understand how these higher visual areas develop. We need to know the critical time windows during which vision should be restored, and what occurs during these windows to ensure proper circuit development.”

To understand the challenges that restoring vision later in life might entail, take the case of bilateral cataracts, in which the lenses of both eyes are cloudy and vision is severely limited. In developed countries, such cataracts are commonly removed surgically very early in life, and when they are, vision typically develops appropriately.

“But in less developed, rural parts of the world, people often don’t get to a clinic until they are teens or older,” Smith said. “They’ve gone through life seeing light and dark, fuzzy things. That’s about it. When they have the cataracts removed, they recover a large amount of visual function, but it is not complete. They can learn to read and recognize their friends. But they have great difficulty perceiving some types of visual motion.”

It’s the kind of visual perception needed for hand-eye coordination, or simply for navigating the world around you.

There are two subnetworks of visual circuitry, called the ventral and dorsal streams, and the latter is important for motion perception.

Smith wanted to know if visual experience is particularly essential for proper development of the dorsal stream. And he wanted to understand what could be changing at the individual neuron level during this early development.

To explore these questions, Smith and his UNC colleagues conducted hundreds of painstaking, time-consuming experiments. In essence, Smith’s lab is reverse engineering complicated brain circuitry with the help of specialized two-photon imaging systems Smith and his team designed and built at the UNC Neuroscience Center, where he is a member.

“If you want to reverse engineer a radio to know how it works, a good way to start would be to watch someone put together a radio,” Smith said. “Well, this is kind of what we’re doing. We’re using our imaging systems to watch how biology builds its visual processing circuitry.”

In one series of experiments, Smith’s team reared mice in complete darkness for several weeks. Even the daily care of the mice was performed in darkness with the aid of night-vision goggles. Using these imaging systems and precision surgical methods, Smith and colleagues could view specific areas of the brain with neuron-level resolution. They showed that the ventral visual stream in mice did indeed come online, with individual neurons firing as the mice responded to visual stimuli. But the dorsal stream did not.

“Keeping the mice in darkness significantly degraded the magnitude of visual responses in the dorsal stream – responses to what they were seeing,” Smith said. The neurons in the dorsal area weren’t firing as strongly as they did in mice raised with normal visual experience. “Interestingly, even after a recovery period in a normal light-dark cycle, the visual deficit in the dorsal stream persisted.”

This is reminiscent of the persistent visual deficits seen in humans with bilateral cataracts that aren’t repaired until later in life.

“Not only did the mice need visual experience to develop their dorsal stream of visual processing, but they needed it in an early developmental time window to refine the brain circuitry,” Smith said. “Otherwise, their vision never properly developed.”

These experiments can help explain what happens in the human analogs of the ventral and dorsal streams during infancy, as vision slowly develops and we try to make sense of the world moving around us in the first several months after birth.

Smith added, “Now that we have a little bit of a feel for the lay of the land – how these two subnetworks develop — I really want to drill down into the actual computations that these different brain areas are performing. I want to analyze what information neurons in higher visual areas are encoding. What are they encoding better, or more efficiently, than neurons in the primary visual cortex? What, exactly, are they doing that allows us to analyze complex visual stimuli so quickly and efficiently?”
