Machine learning helps determine which infants will gain the most from cochlear implantation


Intact hearing in early childhood is essential for the normal development of communication skills and language. Healthy hearing depends on neural circuits that are also foundational for most academic skills, such as reading and verbal communication. Babies born deaf or with impaired hearing may suffer an ongoing deficit in communication and language development if they pass the critical time window during which these neural circuits normally form. Indeed, about 3 out of every 1,000 infants are born with severe or profound hearing loss (Cunningham and Cox, 2003), often caused by impaired development of the cochlea (the auditory part of the inner ear). For many of these children, cochlear implantation at a very young age may be critical. However, any surgical procedure carries risks, and parents may choose to delay surgery until the child is older. It is therefore critical to know which children will benefit and what age is ideal for cochlear implantation. But how can we know which children will gain the most from this procedure?

Recently, Tan and colleagues tackled this question using a statistical technique called "machine learning" to predict which children would gain the best language skills within two years of implantation. How does machine learning work? As applied to neuroimaging data, the technique commonly involves an algorithm that first has to "learn" how healthy brains activate during a specific task, in this case speech processing. For this step, researchers feed functional magnetic resonance imaging (fMRI) scans from the brains of healthy, typically hearing individuals into the algorithm. In the second stage, the algorithm builds a map of how the healthy brain functions based on what it learned. Finally, researchers feed in a single brain scan from the person they want to test. Because the algorithm has learned what healthy brain scans look like, it categorizes, or "knows," whether this new scan looks similar to or different from the scans of healthy individuals. Different is not necessarily bad, by the way: a scan may land in the "different" category simply because of consistent age-dependent changes in how the brain functions during the task.
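To make the train-then-classify idea above concrete, here is a deliberately tiny sketch in Python. Each "scan" is reduced to a short list of made-up activation numbers (not real fMRI data), and a simple nearest-centroid rule stands in for the study's far more sophisticated algorithm:

```python
# Toy sketch of the workflow described above: average the healthy scans
# into a template, then label a new scan by its distance to that template.
# All numbers are hypothetical; real fMRI data has many thousands of features.

def train(healthy_scans):
    """Steps 1-2: 'learn' the typical pattern by averaging healthy scans."""
    n = len(healthy_scans)
    return [sum(scan[i] for scan in healthy_scans) / n
            for i in range(len(healthy_scans[0]))]

def distance(scan, template):
    """Euclidean distance between a scan and the learned template."""
    return sum((a - b) ** 2 for a, b in zip(scan, template)) ** 0.5

def classify(scan, template, threshold):
    """Step 3: label a new scan 'similar' or 'different'."""
    return "similar" if distance(scan, template) <= threshold else "different"

# Hypothetical activation values during the speech-listening task
healthy = [[0.9, 0.8, 0.1], [1.0, 0.7, 0.2], [0.8, 0.9, 0.1]]
template = train(healthy)

print(classify([0.9, 0.8, 0.1], template, threshold=0.5))  # similar
print(classify([0.1, 0.2, 0.9], template, threshold=0.5))  # different
```

The point of the sketch is the two-stage structure, not the specific rule: first a model of "typical" is learned from many healthy scans, then each new scan is compared against that model.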

The cochlea sits within the inner ear. A cochlear implant has several parts, as illustrated here: (1) an external microphone that detects sounds, (2) a processor that translates sounds into electrical signals, and (3) a magnetic headpiece that transmits these signals to a receiver implanted under the skin. The receiver sends the signals to an array of electrodes surgically inserted into the cochlea (illustrated near her right cheek). These electrodes stimulate the auditory nerve, which relays the information to the brain (the auditory cortex, as labeled in the image), where the sound is interpreted and translated into meaning. Image and caption source:

In Tan's study, brain scans were first collected from typically developing children with intact hearing to train the algorithm. Hearing-impaired children who were candidates for cochlear implantation, on average only 20 months old, were scanned before the procedure. To define the neural networks that underlie speech processing, the researchers had all children listen to stories while inside the MRI scanner. The authors then used a new variant of machine learning to train and test an algorithm to "learn" what healthy brain maps for speech look like. Interestingly, the algorithm successfully discriminated the brain scans of the hearing-impaired children from those of healthy children. These results are exciting because the modified machine learning approach implemented by the authors performed better than conventional approaches. Perhaps even more exciting, the brain activation patterns in the hearing-impaired children before cochlear implantation were predictive of their language performance two years after the surgery! One limitation of this specific study is that the healthy children used to build the classifier were 12 months old, and thus younger than many of the candidates for implantation, which may affect the classification, as mentioned above.
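The outcome-prediction step described above can also be illustrated with a toy example: fitting a line from a single pre-implant "activation score" to a language score measured two years after surgery. All numbers below are invented for illustration, and the single-feature linear fit is a stand-in for the study's actual analysis:

```python
# Toy sketch of outcome prediction: fit a straight line relating a
# hypothetical pre-implant brain activation score to a language score
# measured two years after surgery, then predict for a new child.

def fit_line(x, y):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return slope, mean_y - slope * mean_x

# Hypothetical training data: activation scores and 2-year language scores
activation = [0.2, 0.4, 0.6, 0.8]
language = [55, 65, 75, 85]

slope, intercept = fit_line(activation, language)

def predict(a):
    """Predicted language score for a child with activation score a."""
    return slope * a + intercept

print(round(predict(0.5)))  # prints 70 for this made-up data
```

The takeaway is the direction of the inference: a measurement taken before surgery is used to forecast an outcome measured long afterward, which is what makes the approach potentially useful for counseling families.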

Altogether, the study by Tan and colleagues highlights the potential of fMRI scans for identifying hearing-impaired children who will benefit the most from cochlear implantation. In particular, machine learning algorithms may assist clinicians in predicting outcomes for potential candidates for this medical procedure. However, before this method can be used clinically, the results of the cochlear implantation study, and of others using this approach, need to be replicated and validated. But based on the present results, one thing is for sure: the future for these children sounds promising.

Tzipi Horowitz-Kraus is a neuroscientist, a member of the Organization for Human Brain Mapping (OHBM), and a writer for the Communications/Media Team. The OHBM Media Team brings cutting-edge information and research on the human brain to your laptops, desktops and mobile devices in a way that is neurobiologically pleasing. For more information about brain mapping, follow @OHBMSci_News

