More stories

  •

    To perceive faces, your brain relies on a process similar to face recognition systems

    Imagine if every time you looked at a face, one side of it always appeared distorted, as if it were melting, resembling a painting by Salvador Dalí. This is the case for people who have a rare condition known as hemi-prosopometamorphopsia (hemi-PMO), which makes looking at faces unsettling. According to a new study published in Current Biology, some people with hemi-PMO see distortions on the same half of a person’s face regardless of how the face is viewed. The results demonstrate that our visual system standardizes all the faces we perceive using the same process, so that they can be better compared with faces we have seen before.
    “Every time we see a face, the brain adjusts our representation of that face so its size, viewpoint, and orientation is matched to faces stored in memory, just like computer face recognition systems such as those used by Facebook and Google,” explains co-author Brad Duchaine, a professor of psychological and brain sciences and the principal investigator of the Social Perception Lab at Dartmouth College. “By aligning the perceived face with faces stored in memory, it’s much easier for us to determine whether the face is one we’ve seen before,” he added.
    Hemi-PMO is a rare disorder that may occur after brain damage. When a person with this condition looks at a face, facial features on one side of the face appear distorted. The existence of hemi-PMO suggests the two halves of the face are processed separately. The condition usually dissipates over time, which makes it difficult to study. As a result, little is known about the condition or what it reveals about how human face processing normally works.
    The current study focused on a right-handed man in his early sixties (“Patient A.D.”) with hemi-PMO whose symptoms have persisted for years. Like many with this condition, his distortions were caused by damage to a fiber bundle called the splenium that connects visual areas in the left hemisphere and right hemisphere of his brain. Five years ago while A.D. was watching television, he noticed that the right halves of people’s faces looked like they had melted. Yet, the left sides of their faces looked normal. He looked in the mirror at his own face and noticed that the right side of his reflection was also distorted. In contrast, A.D. sees no distortions in other body parts or objects.
    The study involved two experiments. In the first, A.D. was presented with images of human faces and non-face images such as objects, houses and cars, and asked to report any distortions. For 17 of the 20 faces, he saw distortions. The distortions were always on the right side of the face, and facial features usually appeared to droop. For example, in one of the faces, A.D. reported that the right eye looked much bigger than the left eye, while the right eyebrow, right side of the nose, and right side of the lips all hung down unnaturally. Two of the face photographs that did not elicit a distortion showed right profile views in which the right side of the face was not visible. Consistent with his daily experiences, A.D. did not see distortions in any of the non-face images. These results show that his condition affects brain processes specialized for faces.
    For the second part of the study, A.D. reported on distortions he saw in 15 different faces presented in a variety of ways: in the left and right visual fields, at different rotations in depth, and at four picture-plane rotations (0 degrees, or upright; 90 degrees; 180 degrees, or upside down; and 270 degrees). Regardless of how the faces were presented, A.D. continued to report that the distortions affected the same facial features. For example, even when a face was presented upside down, he still saw the features on the right side of the face as distorted, even though the distortion now appeared on the left-hand side of the stimulus. The consistency of the distortion’s location demonstrates that faces, regardless of viewpoint or orientation, are aligned to a common template, much as computer face recognition systems align them. In A.D.’s case, the output of that alignment process is disrupted as it passes from one brain hemisphere to the other because of his splenium lesion.
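    The alignment step the authors invoke can be made concrete with a toy example. The Python sketch below estimates a similarity transform (rotation, uniform scale, translation) that maps two detected eye landmarks onto fixed template positions; every coordinate and name here is an illustrative assumption, not anything taken from the study or from any particular face recognition system.

```python
import numpy as np

# Canonical template: where the eyes sit in an aligned face crop.
# All coordinates are illustrative, not from the study.
TEMPLATE_LEFT_EYE = np.array([54.0, 64.0])
TEMPLATE_RIGHT_EYE = np.array([106.0, 64.0])

def similarity_transform(left_eye, right_eye):
    """Return the 2x3 matrix (rotation + uniform scale + translation)
    that maps detected eye landmarks onto the template positions."""
    src = right_eye - left_eye
    dst = TEMPLATE_RIGHT_EYE - TEMPLATE_LEFT_EYE
    scale = np.linalg.norm(dst) / np.linalg.norm(src)
    angle = np.arctan2(dst[1], dst[0]) - np.arctan2(src[1], src[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    shift = TEMPLATE_LEFT_EYE - rot @ left_eye
    return np.hstack([rot, shift[:, None]])

# Every incoming face, whatever its size, tilt, or position, is warped
# with such a matrix before being compared against stored faces.
M = similarity_transform(np.array([70.0, 90.0]), np.array([120.0, 80.0]))
print(M)
```

    Comparing faces in this shared template space is what makes recognition robust to size, viewpoint and orientation; the study suggests it is the output of an analogous normalization stage that is disrupted on its way between A.D.’s hemispheres.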

    Story Source:
    Materials provided by Dartmouth College. Note: Content may be edited for style and length.

  •

    Future mental health care may include diagnosis via brain scan and computer algorithm

    Most of modern medicine has physical tests or objective techniques to define much of what ails us. Yet there is currently no blood test, genetic test, or other objective procedure that can definitively diagnose a mental illness, and certainly none that can distinguish between different psychiatric disorders with similar symptoms. Experts at the University of Tokyo are combining machine learning with brain imaging tools to redefine the standard for diagnosing mental illnesses.
    “Psychiatrists, including me, often talk about symptoms and behaviors with patients and their teachers, friends and parents. We only meet patients in the hospital or clinic, not out in their daily lives. We have to make medical conclusions using subjective, secondhand information,” explained Dr. Shinsuke Koike, M.D., Ph.D., an associate professor at the University of Tokyo and a senior author of the study recently published in Translational Psychiatry.
    “Frankly, we need objective measures,” said Koike.
    Challenge of overlapping symptoms
    Other researchers have designed machine learning algorithms to distinguish between those with a mental health condition and nonpatients who volunteer as “controls” for such experiments.
    “It’s easy to tell who is a patient and who is a control, but it is not so easy to tell the difference between different types of patients,” said Koike.

    The UTokyo research team says theirs is the first study to differentiate between multiple psychiatric diagnoses, including autism spectrum disorder and schizophrenia. Although the two conditions are depicted very differently in popular culture, scientists have long suspected they are somehow linked.
    “Autism spectrum disorder patients have a 10-times higher risk of schizophrenia than the general population. Social support is needed for autism, but generally the psychosis of schizophrenia requires medication, so distinguishing between the two conditions or knowing when they co-occur is very important,” said Koike.
    Computer converts brain images into a world of numbers
    A multidisciplinary team of medical and machine learning experts trained their computer algorithm using MRI (magnetic resonance imaging) brain scans of 206 Japanese adults: patients already diagnosed with autism spectrum disorder or schizophrenia, individuals considered at high risk for schizophrenia, individuals who had experienced a first episode of psychosis, and neurotypical people with no mental health concerns. All of the volunteers with autism were men, but there was a roughly equal number of male and female volunteers in the other groups.
    Machine learning uses statistics to find patterns in large amounts of data. These programs find similarities within groups, and differences between groups, that occur too often to be easily dismissed as coincidence. This study used six different algorithms to distinguish between the MRI images of the different patient groups.
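    The summary does not name the six algorithms, so the sketch below uses generic scikit-learn classifiers as stand-ins to show the shape of such a comparison: anatomical measurements in, diagnostic label out, with cross-validated accuracy as the yardstick. The data are random placeholders, not the study’s MRI features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one row per participant, columns standing in for
# regional thickness / surface-area / volume measurements from MRI.
rng = np.random.default_rng(0)
X = rng.normal(size=(206, 300))    # invented features
y = rng.integers(0, 4, size=206)   # invented labels (four groups)

# Generic stand-ins for the study's (unnamed here) six algorithms.
candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "support vector machine": SVC(),
    "random forest": RandomForestClassifier(),
}
for name, clf in candidates.items():
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} cross-validated accuracy")
```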

    The algorithms learned to associate different psychiatric diagnoses with variations in the thickness, surface area or volume of brain regions in the MRI images. It is not yet known why particular physical differences in the brain so often accompany specific mental health conditions.
    Broadening the thin line between diagnoses
    After the training period, the algorithm was tested with brain scans from 43 additional patients. The machine’s diagnosis matched the psychiatrists’ assessments with high reliability and up to 85 percent accuracy.
    Importantly, the machine learning algorithm could distinguish between nonpatients, patients with autism spectrum disorder, and patients with either schizophrenia or schizophrenia risk factors.
    Machines help shape the future of psychiatry
    The research team notes that the success of distinguishing between the brains of nonpatients and individuals at risk for schizophrenia may reveal that the physical differences in the brain that cause schizophrenia are present even before symptoms arise and then remain consistent over time.
    The research team also noted that the thickness of the cerebral cortex, the top 1.5 to 5 millimeters of the brain, was the most useful feature for correctly distinguishing between individuals with autism spectrum disorder, individuals with schizophrenia and typical individuals. This reveals an important aspect of the role cortical thickness plays in distinguishing between psychiatric disorders and may direct future studies toward understanding the causes of mental illness.
    Although the research team trained their machine learning algorithm using brain scans from approximately 200 individuals, all of the data were collected between 2010 and 2013 on one MRI machine, which ensured the images were consistent.
    “If you take a photo with an iPhone or Android camera phone, the images will be slightly different. MRI machines are also like this — each MRI takes slightly different images, so when designing new machine learning protocols like ours, we use the same MRI machine and the exact same MRI procedure,” said Koike.
    Now that their machine learning algorithm has proven its value, the researchers plan to begin using larger datasets and hope to coordinate multisite studies to train the program to work regardless of differences between MRI machines.

  •

    Energy-efficient tuning of spintronic neurons

    The human brain efficiently executes highly sophisticated tasks, such as image and speech recognition, on a far smaller energy budget than today’s computers require. The development of energy-efficient and tunable artificial neurons capable of emulating brain-inspired processes has therefore been a major research goal for decades.
    Researchers at the University of Gothenburg and Tohoku University jointly reported on an important experimental advance in this direction, demonstrating a novel voltage-controlled spintronic microwave oscillator capable of closely imitating the non-linear oscillatory neural networks of the human brain.
    The research team developed a voltage-controlled spintronic oscillator whose properties can be strongly tuned with negligible energy consumption. “This is an important breakthrough, as these so-called spin Hall nano-oscillators (SHNOs) can act as interacting oscillator-based neurons but have so far lacked an energy-efficient tuning scheme — an essential prerequisite for training neural networks for cognitive neuromorphic tasks,” said Shunsuke Fukami, co-author of the study. “The technology can also be extended to tune the synaptic interactions between each pair of spintronic neurons in a large, complex oscillatory neural network.”
    Earlier this year, the Johan Åkerman group at the University of Gothenburg demonstrated, for the first time, 2D mutually synchronized arrays accommodating 100 SHNOs in an area of less than a square micron. Such a network can mimic neuron interactions in our brain and carry out cognitive tasks. However, a major bottleneck in training these artificial neurons to produce different responses to different inputs has been the lack of a scheme for controlling individual oscillators inside such networks.
    The Johan Åkerman group teamed up with Hideo Ohno and Shunsuke Fukami at Tohoku University to develop a bow-tie-shaped spin Hall nano-oscillator made from an ultrathin W/CoFeB/MgO material stack, with the added functionality of a voltage-controlled gate over the oscillating region. Using an effect called voltage-controlled magnetic anisotropy (VCMA), the magnetic and magnetodynamic properties of the CoFeB ferromagnet, which is only a few atomic layers thick, can be directly controlled to modify the microwave frequency, amplitude, damping and, thus, the threshold current of the SHNO.
    The researchers also found a giant modulation of the SHNO damping, up to 42%, for gate voltages between -3 and +1 V in the bow-tie geometry. The demonstrated approach is therefore capable of independently turning individual oscillators on and off within a large synchronized oscillatory network driven by a single global drive current. The findings are also valuable because they reveal a new mechanism of energy relaxation in patterned magnetic nanostructures.
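    Two textbook scalings, sketched schematically below, suggest why such a damping swing can gate oscillators: the gate voltage shifts the interfacial anisotropy field (tuning the frequency), while the threshold current for auto-oscillation grows with the effective damping, so at a fixed global drive current a 42% damping change can push an individual SHNO above or below threshold. These are generic VCMA and auto-oscillator relations, not the model fitted in the paper:

```latex
% Generic scalings only; \beta (the VCMA coefficient) and the layer
% thicknesses t_{FM}, t_{MgO} are schematic symbols, not fitted values.
H_k(V) \approx H_k(0) - \frac{2\,\beta\,V}{\mu_0 M_s\, t_{\mathrm{FM}}\, t_{\mathrm{MgO}}},
\qquad
I_{\mathrm{th}}(V) \propto \alpha_{\mathrm{eff}}(V)
```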
    Fukami notes that “With readily available energy-efficient independent control of the dynamical state of individual spintronic neurons, we hope to efficiently train large SHNO networks to carry out complex neuromorphic tasks and scale up oscillator-based neuromorphic computing schemes to much larger network sizes.”

    Story Source:
    Materials provided by Tohoku University. Note: Content may be edited for style and length.

  •

    Computer scientists set benchmarks to optimize quantum computer performance

    Computer scientists have shown that existing compilers, which tell quantum computers how to use their circuits to execute quantum programs, inhibit the computers’ ability to achieve optimal performance. Specifically, their research has revealed that improving quantum compilation design could help achieve computation speeds up to 45 times faster than currently demonstrated.

  •

    An AI algorithm to help identify homeless youth at risk of substance abuse

    While many programs and initiatives have been implemented to address the prevalence of substance abuse among homeless youth in the United States, they don’t always include data-driven insights about environmental and psychological factors that could contribute to an individual’s likelihood of developing a substance use disorder.
    Now, an artificial intelligence (AI) algorithm developed by researchers at the College of Information Sciences and Technology at Penn State could help predict susceptibility to substance use disorder among young homeless individuals, and suggest personalized rehabilitation programs for highly susceptible homeless youth.
    “Proactive prevention of substance use disorder among homeless youth is much more desirable than reactive mitigation strategies such as medical treatments for the disorder and other related interventions,” said Amulya Yadav, assistant professor of information sciences and technology and principal investigator on the project. “Unfortunately, most previous attempts at proactive prevention have been ad-hoc in their implementation.”
    “To assist policymakers in devising effective programs and policies in a principled manner, it would be beneficial to develop AI and machine learning solutions which can automatically uncover a comprehensive set of factors associated with substance use disorder among homeless youth,” added Maryam Tabar, a doctoral student in informatics and lead author on the project paper that will be presented at the Knowledge Discovery in Databases (KDD) conference in late August.
    In that project, the research team built the model using a dataset collected from approximately 1,400 homeless youth, ages 18 to 26, in six U.S. states. The dataset was collected by the Research, Education and Advocacy Co-Lab for Youth Stability and Thriving (REALYST), which includes Anamika Barman-Adhikari, assistant professor of social work at the University of Denver and co-author of the paper.
    The researchers then identified environmental, psychological and behavioral factors associated with substance use disorder among these youth — such as criminal history, victimization experiences and mental health characteristics. They found that adverse childhood experiences and physical street victimization were more strongly associated with substance use disorder than other types of victimization (such as sexual victimization) among homeless youth. Additionally, PTSD and depression were found to be more strongly associated with substance use disorder than other mental health disorders among this population, according to the researchers.

    Next, the researchers divided their dataset into six smaller datasets to analyze geographical differences. The team trained a separate model to predict substance use disorder among homeless youth in each of the six states, which have varying environmental conditions, drug legalization policies and gang associations. The team observed several location-specific variations in how strongly particular factors were associated with the disorder, according to Tabar.
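    A minimal sketch of that per-state modeling, assuming a hypothetical CSV export of the REALYST survey (the file name, column names, and choice of classifier are all invented for illustration):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical export: one row per youth, a 'state' column for the
# collection site, a binary 'sud' label, and the remaining columns
# holding environmental/psychological factors. Names are invented.
df = pd.read_csv("realyst_survey.csv")
features = df.columns.difference(["state", "sud"])

# One model per state, so factor associations can vary with local
# conditions (drug policy, gang presence, available services).
for state, group in df.groupby("state"):
    clf = GradientBoostingClassifier()
    scores = cross_val_score(clf, group[features], group["sud"], cv=5)
    print(f"{state}: mean CV accuracy {scores.mean():.2f}")
```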
    “By looking at what the model has learned, we can effectively find out factors which may play a correlational role with people suffering from substance abuse disorder,” said Yadav. “And once we know these factors, we are much more accurately able to predict whether somebody suffers from substance use.”
    He added, “So if a policy planner or interventionist were to develop programs that aim to reduce the prevalence of substance abuse disorder, this could provide useful guidelines.”
    Other authors on the KDD paper include Dongwon Lee, associate professor, and Stephanie Winkler, doctoral student, both in the Penn State College of Information Sciences and Technology; and Heesoo Park of Sungkyunkwan University.
    Yadav and Barman-Adhikari are collaborating on a similar project through which they have developed a software agent that designs personalized rehabilitation programs for homeless youth suffering from opioid addiction. Their simulation results show that the software agent — called CORTA (Comprehensive Opioid Response Tool Driven by Artificial Intelligence) — outperforms baselines by approximately 110% in minimizing the number of homeless youth suffering from opioid addiction.

    “We wanted to understand what the causative issues are behind people developing opiate addiction,” said Yadav. “And then we wanted to assign these homeless youth to the appropriate rehabilitation program.”
    Yadav explained that data collected from more than 1,400 homeless youth in the U.S. were used to build AI models to predict the likelihood of opioid addiction among this population. After examining issues that could be the underlying cause of opioid addiction — such as foster care history or exposure to street violence — CORTA solves novel optimization formulations to assign personalized rehabilitation programs.
    “For example, if a person developed an opioid addiction because they were isolated or didn’t have a social circle, then perhaps as part of their rehabilitation program they should talk to a counselor,” explained Yadav. “On the other hand, if someone developed an addiction because they were depressed because they couldn’t find a job or pay their bills, then a career counselor should be a part of the rehabilitation plan.”
    Yadav added, “If you just treat the condition medically, once they go back into the real world, since the causative issue still remains, they’re likely to relapse.”
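    CORTA’s actual formulation is not spelled out here, but the assignment step Yadav describes can be pictured as a small matching problem: pair each youth with the program the model expects to help them most. A toy sketch under that assumption, with invented benefit estimates:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# benefit[i, j]: model-estimated reduction in relapse risk if youth i
# is placed in program j. All numbers are invented for illustration.
programs = ["counseling", "career services", "medical treatment"]
benefit = np.array([
    [0.30, 0.10, 0.05],   # isolated youth: counseling helps most
    [0.05, 0.35, 0.10],   # unemployed youth: career services
    [0.10, 0.05, 0.25],   # medically driven case
])

# Maximize total benefit = minimize its negation; one youth per slot.
rows, cols = linear_sum_assignment(-benefit)
for i, j in zip(rows, cols):
    print(f"youth {i} -> {programs[j]} (est. benefit {benefit[i, j]:.2f})")
```

    The paper’s title signals that the real formulation also enforces non-discrimination and other constraints, which a toy matcher like this omits.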
    Yadav and Barman-Adhikari will present their paper on CORTA, “Optimal and Non-Discriminative Rehabilitation Program Design for Opioid Addiction Among Homeless Youth,” at the International Joint Conference on Artificial Intelligence-Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI), which was to be held in July 2020 but is being rescheduled due to the novel coronavirus pandemic.
    Other collaborators on the CORTA project include Penn State doctoral students Roopali Singh (statistics), Nikolas Siapoutis (statistics) and Yu Liang (informatics).

  •

    Linking sight and movement

    To get a better look at the world around them, animals are constantly in motion. Primates and people use complex eye movements to focus their vision (as humans do when reading, for instance); birds, insects, and rodents do the same by moving their heads, and can even estimate distances that way. Yet how these movements play out in the elaborate circuitry of neurons that the brain uses to “see” is largely unknown. That gap could become a problem as scientists create artificial neural networks that mimic how vision works, such as those used in self-driving cars.
    To better understand the relationship between movement and vision, a team of Harvard researchers looked at what happens in one of the brain’s primary regions for analyzing imagery when animals are free to roam naturally. The results of the study, published Tuesday in the journal Neuron, suggest that image-processing circuits in the primary visual cortex are not only more active when animals move, but also receive signals from a movement-controlling region of the brain that is independent of the region that processes what the animal is looking at. In fact, the researchers describe two sets of movement-related patterns in the visual cortex, based on head motion and on whether an animal is in the light or the dark.
    The movement-related findings were unexpected, since vision tends to be thought of as a feed-forward system in which visual information enters through the retina and travels along one-way neural circuits that process the information piece by piece. What the researchers saw here is more evidence that the visual system has far more feedback components, in which information travels in the opposite direction, than had been thought.
    These results offer a nuanced glimpse into how neural activity works in a sensory region of the brain, and they add to a growing body of research that is rewriting the textbook model of vision.
    “It was really surprising to see this type of [movement-related] information in the visual cortex because traditionally people have thought of the visual cortex as something that only processes images,” said Grigori Guitchounts, a postdoctoral researcher in the Neurobiology Department at Harvard Medical School and the study’s lead author. “It was mysterious, at first, why this sensory region would have this representation of the specific types of movements the animal was making.”
    While the scientists weren’t able to definitively say why this happens, they believe it has to do with how the brain perceives what’s around it.

    “The model explanation for this is that the brain somehow needs to coordinate perception and action,” Guitchounts said. “You need to know when a sensory input is caused by your own action as opposed to when it’s caused by something out there in the world.”
    For the study, Guitchounts teamed up with former Department of Molecular and Cellular Biology Professor David Cox, alumnus Javier Masis, M.A. ’15, Ph.D. ’18, and postdoctoral researcher Steffen B.E. Wolff. The work started in 2017 and wrapped up in 2019, while Guitchounts was a graduate researcher in Cox’s lab. A preprint version of the paper was published in January.
    The typical setup of past experiments on vision worked like this: Animals, like mice or monkeys, were sedated, restrained so their heads were in fixed positions, and then given visual stimuli, like photographs, so researchers could see which neurons in the brain reacted. The approach was pioneered by Harvard scientists David H. Hubel and Torsten N. Wiesel in the 1960s, and in 1981 they won a Nobel Prize in medicine for their efforts. Many experiments since then have followed their model, but it could not illuminate how movement affects the neurons that analyze visual information.
    Researchers in this latest experiment wanted to explore that, so they watched 10 rats going about their days and nights. The scientists placed each rat in an enclosure, which doubled as its home, and continuously recorded its head movements. Using implanted electrodes, they measured the brain activity in the primary visual cortex as the rats moved.
    Half of the recordings were taken with the lights on. The other half were recorded in total darkness. The researchers wanted to compare what the visual cortex was doing when there was visual input versus when there wasn’t. To be sure the room was pitch black, they taped shut any crevice that could let in light, since rats have notoriously good vision at night.

    The data showed that, on average, neurons in the rats’ visual cortices were more active when the animals moved than when they rested, even in the dark. That caught the researchers off guard: in a pitch-black room there is no visual data to process, so the activity had to be driven by the animals’ own movements rather than by an external image.
    The team also noticed that the neural patterns firing in the visual cortex during movement differed in the dark and in the light, meaning the two sets of patterns were not directly connected. Some neurons that were ready to activate in the dark were in a kind of sleep mode in the light.
    Using a machine-learning algorithm, the researchers decoded both patterns. That let them not only tell which way a rat was moving its head just by looking at the neural activity in its visual cortex, but also predict the movement several hundred milliseconds before the rat made it.
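    A minimal sketch of that kind of decoding analysis, with random placeholder arrays standing in for the recorded spike counts and head movements (the study’s actual algorithm and data are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholders for the recordings: spike counts (trials x neurons)
# from visual cortex in a window ending a few hundred milliseconds
# BEFORE movement onset, and the direction of the upcoming head turn.
rng = np.random.default_rng(1)
spikes = rng.poisson(5.0, size=(500, 80)).astype(float)
head_turn = rng.integers(0, 2, size=500)   # 0 = left, 1 = right

decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, spikes, head_turn, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")
# Accuracy reliably above chance on pre-movement activity is what
# supports the claim that the movement can be predicted in advance.
```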
    The researchers confirmed that the movement signals came from the motor area of the brain by focusing on the secondary motor cortex. They surgically destroyed it in several rats, then ran the experiments again. The rats in which this area was lesioned no longer gave off movement-related signals in the visual cortex. However, the researchers were not able to determine whether the signal originates in the secondary motor cortex; it could merely pass through there, they said.
    Furthermore, the scientists pointed out some limitations of their findings. For instance, they measured only head movements, not eye movements. The study is also based on rodents, which are nocturnal; their visual systems share similarities with those of humans and primates but differ in complexity. Still, the paper adds to new lines of research, and the findings could potentially be applied to neural networks that control machine vision, like those in autonomous vehicles.
    “It’s all to better understand how vision actually works,” Guitchounts said. “Neuroscience is entering into a new era where we understand that perception and action are intertwined loops. … There’s no action without perception and no perception without action. We have the technology now to measure this.”
    This work was supported by the Harvard Center for Nanoscale Systems and the National Science Foundation Graduate Research Fellowship.

  •

    This online calculator can predict your stroke risk

    Doctors can predict patients’ risk for ischemic stroke based on the severity of their metabolic syndrome, a conglomeration of conditions that includes high blood pressure, abnormal cholesterol levels and excess body fat around the abdomen and waist, a new study finds.
    The study found that stroke risk increased consistently with metabolic syndrome severity even in patients without diabetes. Doctors can use this information — and a scoring tool developed by a UVA Children’s pediatrician and his collaborator at the University of Florida — to identify patients at risk and help them reduce that risk.
    “We had previously shown that the severity of metabolic syndrome was linked to future coronary heart disease and type 2 diabetes,” said UVA’s Mark DeBoer, MD. “This study showed further links to future ischemic strokes.”
    Ischemic Stroke Risk
    DeBoer developed the scoring tool, an online calculator to assess the severity of metabolic syndrome, with Matthew J. Gurka, PhD, of the Department of Health Outcomes and Biomedical Informatics at the University of Florida, Gainesville. The tool is available for free at https://metscalc.org/.
    To evaluate the association between ischemic stroke and metabolic syndrome, DeBoer and Gurka reviewed data on more than 13,000 participants in prior studies and their stroke outcomes. Among that group, there were 709 ischemic strokes over a mean follow-up period of 18.6 years. (Ischemic strokes occur when blood flow to the brain is obstructed by blood clots or clogged arteries. Hemorrhagic strokes, on the other hand, occur when blood vessels rupture.)
    The researchers used their tool to calculate “Z scores” measuring the severity of metabolic syndrome among the study participants. They could then analyze the association between metabolic syndrome and ischemic stroke risk.
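    Conceptually, a severity Z score of this kind standardizes each component of the syndrome against population norms and combines them into one number. The sketch below is schematic only: the weights and norms are invented, and the published score uses sex- and race-specific coefficients available through the calculator above.

```python
# Schematic only: the real MetS severity score uses sex- and
# race-specific coefficients; all weights and norms here are invented.
WEIGHTS = {"waist": 0.25, "triglycerides": 0.20, "hdl": -0.20,
           "systolic_bp": 0.20, "glucose": 0.25}
NORMS = {"waist": (94.0, 13.0), "triglycerides": (120.0, 60.0),
         "hdl": (52.0, 14.0), "systolic_bp": (122.0, 14.0),
         "glucose": (97.0, 12.0)}  # (population mean, SD), illustrative

def mets_z(measurements):
    """Weighted sum of standardized components; higher = more severe.
    HDL gets a negative weight because higher HDL is protective."""
    return sum(w * (measurements[k] - NORMS[k][0]) / NORMS[k][1]
               for k, w in WEIGHTS.items())

print(mets_z({"waist": 108, "triglycerides": 190, "hdl": 38,
              "systolic_bp": 138, "glucose": 110}))
```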
    The subgroup with the highest association between metabolic syndrome and risk for ischemic stroke was white women, the researchers found. In this group, the research team was able to identify relationships between the individual contributors to metabolic syndrome, such as high blood pressure, and stroke risk.
    The researchers note that race and sex did not seem to make a major difference in stroke risk overall, and they caution that the increased risk seen in white women could be the result of chance alone. “Nevertheless,” they write in a new scientific article outlining their findings, “these results are notable enough that they may warrant further study into race and sex differences.”
    The overall relationship between metabolic syndrome severity and stroke risk was clear, however. And this suggests people with metabolic syndrome can make lifestyle changes to reduce that risk. Losing weight, exercising more, choosing healthy foods — all can help address metabolic syndrome and its harmful effects.
    DeBoer hopes that the tool he and Gurka developed will help doctors guide patients as they seek to reduce their stroke risk and improve their health and well-being.
    “In case there are still individuals out there debating whether to start exercising or eating a healthier diet,” DeBoer said, “this study provides another wake-up call to motivate us all toward lifestyle changes.”

    Story Source:
    Materials provided by University of Virginia Health System. Note: Content may be edited for style and length.