More stories

  • Screen time can change visual perception — and that's not necessarily bad

    The coronavirus pandemic has shifted many of our interactions online, with Zoom video calls replacing in-person classes, work meetings, conferences and other events. Will all that screen time damage our vision?
    Maybe not. It turns out that our visual perception is highly adaptable, according to research from Psychology Professor and Cognitive and Brain Sciences Coordinator Peter Gerhardstein’s lab at Binghamton University.
    Gerhardstein, Daniel Hipp and Sara Olsen — his former doctoral students — will publish “Mind-Craft: Exploring the Effect of Digital Visual Experience on Changes in Orientation Sensitivity in Visual Contour Perception” in an upcoming issue of the academic journal Perception. Hipp, the lead author and main originator of the research, is now at the VA Eastern Colorado Health Care System’s Laboratory for Clinical and Translational Research. Olsen, who designed stimuli for the research and aided in the analysis of the results, is now at the University of Minnesota’s Department of Psychiatry.
    “The finding in the work is that the human perceptual system rapidly adjusts to a substantive alteration in the statistics of the visual world, which, as we show, is what happens when someone is playing video games,” Gerhardstein said.
    The experiments
    The research focuses on a basic element of vision: our perception of orientation in the environment.

    Take a walk through the Binghamton University Nature Preserve and look around. Stimuli — trees, branches, bushes, the path — are oriented at many different angles. According to an analysis by Hipp, there is a slight predominance of horizontal and then vertical planes — think of the ground and the trees — but no shortage of oblique angles.
    Then consider the “carpentered world” of a cityscape — downtown Binghamton, perhaps. The percentage of horizontal and vertical orientations increases dramatically, while the obliques fall away. Buildings, roofs, streets, lampposts: The cityscape is a world of sharp angles, like the corner of a rectangle. The digital world ramps up the predominance of the horizontal and vertical planes, Gerhardstein explained.
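    The orientation statistics described above can be sketched computationally. The following is a hedged illustration (not Hipp's actual analysis): it estimates an image's edge-orientation distribution from finite-difference gradients with NumPy, and a synthetic "carpentered" rectangle should concentrate nearly all edge energy in the horizontal and vertical bins.

```python
import numpy as np

def orientation_histogram(img, n_bins=4):
    """Estimate the distribution of edge orientations in a grayscale image.

    Gradients come from simple finite differences; each pixel's edge
    orientation is binned into n_bins classes (here: horizontal,
    45-degree oblique, vertical, 135-degree oblique), weighted by
    edge strength.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    # Edge orientation is perpendicular to the gradient direction.
    angle = (np.arctan2(gy, gx) + np.pi / 2) % np.pi  # in [0, pi)
    bins = np.round(angle / (np.pi / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    for b in range(n_bins):
        hist[b] = mag[bins == b].sum()
    return hist / hist.sum()

# A "carpentered world" test image: a bright rectangle on a dark
# background produces almost exclusively horizontal and vertical edges.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
hist = orientation_histogram(img)  # [horizontal, 45deg, vertical, 135deg]
```

    Run on a natural-scene photograph instead, the oblique bins would carry substantially more weight, which is the contrast the article describes.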
    Research shows that we tend to pay more attention to horizontal and vertical orientations, at least in the lab; in real-world environments, these differences probably aren’t noticeable, although they likely still drive behavior. Painters, for example, tend to exaggerate these distinctions in their work, a focus of a different research group.
    Orientation is a fundamental aspect of how our brain and eyes work together to build the visual world. Interestingly, it’s not fixed; our visual system can adapt to changes swiftly, as the group’s two experiments show.
    The first experiment established an eye-tracking method that doesn’t require an overt response, such as touching a screen. In the second, college students played four hours of Minecraft — one of the most popular computer games in the world — and viewed visual stimuli before and after the session. Using the eye-tracking method from the first experiment, the researchers then measured how well subjects perceived stimuli in the oblique and vertical/horizontal orientations.

    A single session produced a clearly detectable change. While the screen-less control group showed no changes in their perception, the game-players detected horizontal and vertical orientations more easily. Neither group changed their perception in oblique orientations.
    We still don’t know how temporary these changes are, although Gerhardstein speculates that the vision of the game-playing research subjects likely returned to normal quickly.
    “So, the immediate takeaway is the impressive extent to which the young adult visual system can rapidly adapt to changes in the statistics of the visual environment,” he said.
    In the next phase of research, Gerhardstein’s lab will track the visual development of two groups of children, one assigned to regularly play video games and the other to avoid screen-time, including television. If the current experiment is any indication, there may be no significant differences, at least when it comes to orientation sensitivity. The pandemic has put in-person testing plans on hold, although researchers have given a survey about children’s playing habits to local parents and will use the results to design a study.
    Adaptive vision
    Other research groups that have examined the effects of digital exposure on other aspects of visual perception have concluded that long-term changes do take place, at least some of which are seen as helpful.
    Helpful? Like other organisms, humans tend to adapt fully to the environment they experience. The first iPhone came out in 2007 and the first iPad in 2010. Children who are around 10 to 12 years old have grown up with these devices, and will live and operate in a digital world as adults, Gerhardstein pointed out.
    “Is it adaptive for them to develop a visual system that is highly sensitive to this particular environment? Many would argue that it is,” he said. “I would instead suggest that a highly flexible system that can shift from one perceptual ‘set’ to another rapidly, so that observers are responding appropriately to the statistics of a digital environment while interacting with digital media, and then shifting to respond appropriately to the statistics of a natural scene or a cityscape, would be most adaptive.”

  • New detector breakthrough pushes boundaries of quantum computing

    Physicists at Aalto University and VTT Technical Research Centre of Finland have developed a new detector for measuring energy quanta at unprecedented resolution. This discovery could help bring quantum computing out of the laboratory and into real-world applications. The results have been published today in Nature.
    The type of detector the team works on is called a bolometer, which measures the energy of incoming radiation by how much it heats up the detector. Professor Mikko Möttönen’s Quantum Computing and Devices group at Aalto has been building its expertise in bolometers for quantum computing over the past decade, and has now produced a device that can match the current state-of-the-art detectors used in quantum computers.
    ‘It is amazing how we have been able to improve the specs of our bolometer year after year, and now we embark on an exciting journey into the world of quantum devices,’ says Möttönen.
    Measuring the energy of qubits is at the heart of how quantum computers operate. Most quantum computers currently measure a qubit’s energy state by measuring the voltage induced by the qubit. However, there are three problems with voltage measurements: firstly, measuring the voltage requires extensive amplification circuitry, which may limit the scalability of the quantum computer; secondly, this circuitry consumes a lot of power; and thirdly, the voltage measurements carry quantum noise which introduces errors in the qubit readout. Quantum computer researchers hope that by using bolometers to measure qubit energy, they can overcome all of these complications, and now Professor Möttönen’s team have developed one that is fast enough and sensitive enough for the job.
    ‘Bolometers are now entering the field of quantum technology and perhaps their first application could be in reading out the quantum information from qubits. The bolometer speed and accuracy seems now right for it,’ says Professor Möttönen.
    The team had previously produced a bolometer made of a gold-palladium alloy with unparalleled low noise levels in its measurements, but it was still too slow to measure qubits in quantum computers. The breakthrough in this new work was achieved by swapping from making the bolometer out of a gold-palladium alloy to making it out of graphene. To do this, they collaborated with Professor Pertti Hakonen’s NANO group — also at Aalto University — which has expertise in fabricating graphene-based devices. Graphene has a very low heat capacity, which means that it is possible to detect very small changes in its energy quickly. It is this speed in detecting energy differences that makes it perfect for a bolometer with applications in measuring qubits and other experimental quantum systems. By swapping to graphene, the researchers have produced a bolometer that can make measurements in well below a microsecond, as fast as the technology currently used to measure qubits.
    ‘Changing to graphene increased the detector speed by 100 times, while the noise level remained the same. After these initial results, there is still a lot of optimisation we can do to make the device even better,’ says Professor Hakonen.
    Now that the new bolometers can compete when it comes to speed, the hope is to utilise the other advantages bolometers have in quantum technology. While the bolometers reported in the current work perform on par with the current state-of-the-art voltage measurements, future bolometers have the potential to outperform them. Current technology is limited by Heisenberg’s uncertainty principle: voltage measurements will always have quantum noise, but bolometers do not. This higher theoretical accuracy, combined with the lower energy demands and smaller size — the graphene flake could fit comfortably inside a single bacterium — means that bolometers are an exciting new device concept for quantum computing.
    The next steps for their research are to resolve the smallest energy packets ever observed using bolometers in real time, and to use the bolometer to measure the quantum properties of microwave photons, which have exciting applications not only in quantum technologies such as computing and communications, but also in the fundamental understanding of quantum physics.
    Many of the scientists involved in the research also work at IQM, a spin-out of Aalto University developing technology for quantum computers. “IQM is constantly looking for new ways to enhance its quantum-computer technology and this new bolometer certainly fits the bill,” explains Dr Kuan Yen Tan, Co-Founder of IQM, who was also involved in the research.

    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  • 'Liking' an article online may mean less time spent reading it

    When people have the option to click “like” on a media article they encounter online, they spend less time actually reading the text, a new study suggests.
    In a lab experiment, researchers found that people spent about 7 percent less time reading articles on controversial topics when they had the opportunity to upvote or downvote them than if there was no interactive element.
    The finding was strongest when an article agreed with the reader’s point of view.
    The results suggest that the ability to interact with online content may change how we consume it, said Daniel Sude, who led the work while earning a doctoral degree in communication at The Ohio State University.
    “When people are voting whether they like or dislike an article, they’re expressing themselves. They are focused on their own thoughts and less on the content in the article,” Sude said.
    “It is like the old phrase, ‘If you’re talking, you’re not listening.’ People were talking back to the articles without listening to what they had to say.”
    In another finding, people’s existing views on controversial topics like gun control or abortion became stronger after voting on articles that agreed with their views, even when they spent less time reading them.

    “Just having the ability to like an article you agreed with was enough to amplify your attitude,” said study co-author Silvia Knobloch-Westerwick, professor of communication at Ohio State.
    “You didn’t need to read the article carefully, you didn’t have to learn anything new, but you are more committed to what you already believed.”
    The study, also co-authored by former Ohio State doctoral student George Pearson, was published online recently in the journal Computers in Human Behavior and will appear in the January 2021 print edition.
    The study involved 235 college students. Before the study, the researchers measured their views on four controversial topics used in the experiment: abortion, welfare benefits, gun control and affirmative action.
    Participants were then shown four versions of an online news website created for the study, each on one of the controversial topics. Each webpage showed headlines and first paragraphs for four articles, two with a conservative slant and two with a liberal slant. Participants could click on the headlines to read the full stories.

    Two versions of the websites had a banner that said, “Voting currently enabled for this topic,” and each article had an up arrow or down arrow that participants could click on to express their opinion.
    The other two websites had a banner that said, “Voting currently disabled for this topic.”
    Participants were given three minutes to browse each website as they wished, although they were not told about the time limit. The researchers measured the time participants spent on each story and whether they voted if they had the opportunity.
    As expected, for each website, participants spent more time reading articles that agreed with their views (about 1.5 minutes) than articles that opposed them (less than a minute).
    But they spent about 12 seconds less time reading the articles they agreed with if they could vote.
    In addition, people voted on about 12 percent of articles that they didn’t select to read, the study showed.
    “Rather than increasing engagement with website content, having the ability to interact may actually distract from it,” Sude said.
    The researchers measured the participants’ views on the four topics again after they read the websites to see if their attitudes had changed at all.
    Results showed that when participants were not able to vote, time spent reading articles that agreed with their original views strengthened these views. The more time they spent reading, the stronger their views became.
    When participants were able to vote, their voting behavior was as influential as their reading time. Even if they stopped reading and upvoted an article, their attitudes still became stronger.
    “It is important that people’s views still became stronger by just having the opportunity to vote,” Knobloch-Westerwick said.
    “When they had the opportunity to vote on the articles, their attitudes were getting more extreme with limited or no input from the articles themselves. They were in an echo chamber of one.”
    Sude said there is a better way to interact with online news.
    “Don’t just click the like button. Read the article and leave thoughtful comments that are more than just a positive or negative rating,” he said.
    “Say why you liked or disliked the article. The way we express ourselves is important and can influence the way we think about an issue.”

  • The secretive networks used to move money offshore

    In 2016, the world’s largest-ever data leak, dubbed “The Panama Papers,” exposed a scandal, uncovering a vast global network of people — including celebrities and world leaders — who used offshore tax havens, anonymous transactions through intermediaries and shell corporations to hide their wealth, grow their fortunes and avoid taxes.
    Researchers at USC Viterbi School of Engineering have now conducted a deep analysis of the entities, and the relationships between them, that were originally revealed in the 11.5 million files leaked to the International Consortium of Investigative Journalists. The researchers made several discoveries about how this network and its transactions operate, uncovering uniquely fragmented network behavior, vastly different from that of more traditional social or organizational networks, which demonstrates why these systems of transactions and associations are so robust and difficult to infiltrate or take down. The work has been published in Applied Network Science.
    Lead author Mayank Kejriwal is an assistant professor working in the Daniel J. Epstein Department of Industrial and Systems Engineering and USC’s Information Sciences Institute who studies complex (typically, social) systems like online trafficking markets using computational methods and network science. He said the research team’s aim was to study the Panama Papers network as a whole, in the same way you might study a social network like Facebook, to try to understand what the network behavior can tell us about how money can be moved.
    “In general, in any social network like LinkedIn or Facebook, there is something called ‘Small World Phenomenon’, which means that you’re only ever around six people away from anyone in the world,” Kejriwal said.
    “For instance, if you want to get from yourself to Bill Gates, on average you would be around six connections away,” he said.
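    The "six connections" idea can be made concrete with a breadth-first search over a toy friendship graph; the same function also shows what fragmentation looks like when no chain exists at all. The network below is invented purely for illustration.

```python
from collections import deque

def degrees_of_separation(adj, start, target):
    """Breadth-first search for the shortest chain of connections
    between two people in a social network (adjacency dict)."""
    if start == target:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        for friend in adj.get(person, ()):
            if friend == target:
                return dist + 1
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return None  # no chain at all: the network is fragmented

# Toy network: "you" reaches "gates" in 3 steps; "loner" is unreachable,
# which is exactly the disconnection the Panama Papers network exhibits.
net = {
    "you": ["a"], "a": ["you", "b"], "b": ["a", "gates"],
    "gates": ["b"], "loner": [],
}
print(degrees_of_separation(net, "you", "gates"))  # 3
print(degrees_of_separation(net, "you", "loner"))  # None
```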
    However the team discovered that the Panama Papers network was about as far removed from this traditional social or organizational network behavior as it could possibly be. Instead of a network of highly integrated connections, the researchers discovered a series of secretive disconnected fragments, with entities, intermediaries and individuals involved in transactions and corporations exhibiting very few connections with other entities in the system.

    “It was really unusual. The degree of fragmentation is something I have never seen before,” said Kejriwal. “I’m not aware of any other network that has this kind of fragmentation.”
    “So (without any documentation or leak), if you wanted to find the chain between one organization and another organization, you would not be able to find it, because the chances are that there is no chain — it’s completely disconnected,” Kejriwal said.
    Most social, friendship or organizational networks contain a series of triangular structures, in a pattern known as the ‘friend of a friend’ phenomenon.
    “The simple notion is that a friend of a friend is also a friend,” Kejriwal said. “And we can measure that by counting the number of triangles in the network.”
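    Counting triangles, as Kejriwal describes, is straightforward to sketch in a few lines of pure Python. The two toy graphs below are invented for illustration: a four-person clique is full of triangles, while a fragmented hub-and-spoke structure has none.

```python
from itertools import combinations

def count_triangles(edges):
    """Count triangles (closed 'friend of a friend' triads) in an
    undirected graph given as a list of (u, v) edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    triangles = 0
    for u, v in edges:
        # A triangle exists for every common neighbour of u and v.
        triangles += len(adj[u] & adj[v])
    return triangles // 3  # each triangle is counted once per edge

# A tight friendship circle: four people who all know each other.
clique = list(combinations(range(4), 2))
# A fragmented, star-like structure: one hub, no mutual ties.
star = [(0, i) for i in range(1, 5)]
print(count_triangles(clique))  # 4 triangles
print(count_triangles(star))   # 0 triangles
```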
    However, the team discovered that this triangular structure was not a feature of the Panama Papers network.

    “It turns out that not only is it not prevalent, but it’s far less prevalent than even in a random network,” Kejriwal said. “If you literally connect things randomly, in a haphazard fashion, and then count the triangles in that network, this network is even sparser than that.” He added, “Compared to a random network, in this type of network, links between financial entities are scrambled until they are essentially meaningless (so that anyone can be transacting with anyone else).”
    It is precisely this disconnectedness that makes the system of secret global financial dealings so robust. Because there was no way to trace relationships between entities, the network could not be easily compromised.
    “So what this suggests is that secrecy is built into the system and you cannot penetrate it,” Kejriwal said.
    “In an interconnected world, we don’t expect anyone to be impenetrable. Everyone has a weak link,” Kejriwal said. “But not in this network. The fact it is so fragmented actually protects them.”
    Kejriwal said the network behavior demonstrates that those involved in the Panama Papers network of offshore entities and transactions were very sophisticated, knowing exactly how to move money around in a way that it becomes untraceable and they are not vulnerable through their connections to others in the system. Because it is a global network, there are few options for national or international bodies to intervene in order to recoup taxes and investigate corruption and money laundering.
    “I don’t know how anyone would try to bring this down, and I’m not sure that they would be able to. The system seems unattackable,” Kejriwal said.

  • App analyzes coronavirus genome on a smartphone

    A new mobile app has made it possible to analyse the genome of the SARS-CoV-2 virus on a smartphone in less than half an hour.
    Cutting-edge nanopore devices have enabled scientists to read, or ‘sequence,’ the genetic material in a biological sample outside a laboratory. However, analysing the raw data has still required access to high-end computing power — until now.
    The app Genopo, developed by the Garvan Institute of Medical Research, in collaboration with the University of Peradeniya in Sri Lanka, makes genomics more accessible to remote or under-resourced regions, as well as the hospital bedside.
    “Not everyone has access to the high-power computing resources that are required for DNA and RNA analysis, but most people have access to a smartphone,” says co-senior author Dr Ira Deveson, who heads the Genomic Technologies Group at Garvan’s Kinghorn Centre for Clinical Genomics.
    “Fast, real-time genomic analysis is more crucial today than ever, as a central method for tracking the spread of coronavirus. Our app makes genomic analysis more accessible, literally placing the technology into the pockets of scientists around the world.”
    The researchers report the app Genopo in the journal Communications Biology.

    Taking genome analysis off-line
    Genomic sequencing no longer requires a sophisticated lab setup.
    The size of a USB stick, portable devices such as the Oxford Nanopore Technologies MinION sequencer can rapidly generate genomic sequences from a sample in the field or the clinic. The technology has been used for Ebola surveillance in West Africa, to profile microbial communities in the Arctic and to track coronavirus evolution during the current pandemic.
    However, analysing genome sequencing data requires powerful computation. Scientists need to piece the many strings of genetic letters from the raw data into a single sequence and pinpoint the instances of genetic variation that shed light on how a virus evolves.
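    The piecing-together-and-variant step can be caricatured in a few lines. This is a deliberately simplified sketch, with invented toy reads already aligned to one short reference, and it is not the actual bioinformatics workflow Genopo runs: it calls a majority-vote consensus per position and reports where the consensus differs from the reference.

```python
from collections import Counter

def consensus_and_variants(reads, reference):
    """Toy illustration: take short reads that all cover the same
    region, call the consensus base at each position by majority
    vote, then report positions where the consensus differs from
    the reference genome."""
    consensus = ""
    for i in range(len(reference)):
        column = [r[i] for r in reads if i < len(r)]
        consensus += Counter(column).most_common(1)[0][0]
    variants = [(i, reference[i], consensus[i])
                for i in range(len(reference)) if consensus[i] != reference[i]]
    return consensus, variants

reference = "ACGTACGT"
reads = ["ACGTACGA", "ACGAACGA", "ACGTACGA"]  # noisy reads of one sample
cons, found = consensus_and_variants(reads, reference)
print(cons)   # ACGTACGA  (the lone position-3 error is outvoted)
print(found)  # [(7, 'T', 'A')]  a genuine variant at position 7
```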
    “Until now, genomic analysis has required the processing power of high-end server computers or cloud services. We set out to change that,” explains co-senior author Hasindu Gamaarachchi, Genomics Computing Systems Engineer at the Garvan Institute.

    “To enable in situ genomic sequencing and analysis, in real time and without major laboratory infrastructure, we developed an app that could execute bioinformatics workflows on nanopore sequencing datasets that are downloaded to a smartphone. The reengineering process, spearheaded by first author Hiruna Samarakoon, required overcoming a number of technical challenges due to various resource constraints in smartphones. The app Genopo combines a number of available bioinformatics tools into a single Android application, ‘miniaturised’ to work on the processing power of a consumer Android device.”
    Coronavirus testing
    The researchers tested Genopo on the raw sequencing data of virus samples isolated from nine Sydney patients infected with SARS-CoV-2, which involved extracting and amplifying the virus RNA from a swab sample, sequencing the amplified DNA with a MinION device and analysing the data on a smartphone. The researchers tested their app on different Android devices, including models from Nokia, Huawei, LG and Sony.
    The Genopo app took an average 27 minutes to determine the complete SARS-CoV-2 genome sequence from the raw data, which the researchers say opens the possibility to do genomic analysis at the point of care, in real time. The researchers also showed that Genopo can be used to profile DNA methylation — a modification which changes gene activity — in a sample of the human genome.
    “This illustrates a flexible, efficient architecture that is suitable to run many popular bioinformatics tools and accommodate small or large genomes,” says Dr Deveson. “We hope this will make genomics much more accessible to researchers to unlock the information in DNA or RNA to the benefit of human health, including in the current pandemic.”
    Genopo is a free, open-source application available through the Google Play store (https://play.google.com/store/apps/details?id=com.mobilegenomics.genopo&hl=en).
    This project was supported by a Medical Research Future Fund (grant APP1173594), a Cancer Institute NSW Early Career Fellowship and The Kinghorn Foundation. Garvan is affiliated with St Vincent’s Hospital Sydney and UNSW Sydney.

  • Driving behavior less 'robotic' thanks to new model

    Researchers from TU Delft have now developed a new model that describes driving behaviour on the basis of one underlying ‘human’ principle: keeping risk below a threshold level. This model can accurately predict human behaviour during a wide range of driving tasks. In time, the model could be used in intelligent cars, to make them feel less ‘robotic’. The research conducted by doctoral candidate Sarvesh Kolekar and his supervisors Joost de Winter and David Abbink will be published in Nature Communications on Tuesday 29 September 2020.
    Risk threshold
    Driving behaviour is usually described using models that predict an optimum path. But this is not how people actually drive. ‘You don’t always adapt your driving behaviour to stick to one optimum path,’ says researcher Sarvesh Kolekar from the Department of Cognitive Robotics. ‘People don’t drive continuously in the middle of their lane, for example: as long as they are within the acceptable lane limits, they are fine with it.’
    Models that predict an optimum path are not only popular in research, but also in vehicle applications. ‘The current generation of intelligent cars drive very neatly. They continuously search for the safest path: i.e. one path at the appropriate speed. This leads to a “robotic” style of driving,’ continues Kolekar. ‘To get a better understanding of human driving behaviour, we tried to develop a new model that used the human risk threshold as the underlying principle.’
    Driver’s Risk Field
    To get to grips with this concept, Kolekar introduced the so-called Driver’s Risk Field (DRF). This is an ever-changing two-dimensional field around the car that indicates how high the driver considers the risk to be at each point. Kolekar devised these risk assessments in previous research. The gravity of the consequences of the risk in question is then taken into account in the DRF. For example, having a cliff on one side of the road boundary is much more dangerous than having grass. ‘The DRF was inspired by a concept from psychology, put forward a long time ago (in 1938) by Gibson and Crooks. These authors claimed that car drivers “feel” the risk field around them, as it were, and base their traffic manoeuvres on these perceptions.’ Kolekar managed to turn this theory into a computer algorithm.
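    A toy version of such a risk field can be sketched as a 2-D grid in which risk combines a lateral-offset term with a consequence-severity term. The functional form and every parameter below are hypothetical illustrations of the general idea, not Kolekar's published DRF model.

```python
import numpy as np

def driver_risk_field(width=9, length=20, lane_center=4.0, sigma=1.5):
    """A toy 2-D 'risk field' around a car (hypothetical form and
    parameters): risk grows with lateral distance from the lane
    centre and is scaled by the severity of what lies at each
    position (grass verge on one side, cliff edge on the other)."""
    y, x = np.mgrid[0:length, 0:width]
    # Comfort term: a Gaussian that peaks at the lane centre.
    comfort = np.exp(-((x - lane_center) ** 2) / (2 * sigma ** 2))
    # Consequence severity: grass on the left (cheap), cliff on the right.
    severity = np.where(x < lane_center, 1.0, 5.0)
    return (1.0 - comfort) * severity  # zero risk exactly on the centre line

field = driver_risk_field()
# Risk is asymmetric: drifting toward the cliff side costs far more
# than drifting the same distance toward the grass.
```

    A threshold-based driver in this sketch would accept any lateral position where `field` stays below some tolerance, rather than tracking the single minimum-risk path, which is the behavioural difference the article describes.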
    Predictions
    Kolekar then tested the model in seven scenarios, including overtaking and avoiding an obstacle. ‘We compared the predictions made by the model with experimental data on human driving behaviour taken from the literature. Luckily, a lot of information is already available. It turned out that our model only needs a small amount of data to “get” the underlying human driving behaviour and could even predict reasonable human behaviour in previously unseen scenarios. Thus, driving behaviour rolls out more or less automatically; it is “emergent.”’
    Elegant
    This elegant description of human driving behaviour has huge predictive and generalising value. Apart from the academic value, the model can also be used in intelligent cars. ‘If intelligent cars were to take real human driving habits into account, they would have a better chance of being accepted. The car would behave less like a robot.’

    Story Source:
    Materials provided by Delft University of Technology. Note: Content may be edited for style and length.

  • Machine learning homes in on catalyst interactions to accelerate materials development

    A machine learning technique rapidly rediscovered rules governing catalysts that took humans years of difficult calculations to reveal — and even explained a deviation. The University of Michigan team that developed the technique believes other researchers will be able to use it to make faster progress in designing materials for a variety of purposes.
    “This opens a new door, not just in understanding catalysis, but also potentially for extracting knowledge about superconductors, enzymes, thermoelectrics, and photovoltaics,” said Bryan Goldsmith, an assistant professor of chemical engineering, who co-led the work with Suljo Linic, a professor of chemical engineering.
    The key to all of these materials is how their electrons behave. Researchers would like to use machine learning techniques to develop recipes for the material properties that they want. For superconductors, the electrons must move without resistance through the material. Enzymes and catalysts need to broker exchanges of electrons, enabling new medicines or cutting chemical waste, for instance. Thermoelectrics and photovoltaics absorb light and generate energetic electrons, thereby generating electricity.
    Machine learning algorithms are typically “black boxes,” meaning that they take in data and spit out a mathematical function that makes predictions based on that data.
    “Many of these models are so complicated that it’s very difficult to extract insights from them,” said Jacques Esterhuizen, a doctoral student in chemical engineering and first author of the paper in the journal Chem. “That’s a problem because we’re not only interested in predicting material properties, we also want to understand how the atomic structure and composition map to the material properties.”
    But a new breed of machine learning algorithm lets researchers see the connections that the algorithm is making, identifying which variables are most important and why. This is critical information for researchers trying to use machine learning to improve material designs, including for catalysts.
    A good catalyst is like a chemical matchmaker. It needs to be able to grab onto the reactants, or the atoms and molecules that we want to react, so that they meet. Yet, it must do so loosely enough that the reactants would rather bind with one another than stick with the catalyst.
    In this particular case, they looked at metal catalysts that have a layer of a different metal just below the surface, known as a subsurface alloy. That subsurface layer changes how the atoms in the top layer are spaced and how available the electrons are for bonding. By tweaking the spacing, and hence the electron availability, chemical engineers can strengthen or weaken the binding between the catalyst and the reactants.
    Esterhuizen started by running quantum mechanical simulations at the National Energy Research Scientific Computing Center. These formed the data set, showing how common subsurface alloy catalysts, including metals such as gold, iridium and platinum, bond with common reactants such as oxygen, hydroxide and chlorine.
    The team used the algorithm to look at eight material properties and conditions that might be important to the binding strength of these reactants. It turned out that three mattered most. The first was whether the atoms on the catalyst surface were pulled apart from one another or compressed together by the different metal beneath. The second was how many electrons were in the electron orbital responsible for bonding, the d-orbital in this case. And the third was the size of that d-electron cloud.
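    One way such a "glass box" analysis can work is to fit an interpretable model and read off which descriptors carry weight. The sketch below uses synthetic data and ordinary least squares purely to illustrate that idea; it is not the paper's dataset, algorithm, or actual descriptor values.

```python
import numpy as np

# Hypothetical illustration: generate descriptors loosely named after
# the three properties the article highlights, plus one irrelevant one.
rng = np.random.default_rng(0)
n = 200
strain = rng.normal(0, 1, n)         # surface compression / tension
d_filling = rng.normal(0, 1, n)      # electrons in the d-orbital
d_width = rng.normal(0, 1, n)        # size of the d-electron cloud
noise_feature = rng.normal(0, 1, n)  # an irrelevant descriptor

# Synthetic "binding energy" in which only the first three matter
# (the weights 0.8, -1.2, 0.5 are arbitrary choices for this demo).
y = 0.8 * strain - 1.2 * d_filling + 0.5 * d_width + rng.normal(0, 0.05, n)

X = np.column_stack([strain, d_filling, d_width, noise_feature])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# The fitted coefficients recover the true weights, while the noise
# feature's coefficient lands near zero, flagging it as unimportant --
# the kind of insight a black-box model would hide.
```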
    The resulting predictions for how different alloys bind with different reactants mostly reflected the “d-band” model, which was developed over many years of quantum mechanical calculations and theoretical analysis. However, they also explained a deviation from that model due to strong repulsive interactions, which occur when electron-rich reactants bind on metals with mostly filled electron orbitals.

    Story Source:
    Materials provided by University of Michigan. Original written by Kate McAlpine. Note: Content may be edited for style and length.

  • Brain circuitry shaped by competition for space as well as genetics

    Complex brain circuits in rodents can organise themselves with genetics playing only a secondary role, according to a new computer modelling study published today in eLife.
    The findings help answer a key question about how the brain wires itself during development. They suggest that simple interactions between nerve cells contribute to the development of complex brain circuits, so that a precise genetic blueprint for brain circuitry is unnecessary. This discovery may help scientists better understand disorders that affect brain development and inform new ways to treat conditions that disrupt brain circuits.
    The circuits that help rodents process sensory information collected by their whiskers are a great example of the complexity of brain wiring. These circuits are organised into cylindrical clusters or ‘whisker barrels’ that closely match the pattern of whiskers on the animal’s face.
    “The brain cells within one whisker barrel become active when its corresponding whisker is touched,” explains lead author Sebastian James, Research Associate at the Department of Psychology, University of Sheffield, UK. “This precise mapping between the individual whisker and its brain representation makes the whisker-barrel system ideal for studying brain wiring.”
    James and his colleagues used computer modelling to determine if this pattern of brain wiring could emerge without a precise genetic blueprint. Their simulations showed that, in the cramped quarters of the developing rodent brain, strong competition for space between nerve fibers originating from different whiskers can cause them to concentrate into whisker-specific clusters. The arrangement of these clusters to form a map of the whiskers is assisted by simple patterns of gene expression in the brain tissue.
    The team also tested their model by seeing if it could recreate the results of experiments that track the effects of a rat losing a whisker on its brain development. “Our simulations demonstrated that the model can be used to accurately test how factors inside and outside of the brain can contribute to the development of cortical fields,” says co-author Leah Krubitzer, Professor of Psychology at the University of California, Davis, US.
    The authors suggest that this and similar computational models could be adapted to study the development of larger, more complex brains, including those of humans.
    “Many of the basic mechanisms of development in the rodent barrel cortex are thought to translate to development in the rest of cortex, and may help inform research into various neurodevelopmental disorders and recovery from brain injuries,” concludes senior author Stuart Wilson, Lecturer in Cognitive Neuroscience at the University of Sheffield. “As well as reducing the number of animal experiments needed to understand cortical development, exploring the parameters of computational models like ours can offer new insights into how development and evolution interact to shape the brains of mammals, including ourselves.”

    Story Source:
    Materials provided by eLife. Note: Content may be edited for style and length.