More stories

  • How genetic variation gives rise to differences in mathematical ability

    DNA variation in a gene called ROBO1 is associated with early anatomical differences in a brain region that plays a key role in quantity representation, potentially explaining how genetic variability might shape mathematical performance in children, according to a study published October 22nd in the open-access journal PLOS Biology by Michael Skeide of the Max Planck Institute for Human Cognitive and Brain Sciences, and colleagues. Specifically, the authors found that genetic variants of ROBO1 in young children are associated with grey matter volume in the right parietal cortex, which in turn predicts mathematical test scores in second grade.
    Mathematical ability is known to be heritable and related to several genes that play a role in brain development. But it has not been clear how math-related genes might sculpt the developing human brain, leaving open the question of how genetic variation could give rise to differences in mathematical ability. To address this gap, Skeide and his collaborators combined genotyping with brain imaging in unschooled children without mathematical training.
    The authors analyzed 18 single nucleotide polymorphisms (SNPs) — genetic variants affecting a single DNA building block — in 10 genes previously implicated in mathematical performance. They then examined the relationship between these variants and the volume of grey matter (which mainly consists of nerve cell bodies) across the whole brain in a total of 178 three- to six-year-old children who underwent magnetic resonance imaging. Finally, they identified brain regions whose grey matter volumes could predict math test scores in second grade.
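    The logic of such an analysis can be sketched in a few lines. The example below is a minimal illustration, not the authors' actual pipeline: regress grey matter volume on SNP genotype dosage, then relate volume to later math scores. All data and variable names are invented, and a real analysis would also correct for testing 18 SNPs.

```python
# Minimal sketch of the kind of association analysis described above: regress
# a regional grey matter volume on SNP genotype dosage (0, 1, or 2 copies of
# the minor allele), then relate volume to later math scores. All data here
# are simulated for illustration, not from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_children = 178

genotype = rng.integers(0, 3, size=n_children)            # allele dosage per child
gm_volume = 0.3 * genotype + rng.normal(size=n_children)  # grey matter volume (a.u.)
math_score = 0.5 * gm_volume + rng.normal(size=n_children)

# Step 1: does the SNP predict grey matter volume?
slope, intercept, r, p_snp, se = stats.linregress(genotype, gm_volume)
print(f"SNP -> volume: slope={slope:.2f}, p={p_snp:.3g}")

# Step 2: does grey matter volume predict second-grade math scores?
slope2, _, _, p_math, _ = stats.linregress(gm_volume, math_score)
print(f"volume -> math: slope={slope2:.2f}, p={p_math:.3g}")
```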
    They found that variants in ROBO1, a gene that regulates prenatal growth of the outermost layer of neural tissue in the brain, are associated with grey matter volume in the right parietal cortex, a key brain region for quantity representation. Moreover, grey matter volume within this region predicted the children’s math test scores at seven to nine years of age. According to the authors, the results suggest that genetic variability might shape mathematical ability by influencing the early development of the brain’s basic quantity processing system.

    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • AI detects hidden earthquakes

    Measures of Earth’s vibrations zigged and zagged across Mostafa Mousavi’s screen one morning in Memphis, Tenn. As part of his PhD studies in geophysics, he sat scanning earthquake signals recorded the night before, verifying that decades-old algorithms had detected true earthquakes rather than tremors generated by ordinary things like crashing waves, passing trucks or stomping football fans.
    “I did all this tedious work for six months, looking at continuous data,” Mousavi, now a research scientist at Stanford’s School of Earth, Energy & Environmental Sciences (Stanford Earth), recalled recently. “That was the point I thought, ‘There has to be a much better way to do this stuff.’”
    This was in 2013. Handheld smartphones were already loaded with algorithms that could break down speech into sound waves and come up with the most likely words in those patterns. Using artificial intelligence, they could even learn from past recordings to become more accurate over time.
    Seismic waves and sound waves aren’t so different. One moves through rock and fluid, the other through air. Yet while machine learning had transformed the way personal computers process and interact with voice and sound, the algorithms used to detect earthquakes in streams of seismic data have hardly changed since the 1980s.
    That has left a lot of earthquakes undetected.
    Big quakes are hard to miss, but they’re rare. Meanwhile, imperceptibly small quakes happen all the time. Occurring on the same faults as bigger earthquakes — and involving the same physics and the same mechanisms — these “microquakes” represent a cache of untapped information about how earthquakes evolve — but only if scientists can find them.

    In a recent paper published in Nature Communications, Mousavi and co-authors describe a new method for using artificial intelligence to bring into focus millions of these subtle shifts of the Earth. “By improving our ability to detect and locate these very small earthquakes, we can get a clearer view of how earthquakes interact or spread out along the fault, how they get started, even how they stop,” said Stanford geophysicist Gregory Beroza, one of the paper’s authors.
    Focusing on what matters
    Mousavi began working on technology to automate earthquake detection soon after his stint examining daily seismograms in Memphis, but his models struggled to tune out the noise inherent to seismic data. A few years later, after joining Beroza’s lab at Stanford in 2017, he started to think about how to solve this problem using machine learning.
    The group has produced a series of increasingly powerful detectors. A 2018 model called PhaseNet, developed by Beroza and graduate student Weiqiang Zhu, adapted algorithms from medical image processing to excel at phase-picking, which involves identifying the precise start of two different types of seismic waves. Another machine learning model, released in 2019 and dubbed CRED, was inspired by voice-trigger algorithms in virtual assistant systems and proved effective at detection. Both models learned the fundamental patterns of earthquake sequences from a relatively small set of seismograms recorded only in northern California.
    In the Nature Communications paper, the authors report they’ve developed a new model to detect very small earthquakes with weak signals that current methods usually overlook, and to pick out the precise timing of the seismic phases using earthquake data from around the world. They call it Earthquake Transformer.

    According to Mousavi, the model builds on PhaseNet and CRED, and “embeds those insights I got from the time I was doing all of this manually.” Specifically, Earthquake Transformer mimics the way human analysts look at the set of wiggles as a whole and then home in on a small section of interest.
    People do this intuitively in daily life — tuning out less important details to focus more intently on what matters. Computer scientists call it an “attention mechanism” and frequently use it to improve text translations. But it’s new to the field of automated earthquake detection, Mousavi said. “I envision that this new generation of detectors and phase-pickers will be the norm for earthquake monitoring within the next year or two,” he said.
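    The building block itself is compact. Below is a generic numpy sketch of scaled dot-product attention, the mechanism the name “Earthquake Transformer” alludes to; this is the textbook form, not the model’s actual architecture.

```python
# Generic scaled dot-product attention: each output position is a weighted
# average of all values, with weights concentrated on the parts of the input
# most relevant to the query. This is the general mechanism, not the
# Earthquake Transformer's exact implementation.
import numpy as np

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over positions
    return weights @ V                                   # weighted average of values

# Toy waveform features: 6 time steps, 4 features each.
x = np.random.default_rng(1).normal(size=(6, 4))
out = attention(x, x, x)  # self-attention over the "seismogram"
print(out.shape)          # (6, 4)
```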
    The technology could allow analysts to focus on extracting insights from a more complete catalog of earthquakes, freeing up their time to think more about what the pattern of earthquakes means, said Beroza, the Wayne Loel Professor of Earth Science at Stanford Earth.
    Hidden faults
    Understanding patterns in the accumulation of small tremors over decades or centuries could be key to minimizing surprises — and damage — when a larger quake strikes.
    The 1989 Loma Prieta quake ranks as one of the most destructive earthquake disasters in U.S. history, and as one of the largest to hit northern California in the past century. It’s a distinction that speaks less to extraordinary power in the case of Loma Prieta than to gaps in earthquake preparedness, hazard mapping and building codes — and to the extreme rarity of large earthquakes.
    Only about one in five of the approximately 500,000 earthquakes detected globally by seismic sensors every year produces shaking strong enough for people to notice. In a typical year, perhaps 100 quakes will cause damage.
    In the late 1980s, computers were already at work analyzing digitally recorded seismic data, and they determined the occurrence and location of earthquakes like Loma Prieta within minutes. Limitations in both the computers and the waveform data, however, left many small earthquakes undetected and many larger earthquakes only partially measured.
    After the harsh lesson of Loma Prieta, many California communities have come to rely on maps showing fault zones and the areas where quakes are likely to do the most damage. Fleshing out the record of past earthquakes with Earthquake Transformer and other tools could make those maps more accurate and help to reveal faults that might otherwise come to light only in the wake of destruction from a larger quake, as happened with Loma Prieta in 1989, and with the magnitude-6.7 Northridge earthquake in Los Angeles five years later.
    “The more information we can get on the deep, three-dimensional fault structure through improved monitoring of small earthquakes, the better we can anticipate earthquakes that lurk in the future,” Beroza said.
    Earthquake Transformer
    To determine an earthquake’s location and magnitude, existing algorithms and human experts alike look for the arrival time of two types of waves. The first set, known as primary or P waves, advance quickly — pushing, pulling and compressing the ground like a Slinky as they move through it. Next come shear or S waves, which travel more slowly but can be more destructive as they move the Earth side to side or up and down.
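    That speed difference is also what makes quick distance estimates possible: because the S wave lags the P wave by an amount proportional to the distance travelled, the S-minus-P time at a single station converts directly to distance. A back-of-the-envelope sketch, assuming typical crustal wave speeds (the values below are illustrative):

```python
# Back-of-the-envelope distance from the S-minus-P arrival lag at one station,
# assuming typical crustal velocities (illustrative values, not station data).
VP = 6.0   # P-wave speed, km/s
VS = 3.5   # S-wave speed, km/s

def distance_from_sp_lag(sp_seconds: float) -> float:
    """Distance (km) implied by the S-P time: solves d/VS - d/VP = sp_seconds."""
    return sp_seconds * (VP * VS) / (VP - VS)

print(distance_from_sp_lag(10.0))  # ~84 km for a 10-second lag
```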
    To test Earthquake Transformer, the team wanted to see how it performed on earthquakes not included in the training data used to teach the algorithm what a true earthquake and its seismic phases look like. The training data included one million hand-labeled seismograms, recorded mostly over the past two decades in earthquake-prone regions around the world, excluding Japan. For the test, they selected five weeks of continuous data recorded in the region of Japan shaken 20 years ago by the magnitude-6.6 Tottori earthquake and its aftershocks.
    Using data from only 18 of the 57 stations that Japanese scientists originally used to study the sequence, the model detected and located 21,092 events, more than two and a half times the number of earthquakes picked out by hand. Earthquake Transformer proved particularly effective for the tiny earthquakes that are harder for humans to pick out and that are being recorded in overwhelming numbers as seismic sensors multiply.
    “Previously, people had designed algorithms to say, find the P wave. That’s a relatively simple problem,” explained co-author William Ellsworth, a research professor in geophysics at Stanford. Pinpointing the start of the S wave is more difficult, he said, because it emerges from the erratic last gasps of the fast-moving P waves. Other algorithms have been able to produce extremely detailed earthquake catalogs, including huge numbers of small earthquakes missed by analysts — but their pattern-matching algorithms work only in the region supplying the training data.
    With Earthquake Transformer running on a simple computer, analysis that would ordinarily take months of expert labor was completed within 20 minutes. That speed is made possible by algorithms that search for the existence of an earthquake and the timing of the seismic phases in tandem, using information gleaned from each search to narrow down the solution for the others.
    “Earthquake Transformer gets many more earthquakes than other methods, whether it’s people sitting and trying to analyze things by looking at the waveforms, or older computer methods,” Ellsworth said. “We’re getting a much deeper look at the earthquake process, and we’re doing it more efficiently and accurately.”
    The researchers trained and tested Earthquake Transformer on historic data, but the technology is ready to flag tiny earthquakes almost as soon as they happen. According to Beroza, “Earthquake monitoring using machine learning in near real-time is coming very soon.”

  • Individuals may legitimize hacking when angry with system or authority

    University of Kent research has found that when individuals feel that a system or authority is unresponsive to their demands, they are more likely to legitimise hacker activity at an organisation’s expense.
    Individuals are more likely to experience anger when they believe that systems or authorities have failed to pursue justice on their behalf or to listen to their demands. In turn, the study found that if the systems or authorities in question then fell victim to hacking, individuals were more likely to legitimise the hackers’ disruptive actions as a way of expressing their own anger against the organisation.
    With more organisations at risk of cyber security breaches, and more elements of individuals’ social lives taking place online, this research is timely in highlighting how hackers are perceived by individuals seeking justice.
    The research, led by Maria Heering and Dr Giovanni Travaglino at the University of Kent’s School of Psychology, was carried out with British undergraduate students and participants recruited through the academic survey platform Prolific Academic. The participants were presented with fictional scenarios of unfair treatment by authorities, with complaints either dismissed or pursued, before being told that hackers had defaced the authorities’ websites. Participants were then asked to indicate how much they agreed or disagreed with the hackers’ actions. Participants predominantly supported the hackers, perceiving them as a way to ‘get back at’ systems that did not listen to their demands.
    Maria Heering said: ‘When individuals perceive a system as unjust, they are motivated to participate in political protest and collective action to promote social change. However, if they believe they will not have a voice, they will legitimise groups and individuals who disrupt the system on their behalf. While this study explored individuals’ feelings of anger, there is certainly more to be explored in this research area. For example, there might be important differences between the psychological determinants of individuals’ support for humorous, relatively harmless forms of hacking, and more serious and dangerous ones.’

    Story Source:
    Materials provided by University of Kent. Note: Content may be edited for style and length.

  • New tool can diagnose strokes with a smartphone

    A new tool created by researchers at Penn State and Houston Methodist Hospital could diagnose a stroke based on abnormalities in a patient’s speech ability and facial muscular movements, and with the accuracy of an emergency room physician — all within minutes of an interaction with a smartphone.
    “When a patient experiences symptoms of a stroke, every minute counts,” said James Wang, professor of information sciences and technology at Penn State. “But when it comes to diagnosing a stroke, emergency room physicians have limited options: send the patient for often expensive and time-consuming radioactivity-based scans or call a neurologist — a specialist who may not be immediately available — to perform clinical diagnostic tests.”
    Wang and his colleagues have developed a machine learning model to aid in, and potentially speed up, the diagnostic process by physicians in a clinical setting.
    “Currently, physicians have to use their past training and experience to determine at what stage a patient should be sent for a CT scan,” said Wang. “We are trying to simulate or emulate this process by using our machine learning approach.”
    The team’s novel approach is the first to analyze the presence of stroke among actual emergency room patients with suspected stroke, using computational facial motion analysis and natural language processing to identify abnormalities in a patient’s face or voice, such as a drooping cheek or slurred speech.
    The results could help emergency room physicians to more quickly determine critical next steps for the patient. Ultimately, the application could be utilized by caregivers or patients to make self-assessments before reaching the hospital.
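    As a rough illustration of the facial side of that analysis, a drooping cheek shows up as vertical asymmetry between facial landmarks. The sketch below is hypothetical and not the study’s actual feature pipeline; the landmark coordinates are invented.

```python
# Illustrative sketch of one facial-abnormality cue: facial droop appears as
# vertical asymmetry between the left and right mouth corners. The landmark
# coordinates here are hypothetical; the study's actual features are not
# described in this article.
import numpy as np

# (x, y) pixel positions of mouth corners from a face-landmark detector.
left_corner = np.array([210.0, 305.0])
right_corner = np.array([330.0, 322.0])  # noticeably lower than the left

face_height = 400.0  # chin-to-forehead distance, for scale normalisation
droop = abs(left_corner[1] - right_corner[1]) / face_height
print(f"normalised droop score: {droop:.3f}")  # larger -> more asymmetric
```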

    “This is one of the first works that is enabling AI to help with stroke diagnosis in emergency settings,” added Sharon Huang, associate professor of information sciences and technology at Penn State.
    To train the computer model, the researchers built a dataset from more than 80 patients experiencing stroke symptoms at Houston Methodist Hospital in Texas. Each patient was asked to perform a speech test to analyze their speech and cognitive communication while being recorded on an Apple iPhone.
    “The acquisition of facial data in natural settings makes our work robust and useful for real-world clinical use, and ultimately empowers our method for remote diagnosis of stroke and self-assessment,” said Huang.
    Testing the model on the Houston Methodist dataset, the researchers found that its performance achieved 79% accuracy — comparable to clinical diagnostics by emergency room doctors, who use additional tests such as CT scans. However, the model could help save valuable time in diagnosing a stroke, with the ability to assess a patient in as little as four minutes.
    “There are millions of neurons dying every minute during a stroke,” said John Volpi, a vascular neurologist and co-director of the Eddy Scurlock Stroke Center at Houston Methodist Hospital. “In severe strokes it is obvious to our providers from the moment the patient enters the emergency department, but studies suggest that in the majority of strokes, which have mild to moderate symptoms, diagnosis can be delayed by hours, and by then a patient may not be eligible for the best possible treatments.”
    “The earlier you can identify a stroke, the better options (we have) for the patients,” added Stephen T.C. Wong, John S. Dunn, Sr. Presidential Distinguished Chair in Biomedical Engineering at the Ting Tsung and Wei Fong Chao Center for BRAIN and Houston Methodist Cancer Center. “That’s what makes an early diagnosis essential.”

    Volpi said that physicians currently use a binary approach toward diagnosing strokes: They either suspect a stroke, sending the patient for a series of scans that could involve radiation; or they do not suspect a stroke, potentially overlooking patients who may need further assessment.
    “What we think in that triage moment is being either biased toward overutilization (of scans, which have risks and benefits) or underdiagnosis,” said Volpi, a co-author on the paper. “If we can improve diagnostics at the front end, then we can better expose the right patients to the right risks and not miss patients who would potentially benefit.”
    He added, “We have great therapeutics, medicines and procedures for strokes, but we have very primitive and, frankly, inaccurate diagnostics.”
    Other collaborators on the project include Tongan Cai and Mingli Yu, graduate students working with Wang and Huang at Penn State; and Kelvin Wong, associate research professor of electronic engineering in oncology at Houston Methodist Hospital.

    Story Source:
    Materials provided by Penn State. Original written by Jessica Hallman. Note: Content may be edited for style and length.

  • Reviewing multiferroics for future, low-energy data storage

    A new UNSW study comprehensively reviews the magnetic structure of the multiferroic material bismuth ferrite (BiFeO3 — BFO).
    The review advances FLEET’s search for low-energy electronics, bringing together current knowledge on the magnetic order in BFO films, and giving researchers a solid platform to further develop this material in low-energy magnetoelectric memories.
    BFO is unique in that it displays both magnetic and electronic ordering (i.e., it is ‘multiferroic’) at room temperature, allowing for low-energy switching in data storage devices.
    MULTIFERROICS: COMBINED MAGNETIC AND ELECTRONIC ORDERING FOR LOW-ENERGY DATA STORAGE
    Multiferroics are materials that have more than one ‘order parameter’.
    For example, a magnetic material displays magnetic order: you can imagine that the material is made up of lots of neatly arranged (ordered), tiny magnets.

    Figure: BFO cycloid diagram, showing how spin (magnetic order) in the multiferroic material bismuth ferrite ‘cycles’ through the crystal, offering potential applications in emerging electronics fields such as magnonics.
    Some materials display electronic order — a property referred to as ferroelectricity — which can be considered the electrical equivalent of magnetism.
    In a ferroelectric material, some atoms are positively charged, others are negatively charged, and the way these atoms are arranged in the material gives a specific order to the charge in the material.
    In nature, a small fraction of known materials possess both magnetic and ferroelectric order (as is the case for BFO) and are thus referred to as multiferroic materials.

    The coupling between magnetic and ferroelectric order in a multiferroic material unlocks interesting physics and opens the way for applications such as energy-efficient electronics, for example in non-volatile memory devices.
    Studies at FLEET focus on the potential use of such materials as a switching mechanism.
    Ferroelectric materials can be considered the electrical equivalent of a permanent magnet, possessing a spontaneous polarisation. This polarisation is switchable by an electric field.
    The storage of data on traditional hard disks relies on switching each bit’s magnetic state: from zero, to one, to zero. But it takes a relatively large amount of energy to generate the magnetic field required to accomplish this.
    In a ‘multiferroic memory,’ the coupling between the magnetic and ferroelectric order could allow ‘flipping’ of the state of a bit by electric field, rather than a magnetic field.
    Electric fields are a lot less energetically costly to generate than magnetic fields, so multiferroic memory would be a significant win for ultra-low-energy electronics, a key aim in FLEET.
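    In code terms, the promise is that writing a bit becomes an electric-field operation, with the magnetic state following via the coupling. The toy model below is a deliberate cartoon of that idea; real BFO switching physics is far richer than a sign flip.

```python
# Toy model of a multiferroic memory bit: an electric field switches the
# ferroelectric polarisation, and the magnetoelectric coupling drags the
# magnetic order along with it. A deliberately simplified cartoon, not a
# physical model of BFO switching.
from dataclasses import dataclass

@dataclass
class MultiferroicBit:
    polarisation: int = +1   # ferroelectric order: +1 or -1
    magnetisation: int = +1  # magnetic order, coupled to the polarisation

    def apply_electric_field(self, field_sign: int) -> None:
        """Cheap electric write: set polarisation; coupling flips magnetism."""
        if field_sign != self.polarisation:
            self.polarisation = field_sign
            self.magnetisation = -self.magnetisation  # magnetoelectric coupling

bit = MultiferroicBit()
bit.apply_electric_field(-1)
print(bit)  # MultiferroicBit(polarisation=-1, magnetisation=-1)
```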
    BFO: A UNIQUE MULTIFERROIC MATERIAL
    Bismuth ferrite (BFO) is unique among multiferroics: its magnetic and ferroelectric order both persist up to room temperature. Most multiferroics exhibit both order parameters only at temperatures far below room temperature, making them impractical for low-energy electronics.
    (There’s no point in designing low-energy electronics if it costs you more energy to cool the system than you save in operation.)
    THE STUDY
    The new UNSW study reviews the magnetic structure of bismuth ferrite, in particular when it is grown as a thin single-crystal layer on a substrate.
    The paper examines BFO’s complicated magnetic order, and the many different experimental tools used to probe and help understand it.
    Multiferroics is a challenging topic, and it is very difficult for researchers trying to enter the field to get a full picture of the magnetism of BFO from any one reference.
    “So, we decided to write it,” says Dr Daniel Sando. “We were in the perfect position to do so, as we had all the information in our heads, Stuart wrote a literature review chapter, and we had the combined necessary physics background to explain the important concepts in a tutorial-style manner.”
    The result is a comprehensive and detailed review article that should attract significant attention from researchers and serve as a useful reference for many.
    Co-lead author Dr Stuart Burns explains what new researchers to the field of multiferroics will gain from the article:
    “We structured the review as a build-your-own-experiment starter pack: readers will be taken through the chronology of BFO, a selection of techniques to utilize (alongside the advantages and pitfalls of each) and various interesting ways to modify the physics at play. With these pieces in place, experimentalists will know what to expect, and can focus on engineering new low-energy devices and memory architectures.”
    The other lead author, Oliver Paull, says “We hope that other researchers in our field will use this work to train their students, learn the nuances of the material, and have a one-stop reference article which contains all pertinent references — the latter in itself an extremely valuable contribution.”
    Prof Nagy Valanoor added “The most fulfilling aspect of this paper was its style as a textbook chapter. We left no stone unturned!”
    The paper also discusses the incorporation of BFO into functional devices that use the cross-coupling between ferroelectricity and magnetism, as well as very new fields such as antiferromagnetic spintronics, where the quantum mechanical property of the spin of the electron can be used to process information.

  • A wearable sensor to help ALS patients communicate

    People with amyotrophic lateral sclerosis (ALS) suffer from a gradual decline in their ability to control their muscles. As a result, they often lose the ability to speak, making it difficult to communicate with others.
    A team of MIT researchers has now designed a stretchable, skin-like device that can be attached to a patient’s face and can measure small movements such as a twitch or a smile. Using this approach, patients could communicate a variety of sentiments, such as “I love you” or “I’m hungry,” with small movements that are measured and interpreted by the device.
    The researchers hope that their new device would allow patients to communicate in a more natural way, without having to deal with bulky equipment. The wearable sensor is thin and can be camouflaged with makeup to match any skin tone, making it unobtrusive.
    “Not only are our devices malleable, soft, disposable, and light, they’re also visually invisible,” says Canan Dagdeviren, the LG Electronics Career Development Assistant Professor of Media Arts and Sciences at MIT and the leader of the research team. “You can camouflage it and nobody would think that you have something on your skin.”
    The researchers tested the initial version of their device in two ALS patients (one female and one male, for gender balance) and showed that it could accurately distinguish three different facial expressions — smile, open mouth, and pursed lips.
    MIT graduate student Farita Tasnim and former research scientist Tao Sun are the lead authors of the study, which appears today in Nature Biomedical Engineering. Other MIT authors are undergraduate Rachel McIntosh, postdoc Dana Solav, and research scientist Lin Zhang. Yuandong Gu of the A*STAR Institute of Microelectronics in Singapore and Nikta Amiri, Mostafa Tavakkoli Anbarani, and M. Amin Karami of the University at Buffalo are also authors.

    A skin-like sensor
    Dagdeviren’s lab, the Conformable Decoders group, specializes in developing conformable (flexible and stretchable) electronic devices that can adhere to the body for a variety of medical applications. She became interested in working on ways to help patients with neuromuscular disorders communicate after meeting Stephen Hawking in 2016, when the world-renowned physicist visited Harvard University and Dagdeviren was a junior fellow in Harvard’s Society of Fellows.
    Hawking, who passed away in 2018, suffered from a slow-progressing form of ALS. He was able to communicate using an infrared sensor that could detect twitches of his cheek, which moved a cursor across rows and columns of letters. While effective, this process could be time-consuming and required bulky equipment.
    Other ALS patients use similar devices that measure the electrical activity of the nerves that control the facial muscles. However, this approach also requires cumbersome equipment, and it is not always accurate.
    “These devices are very hard, planar, and boxy, and reliability is a big issue. You may not get consistent results, even from the same patients within the same day,” Dagdeviren says.

    Most ALS patients also eventually lose the ability to control their limbs, so typing is not a viable strategy to help them communicate. The MIT team set out to design a wearable interface that patients could use to communicate in a more natural way, without the bulky equipment required by current technologies.
    The device they created consists of four piezoelectric sensors embedded in a thin silicone film. The sensors, which are made of aluminum nitride, can detect mechanical deformation of the skin and convert it into an electric voltage that can be easily measured. All of these components are easy to mass-produce, so the researchers estimate that each device would cost around $10.
    The researchers used a process called digital image correlation on healthy volunteers to help them select the most useful locations to place the sensor. They painted a random black-and-white speckle pattern on the face and then took many images of the area with multiple cameras as the subjects performed facial motions such as smiling, twitching the cheek, or mouthing the shape of certain letters. The images were processed by software that analyzes how the small dots move in relation to each other to determine the amount of strain experienced in each area.
    “We had subjects doing different motions, and we created strain maps of each part of the face,” McIntosh says. “Then we looked at our strain maps and determined where on the face we were seeing a correct strain level for our device, and determined that that was an appropriate place to put the device for our trials.”
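    The strain estimate at the heart of that speckle-tracking step is simple in principle: track how the spacing between neighbouring dots changes between frames. A one-dimensional sketch (real digital image correlation works on 2-D image subsets, and all numbers here are made up):

```python
# One-dimensional illustration of the strain estimate behind digital image
# correlation: strain is the fractional change in spacing between tracked
# speckle dots. Real DIC correlates 2-D image subsets; numbers are invented.
import numpy as np

dots_rest = np.array([0.0, 1.0, 2.0, 3.0])      # dot positions at rest (mm)
dots_smile = np.array([0.0, 1.04, 2.09, 3.15])  # positions mid-smile (mm)

spacing_rest = np.diff(dots_rest)
spacing_smile = np.diff(dots_smile)
strain = (spacing_smile - spacing_rest) / spacing_rest
print(strain)  # local strain between each dot pair, roughly [0.04 0.05 0.06]
```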
    The researchers also used the measurements of skin deformations to train a machine-learning algorithm to distinguish between a smile, open mouth, and pursed lips. Using this algorithm, they tested the devices with two ALS patients, and were able to achieve about 75 percent accuracy in distinguishing between these different movements. The accuracy rate in healthy subjects was 87 percent.
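    The classification step can likewise be sketched generically. The study’s actual model and features are not described in this article, so the example below uses an off-the-shelf scikit-learn classifier on invented four-channel sensor features.

```python
# Generic sketch of the expression-classification step: train a classifier on
# feature vectors from the four piezoelectric channels and predict one of the
# three expressions. Everything below is illustrative, not the study's model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
labels = ["smile", "open_mouth", "pursed_lips"]

# Fake dataset: 60 trials x 4 sensor channels (e.g. peak voltage per channel).
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(20, 4)) for i in range(3)])
y = np.repeat(labels, 20)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```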
    Enhanced communication
    Based on these detectable facial movements, a library of phrases or words could be created to correspond to different combinations of movements, the researchers say.
    “We can create customizable messages based on the movements that you can do,” Dagdeviren says. “You can technically create thousands of messages that right now no other technology is available to do. It all depends on your library configuration, which can be designed for a particular patient or group of patients.”
    The information from the sensor is sent to a handheld processing unit, which analyzes it using the algorithm that the researchers trained to distinguish between facial movements. In the current prototype, this unit is wired to the sensor, but the connection could also be made wireless for easier use, the researchers say.
    The researchers have filed for a patent on this technology and they now plan to test it with additional patients. In addition to helping patients communicate, the device could also be used to track the progression of a patient’s disease, or to measure whether treatments they are receiving are having any effect, the researchers say.
    “There are a lot of clinical trials that are testing whether or not a particular treatment is effective for reversing ALS,” Tasnim says. “Instead of just relying on the patients to report that they feel better or they feel stronger, this device could give a quantitative measure to track the effectiveness.”
    The research was funded by the MIT Media Lab Consortium, the National Science Foundation, and the National Institute of Biomedical Imaging and Bioengineering.

  • 'Spooky' similarity in how brains and computers see

    The brain detects 3D shape fragments (bumps, hollows, shafts, spheres) in the beginning stages of object vision — a newly discovered strategy of natural intelligence that Johns Hopkins University researchers also found in artificial intelligence networks trained to recognize visual objects.
    A new paper in Current Biology details how neurons in area V4, the first stage specific to the brain’s object vision pathway, represent 3D shape fragments, not just the 2D shapes used to study V4 for the last 40 years. The Johns Hopkins researchers then identified nearly identical responses of artificial neurons, in an early stage (layer 3) of AlexNet, an advanced computer vision network. In both natural and artificial vision, early detection of 3D shape presumably aids interpretation of solid, 3D objects in the real world.
    “I was surprised to see strong, clear signals for 3D shape as early as V4,” said Ed Connor, a neuroscience professor and director of the Zanvyl Krieger Mind/Brain Institute. “But I never would have guessed in a million years that you would see the same thing happening in AlexNet, which is only trained to translate 2D photographs into object labels.”
    One of the long-standing challenges for artificial intelligence has been to replicate human vision. Deep (multilayer) networks like AlexNet have achieved major gains in object recognition, based on high-capacity graphics processing units (GPUs) developed for gaming and on massive training sets fed by the explosion of images and videos on the Internet.
    Connor and his team applied the same tests of image responses to natural and artificial neurons and discovered remarkably similar response patterns in V4 and AlexNet layer 3. What explains what Connor describes as a “spooky correspondence” between the brain — a product of evolution and lifetime learning — and AlexNet — designed by computer scientists and trained to label object photographs?
    AlexNet and similar deep networks were actually designed in part based on the multi-stage visual networks in the brain, Connor said. He said the close similarities they observed may point to future opportunities to leverage correlations between natural and artificial intelligence.
    “Artificial networks are the most promising current models for understanding the brain. Conversely, the brain is the best source of strategies for bringing artificial intelligence closer to natural intelligence,” Connor said.
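    For readers who want to probe the artificial side of such comparisons, intermediate activations are easy to extract from a pretrained AlexNet with a forward hook in PyTorch. Which torchvision layer corresponds to the paper’s “layer 3” is an assumption here; index 6 below is simply the third convolutional layer in torchvision’s layout.

```python
# Pull intermediate activations out of a pretrained AlexNet with a forward
# hook. Which layer corresponds to the paper's "layer 3" is an assumption;
# features[6] is the third convolutional layer in torchvision's layout.
import torch
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
activations = {}

def hook(module, inputs, output):
    activations["conv3"] = output.detach()

model.features[6].register_forward_hook(hook)  # third conv layer

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # stand-in for a shape-fragment image
print(activations["conv3"].shape)       # torch.Size([1, 384, 13, 13])
```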

    Story Source:
    Materials provided by Johns Hopkins University. Note: Content may be edited for style and length.

  • For the first time: Realistic simulation of plasma edge instabilities in tokamaks

    Among the loads to which the plasma vessel in a fusion device may be exposed, so-called edge localised modes are particularly undesirable. Computer simulations have now, for the first time, explained the origin and course of this plasma-edge instability in detail.
    Edge Localised Modes, ELMs for short, are among the disturbances of plasma confinement caused by the interaction between the charged plasma particles and the confining magnetic field cage. During ELM events, the edge plasma loses its confinement for a short time and periodically throws plasma particles and energy outwards onto the vessel walls. Typically, one tenth of the total energy content can thus be ejected abruptly. While the present generation of medium-sized fusion devices can cope with this, large devices such as ITER or a future power plant would not be able to withstand this strain.
    Experimental methods to attenuate, suppress or avoid ELMs have already been successfully developed in current fusion devices (see PI 3/2020). Building on extensive previous work, computational simulations have now for the first time identified the trigger responsible for the explosive onset of these edge instabilities and reconstructed the course of several ELM cycles, in good agreement with experimentally observed values. A publication accepted in the scientific journal Nuclear Fusion describes this important prerequisite for predicting and avoiding ELM instabilities in future fusion devices.
    The ELM instability builds up during a quiet phase of about 5 to 20 milliseconds, depending on the external conditions, until, within half a millisecond, between 5 and 15 percent of the energy stored in the plasma is flung onto the walls. Then the equilibrium is restored until the next ELM eruption follows.
    The plasma theorists led by first author Andres Cathey of IPP, who come from several laboratories of the European fusion programme EUROfusion, were able to describe and explain the complex physical processes behind this phenomenon in detail: a non-linear interplay between destabilising effects (the steep rise in plasma pressure at the plasma edge and the increase in current density) and the stabilising plasma flow. When the heating power fed into the plasma is changed in the simulation, the calculated repetition rate of the ELMs (their frequency) responds just as it does when the heating power is increased in a plasma experiment at the ASDEX Upgrade tokamak: experiment and simulation agree.
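    The cycle described above has the flavour of a relaxation oscillator: pressure builds under heating until a stability threshold is reached, a crash expels part of the stored energy, and the build-up restarts. The toy sketch below illustrates only that qualitative cycle, including why more heating means more frequent ELMs; it is not the JOREK physics.

```python
# Toy relaxation-oscillator picture of an ELM cycle: edge pressure rises under
# heating until it hits a stability threshold, then a crash expels a fraction
# of the stored energy and the build-up restarts. A cartoon of the qualitative
# cycle, not the JOREK simulation physics.
heating = 0.8          # arbitrary heating rate per time step
threshold = 10.0       # pressure at which the ELM is triggered
crash_fraction = 0.10  # ~10% of stored energy expelled per ELM

pressure, crash_times = 2.0, []
for step in range(100):  # time in arbitrary units
    pressure += heating
    if pressure >= threshold:
        pressure *= (1.0 - crash_fraction)
        crash_times.append(step)

# Higher heating -> shorter build-up -> higher ELM frequency, as in the study.
print(crash_times)
```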
    Although the processes take place in a very short time, simulating them requires a great deal of computing effort, because the simulation must resolve both the short ELM crash and the long development phase between two ELMs into small calculation steps. This calculation problem could only be solved with one of the fastest supercomputers currently available.
    The simulations used the JOREK code, a non-linear code for computing tokamak plasmas in realistic geometry, which is being developed in European and international cooperation with strong contributions from IPP.

    Story Source:
    Materials provided by Max-Planck-Institut für Plasmaphysik (IPP). Note: Content may be edited for style and length.