More stories

  • 'Magic' angle graphene and the creation of unexpected topological quantum states

    Electrons inhabit a strange and topsy-turvy world. These infinitesimally small particles have never ceased to amaze and mystify despite the more than a century that scientists have studied them. Now, in an even more amazing twist, physicists have discovered that, under certain conditions, interacting electrons can create what are called “topological quantum states.” This finding, which was recently published in the journal Nature, has implications for many technological fields of study, especially information technology.
    Topological states of matter are particularly intriguing classes of quantum phenomena. Their study combines quantum physics with topology, which is the branch of theoretical mathematics that studies geometric properties that can be deformed but not intrinsically changed. Topological quantum states first came to the public’s attention in 2016 when three scientists — Princeton’s Duncan Haldane, who is Princeton’s Thomas D. Jones Professor of Mathematical Physics and Sherman Fairchild University Professor of Physics, together with David Thouless and Michael Kosterlitz — were awarded the Nobel Prize for their work in uncovering the role of topology in electronic materials.
    “The last decade has seen quite a lot of excitement about new topological quantum states of electrons,” said Ali Yazdani, the Class of 1909 Professor of Physics at Princeton and the senior author of the study. “Most of what we have uncovered in the last decade has been focused on how electrons get these topological properties, without thinking about them interacting with one another.”
    But by using a material known as magic-angle twisted bilayer graphene, Yazdani and his team were able to explore how interacting electrons can give rise to surprising phases of matter.
    The remarkable properties of this material were discovered two years ago, when Pablo Jarillo-Herrero and his team at the Massachusetts Institute of Technology (MIT) used it to induce superconductivity — a state in which electrons flow freely without any resistance. The discovery was immediately recognized as a new material platform for exploring unusual quantum phenomena.
    Yazdani and his fellow researchers were intrigued by this discovery and set out to further explore the intricacies of superconductivity.

    But what they discovered led them down a different and untrodden path.
    “This was a wonderful detour that came out of nowhere,” said Kevin Nuckolls, the lead author of the paper and a graduate student in physics. “It was totally unexpected, and something we noticed was going to be important.”
    Following the example of Jarillo-Herrero and his team, Yazdani, Nuckolls and the other researchers focused their investigation on twisted bilayer graphene.
    “It’s really a miracle material,” Nuckolls said. “It’s a two-dimensional lattice of carbon atoms that’s a great electrical conductor and is one of the strongest crystals known.”
    Graphene is produced in a deceptively simple but painstaking manner: a bulk crystal of graphite, the same pure graphite in pencils, is exfoliated using sticky tape to remove the top layers until finally reaching a single-atom-thin layer of carbon, with atoms arranged in a flat honeycomb lattice pattern.

    To get the desired quantum effect, the Princeton researchers, following the work of Jarillo-Herrero, placed two sheets of graphene on top of each other with the top layer angled slightly. This twisting creates a moiré pattern, which resembles and is named after a common French textile design. The important point, however, is the angle at which the top layer of graphene is positioned: precisely 1.1 degrees, the “magic” angle that produces the quantum effect.
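    For a sense of the length scales involved, here is a minimal back-of-the-envelope sketch (not from the study) using the standard moiré relation for two identical, slightly rotated lattices, λ ≈ a / (2 sin(θ/2)); the graphene lattice constant of about 0.246 nanometers is an assumed input.

    ```python
    import math

    # Assumed inputs for illustration: graphene's lattice constant and the magic angle.
    a_nm = 0.246        # graphene lattice constant, in nanometers
    theta_deg = 1.1     # twist angle between the two graphene sheets, in degrees

    # Standard moire relation for two identical lattices rotated by a small angle.
    theta_rad = math.radians(theta_deg)
    moire_period_nm = a_nm / (2 * math.sin(theta_rad / 2))

    print(f"Moire superlattice period at {theta_deg} degrees: {moire_period_nm:.1f} nm")
    # Roughly 13 nm, far larger than the atomic spacing; this long-wavelength
    # repeat is the moire pattern described above.
    ```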
    “It’s such a weird glitch in nature,” Nuckolls said, “that it is exactly this one angle that needs to be achieved.” Angling the top layer of graphene at 1.2 degrees, for example, produces no effect.
    The researchers worked at extremely low temperatures and applied a slight magnetic field. They then used a machine called a scanning tunneling microscope, which relies on a technique called “quantum tunneling” rather than light to view the atomic and subatomic world. They scanned the microscope’s conductive metal tip over the surface of the magic-angle twisted graphene and were able to detect the energy levels of the electrons.
    They found that the magic-angle twist changed how electrons moved on the graphene sheet. “It creates a condition which forces the electrons to be at the same energy,” said Yazdani. “We call this a ‘flat band.’”
    When electrons all have the same energy — that is, when they occupy a flat band — they interact with each other very strongly. “This interplay can make electrons do many exotic things,” Yazdani said.
    One of these “exotic” things, the researchers discovered, was the creation of unexpected and spontaneous topological states.
    “This twisting of the graphene creates the right conditions to create a very strong interaction between electrons,” Yazdani explained. “And this interaction unexpectedly favors electrons to organize themselves into a series of topological quantum states.”
    Specifically, they discovered that the interaction between electrons creates what are called topological insulators. These are unusual materials that act as insulators in their interiors, which means that the electrons inside are not free to move around and therefore do not conduct electricity. However, the electrons on the edges are free to move around, meaning they are conductive. Moreover, because of the special properties of topology, the electrons flowing along the edges are not hampered by any defects or deformations. They flow continuously and effectively circumvent the constraints — such as minute imperfections in a material’s surface — that typically impede the movement of electrons.
    During the course of the work, Yazdani’s experimental group teamed up with two other Princetonians — Andrei Bernevig, professor of physics, and Biao Lian, assistant professor of physics — to understand the underlying physical mechanism for their findings.
    “Our theory shows that two important ingredients — interactions and topology — which in nature mostly appear decoupled from each other, combine in this system,” Bernevig said. This coupling creates the topological insulator states that were observed experimentally.
    Although the field of quantum topology is relatively new, it holds great potential for revolutionizing the areas of electrical engineering, materials science and especially computer science.
    “People talk a lot about its relevance to quantum computing, where you can use these topological quantum states to make better types of quantum bits,” Yazdani said. “The motivation for what we’re trying to do is to understand how quantum information can be encoded inside a topological phase. Research in this area is producing exciting new science and can have potential impact in advancing quantum information technologies.”
    Yazdani and his team will continue their research into understanding how the interactions of electrons give rise to different topological states.
    “The interplay between the topology and superconductivity in this material system is quite fascinating and is something we will try to understand next,” Yazdani said.

  • Toward imperceptible electronics that you cannot see or feel

    Researchers have fabricated transparent, ultrathin, flexible sensors with cross-aligned silver nanowire microelectronics, produced with a printing technique that would be inexpensive and straightforward to mass-produce. The advance could find use in biometrics and many other applications that require the underlying surface to remain visible.

  • Fans are not amused by decisions made by video assistants

    Since the 2019/20 season, controversial referee calls in the English Premier League may be technically reviewed and, if deemed necessary, corrected. Using a Twitter analysis of 129 games in the English Premier League, a research team from the Technical University of Munich (TUM) has now determined how decisions made by video referees affect the mood of the fans.
    For its 2019/20 season, the English Premier League introduced the video assistant referee (VAR). Dr. Otto Kolbinger and Melanie Knopp from the Chair of Performance Analysis and Sports Informatics at the Technical University of Munich have now investigated the extent to which this influences the mood of audiences.
    A total of 643,251 English-language tweets from the social media channel Twitter were included in the study, which investigated 94 VAR incidents from 129 games. Of these, over 58,000 tweets (9.1 percent) were directly related to the video referee.
    Analyzing tweets using artificial intelligence
    For their analysis, the team employed “text mining,” an algorithm-based analysis process which unearths structures of meaning buried in text data. The study focused on the automatic extraction of tacit knowledge from large amounts of text data, in this case tweets, collected via an interface.
    “We used the official hashtag for each game to ensure that the tweets really refer to the game in question,” explained Dr. Kolbinger, elucidating the procedure. “We also, for the first time ever, used a new text classification algorithm. In our case, it performed better than algorithms used in previous studies.”
    To avoid so-called overfitting — the over-adaptation of a model to a given data set — the team allowed only a fraction of the variables to flow into each individual step during model fitting.
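    The study does not publish its code, so purely as an illustration of that idea, the sketch below uses a random forest in scikit-learn, where the max_features parameter restricts each split to a fraction of the variables; the feature matrix X and sentiment labels y are placeholders.

    ```python
    # Illustration only: limiting the variables available at each step of model
    # fitting (feature subsampling) as a guard against overfitting.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 50))      # placeholder tweet features (e.g., word counts)
    y = rng.integers(0, 2, size=1000)    # placeholder labels (0 = negative, 1 = positive)

    # max_features=0.3: each split may consider only 30% of the variables, so no
    # single variable can dominate the fitted model.
    model = RandomForestClassifier(n_estimators=200, max_features=0.3, random_state=0)
    print(cross_val_score(model, X, y, cv=5).mean())
    ```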

    The use of video assistants kills the mood
    In its data analysis, the team examined whether tweets referring to a specific VAR situation are formulated positively or negatively. They found that the average sentiment of tweets relating to decisions by the video referee was significantly lower than that of other tweets: 76.24 percent of the 58,000 tweets were negative, 12.33 percent positive and 11.43 percent neutral.
    The research team also examined the average sentiment of all tweets for a given match chronologically. It turns out that the mood of tweets published after a VAR incident is significantly worse than that of tweets published before the incident.
    This slump lasted 20 minutes on average. Deploying VAR in a match produces a negative sentiment on Twitter. It was this realization that led to the striking study title, rich in associations: “Video kills the sentiment.”
    More transparent communication of VAR decisions
    According to the researchers, the status quo is unsatisfactory, which is why they are calling on the governing bodies of the European football associations and leagues to improve the system.

    “The football associations should attempt to communicate all VAR decisions with greater transparency,” recommends Dr. Kolbinger. “To ensure this transparency, the associations could broadcast the communications between the referee on the field and the video referee, as is done in field hockey. An alternative would be to introduce the option of a ‘coaches challenge’, as in American football. But this is all just food for thought, based on our results.”
    Assess audience responses quantitatively and qualitatively
    “The research project led by Dr. Otto Kolbinger and Melanie Knopp is a pioneering achievement,” says Prof. Lames, who heads the Chair of Performance Analysis and Sports Informatics. “They deployed a technology that they developed and applied for the first time in Germany. It is an innovative and groundbreaking contribution that advances science in this area.”
    “These sentiment analyses can be used to measure reactions from audiences both quantitatively and qualitatively,” explains Prof. Lames. “In addition, we can investigate assessments and emotions, which is an extremely valuable marketing tool.”

  • Grasping exponential growth

    Most people underestimate exponential growth, including when it comes to the spread of the coronavirus. The ability to grasp the magnitude of exponential growth depends on the way in which it is communicated. Using the right framing helps to understand the benefit of mitigation measures.
    The coronavirus outbreak offered the public a crash course in statistics, with terms like doubling time, logarithmic scales, R factor, rolling averages, and excess mortality now on everyone’s tongue. However, simply having heard these terms does not mean that someone will be able to comprehend the speed of the spread.
    Exponential growth is a notoriously difficult concept to understand. This difficulty can be illustrated by an old Indian legend about a king who was tricked by one of his advisers: “Noble lord, I want nothing more than a chessboard to be filled with grains of rice. Place one grain on the first square and double the amount of grain for each square that follows.”
    The king agreed to the deal, seemingly unaware of the explosive growth that would result from doubling the amount of grain for each of the 64 chessboard squares. At the end of the procedure, he would owe his adviser no less than 18 quintillion, 446 quadrillion, 744 trillion, 73 billion, 709 million, 551 thousand and 615 grains — the equivalent of around 11 billion train carriages full of rice.
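    That total is simply a doubling series summed over the 64 squares, 2^64 − 1, which a two-line check confirms (an illustration, not part of the study):

    ```python
    # 1 + 2 + 4 + ... doubled across all 64 chessboard squares equals 2**64 - 1.
    total_grains = sum(2**square for square in range(64))
    print(total_grains)                  # 18446744073709551615
    print(total_grains == 2**64 - 1)     # True
    ```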
    The tendency to underestimate exponential growth can result in negative consequences during a pandemic. If people misjudge how quickly the virus can spread, then they are less likely to take measures such as mask wearing, social distancing, or working from home. Instead, people may perceive such measures as exaggerated.
    A new research paper published by the journal PLOS ONE from ETH Zurich’s Center for Law and Economics and the Lucerne University of Applied Sciences and Arts has taken a closer look at this behavioural phenomenon, known as exponential growth bias. Martin Schonger, lecturer and director of a study programme at HSLU and Senior Research Fellow at ETH Zürich, and doctoral researcher Daniela Sele wanted to find out whether the way in which the exponential spread of infectious disease is communicated can affect the magnitude of this bias. From previous experiments, the researchers knew that people underestimate exponential growth even when they are aware of exponential growth bias. In other words, informing the public of potential bias does little to improve perception: informed people still underestimate what exponential growth really means in practice, just like people who are unaware of the bias.

    Doubling time — a concept easier to understand than growth rate
    The research team conducted an experiment in which over 400 participants were presented with the same scenario: a country currently has a thousand cases, and this figure climbs by 26 percent every day. With this exponential spread of the virus, the country would reach one million cases in 30 days. However, there is a chance to reduce the growth rate from 26 percent to 9 percent by adopting mitigation measures.
    Researchers quizzed participants on the situation, framing their questions from different perspectives: How many cases can be prevented by adopting mitigation measures? By adopting the measures, how much time can be gained before reaching one million cases? How many cases will there be after 30 days if mitigation measures lengthen the doubling time from three days to eight days? By the way, extending the doubling time like this is equivalent to reducing the growth rate from 26 percent to 9 percent — something that few people recognise intuitively.
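    All of the quantities in this scenario follow from the same compound-growth formula, cases(t) = start × (1 + rate)^t. The short check below (an illustration only, not the study's materials) reproduces the one million cases, the three- and eight-day doubling times, and the 50 days gained mentioned later in the article.

    ```python
    import math

    start_cases = 1_000
    fast, slow = 0.26, 0.09      # daily growth rates without and with mitigation

    def cases(rate, days, start=start_cases):
        """Cases after `days` of compound daily growth at `rate`."""
        return start * (1 + rate) ** days

    def doubling_time(rate):
        return math.log(2) / math.log(1 + rate)

    def days_to_reach(target, rate, start=start_cases):
        return math.log(target / start) / math.log(1 + rate)

    print(round(cases(fast, 30)))                    # ~1,026,000: about one million cases in 30 days
    print(round(cases(slow, 30)))                    # ~13,000 cases after 30 days with mitigation
    print(round(cases(fast, 30) - cases(slow, 30)))  # ~1,013,000 cases prevented ("almost one million")
    print(round(doubling_time(fast)), round(doubling_time(slow)))  # doubling every 3 vs. every 8 days
    print(round(days_to_reach(1_000_000, slow) - 30))              # ~50 extra days before one million cases
    ```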
    Researchers stated that they were surprised by the clear and consistent results of the experiment. Their first finding: talking about growth rates is an ineffective way of communicating the spread of pandemic diseases. Over 90 percent of participants drastically underestimated the number of infections after 30 days of exponential spread. They were much more on the mark, however, when the question was framed using doubling times.
    Imagining the impact of mitigation
    The researchers’ second finding was that people have trouble gauging how many infections can be prevented with mitigation measures. When asked how many infections could be prevented in the scenario above (starting from a thousand cases, a growth rate of 9 percent instead of 26 percent over 30 days), people responded with estimates that were extremely far off. The typical (median) participant believed that 8,600 cases could be prevented, when, in fact, the figure is almost one million.

    However, when participants were asked about the number of days that could be gained by adopting mitigation measures — for example, until hospitals are overloaded, or until there is a vaccine on the market — their estimates were significantly better.
    The experiment achieved its best results with questions framed from the perspective of time gained and the impact of slowing down doubling times. A statement that combines both of these would be, for example: “If each of us adopts preventative measures today, the spread of the virus will slow down — we can estimate that cases will double only every eight days, as opposed to every three days. This allows 50 additional days to implement preparatory measures to combat the virus (e.g., by providing much needed supplies to hospitals, or finding treatments and vaccines) before reaching one million cases.”
    Choosing the right words
    The study, conducted during the Swiss partial lockdown in spring of 2020, did not focus on how public authorities and the media discussed the spread of the virus. However, Sele and Schonger have been following the way in which the drastic measures were communicated and comparing these observations with their research findings.
    According to the authors, the Federal Office of Public Health (FOPH) and the scientific task force often use doubling times rather than growth rates. In the experiment, they found that this method of framing communication surrounding the coronavirus improved people’s understanding. However, the FOPH made little mention of the potential for time gained, even though the research findings indicate that this information helps to better transmit the message.
    The researchers suspect that the direct impact of official communication is limited. Reporting in the press might play a more significant role, but the media mostly focus on case numbers and rarely frame communication in the context of time gained.
    Schonger and Sele see COVID measures as just one application of framing theory to the communication of exponential growth: similar phenomena might also be observed in the banking and finance industry, or in legal or environmental policy-making.

  • Wearable sensor may signal you’re developing COVID-19 — even if your symptoms are subtle

    A smart ring that generates continuous temperature data may foreshadow COVID-19, even in cases when infection is not suspected. The device, which may be a better illness indicator than a thermometer, could lead to earlier isolation and testing, curbing the spread of infectious diseases, according to a preliminary study led by UC San Francisco and UC San Diego.
    An analysis of data from 50 people previously infected with COVID-19, published online in the peer-reviewed journal Scientific Reports on Dec. 14, 2020, found that data obtained from the commercially available smart ring accurately identified higher temperatures in people with symptoms of COVID-19.
    While it is not known how effectively the smart ring can detect asymptomatic COVID-19, which affects between 10 and 70 percent of those infected according to the Centers for Disease Control and Prevention, the authors reported that for 38 of the 50 participants, fever was identified when symptoms were unreported or even unnoticed.
    Of note, the researchers analyzed weeks of temperature data to determine typical ranges for each of the 50 participants. “Many factors impact body temperature,” said principal investigator and senior author Ashley Mason, PhD, assistant professor in the UCSF Department of Psychiatry and faculty at the Osher Center for Integrative Medicine. “Single-point temperature measurement is not very meaningful. People go in and out of fever, and a temperature that is clearly elevated for one person may not be a major aberration for another person. Continual temperature information can better identify fever.”
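    The paper's detection procedure is more sophisticated, but the core idea of an individualized baseline can be sketched roughly as follows; the 30-day rolling window and two-standard-deviation threshold are illustrative assumptions, not the authors' algorithm.

    ```python
    # Illustrative only: flag temperature readings that are unusually high *for that
    # person*, rather than comparing everyone to a single fixed fever cutoff.
    import pandas as pd

    def flag_elevated(temps: pd.Series, baseline_days: int = 30, n_sigmas: float = 2.0) -> pd.Series:
        """temps: one temperature reading per night, indexed by date.
        Returns a boolean Series marking nights well above the wearer's own baseline."""
        baseline_mean = temps.rolling(baseline_days, min_periods=14).mean().shift(1)
        baseline_std = temps.rolling(baseline_days, min_periods=14).std().shift(1)
        return temps > baseline_mean + n_sigmas * baseline_std

    # Example with made-up data: a wearer whose normal nights hover around 36.5 C.
    nights = pd.date_range("2020-03-01", periods=45, freq="D")
    readings = pd.Series([36.5] * 40 + [37.4, 37.8, 38.0, 37.6, 36.9], index=nights)
    print(flag_elevated(readings).tail())
    ```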
    According to co-author Frederick Hecht, MD, professor of medicine and director of research at the UCSF Osher Center for Integrative Medicine, this work is “important for showing the potential of wearable devices in early detection of COVID-19, as well as other infectious diseases.”
    Asymptomatic Illness or Illness with Unreported/Unnoticed Symptoms?
    While the number of study participants was too small to extrapolate for the whole population, the authors said they were encouraged that the smart ring detected illness when symptoms were subtle or unnoticed. “This raises the question of how many asymptomatic cases are truly asymptomatic and how many might just be unnoticed or unreported,” said first author Benjamin Smarr, PhD, an assistant professor in the Department of Bioengineering and the Halıcıoğlu Data Science Institute at UC San Diego. “By using wearable technology, we’re able to query the body directly.”

    To conduct the study, the researchers used the Oura Ring, a wearable sensor made by the Finnish startup Oura, which pairs to a mobile app. The ring continuously measures sleep and wakefulness, heart and respiratory rates, and temperature. The researchers provided the rings to nearly 3,400 health care workers across the U.S., and worked with Oura to invite existing users to participate in the study via the Oura app, resulting in enrollment of more than 65,000 participants worldwide in a now concluded prospective, observational study, which the UC researchers are preparing for publication.
    The participants in the preliminary study reported that they had previously been infected with COVID-19. A continuous record of their biomonitoring data was still available for analysis from the weeks before their infection, through the time of enrollment until the end of the study.
    No-touch thermometers that detect infrared radiation from the forehead are used to quickly screen for fever in airports and offices and are believed to detect some COVID-19 cases, but many studies suggest their value is limited. The ring records temperature all the time, so each measurement is contextualized by the history of that individual, making relative elevations much easier to spot. “Context matters in temperature assessment,” Smarr emphasized.
    Heart Rate, Respiration Rate Provide Other Clues
    Other illness-associated changes that the rings detected included increased heart rate, reduced heart rate variability and increased respiration rate, but these changes were not as strongly correlated, the authors noted.

    The researchers are using data from the larger, prospective study to develop an algorithm that can identify, from the data collected by wearable devices, when a user appears to be getting sick. Mason’s team can then prompt the user to complete a self-collection COVID-19 test kit. The researchers will evaluate the algorithm in a new study of 4,000 additional participants.
    “The hope is that people infected with COVID will be able to prepare and isolate sooner, call their doctor sooner, notify any folks they’ve been in contact with sooner, and not spread the virus,” Mason said.
    Co-Authors: Sarah Fisher, Anoushka Chowdhary, Karena Puldon, Adam Rao and Frederick Hecht from UCSF; Kirstin Aschbacher from Oura and UCSF; and Stephen Dilchert from CUNY, New York.
    Funding: Oura Health Oy.
    Disclosures: Aschbacher is an employee of Oura Health Oy, in addition to holding an adjunct associate professor position at UCSF. Smarr has worked as a paid consultant at Oura Health Oy within the last 12 months, although not during this research project.

  • Like adults, children by age 3 prefer seeing fractal patterns

    By the time children are 3 years old they already have an adult-like preference for visual fractal patterns commonly seen in nature, according to University of Oregon researchers.
    That discovery emerged among children who’ve been raised in a world of Euclidean geometry, such as houses with rooms constructed with straight lines in a simple non-repeating manner, said the study’s lead author Kelly E. Robles, a doctoral student in the UO’s Department of Psychology.
    “Unlike early humans who lived outside on savannahs, modern-day humans spend the majority of their early lives inside these humanmade structures,” Robles said. “So, since children are not heavily exposed to these natural low-to-moderate complexity fractal patterns, this preference must come from something earlier in development or perhaps be innate.”
    The study was published online Nov. 25 in the Nature journal Humanities and Social Sciences Communications. In it, researchers explored how individual differences in processing styles may account for trends in fractal fluency. Previous research had suggested that a preference for fractal patterns may develop as a result of environmental and developmental factors acquired across a person’s lifespan.
    In the UO study, researchers exposed participants — 82 adults, ages 18-33, and 96 children, ages 3-10 — to images of fractal patterns, exact and statistical, ranging in complexity on computer screens.
    Exact fractals are highly ordered such that the same basic pattern repeats exactly at every scale and may possess spatial symmetry such as that seen in snowflakes. Statistical fractals, in contrast, repeat in a similar but not exact fashion across scale and do not possess spatial symmetry as seen in coastlines, clouds, mountains, rivers and trees. Both forms appear in art across many cultures.
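    To make the distinction concrete, the sketch below (not code from the study) applies one branching rule either identically at every scale, yielding an exact fractal, or with random jitter in the angles and lengths, yielding a statistical one; all parameters are illustrative.

    ```python
    # Minimal sketch: the same branching rule drawn exactly and with random jitter,
    # to illustrate "exact" versus "statistical" fractals.
    import math, random

    def branch(x, y, angle, length, depth, jitter=0.0, segments=None):
        """Recursively build a binary branching pattern as a list of line segments.
        jitter = 0.0 repeats the rule identically at every scale (exact fractal);
        jitter > 0.0 perturbs angles and lengths (statistical fractal)."""
        if segments is None:
            segments = []
        if depth == 0:
            return segments
        x2 = x + length * math.cos(angle)
        y2 = y + length * math.sin(angle)
        segments.append(((x, y), (x2, y2)))
        for turn in (+0.5, -0.5):  # two child branches per segment
            a = angle + turn * (1 + jitter * random.uniform(-1, 1))
            l = length * 0.7 * (1 + jitter * random.uniform(-1, 1))
            branch(x2, y2, a, l, depth - 1, jitter, segments)
        return segments

    exact = branch(0, 0, math.pi / 2, 1.0, depth=8, jitter=0.0)        # tree/snowflake-like, self-identical
    statistical = branch(0, 0, math.pi / 2, 1.0, depth=8, jitter=0.3)  # similar but not exact across scales
    print(len(exact), len(statistical))
    ```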

    When viewing the fractal patterns, Robles said, subjects chose favorites between different pairs of images that differed in complexity. When looking at exact fractal patterns, selections involved different pairs of snowflake-like or tree-branch-like images. For the statistical fractals, selections involved choosing between pairs of cloud-like images.
    “Since people prefer a balance of simplicity and complexity, we were looking to confirm that people preferred low-to-moderate complexity in statistically repeating patterns, and that the presence of order in exact repeating patterns allowed for a tolerance of and preference for more complex patterns,” she said.
    Although there were some differences in the preferences of adults and children, the overall trend was similar. Exact patterns with greater complexity were preferred more, while preference for statistical patterns peaked at low-to-moderate complexity and then decreased with additional complexity.
    In subsequent steps with the participants, the UO team was able to rule out the possibility that age-related perceptual strategies or biases may have driven different preferences for statistical and exact patterns.
    “We found that people prefer the most common natural pattern, the statistical fractal patterns of low-moderate complexity, and that this preference does not stem from or vary across decades of exposure to nature or to individual differences in how we process images,” Robles said. “Our preferences for fractals are set before our third birthdays, suggesting that our visual system is tuned to better process these patterns that are highly prevalent in nature.”
    The aesthetic experience of viewing nature’s fractals holds huge potential benefits — ranging from stress reduction to recovery from mental fatigue, said co-author Richard Taylor, professor and head of the UO’s Department of Physics.
    “Nature provides these benefits for free, but we increasingly find ourselves surrounded by urban landscapes devoid of fractals,” he said. “This study shows that incorporating fractals into urban environments can begin providing benefits from a very early age.”
    Taylor is using fractal-inspired designs in his own research in an effort to create implants to treat macular degeneration. He and co-author Margaret Sereno, professor of psychology and director of the Integrative Perception Lab, also have published on the positive aesthetic benefits of installing fractal solar panels and window blinds.
    Fractal carpets, recently installed in the UO’s Phil and Penny Knight Campus for Accelerating Scientific Impact, are seen in the new facility’s virtual grand opening tour. Sereno and Taylor also are collaborating on future applications with Ihab Elzeyadi, a professor in the UO’s Department of Architecture.

  • New computational method validates images without 'ground truth'

    A realtor sends a prospective homebuyer a blurry photograph of a house taken from across the street. The homebuyer can compare it to the real thing — look at the picture, then look at the real house — and see that the bay window is actually two windows close together, the flowers out front are plastic and what looked like a door is actually a hole in the wall.
    What if you aren’t looking at a picture of a house, but something very small — like a protein? There is no way to see it without a specialized device so there’s nothing to judge the image against, no “ground truth,” as it’s called. There isn’t much to do but trust that the imaging equipment and the computer model used to create images are accurate.
    Now, however, research from the lab of Matthew Lew at the McKelvey School of Engineering at Washington University in St. Louis has developed a computational method to determine how much confidence a scientist should have that their measurements, at any given point, are accurate, given the model used to produce them.
    The research was published Dec. 11 in Nature Communications.
    “Fundamentally, this is a forensic tool to tell you if something is right or not,” said Lew, assistant professor in the Preston M. Green Department of Electrical & Systems Engineering. It’s not simply a way to get a sharper picture. “This is a whole new way of validating the trustworthiness of each detail within a scientific image.
    “It’s not about providing better resolution,” he added of the computational method, called Wasserstein-induced flux (WIF). “It’s saying, ‘This part of the image might be wrong or misplaced.'”
    The process used by scientists to “see” the very small — single-molecule localization microscopy (SMLM) — relies on capturing massive amounts of information from the object being imaged. That information is then interpreted by a computer model that ultimately strips away most of the data, reconstructing an ostensibly accurate image — a true picture of a biological structure, like an amyloid protein or a cell membrane.

    There are a few methods already in use to help determine whether an image is, generally speaking, a good representation of the thing being imaged. These methods, however, cannot determine how likely it is that any single data point within an image is accurate.
    Hesam Mazidi, a recent graduate who was a PhD student in Lew’s lab for this research, tackled the problem.
    “We wanted to see if there was a way we could do something about this scenario without ground truth,” he said. “If we could use modeling and algorithmic analysis to quantify if our measurements are faithful, or accurate enough.”
    The researchers didn’t have ground truth — no house to compare to the realtor’s picture — but they weren’t empty handed. They had a trove of data that is usually ignored. Mazidi took advantage of the massive amount of information gathered by the imaging device that usually gets discarded as noise. The distribution of noise is something the researchers can use as ground truth because it conforms to specific laws of physics.
    “He was able to say, ‘I know how the noise of the image is manifested, that’s a fundamental physical law,'” Lew said of Mazidi’s insight.

    “He went back to the noisy, imperfect domain of the actual scientific measurement,” Lew said, meaning all of the data points recorded by the imaging device. “There is real data there that people throw away and ignore.”
    Instead of ignoring it, Mazidi looked to see how well the model predicted the noise — given the final image and the model that created it.
    Analyzing so many data points is akin to running the imaging device over and over again, performing multiple test runs to calibrate it.
    “All of those measurements give us statistical confidence,” Lew said.
    WIF allows them to determine not whether the entire image is probable under the model, but whether any given point in the image is probable, given the assumptions built into the model.
    Ultimately, Mazidi developed a method that can say with strong statistical confidence that any given data point in the final image should or should not be in a particular spot.
    It’s as if the algorithm analyzed the picture of the house and — without ever having seen the place — it cleaned up the image, revealing the hole in the wall.
    In the end, the analysis yields a single number per data point, between -1 and 1. The closer to one, the more confident a scientist can be that a point on an image is, in fact, accurately representing the thing being imaged.
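    The actual WIF calculation is considerably more involved, but the underlying move of treating the known statistics of camera shot noise as a stand-in for ground truth can be illustrated with a toy per-pixel check; the Poisson noise model and the mapping onto a score between -1 and 1 below are assumptions chosen to echo the output format described above, not a reproduction of the method.

    ```python
    # Toy illustration (not the paper's WIF metric): with Poisson shot noise, we can
    # score how consistent each observed pixel count is with a model's predicted
    # intensity, without ever seeing a ground-truth image.
    import numpy as np
    from scipy.stats import poisson

    def per_pixel_consistency(observed_counts, predicted_rate):
        """Return a per-pixel score in [-1, 1]: near +1 means the observation is
        plausible under the model's prediction, near -1 means it is not."""
        observed = np.asarray(observed_counts)
        rate = np.asarray(predicted_rate, dtype=float)
        # Two-sided tail probability of a count at least this extreme under Poisson(rate).
        lower = poisson.cdf(observed, rate)        # P(X <= observed)
        upper = poisson.sf(observed - 1, rate)     # P(X >= observed)
        p_two_sided = 2.0 * np.minimum(lower, upper)
        return np.clip(2.0 * p_two_sided - 1.0, -1.0, 1.0)

    # A pixel where the model predicts 100 photons but 180 arrive scores near -1;
    # a pixel observing 100 photons scores near +1.
    print(per_pixel_consistency([180, 100], [100.0, 100.0]))
    ```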
    This process can also help scientists improve their models. “If you can quantify performance, then you can also improve your model by using the score,” Mazidi said. Without access to ground truth, “it allows us to evaluate performance under real experimental conditions rather than a simulation.”
    The potential uses for WIF are far-reaching. Lew said the next step is to use it to validate machine learning, where biased datasets may produce inaccurate outputs.
    How would a researcher know, in such a case, that their data was biased? “Using this model, you’d be able to test on data that has no ground truth, where you don’t know if the neural network was trained with data that are similar to real-world data.
    “Care has to be taken in every type of measurement you take,” Lew said. “Sometimes we just want to push the big red button and see what we get, but we have to remember, there’s a lot that happens when you push that button.”

  • Challenges of fusing robotics and neuroscience

    Combining neuroscience and robotics research has yielded impressive results in the rehabilitation of paraplegic patients. A research team led by Prof. Gordon Cheng from the Technical University of Munich (TUM) was able to show that exoskeleton training not only helped patients to walk, but also stimulated their healing process. With these findings in mind, Prof. Cheng wants to take the fusion of robotics and neuroscience to the next level.
    Prof. Cheng, by training a paraplegic patient with the exoskeleton in your widely noted study within the “Walk Again” project, you found that patients regained a certain degree of control over the movement of their legs. Back then, this came as a complete surprise to you …
    … and it somehow still is. Even though we had this breakthrough four years ago, this was only the beginning. To my regret, none of these patients is walking around freely and unaided yet. We have only touched the tip of the iceberg. To develop better medical devices, we need to dig deeper in understanding how the brain works and how to translate this into robotics.
    In your paper published in Science Robotics this month, you and your colleague Prof. Nicolelis, a leading expert in neuroscience and in particular in the area of the human-machine interface, argue that some key challenges in the fusion of neuroscience and robotics need to be overcome in order to take the next steps. One of them is to “close the loop between the brain and the machine” — what do you mean by that?
    The idea behind this is that the coupling between the brain and the machine should work in a way where the brain thinks of the machine as an extension of the body. Let’s take driving as an example. While driving a car, you don’t think about your moves, do you? But we still don’t know how this really works. My theory is that the brain somehow adapts to the car as if it is a part of the body. With this general idea in mind, it would be great to have an exoskeleton that would be embraced by the brain in the same way.
    How could this be achieved in practice?
    The exoskeleton that we have been using for our research so far is actually just a big chunk of metal and thus rather cumbersome for the wearer. I want to develop a “soft” exoskeleton — something that you can just wear like a piece of clothing that can both sense the user’s movement intentions and provide instantaneous feedback. Integrating this with recent advances in brain-machine interfaces that allow real-time measurement of brain responses enables the seamless adaptation of such exoskeletons to the needs of individual users. Given the recent technological advances and better understanding of how to decode the user’s momentary brain activity, the time is ripe for their integration into more human-centered or, better, brain-centered solutions.
    What other pieces are still missing? You talked about providing a “more realistic functional model” for both disciplines.
    We have to facilitate the transfer through new developments, for example robots that are closer to human behaviour and the construction of the human body and thus lower the threshold for the use of robots in neuroscience. This is why we need more realistic functional models, which means that robots should be able to mimic human characteristics. Let’s take the example of a humanoid robot actuated with artificial muscles. This natural construction mimicking muscles instead of the traditional motorized actuation would provide neuroscientists with a more realistic model for their studies. We think of this as a win-win situation to facilitate better cooperation between neuroscience and robotics in the future.
    You are not alone in the mission of overcoming these challenges. In your Elite Graduate Program in Neuroengineering, the first and only one of its kind in Germany combining experimental and theoretical neuroscience with in-depth training in engineering, you are bringing together the best students in the field.
    As described above, combining the two disciplines of robotics and neuroscience is a tough exercise, and therefore one of the main reasons why I created this master’s program in Munich. To me, it is important to teach the students to think more broadly and across disciplines, to find previously unimagined solutions. This is why lecturers from various fields, for example hospitals or the sports department, are teaching our students. We need to create a new community and a new culture in the field of engineering. From my standpoint, education is the key factor.

    Story Source:
    Materials provided by Technical University of Munich (TUM).