More stories


    Research reveals how airflow inside a car may affect COVID-19 transmission risk

    A new study of airflow patterns inside a car’s passenger cabin offers some suggestions for potentially reducing the risk of COVID-19 transmission while sharing rides with others.
    The study, by a team of Brown University researchers, used computer models to simulate the airflow inside a compact car with various combinations of windows open or closed. The simulations showed that opening windows — the more windows the better — created airflow patterns that dramatically reduced the concentration of airborne particles exchanged between a driver and a single passenger. Blasting the car’s ventilation system didn’t circulate air nearly as well as a few open windows, the researchers found.
    “Driving around with the windows up and the air conditioning or heat on is definitely the worst scenario, according to our computer simulations,” said Asimanshu Das, a graduate student in Brown’s School of Engineering and co-lead author of the research. “The best scenario we found was having all four windows open, but even having one or two open was far better than having them all closed.”
    Das co-led the research with Varghese Mathai, a former postdoctoral researcher at Brown who is now an assistant professor of physics at the University of Massachusetts, Amherst. The study is published in the journal Science Advances.
The researchers stress that there’s no way to eliminate risk completely — and, of course, current guidance from the U.S. Centers for Disease Control and Prevention (CDC) notes that postponing travel and staying home is the best way to protect personal and community health. The goal of the study was simply to examine how changes in airflow inside a car may worsen or reduce the risk of pathogen transmission.
    The computer models used in the study simulated a car, loosely based on a Toyota Prius, with two people inside — a driver and a passenger sitting in the back seat on the opposite side from the driver. The researchers chose that seating arrangement because it maximizes the physical distance between the two people (though still less than the 6 feet recommended by the CDC). The models simulated airflow around and inside a car moving at 50 miles per hour, as well as the movement and concentration of aerosols coming from both driver and passenger. Aerosols are tiny particles that can linger in the air for extended periods of time. They are thought to be one way in which the SARS-CoV-2 virus is transmitted, particularly in enclosed spaces.


    Part of the reason that opening windows is better in terms of aerosol transmission is because it increases the number of air changes per hour (ACH) inside the car, which helps to reduce the overall concentration of aerosols. But ACH was only part of the story, the researchers say. The study showed that different combinations of open windows created different air currents inside the car that could either increase or decrease exposure to remaining aerosols.
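The role of ACH can be illustrated with a simple well-mixed-volume model, in which aerosol concentration decays exponentially at the ventilation rate. This is only a back-of-the-envelope sketch with made-up ACH values; the study itself used full airflow simulations:

```python
import math

def aerosol_concentration(c0, ach, minutes):
    """Concentration remaining after `minutes`, assuming one well-mixed
    volume ventilated at `ach` air changes per hour and no new source."""
    return c0 * math.exp(-ach * minutes / 60.0)

# The same starting concentration (arbitrary units) decays far faster
# when open windows push the air-change rate up.
closed_cabin = aerosol_concentration(100.0, ach=2.0, minutes=10)
open_windows = aerosol_concentration(100.0, ach=60.0, minutes=10)
```

Even this crude model shows why ACH is only part of the story: it predicts the overall dilution, but says nothing about the air currents that determine where the remaining aerosols travel inside the cabin.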
    Because of the way air flows across the outside of the car, air pressure near the rear windows tends to be higher than pressure at the front windows. As a result, air tends to enter the car through the back windows and exit through the front windows. With all the windows open, this tendency creates two more-or-less independent flows on either side of the cabin. Since the occupants in the simulations were sitting on opposite sides of the cabin, very few particles end up being transferred between the two. The driver in this scenario is at slightly higher risk than the passenger because the average airflow in the car goes from back to front, but both occupants experience a dramatically lower transfer of particles compared to any other scenario.
    The simulations for scenarios in which some but not all windows are down yielded some possibly counterintuitive results. For example, one might expect that opening windows directly beside each occupant might be the simplest way to reduce exposure. The simulations found that while this configuration is better than no windows down at all, it carries a higher exposure risk compared to putting down the window opposite each occupant.
    “When the windows opposite the occupants are open, you get a flow that enters the car behind the driver, sweeps across the cabin behind the passenger and then goes out the passenger-side front window,” said Kenny Breuer, a professor of engineering at Brown and a senior author of the research. “That pattern helps to reduce cross-contamination between the driver and passenger.”
    It’s important to note, the researchers say, that airflow adjustments are no substitute for mask-wearing by both occupants when inside a car. And the findings are limited to potential exposure to lingering aerosols that may contain pathogens. The study did not model larger respiratory droplets or the risk of actually becoming infected by the virus.
    Still, the researchers say the study provides valuable new insights into air circulation patterns inside a car’s passenger compartment — something that had received little attention before now.
    “This is the first study we’re aware of that really looked at the microclimate inside a car,” Breuer said. “There had been some studies that looked at how much external pollution gets into a car, or how long cigarette smoke lingers in a car. But this is the first time anyone has looked at airflow patterns in detail.”
    The research grew out of a COVID-19 research task force established at Brown to gather expertise from across the University to address widely varying aspects of the pandemic. Jeffrey Bailey, an associate professor of pathology and laboratory medicine and a coauthor of the airflow study, leads the group. Bailey was impressed with how quickly the research came together, with Mathai suggesting the use of computer simulations that could be done while laboratory research at Brown was paused for the pandemic.
“This is really a great example of how different disciplines can come together quickly and produce valuable findings,” Bailey said. “I talked to Kenny briefly about this idea, and within three or four days his team was already doing some preliminary testing. That’s one of the great things about being at a place like Brown, where people are eager to collaborate and work across disciplines.”


    New CRISPR-based test for COVID-19 uses a smartphone camera

    Imagine swabbing your nostrils, putting the swab in a device, and getting a read-out on your phone in 15 to 30 minutes that tells you if you are infected with the COVID-19 virus. This has been the vision for a team of scientists at Gladstone Institutes, University of California, Berkeley (UC Berkeley), and University of California, San Francisco (UCSF). And now, they report a scientific breakthrough that brings them closer to making this vision a reality.
    One of the major hurdles to combating the COVID-19 pandemic and fully reopening communities across the country is the availability of mass rapid testing. Knowing who is infected would provide valuable insights about the potential spread and threat of the virus for policymakers and citizens alike.
    Yet, people must often wait several days for their results, or even longer when there is a backlog in processing lab tests. And, the situation is worsened by the fact that most infected people have mild or no symptoms, yet still carry and spread the virus.
    In a new study published in the scientific journal Cell, the team from Gladstone, UC Berkeley, and UCSF has outlined the technology for a CRISPR-based test for COVID-19 that uses a smartphone camera to provide accurate results in under 30 minutes.
    “It has been an urgent task for the scientific community to not only increase testing, but also to provide new testing options,” says Melanie Ott, MD, PhD, director of the Gladstone Institute of Virology and one of the leaders of the study. “The assay we developed could provide rapid, low-cost testing to help control the spread of COVID-19.”
    The technique was designed in collaboration with UC Berkeley bioengineer Daniel Fletcher, PhD, as well as Jennifer Doudna, PhD, who is a senior investigator at Gladstone, a professor at UC Berkeley, president of the Innovative Genomics Institute, and an investigator of the Howard Hughes Medical Institute. Doudna recently won the 2020 Nobel Prize in Chemistry for co-discovering CRISPR-Cas genome editing, the technology that underlies this work.


    Not only can their new diagnostic test generate a positive or negative result, it also measures the viral load (or the concentration of SARS-CoV-2, the virus that causes COVID-19) in a given sample.
    “When coupled with repeated testing, measuring viral load could help determine whether an infection is increasing or decreasing,” says Fletcher, who is also a Chan Zuckerberg Biohub Investigator. “Monitoring the course of a patient’s infection could help health care professionals estimate the stage of infection and predict, in real time, how long is likely needed for recovery.”
    A Simpler Test through Direct Detection
    Current COVID-19 tests use a method called quantitative PCR — the gold standard of testing. However, one of the issues with using this technique to test for SARS-CoV-2 is that it requires DNA. Coronavirus is an RNA virus, which means that to use the PCR approach, the viral RNA must first be converted to DNA. In addition, this technique relies on a two-step chemical reaction, including an amplification step to provide enough of the DNA to make it detectable. So, current tests typically need trained users, specialized reagents, and cumbersome lab equipment, which severely limits where testing can occur and causes delays in receiving results.
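The amplification step the article refers to is exponential: each PCR thermal cycle roughly doubles the DNA present, which is what makes even a few viral copies detectable, at the cost of extra time, reagents, and equipment. A schematic calculation (illustrative only, assuming perfect doubling efficiency):

```python
def pcr_copies(initial_copies: int, cycles: int) -> int:
    """Idealized PCR: each thermal cycle doubles the DNA copies.
    Real reactions run at somewhat less than perfect efficiency."""
    return initial_copies * 2 ** cycles

# Ten starting copies exceed a billion after 27 doubling cycles --
# the payoff of amplification, and also why it takes time.
detectable = pcr_copies(10, 27)
```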
    As an alternative to PCR, scientists are developing testing strategies based on the gene-editing technology CRISPR, which excels at specifically identifying genetic material.


    All CRISPR diagnostics to date have required that the viral RNA be converted to DNA and amplified before it can be detected, adding time and complexity. In contrast, the novel approach described in this recent study skips all the conversion and amplification steps, using CRISPR to directly detect the viral RNA.
    “One reason we’re excited about CRISPR-based diagnostics is the potential for quick, accurate results at the point of need,” says Doudna. “This is especially helpful in places with limited access to testing, or when frequent, rapid testing is needed. It could eliminate a lot of the bottlenecks we’ve seen with COVID-19.”
    Parinaz Fozouni, a UCSF graduate student working in Ott’s lab at Gladstone, had been working on an RNA detection system for HIV for the past few years. But in January 2020, when it became clear that the coronavirus was becoming a bigger issue globally and that testing was a potential pitfall, she and her colleagues decided to shift their focus to COVID-19.
    “We knew the assay we were developing would be a logical fit to help the crisis by allowing rapid testing with minimal resources,” says Fozouni, who is co-first author of the paper, along with Sungmin Son and María Díaz de León Derby from Fletcher’s team at UC Berkeley. “Instead of the well-known CRISPR protein called Cas9, which recognizes and cleaves DNA, we used Cas13, which cleaves RNA.”
    In the new test, the Cas13 protein is combined with a reporter molecule that becomes fluorescent when cut, and then mixed with a patient sample from a nasal swab. The sample is placed in a device that attaches to a smartphone. If the sample contains RNA from SARS-CoV-2, Cas13 will be activated and will cut the reporter molecule, causing the emission of a fluorescent signal. Then, the smartphone camera, essentially converted into a microscope, can detect the fluorescence and report that a swab tested positive for the virus.
    “What really makes this test unique is that it uses a one-step reaction to directly test the viral RNA, as opposed to the two-step process in traditional PCR tests,” says Ott, who is also a professor in the Department of Medicine at UCSF. “The simpler chemistry, paired with the smartphone camera, cuts down detection time and doesn’t require complex lab equipment. It also allows the test to yield quantitative measurements rather than simply a positive or negative result.”
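In an amplification-free assay of this kind, the quantitative readout typically comes from how fast the fluorescence grows: more viral RNA activates more Cas13, so the reporter is cleaved faster and the signal rises more steeply. A minimal sketch of that idea using an ordinary least-squares slope (illustrative only; the paper’s actual analysis pipeline is more involved, and the numbers below are invented):

```python
def fluorescence_slope(times, signals):
    """Least-squares slope of signal vs. time -- a proxy for how
    quickly the Cas13 reporter is being cleaved in the sample."""
    n = len(times)
    mean_t = sum(times) / n
    mean_s = sum(signals) / n
    num = sum((t - mean_t) * (s - mean_s) for t, s in zip(times, signals))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den

# A high-viral-load sample rises steeply; a negative control stays flat.
minutes = [0, 1, 2, 3, 4, 5]
positive = [0, 40, 82, 118, 161, 199]
negative = [0, 1, 1, 2, 2, 3]
```

Because the slope, not just the endpoint, carries information, a strongly positive sample can be called within minutes while a weak one simply needs a longer observation window.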
    The researchers also say that their assay could be adapted to a variety of mobile phones, making the technology easily accessible.
    “We chose to use mobile phones as the basis for our detection device since they have intuitive user interfaces and highly sensitive cameras that we can use to detect fluorescence,” explains Fletcher. “Mobile phones are also mass-produced and cost-effective, demonstrating that specialized lab instruments aren’t necessary for this assay.”
    Accurate and Quick Results to Limit the Pandemic
When the scientists tested their device using patient samples, they confirmed that it could deliver results quickly for samples with clinically relevant viral loads. In fact, the device accurately detected a set of positive samples in under 5 minutes. For samples with a low viral load, the device required up to 30 minutes to distinguish them from negative tests.
    “Recent models of SARS-CoV-2 suggest that frequent testing with a fast turnaround time is what we need to overcome the current pandemic,” says Ott. “We hope that with increased testing, we can avoid lockdowns and protect the most vulnerable populations.”
    Not only does the new CRISPR-based test offer a promising option for rapid testing, but by using a smartphone and avoiding the need for bulky lab equipment, it has the potential to become portable and eventually be made available for point-of-care or even at-home use. And, it could also be expanded to diagnose other respiratory viruses beyond SARS-CoV-2.
In addition, the high sensitivity of smartphone cameras, together with their connectivity, GPS, and data-processing capabilities, has made them attractive tools for diagnosing disease in low-resource regions.
    “We hope to develop our test into a device that could instantly upload results into cloud-based systems while maintaining patient privacy, which would be important for contact tracing and epidemiologic studies,” Ott says. “This type of smartphone-based diagnostic test could play a crucial role in controlling the current and future pandemics.”
    About the Research Project
The study, entitled “Amplification-free detection of SARS-CoV-2 with CRISPR-Cas13a and mobile phone microscopy,” was published online by Cell on December 4, 2020.
    Other authors of the study include Gavin J. Knott, Michael V. D’Ambrosio, Abdul Bhuiya, Max Armstrong, and Andrew Harris from UC Berkeley; Carley N. Gray, G. Renuka Kumar, Stephanie I. Stephens, Daniela Boehm, Chia-Lin Tsou, Jeffrey Shu, Jeannette M. Osterloh, Anke Meyer-Franke, and Katherine S. Pollard from Gladstone Institutes; Chunyu Zhao, Emily D. Crawford, Andreas S. Puschnick, Maira Phelps, and Amy Kistler from the Chan Zuckerberg Biohub; Neil A. Switz from San Jose State University; and Charles Langelier and Joseph L. DeRisi from UCSF.
The research was supported by the National Institutes of Health (NIAID grant 5R61AI140465-03 and NIDA grant 1R61DA048444-01); the NIH Rapid Acceleration of Diagnostics (RADx) program; the National Heart, Lung, and Blood Institute; the National Institute of Biomedical Imaging and Bioengineering; the Department of Health and Human Services (Grant No. 3U54HL143541-02S1); as well as through philanthropic support from Fast Grants, the James B. Pendleton Charitable Trust, The Roddenberry Foundation, and multiple individual donors. This work was also made possible by a generous gift from an anonymous private donor in support of the ANCeR diagnostics consortium.


    Protein storytelling to address the pandemic

    In the last five decades, we’ve learned a lot about the secret lives of proteins — how they work, what they interact with, the machinery that makes them function — and the pace of discovery is accelerating.
The first three-dimensional protein structures began emerging in the 1970s. Today, the Protein Data Bank, a worldwide repository of information about the 3D structures of large biological molecules, holds information about hundreds of thousands of proteins. Just this week, the company DeepMind shocked the protein structure world with its accurate, AI-driven predictions.
    But the 3D structure is often not enough to truly understand what a protein is up to, explains Ken Dill, director of the Laufer Center for Physical and Quantitative Biology at Stony Brook University and a member of the National Academy of Sciences. “It’s like somebody asking how an automobile works, and a mechanic opening the hood of a car and saying, ‘see, there’s the engine, that’s how it works.'”
    In the intervening decades, computer simulations have built upon and added to the understanding of protein behavior by setting these 3D molecular machines in motion. Analyzing their energy landscapes, interactions, and dynamics has taught us even more about these prime movers of life.
    “We’re really trying to ask the question: how does it work? Not just, how does it look?” Dill said. “That’s the essence of why you want to know protein structures in the first place, and one of the biggest applications of this is for drug discovery.”
    Writing in Science magazine in November 2020, Dill and his Stony Brook colleagues Carlos Simmerling and Emiliano Brini shared their perspectives on the evolution of the field.


    “Computational Molecular Physics is an increasingly powerful tool for telling the stories of protein molecule actions,” they wrote. “Systematic improvements in forcefields, enhanced sampling methods, and accelerators have enabled [computational molecular physics] to reach timescales of important biological actions…. At this rate, in the next quarter century, we’ll be telling stories of protein molecules over the whole lifespan, tens of minutes, of a bacterial cell.”
    Speeding Simulations
Decades after the first dynamic models of proteins, however, computational biophysicists still face major challenges. To be useful, simulations need to be accurate; and to be accurate, a simulation must progress atom by atom and femtosecond (10^-15 seconds) by femtosecond. To match the timescales that matter, simulations must extend over microseconds or milliseconds — that is, billions to trillions of time-steps.
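The arithmetic behind that gap is stark. Assuming a typical molecular-dynamics integration step of 2 femtoseconds (a common choice, though not stated in the article):

```python
FEMTOSECOND = 1e-15           # seconds
TIMESTEP = 2 * FEMTOSECOND    # a common molecular-dynamics step size

def steps_needed(target_seconds: float) -> float:
    """Number of integration steps to reach a target simulated time."""
    return target_seconds / TIMESTEP

microsecond_steps = steps_needed(1e-6)   # ~5e8 steps
millisecond_steps = steps_needed(1e-3)   # ~5e11 steps
```

Reaching even one simulated microsecond thus takes on the order of half a billion force evaluations over every atom in the system.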
    “Computational molecular physics has developed at a fast clip relatively speaking, but not enough to get us into the time and size and motion range we need to see,” he said.
    One of the main methods researchers use to understand proteins in this way is called molecular dynamics. Since 2015, with support from the National Institutes of Health and the National Science Foundation, Dill and his team have been working to speed up molecular dynamics simulations. Their method, called MELD, accelerates the process by providing vague but important information about the system being studied.


Dill likens the method to a treasure hunt. Instead of asking someone to find a treasure that could be anywhere, they provide a map with clues, saying: ‘it’s either near Chicago or Idaho.’ In the case of actual proteins, that might mean telling the simulation that one part of a chain of amino acids is near another part of the chain. This narrowing of the search field can speed up simulations significantly — sometimes more than 1,000 times faster — enabling novel studies and providing new insights.
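One common way to encode this kind of “clue” in a simulation is a flat-bottom restraint: no energy penalty while two parts of the chain stay within the hinted distance, and a growing penalty once they drift apart. A toy version of that idea (an illustration only; the real MELD method handles competing sets of restraints, replica exchange, and much more, and the numbers below are made up):

```python
def flat_bottom_penalty(distance, cutoff, k=1.0):
    """Zero energy inside `cutoff`; harmonic penalty outside it.
    Steers the search toward the clue without dictating exact geometry."""
    excess = max(0.0, distance - cutoff)
    return 0.5 * k * excess ** 2

# Two residues hinted to be "near" each other (distances in nanometers):
# no penalty at 0.6 nm, an increasing penalty once they exceed 0.8 nm.
inside = flat_bottom_penalty(0.6, cutoff=0.8)
outside = flat_bottom_penalty(1.2, cutoff=0.8)
```

Because the penalty is zero throughout the allowed region, the clue narrows the search without biasing the simulation toward any single answer within it.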
    Protein Structure Predictions for COVID-19
    One of the most important uses of biophysical modeling in our daily lives is drug discovery and development. 3D models of viruses or bacteria help identify weak spots in their defenses, and molecular dynamics simulations determine what small molecules may bind to those attackers and gum up their works without having to test every possibility in the lab.
    Dill’s Laufer Center team is involved in a number of efforts to find drugs and treatments for COVID-19, with support from the White House-organized COVID-19 HPC Consortium, an effort among Federal government, industry, and academic leaders to provide access to the world’s most powerful high-performance computing resources in support of COVID-19 research.
    “Everyone dropped other things to work on COVID-19,” Dill recalled.
The first step the team took was to use MELD to determine the 3D shape of the coronavirus’ unknown proteins. Only three of the virus’s 29 proteins have been definitively resolved so far. “Most structures are not known, which is not a good beginning for drug discovery,” he said. “Can we predict structures that are not known? That’s the primary thing that we used Frontera for.”
    The Frontera supercomputer at the Texas Advanced Computing Center (TACC) — the fastest at any university in the world — allowed Dill and his team to make structure predictions for 19 additional proteins. Each of these could serve as an avenue for new drug developments. They have made their structure predictions publicly available and are working with teams to experimentally test their accuracy.
    While it seems like the vaccine race is already close to declaring a winner, the first round of vaccines, drugs, and treatments are only the starting point for a recovery. As with HIV, it is likely that the first drugs developed will not work on all people, or will be surpassed by more effective ones with fewer side-effects in the future.
    Dill and his Laufer Center team are playing the long game, hoping to find targets and mechanisms that are more promising than those already being developed.
    Repurposing Drugs and Exploring New Approaches
    A second project by the Laufer Center group uses Frontera to scan millions of commercially available small molecules for efficacy against COVID-19, in collaboration with Dima Kozakov’s group at Stony Brook University.
    “By focusing on the repurposing of commercially available molecules it’s possible, in principle, to shorten the time it takes to find a new drug,” he said. “Kozakov’s group has the ability to quickly screen thousands of molecules to identify the best hundred ones. We use our physics modeling to filter this pool of candidates even further, narrowing the options experimentalists need to test.”
A third project is studying an interesting class of engineered molecules known as PROTACs (proteolysis-targeting chimeras), which direct the “trash collector proteins” of human cells to pick up specific target proteins that they would not usually remove.
“Our cells have smart ways to identify proteins that need to be destroyed. They get next to one, put a sticker on it, and the proteins that collect trash take it away,” he explained. “Initially, PROTAC molecules were used to target cancer-related proteins. Now there is a push to transfer this concept to target SARS-CoV-2 proteins.”
Collaborating with Stony Brook chemist Peter Tonge, they are working to simulate the interaction of novel PROTACs with the COVID-19 virus. “These are some of our most ambitious simulations, both in terms of the size of the systems we are tackling and in terms of the chemical complexity,” he said. “Frontera is a crucial resource to give us sufficient turnaround times. For one simulation we need 30 GPUs and four to five days of continuous calculations.”
    The team is developing and testing their protocols on a non-COVID test system to benchmark their predictions. Once they settle on a protocol, they will apply this design procedure to COVID systems.
Every protein has a story to tell and Dill, Brini and their collaborators are building and applying the tools that help elucidate these stories. “There are some problems in protein science where we believe the real challenge is getting the physics and math right,” Dill concluded. “We’re testing that hypothesis on COVID-19.”


    Unlocking the secrets of chemical bonding with machine learning

    A new machine learning approach offers important insights into catalysis, a fundamental process that makes it possible to reduce the emission of toxic exhaust gases or produce essential materials like fabric.
    In a report published in Nature Communications, Hongliang Xin, associate professor of chemical engineering at Virginia Tech, and his team of researchers developed a Bayesian learning model of chemisorption, or Bayeschem for short, aiming to use artificial intelligence to unlock the nature of chemical bonding at catalyst surfaces.
    “It all comes down to how catalysts bind with molecules,” said Xin. “The interaction has to be strong enough to break some chemical bonds at reasonably low temperatures, but not too strong that catalysts would be poisoned by reaction intermediates. This rule is known as the Sabatier principle in catalysis.”
    Understanding how catalysts interact with different intermediates and determining how to control their bond strengths so that they are within that “goldilocks zone” is the key to designing efficient catalytic processes, Xin said. The research provides a tool for that purpose.
    Bayeschem works using Bayesian learning, a specific machine learning algorithm for inferring models from data. “Suppose you have a domain model based on well-established physical laws, and you want to use it to make predictions or learn something new about the world,” explained Siwen Wang, a former chemical engineering doctoral student. “The Bayesian approach is to learn the distribution of model parameters given our prior knowledge and the observed, often scarce, data, while providing uncertainty quantification of model predictions.”
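The flavor of inference Wang describes can be illustrated with the simplest conjugate case: a Gaussian prior over a single model parameter, updated by a few noisy observations, yielding a posterior whose spread quantifies the remaining uncertainty. This is a generic textbook sketch, not the actual Bayeschem model, and all numbers are invented:

```python
def gaussian_posterior(prior_mean, prior_var, observations, noise_var):
    """Conjugate Bayesian update of a Gaussian prior on a scalar
    parameter, given noisy direct observations of that parameter."""
    n = len(observations)
    posterior_var = 1.0 / (1.0 / prior_var + n / noise_var)
    posterior_mean = posterior_var * (
        prior_mean / prior_var + sum(observations) / noise_var
    )
    return posterior_mean, posterior_var

# Even scarce data pulls the estimate away from the prior, and the
# shrinking posterior variance is the uncertainty quantification.
mean, var = gaussian_posterior(0.0, 1.0, [0.9, 1.1, 1.0], 0.25)
```

The appeal for a physics-based model like Bayeschem is that the posterior lives over physically meaningful parameters, so the result stays interpretable while still reporting how confident each prediction is.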
The d-band theory of chemisorption used in Bayeschem describes chemical bonding at solid surfaces involving d-electrons, whose orbitals are usually shaped like a four-leaf clover. The model explains how the d-orbitals of catalyst atoms overlap with and are attracted to adsorbate valence orbitals that have a spherical or dumbbell-like shape. It has been considered the standard model in heterogeneous catalysis since its development by Hammer and Nørskov in the 1990s, and though it has been successful in explaining bonding trends in many systems, Xin said the model fails at times due to the intrinsic complexity of electronic interactions.
    According to Xin, Bayeschem brings the d-band theory to a new level for quantifying those interaction strengths and possibly tailoring some knobs, such as structure and composition, to design better materials. The approach advances the d-band theory of chemisorption by extending its prediction and interpretation capabilities of adsorption properties, both of which are crucial in catalyst discovery. However, compared with the black-box machine learning models that are trained by large amounts of data, the prediction accuracy of Bayeschem is still amenable to improvement, said Hemanth Pillai, a chemical engineering doctoral student in Xin’s group who contributed equally to the study.
    “The opportunity to come up with highly accurate and interpretable models that build on deep learning algorithms and the theory of chemisorption is highly rewarding for achieving the goals of artificial intelligence in catalysis,” said Xin.

    Story Source:
Materials provided by Virginia Tech. Original written by Tina Russell. Note: Content may be edited for style and length.


    Using a video game to understand the origin of emotions

    Emotions are complex phenomena that influence our minds, bodies and behaviour. A number of studies have sought to connect given emotions, such as fear or pleasure, to specific areas of the brain, but without success. Some theoretical models suggest that emotions emerge through the coordination of multiple mental processes triggered by an event. These models involve the brain orchestrating adapted emotional responses via the synchronisation of motivational, expressive and visceral mechanisms. To investigate this hypothesis, a research team from the University of Geneva (UNIGE) studied brain activity using functional MRI. They analysed the feelings, expressions and physiological responses of volunteers while they were playing a video game that had been specially developed to arouse different emotions depending on the progress of the game. The results, published in the journal PLOS Biology, show that different emotional components recruit several neural networks in parallel distributed throughout the brain, and that their transient synchronisation generates an emotional state. The somatosensory and motor pathways are two of the areas involved in this synchronisation, thereby validating the idea that emotion is grounded in action-oriented functions in order to allow an adapted response to events.
    Most studies use passive stimulation to understand the emergence of emotions: they typically present volunteers with photos, videos or images evoking fear, anger, joy or sadness while recording the cerebral response using electroencephalography or imaging. The goal is to pinpoint the specific neural networks for each emotion. “The problem is, these regions overlap for different emotions, so they’re not specific,” begins Joana Leitão, a post-doctoral fellow in the Department of Fundamental Neurosciences (NEUFO) in UNIGE’s Faculty of Medicine and at the Swiss Centre for Affective Sciences (CISA). “What’s more, it’s likely that, although these images represent emotions well, they don’t evoke them.”
    A question of perspective
    Several neuroscientific theories have attempted to model the emergence of an emotion, although none has so far been proven experimentally. The UNIGE research team subscribe to the postulate that emotions are “subjective”: two individuals faced with the same situation may experience a different emotion. “A given event is not assessed in the same way by each person because the perspectives are different,” continues Dr Leitão.
In a theoretical model known as the component process model (CPM) — devised by Professor Klaus Scherer, the retired founding director of CISA — an event will generate multiple responses in the organism. These relate to components of cognitive assessment (novelty or concordance with a goal or norms), motivation, physiological processes (sweating or heart rate), and expression (smiling or shouting). In a situation that sets off an emotional response, these different components influence each other dynamically. It is their transitory synchronisation that might correspond to an emotional state.
    Emotional about Pacman
    The Geneva neuroscientists devised a video game to evaluate the applicability of this model. “The aim is to evoke emotions that correspond to different forms of evaluation,” explains Dr Leitão. “Rather than viewing simple images, participants play a video game that puts them in situations they’ll have to evaluate so they can advance and win rewards.” The game is an arcade game that is similar to the famous Pacman. Players have to grab coins, touch the “nice monsters,” ignore the “neutral monsters” and avoid the “bad guys” to win points and pass to the next level.
    The scenario involves situations that trigger the four components of the CPM model differently. At the same time, the researchers were able to measure brain activity via imaging; facial expression by analysing the zygomatic muscles; feelings via questions; and physiology by skin and cardiorespiratory measurements. “All of these components involve different circuits distributed throughout the brain,” says the Geneva-based researcher. “By cross-referencing the imagery data with computational modelling, we were able to determine how these components interact over time and at what point they synchronise to generate an emotion.”
    A made-to-measure emotional response
    The results also indicate that a region deep in the brain called the basal ganglia is involved in this synchronisation. This structure is known as a convergence point between multiple cortical regions, each of which is equipped with specialised affective, cognitive or sensorimotor processes. The other regions involve the sensorimotor network, the posterior insula and the prefrontal cortex. “The involvement of the somatosensory and motor zones accords with the postulate of theories that consider emotion as a preparatory mechanism for action that enables the body to promote an adaptive response to events,” concludes Patrik Vuilleumier, full professor at NEUFO and senior author of the study.

    Story Source:
    Materials provided by Université de Genève. Note: Content may be edited for style and length.

  • in

    Tech makes it possible to digitally communicate through human touch

    Instead of inserting a card or scanning a smartphone to make a payment, what if you could simply touch the machine with your finger?
    A prototype developed by Purdue University engineers would essentially let your body act as the link between your card or smartphone and the reader or scanner, making it possible for you to transmit information just by touching a surface.
    The prototype doesn’t transfer money yet, but it’s the first technology that can send any information through the direct touch of a fingertip. While wearing the prototype as a watch, a user’s body can be used to send information such as a photo or password when touching a sensor on a laptop, the researchers show in a new study.
    “We’re used to unlocking devices using our fingerprints, but this technology wouldn’t rely on biometrics — it would rely on digital signals. Imagine logging into an app on someone else’s phone just by touch,” said Shreyas Sen, a Purdue associate professor of electrical and computer engineering.
    “Whatever you touch would become more powerful because digital information is going through it.”
    The study is published in ACM Transactions on Computer-Human Interaction, a journal of the Association for Computing Machinery. Shovan Maity, a Purdue alum, led the study as a Ph.D. student in Sen’s lab. The researchers also will present their findings at the Association for Computing Machinery’s Computer Human Interaction (ACM CHI) conference in May.

    The technology works by establishing an “internet” within the body that smartphones, smartwatches, pacemakers, insulin pumps and other wearable or implantable devices can use to send information. These devices typically communicate using Bluetooth signals that tend to radiate out from the body. A hacker could intercept those signals from 30 feet away, Sen said.
    Sen’s technology instead keeps signals confined within the body by coupling them in a so-called “Electro-Quasistatic range” that is much lower on the electromagnetic spectrum than typical Bluetooth communication. This mechanism is what enables information transfer by only touching a surface.
    Even if your finger hovered just one centimeter above a surface, information wouldn’t transfer through this technology without a direct touch. This would prevent a hacker from stealing private information such as credit card credentials by intercepting the signals.
    The researchers demonstrated this capability in the lab by having a person interact with two adjacent surfaces. Each surface was equipped with an electrode to touch, a receiver to get data from the finger and a light to indicate that data had transferred. If the finger directly touched an electrode, only the light of that surface turned on. The fact that the light of the other surface stayed off indicated that the data didn’t leak out.
    Similarly, if a finger hovered as close as possible over a laptop sensor, a photo wouldn’t transfer. But a direct touch could transfer a photo.
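    The touch-gating behaviour described above can be caricatured with a toy threshold model. The coupling law, the numeric constants, and the decode threshold below are illustrative assumptions, not Purdue’s measured characteristics: the point is only that a capacitive-style coupling which collapses as soon as an air gap opens makes reception possible at direct touch and impossible while hovering.

    ```python
    # Toy model of touch-gated electro-quasistatic reception.
    # Coupling law and threshold values are invented for illustration.

    CONTACT_COUPLING = 1.0      # normalized coupling at direct touch
    DECODE_THRESHOLD = 0.5      # minimum coupling needed to decode data

    def coupling(gap_mm):
        """Capacitive-style coupling that collapses once a gap opens."""
        if gap_mm <= 0.0:
            return CONTACT_COUPLING
        # Coupling falls off steeply with any air gap.
        return CONTACT_COUPLING / (1.0 + 50.0 * gap_mm) ** 2

    def receive(bits, gap_mm):
        """Return the decoded bits, or None if the link is too weak."""
        if coupling(gap_mm) < DECODE_THRESHOLD:
            return None          # hovering finger: nothing transfers
        return list(bits)        # direct touch: data goes through

    payload = [1, 0, 1, 1, 0]
    print(receive(payload, gap_mm=0.0))   # direct touch -> [1, 0, 1, 1, 0]
    print(receive(payload, gap_mm=10.0))  # 1 cm hover -> None
    ```

    In this sketch the 1 cm hover from the experiment lands far below the decode threshold, matching the observation that a photo transfers on contact but not when the finger hovers.
    
    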

    Credit card machines and apps such as Apple Pay use a more secure alternative to Bluetooth signals — called near-field communication — to receive a payment from tapping a card or scanning a phone. Sen’s technology would add the convenience of making a secure payment in a single gesture.
    “You wouldn’t have to bring a device out of your pocket. You could leave it in your pocket or on your body and just touch,” Sen said.
    The technology could also replace key fobs or cards that currently use Bluetooth communication to grant access into a building. Instead, a person might just touch a door handle to enter.
    As with today’s machines that scan coupons, gift cards and other information from a phone, using this technology in real life would require surfaces everywhere to have the right hardware for recognizing your finger.
    The software on the device that a person is wearing would also need to be configured to send signals through the body to the fingertip — and have a way to turn off so that information, such as a payment, wouldn’t be transferred to every surface equipped to receive it.
    The researchers believe that the applications of this technology would go beyond how we interact with devices today.
    “Anytime you are enabling a new hardware channel, it gives you more possibilities. Think of big touch screens that we have today — the only information that the computer receives is the location of your touch. But the ability to transfer information through your touch would change the applications of that big touch screen,” Sen said.
    A video about the research is available on YouTube at https://youtu.be/-2oscW5i5DQ.

    Story Source:
    Materials provided by Purdue University. Original written by Kayla Wiles.

  • in

    Mapping quantum structures with light to unlock their capabilities

    A new tool that uses light to map out the electronic structures of crystals could reveal the capabilities of emerging quantum materials and pave the way for advanced energy technologies and quantum computers, according to researchers at the University of Michigan, University of Regensburg and University of Marburg.
    A paper on the work is published in Science.
    Applications include LED lights, solar cells and artificial photosynthesis.
    “Quantum materials could have an impact way beyond quantum computing,” said Mackillo Kira, professor of electrical engineering and computer science at the University of Michigan, who led the theory side of the new study. “If you optimize quantum properties right, you can get 100% efficiency for light absorption.”
    Silicon-based solar cells are already becoming the cheapest form of electricity, although their sunlight-to-electricity conversion efficiency is rather low, about 30%. Emerging “2D” semiconductors, which consist of a single layer of crystal, could do much better — potentially using up to 100% of the sunlight. They could also elevate quantum computing to room temperature from the near-absolute-zero machines demonstrated so far.
    “New quantum materials are now being discovered at a faster pace than ever,” said Rupert Huber, professor of physics at the University of Regensburg in Germany, who led the experimental work. “By simply stacking such layers one on top of the other under variable twist angles, and with a wide selection of materials, scientists can now create artificial solids with truly unprecedented properties.”
    The ability to map these properties down to the atoms could help streamline the process of designing materials with the right quantum structures. But these ultrathin materials are much smaller and messier than earlier crystals, and the old analysis methods don’t work. Now, 2D materials can be measured with the new laser-based method at room temperature and pressure.

    The measurable operations include processes that are key to solar cells, lasers and optically driven quantum computing. Essentially, electrons pop between a “ground state,” in which they cannot travel, and states in the semiconductor’s “conduction band,” in which they are free to move through space. They do this by absorbing and emitting light.
    The quantum mapping method uses a 100 femtosecond (100 quadrillionths of a second) pulse of red laser light to pop electrons out of the ground state and into the conduction band. Next the electrons are hit with a second pulse of infrared light. This pushes them so that they oscillate up and down an energy “valley” in the conduction band, a little like skateboarders in a halfpipe.
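    As a back-of-the-envelope check on why a red pulse can promote electrons across a 2D semiconductor’s gap, the photon energy of red light works out to roughly 1.8 eV, and a 100 fs pulse still contains dozens of optical cycles. The 700 nm wavelength below is an assumed nominal value for “red,” not a parameter reported in the study.

    ```python
    # Photon energy and cycle count for an assumed 700 nm, 100 fs red pulse.
    H = 6.62607015e-34        # Planck constant, J*s
    C = 2.99792458e8          # speed of light, m/s
    EV = 1.602176634e-19      # joules per electronvolt

    wavelength_m = 700e-9     # assumed nominal red wavelength
    energy_ev = H * C / (wavelength_m * EV)
    print(round(energy_ev, 2))  # -> 1.77 (eV per photon)

    pulse_s = 100e-15         # 100 femtosecond pulse duration
    cycles = pulse_s / (wavelength_m / C)
    print(round(cycles))        # -> 43 optical cycles per pulse
    ```

    An energy near 1.8 eV is in the right range to bridge the optical gap of a monolayer semiconductor such as tungsten diselenide, which is why a red pump pulse can lift electrons into the conduction band before the infrared pulse drives them through the valley.
    
    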
    The team uses the dual wave/particle nature of electrons to create a standing wave pattern that looks like a comb. They discovered that when the peak of this electron comb overlaps with the material’s band structure — its quantum structure — electrons emit light intensely. That powerful light emission, along with the narrow width of the comb lines, helped create a picture so sharp that researchers call it super-resolution.
    By combining that precise location information with the frequency of the light, the team was able to map out the band structure of the 2D semiconductor tungsten diselenide. Not only that, but they could also get a read on each electron’s orbital angular momentum through the way the front of the light wave twisted in space. Manipulating an electron’s orbital angular momentum, also known as a pseudospin, is a promising avenue for storing and processing quantum information.
    In tungsten diselenide, the orbital angular momentum identifies which of two different “valleys” an electron occupies. The messages that the electrons send out can show researchers not only which valley the electron was in but also what the landscape of that valley looks like and how far apart the valleys are, which are the key elements needed to design new semiconductor-based quantum devices.
    For instance, when the team used the laser to push electrons up the side of one valley until they fell into the other, the electrons emitted light at that drop point, too. That light gives clues about the depths of the valleys and the height of the ridge between them. With this kind of information, researchers can figure out how the material would fare for a variety of purposes.
    The paper is titled “Super-resolution lightwave tomography of electronic bands in quantum materials.” This research was funded by the Army Research Office, German Research Foundation and U-M College of Engineering Blue Sky Research Program.