More stories

  • THz emission spectroscopy reveals optical response of GaInN/GaN multiple quantum wells

    A team of researchers at the Institute of Laser Engineering, Osaka University, in collaboration with Bielefeld University and the Technical University of Braunschweig in Germany, came closer to unraveling the complicated optical response of wide-bandgap semiconductor multiple quantum wells and how atomic-scale lattice vibrations can generate free-space terahertz emission. Their work provides a significant push towards the application of laser terahertz emission microscopes to nano-seismology of wide-bandgap quantum devices.
    Terahertz (THz) waves can be generated by ultrafast processes occurring in a material. By looking at THz emission, researchers have been able to study ultrafast processes at the quantum level in systems ranging from simple bulk semiconductors to advanced quantum structures such as multiple quantum wells.
    The THz research group led by Prof. Masayoshi Tonouchi at the Institute of Laser Engineering, Osaka University, including his PhD student Abdul Mannan, together with international collaborators Prof. Dmitry Turchinovich at Bielefeld University and Prof. Andreas Hangleiter at the Technical University of Braunschweig, has measured a multifunctional response in buried GaInN/GaN multiple quantum wells (MQWs): the dynamic screening of the built-in field inside the GaInN quantum wells, capacitive charge oscillation between GaN and the GaInN quantum wells, and acoustic wave beams launched by the stress release between GaN and GaInN. All of these processes can be monitored by observing THz emission into free space. In addition, the team showed that the propagating acoustic waves provide a new way to measure the thickness of buried structures in devices with a resolution of 10 nm on the wafer scale, making nano-seismology a unique application of laser terahertz emission microscopy (LTEM) for wide-bandgap quantum devices.
    Probing buried structures in opto-acoustic devices at ultra-high resolution is still an unexplored area of research. In the present work, acoustically driven electromagnetic THz emission into free space is utilized for probing GaInN/GaN MQWs sandwiched in GaN material. Laser-induced polarization dynamics of charge carriers result in a partial release of coherent acoustic phonons (CAPs) in the GaInN/GaN MQW. The CAP pulse propagating within the material creates an associated electric-polarization wave packet. Once the propagating CAP pulse encounters a discontinuity in acoustic impedance or piezoelectric constant within the structure, the associated electric polarization changes transiently, and this transient serves as the source of the acoustically driven electromagnetic THz emission into free space. The temporal separation between the ultrafast polarization dynamics in the GaInN/GaN MQW and the acoustically driven THz emission gives the thickness of the medium the CAP pulse has traversed (nano-seismology).
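The time-of-flight arithmetic behind nano-seismology can be sketched in a few lines. The sound velocity and delay values below are illustrative assumptions for a GaN layer, not figures from the study:

```python
# Nano-seismology as time-of-flight: the delay between the initial THz
# burst (ultrafast carrier dynamics in the MQW) and the acoustically
# driven THz burst (the CAP pulse reaching an interface) encodes the
# thickness of the layer the pulse has crossed.

V_SOUND_GAN = 8.0e3  # m/s, approximate longitudinal sound velocity in GaN (assumed)

def layer_thickness(delay_s, v_sound=V_SOUND_GAN):
    """Thickness of the traversed layer, d = v * t."""
    return v_sound * delay_s

def required_timing(thickness_resolution_m, v_sound=V_SOUND_GAN):
    """Timing precision needed to resolve a given thickness."""
    return thickness_resolution_m / v_sound

print(f"12.5 ps delay -> {layer_thickness(12.5e-12) * 1e9:.0f} nm layer")
print(f"10 nm resolution needs {required_timing(10e-9) * 1e12:.2f} ps timing")
```

The second function makes the 10 nm claim concrete: at this assumed sound velocity, 10 nm of material corresponds to about 1.25 ps of acoustic travel time, comfortably within the reach of femtosecond-laser-based THz detection.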
    The specialist team, combining expertise in THz emission spectroscopy, opto-THz science, and wide-bandgap quantum-well semiconductor materials science, has made a significant step towards 3D dynamic characterization of various materials and devices, including their buried active layers. “A 3D active tool to characterize ultrafast carrier dynamics, strain physics, phonon dynamics, and ultrafast dielectric responses locally in a non-contact and non-destructive manner has become an essential area of research for new materials and devices. We hope the present work contributes to such an evolution,” says Prof. Masayoshi Tonouchi.
    Story Source:
    Materials provided by Osaka University. Note: Content may be edited for style and length.

  • Reaching your life goals as a single-celled organism

    How is it possible to move in the desired direction without a brain or nervous system? Single-celled organisms apparently manage this feat without any problems: for example, they can swim towards food with the help of small flagellar tails.
    How these extremely simply built creatures manage to do this was not entirely clear until now. However, a research team at TU Wien (Vienna) has now been able to simulate this process on the computer: They calculated the physical interaction between a very simple model organism and its environment. This environment is a liquid with a non-uniform chemical composition, containing unevenly distributed food sources.
    The simulated organism was equipped with the ability to process information about food in its environment in a very simple way. With the help of a machine learning algorithm, the information processing of the virtual being was then modified and optimised over many evolutionary steps. The result was a computer organism that searches for food in much the same way as its biological counterparts.
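A minimal sketch of such a simulate-mutate-select loop, in Python. The two-sensor "organism," its Gaussian food environment, and the single-parameter mutation scheme below are invented for illustration and are far simpler than the TU Wien model:

```python
import math
import random

random.seed(0)

def concentration(x, y):
    """A single Gaussian food source at the origin (toy environment)."""
    return math.exp(-(x * x + y * y) / 4.0)

def run_agent(gain, steps=60):
    """Two-sensor swimmer: it moves at constant speed and turns in
    proportion (the evolvable 'gain') to the concentration difference
    across its body -- the only information processing it performs."""
    x, y, heading = 3.0, 0.0, math.pi / 2  # start away from the food
    for _ in range(steps):
        left = concentration(x + 0.2 * math.cos(heading + 0.5),
                             y + 0.2 * math.sin(heading + 0.5))
        right = concentration(x + 0.2 * math.cos(heading - 0.5),
                              y + 0.2 * math.sin(heading - 0.5))
        heading += gain * (left - right)
        x += 0.1 * math.cos(heading)
        y += 0.1 * math.sin(heading)
    return concentration(x, y)  # fitness: food level finally reached

# Evolution: mutate the gain, keep the mutant whenever it does better.
gain = 0.0
for _ in range(200):
    mutant = gain + random.gauss(0.0, 0.5)
    if run_agent(mutant) > run_agent(gain):
        gain = mutant

print(f"evolved gain {gain:.2f}, food reached {run_agent(gain):.3f}")
```

Starting from a gain of zero (no chemotaxis at all), selection quickly finds a steering gain that bends the swimmer's path up the concentration gradient, which is the essence of the behaviour the TU Wien team evolved.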
    Chemotaxis: Always going where the chemistry is right
    “At first glance, it is surprising that such a simple model can solve such a difficult task,” says Andreas Zöttl, who led the research project, which was carried out in the “Theory of Soft Matter” group (led by Gerhard Kahl) at the Institute of Theoretical Physics at TU Wien. “Bacteria can use receptors to determine in which direction, for example, the oxygen or nutrient concentration is increasing, and this information then triggers a movement in the desired direction. This is called chemotaxis.”
    The behaviour of other, multicellular organisms can be explained by the interconnection of nerve cells. But a single-celled organism has no nerve cells — in this case, only extremely simple processing steps are possible within the cell. Until now, it was not clear how such a low degree of complexity could be sufficient to connect simple sensory impressions — for example from chemical sensors — with targeted motor activity.

  • Universal equation for explosive phenomena

    Climate change, a pandemic or the coordinated activity of neurons in the brain: In all of these examples, a transition takes place at a certain point from the base state to a new state. Researchers at the Technical University of Munich (TUM) have discovered a universal mathematical structure at these so-called tipping points. It creates the basis for a better understanding of the behavior of networked systems.
    It is an essential question for scientists in every field: How can we predict and influence changes in a networked system? “In biology, one example is the modelling of coordinated neuron activity,” says Christian Kühn, professor of multiscale and stochastic dynamics at TUM. Models of this kind are also used in other disciplines, for example when studying the spread of diseases or climate change.
    All critical changes in networked systems have one thing in common: a tipping point where the system makes a transition from a base state to a new state. This may be a smooth shift, where the system can easily return to the base state. Or it can be a sharp, difficult-to-reverse transition where the system state can change abruptly or “explosively.” Transitions of this kind also occur in climate change, for example with the melting of the polar ice caps. In many cases, the transitions result from the variation of a single parameter, such as the rise in concentrations of greenhouse gases behind climate change.
    Similar structures in many models
    In some cases — such as climate change — a sharp tipping point would have extremely negative effects, while in others it would be desirable. Consequently, researchers have used mathematical models to investigate how the type of transition is influenced by the introduction of new parameters or conditions. “For example, you could vary another parameter, perhaps related to how people change their behavior in a pandemic. Or you might adjust an input in a neural system,” says Kühn. “In these examples and many other cases, we have seen that we can go from a continuous to a discontinuous transition or vice versa.”
    Kühn and Dr. Christian Bick of Vrije Universiteit Amsterdam studied existing models from various disciplines that were created to understand certain systems. “We found it remarkable that so many mathematical structures related to the tipping point looked very similar in those models,” says Bick. “By reducing the problem to the most basic possible equation, we were able to identify a universal mechanism that decides on the type of tipping point and is valid for the greatest possible number of models.”
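The article does not give the team's reduced equation, but the textbook normal forms for a tipping point illustrate the continuous-versus-discontinuous distinction it decides. In the sketch below, a single switch flips one toy one-variable model between a supercritical (smooth) and a subcritical ("explosive") transition as the control parameter r crosses zero; this is a standard bifurcation-theory illustration, not the paper's mechanism:

```python
# One-variable caricature of a tipping point: dx/dt = r*x + sign*x^3 - x^5.
# sign = -1: supercritical (smooth) transition; the new state grows
#            continuously out of the base state x = 0 as r crosses 0.
# sign = +1: subcritical ('explosive') transition; x jumps to a finite
#            amplitude as soon as r crosses 0.

def steady_state(r, sign, x0=1e-3, dt=0.01, steps=200_000):
    """Relax the system to a stable fixed point by Euler integration."""
    x = x0
    for _ in range(steps):
        x += dt * (r * x + sign * x ** 3 - x ** 5)
    return x

eps = 0.01  # how far past the tipping point we look
smooth = steady_state(+eps, sign=-1) - steady_state(-eps, sign=-1)
explosive = steady_state(+eps, sign=+1) - steady_state(-eps, sign=+1)

print(f"supercritical jump across r = 0: {smooth:.3f}")    # small, ~sqrt(eps)
print(f"subcritical jump across r = 0:  {explosive:.3f}")  # order one
```

Just past the tipping point, the supercritical system has barely left the base state, while the subcritical one has jumped to a finite amplitude; the subcritical case also shows hysteresis, which is what makes such transitions hard to reverse.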
    Universal mathematical tool
    The scientists have thus described a new core mechanism that makes it possible to calculate whether a networked system will have a continuous or discontinuous transition. “We provide a mathematical tool that can be applied universally — in other words, in theoretical physics, the climate sciences and in neurobiology and other disciplines — and works independently of the specific case at hand,” says Kühn.
    Story Source:
    Materials provided by Technical University of Munich (TUM).

  • Intersection of 2D materials results in entirely new materials

    In 1884, Edwin Abbott wrote the novel Flatland: A Romance of Many Dimensions as a satire of Victorian hierarchy. He imagined a world that existed only in two dimensions, where the beings are 2D geometric figures. The physics of such a world is somewhat akin to that of modern 2D materials, such as graphene and transition metal dichalcogenides, which include tungsten disulfide (WS2), tungsten diselenide (WSe2), molybdenum disulfide (MoS2) and molybdenum diselenide (MoSe2).
    Modern 2D materials consist of single-atom layers, where electrons can move in two dimensions but their motion in the third dimension is restricted. Due to this ‘squeeze’, 2D materials have enhanced optical and electronic properties that show great promise as next-generation, ultrathin devices in the fields of energy, communications, imaging and quantum computing, among others.
    Typically, for all these applications, the 2D materials are envisioned in flat-lying arrangements. Unfortunately, however, the strength of these materials is also their greatest weakness — they are extremely thin. This means that when they are illuminated, light can interact with them only over a tiny thickness, which limits their usefulness. To overcome this shortcoming, researchers are starting to look for new ways to fold the 2D materials into complex 3D shapes.
    In our 3D universe, 2D materials can be arranged on top of each other. To extend the Flatland metaphor, such an arrangement would quite literally represent parallel worlds inhabited by people who are destined to never meet.
    Now, scientists from the Department of Physics at the University of Bath in the UK have found a way to arrange 2D sheets of WS2 (previously created in their lab) into a 3D configuration, resulting in an energy landscape that is strongly modified when compared to that of the flat-lying WS2 sheets. This particular 3D arrangement is known as a ‘nanomesh’: a webbed network of densely packed, randomly distributed stacks, containing twisted and/or fused WS2 sheets.
    Modifications of this kind in Flatland would allow people to step into each other’s worlds. “We didn’t set out to distress the inhabitants of Flatland,” said Professor Ventsislav Valev, who led the research. “But because of the many defects that we nanoengineered in the 2D materials, these hypothetical inhabitants would find their world quite strange indeed.
    “First, our WS2 sheets have finite dimensions with irregular edges, so their world would have a strangely shaped end. Also, some of the sulphur atoms have been replaced by oxygen, which would feel just wrong to any inhabitant. Most importantly, our sheets intersect and fuse together, and even twist on top of each other, which modifies the energy landscape of the materials. For the Flatlanders, such an effect would look like the laws of the universe had suddenly changed across their entire landscape.”
    Dr Adelina Ilie, who developed the new material together with her former PhD student and post-doc Zichen Liu, said: “The modified energy landscape is a key point for our study. It is proof that assembling 2D materials into a 3D arrangement does not just result in ‘thicker’ 2D materials — it produces entirely new materials. Our nanomesh is technologically simple to produce, and it offers tunable material properties to meet the demands of future applications.”
    Professor Valev added: “The nanomesh has very strong nonlinear optical properties — it efficiently converts one laser colour into another over a broad palette of colours. Our next goal is to use it on Si waveguides for developing quantum optical communications.”
    PhD student Alexander Murphy, also involved in the research, said: “In order to reveal the modified energy landscape, we devised new characterisation methods and I look forward to applying these to other materials. Who knows what else we could discover?”
    Story Source:
    Materials provided by University of Bath.

  • Smartphone breath alcohol testing devices vary widely in accuracy

    Alcohol-impaired driving kills 29 people a day and costs $121 billion a year in the U.S. After years of progress in reducing alcohol-impaired driving fatalities, efforts began to stall in 2009, and fatalities started increasing again in 2015. With several studies demonstrating that drinkers cannot accurately estimate their own blood alcohol concentration (BAC), handheld alcohol breath testing devices, also known as breathalyzers, allow people to measure their own breath alcohol concentration (BrAC) to determine if they are below the legal limit of 0.08% before attempting to drive.
    The latest generation of personal alcohol breath testing devices pair with smartphones. While some of these devices were found to be relatively accurate, others may mislead users into thinking that they are fit to drive, according to a new study from the Perelman School of Medicine at the University of Pennsylvania.
    The findings, published today in Alcoholism: Clinical & Experimental Research, compare the accuracy of six such devices with that of two validated alcohol-consumption tests — BAC taken from venipuncture, and a police-grade handheld breath testing device.
    “All alcohol-impaired driving crashes are preventable tragedies,” says lead investigator M. Kit Delgado, MD, MS, an assistant professor of Emergency Medicine and Epidemiology at Penn. “It is common knowledge that you should not drive if intoxicated, but people often don’t have or plan alternative travel arrangements and have difficulty judging their fitness to drive after drinking. Some may use smartphone breathalyzers to see if they are over the legal driving limit. If these devices lead people to incorrectly believe their blood alcohol content is low enough to drive safely, they endanger not only themselves, but everyone else on the road or in the car.”
    To assess these devices, researchers engaged 20 moderate drinkers between the ages of 21 and 39. The participants were given three doses of vodka over 70 minutes with the goal of reaching a peak BAC of around 0.10%, just over the legal driving limit. After each dose, participants’ BrAC was measured using smartphone-paired devices and a police-grade handheld device. After the third dose, their blood was drawn and tested for BAC, the most accurate way of measuring alcohol consumption. Researchers also explored the devices’ ability to detect breath alcohol concentration above common legal driving limits (0.05% and 0.08%). They used statistical analysis to explore differences between the measurements.
    All seven devices underestimated BAC by more than 0.01%, though some were consistently more accurate than others. Two devices failed to detect BrAC levels of 0.08%, as measured by a police-grade device, more than half the time. Since the completion of the study, one of the devices has been discontinued and is no longer sold, and other models have been replaced by newer technologies. However, two of the other devices had accuracy similar to that of a police-grade device. These devices have been used to remotely collect accurate measurements of alcohol consumption for research. They could also be used to scale up contingency management addiction treatment programs, which have been shown to help promote abstinence among patients with alcohol use disorders. These programs, which have proven highly effective, have traditionally provided prizes for negative in-person breathalyzer measurements. Smartphone breathalyzers allow these programs to be administered remotely: breath alcohol readings can be verified with automatically captured pictures of the person providing the reading, and prize redemption can be automated.
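Two of the quantities at stake here, mean bias (device reading minus reference BAC) and sensitivity at the 0.08% limit, take only a few lines to compute. The paired readings below are invented for illustration and are not data from the study:

```python
# Toy version of the study's accuracy questions: average bias
# (device minus reference) and sensitivity at the 0.08% legal limit.
# These paired readings are made up for illustration, not study data.
reference = [0.030, 0.050, 0.070, 0.080, 0.090, 0.100, 0.110, 0.120]  # blood BAC, %
device    = [0.020, 0.045, 0.050, 0.060, 0.075, 0.085, 0.100, 0.105]  # device BrAC, %

def mean_bias(dev, ref):
    """Average (device - reference); negative means underestimation."""
    return sum(d - r for d, r in zip(dev, ref)) / len(ref)

def sensitivity_at(limit, dev, ref):
    """Of readings truly at/over the limit, what share does the device flag?"""
    over = [(d, r) for d, r in zip(dev, ref) if r >= limit]
    return sum(d >= limit for d, r in over) / len(over)

print(f"mean bias: {mean_bias(device, reference):+.4f}%")
print(f"sensitivity at 0.08%: {sensitivity_at(0.08, device, reference):.2f}")
```

With these made-up numbers the device underestimates on average and misses two of the five readings that the reference puts at or over 0.08%, exactly the failure mode the study warns could mislead a drinker into driving.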
    “While it’s always best to plan not to drive after drinking, if the public or addiction treatment providers are going to use these devices, some are more accurate than others. Given how beneficial these breathalyzer devices could be to public health, our findings suggest that oversight or regulation would be valuable,” Delgado concludes. “Currently, the Food and Drug Administration doesn’t require approval for these devices — which would involve clearance based on review of data accuracy — but it should reconsider this position in light of our findings.”
    Story Source:
    Materials provided by University of Pennsylvania School of Medicine.

  • Artificial intelligence makes great microscopes better than ever

    To observe the swift neuronal signals in a fish brain, scientists have started to use a technique called light-field microscopy, which makes it possible to image such fast biological processes in 3D. But the images are often lacking in quality, and it takes hours or days for massive amounts of data to be converted into 3D volumes and movies.
    Now, EMBL scientists have combined artificial intelligence (AI) algorithms with two cutting-edge microscopy techniques — an advance that shortens the time for image processing from days to mere seconds, while ensuring that the resulting images are crisp and accurate. The findings are published in Nature Methods.
    “Ultimately, we were able to take ‘the best of both worlds’ in this approach,” says Nils Wagner, one of the paper’s two lead authors and now a PhD student at the Technical University of Munich. “AI enabled us to combine different microscopy techniques, so that we could image as fast as light-field microscopy allows and get close to the image resolution of light-sheet microscopy.”
    Although light-sheet microscopy and light-field microscopy sound similar, these techniques have different advantages and challenges. Light-field microscopy captures large 3D images that allow researchers to track and measure remarkably fine movements, such as a fish larva’s beating heart, at very high speeds. But this technique produces massive amounts of data, which can take days to process, and the final images usually lack resolution.
    Light-sheet microscopy homes in on a single 2D plane of a given sample at one time, so researchers can image samples at higher resolution. Compared with light-field microscopy, light-sheet microscopy produces images that are quicker to process, but the data are not as comprehensive, since they only capture information from a single 2D plane at a time.
    To take advantage of the benefits of each technique, EMBL researchers developed an approach that uses light-field microscopy to image large 3D samples and light-sheet microscopy to train the AI algorithms, which then create an accurate 3D picture of the sample.
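The training scheme, a fast-but-degraded modality corrected against a slower, higher-quality one, can be sketched with a toy 1D analogue. Here a known blur stands in for the light-field degradation, paired "light-sheet" signals serve as ground truth, and plain gradient descent learns a small restoration filter. This illustrates only the idea of supervising one modality with another, not the EMBL network:

```python
import random

random.seed(1)

def convolve3(signal, kernel):
    """3-tap convolution with clamped edges."""
    n = len(signal)
    return [sum(kernel[j] * signal[min(max(i + j - 1, 0), n - 1)]
                for j in range(3)) for i in range(n)]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

BLUR = [0.25, 0.5, 0.25]  # stands in for the light-field degradation

# Paired training data: sharp 'light-sheet' ground truth and the
# blurred 'light-field' version of the same signal.
truth_set = [[random.random() for _ in range(16)] for _ in range(10)]
blurred_set = [convolve3(t, BLUR) for t in truth_set]

# Learn a 3-tap restoration filter by gradient descent on the MSE
# against the ground truth -- the role light-sheet images play here.
w = [0.0, 1.0, 0.0]  # start from 'no correction'
for _ in range(500):
    grad = [0.0, 0.0, 0.0]
    for truth, blurred in zip(truth_set, blurred_set):
        est = convolve3(blurred, w)
        n = len(truth)
        for i in range(n):
            err = est[i] - truth[i]
            for j in range(3):
                grad[j] += 2.0 * err * blurred[min(max(i + j - 1, 0), n - 1)] / n
    for j in range(3):
        w[j] -= 0.05 * grad[j] / len(truth_set)

before = sum(mse(b, t) for t, b in zip(truth_set, blurred_set))
after = sum(mse(convolve3(b, w), t) for t, b in zip(truth_set, blurred_set))
print(f"MSE before: {before:.4f}  after training: {after:.4f}")
```

Once trained, the filter can be applied to new blurred signals without ever measuring the slow ground-truth modality again, which is what buys the speed-up the researchers report.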
    “If you build algorithms that produce an image, you need to check that these algorithms are constructing the right image,” explains Anna Kreshuk, the EMBL group leader whose team brought machine learning expertise to the project. In the new study, the researchers used light-sheet microscopy to make sure the AI algorithms were working, Anna says. “This makes our research stand out from what has been done in the past.”
    Robert Prevedel, the EMBL group leader whose group contributed the novel hybrid microscopy platform, notes that the real bottleneck in building better microscopes often isn’t optics technology, but computation. That’s why, back in 2018, he and Anna decided to join forces. “Our method will be really key for people who want to study how brains compute. Our method can image an entire brain of a fish larva, in real time,” Robert says.
    He and Anna say this approach could potentially be modified to work with different types of microscopes too, eventually allowing biologists to look at dozens of different specimens and see much more, much faster. For example, it could help to find genes that are involved in heart development, or could measure the activity of thousands of neurons at the same time.
    Next, the researchers plan to explore whether the method can be applied to larger species, including mammals.

  • Researchers develop artificial intelligence that can detect sarcasm in social media

    Computer science researchers at the University of Central Florida have developed a sarcasm detector.
    Social media has become a dominant form of communication for individuals, and for companies looking to market and sell their products and services. Properly understanding and responding to customer feedback on Twitter, Facebook and other social media platforms is critical for success, but it is incredibly labor intensive.
    That’s where sentiment analysis comes in. The term refers to the automated process of identifying the emotion — either positive, negative or neutral — associated with text. While much of artificial intelligence concerns logical data analysis and response, sentiment analysis is about correctly identifying emotional communication. A UCF team developed a technique that accurately detects sarcasm in social media text.
    The team’s findings were recently published in the journal Entropy.
    In effect, the team taught the computer model to find patterns that often indicate sarcasm, and combined that with teaching the program to correctly pick out cue words in sequences that were more likely to indicate sarcasm. They taught the model to do this by feeding it large data sets and then checking its accuracy.
    “The presence of sarcasm in text is the main hindrance in the performance of sentiment analysis,” says Assistant Professor of engineering Ivan Garibay ’00MS ’04PhD. “Sarcasm isn’t always easy to identify in conversation, so you can imagine it’s pretty challenging for a computer program to do it and do it well. We developed an interpretable deep learning model using multi-head self-attention and gated recurrent units. The multi-head self-attention module aids in identifying crucial sarcastic cue-words from the input, and the recurrent units learn long-range dependencies between these cue-words to better classify the input text.”
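The cue-word weighting that self-attention provides can be illustrated with a bare-bones, single-head version in pure Python. The toy embeddings below are hand-picked so that one "cue word" dominates; a real model such as the UCF one learns its embeddings and projections from data:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(embeddings):
    """Single-head scaled dot-product self-attention. For brevity the
    queries, keys and values are the raw embeddings (no learned
    projections). Each row of the returned matrix shows how strongly
    one token attends to every other token."""
    d = len(embeddings[0])
    scores = [[sum(q * k for q, k in zip(qi, kj)) / math.sqrt(d)
               for kj in embeddings] for qi in embeddings]
    return [softmax(row) for row in scores]

# Toy 'sarcastic' sentence; the cue word 'great' is given a larger
# embedding by hand, so every token attends mostly to it.
tokens = ["oh", "great", "another", "meeting"]
emb = [[0.1, 0.2], [2.0, 1.5], [0.2, 0.1], [0.3, 0.2]]

for token, row in zip(tokens, self_attention(emb)):
    print(f"{token:>8}: {[round(p, 2) for p in row]}")
```

Every row of the attention matrix puts its largest weight on "great", which is the interpretability property Garibay describes: the model can show which cue words drove its sarcasm decision. The recurrent units in the actual model then track dependencies between such cue words over longer spans.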
    The team, which includes computer science doctoral student Ramya Akula, began working on this problem under a DARPA grant that supports the organization’s Computational Simulation of Online Social Behavior program.
    “Sarcasm has been a major hurdle to increasing the accuracy of sentiment analysis, especially on social media, since sarcasm relies heavily on vocal tones, facial expressions and gestures that cannot be represented in text,” says Brian Kettler, a program manager in DARPA’s Information Innovation Office (I2O). “Recognizing sarcasm in textual online communication is no easy task as none of these cues are readily available.”
    This is one of the challenges Garibay’s Complex Adaptive Systems Lab (CASL) is studying. CASL is an interdisciplinary research group dedicated to the study of complex phenomena such as the global economy, the global information environment, innovation ecosystems, sustainability, and social and cultural dynamics and evolution. CASL scientists study these problems using data science, network science, complexity science, cognitive science, machine learning, deep learning, the social sciences and team cognition, among other approaches.
    “In face-to-face conversation, sarcasm can be identified effortlessly using facial expressions, gestures, and tone of the speaker,” Akula says. “Detecting sarcasm in textual communication is not a trivial task as none of these cues are readily available. Especially with the explosion of internet usage, sarcasm detection in online communications from social networking platforms is much more challenging.”
    Garibay is an assistant professor in Industrial Engineering and Management Systems. He has several degrees including a Ph.D. in computer science from UCF. Garibay is the director of UCF’s Artificial Intelligence and Big Data Initiative of CASL and of the master’s program in data analytics. His research areas include complex systems, agent-based models, information and misinformation dynamics on social media, artificial intelligence and machine learning. He has more than 75 peer-reviewed papers and more than $9.5 million in funding from various national agencies.
    Akula is a doctoral scholar and graduate research assistant at CASL. She has a master’s degree in computer science from Technical University of Kaiserslautern in Germany and a bachelor’s degree in computer science engineering from Jawaharlal Nehru Technological University, India.
    Story Source:
    Materials provided by University of Central Florida. Original written by Zenaida Gonzalez Kotala.

  • Algorithms show accuracy in gauging unconsciousness under general anesthesia

    Anesthetic drugs act on the brain, but most anesthesiologists rely on heart rate, respiratory rate, and movement to infer whether surgery patients remain unconscious to the desired degree. In a new study, a research team based at MIT and Massachusetts General Hospital shows that a straightforward artificial intelligence approach, attuned to the kind of anesthetic being used, can yield algorithms that assess unconsciousness in patients based on brain activity with high accuracy and reliability.
    “One of the things that is foremost in the minds of anesthesiologists is ‘Do I have somebody who is lying in front of me who may be conscious and I don’t realize it?’ Being able to reliably maintain unconsciousness in a patient during surgery is fundamental to what we do,” said senior author Emery N. Brown, Edward Hood Taplin Professor in The Picower Institute for Learning and Memory and the Institute for Medical Engineering and Science at MIT, and an anesthesiologist at MGH. “This is an important step forward.”
    More than providing a good readout of unconsciousness, Brown added, the new algorithms offer the potential to allow anesthesiologists to maintain it at the desired level while using less drug than they might administer when depending on less direct, accurate and reliable indicators. That can improve patients’ post-operative outcomes, for instance by reducing the risk of delirium.
    “We may always have to be a little bit ‘overboard’,” said Brown, who is also a professor at Harvard Medical School. “But can we do it with sufficient accuracy so that we are not dosing people more than is needed?”
    Used to drive an infusion pump, for instance, algorithms could help anesthesiologists precisely throttle drug delivery to optimize a patient’s state and the doses they are receiving.
    Artificial intelligence, real-world testing
    To develop the technology to do so, postdocs John Abel and Marcus Badgeley led the study, published in PLOS ONE, in which they trained machine learning algorithms on a remarkable data set the lab gathered back in 2013. In that study, 10 healthy volunteers in their 20s underwent anesthesia with the commonly used drug propofol. As the dose was methodically raised using computer-controlled delivery, the volunteers were asked to respond to a simple request until they couldn’t anymore. When the dose was later reduced and they were brought back to consciousness, they became able to respond again. All the while, neural rhythms reflecting their brain activity were recorded with electroencephalogram (EEG) electrodes, providing a direct, real-time link between measured brain activity and exhibited unconsciousness.
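As a toy illustration of classifying consciousness from EEG-like rhythms (not the study's algorithm), one can compare power in the slow-wave band against the alpha band, since propofol unconsciousness is marked by strong slow oscillations. The signals and the threshold rule below are synthetic, chosen only to make the mechanics concrete:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power in the [f_lo, f_hi] Hz band via a direct DFT (fine for
    short windows; a real system would use an FFT)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            im = sum(s * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            power += (re * re + im * im) / n ** 2
    return power

def predict_unconscious(signal, fs):
    """Toy rule: flag unconsciousness when slow-wave (0.5-4 Hz) power
    exceeds alpha (8-12 Hz) power."""
    return band_power(signal, fs, 0.5, 4.0) > band_power(signal, fs, 8.0, 12.0)

fs, n = 100, 200  # 100 Hz sampling, a 2-second window
awake = [math.sin(2 * math.pi * 10.0 * i / fs) for i in range(n)]  # alpha rhythm
dosed = [math.sin(2 * math.pi * 1.0 * i / fs) for i in range(n)]   # slow wave

print("awake window flagged unconscious?", predict_unconscious(awake, fs))
print("dosed window flagged unconscious?", predict_unconscious(dosed, fs))
```

The study's algorithms are trained on such spectral structure rather than hand-set thresholds, and are tuned to the specific anesthetic, since different drugs produce different EEG signatures.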