More stories

  •

    Intersection of 2D materials results in entirely new materials

    In 1884, Edwin Abbott wrote the novel Flatland: A Romance of Many Dimensions as a satire of Victorian hierarchy. He imagined a world that existed only in two dimensions, where the beings are 2D geometric figures. The physics of such a world is somewhat akin to that of modern 2D materials, such as graphene and transition metal dichalcogenides, which include tungsten disulfide (WS2), tungsten diselenide (WSe2), molybdenum disulfide (MoS2) and molybdenum diselenide (MoSe2).
    Modern 2D materials consist of single-atom layers, where electrons can move in two dimensions but their motion in the third dimension is restricted. Due to this ‘squeeze’, 2D materials have enhanced optical and electronic properties that show great promise as next-generation, ultrathin devices in the fields of energy, communications, imaging and quantum computing, among others.
    Typically, for all these applications, the 2D materials are envisioned in flat-lying arrangements. Unfortunately, however, the strength of these materials is also their greatest weakness — they are extremely thin. This means that when they are illuminated, light can interact with them only over a tiny thickness, which limits their usefulness. To overcome this shortcoming, researchers are starting to look for new ways to fold the 2D materials into complex 3D shapes.
    In our 3D universe, 2D materials can be arranged on top of each other. To extend the Flatland metaphor, such an arrangement would quite literally represent parallel worlds inhabited by people who are destined to never meet.
    Now, scientists from the Department of Physics at the University of Bath in the UK have found a way to arrange 2D sheets of WS2 (previously created in their lab) into a 3D configuration, resulting in an energy landscape that is strongly modified when compared to that of the flat-lying WS2 sheets. This particular 3D arrangement is known as a ‘nanomesh’: a webbed network of densely packed, randomly distributed stacks, containing twisted and/or fused WS2 sheets.
    Modifications of this kind in Flatland would allow people to step into each other’s worlds. “We didn’t set out to distress the inhabitants of Flatland,” said Professor Ventsislav Valev, who led the research. “But because of the many defects that we nanoengineered in the 2D materials, these hypothetical inhabitants would find their world quite strange indeed.
    “First, our WS2 sheets have finite dimensions with irregular edges, so their world would have a strangely shaped end. Also, some of the sulphur atoms have been replaced by oxygen, which would feel just wrong to any inhabitant. Most importantly, our sheets intersect and fuse together, and even twist on top of each other, which modifies the energy landscape of the materials. For the Flatlanders, such an effect would look like the laws of the universe had suddenly changed across their entire landscape.”
    Dr Adelina Ilie, who developed the new material together with her former PhD student and post-doc Zichen Liu, said: “The modified energy landscape is a key point for our study. It is proof that assembling 2D materials into a 3D arrangement does not just result in ‘thicker’ 2D materials — it produces entirely new materials. Our nanomesh is technologically simple to produce, and it offers tunable material properties to meet the demands of future applications.”
    Professor Valev added: “The nanomesh has very strong nonlinear optical properties — it efficiently converts one laser colour into another over a broad palette of colours. Our next goal is to use it on Si waveguides for developing quantum optical communications.”
    PhD student Alexander Murphy, also involved in the research, said: “In order to reveal the modified energy landscape, we devised new characterisation methods and I look forward to applying these to other materials. Who knows what else we could discover?”
    Story Source:
    Materials provided by University of Bath. Note: Content may be edited for style and length.

  •

    A common antibiotic slows a mysterious coral disease

    Slathering corals in a common antibiotic seems to temporarily soothe a mysterious tissue-eating disease, new research suggests.

    Just off Florida, a type of coral infected with stony coral tissue loss disease, or SCTLD, showed widespread improvement several months after being treated with amoxicillin, researchers report April 21 in Scientific Reports. While the deadly disease eventually reappeared, the results provide a spot of good news while scientists continue the search for what causes it.

    “The antibiotic treatments give the corals a break,” says Erin Shilling, a coral researcher at Florida Atlantic University’s Harbor Branch Oceanographic Institute in Fort Pierce. “It’s very good at halting the lesions it’s applied to.”

    Divers discovered SCTLD on reefs near Miami in 2014. Characterized by white lesions that rapidly eat away at coral tissue, the disease plagues nearly all of the Great Florida Reef, which spans 580 kilometers from St. Lucie Inlet in Martin County to Dry Tortugas National Park beyond the Florida Keys. In recent years, SCTLD has spread to reefs in the Caribbean (SN: 7/9/19).

    As scientists search for the cause, they are left to treat the lesions through trial and error. Two treatments that show promise involve divers applying a chlorinated epoxy or an amoxicillin paste to infected patches. “We wanted to experimentally assess these techniques to see if they’re as effective as people have been reporting anecdotally,” Shilling says.

    In April 2019, Shilling and colleagues identified 95 lesions on 32 colonies of great star coral (Montastraea cavernosa) off Florida’s east coast. The scientists dug trenches into the corals around the lesions to separate diseased tissue from healthy tissue, then filled the moats and covered the diseased patches with the antibiotic paste or chlorinated epoxy and monitored the corals over 11 months.

    Treatment with an amoxicillin paste (white bands, left) stopped a tissue-eating lesion from spreading over a great star coral colony up to 11 months later (right). Credit: E.N. Shilling, I.R. Combs and J.D. Voss/Scientific Reports 2021

    Within about three months of the treatment, some 95 percent of infected coral tissues treated with amoxicillin had healed. Meanwhile, only about 20 percent of infected tissue treated with chlorinated epoxy had healed in that time — no better than untreated lesions. 

    But a one-and-done treatment doesn’t stop new lesions from popping up over time, the team found. And some key questions remain unanswered, the scientists note, including how the treatment works on larger scales and what, if any, longer-term side effects the antibiotic could have on the corals and their surrounding environment.

    “Erin’s work is fabulous,” says Karen Neely, a marine biologist at Nova Southeastern University in Fort Lauderdale, Fla. Neely and her colleagues see similar results in their two-year experiment at the Florida Keys National Marine Sanctuary. The researchers used the same amoxicillin paste and chlorinated epoxy treatments on more than 2,300 lesions on upwards of 1,600 coral colonies representing eight species, including great star coral.

    Those antibiotic treatments were more than 95 percent effective across all species, Neely says. And spot-treating new lesions that popped up after the initial treatment appeared to stop corals from becoming reinfected over time. That study is currently undergoing peer review at Frontiers in Marine Science.

    “Overall, putting these corals in this treatment program saves them,” Neely says. “We don’t get happy endings very often, so that’s a nice one.”

  •

    Smartphone breath alcohol testing devices vary widely in accuracy

    Alcohol-impaired driving kills 29 people a day and costs $121 billion a year in the U.S. After years of progress in reducing alcohol-impaired driving fatalities, efforts began to stall in 2009, and fatalities started increasing again in 2015. With several studies demonstrating that drinkers cannot accurately estimate their own blood alcohol concentration (BAC), handheld alcohol breath testing devices, also known as breathalyzers, allow people to measure their own breath alcohol concentration (BrAC) to determine if they are below the legal limit of 0.08% before attempting to drive.
    The latest generation of personal alcohol breath testing devices pairs with smartphones. While some of these devices were found to be relatively accurate, others may mislead users into thinking that they are fit to drive, according to a new study from the Perelman School of Medicine at the University of Pennsylvania.
    The findings, published today in Alcoholism: Clinical & Experimental Research, compare the accuracy of six such devices with that of two validated alcohol-consumption tests — BAC taken from venipuncture, and a police-grade handheld breath testing device.
    “All alcohol-impaired driving crashes are preventable tragedies,” says lead investigator M. Kit Delgado, MD, MS, an assistant professor of Emergency Medicine and Epidemiology at Penn. “It is common knowledge that you should not drive if intoxicated, but people often don’t have or plan alternative travel arrangements and have difficulty judging their fitness to drive after drinking. Some may use smartphone breathalyzers to see if they are over the legal driving limit. If these devices lead people to incorrectly believe their blood alcohol content is low enough to drive safely, they endanger not only themselves, but everyone else on the road or in the car.”
    To assess these devices, researchers engaged 20 moderate drinkers between the ages of 21 and 39. The participants were given three doses of vodka over 70 minutes with the goal of reaching a peak BAC of around 0.10%, above the legal driving limit. After each dose, participants’ BrAC was measured using smartphone-paired devices and a police-grade handheld device. After the third dose, their blood was drawn and tested for BAC, the most accurate way of measuring alcohol consumption. Researchers also explored the devices’ ability to detect breath alcohol concentration above common legal driving limits (0.05% and 0.08%). They used statistical analysis to explore differences between the measurements.
    All seven devices underestimated BAC by more than 0.01%, though some were consistently more accurate than others. Two devices failed to detect BrAC levels of 0.08% as measured by a police-grade device more than half the time. Since the completion of the study, one of the devices was discontinued and is no longer sold, and other models have been replaced by newer technologies. However, two of the other devices had accuracy similar to that of a police-grade device. These devices have been used to remotely collect accurate measurements of alcohol consumption for research. They could also be used to scale up contingency management addiction treatment programs that have been shown to help promote abstinence among patients with alcohol use disorders. These programs, which have proven to be highly effective, have traditionally provided prizes for negative in-person breathalyzer measurements. Smartphone breathalyzer apps allow these programs to be administered remotely, since breath alcohol readings can be verified with automatically captured pictures of the person providing the reading, and prize redemption could be automated.
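    As a rough illustration of the kind of comparison reported above, the hedged Python sketch below computes a device's average bias against a reference measurement and its sensitivity for flagging readings at or above the 0.08% limit. The readings and variable names are invented placeholders, not the study's data or analysis code.

    ```python
    # Hedged sketch only: comparing hypothetical device readings with a reference
    # measurement (e.g., venipuncture BAC). These numbers are invented.
    import numpy as np

    reference_bac = np.array([0.09, 0.10, 0.11, 0.08, 0.12])   # reference measurements
    device_brac   = np.array([0.07, 0.09, 0.08, 0.06, 0.10])   # one smartphone device

    bias = np.mean(device_brac - reference_bac)
    print(f"mean bias: {bias:+.3f}")      # a negative value indicates underestimation

    # How often the device flags readings that are at or above the 0.08% legal limit.
    over_limit = reference_bac >= 0.08
    flagged = device_brac >= 0.08
    sensitivity = np.mean(flagged[over_limit])
    print(f"sensitivity at 0.08%: {sensitivity:.0%}")
    ```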
    “While it’s always best to plan not to drive after drinking, if the public or addiction treatment providers are going to use these devices, some are more accurate than others. Given how beneficial these breathalyzer devices could be to public health, our findings suggest that oversight or regulation would be valuable,” Delgado concludes. “Currently, the Food and Drug Administration doesn’t require approval for these devices — which would involve clearance based on review of data accuracy — but it should reconsider this position in light of our findings.”
    Story Source:
    Materials provided by University of Pennsylvania School of Medicine. Note: Content may be edited for style and length.

  •

    Artificial intelligence makes great microscopes better than ever

    To observe the swift neuronal signals in a fish brain, scientists have started to use a technique called light-field microscopy, which makes it possible to image such fast biological processes in 3D. But the images are often lacking in quality, and it takes hours or days for massive amounts of data to be converted into 3D volumes and movies.
    Now, EMBL scientists have combined artificial intelligence (AI) algorithms with two cutting-edge microscopy techniques — an advance that shortens the time for image processing from days to mere seconds, while ensuring that the resulting images are crisp and accurate. The findings are published in Nature Methods.
    “Ultimately, we were able to take ‘the best of both worlds’ in this approach,” says Nils Wagner, one of the paper’s two lead authors and now a PhD student at the Technical University of Munich. “AI enabled us to combine different microscopy techniques, so that we could image as fast as light-field microscopy allows and get close to the image resolution of light-sheet microscopy.”
    Although light-sheet microscopy and light-field microscopy sound similar, these techniques have different advantages and challenges. Light-field microscopy captures large 3D images that allow researchers to track and measure remarkably fine movements, such as a fish larva’s beating heart, at very high speeds. But this technique produces massive amounts of data, which can take days to process, and the final images usually lack resolution.
    Light-sheet microscopy homes in on a single 2D plane of a given sample at one time, so researchers can image samples at higher resolution. Compared with light-field microscopy, light-sheet microscopy produces images that are quicker to process, but the data are not as comprehensive, since they only capture information from a single 2D plane at a time.
    To take advantage of the benefits of each technique, EMBL researchers developed an approach that uses light-field microscopy to image large 3D samples and light-sheet microscopy to train the AI algorithms, which then create an accurate 3D picture of the sample.
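    The team's actual pipeline and network are described in the Nature Methods paper; as a loose sketch of the general idea only (pairing light-field-derived volumes with light-sheet volumes and training a network to map one to the other), the following PyTorch example uses an invented toy 3D network, random stand-in data and assumed tensor shapes.

    ```python
    # Minimal sketch (not the authors' code): supervised training on paired volumes,
    # where raw light-field acquisitions are mapped to light-sheet "ground truth".
    # The network, names and shapes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class VolumeReconstructor(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 1, kernel_size=3, padding=1),
            )

        def forward(self, x):
            return self.net(x)

    model = VolumeReconstructor()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    # Hypothetical paired data: light-field-derived volumes and light-sheet targets.
    light_field = torch.rand(4, 1, 32, 64, 64)   # batch, channel, depth, height, width
    light_sheet = torch.rand(4, 1, 32, 64, 64)

    for epoch in range(10):
        optimizer.zero_grad()
        prediction = model(light_field)
        loss = loss_fn(prediction, light_sheet)  # light-sheet volumes supervise the network
        loss.backward()
        optimizer.step()
    ```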
    “If you build algorithms that produce an image, you need to check that these algorithms are constructing the right image,” explains Anna Kreshuk, the EMBL group leader whose team brought machine learning expertise to the project. In the new study, the researchers used light-sheet microscopy to make sure the AI algorithms were working, Anna says. “This makes our research stand out from what has been done in the past.”
    Robert Prevedel, the EMBL group leader whose group contributed the novel hybrid microscopy platform, notes that the real bottleneck in building better microscopes often isn’t optics technology, but computation. That’s why, back in 2018, he and Anna decided to join forces. “Our method will be really key for people who want to study how brains compute. Our method can image an entire brain of a fish larva, in real time,” Robert says.
    He and Anna say this approach could potentially be modified to work with different types of microscopes too, eventually allowing biologists to look at dozens of different specimens and see much more, much faster. For example, it could help to find genes that are involved in heart development, or could measure the activity of thousands of neurons at the same time.
    Next, the researchers plan to explore whether the method can be applied to larger species, including mammals.

  •

    Researchers develop artificial intelligence that can detect sarcasm in social media

    Computer science researchers at the University of Central Florida have developed a sarcasm detector.
    Social media has become a dominant form of communication for individuals, and for companies looking to market and sell their products and services. Properly understanding and responding to customer feedback on Twitter, Facebook and other social media platforms is critical for success, but it is incredibly labor intensive.
    That’s where sentiment analysis comes in. The term refers to the automated process of identifying the emotion — either positive, negative or neutral — associated with text. While artificial intelligence refers to logical data analysis and response, sentiment analysis is akin to correctly identifying emotional communication. A UCF team developed a technique that accurately detects sarcasm in social media text.
    The team’s findings were recently published in the journal Entropy.
    Effectively, the team taught the computer model to find patterns that often indicate sarcasm and combined that with teaching the program to correctly pick out cue words in sequences that were more likely to indicate sarcasm. They taught the model to do this by feeding it large data sets and then checked its accuracy.
    “The presence of sarcasm in text is the main hindrance in the performance of sentiment analysis,” says Assistant Professor of engineering Ivan Garibay ’00MS ’04PhD. “Sarcasm isn’t always easy to identify in conversation, so you can imagine it’s pretty challenging for a computer program to do it and do it well. We developed an interpretable deep learning model using multi-head self-attention and gated recurrent units. The multi-head self-attention module aids in identifying crucial sarcastic cue-words from the input, and the recurrent units learn long-range dependencies between these cue-words to better classify the input text.”
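    As a hedged illustration of the architecture described in that quote, and not the UCF team's released code, the short PyTorch sketch below wires an embedding layer into multi-head self-attention followed by a GRU and a two-class output. The vocabulary size, dimensions and layer arrangement are assumptions.

    ```python
    # Illustrative sketch only: a text classifier combining multi-head self-attention
    # with a gated recurrent unit (GRU), roughly as described in the quote above.
    import torch
    import torch.nn as nn

    class SarcasmClassifier(nn.Module):
        def __init__(self, vocab_size=10000, embed_dim=128, num_heads=4, hidden_dim=128):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim)
            # Multi-head self-attention highlights candidate sarcastic cue words.
            self.attention = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
            # The GRU captures longer-range dependencies between those cues.
            self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
            self.classifier = nn.Linear(hidden_dim, 2)  # sarcastic vs. not sarcastic

        def forward(self, token_ids):
            x = self.embedding(token_ids)                # (batch, seq_len, embed_dim)
            attended, weights = self.attention(x, x, x)  # weights support interpretability
            _, hidden = self.gru(attended)
            return self.classifier(hidden[-1]), weights

    model = SarcasmClassifier()
    dummy_batch = torch.randint(0, 10000, (8, 40))       # 8 posts of 40 token ids each
    logits, attention_weights = model(dummy_batch)
    print(logits.shape, attention_weights.shape)         # (8, 2) and (8, 40, 40)
    ```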
    The team, which includes computer science doctoral student Ramya Akula, began working on this problem under a DARPA grant that supports the organization’s Computational Simulation of Online Social Behavior program.
    “Sarcasm has been a major hurdle to increasing the accuracy of sentiment analysis, especially on social media, since sarcasm relies heavily on vocal tones, facial expressions and gestures that cannot be represented in text,” says Brian Kettler, a program manager in DARPA’s Information Innovation Office (I2O). “Recognizing sarcasm in textual online communication is no easy task as none of these cues are readily available.”
    This is one of the challenges Garibay’s Complex Adaptive Systems Lab (CASL) is studying. CASL is an interdisciplinary research group dedicated to the study of complex phenomena such as the global economy, the global information environment, innovation ecosystems, sustainability, and social and cultural dynamics and evolution. CASL scientists study these problems using data science, network science, complexity science, cognitive science, machine learning, deep learning, social sciences and team cognition, among other approaches.
    “In face-to-face conversation, sarcasm can be identified effortlessly using facial expressions, gestures, and tone of the speaker,” Akula says. “Detecting sarcasm in textual communication is not a trivial task as none of these cues are readily available. Especially with the explosion of internet usage, sarcasm detection in online communications from social networking platforms is much more challenging.”
    Garibay is an assistant professor in Industrial Engineering and Management Systems. He has several degrees including a Ph.D. in computer science from UCF. Garibay is the director of UCF’s Artificial Intelligence and Big Data Initiative of CASL and of the master’s program in data analytics. His research areas include complex systems, agent-based models, information and misinformation dynamics on social media, artificial intelligence and machine learning. He has more than 75 peer-reviewed papers and more than $9.5 million in funding from various national agencies.
    Akula is a doctoral scholar and graduate research assistant at CASL. She has a master’s degree in computer science from Technical University of Kaiserslautern in Germany and a bachelor’s degree in computer science engineering from Jawaharlal Nehru Technological University, India.
    Story Source:
    Materials provided by University of Central Florida. Original written by Zenaida Gonzalez Kotala. Note: Content may be edited for style and length.

  •

    Algorithms show accuracy in gauging unconsciousness under general anesthesia

    Anesthetic drugs act on the brain, but most anesthesiologists rely on heart rate, respiratory rate, and movement to infer whether surgery patients remain unconscious to the desired degree. In a new study, a research team based at MIT and Massachusetts General Hospital shows that a straightforward artificial intelligence approach, attuned to the kind of anesthetic being used, can yield algorithms that assess unconsciousness in patients based on brain activity with high accuracy and reliability.
    “One of the things that is foremost in the minds of anesthesiologists is ‘Do I have somebody who is lying in front of me who may be conscious and I don’t realize it?’ Being able to reliably maintain unconsciousness in a patient during surgery is fundamental to what we do,” said senior author Emery N. Brown, Edward Hood Taplin Professor in The Picower Institute for Learning and Memory and the Institute for Medical Engineering and Science at MIT, and an anesthesiologist at MGH. “This is an important step forward.”
    More than providing a good readout of unconsciousness, Brown added, the new algorithms offer the potential to allow anesthesiologists to maintain it at the desired level while using less drug than they might administer when depending on less direct, accurate and reliable indicators. That can improve patients’ post-operative outcomes, for instance by reducing the risk of delirium.
    “We may always have to be a little bit ‘overboard’,” said Brown, who is also a professor at Harvard Medical School. “But can we do it with sufficient accuracy so that we are not dosing people more than is needed?”
    Used to drive an infusion pump, for instance, algorithms could help anesthesiologists precisely throttle drug delivery to optimize a patient’s state and the doses they are receiving.
    Artificial intelligence, real-world testing
    To develop the technology to do so, postdocs John Abel and Marcus Badgeley led the study, published in PLOS ONE [LINK TBD], in which they trained machine learning algorithms on a remarkable data set the lab gathered back in 2013. In that study, 10 healthy volunteers in their 20s underwent anesthesia with the commonly used drug propofol. As the dose was methodically raised using computer-controlled delivery, the volunteers were asked to respond to a simple request until they couldn’t anymore. Then, when they were brought back to consciousness as the dose was later lessened, they became able to respond again. All the while, neural rhythms reflecting their brain activity were recorded with electroencephalogram (EEG) electrodes, providing a direct, real-time link between measured brain activity and exhibited unconsciousness.
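    A minimal sketch of that general recipe, and not the MIT/MGH pipeline, is shown below: EEG epochs are reduced to band-power features and a simple supervised classifier is fit against behavioral labels. The synthetic signals, the choice of slow and alpha bands, and the logistic-regression model are all assumptions for illustration.

    ```python
    # Illustrative sketch only: classify "conscious" vs. "unconscious" epochs from
    # EEG band-power features with a simple supervised model on synthetic data.
    import numpy as np
    from scipy.signal import welch
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    fs = 250                          # assumed sampling rate in Hz
    n_epochs, epoch_len = 200, fs * 2

    # Synthetic EEG epochs and labels (1 = unconscious). Real data would come from
    # the volunteers' recordings paired with their behavioral responses.
    eeg = rng.standard_normal((n_epochs, epoch_len))
    labels = rng.integers(0, 2, n_epochs)

    def band_power(epoch, fmin, fmax):
        freqs, psd = welch(epoch, fs=fs, nperseg=fs)
        mask = (freqs >= fmin) & (freqs < fmax)
        return psd[mask].mean()

    # Slow/delta and alpha band power are commonly cited propofol signatures.
    features = np.array([[band_power(e, 0.5, 4), band_power(e, 8, 12)] for e in eeg])

    model = LogisticRegression().fit(features, labels)
    print("training accuracy:", model.score(features, labels))
    ```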

  •

    Hologram experts can now create real-life images that move in the air

    They may be tiny weapons, but Brigham Young University’s holography research group has figured out how to create lightsabers — green for Yoda and red for Darth Vader, naturally — with actual luminous beams rising from them.
    Inspired by the displays of science fiction, the researchers have also engineered battles between equally small versions of the Starship Enterprise and a Klingon Battle Cruiser that incorporate photon torpedoes launching and striking the enemy vessel that you can see with the naked eye.
    “What you’re seeing in the scenes we create is real; there is nothing computer generated about them,” said lead researcher Dan Smalley, a professor of electrical engineering at BYU. “This is not like the movies, where the lightsabers or the photon torpedoes never really existed in physical space. These are real, and if you look at them from any angle, you will see them existing in that space.”
    It’s the latest work from Smalley and his team of researchers who garnered national and international attention three years ago when they figured out how to draw screenless, free-floating objects in space. Called optical trap displays, they’re created by trapping a single particle in the air with a laser beam and then moving that particle around, leaving behind a laser-illuminated path that floats in midair, like “a 3D printer for light.”
    The research group’s new project, funded by a National Science Foundation CAREER grant, goes to the next level and produces simple animations in thin air. The development paves the way for an immersive experience where people can interact with holographic-like virtual objects that co-exist in their immediate space.
    “Most 3D displays require you to look at a screen, but our technology allows us to create images floating in space — and they’re physical, not some mirage,” Smalley said. “This technology can make it possible to create vibrant animated content that orbits around or crawls on or explodes out of everyday physical objects.”
    To demonstrate that principle, the team has created virtual stick figures that walk in thin air. They were able to demonstrate the interaction between their virtual images and humans by having a student place a finger in the middle of the volumetric display and then film the same stick figure walking along and jumping off that finger.
    Smalley and study coauthor Rogers detail these and other recent breakthroughs in a new paper published in Scientific Reports this month. The work addresses a limiting factor of optical trap displays: until now, the technology has lacked the ability to show virtual images. Smalley and Rogers show it is possible to simulate virtual images by employing a time-varying perspective projection backdrop.
    “We can play some fancy tricks with motion parallax and we can make the display look a lot bigger than it physically is,” Rogers said. “This methodology would allow us to create the illusion of a much deeper display up to theoretically an infinite size display.”
    Video: https://www.youtube.com/watch?v=N12i_FaHvOU&list=TLGGbyUMLSISdIswNzA1MjAyMQ&t=1s
    Story Source:
    Materials provided by Brigham Young University. Original written by Todd Hollingshead. Note: Content may be edited for style and length.

  •

    Mangrove forests on the Yucatan Peninsula store record amounts of carbon

    Coastal mangrove forests are carbon storage powerhouses, tucking away vast amounts of organic matter among their submerged, tangled root webs.

    But even for mangroves, there is a “remarkable” amount of carbon stored in small pockets of forest growing around sinkholes on Mexico’s Yucatan Peninsula, researchers report May 5 in Biology Letters. These forests can stock away more than five times as much carbon per hectare as most other terrestrial forests.

    There are dozens of mangrove-lined sinkholes, or cenotes, on the peninsula. Such carbon storage hot spots could help nations or companies achieve carbon neutrality — in which the volume of greenhouse gas emissions released into the atmosphere is balanced by the amount of carbon sequestered away (SN: 1/31/20).

    At three cenotes, researchers led by Fernanda Adame, a wetland scientist at Griffith University in Brisbane, Australia, collected samples of soil at depths down to 6 meters, and used carbon-14 dating to estimate how fast the soil had accumulated at each site. The three cenotes each had “massive” amounts of soil organic carbon, the researchers report, averaging about 1,500 metric tons per hectare. One site, Casa Cenote, stored as much as 2,792 metric tons per hectare.
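    For readers unfamiliar with how figures like “metric tons per hectare” are typically derived from soil cores, the back-of-the-envelope Python sketch below multiplies an assumed bulk density, organic carbon fraction and sampling depth; the inputs are invented placeholders, not the study's measurements.

    ```python
    # Rough, illustrative arithmetic only (not the study's numbers or protocol):
    # soil carbon stock per unit area ≈ bulk density x organic carbon fraction x depth.
    bulk_density_g_cm3 = 0.15   # assumed value for organic-rich mangrove peat
    carbon_fraction = 0.25      # assumed fraction of soil mass that is organic carbon
    depth_m = 6.0               # cores in the study reached depths of about 6 m

    # grams of carbon per cm^2 of surface: g/cm^3 * fraction * depth in cm
    carbon_g_per_cm2 = bulk_density_g_cm3 * carbon_fraction * depth_m * 100

    # convert to metric tons per hectare: 1 ha = 1e8 cm^2, 1 t = 1e6 g
    carbon_t_per_ha = carbon_g_per_cm2 * 1e8 / 1e6
    print(f"{carbon_t_per_ha:.0f} t C per hectare")  # ~2250 with these assumed inputs
    ```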

    Mangrove roots make ideal traps for organic material. The submerged soils also help preserve carbon. As sea levels have slowly risen over the last 8,000 years, mangroves have kept pace, climbing atop sediment carried in from rivers or migrating inland. In the cave-riddled limestone terrain of the Yucatan Peninsula, there are no rivers to supply sediment. Instead, “the mangroves produce more roots to avoid drowning,” which also helps the trees climb upward more quickly, offering more space for organic matter to accumulate, Adame says.

    As global temperatures increase, sea levels may eventually rise too quickly for mangroves to keep up (SN: 6/4/20). Other, more immediate threats to the peninsula’s carbon-rich cenotes include groundwater pollution, expanding infrastructure, urbanization and tourism.