More stories

  •

    Smartphone breath alcohol testing devices vary widely in accuracy

    Alcohol-impaired driving kills 29 people a day and costs $121 billion a year in the U.S. After years of progress in reducing alcohol-impaired driving fatalities, efforts began to stall in 2009, and fatalities started increasing again in 2015. With several studies demonstrating that drinkers cannot accurately estimate their own blood alcohol concentration (BAC), handheld alcohol breath testing devices, also known as breathalyzers, allow people to measure their own breath alcohol concentration (BrAC) to determine if they are below the legal limit of 0.08% before attempting to drive.
    The latest generation of personal alcohol breath testing devices pair with smartphones. While some of these devices were found to be relatively accurate, others may mislead users into thinking that they are fit to drive, according to a new study from the Perelman School of Medicine at the University of Pennsylvania.
    The findings, published today in Alcoholism: Clinical & Experimental Research, compare the accuracy of six such devices with that of two validated alcohol-consumption tests: BAC measured from blood drawn by venipuncture, and a police-grade handheld breath testing device.
    “All alcohol-impaired driving crashes are preventable tragedies,” says lead investigator M. Kit Delgado, MD, MS, an assistant professor of Emergency Medicine and Epidemiology at Penn. “It is common knowledge that you should not drive if intoxicated, but people often don’t have or plan alternative travel arrangements and have difficulty judging their fitness to drive after drinking. Some may use smartphone breathalyzers to see if they are over the legal driving limit. If these devices lead people to incorrectly believe their blood alcohol content is low enough to drive safely, they endanger not only themselves, but everyone else on the road or in the car.”
    To assess these devices, researchers engaged 20 moderate drinkers between the ages of 21 and 39. The participants were given three doses of vodka over 70 minutes with the goal of reaching a peak BAC of around 0.10%, above the legal driving limit of 0.08%. After each dose, participants’ BrAC was measured using smartphone-paired devices and a police-grade handheld device. After the third dose, their blood was drawn and tested for BAC, the most accurate way of measuring alcohol consumption. Researchers also explored the devices’ ability to detect breath alcohol concentration above common legal driving limits (0.05% and 0.08%). They used statistical analysis to explore differences between the measurements.
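    The statistical comparison boils down to measuring agreement between each device and a reference. A minimal sketch of that kind of analysis, using made-up readings rather than the study's data, computes a device's mean bias and how often it misses readings that are over the 0.08% limit:

```python
# Sketch: compare hypothetical breathalyzer readings against a reference
# measurement (e.g., a police-grade device). Illustrative data only.

def mean_bias(device, reference):
    """Average signed difference: negative means the device underestimates."""
    return sum(d - r for d, r in zip(device, reference)) / len(device)

def missed_over_limit(device, reference, limit=0.08):
    """Fraction of truly over-limit readings the device reports as under."""
    over = [(d, r) for d, r in zip(device, reference) if r >= limit]
    if not over:
        return 0.0
    return sum(1 for d, r in over if d < limit) / len(over)

reference = [0.045, 0.070, 0.085, 0.095, 0.102]   # reference BrAC (%)
device    = [0.030, 0.055, 0.068, 0.080, 0.088]   # one device's readings (%)

print(round(mean_bias(device, reference), 4))      # → -0.0152
print(round(missed_over_limit(device, reference), 2))  # share of over-limit misses
```

    A consistently negative bias means the device underestimates; the second number is the kind of miss rate that flagged the worst devices in the study.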
    All seven devices underestimated BAC by more than 0.01%, though some were consistently more accurate than others. Two devices failed to detect BrAC levels of 0.08%, as measured by a police-grade device, more than half the time. Since the completion of the study, one of the devices has been discontinued and is no longer sold, and other models have been replaced by newer technologies. However, two of the other devices were as accurate as a police-grade device. These devices have been used to remotely collect accurate measurements of alcohol consumption for research. They could also be used to scale up contingency management addiction treatment programs, which have been shown to promote abstinence among patients with alcohol use disorders. These highly effective programs have traditionally provided prizes for negative in-person breathalyzer measurements. Smartphone breathalyzer apps would allow the programs to be administered remotely: breath alcohol readings can be verified with automatically captured pictures of the person providing the reading, and prize redemption could be automated.
    “While it’s always best to plan not to drive after drinking, if the public or addiction treatment providers are going to use these devices, some are more accurate than others. Given how beneficial these breathalyzer devices could be to public health, our findings suggest that oversight or regulation would be valuable,” Delgado concludes. “Currently, the Food and Drug Administration doesn’t require approval for these devices — which would involve clearance based on review of data accuracy — but it should reconsider this position in light of our findings.”
    Story Source:
    Materials provided by University of Pennsylvania School of Medicine. Note: Content may be edited for style and length.

  •

    Artificial intelligence makes great microscopes better than ever

    To observe the swift neuronal signals in a fish brain, scientists have started to use a technique called light-field microscopy, which makes it possible to image such fast biological processes in 3D. But the images are often lacking in quality, and it takes hours or days for massive amounts of data to be converted into 3D volumes and movies.
    Now, EMBL scientists have combined artificial intelligence (AI) algorithms with two cutting-edge microscopy techniques — an advance that shortens the time for image processing from days to mere seconds, while ensuring that the resulting images are crisp and accurate. The findings are published in Nature Methods.
    “Ultimately, we were able to take ‘the best of both worlds’ in this approach,” says Nils Wagner, one of the paper’s two lead authors and now a PhD student at the Technical University of Munich. “AI enabled us to combine different microscopy techniques, so that we could image as fast as light-field microscopy allows and get close to the image resolution of light-sheet microscopy.”
    Although light-sheet microscopy and light-field microscopy sound similar, these techniques have different advantages and challenges. Light-field microscopy captures large 3D images that allow researchers to track and measure remarkably fine movements, such as a fish larva’s beating heart, at very high speeds. But this technique produces massive amounts of data, which can take days to process, and the final images usually lack resolution.
    Light-sheet microscopy homes in on a single 2D plane of a given sample at one time, so researchers can image samples at higher resolution. Compared with light-field microscopy, light-sheet microscopy produces images that are quicker to process, but the data are not as comprehensive, since they only capture information from a single 2D plane at a time.
    To take advantage of the benefits of each technique, EMBL researchers developed an approach that uses light-field microscopy to image large 3D samples and light-sheet microscopy to train the AI algorithms, which then create an accurate 3D picture of the sample.
    “If you build algorithms that produce an image, you need to check that these algorithms are constructing the right image,” explains Anna Kreshuk, the EMBL group leader whose team brought machine learning expertise to the project. In the new study, the researchers used light-sheet microscopy to make sure the AI algorithms were working, Anna says. “This makes our research stand out from what has been done in the past.”
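    The training setup the researchers describe (fast light-field acquisitions as input, matching light-sheet images as ground truth) can be sketched with a deliberately simple stand-in model. The code below fits a toy linear least-squares mapping on synthetic patches; it is not the deep network used in the paper, only an illustration of the supervised pairing:

```python
import numpy as np

# Learn a mapping from fast-but-blurry "light-field" patches to sharp
# "light-sheet" patches, then validate on held-out data, as the EMBL team
# did with real light-sheet images. All data here are synthetic.

rng = np.random.default_rng(0)

n_pairs, n_px = 200, 64                    # training pairs, pixels per patch
sharp = rng.random((n_pairs, n_px))        # stand-in light-sheet patches
blur_op = np.eye(n_px) * 0.6 + 0.4 / n_px  # toy blur: local + global mixing
blurry = sharp @ blur_op                   # stand-in light-field patches

# Fit W so that blurry @ W ≈ sharp (ridge-regularized least squares).
lam = 1e-3
W = np.linalg.solve(blurry.T @ blurry + lam * np.eye(n_px), blurry.T @ sharp)

# Check the learned mapping against held-out ground truth.
test_sharp = rng.random((50, n_px))
test_blurry = test_sharp @ blur_op
restored = test_blurry @ W
print(float(np.mean((restored - test_sharp) ** 2)))  # small reconstruction error
```

    The validation step is the point: held-out light-sheet images are what let the researchers confirm the algorithm reconstructs the right picture rather than a plausible-looking one.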
    Robert Prevedel, the EMBL group leader whose group contributed the novel hybrid microscopy platform, notes that the real bottleneck in building better microscopes often isn’t optics technology, but computation. That’s why, back in 2018, he and Anna decided to join forces. “Our method will be really key for people who want to study how brains compute. Our method can image an entire brain of a fish larva, in real time,” Robert says.
    He and Anna say this approach could potentially be modified to work with different types of microscopes too, eventually allowing biologists to look at dozens of different specimens and see much more, much faster. For example, it could help to find genes that are involved in heart development, or could measure the activity of thousands of neurons at the same time.
    Next, the researchers plan to explore whether the method can be applied to larger species, including mammals.

  •

    Researchers develop artificial intelligence that can detect sarcasm in social media

    Computer science researchers at the University of Central Florida have developed a sarcasm detector.
    Social media has become a dominant form of communication for individuals, and for companies looking to market and sell their products and services. Properly understanding and responding to customer feedback on Twitter, Facebook and other social media platforms is critical for success, but it is incredibly labor intensive.
    That’s where sentiment analysis comes in. The term refers to the automated process of identifying the emotion — positive, negative or neutral — associated with text. While much of artificial intelligence focuses on logical data analysis and response, sentiment analysis aims to identify the emotional tone of communication. A UCF team has developed a technique that accurately detects sarcasm in social media text.
    The team’s findings were recently published in the journal Entropy.
    In effect, the team taught the computer model to find patterns that often indicate sarcasm, and combined that with teaching it to pick out cue words in sequences that are more likely to be sarcastic. The model learned these patterns from large data sets, and the team then checked its accuracy.
    “The presence of sarcasm in text is the main hindrance in the performance of sentiment analysis,” says Assistant Professor of engineering Ivan Garibay ’00MS ’04PhD. “Sarcasm isn’t always easy to identify in conversation, so you can imagine it’s pretty challenging for a computer program to do it and do it well. We developed an interpretable deep learning model using multi-head self-attention and gated recurrent units. The multi-head self-attention module aids in identifying crucial sarcastic cue-words from the input, and the recurrent units learn long-range dependencies between these cue-words to better classify the input text.”
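    The attention half of the architecture Garibay describes can be illustrated from scratch. The sketch below implements plain scaled dot-product self-attention in numpy on synthetic embeddings; it is not the UCF team's code, and a multi-head version would simply run several such maps in parallel and concatenate the results:

```python
import numpy as np

# Minimal scaled dot-product self-attention over a toy token sequence,
# showing how attention weights can concentrate on particular (cue) words.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model). Returns attended outputs and attention weights."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(1)
seq_len, d = 6, 8                      # e.g. a 6-token sentence
X = rng.standard_normal((seq_len, d))  # stand-in token embeddings
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape, weights.shape)        # (6, 8) (6, 6)
```

    In the published model, inspecting such per-head weights is what makes the classifier interpretable: heavily attended tokens are candidate sarcastic cues.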
    The team, which includes computer science doctoral student Ramya Akula, began working on this problem under a DARPA grant that supports the organization’s Computational Simulation of Online Social Behavior program.
    “Sarcasm has been a major hurdle to increasing the accuracy of sentiment analysis, especially on social media, since sarcasm relies heavily on vocal tones, facial expressions and gestures that cannot be represented in text,” says Brian Kettler, a program manager in DARPA’s Information Innovation Office (I2O). “Recognizing sarcasm in textual online communication is no easy task as none of these cues are readily available.”
    This is one of the challenges Garibay’s Complex Adaptive Systems Lab (CASL) is studying. CASL is an interdisciplinary research group dedicated to the study of complex phenomena such as the global economy, the global information environment, innovation ecosystems, sustainability, and social and cultural dynamics and evolution. CASL scientists study these problems using data science, network science, complexity science, cognitive science, machine learning, deep learning, social sciences, and team cognition, among other approaches.
    “In face-to-face conversation, sarcasm can be identified effortlessly using facial expressions, gestures, and tone of the speaker,” Akula says. “Detecting sarcasm in textual communication is not a trivial task as none of these cues are readily available. Especially with the explosion of internet usage, sarcasm detection in online communications from social networking platforms is much more challenging.”
    Garibay is an assistant professor in Industrial Engineering and Management Systems. He has several degrees including a Ph.D. in computer science from UCF. Garibay is the director of UCF’s Artificial Intelligence and Big Data Initiative of CASL and of the master’s program in data analytics. His research areas include complex systems, agent-based models, information and misinformation dynamics on social media, artificial intelligence and machine learning. He has more than 75 peer-reviewed papers and more than $9.5 million in funding from various national agencies.
    Akula is a doctoral scholar and graduate research assistant at CASL. She has a master’s degree in computer science from Technical University of Kaiserslautern in Germany and a bachelor’s degree in computer science engineering from Jawaharlal Nehru Technological University, India.
    Story Source:
    Materials provided by University of Central Florida. Original written by Zenaida Gonzalez Kotala. Note: Content may be edited for style and length.

  •

    Algorithms show accuracy in gauging unconsciousness under general anesthesia

    Anesthetic drugs act on the brain, but most anesthesiologists rely on heart rate, respiratory rate, and movement to infer whether surgery patients remain unconscious to the desired degree. In a new study, a research team based at MIT and Massachusetts General Hospital shows that a straightforward artificial intelligence approach, attuned to the kind of anesthetic being used, can yield algorithms that assess unconsciousness from brain activity with high accuracy and reliability.
    “One of the things that is foremost in the minds of anesthesiologists is ‘Do I have somebody who is lying in front of me who may be conscious and I don’t realize it?’ Being able to reliably maintain unconsciousness in a patient during surgery is fundamental to what we do,” said senior author Emery N. Brown, Edward Hood Taplin Professor in The Picower Institute for Learning and Memory and the Institute for Medical Engineering and Science at MIT, and an anesthesiologist at MGH. “This is an important step forward.”
    More than providing a good readout of unconsciousness, Brown added, the new algorithms could allow anesthesiologists to maintain it at the desired level while using less drug than they might administer when depending on less direct, accurate and reliable indicators. That could improve patients’ post-operative outcomes, for instance by reducing the risk of delirium.
    “We may always have to be a little bit ‘overboard’,” said Brown, who is also a professor at Harvard Medical School. “But can we do it with sufficient accuracy so that we are not dosing people more than is needed?”
    Used to drive an infusion pump, for instance, algorithms could help anesthesiologists precisely throttle drug delivery to optimize a patient’s state and the doses they are receiving.
    Artificial intelligence, real-world testing
    To develop the technology to do so, postdocs John Abel and Marcus Badgeley led the study, published in PLOS ONE, in which they trained machine learning algorithms on a remarkable data set the lab gathered back in 2013. In that study, 10 healthy volunteers in their 20s underwent anesthesia with the commonly used drug propofol. As the dose was methodically raised using computer-controlled delivery, the volunteers were asked to respond to a simple request until they couldn’t anymore. Then, as the dose was later lessened and they were brought back to consciousness, they became able to respond again. All the while, neural rhythms reflecting their brain activity were recorded with electroencephalogram (EEG) electrodes, providing a direct, real-time link between measured brain activity and exhibited unconsciousness.
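    One reason EEG works well for this task is that propofol unconsciousness is associated with a pronounced rise in frontal alpha (8–12 Hz) power. A sketch of that kind of feature extraction, run on synthetic signals rather than the study's recordings, looks like this:

```python
import numpy as np

# Compute band power from an EEG-like signal. Alpha-band (8-12 Hz) power is a
# classic marker of propofol-induced unconsciousness; signals here are synthetic.

FS = 250  # sampling rate in Hz (assumed)

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

rng = np.random.default_rng(2)
t = np.arange(0, 4, 1.0 / FS)  # one 4-second epoch

awake = rng.standard_normal(len(t))                     # broadband noise
unconscious = awake + 3.0 * np.sin(2 * np.pi * 10 * t)  # added 10 Hz alpha

alpha_awake = band_power(awake, FS, 8, 12)
alpha_unconscious = band_power(unconscious, FS, 8, 12)
print(alpha_unconscious > alpha_awake)  # True
```

    A real classifier would feed several such band-power features, per epoch and per electrode, into a model trained against the behavioral responses described above.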

  •

    Hologram experts can now create real-life images that move in the air

    They may be tiny weapons, but Brigham Young University’s holography research group has figured out how to create lightsabers — green for Yoda and red for Darth Vader, naturally — with actual luminous beams rising from them.
    Inspired by the displays of science fiction, the researchers have also engineered battles between equally small versions of the Starship Enterprise and a Klingon Battle Cruiser that incorporate photon torpedoes launching and striking the enemy vessel that you can see with the naked eye.
    “What you’re seeing in the scenes we create is real; there is nothing computer generated about them,” said lead researcher Dan Smalley, a professor of electrical engineering at BYU. “This is not like the movies, where the lightsabers or the photon torpedoes never really existed in physical space. These are real, and if you look at them from any angle, you will see them existing in that space.”
    It’s the latest work from Smalley and his team of researchers, who garnered national and international attention three years ago when they figured out how to draw screenless, free-floating objects in space. Called optical trap displays, they’re created by trapping a single particle in the air with a laser beam and then moving that particle around, leaving behind a laser-illuminated path that floats in midair, like “a 3D printer for light.”
    The research group’s new project, funded by a National Science Foundation CAREER grant, goes to the next level and produces simple animations in thin air. The development paves the way for an immersive experience where people can interact with holographic-like virtual objects that co-exist in their immediate space.
    “Most 3D displays require you to look at a screen, but our technology allows us to create images floating in space — and they’re physical; not some mirage,” Smalley said. “This technology can make it possible to create vibrant animated content that orbits around or crawls on or explodes out of everyday physical objects.”
    To demonstrate that principle, the team created virtual stick figures that walk in thin air. They demonstrated the interaction between their virtual images and humans by having a student place a finger in the middle of the volumetric display and then filming a stick figure walking along and jumping off that finger.
    Smalley and Rogers detail these and other recent breakthroughs in a new paper published this month in Scientific Reports. The work addresses a limiting factor of optical trap displays: the technology cannot show true virtual images, but Smalley and Rogers show it is possible to simulate them by employing a time-varying perspective projection backdrop.
    “We can play some fancy tricks with motion parallax and we can make the display look a lot bigger than it physically is,” Rogers said. “This methodology would allow us to create the illusion of a much deeper display up to theoretically an infinite size display.”
    Video: https://www.youtube.com/watch?v=N12i_FaHvOU&list=TLGGbyUMLSISdIswNzA1MjAyMQ&t=1s
    Story Source:
    Materials provided by Brigham Young University. Original written by Todd Hollingshead. Note: Content may be edited for style and length.

  •

    Mangrove forests on the Yucatan Peninsula store record amounts of carbon

    Coastal mangrove forests are carbon storage powerhouses, tucking away vast amounts of organic matter among their submerged, tangled root webs.

    But even for mangroves, there is a “remarkable” amount of carbon stored in small pockets of forest growing around sinkholes on Mexico’s Yucatan Peninsula, researchers report May 5 in Biology Letters. These forests can stock away more than five times as much carbon per hectare as most other terrestrial forests.

    There are dozens of mangrove-lined sinkholes, or cenotes, on the peninsula. Such carbon storage hot spots could help nations or companies achieve carbon neutrality — in which the volume of greenhouse gas emissions released into the atmosphere is balanced by the amount of carbon sequestered away (SN: 1/31/20).

    At three cenotes, researchers led by Fernanda Adame, a wetland scientist at Griffith University in Brisbane, Australia, collected samples of soil at depths down to 6 meters, and used carbon-14 dating to estimate how fast the soil had accumulated at each site. The three cenotes each had “massive” amounts of soil organic carbon, the researchers report, averaging about 1,500 metric tons per hectare. One site, Casa Cenote, stored as much as 2,792 metric tons per hectare.
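    The quantities reported here follow from standard soil-core arithmetic: a carbon stock per hectare comes from bulk density, carbon fraction and layer depth, and a carbon-14 age at depth implies an average accumulation rate. The sketch below uses illustrative numbers, not the study's measurements:

```python
# Sketch of the arithmetic behind a soil organic carbon stock and a
# carbon-14-derived accretion rate. Illustrative inputs only.

def carbon_stock_t_per_ha(bulk_density_g_cm3, carbon_fraction, depth_m):
    """Soil organic carbon stock (metric tons per hectare) for one layer."""
    depth_cm = depth_m * 100
    carbon_g_per_cm2 = bulk_density_g_cm3 * carbon_fraction * depth_cm
    return carbon_g_per_cm2 * 100  # 1 g/cm^2 spread over 1 ha = 100 t/ha

def accretion_mm_per_yr(depth_m, age_yr):
    """Mean accumulation rate implied by a carbon-14 age at a given depth."""
    return depth_m * 1000 / age_yr

# A hypothetical 6 m organic soil column, roughly 8,000 years old:
print(carbon_stock_t_per_ha(0.15, 0.25, 6.0))  # t/ha, in the reported range
print(accretion_mm_per_yr(6.0, 8000))          # mm of soil per year
```

    With these hypothetical inputs the stock lands in the thousands of tons per hectare, which is why a 6-meter organic soil column can dwarf the per-hectare carbon of most terrestrial forests.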

    Mangrove roots make ideal traps for organic material. The submerged soils also help preserve carbon. As sea levels have slowly risen over the last 8,000 years, mangroves have kept pace, climbing atop sediment carried in from rivers or migrating inland. In the cave-riddled limestone terrain of the Yucatan Peninsula, there are no rivers to supply sediment. Instead, “the mangroves produce more roots to avoid drowning,” which also helps the trees climb upward more quickly, offering more space for organic matter to accumulate, Adame says.

    As global temperatures increase, sea levels may eventually rise too quickly for mangroves to keep up (SN: 6/4/20). Other, more immediate threats to the peninsula’s carbon-rich cenotes include groundwater pollution, expanding infrastructure, urbanization and tourism.

  •

    Mathematical model predicting disease spread patterns

    Early on in the COVID-19 pandemic, health officials seized on contact tracing as the most effective way to anticipate the virus’s migration from the initial, densely populated hot spots and try to curb its spread. Months later, infections were nonetheless recorded in similar patterns in nearly every region of the country, both urban and rural.
    A team of environmental engineers, prompted by the unusual wealth of data published regularly by county health agencies throughout the pandemic, began researching new methods to describe what was happening on the ground in a way that does not require obtaining information on individuals’ movements or contacts. Funding for their effort came through a National Science Foundation RAPID research grant (CBET 2028271).
    In a paper published May 6 in the Proceedings of the National Academy of Sciences, they presented their results: a model that predicts where the disease will spread from an outbreak, in what patterns and how quickly.
    “Our model should be helpful to policymakers because it predicts disease spread without getting into granular details, such as personal travel information, which can be tricky to obtain from a privacy standpoint and difficult to gather in terms of resources,” explained Xiaolong Geng, a research assistant professor of environmental engineering at NJIT who built the model and is one of the paper’s authors.
    “We did not think a high level of intrusion would work in the United States so we sought an alternative way to map the spread,” noted Gabriel Katul, the Theodore S. Coile Distinguished Professor of Hydrology and Micrometeorology at Duke University and a co-author.
    Their numerical scheme mapped the classic SIR epidemic model (computations based on a division of the population into groups of susceptible, infectious and recovered people) onto the population agglomeration template. Their calculations closely approximated the multiphase COVID-19 epidemics recorded in each U.S. state.
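    The SIR model mentioned here is compact enough to sketch directly. The following is a generic Euler integration of the classic equations with illustrative parameters; the published work goes further by mapping these dynamics onto a population agglomeration template:

```python
# Classic SIR compartmental model, integrated with a simple Euler scheme.
# S, I, R are fractions of the population; parameters are illustrative.

def sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I."""
    s, i, r = s0, i0, r0
    traj = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # new infections this step
        new_rec = gamma * i * dt      # new recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        traj.append((s, i, r))
    return traj

# Basic reproduction number R0 = beta / gamma = 2.5 in this example.
traj = sir(beta=0.5, gamma=0.2, s0=0.999, i0=0.001, r0=0.0, days=120)
peak_i = max(i for _, i, _ in traj)
print(round(peak_i, 3))         # peak infected fraction
print(round(sum(traj[-1]), 6))  # total stays 1.0 (population conserved)
```

    The outbreak curve this produces (a single rise and fall of infections) is the building block; the paper's contribution is predicting where and how fast such waves propagate across population centers.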

  •

    In graphene process, resistance is useful

    A Rice University laboratory has adapted its laser-induced graphene technique to make high-resolution, micron-scale patterns of the conductive material for consumer electronics and other applications.
    Laser-induced graphene (LIG), introduced in 2014 by Rice chemist James Tour, involves burning away everything that isn’t carbon from polymers or other materials, leaving the carbon atoms to reconfigure themselves into films of characteristic hexagonal graphene.
    The process employs a commercial laser that “writes” graphene patterns into surfaces that to date have included wood, paper and even food.
    The new iteration writes fine patterns of graphene into photoresist polymers, light-sensitive materials used in photolithography and photoengraving.
    Baking the film increases its carbon content, and subsequent lasing solidifies the robust graphene pattern, after which unlased photoresist is washed away.
    Details of the PR-LIG process appear in the American Chemical Society journal ACS Nano.
    “This process permits the use of graphene wires and devices in a more conventional silicon-like process technology,” Tour said. “It should allow a transition into mainline electronics platforms.”
    The Rice lab produced lines of LIG about 10 microns wide and hundreds of nanometers thick, comparable to that now achieved by more cumbersome processes that involve lasers attached to scanning electron microscopes, according to the researchers.
    Achieving lines of LIG small enough for circuitry prompted the lab to optimize its process, according to graduate student Jacob Beckham, lead author of the paper.
    “The breakthrough was a careful control of the process parameters,” Beckham said. “Small lines of photoresist absorb laser light depending on their geometry and thickness, so optimizing the laser power and other parameters allowed us to get good conversion at very high resolution.”
    Because the positive photoresist is a liquid before being spun onto a substrate for lasing, it’s a simple matter to dope the raw material with metals or other additives to customize it for applications, Tour said.
    Potential applications include on-chip microsupercapacitors, functional nanocomposites and microfluidic arrays.
    Story Source:
    Materials provided by Rice University. Note: Content may be edited for style and length.