More stories

  • 3D hand pose estimation using a wrist-worn camera

    Researchers at Tokyo Institute of Technology (Tokyo Tech) working in collaboration with colleagues at Carnegie Mellon University, the University of St Andrews and the University of New South Wales have developed a wrist-worn device for 3D hand pose estimation. The system consists of a camera that captures images of the back of the hand, and is supported by a neural network called DorsalNet which can accurately recognize dynamic gestures.
    Being able to track hand gestures is of crucial importance in advancing augmented reality (AR) and virtual reality (VR) devices that are already beginning to be much in demand in the medical, sports and entertainment sectors. To date, these devices have involved using bulky data gloves which tend to hinder natural movement or controllers with a limited range of sensing.
    Now, a research team led by Hideki Koike at Tokyo Tech has devised a camera-based wrist-worn 3D hand pose recognition system that could in future be as compact as a smartwatch. Importantly, the system can capture hand motions in mobile settings.
    “This work is the first vision-based real-time 3D hand pose estimator using visual features from the dorsal hand region,” the researchers say. DorsalNet accurately estimates 3D hand poses by detecting changes on the back of the hand.
    The researchers confirmed that their system outperforms previous work, with on average 20% higher accuracy in recognizing dynamic gestures, and achieves 75% accuracy in detecting eleven different grasp types.
    The work could advance the development of controllers that support bare-hand interaction. In preliminary tests, the researchers demonstrated that it would be possible to use their system to control smart devices, for example, changing the time on a smartwatch simply by changing the finger angle. They also showed that the system could serve as a virtual mouse or keyboard, for example, by rotating the wrist to control the position of the pointer and typing on a simple 8-key keyboard.
    They point out that further improvements to the system such as using a camera with a higher frame rate to capture fast wrist movement and being able to deal with more diverse lighting conditions will be needed for real world use.

    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • Targeting the shell of the Ebola virus

    As the world grapples with the coronavirus (COVID-19) pandemic, another virus has been raging again in the Democratic Republic of the Congo in recent months: Ebola. Since the devastating West Africa outbreak that began in 2013, the Ebola virus has periodically re-emerged in Africa, causing horrific bleeding in its victims and, in many cases, death.
    How can we battle these infectious agents that reproduce by hijacking cells and reprogramming them into virus-replicating machines? Science at the molecular level is critical to gaining the upper hand — research you’ll find underway in the laboratory of Professor Juan Perilla at the University of Delaware.
    Perilla and his team of graduate and undergraduate students in UD’s Department of Chemistry and Biochemistry are using supercomputers to simulate the inner workings of Ebola, observing the way molecules move, atom by atom, to carry out their functions. In the team’s latest work, they reveal structural features of the virus’s coiled protein shell, or nucleocapsid, that may be promising therapeutic targets, more easily destabilized and knocked out by an antiviral treatment.
    The research is highlighted in the Tuesday, Oct. 20 issue of the Journal of Chemical Physics, which is published by the American Institute of Physics, a federation of societies in the physical sciences representing more than 120,000 members.
    “The Ebola nucleocapsid looks like a Slinky walking spring, whose neighboring rings are connected,” Perilla said. “We tried to find what factors control the stability of this spring in our computer simulations.”
    The life cycle of Ebola is highly dependent on this coiled nucleocapsid, which surrounds the virus’s genetic material consisting of a single strand of ribonucleic acid (ssRNA). Nucleoproteins protect this RNA from being recognized by cellular defense mechanisms. Through interactions with different viral proteins, such as VP24 and VP30, these nucleoproteins form a minimal functional unit — a copy machine — for viral transcription and replication.

    While nucleoproteins are important to the nucleocapsid’s stability, the team’s most surprising finding, Perilla said, is that in the absence of single-stranded RNA, the nucleocapsid quickly becomes disordered. But RNA alone is not sufficient to stabilize it. The team also observed charged ions binding to the nucleocapsid, which may reveal where other important cellular factors bind and stabilize the structure during the virus’s life cycle.
    Perilla compared the team’s work to a search for molecular “knobs” that control the nucleocapsid’s stability like volume control knobs that can be turned up to hinder virus replication.
    The UD team built two molecular dynamics systems of the Ebola nucleocapsid for their study. One included single-stranded RNA; the other contained only the nucleoprotein. The systems were then simulated using the Texas Advanced Computing Center’s Frontera supercomputer — the largest academic supercomputer in the world. The simulations took about two months to complete.
    Graduate research assistant Chaoyi Xu ran the molecular simulations, while the entire team was involved in developing the analytical framework and conducting the analysis. Writing the manuscript was a learning experience for Xu and undergraduate research assistant Tanya Nesterova, who had not been directly involved in this work before. She also received training as a next-generation computational scientist with support from UD’s Undergraduate Research Scholars program and NSF’s XSEDE-EMPOWER program. The latter has allowed her to perform the highest-level research using the nation’s top supercomputers. Postdoctoral researcher Nidhi Katyal’s expertise also was essential to bringing the project to completion, Perilla said.
    While a vaccine exists for Ebola, it must be kept extremely cold, which is difficult in remote African regions where outbreaks have occurred. Will the team’s work help advance new treatments?
    “As basic scientists we are excited to understand the fundamental principles of Ebola,” Perilla said. “The nucleocapsid is the most abundant protein in the virus and it’s highly immunogenic — able to produce an immune response. Thus, our new findings may facilitate the development of new antiviral treatments.”
    Currently, Perilla and Jodi Hadden-Perilla are using supercomputer simulations to study the novel coronavirus that causes COVID-19. Although the structures of the nucleocapsid in Ebola and SARS-CoV-2 share some similarities — both are rod-like helical protofilaments, and both are involved in the replication, transcription and packaging of viral genomes — that is where the similarities end.
    “We now are refining the methodology we used for Ebola to examine SARS-CoV-2,” Perilla said.

  • Interactions within larger social groups can cause tipping points in contagion flow

    Contagion processes, such as opinion formation or disease spread, can reach a tipping point, where the contagion either rapidly spreads or dies out. When modeling these processes, it is difficult to capture this complex transition, making the conditions that affect the tipping point a challenge to uncover.
    In the journal Chaos, from AIP Publishing, Nicholas Landry and Juan G. Restrepo, from the University of Colorado Boulder, studied the parameters of these transitions by including three-person group interactions in a contagion model called the susceptible-infected-susceptible model.
    In this model, an infected person who recovers from an infection can be reinfected. It is often used to understand the propagation of things like the flu but does not typically consider interactions between more than two people.
    “With a traditional network SIS model, when you increase the infectivity of an idea or a disease, you don’t see the explosive transitions that you often see in the real world,” Landry said. “Including group interactions in addition to individual interactions has a profound effect on the system or population dynamics” and can lead to tipping point behavior.
    Once the rate of infection or information transfer between individuals passes a critical point, the fraction of infected people explosively jumps to an epidemic for high enough group infectivity. More surprisingly, if the rate of infection decreases after this jump, the infected fraction does not immediately decrease. It remains an epidemic past that same critical point before moving back down to a healthy equilibrium.
    This results in a loop region in which there may or may not be high levels of infection, depending on how many people are infected initially. How these group interactions are distributed affects the critical point at which an explosive transition occurs.
    The authors also studied how variability in the group connections — for example, whether people with more friends also participate in more group interactions — changes the likelihood of tipping point behavior. They explain the emergence of this explosive behavior as the interplay between individual interactions and group interactions. Depending on which mechanism dominates, the system may exhibit an explosive transition.
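    The tipping-point behavior described above can be sketched in a few lines of code. Below is a minimal mean-field version of an SIS model with an added three-person group-infection term; the equation form follows standard higher-order contagion models, but every rate and parameter value here is illustrative rather than taken from the paper.

    ```python
    import numpy as np

    def equilibrium(lam, lam_delta=2.5, i0=0.01, dt=0.01, steps=20_000):
        """Integrate di/dt = -i + lam*i*(1-i) + lam_delta*i^2*(1-i)
        (recovery rate 1, pairwise infectivity lam, group infectivity
        lam_delta) and return the long-time infected fraction."""
        i = i0
        for _ in range(steps):
            i += dt * (-i + lam * i * (1 - i) + lam_delta * i**2 * (1 - i))
            i = min(max(i, 0.0), 1.0)
        return i

    lams = np.linspace(0.5, 1.2, 29)

    # Upward sweep: every run starts from a tiny seed of infection.
    up = [equilibrium(lam, i0=0.01) for lam in lams]

    # Downward sweep: carry each equilibrium over as the next initial state.
    down, i = [], equilibrium(lams[-1], i0=0.5)
    for lam in lams[::-1]:
        i = equilibrium(lam, i0=i)
        down.append(i)
    down = down[::-1]

    for lam, iu, idn in zip(lams, up, down):
        print(f"lambda={lam:.3f}  up={iu:.3f}  down={idn:.3f}")
    ```

    Sweeping the infectivity upward from a small seed produces no epidemic until the pairwise rate exceeds its critical value, while sweeping downward from an epidemic state keeps the infected fraction high well below that point, which is the loop region described above.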
    Additional parameters can be added to the model to tune it for different processes and better understand how much of an individual’s social network must be infected for a virus or information to spread.
    The work is currently theoretical, but the researchers have plans to apply the model to actual data from physical networks and consider other structural characteristics that real-world networks exhibit.

    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Material found in house paint may spur technology revolution

    The development of a new method to make non-volatile computer memory may have solved a problem that has been holding back machine learning, and it has the potential to revolutionize technologies like voice recognition, image processing and autonomous driving.
    A team from Sandia National Laboratories, working with collaborators from the University of Michigan, published a paper in the peer-reviewed journal Advanced Materials detailing a new method to give the computer chips that power machine-learning applications more processing power. The method uses a common material found in house paint in an analog memory device, enabling highly energy-efficient machine inference operations.
    “Titanium oxide is one of the most commonly made materials. Every paint you buy has titanium oxide in it. It’s cheap and nontoxic,” explains Sandia materials scientist Alec Talin. “It’s an oxide, there’s already oxygen there. But if you take a few out, you create what are called oxygen vacancies. It turns out that when you create oxygen vacancies, you make this material electrically conductive.”
    Those oxygen vacancies can now store electrical data, giving almost any device more computing power. Talin and his team create the oxygen vacancies by heating a computer chip with a titanium oxide coating above 302 degrees Fahrenheit (150 degrees Celsius), then using electrochemistry to separate some of the oxygen molecules from the material and create the vacancies.
    “When it cools off, it stores any information you program it with,” Talin said.
    Energy efficiency a boost to machine learning
    Right now, computers generally work by storing data in one place and processing that data in another place. That means computers have to constantly transfer data from one place to the next, wasting energy and computing power.

    The paper’s lead author, Yiyang Li, is a former Truman Fellow at Sandia and now an assistant professor of materials science at the University of Michigan. He explained how their process has the potential to completely change how computers work.
    “What we’ve done is make the processing and the storage at the same place,” Li said. “What’s new is that we’ve been able to do it in a predictable and repeatable manner.”
    Both he and Talin see the use of oxygen vacancies as a way to help machine learning overcome a big obstacle holding it back right now — power consumption.
    “If we are trying to do machine learning, that takes a lot of energy because you are moving it back and forth and one of the barriers to realizing machine learning is power consumption,” Li said. “If you have autonomous vehicles, making decisions about driving consumes a large amount of energy to process all the inputs. If we can create an alternative material for computer chips, they will be able to process information more efficiently, saving energy and processing a lot more data.”
    Research has everyday impact
    Talin sees the potential in the performance of everyday devices.
    “Think about your cell phone,” he said. “If you want to give it a voice command, you need to be connected to a network that transfers the command to a central hub of computers that listen to your voice and then send a signal back telling your phone what to do. With this technology, voice recognition and other functions could happen right in your phone.”
    Talin said the team is working on refining several processes and testing the method on a larger scale. The project is funded through Sandia’s Laboratory Directed Research and Development program.

    Story Source:
    Materials provided by DOE/Sandia National Laboratories. Note: Content may be edited for style and length.

  • With deep learning algorithms, standard CT technology produces spectral images

    Bioimaging technologies are the eyes that allow doctors to see inside the body in order to diagnose, treat, and monitor disease. Ge Wang, an endowed professor of biomedical engineering at Rensselaer Polytechnic Institute, has received significant recognition for devoting his research to coupling those imaging technologies with artificial intelligence in order to improve physicians’ “vision.”
    In research published today in Patterns, a team of engineers led by Wang demonstrated how a deep learning algorithm can be applied to a conventional computerized tomography (CT) scan in order to produce images that would typically require a higher level of imaging technology known as dual-energy CT.
    Wenxiang Cong, a research scientist at Rensselaer, is first author on this paper. Wang and Cong were also joined by coauthors from Shanghai First-Imaging Tech, and researchers from GE Research.
    “We hope that this technique will help extract more information from a regular single-spectrum X-ray CT scan, make it more quantitative, and improve diagnosis,” said Wang, who is also the director of the Biomedical Imaging Center within the Center for Biotechnology and Interdisciplinary Studies (CBIS) at Rensselaer.
    Conventional CT scans produce images that show the shape of tissues within the body, but they don’t give doctors sufficient information about the composition of those tissues. Even with iodine and other contrast agents, which are used to help doctors differentiate between soft tissue and vasculature, it’s hard to distinguish between subtle structures.
    A higher-level technology called dual-energy CT gathers two datasets in order to produce images that reveal both tissue shape and information about tissue composition. However, this imaging approach often requires a higher dose of radiation and is more expensive because of the additional hardware required.
    “With traditional CT, you take a grayscale image, but with dual-energy CT you take an image with two colors,” Wang said. “With deep learning, we try to use the standard machine to do the job of dual-energy CT imaging.”
    In this research, Wang and his team demonstrated how their neural network was able to produce those more complex images using single-spectrum CT data. The researchers used images produced by dual-energy CT to train their model and found that it was able to produce high-quality approximations with a relative error of less than 2%.
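    As a toy illustration of that supervised setup (and only that: the paper's model is a deep network trained on real dual-energy CT images), the sketch below fits a one-hidden-layer network to synthetic paired attenuation values. The nonlinear low-to-high-energy relation is invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-in for paired CT data: low-energy attenuation x,
    # high-energy attenuation y, related by an assumed nonlinear map.
    x = rng.uniform(0.0, 1.0, size=(2000, 1))
    y = 0.7 * x + 0.25 * x**2            # invented relation, not a physical model

    # One-hidden-layer network trained with plain full-batch gradient descent.
    W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
    lr = 0.1
    for step in range(5000):
        h = np.tanh(x @ W1 + b1)          # hidden layer
        pred = h @ W2 + b2                # predicted high-energy value
        g_pred = 2 * (pred - y) / len(x)  # gradient of mean-squared error
        gW2, gb2 = h.T @ g_pred, g_pred.sum(0)
        g_h = (g_pred @ W2.T) * (1 - h**2)
        gW1, gb1 = x.T @ g_h, g_h.sum(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    rel_err = np.abs(pred - y).mean() / np.abs(y).mean()
    print(f"mean relative error on the toy data: {rel_err:.4f}")
    ```

    The printed relative error applies to this toy fit only; the under-2% figure reported by the researchers refers to their full model evaluated on real data.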
    “Professor Wang and his team’s expertise in bioimaging is giving physicians and surgeons ‘new eyes’ in diagnosing and treating disease,” said Deepak Vashishth, director of CBIS. “This research effort is a prime example of the partnership needed to personalize and solve persistent human health challenges.”

    Story Source:
    Materials provided by Rensselaer Polytechnic Institute. Original written by Torie Wells. Note: Content may be edited for style and length.

  • A new approach to artificial intelligence that builds in uncertainty

    They call it artificial intelligence — not because the intelligence is somehow fake. It’s real intelligence, but it’s still made by humans. That means AI — a power tool that can add speed, efficiency, insight and accuracy to a researcher’s work — has many limitations.
    It’s only as good as the methods and data it has been given. On its own, it doesn’t know if information is missing, how much weight to give differing kinds of information or whether the data it draws on is incorrect or corrupted. It can’t deal precisely with uncertainty or random events — unless it learns how. Relying exclusively on data, as machine-learning models usually do, it does not leverage the knowledge experts have accumulated over years or the physical models that underpin physical and chemical phenomena. And it has been hard to teach computers to organize and integrate information from widely different sources.
    Now researchers at the University of Delaware and the University of Massachusetts-Amherst have published details of a new approach to artificial intelligence that builds uncertainty, error, physical laws, expert knowledge and missing data into its calculations and leads ultimately to much more trustworthy models. The new method provides guarantees typically lacking from AI models, showing how valuable — or not — the model can be for achieving the desired result.
    Joshua Lansford, a doctoral student in UD’s Department of Chemical and Biomolecular Engineering, and Prof. Dion Vlachos, director of UD’s Catalysis Center for Energy Innovation, are co-authors on the paper published Oct. 14 in the journal Science Advances. Also contributing were Jinchao Feng and Markos Katsoulakis of the Department of Mathematics and Statistics at the University of Massachusetts-Amherst.
    The new mathematical framework could produce greater efficiency, precision and innovation for computer models used in many fields of research. Such models provide powerful ways to analyze data, study materials and complex interactions and tweak variables in virtual ways instead of in the lab.
    “Traditionally in physical modeling, we build a model first using only our physical intuition and expert knowledge about the system,” Lansford said. “Then after that, we measure uncertainty in predictions due to error in underlying variables, often relying on brute-force methods, where we sample, then run the model and see what happens.”
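    The brute-force procedure Lansford describes, sampling the uncertain inputs and re-running the model, can be illustrated with Monte Carlo propagation through an Arrhenius rate law. All parameter values and the assumed uncertainty below are invented for the example; the paper's oxygen reduction model is far more involved.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    R = 8.314        # gas constant, J/(mol K)
    T = 300.0        # temperature, K
    A = 1.0e13       # pre-exponential factor, 1/s (illustrative)
    Ea_mean = 80e3   # activation energy, J/mol (illustrative)
    Ea_std = 2e3     # assumed 1-sigma uncertainty in the activation energy

    # Brute-force uncertainty propagation: sample the uncertain input,
    # run the model for each sample, and inspect the spread of the output.
    Ea_samples = rng.normal(Ea_mean, Ea_std, size=100_000)
    k_samples = A * np.exp(-Ea_samples / (R * T))

    print(f"rate constant, median: {np.median(k_samples):.3e} 1/s")
    print(f"68% interval: [{np.percentile(k_samples, 16):.3e}, "
          f"{np.percentile(k_samples, 84):.3e}] 1/s")
    ```

    Even a 2 kJ/mol uncertainty in the activation energy spreads the predicted rate by roughly a factor of five at room temperature, which is why bounding the errors in underlying variables matters so much.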
    Effective, accurate models save time and resources and point researchers to more efficient methods, new materials, greater precision and innovative approaches they might not otherwise consider.

    The paper describes how the new mathematical framework works in a chemical reaction known as the oxygen reduction reaction, but it is applicable to many kinds of modeling, Lansford said.
    “The chemistries and materials we need to make things faster or even make them possible — like fuel cells — are highly complex,” he said. “We need precision…. And if you want to make a more active catalyst, you need to have bounds on your prediction error. By intelligently deciding where to put your efforts, you can tighten the area to explore.
    “Uncertainty is accounted for in the design of our model,” Lansford said. “Now it is no longer a deterministic model. It is a probabilistic one.”
    With these new mathematical developments in place, the model itself identifies what data are needed to reduce model error, he said. Then a higher level of theory can be used to produce more accurate data or more data can be generated, leading to even smaller error boundaries on the predictions and shrinking the area to explore.
    “Those calculations are time-consuming to generate, so we’re often dealing with small datasets — 10-15 data points. That’s where the need comes in to apportion error.”
    That’s still not a money-back guarantee that using a specific substance or approach will deliver precisely the product desired. But it is much closer to a guarantee than you could get before.

    This new method of model design could greatly enhance work in renewable energy, battery technology, climate change mitigation, drug discovery, astronomy, economics, physics, chemistry and biology, to name just a few examples.
    Artificial intelligence doesn’t mean human expertise is no longer needed. Quite the opposite.
    The expert knowledge that emerges from the laboratory and the rigors of scientific inquiry is essential, foundational material for any computational model.

  • An ultrasonic projector for medicine

    A chip-based technology that generates sound profiles with high resolution and intensity could create new options for ultrasound therapy, making it more effective and easier to administer. A team of researchers led by Peer Fischer from the Max Planck Institute for Intelligent Systems and the University of Stuttgart has developed a projector that flexibly modulates three-dimensional ultrasound fields with comparatively little technical effort. Dynamic sound pressure profiles can thus be generated with higher resolution and sound pressure than current technology allows. It should soon be easier to tailor ultrasound profiles to individual patients, and new medical applications for ultrasound may even emerge.
    Ultrasound is widely used as a diagnostic tool in both medicine and materials science. It can also be used therapeutically. In the US, for example, tumours of the uterus and prostate are treated with high-power ultrasound, which destroys the cancer cells by selectively heating the diseased tissue. Researchers worldwide are using ultrasound to combat tumours and other pathological changes in the brain. “In order to avoid damaging healthy tissue, the sound pressure profile must be precisely shaped,” explains Peer Fischer, Research Group Leader at the Max Planck Institute for Intelligent Systems and professor at the University of Stuttgart. Tailoring an intense ultrasound field to diseased tissue is somewhat more difficult in the brain, because the skullcap distorts the sound wave. The Spatial Ultrasound Modulator (SUM) developed by researchers in Fischer’s group should help to remedy this situation and make ultrasound treatment more effective and easier in other cases as well. It allows the three-dimensional shape of even very intense ultrasound waves to be varied with high resolution, and with less technical effort than is currently required to modulate ultrasound profiles.
    High intensity sound pressure profiles with 10,000 pixels
    Conventional methods vary sound fields using several individual sound sources whose waves can be superimposed and shifted relative to one another. However, because the individual sound sources cannot be miniaturized at will, the resolution of these sound pressure profiles is limited to 1000 pixels; the sound transmitters are then so small that the sound pressure is sufficient for diagnostic but not for therapeutic purposes. With the new technology, the researchers first generate an ultrasonic wave and then shape its sound pressure profile in a separate step, decoupling sound generation from modulation. “In this way, we can use much more powerful ultrasonic transducers,” explains postdoctoral fellow Kai Melde, who is part of the team that developed the SUM. “Thanks to a chip with 10,000 pixels that modulates the ultrasonic wave, we can generate a much finer-resolved profile.”
    “In order to modulate the sound pressure profile, we take advantage of the different acoustic properties of water and air,” says Zhichao Ma, a post-doctoral fellow in Fischer’s group, who was instrumental in developing the new SUM technology: “While an ultrasonic wave passes through a liquid unhindered, it is completely reflected by air bubbles.” The research team from Stuttgart thus constructed a chip the size of a thumbnail on which they can produce hydrogen bubbles by electrolysis (i.e. the splitting of water into oxygen and hydrogen with electricity) on 10,000 electrodes in a thin water film. The electrodes each have an edge length of less than a tenth of a millimetre and can be controlled individually.
    A picture show with ultrasound
    If you send an ultrasonic wave through the chip with a transducer (a kind of miniature loudspeaker), it passes through the chip unhindered. But as soon as the sound wave hits the water with the hydrogen bubbles, it continues to travel only through the liquid. Like a mask, this creates a sound pressure profile with cut-outs at the points where the air bubbles are located. To form a different sound profile, the researchers first wipe the hydrogen bubbles away from the chip and then generate gas bubbles in a new pattern.
    The researchers demonstrated how precisely and variably the new projector for ultrasound works by writing the alphabet in a kind of picture show of sound pressure profiles. To make the letters visible, they caught micro-particles in the various sound pressure profiles. Depending on the sound pattern, the particles arranged themselves into the individual letters.
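    The bubble-mask principle lends itself to a tiny sketch: wherever a bubble sits the wave is blocked, and wherever there is water it passes. The grid size and pattern below are invented for illustration; the real chip has 10,000 individually addressable pixels.

    ```python
    import numpy as np

    # The region of the chip left as water (1 = transmits) traces the
    # letter "T"; everywhere else, electrolysis places a hydrogen bubble
    # that reflects the incoming wave.
    letter = np.array([
        [1, 1, 1, 1, 1],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
    ])
    mask = 1 - letter                  # 1 = bubble, blocks the ultrasound

    incident_pressure = 1.0            # uniform plane wave from the transducer
    transmitted = incident_pressure * (1 - mask)

    # The transmitted sound pressure profile reproduces the water pattern:
    for row in transmitted:
        print("".join("#" if p else " " for p in row))
    ```

    Printing the transmitted field reproduces the water pattern, here a letter, just as the trapped micro-particles in the experiment traced out the alphabet.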
    Organoid models for drug testing
    The scientists working with Peer Fischer, Kai Melde, and Zhichao Ma had previously arranged micro-particles into similar images using sound pressure profiles shaped with a slightly different technique. They used special plastic stencils to deform the pressure profile of an ultrasonic wave like a hologram and arrange small particles, as well as biological cells in a liquid, into a desired pattern. However, the plastic holograms only provided still images; for each new pattern, a different plastic template had to be made. Using the ultrasound projector, the Stuttgart team can generate a new sound profile in about 10 seconds. “With other chips, we could significantly increase the frame rate,” says Kai Melde, who led the hologram development team.
    The technique could be used not only for diagnostic and therapeutic purposes but also in biomedical laboratories, for example, to arrange cells into organoid models. “Such organoids enable useful tests of active pharmaceutical ingredients and could therefore at least partially replace animal experiments,” says Fischer.

  • Detecting early-stage failure in electric power conversion devices

    Power electronics regulate and modify electric power. They are in computers, power steering systems, solar cells, and many other technologies. Researchers are seeking to enhance power electronics by using silicon carbide semiconductors. However, wear-out failures such as cracks remain problematic. To help researchers improve future device designs, early damage detection in power electronics before complete failure is required.
    In a study recently published in IEEE Transactions on Power Electronics, researchers from Osaka University monitored in real time the propagation of cracks in a silicon carbide Schottky diode during power cycling tests. The researchers used an analysis technique, known as acoustic emission, which had not previously been reported for this purpose.
    During the power cycling test, the researchers mimicked repeatedly turning the device on and off to monitor the resulting damage to the diode over time. Increasing acoustic emission corresponded to progressive damage to the aluminum ribbons affixed to the silicon carbide Schottky diode. The researchers correlated the monitored acoustic emission signals with specific stages of device damage that eventually led to failure.
    “A transducer converts acoustic emission signals during power cycling tests to an electrical output that can be measured,” explains lead author ChanYang Choe. “We observed burst-type waveforms, which are consistent with fatigue cracking in the device.”
    The traditional method of checking whether a power device is damaged is to monitor anomalous increases in the forward voltage during power cycling tests. Using the traditional method, the researchers found that there was an abrupt increase in the forward voltage, but only when the device was near complete failure. In contrast, acoustic emission counts were much more sensitive. Instead of an all-or-none response, there were clear trends in the acoustic emission counts during power cycling tests.
    “Unlike forward voltage plots, acoustic emission plots indicate all three stages of crack development,” says senior author Chuantong Chen. “We detected crack initiation, crack propagation, and device failure, and confirmed our interpretations by microscopic imaging.”
    To date, there has been no sensitive early-warning method for detecting the fatigue cracks that lead to complete failure in silicon carbide Schottky diodes. Acoustic emission monitoring, as reported here, is such a method. In the future, this development will help researchers determine why silicon carbide devices fail and improve future designs in common and advanced technologies.
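    The difference between the two monitoring signals can be mocked up with synthetic data. Everything below, the waveforms, stage boundaries, and thresholds, is invented for illustration and is not taken from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    cycles = np.arange(10_000)

    # Forward voltage: essentially flat, with an abrupt rise near end of life.
    v_f = 1.50 + 0.002 * rng.standard_normal(len(cycles))
    v_f[cycles > 9_500] += 0.15            # jumps only close to complete failure

    # Cumulative acoustic-emission counts: three stages of crack growth.
    rate = np.where(cycles < 3_000, 0.01,          # stage 1: crack initiation
            np.where(cycles < 8_000, 0.05,         # stage 2: crack propagation
                     0.5))                         # stage 3: near failure
    ae_counts = np.cumsum(rate + 0.005 * rng.random(len(cycles)))

    def first_crossing(signal, threshold):
        """Index of the first sample exceeding the threshold, or None."""
        idx = np.flatnonzero(signal > threshold)
        return int(idx[0]) if idx.size else None

    print("voltage alarm at cycle:", first_crossing(v_f, 1.55))
    print("AE alarm at cycle:", first_crossing(ae_counts, 100))
    ```

    In this mock-up the voltage alarm fires only in the final stage, while the cumulative acoustic-emission count crosses its threshold during crack propagation, thousands of cycles earlier.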

    Story Source:
    Materials provided by Osaka University. Note: Content may be edited for style and length.