More stories

  •

    Mathematical modelling to prevent fistulas

    It is better to invest in measures that make it easier for women to visit a doctor during pregnancy than measures to repair birth injuries. This is the conclusion from two mathematicians at LiU, using Uganda as an example.
    A fistula is a connection between the vagina and bladder or between the vagina and the rectum. It can arise in women during prolonged childbirth or as a result of violent rape. The connection causes incontinence, in which the patient cannot control either urine or faeces, which in turn leads to several further medical problems and to major physical, mental and social suffering. The medical care system in Uganda is well-developed, particularly in metropolitan areas, but despite this the occurrence of fistulas during childbirth is among the highest in Africa. It has been calculated that between 1.63 and 2.25 percent of women of childbearing age, 15-49 years, are affected.
    Betty Nannyonga, postdoc at LiU who also works at Makerere University in Kampala, Uganda, and Martin Singull, associate professor in mathematical statistics at the Department of Mathematics at LiU, have published an article in the scientific journal PLOS ONE that demonstrates how the available resources can be put to best use.
    “We have tried to construct a mathematical model to show how to prevent (obstetric) fistulas in women during prolonged childbirth. This is hugely important in a country such as Uganda,” says Martin Singull.
    The study analysed data from the Uganda Demographic Health Survey 2016. This survey collected information from 18,506 women aged 15-49 years and living in 15 regions in Uganda. Some special clinics, known as “fistula camps,” have been set up in recent years to provide surgery for affected women. Data from two of these, in different parts of the country, were also included in the study.
    The research found that significantly fewer women have received surgery at the clinics than expected.
    “Our results show that Uganda has a huge backlog of women who should be treated for fistula. In one of the regions we looked at, we found that for each woman who had undergone an operation, at least eight more should have received care,” says Betty Nannyonga.
    The researchers compared the resources required to provide surgery for women after they have been injured with the corresponding resources needed to give women access to professional care during pregnancy, including the availability of delivery by Caesarean section if required. Another chilling statistic must also be considered: not only does the woman suffer a fistula, but the baby survives in only 9% of such cases.
    The mathematical models demonstrate that the number of women who suffer from fistula decreases most rapidly if the resources are put into preventive maternity care, and making it possible for the women to give birth in hospital.
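The allocation trade-off behind this conclusion can be illustrated with a toy simulation. Everything below (costs, risk levels, birth counts, the backlog) is an invented assumption for the sketch, not a figure from the PLOS ONE study:

```python
def burden_after(years, budget, share_prevention,
                 cost_care=5.0, cost_surgery=400.0,
                 p_no_care=0.02, p_care=0.002,
                 births=100_000, backlog=5_000):
    """Toy yearly simulation of the untreated-fistula burden.

    A fixed annual budget is split between preventive maternity care
    and repair surgery; all parameter values are illustrative.
    """
    burden = backlog
    for _ in range(years):
        spend_prev = budget * share_prevention
        # Pregnancies that can be given professional antenatal care
        covered = min(births, spend_prev / cost_care)
        new_cases = covered * p_care + (births - covered) * p_no_care
        # Backlog repaired with the remainder of the budget
        repaired = min(burden + new_cases,
                       (budget - spend_prev) / cost_surgery)
        burden = burden + new_cases - repaired
    return burden

# Sweep the prevention/surgery split over ten years
best = min((burden_after(10, 600_000, s / 10), s / 10) for s in range(11))
# With these toy numbers, putting ~80% of the budget into prevention
# leaves the smallest burden of untreated women
```

With the invented parameters above, a surgery-only budget never keeps up with new cases, while a prevention-heavy split both suppresses new injuries and slowly clears the backlog, mirroring the study's qualitative conclusion.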
    The authors point out, however, that several difficulties contribute to the relatively high occurrence of fistula.
    “Even if professional healthcare and medical care are available in Uganda, most women do not enjoy good maternity care during their pregnancy. In some cases, this is because the distance to the healthcare providers is too far. Other reasons are the women do not have the money needed, or that they require permission from their husbands. It won’t do any good to invest money in health centres if the women don’t attend,” says Betty Nannyonga.
    The regions differ considerably in how they provide care. A woman in a town is twice as likely to see a doctor as one who lives in a rural area. Education is another factor: women with upper secondary education are twice as likely to see a doctor as those without, and education beyond upper secondary doubles the probability again. Social status also plays a major role: the probability of seeing a doctor at some point during the pregnancy is directly linked to income.
    One positive trend shown by the statistics is an increase in the fraction of women who receive professional care during the delivery itself (although the figures are for only those cases in which a child is born alive). This has risen from 37% in the period before 2001, to 42% in 2006, 58% in 2011 and 74% in 2016.
    “Our results show that professional care and surgery by themselves cannot prevent all cases of fistula: other measures will be needed,” concludes Betty Nannyonga.

    Story Source:
    Materials provided by Linköping University. Original written by Monica Westman Svenselius. Note: Content may be edited for style and length.

  •

    World's smallest ultrasound detector created

    Researchers at Helmholtz Zentrum München and the Technical University of Munich (TUM) have developed the world’s smallest ultrasound detector. It is based on miniaturized photonic circuits on top of a silicon chip. With a size 100 times smaller than an average human hair, the new detector can visualize features that are much smaller than previously possible, leading to what is known as super-resolution imaging.
    Since the development of medical ultrasound imaging in the 1950s, the core detection technology for ultrasound waves has primarily relied on piezoelectric detectors, which convert the pressure from ultrasound waves into electric voltage. The imaging resolution achieved with ultrasound depends on the size of the piezoelectric detector employed. Reducing this size leads to higher resolution and can offer smaller, densely packed one- or two-dimensional ultrasound arrays with improved ability to discriminate features in the imaged tissue or material. However, further reducing the size of piezoelectric detectors impairs their sensitivity dramatically, making them unusable for practical application.
    Using computer chip technology to create an optical ultrasound detector
    Silicon photonics technology is widely used to miniaturize optical components and densely pack them on the small surface of a silicon chip. While silicon does not exhibit any piezoelectricity, its ability to confine light in dimensions smaller than the optical wavelength has already been widely exploited for the development of miniaturized photonic circuits.
    Researchers at Helmholtz Zentrum München and TUM capitalized on the advantages of those miniaturized photonic circuits and built the world’s smallest ultrasound detector: the silicon waveguide-etalon detector, or SWED. Instead of recording voltage from piezoelectric crystals, SWED monitors changes in light intensity propagating through the miniaturized photonic circuits.
    “This is the first time that a detector smaller than the size of a blood cell has been used to detect ultrasound using silicon photonics technology,” says Rami Shnaiderman, developer of SWED. “If a piezoelectric detector were miniaturized to the scale of SWED, it would be 100 million times less sensitive.”
    Super-resolution imaging
    “The degree to which we were able to miniaturize the new detector while retaining high sensitivity due to the use of silicon photonics was breathtaking,” says Prof. Vasilis Ntziachristos, lead of the research team. The SWED is about half a micron (0.0005 millimeters) in size. This corresponds to an area at least 10,000 times smaller than the smallest piezoelectric detectors employed in clinical imaging applications. The SWED is also up to 200 times smaller than the ultrasound wavelength employed, which means that it can be used to visualize features smaller than one micrometer, leading to what is called super-resolution imaging.
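The size claims follow from simple acoustics. A sketch of the arithmetic, where the operating frequency is an assumed illustrative value (the article does not state one):

```python
SPEED_OF_SOUND_TISSUE = 1540.0       # m/s, a typical soft-tissue value

def wavelength(freq_hz, c=SPEED_OF_SOUND_TISSUE):
    """Acoustic wavelength at a given ultrasound frequency."""
    return c / freq_hz

swed_size = 0.5e-6                   # ~half a micron, per the article

# Assumed illustrative frequency of 15 MHz
lam = wavelength(15e6)               # about 103 micrometers
ratio = lam / swed_size              # about 205: detector ~200x smaller
```

At higher frequencies the wavelength shrinks, so the quoted "up to 200 times smaller" depends on the ultrasound wavelength actually employed.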
    Inexpensive and powerful
    As the technology capitalizes on the robustness and easy manufacturability of the silicon platform, large numbers of detectors can be produced at a small fraction of the cost of piezoelectric detectors, making mass production feasible. This is important for developing a number of different detection applications based on ultrasound waves. “We will continue to optimize every parameter of this technology — the sensitivity, the integration of SWED in large arrays, and its implementation in hand-held devices and endoscopes,” adds Shnaiderman.
    Future development and applications
    “The detector was originally developed to propel the performance of optoacoustic imaging, which is a major focus of our research at Helmholtz Zentrum München and TUM. However, we now foresee applications in a broader field of sensing and imaging,” says Ntziachristos.
    While the researchers are primarily aiming for applications in clinical diagnostics and basic biomedical research, industrial applications may also benefit from the new technology. The increased imaging resolution may make it possible to study ultra-fine details in tissues and materials. A first line of investigation involves super-resolution optoacoustic (photoacoustic) imaging of cells and micro-vasculature in tissues, but the SWED could also be used to study fundamental properties of ultrasonic waves and their interactions with matter on a scale that was not possible before.

  •

    Medical robotic hand? Rubbery semiconductor makes it possible

    A medical robotic hand could allow doctors to more accurately diagnose and treat people from halfway around the world, but currently available technologies aren’t good enough to match the in-person experience.
    Researchers report in Science Advances that they have designed and produced a smart electronic skin and a medical robotic hand capable of assessing vital diagnostic data by using a newly invented rubbery semiconductor with high carrier mobility.
    Cunjiang Yu, Bill D. Cook Associate Professor of Mechanical Engineering at the University of Houston and corresponding author for the work, said the rubbery semiconductor material also can be easily scaled for manufacturing, based upon assembly at the interface of air and water.
    That interfacial assembly and the rubbery electronic devices described in the paper suggest a pathway toward soft, stretchy rubbery electronics and integrated systems that mimic the mechanical softness of biological tissues, suitable for a variety of emerging applications, said Yu, who also is a principal investigator at the Texas Center for Superconductivity at UH.
    The smart skin and medical robotic hand are just two potential applications, created by the researchers to illustrate the discovery’s utility.
    In addition to Yu, authors on the paper include Ying-Shi Guan, Anish Thukral, Kyoseung Sim, Xu Wang, Yongcao Zhang, Faheem Ershad, Zhoulyu Rao, Fengjiao Pan and Peng Wang, all of whom are affiliated with UH. Co-authors Jianliang Xiao and Shun Zhang are affiliated with the University of Colorado.
    Traditional semiconductors are brittle, and using them in otherwise stretchable electronics has required special mechanical accommodations. Previous stretchable semiconductors have had drawbacks of their own, including low carrier mobility — the speed at which charge carriers can move through a material — and complicated fabrication requirements.
    Yu and collaborators last year reported that adding minute amounts of metallic carbon nanotubes to the rubbery semiconductor, a P3HT-polydimethylsiloxane composite, improves carrier mobility, which governs the performance of semiconductor transistors.
    Yu said the new scalable manufacturing method for these high performance stretchable semiconducting nanofilms and the development of fully rubbery transistors represent a significant step forward.
    The production is simple, he said. A commercially available semiconductor material is dissolved in a solution and dropped on water, where it spreads; the chemical solvent evaporates from the solution, resulting in improved semiconductor properties.
    It is a new way to create the high quality composite films, he said, allowing for consistent production of fully rubbery semiconductors.
    Electrical performance is retained even when the semiconductor is stretched by 50%, the researchers reported. Yu said the ability to stretch the rubbery electronics by 50% without degrading the performance is a notable advance. Human skin, he said, can be stretched only about 30% without tearing.

    Story Source:
    Materials provided by University of Houston. Original written by Jeannie Kever. Note: Content may be edited for style and length.

  •

    New data processing module makes deep neural networks smarter

    Artificial intelligence researchers at North Carolina State University have improved the performance of deep neural networks by combining feature normalization and feature attention modules into a single module that they call attentive normalization (AN). The hybrid module improves the accuracy of the system significantly, while using negligible extra computational power.
    “Feature normalization is a crucial element of training deep neural networks, and feature attention is equally important for helping networks highlight which features learned from raw data are most important for accomplishing a given task,” says Tianfu Wu, corresponding author of a paper on the work and an assistant professor of electrical and computer engineering at NC State. “But they have mostly been treated separately. We found that combining them made them more efficient and effective.”
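A minimal NumPy sketch of the combined idea, for a flattened (N, C) feature matrix: standard feature normalization followed by an instance-specific mixture of K scale/shift parameter sets chosen by a tiny attention head. The shapes and the linear attention head are simplifying assumptions; the published AN module is formulated for convolutional feature maps:

```python
import numpy as np

def attentive_norm(x, gammas, betas, w_att, b_att, eps=1e-5):
    """Sketch of attentive normalization on an (N, C) feature matrix.

    gammas, betas: (K, C) -- K candidate scale/shift parameter sets.
    w_att, b_att:  (C, K), (K,) -- illustrative linear attention head.
    """
    # Feature normalization: zero mean, unit variance per channel
    mu = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)

    # Feature attention: per-instance softmax weights over the K sets
    logits = x @ w_att + b_att                    # (N, K)
    lam = np.exp(logits - logits.max(axis=1, keepdims=True))
    lam /= lam.sum(axis=1, keepdims=True)

    # Attention-weighted affine transform, different for every instance
    gamma = lam @ gammas                          # (N, C)
    beta = lam @ betas
    return gamma * x_hat + beta
```

The extra cost is one small matrix product per instance, which is why the overhead is negligible compared with the backbone network.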
    To test their AN module, the researchers plugged it into four of the most widely used neural network architectures: ResNets, DenseNets, MobileNetsV2 and AOGNets. They then tested the networks against two industry standard benchmarks: the ImageNet-1000 classification benchmark and the MS-COCO 2017 object detection and instance segmentation benchmark.
    “We found that AN improved performance for all four architectures on both benchmarks,” Wu says. “For example, top-1 accuracy in the ImageNet-1000 improved by between 0.5% and 2.7%. And Average Precision (AP) accuracy increased by up to 1.8% for bounding box and 2.2% for semantic mask in MS-COCO.
    “Another advantage of AN is that it facilitates better transfer learning between different domains,” Wu says. “For example, from image classification in ImageNet to object detection and semantic segmentation in MS-COCO. This is illustrated by the performance improvement in the MS-COCO benchmark, which was obtained by fine-tuning ImageNet-pretrained deep neural networks in MS-COCO, a common workflow in state-of-the-art computer vision.
    “We have released the source code and hope our AN will lead to better integrative design of deep neural networks.”

    Story Source:
    Materials provided by North Carolina State University. Note: Content may be edited for style and length.

  •

    Researchers demonstrate record speed with advanced spectroscopy technique

    Researchers have developed an advanced spectrometer that can acquire data with exceptionally high speed. The new spectrometer could be useful for a variety of applications including remote sensing, real-time biological imaging and machine vision.
    Spectrometers measure the color of light absorbed or emitted from a substance. However, using such systems for complex and detailed measurement typically requires long data acquisition times.
    “Our new system can measure a spectrum in mere microseconds,” said research team leader Scott B. Papp from the National Institute of Standards and Technology and the University of Colorado, Boulder. “This means it could be used for chemical studies in the dynamic environment of power plants or jet engines, for quality control of pharmaceuticals or semiconductors flying by on a production line, or for video imaging of biological samples.”
    In The Optical Society (OSA) journal Optics Express, lead author David R. Carlson and colleagues Daniel D. Hickstein and Papp report the first dual-comb spectrometer with a pulse repetition rate of 10 gigahertz. They demonstrate it by carrying out spectroscopy experiments on pressurized gases and semiconductor wafers.
    “Frequency combs are already known to be useful for spectroscopy,” said Carlson. “Our research is focused on building new, high-speed frequency combs that can make a spectrometer that operates hundreds of times faster than current technologies.”
    Getting data faster
    Dual-comb spectroscopy uses two optical sources, known as optical frequency combs that emit a spectrum of colors — or frequencies — perfectly spaced like the teeth on a comb. Frequency combs are useful for spectroscopy because they provide access to a wide range of colors that can be used to distinguish various substances.
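The down-conversion at the heart of dual-comb spectroscopy can be sketched numerically. Two combs with slightly different tooth spacings interfere, so the n-th pair of optical teeth produces a radio-frequency beat at n times the spacing difference. Only the 10 GHz repetition rate below comes from the paper; the 1 MHz offset is an assumed illustrative value:

```python
# Repetition rates (tooth spacings) of the two combs
f_rep_a = 10.0e9           # 10 GHz, as reported in the paper
f_rep_b = 10.0e9 + 1.0e6   # offset by an assumed 1 MHz

def beat_frequency(n):
    """RF beat note produced by the n-th pair of comb teeth."""
    return n * (f_rep_b - f_rep_a)

# Teeth spaced 10 GHz apart in the optical domain collapse to
# beats spaced 1 MHz apart, which ordinary electronics can digitize
beats = [beat_frequency(n) for n in range(1, 6)]
```

This is why the comb approach is fast: an entire optical spectrum is read out as a single radio-frequency signal rather than tooth by tooth.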
    To create a dual-comb spectroscopy system with extremely fast acquisition and a wide range of colors, the researchers brought together techniques from several different disciplines, including nanofabrication, microwave electronics, spectroscopy and microscopy.
    The frequency combs in the new system use an optical modulator driven by an electronic signal to carve a continuous laser beam into a sequence of very short pulses. These pulses of light pass through nanophotonic nonlinear waveguides on a microchip, which generates many colors of light simultaneously. This multi-color output, known as a supercontinuum, can then be used to make precise spectroscopy measurements of solids, liquids and gases.
    The chip-based nanophotonic nonlinear waveguides were a key component in this new system. These channels confine light within structures that are a centimeter long but only nanometers wide. Their small size and low light losses combined with the properties of the material they are made from allow them to convert light from one wavelength to another very efficiently to create the supercontinuum.
    “The frequency comb source itself is also unique compared to most other dual-comb systems because it is generated by carving a continuous laser beam into pulses with an electro-optic modulator,” said Carlson. “This means the reliability and tunability of the laser can be exceptionally high across a wide range of operating conditions, an important feature when looking at future applications outside of a laboratory environment.”
    Analyzing gases and solids
    To demonstrate the versatility of the new dual-comb spectrometer, the researchers used it to perform linear absorption spectroscopy on gases of different pressure. They also operated it in a slightly different configuration to perform the advanced analytical technique known as nonlinear Raman spectroscopy on semiconductor materials. Nonlinear Raman spectroscopy, which uses pulses of light to characterize the vibrations of molecules in a sample, has not previously been performed using an electro-optic frequency comb.
    The high data acquisition speeds that are possible with electro-optic combs operating at gigahertz pulse rates are ideal for making spectroscopy measurements of fast and non-repeatable events.
    “It may be possible to analyze and capture the chemical signatures during an explosion or combustion event,” said Carlson. “Similarly, in biological imaging the ability to create images in real time of living tissues without requiring chemical labeling would be immensely valuable to biological researchers.”
    The researchers are now working to improve the system’s performance to make it practical for applications like real-time biological imaging and to simplify and shrink the experimental setup so that it could be operated outside of the lab.

    Story Source:
    Materials provided by The Optical Society. Note: Content may be edited for style and length.

  •

    Physicists develop basic principles for mini-labs on chips

    Colloidal particles have become increasingly important for research as vehicles of biochemical agents. In future, it will be possible to study their behaviour much more efficiently than before by placing them on a magnetised chip. A research team from the University of Bayreuth reports on these new findings in the journal Nature Communications. The scientists have discovered that colloidal rods can be moved on a chip quickly, precisely, and in different directions, almost like chess pieces. A pre-programmed magnetic field even enables these controlled movements to occur simultaneously.
    For the recently published study, the research team, led by Prof. Dr. Thomas Fischer, Professor of Experimental Physics at the University of Bayreuth, worked closely with partners at the University of Poznań and the University of Kassel. To begin with, individual spherical colloidal particles constituted the building blocks for rods of different lengths. These particles were assembled in such a way as to allow the rods to move in different directions on a magnetised chip like upright chess figures — as if by magic, but in fact determined by the characteristics of the magnetic field.
    In a further step, the scientists succeeded in eliciting individual movements in various directions simultaneously. The critical factor here was the “programming” of the magnetic field with the aid of a mathematical code, which, in encoded form, outlines all the movements to be performed by the figures. When these movements are carried out simultaneously, they take as little as one tenth of the time needed if they are carried out one after the other, like the moves on a chessboard.
    “The simultaneity of differently directed movements makes research into colloidal particles and their dynamics much more efficient,” says Adrian Ernst, doctoral student in the Bayreuth research team and co-author of the publication. “Miniaturised laboratories on small chips measuring just a few centimetres in size are being used more and more in basic physics research to gain insights into the properties and dynamics of materials. Our new research results reinforce this trend. Because colloidal particles are in many cases very well suited as vehicles for active substances, our research results could be of particular benefit to biomedicine and biotechnology,” says Mahla Mirzaee-Kakhki, first author and Bayreuth doctoral student.

    Story Source:
    Materials provided by Universität Bayreuth. Note: Content may be edited for style and length.

  •

    Security software for autonomous vehicles

    Before autonomous vehicles participate in road traffic, they must demonstrate conclusively that they do not pose a danger to others. New software developed at the Technical University of Munich (TUM) prevents accidents by predicting different variants of a traffic situation every millisecond.
    A car approaches an intersection. Another vehicle darts out of the cross street, but it is not yet clear whether it will turn right or left. At the same time, a pedestrian steps into the lane directly in front of the car, and there is a cyclist on the other side of the street. People with road traffic experience will generally assess the movements of other traffic participants correctly.
    “These kinds of situations present an enormous challenge for autonomous vehicles controlled by computer programs,” explains Matthias Althoff, Professor of Cyber-Physical Systems at TUM. “But autonomous driving will only gain acceptance among the general public if you can ensure that the vehicles will not endanger other road users — no matter how confusing the traffic situation.”
    Algorithms that peer into the future
    The ultimate goal when developing software for autonomous vehicles is to ensure that they will not cause accidents. Althoff, who is a member of the Munich School of Robotics and Machine Intelligence at TUM, and his team have now developed a software module that permanently analyzes and predicts events while driving. Vehicle sensor data are recorded and evaluated every millisecond. The software can calculate all possible movements for every traffic participant — provided they adhere to the road traffic regulations — allowing the system to look three to six seconds into the future.
    Based on these future scenarios, the system determines a variety of movement options for the vehicle. At the same time, the program calculates potential emergency maneuvers in which the vehicle can be moved out of harm’s way by accelerating or braking without endangering others. The autonomous vehicle may only follow routes that are free of foreseeable collisions and for which an emergency maneuver option has been identified.
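The safety check described above can be caricatured with one-dimensional interval reachability. This is not the team's algorithm, just a sketch of the idea: over-approximate where another road user could be at each future instant, and reject any planned trajectory that intersects that set. All numbers and the simplified dynamics are invented for illustration:

```python
def pedestrian_reach(x0, v_max, t):
    """Over-approximated interval a pedestrian can reach within t seconds."""
    return (x0 - v_max * t, x0 + v_max * t)

def plan_is_safe(ego_positions, dt, x0_ped, v_max_ped, margin=0.5):
    """Reject the candidate trajectory if any sampled ego position falls
    inside the pedestrian's reachable interval at that time."""
    for i, x_ego in enumerate(ego_positions):
        lo, hi = pedestrian_reach(x0_ped, v_max_ped, (i + 1) * dt)
        if lo - margin <= x_ego <= hi + margin:
            return False
    return True

# Ego car advancing 2 m per 0.5 s step; pedestrian standing at x = 30 m
ego = [2.0 * (i + 1) for i in range(6)]       # positions up to t = 3 s
safe = plan_is_safe(ego, dt=0.5, x0_ped=30.0, v_max_ped=2.0)
```

In the real system the reachable sets are two-dimensional, constrained by traffic rules, and recomputed every millisecond, but the accept/reject logic has this shape.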
    Streamlined models for swift calculations
    This kind of detailed traffic situation forecasting was previously considered too time-consuming and thus impractical. But now, the Munich research team has shown not only the theoretical viability of real-time data analysis with simultaneous simulation of future traffic events: They have also demonstrated that it delivers reliable results.
    The quick calculations are made possible by simplified dynamic models. So-called reachability analysis is used to calculate the potential future positions a car or a pedestrian might assume. When all characteristics of the road users are taken into account, the calculations become prohibitively time-consuming. That is why Althoff and his team work with simplified models. These allow a greater range of motion than the real road users, yet are mathematically easier to handle. The enhanced freedom of movement lets the simplified models cover a larger set of possible positions, one that includes all the positions expected for the actual road users.
    Real traffic data for a virtual test environment
    For their evaluation, the computer scientists created a virtual model based on real data they had collected during test drives with an autonomous vehicle in Munich. This allowed them to craft a test environment that closely reflects everyday traffic scenarios. “Using the simulations, we were able to establish that the safety module does not lead to any loss of performance in terms of driving behavior, the predictive calculations are correct, accidents are prevented, and in emergency situations the vehicle is demonstrably brought to a safe stop,” Althoff sums up.
    The computer scientist emphasizes that the new security software could simplify the development of autonomous vehicles because it can be combined with all standard motion control programs.

    Story Source:
    Materials provided by Technical University of Munich (TUM). Note: Content may be edited for style and length.

  •

    Machine learning models identify kids at risk of lead poisoning

    Machine learning can help public health officials identify children most at risk of lead poisoning, enabling them to concentrate their limited resources on preventing poisonings rather than remediating homes only after a child suffers elevated blood lead levels, a new study shows.
    Rayid Ghani, Distinguished Career Professor in Carnegie Mellon University’s Machine Learning Department and Heinz College of Information Systems and Public Policy, said the Chicago Department of Public Health (CDPH) has implemented an intervention program based on the new machine learning model and Chicago hospitals are in the midst of doing the same. Other cities also are considering replicating the program to address lead poisoning, which remains a significant environmental health issue in the United States.
    In a study published today in the journal JAMA Network Open, Ghani and colleagues at the University of Chicago and CDPH report that their machine learning model is about twice as accurate in identifying children at high risk as previous, simpler models, and that it identifies children equitably regardless of their race or ethnicity.
    Elevated blood lead levels can cause irreversible neurological damage in children, including developmental delays and irritability. Lead-based paint in older housing is the typical source of lead poisoning. Yet the standard public health practice has been to wait until children are identified with elevated lead levels and then fix their living conditions.
    “Remediation can help other children who will live there, but it doesn’t help the child who has already been injured,” said Ghani, who was a leader of the study while on the faculty of the University of Chicago. “Prevention is the only way to deal with this problem. The question becomes: Can we be proactive in allocating limited inspection and remediation resources?”
    Early attempts to devise predictive computer models based on factors such as housing, economic status, race and geography met with only limited success, Ghani said. By contrast, the machine learning model his team devised is more complicated and takes into account more factors, including 2.5 million surveillance blood tests, 70,000 public health lead investigations, 2 million building permits and violations, as well as age, size and condition of housing, and sociodemographic data from the U.S. Census.
    This more sophisticated approach correctly identified the children at highest risk of lead poisoning 15.5% of the time — about twice the rate of previous predictive models. That’s a significant improvement, Ghani said. Of course, most health departments currently aren’t identifying any of these children proactively, he added.
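The 15.5% figure is a precision-at-top-k style number: of the children the model ranks as highest risk, the fraction who actually went on to have elevated blood lead levels. A toy sketch with invented data:

```python
def precision_at_top_k(risk_scores, outcomes, k):
    """Fraction of the k highest-scored children with the outcome (1 = yes)."""
    ranked = sorted(zip(risk_scores, outcomes), key=lambda pair: -pair[0])
    return sum(outcome for _, outcome in ranked[:k]) / k

# Six children: model risk scores vs. observed elevated lead levels
scores = [0.9, 0.8, 0.4, 0.7, 0.2, 0.1]
labels = [1,   0,   1,   1,   0,   0]
p_at_3 = precision_at_top_k(scores, labels, 3)   # top three: 0.9, 0.8, 0.7
```

A health department with capacity to inspect only k homes cares about exactly this kind of quantity, since only the top-ranked homes can actually be visited.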
    The study also showed that the machine learning model identified these high-risk children equitably. That’s a problem with the current system, where Black and Hispanic children are less likely to be tested for blood lead levels than are white children, Ghani said.
    In addition to Ghani, the research team included Eric Potash and Joe Walsh of the University of Chicago Harris School of Public Policy; Emile Jorgensen, Nik Prachand and Raed Manour of CDPH; and Corland Lohff of the Southern Nevada Health District. The Robert Wood Johnson Foundation supported this research.

    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Byron Spice. Note: Content may be edited for style and length.